\section*{Introduction} Let $\sE$ be an ample rank $r$ bundle on a smooth toric projective surface, $S$, whose topological Euler characteristic is $e(S)$. In this article, we prove a number of surprisingly strong lower bounds for $c_1(\sE)^2$ and $c_2(\sE)$. First, we show Corollary (\ref{easyLowerBoundForC1Square}), which says that, given $S$ and $\sE$ as above, if $e(S)\ge 5$, then $c_1(\sE)^2\ge r^2e(S)$. Though simple, this is much stronger than the known lower bounds over not necessarily toric surfaces. For example, see \cite[Lemma 2.2]{BSS94}, where it is shown that there are many rank two ample vector bundles with $(c_1(\sE)^2,c_2(\sE))=(2,1)$ on products of two smooth curves, at least one of which has positive genus. We then prove an estimate, Theorem \ref{degree}, which is quite strong for large $e(S)$ and $r$. As $e(S)$ goes to $\infty$ with $r$ fixed, the leading term of this lower bound is $\displaystyle (4r+2)e(S)\ln_2(e(S)/12)$, while if $e(S)$ is fixed and $r$ goes to $\infty$, the leading term of this lower bound is $\displaystyle 3(e(S)-4)r^2$. For example, $c_1(\sE)^2\ge 3r^2e(S)$, for $r\le 3$ if $e(S)\ge 13$, or for $r\le 6$ if $e(S)\ge 19$, or for $r\le 141$ if $e(S)\ge 100$. Or again, $c_1(\sE)^2\ge 5r^2e(S)$, for $r\le 10$ if $e(S)\ge 100$. We include a three line Maple program in Remark \ref{mapleProgram} for plotting the expression for the lower bound. The strategy is to use the adjunction process to find lower bounds for $c_1(\sE)^2$. Toric geometry has two major implications for the adjunction process. First, given an ample rank $r$ vector bundle $\sE$ on a smooth toric surface $S$, there is the inequality $-\det \sE \cdot K_S\ge e(S)({\mathop{\rm rank\,}\nolimits} \sE) $. Adjunction theory yields the lower bound for $c_1(\sE)^2$ given in Corollary \ref{easyLowerBoundForC1Square}, which implies that $\displaystyle c_1(\sE)^2 >r^2e(S)$ for $e(S)\ge 7$. The second important fact is that $h^0(tK_S+\det \sE)>0$ for integers $t$ between $0$ and at least ${\mathop{\rm rank\,}\nolimits} \sE+\ln_2(e(S)/6)$. Adjunction theory yields the strong lower bound given in Theorem (\ref{degree}) for $\displaystyle c_1(\sE)^2$ when $e(S)\ge 7$. Using Bogomolov's instability theorem, we get the strong lower bound given in Theorem (\ref{applicationOfBogomolov}) for the second Chern class, $c_2(\sE)$, of a rank two ample vector bundle. Basically, if $c_2(\sE)$ is less than one fourth the lower bound already derived for $c_1(\sE)^2$, then we have an unstable bundle, and Bogomolov's instability theorem combined with the Hodge index theorem gives strong enough conditions to get a contradiction. The short list of exceptions to the bound $c_2(\sE)>e(S)$ is classified. Even assuming $\sE$ very ample on a nontoric surface, the best general result \cite{BSS96} shows only that $c_2(\sE)\ge 1$ with equality for $\pn 2$. Inequalities derived from adjunction theory usually have the form, ``some inequality is true if certain projective invariants are large enough.'' Typically examples exist outside the range where the adjunction theoretic method works. For rank two ample vector bundles $\sE$ we use a variety of special methods, including adjunction theory and Bogomolov's instability theorem, to enumerate the exceptions to either the inequality $c_1(\sE)^2\ge 4e(S)$ or the inequality $c_2(\sE)\ge e(S)$ holding. The exceptions are collected in Table 1. We would like to thank the Department of Mathematics of the K.T.H.
(Royal Institute of Technology) of Stockholm, Sweden, for making our collaboration possible. The second author would like to thank the Department of Mathematics of Colorado State University for their support and fine working environment during the period when the final research for this work was carried out. \section{Background material}\label{backgroundSection}In this paper we work over $\comp$. By a variety we mean a complex analytic space, which might be neither reduced nor irreducible. A rank $2$ vector bundle $\sE$ on a nonsingular surface $S$ is called {\em Bogomolov unstable} \cite{R78}, or {\em unstable} for short, if $c_1(\sE)^2>4c_2(\sE)$. When $\sE$ is unstable there exists a line bundle $\sA$ and a zero subscheme $(\sZ,\sO_\sZ)$ fitting in the exact sequence \begin{equation}\label{BS} 0\to \sA\to \sE\to (\det\sE-\sA)\otimes\sI_\sZ\to 0; \end{equation} with the property that for all ample line bundles $\sL$ on $S$, $(2\sA-\det \sE)\cdot \sL >0$. The standard consequences of this result that we will often use in this article are: \begin{enumerate} \item $(2\sA-\det \sE)\cdot (2\sA-\det \sE)>0$, and $2\sA-\det \sE$ is $\rat$-effective; and \item for all nef and big line bundles $\sL$ on $S$, $(2\sA-\det \sE)\cdot \sL >0$. \end{enumerate} We define $\sH:=\det \sE$. Note that \begin{itemize} \item $c_2(\sE)=\sA\cdot(\sH-\sA) +\deg(\sZ)$, where $\deg(\sZ)=h^0(\sO_\sZ)$; and \item the line bundle $\sH-\sA$ is a quotient of $\sE$ off a codimension two subset and therefore it is ample when $\sE$ is ample. \end{itemize} Using the Hodge inequality $(\sH-\sA)^2\ (2\sA-\sH)^2\leq \left[(\sH-\sA)\cdot (2\sA-\sH)\right]^2$, we obtain the following: \begin{equation}\label{EQ1} \sA\cdot(\sH-\sA)\geq (\sH-\sA)^2+\sqrt{(\sH-\sA)^2} \end{equation} A toric surface $S$ is a surface containing a two dimensional torus as a Zariski open subset and such that the action of the torus on itself extends to $S$. All toric surfaces are normal. In this article we consider surfaces polarized by an ample vector bundle, therefore $S$ will always denote a normal projective toric surface. For basic definitions on toric varieties we refer to \cite{O88}. We recall that if $e:=e(S)$ is the Euler characteristic of $S$ then \ ${\mathop{\rm rank\,}\nolimits}({\rm Pic}(S))=e-2\ $ and $\ K_{S}^{2}=12-e$. We need the following useful lemmas, which are probably well known. \begin{lemma}\label{vb} Let $\sE$ be a vector bundle over a normal $n$-dimensional toric variety. Assume $\proj{\sE}$ is toric; then $\sE=\oplus L_{i}$ with $L_{i}$ equivariant line bundles. \end{lemma} \proof Consider the bundle map $\proj\sE\to X$ with fiber $F=\pn{r-1}$ where $r:={\mathop{\rm rank\,}\nolimits}(\sE)$. Every fiber has $r$ fixed points which define an unramified $r$ to one cover of $X$, $p:Y\to X$. $X$ being a normal toric variety, and thus simply connected, implies $Y=\cup X_{i}$ and $\sE=\oplus L_{i}$. \qed It is classical \cite{L82} that a surjective morphism $p : X\to Y$, with connected fibers between normal projective varieties, induces a homomorphism from the connected component of the identity of the automorphism group of $X$ to the connected component of the identity of the automorphism group of $Y$, with respect to which $p$ is equivariant. Using this basic fact we have the following lemma. \begin{lemma}\label{map} Let $p:X\to Y$ be a surjective morphism with connected fibers from a normal toric variety $X$ onto a normal variety $Y$. Then $Y$ and the general fiber of $p$ are toric.
\end{lemma} \begin{corollary}\label{singFibers} Let $L$ be an ample line bundle on a smooth projective toric surface $S$. If $f: S\to \pn 1$ is a morphism with connected fibers, then the general fiber $F$ is isomorphic to $\pn 1$, there are at most two singular fibers, and $e(S)\le 2+2L\cdot F$. \end{corollary} \proof Since the general fiber is toric it is isomorphic to $\pn 1$. From equivariance we see that any singular fiber must lie over the two fixed points of $\pn 1$. Since there are at most $L\cdot F$ irreducible components in a fiber, and there are at most two singular fibers, the inequality follows by considering the cases of no, one, or two singular fibers. \qed \begin{corollary}\label{simpleBlowup} Let $f: S\to S'$ express a smooth toric surface $S$ as the blowup of a smooth projective surface $S'$ at a finite set $B$. Then $e(S)\le 2e(S')$. \end{corollary} \proof Let $b:= e(B)$, i.e., $b$ equals the cardinality of the finite set $B$. Then we have $e(S)=e(S')+b$. Since $S'$ is toric and $B$ are fixed points of the toric action, we conclude that $e(B)$ is bounded by the cardinality of the set of toric fixed points on $S'$, which is equal to the Euler characteristic of $S'$. Thus we have $e(S)=e(S')+b\le 2e(S')$. \qed Let $S$ be an irreducible toric surface. Then under the prescribed torus action there are $e:=e(S)$ one dimensional orbits. Denote their closures by $D_i$ where $1\le i\le e$. We have the fundamental fact that \begin{equation}\label{canonicalBundleFormula} -K_S=\sum_{i=1}^{e(S)}D_i. \end{equation} We begin with a very simple observation which is in fact an important tool in all our main results: \begin{lemma}\label{KL} Let $\sE$ be an ample rank $r$ vector bundle on a projective normal toric surface $S$, and let $\sH$ denote $\det\sE$. Then $-K_{S}\cdot\sH\geq re(S)$. \end{lemma} \proof Let $\sH:=\det\sE=\sum_{1}^{e}a_{i} D_{i}$. By ampleness $\sH\cdot D_{i}\geq r$ for all $i=1,\ldots,e$. Since $K_{S}=\sum_{1}^{e}(-D_{i})$ we have $\displaystyle -K_{S}\cdot \sH=\sum_{1}^{e}\sH\cdot D_{i}\geq er. $ \qed \begin{remark} In order to obtain the results in this paper we use the bound of Lemma \ref{KL} for $-K_{S}\cdot\sH$. The following example shows that in general we cannot hope for a better bound than the above. Consider the toric surface given by the fan below, spanned by $12$ edges $\{\rho_{i}\}$ and with $12$ $2$-cones, i.e., $12$ fixed points. The number before each edge indicates the self intersection of the associated invariant divisor $D_{i}$. \[\begin{array}{ccccc} \xymatrix{ & & & & \\ & & & & \\ & & \uuto^(0.5){-3}_(0.9){\rho_{1}}\uurrto^(0.5){-3}_(0.9){\rho_{3}} \ar^(0.5) {-1}_(0.9){\rho_{2}} @{->}[ruu] \ar^(0.5){-1}_(0.9){\rho_{4}} @{->}[rru] \rrto^(0.4){-3}_(0.9){\rho_{5}}\ar^(0.5){-1}_(0.9){\rho_{6}}@{->}[rdd]\ddto^(0.5) {-3}_(0.9){\rho_{7}} \ar^(0.6){-1}_(0.9){\rho_{8}} @{->}[ldd]\ddllto^(0.6) {-3}_(0.9){\rho_{9}}\ar^(0.6){-1}_(0.9){\rho_{10}} @{->}[lld]\llto^(0.6){-3}_(0.9) {\rho_{11}} \uullto^(0.6){-1}_(0.9){\rho_{12}} & & \\ \\ & & & } \end{array}\] This surface is the equivariant blow up of $\pn{2}$ in $9$ points and thus the Euler characteristic $e(S)=12$. Consider the line bundle: $$L=3D_{1}+5D_{2}+3D_{3}+5D_{4}+3D_{5}+5D_{6}+3D_{7}+5D_{8}+3D_{9}+5D_{10}+3D_{11}+ 5D_{12}$$ It is ample since $L\cdot D_{i}=5-9+5=1$ for $i=1,3,5,7,9,11$ and $L\cdot D_{i}=3-5+3=1$ for $i=2,4,6,8,10,12$. This also gives $\displaystyle -L\cdot K_{S}=\sum_{1}^{12}L\cdot D_{i}=12=e$. Clearly this example can be generalized to higher values of $e$.
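The intersection numbers can be checked mechanically: on a smooth complete toric surface each $D_i$ meets exactly its two cyclic neighbors, each transversally in one point, so $L\cdot D_{i}=a_{i-1}+a_{i}D_{i}^{2}+a_{i+1}$ with cyclic indices. The following short Python verification (our illustration, not part of the argument) confirms $L\cdot D_{i}=1$ for every $i$ and $-L\cdot K_{S}=12$.
\begin{verbatim}
a = [3, 5] * 6             # coefficients of L on D_1, ..., D_12
d2 = [-3, -1] * 6          # self intersections D_i^2 around the fan
prods = [a[i - 1] + a[i] * d2[i] + a[(i + 1) % 12] for i in range(12)]
assert prods == [1] * 12   # L . D_i = 1 for every i, so L is ample
print(sum(prods))          # -L . K_S = sum of the L . D_i = 12 = e(S)
\end{verbatim}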
\end{remark} We end with a simple corollary of Lemma \ref{KL}. \begin{corollary}\label{rSquareEvectorBundleLowerBound} Let $\sE$ be an ample rank $r$ vector bundle on a smooth projective toric surface $S$, and let $c_1^2:=c_1(\sE)^2$. If $c_1^2\le re(S)$, then $r\le 3$ and either $g(\det \sE)=0$, and $(S,\sE)$ is \begin{enumerate} \item $\pnpair 2 1$ with $(c_1^2,e)=(1,3)$; or \item $(\hirz 0,aE+bf)$ with $1\le ab\le 2$ and $(c_1^2,e)=(2ab,4)$; or \item $(\hirz 1,E+2f)$ with $(c_1^2,e)=(3,4)$; or \item $(\hirz 2,E+3f)$ with $(c_1^2,e)=(4,4)$; or \item $(\pn 2,\pnsheaf 21\oplus\pnsheaf 21)$ with $(c_1^2,e)=(4,3)$; \end{enumerate} or $g(\det\sE)=1$, and $(S,\sE)$ is \begin{enumerate} \item $(S,-K_S)$ with $(c_1^2,e)=(6,6)$; or \item $(\hirz 0,(E+f)\oplus (E+f))$ with $(c_1^2,e)=(8,4)$; or \item $(\pn 2,\pnsheaf 21\oplus\pnsheaf 21\oplus\pnsheaf 21)$ with $(c_1^2,e)=(9,3)$. \end{enumerate} \end{corollary} \proof Let $\sH:=\det\sE$. If $\sH^2\le re$, then from $K_S\cdot \sH\le -re$ we conclude that $2g(\sH)-2=\sH^2+K_S\cdot \sH\le 0$, and thus that $g(\sH)\le 1$. If $g(\sH)=0$ we know from classification theory, e.g., \cite{BS95,F90}, that $S$ is $\pn 2$ or $\hirz \epsilon$. A simple calculation shows the listed examples are the only ones possible. If $g(\sH)=1$, then from classification theory, e.g., \cite{BS95,F90}, we know that $(S,\sH)$ is either a scroll over an elliptic curve or a Del Pezzo surface with $\sH=-K_S$. Since $S$ is toric and therefore rational, $S$ is Del Pezzo. \qed \section{Vector bundles over $\pn 2$ and $\hirz\epsilon$}\label{examples} In this section we describe all pairs $(S,\sE)$ where $\sE$ is an ample rank two bundle on $\pn 2$ or a Hirzebruch surface, with the property that either $c_1(\sE)^2\le 4e(S)$ or $c_2(\sE)\le e(S)$. Later in the paper it will be shown that these are all of the examples of rank $2$ ample vector bundles $\sE$ on smooth toric surfaces, $S$, with either $c_1(\sE)^2\le 4e(S)$ or $c_2(\sE)\leq e(S)$. The following table includes the various cases. We give the Chern classes and indicate whether the bundle is Bogomolov unstable ($U$), stable ($S$), or a boundary case, i.e., $c_{1}^{2}=4c_{2}$ ($B$). \begin{table}[htb]\label{theTable}\caption{All pairs $(S,\sE)$, with $\sE$ an ample rank two vector bundle on a smooth toric projective surface $S$, and with either $c_1(\sE)^2\le 4e(S)$ or $c_2(\sE)\le e(S)$.
The only class where we do not know existence and uniqueness is listed on the last line of the table.} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline $S $ &e(S)& $\sE$& $c_1(\sE)^2$& $c_2(\sE)$&$U/S/B$\\ \hline\hline $\pn{2}$&3&$\pnsheaf{2}1\oplus\pnsheaf{2}1$&$4$&$1$&$B$\\ \hline $\pn{2}$&3&$\pnsheaf{2}1\oplus\pnsheaf{2}2$&$9$&$2$&$U$\\ \hline $\pn{2}$&3&$T_{\pn{2}}$&$9$&$3$&$S$\\ \hline $\pn{2}$&3&$\pnsheaf{2}1\oplus\pnsheaf{2}3$&$16$&$3$&$U$\\ \hline $\pn{1}\times\pn{1}$&4&$p^*(\pnsheaf{1}1\oplus\pnsheaf{1}1)\otimes\xi$ &$8$&$2$&$B$\\ \hline $\pn{1}\times\pn{1}$&4&$p^*(\pnsheaf{1}1\oplus\pnsheaf{1}2)\otimes\xi$& $12$&$3$&$B$\\ \hline $\pn{1}\times\pn{1}$&4&$p^*(\pnsheaf{1}1\oplus\pnsheaf{1}3)\otimes\xi$& $16$&$4$&$B$\\ \hline $\pn{1}\times\pn{1}$&4&$p^*(\pnsheaf{1}2\oplus\pnsheaf{1}2)\otimes\xi$& $16$&$4$&$B$\\ \hline $\pn{1}\times\pn{1}$&4&$\sO_{\pn{1}\times\pn{1}}(1,1)\oplus \sO_{\pn{1}\times\pn{1}}(2,2)$&$18$&$4$&$U$\\ \hline $\hirz{1}$&4& $p^*(\pnsheaf{1}1\oplus\pnsheaf{1}1)\otimes\xi$& $12$&$3$&$B$\\ \hline $\hirz{1}$&4&$p^*(\pnsheaf{1}1\oplus\pnsheaf{1}2)\otimes\xi$& $16$&$4$&$B$\\ \hline $\hirz{2}$&4&$p^*(\pnsheaf{1}1\oplus\pnsheaf{1}1)\otimes\xi$& $16$&$4$&$B$\\ \hline Del Pezzo&6& $(-K_S)\oplus(-K_S)$&24&6&$B$\\ \hline Del Pezzo&6&if any example exists, $\det\sE=-2K_S$ &24&$\ge 7$&$S$\\ \hline \end{tabular} \end{center} \end{table} Fix the notation $c_2:=c_2(\sE)$, $\sH:=c_1=\det \sE$, and $e:=e(S)$. The strategy that we follow is to first classify the pairs with $c_1(\sE)^2\le 4e(S)$. Then any pair $(S,\sE)$ with $c_2\le e$ has already been enumerated, or we have $4c_2\le 4e < c_1^2$. In the latter case the bundle is unstable and we use the extra relations arising from Bogomolov's instability theorem to classify the pair. \subsection{$\pn{2}$} Let $\sE$ be a rank two ample vector bundle over $\pn{2}$. Since $\sH$ is the determinant bundle of an ample rank two bundle, $\deg(\sH|_\ell)\geq 2$ for every line $\ell\in |\pnsheaf{2}1|$. It follows that $\sH=\pnsheaf{2}a$ with $a\geq 2$. If $\sH^2\le 4e=12$, then $a=2,3$. In case $a=2$, the restriction of $\sE$ to each line $\ell$ is $\pnsheaf 11\oplus \pnsheaf 11$, and thus by the classical results on uniform bundles \cite{OSS80}, $\sE=$\ \framebox{$\pnsheaf{2}1\oplus\pnsheaf{2}1$}. In case $a=3$, the restriction of $\sE$ to each line $\ell$ is $\pnsheaf 11\oplus \pnsheaf 12$, and thus by the classical results on uniform bundles \cite{OSS80}, $\sE$=\ \framebox{$\pnsheaf{2}1\oplus\pnsheaf{2}2$} or $\sE=$\ \framebox{$T_{\pn{2}}$}, the tangent bundle of $\pn 2$. Now assume that $c_2(\sE)\leq 3$, but $c_1^2> 4e=12$. Thus it follows that $\sH=\pnsheaf{2}a$ with $a\geq 4$. Since $\sE$ is unstable, we have a sequence as in (\ref{BS}) where $\sH-\sA= \pnsheaf{2}x$ and $\sA=\pnsheaf{2}{ x+b}$ for $x,b>0$. The inequalities $3\geq c_2(\sE)=x(x+b)+\deg(\sZ)$ and $a=2x+b\geq 4$ yield the only numerical possibility: $(x,b+x)=(1,3)$ and $\deg(\sZ)=0$. Since $H^1(\pn 2,2\sA-\sH)=0$, we conclude the exact sequence splits, and it follows that $\sE$=\ \framebox{$\pnsheaf{2}1\oplus\pnsheaf{2} 3$}. \subsection{The Hirzebruch surfaces $\hirz{\epsilon}$} Let $\hirz{\epsilon}=\proj{\sO_{\pn{1}}\oplus \pnsheaf{1}\epsilon}$ be the Hirzebruch surface of degree $\epsilon$. Denote by $p:\proj{\sO_{\pn{1}}\oplus \pnsheaf 1\epsilon}\to \pn{1}$ the projection map, and let $F$ denote a fiber of $p$. Let $\xi_\sE$ denote the tautological line bundle on $\hirz{\epsilon}$, such that $p_*\xi_\sE\cong \sO_{\pn{1}}\oplus \pnsheaf{1}\epsilon$.
Recall that ${\rm Pic}(\hirz{\epsilon})=\zed F\oplus \zed E$, where $E$ is the section corresponding to the surjection $\sO_{\pn{1}}\oplus \pnsheaf1\epsilon\to \sO_{\pn{1}}$. Note that $E^2=-\epsilon$. The following is useful. \begin{lemma}\label{AmpleOnHirzRankR} Let $\sE$ be a rank $r$ ample vector bundle on $\hirz{\epsilon}$. Then $\det\sE\cdot F\ge r$ with equality if and only if $\sE\cong p^*V\otimes\xi_\sE$ where $V\cong\sE_E$. In particular, in this case $$c_1(\sE)^2=r^2\epsilon+2r\det\sE\cdot E\ge r^2(2+\epsilon),$$ and $$c_2(\sE)={r \choose 2}\epsilon+(r-1)\det\sE\cdot E\ge{r \choose 2} (2+\epsilon).$$ \end{lemma} \proof Since $\sE$ is a rank $r$ ample vector bundle, and $F$ is a smooth rational curve, we conclude that $\det\sE\cdot F\ge r$ with equality if and only if $\sE_F\cong \pnsheaf 1 1\oplus\cdots\oplus \pnsheaf 11.$ In this case we have that $\sE\otimes \xi_\sE^*$ is trivial on every fiber and thus $\sE\otimes \xi_\sE^*\cong p^*V$ for some rank $r$ vector bundle on $\pn 1$. Finally, note that $V\cong (p^*V)_E\cong \sE_E$. The rest of the lemma is a straightforward calculation. \qed We record one simple corollary of the above lemma. \begin{corollary}\label{rGreaterThan1} Let $\sE$ be a rank $r$ ample vector bundle on $\hirz{\epsilon}$. If $\epsilon \ge 2$ and $c_1(\sE)^2\le 4r^2$, then $\epsilon=2$ and $\sE\cong p^*(\pnsheaf 11\oplus\cdots\oplus\pnsheaf 11)\otimes\xi_\sE$. In this case $c_1(\sE)^2=4r^2$ and $c_2(\sE)=2r(r-1)$. \end{corollary} \proof Let $\sH:=\det\sE =aE+bF$. Using Lemma \ref{AmpleOnHirzRankR}, we only need to show that $a=\sH\cdot F=r$. Assume therefore that $a\ge r+ 1$. Then we have $\sH^2= a(2b-a\epsilon)\ge (r+1)(2r+(r+1)\epsilon)> 4r^2$. \qed Now assume that $c_2\le e=4$ or $c_1^2\le 4e=16$ and $\sE|_{F}=\pnsheaf{1}a\oplus\pnsheaf{1}b$ with $a,b>0$. \Case I: First consider the case when $(a,b)=(1,1)$. We are in the situation of Lemma \ref{AmpleOnHirzRankR}. Letting $V=\pnsheaf{1}\alpha\oplus\pnsheaf{1}\beta$, then $$4\geq c_{2}(\sE)=c_{2}(p^{*}(V)\otimes\xi)=\xi^{2}+\alpha+\beta =\epsilon+\alpha+\beta; $$ or $$ 16\geq c_1^2=c_{1}(p^{*}(V)\otimes\xi)^2=4\xi^{2}+4\alpha+4\beta =4(\epsilon+\alpha+\beta). $$ The only numerical possibilities are $\sE=$\ \framebox{$p^*(\pnsheaf{1}\alpha\oplus\pnsheaf{1}\beta)\otimes\xi$} with $(\epsilon,\alpha,\beta)=(0,1,1),(0,1,2),(0,1,3),(0,2,2), (1,1,1),(1,1,2),(2,1,1)$. \Case II: Assume now that $(a,b)\neq (1,1)$. First, let us consider the case $\epsilon=0$. $\sH_F=\det(\sE)|_F=\pnsheaf{1}{a+b}$ implies $c_1^2\geq 18> 4e(S)$. Thus if $c_2\le e=4$, $c_1^2> 4c_2(\sE)$, which means $\sE$ is unstable. Consider the exact sequence (\ref{BS}). We have that $\sH-\sA=\sO_{\pn{1}\times\pn{1}}(x,y)$ for some $x>0$, $y> 0$, and $\sA=\sO_{\pn{1}\times\pn{1}}(x+t,y+l)$ for some $t>0$, $l>0$. The inequality $4\geq c_2(\sE)=x(y+l)+y(x+t)+\deg(\sZ)$ yields $\deg(\sZ)=0$ and $(x,y,x+t,y+l)=(1,1,2,2)$. Since $\deg(\sZ)=0$ and $H^1(\sO_{\pn{1}\times\pn{1}}(t,l))=0$, we conclude that $\sE=$\ \framebox{$\sO_{\pn{1}\times\pn{1}}(1,1)\oplus \sO_{\pn{1}\times\pn{1}}(2,2)$}\ . Now assume that $\epsilon\geq 1$, and let $\sH=yF+xE$ with $x=a+b\geq 3$ and $\sH\cdot E=-x\epsilon+y\geq 2$ (since $\sH$ is the determinant of a rank $2$ ample vector bundle). It follows that $\sH^2=x(2y-x\epsilon)\geq x(4+x\epsilon)\geq 3(4+3\epsilon)\geq 21>4e(S)$. Thus if $c_2\le e=4$, $c_1^2>4c_2(\sE)$ and thus $\sE$ is unstable. Let $\sA:=\alpha E+\beta F$ be the line bundle in the sequence (\ref{BS}).
We have the following straightforward inequalities: \begin{enumerate} \item $x\ge 3$, $\epsilon\ge 1$, $y\ge x\epsilon+2\ge 5$; \item $x-\alpha>0$, $y-\beta>0$, $y\ge \beta+(x-\alpha)\epsilon+1$; \item $2\alpha > x$, $2\beta>y$; \item $(2\sA-\sH)^2=(2\alpha-x)(4\beta-2y-(2\alpha-x)\epsilon)>0$, and in particular $4\beta+x\epsilon >2y+2\alpha \epsilon$; and \item $\sA\cdot (\sH-\sA)\le c_2(\sE)\le 4$, which gives $-\alpha(x-\alpha)\epsilon+\beta(x-\alpha)+\alpha(y-\beta)\le 4$. \end{enumerate} Note that inequality (5) of the list can be written as $$ \alpha(\alpha \epsilon-x\epsilon-\beta+y) +\beta (x-\alpha)\le 4. $$ Using inequality (2) from the list, $y-\beta\ge (x-\alpha)\epsilon +1$, we get \begin{eqnarray*} 4&\ge& \alpha(\alpha \epsilon-x\epsilon+(x-\alpha)\epsilon +1) +\beta(x-\alpha)\\ &=& \alpha +\beta(x-\alpha). \end{eqnarray*} Now using inequalities (1), (2), and (3) from the list we get the absurdity $$ 4 \ge \alpha +\beta(x-\alpha) \ge \frac{x+1}{2} +\frac{y+1}{2} \ge \frac{3+1}{2} +\frac{5+1}{2}= 5. $$ \qed \section{Lower bounds for the Chern numbers of $\sE$} In this section we obtain a number of lower bounds for $c_1(\sE)^2$ for a rank $r$ ample vector bundle on a smooth toric surface. Our main tool is adjunction theory: good references for the standard adjunction results that we use are \cite[Ch.\ 10, 11]{BS95} and \cite{F90}. The following is a restatement, taking into account the geometry of toric surfaces, of the main result for the adjunction theory for surfaces. Recall that on a toric surface, a line bundle is ample if and only if it is very ample. \begin{theorem}\label{adjunctionThm} Let $L$ be an ample line bundle on a smooth projective toric surface $S$. \begin{enumerate} \item If $e=e(S)\geq 5$, then $K_S+L$ is spanned by global sections. \item If $e=e(S)\geq 7$, then $S$ is the equivariant blowup $\pi : S\to S_1$ of a smooth toric projective surface $S_1$ at a finite set $B$, such that $L=\pi^{*}L'-\pi^{-1}(B)$ where $K_S+L\cong \pi^*(K_{S_1}+L')$, and both $L'$ and $L_1:=K_{S_1}+L'$ are very ample. \end{enumerate} \end{theorem} \proof Using \cite[9.2.2]{BS95}, note that the exceptions to $K_S+L$ being spanned by global sections are all ruled out by $e(S)\ge 5$. The associated map $p_{K_{S}+L}$ has a Remmert-Stein factorization $p=s\circ \pi$ where $\pi:S\to S_{1}$ has connected fibers. By Lemma (\ref{KL}), we see that $e\ge 7$ rules out $\dim S_1=0$. If $\dim S_1=1$, then we have that $L\cdot F=2$ for a general fiber of $\pi$, but this and $e\ge 7$ contradicts Corollary \ref{singFibers}. Since $\dim S_1=2$, it follows from adjunction theory that $\pi : S\to S_1$ is the blow up of a smooth toric projective surface $S_1$ at a finite set $B$, such that $L=\pi^{*}L'-\pi^{-1}(B)$ where $K_S+L\cong \pi^*(K_{S_1}+L')$, and both $L'$ and $L_1:=K_{S_1}+L'$ are ample. The very ampleness of the last two bundles follows from the fact that ample line bundles are very ample on toric varieties. \qed \begin{corollary}\label{easyLowerBoundForC1Square} Let $\sE$ be an ample rank $r$ vector bundle on a nonsingular toric surface $S$. If $e(S)\ge 5$ then $$c_1(\sE)^{2}\ge r^2e(S)$$ with equality only if $\det\sE=-rK_S$ and $e(S)=6$. \end{corollary} \proof Let $\sH:= \det \sE$. Let $t$ be the smallest positive integer for which $tK_S+\sH$ is not ample. Since $e(S)\ge 5$, $E\cdot (tK_S+\sH)=0$ for a smooth rational curve $E$ with self-intersection $-1$. Thus we have $$-t+E\cdot\sH=E\cdot (tK_S+\sH)= 0.$$ Since $\sE$ has rank $r$, we have that $r\le\sH\cdot E=t$.
Thus $rK_S+\sH$ is spanned. Using Lemma \ref{KL}, we have $$\sH^2\ge -\sH\cdot rK_S\ge r^2e(S).$$ Moreover, since $\sH$ is ample, we have equality only if $\sH\cong -rK_S$. In this case we have $r^2K_S^2=\sH^2= r^2e(S)$, or $K_S^2= e(S)$. Since $K_S^2+e(S)=12$ we conclude that $K_S^2= e(S)=6$. \qed \begin{lemma}\label{delPezzo} Let $\sE$ be an ample rank two vector bundle on a nonsingular toric surface $S$. If $\det\sE=-2K_S$, $e(S)=6$, and $c_2(\sE)\le 6$, then $\sE\cong -K_S\oplus -K_S$. \end{lemma} \proof A simple computation with Riemann--Roch shows that $\chi(\sE\otimes K_S)=2+(K_S^2-c_2(\sE))\ge 2$. Since $H^2(\sE\otimes K_S)=H^0(\sE^*)=0$, we conclude that $\dim H^0(\sE\otimes K_S)\ge 2$. Choose linearly independent $s_{1}, s_{2}\in H^0(\sE\otimes K_S)$. If $s_{1}\wedge s_{2}\neq 0$ then, since $\det(\sE\otimes K_S)=\sO_S$, we conclude that $\sE\otimes K_S=\sO_S\oplus\sO_S$ i.e., $\sE\cong -K_S\oplus -K_S$. Thus we can assume without loss of generality that $s_{1}\wedge s_{2}= 0$. Then the saturations $\sA$ of the images of $\sO_S$ in $\sE\otimes K_S$, under the two maps $g\mapsto g\cdot s_i$, are equal. $\sA$ is invertible, and tensoring with $-K_S$ we have an exact sequence $$ 0\to \sA-K_S\to \sE \to \sQ \otimes\sI_\sZ\to 0, $$ with $\sZ$ a $0$-dimensional subscheme of $S$. Note that $\sQ$ is ample, and therefore since $S$ is toric, very ample. Since $e(S)=6$, we know that $S$ is not $\pn 2$ or a quadric, and thus \begin{equation}\label{sQsQLowerBound} \sQ^2\ge 3. \end{equation} Thus the Hodge index theorem gives $(\sQ\cdot (-K_S))^2\ge \sQ^2(-K_S)^2\ge 18$, which implies that \begin{equation}\label{sQ-K_SLowerBound} \sQ\cdot (-K_S)\ge 5. \end{equation} Since $h^0(\sA)\ge 2$, we have $\sQ\cdot\sA\ge 1$. Using this, and equations (\ref{sQsQLowerBound}) and (\ref{sQ-K_SLowerBound}) we have $$ 6\ge c_2(\sE)=(\sA-K_S)\cdot\sQ+\deg \sZ\ge 1+5+\deg\sZ. $$ Thus $\deg\sZ=0$ and $\sA\cdot\sQ=1$. The exact sequence $$0\to\sA-K_S\to \sE\to \sQ \to 0$$ gives $-2K_S=c_{1}(\sE)=\sA+\sQ-K_S$ and $K_S+\sA+\sQ=\sO$. Thus $(K_S+\sQ)\cdot\sQ=-\sA\cdot\sQ=-1$. This is absurd, since on any smooth surface $S$, the parity of $(K_S+L)\cdot L$ is even for any line bundle $L$. \qed \begin{remark} We do not know if there are any examples of $\sE$ satisfying all the hypotheses of Lemma \ref{delPezzo}, except that $c_2(\sE)>6$. \end{remark} \begin{remark} The only smooth toric surfaces $S$ with $e(S)\le 4$ are $\pn 2$ or Hirzebruch surfaces. Corollary \ref{rSquareEvectorBundleLowerBound} classifies the exceptions to $c_1(\sE)^2>r^2e(S)$ for $r=1$, and \S \ref{examples} classifies the exceptions for $r=2$ and $e(S)\le 4$. They are contained in Table 1. For $\pn 2$ it seems difficult to classify the exceptions when $r\ge 3$. For the Hirzebruch surfaces $\hirz\epsilon$, Corollary \ref{rGreaterThan1} classifies the exceptions if $\epsilon\ge 2$. \end{remark} If $e(S_{1})\geq 7$, we can repeat the procedure in Theorem \ref{adjunctionThm}, using $L_1$ on $S_1$ in the same way we used $L$ on $S$, and get $(S_{2},L_{2})$. We say the procedure has terminated when we reach the first integer $b$ with $e(S_{b})\le 6$. (See \cite{BL89} for a further study of the adjunction process.) We call the sequence $(S,L),\dots, (S_{b}, L_{b})$ the iterated adjunction sequence and $b$ the adjunction length of $S$. Notice that in the iterated adjunction sequence, at every step we contract down $(-1)$-lines in $S_{i}$ with respect to the polarization $K_{S_{i}}+L_{i}$.
This implies by Corollary (\ref{simpleBlowup}) that $e(S_{i+1})\geq \frac{e(S_{i})}{2}$. If we assume, to start with, that the surface $S$ has $e(S)\geq 2^{b-1}\cdot 6+1$ then the adjunction length is at least $b$. We have the following strong bound. \begin{theorem}\label{degree} Let $S$ be a nonsingular toric surface with $2^{b}\cdot 12\ge e(S)\geq 2^{b}\cdot 6+1$ for some integer $b\ge 0$, and set $e:=e(S)$. Let $\sE$ be an ample rank $r$ vector bundle on $S$, then $$c_1(\sE)^{2}\geq e(3r^2+2r+4br+2b-2)-12(b+1)(b+2r)-12r(r-1)+\frac{e}{2^{b-1}}-2.$$ \end{theorem} \proof Since $\sH:=\det(\sE)$ is the determinant of a rank $r$ ample vector bundle, there are no smooth rational curves $C$ on the polarized surface $(S,\sH)$ with $\sH\cdot C\le r-1$. Therefore by Theorem (\ref{adjunctionThm}), $L:=(r-1)K_{S}+\sH$ is ample. Using Lemma \ref{KL}, we have the bound \begin{equation}\label{estimate} -K_{S}\cdot\sH\geq re. \end{equation} The assumption $e(S)\geq 2^{b}\cdot 6+1$ implies that we have the adjunction sequence $(S,L),\dots, (S_{b}, L_{b}), (S_{b+1}, L_{b+1})$ with $L_{b+1}$ very ample. It follows that the sectional genus $g(L_{b+1})=g(K_{S_{b}}+L_{b})\geq 0$, i.e., $(K_{S_{b}}+L_{b})\cdot (K_{S_{b}}+K_{S_{b}}+L_{b})\geq -2$. Let $S\to S_{1}\to\ldots\to S_{b}$ be the sequence of contractions and let $\pi_{i}$ denote the $i$-th contraction map. For simplicity let us set $K_{i} :=(\pi_{1}\circ\cdots \circ \pi_{i})^{*}(K_{S_{i}})$, $K_0:= K_S$, and $S:=S_0$. $$(K_{S_{b}}+L_{b})\cdot (K_{S_{b}}+K_{S_{b}}+L_{b})=(K_{b}+K_{b-1}+\ldots +K_{1}+K_0+L)\cdot (K_b+K_b+K_{b-1}+\ldots +K_{1}+K_0+L)$$ We can further decompose: \begin{eqnarray*} K_{b}\cdot(K_b+K_{b-1}+\ldots +K_{1}+K_0+L)&=&K_b^2+K_b\cdot(K_{b-1}+ K_{b-2}+\ldots +K_{1}+K_0+L)\\ &=&K_b^2+K_{b-1}\cdot(K_{b-1}+ K_{b-2}+\ldots +K_{1}+K_0+L)\\ &=&K_b^2+K_{b-1}^2+K_{b-1}\cdot(K_{b-2}+ \ldots +K_{1}+K_0+L)\\ &\vdots&\vdots\\ &=&K_b^2+K_{b-1}^{2}+K_{b-2}^{2}+\ldots + K_{1}^{2}+K_0^2+K_0\cdot L\\ (K_b+K_{b-1}+K_{b-2}+\ldots+K_{1}+K_0+L)^{2}&=&K_b^2+ 2K_{b}\cdot(K_{b-1}+\ldots+K_{1}+K_0+L)\\ &&+(K_{b-1}+\ldots +K_{1}+K_0+L)^{2}\\ &=&K_b^2+2(K_{b-1}^2+K_{b-2}^2+\ldots+K_1^2+K_0^2+K_0\cdot L)\\ &&+K_{b-1}^2+2K_{b-1}\cdot(K_{b-2}+\ldots+K_1+K_0+L)\\ &&+(K_{b-2}+\ldots +K_1+K_0+L)^{2}\\ &=&K_b^2+3K_{b-1}^{2}+5K_{b-2}^{2}+7K_{b-3}^{2}+\ldots\\ &&+(2b-1)K_1^{2}+(2b+1)K_0^2+(2b+2)K_0\cdot L+L^{2} \end{eqnarray*} Then: \ \ $\displaystyle (K_{S_{b}}+L_{b})\cdot (K_{S_{b}}+K_{S_{b}}+L_{b}) =$ \begin{equation}\label{Deg} 2K_b^2+4K_{b-1}^{2}+6K_{b-2}^{2}+\ldots +2b K_{1}^2+(2b+2) K_0^2+(2b+3)K_0\cdot L+L^{2}\geq -2 \end{equation} Recall that $K_{i}^{2}=12-e(S_{i})$ and $ e(S_{i})\geq\frac{e}{2^{i}}$. Then \begin{eqnarray*} L^{2}+(2b+3)K_0\cdot L&\geq& -2-2\left(12-\frac{e}{2^{b}}\right) -4\left(12-\frac{e}{2^{b-1}}\right)-\ldots-(2b+2)(12-e)\\ &\geq& -2 -12(b+1)(b+2)+\frac{2e}{2^{b}}\sum_{j=0}^{b}((j+1)2^{j}). \end{eqnarray*} Using $\displaystyle\sum_{j=0}^{b}((j+1)2^{j})=2^{b+1}b+1$ we have \begin{eqnarray*} L^{2}+(2b+3)K_0\cdot L&\geq& -2-12(b+1)(b+2)+4eb+\frac{e}{2^{b-1}}. \end{eqnarray*} Recalling equation (\ref{estimate}) and the fact that $L=(r-1)K_0+\sH$, we get \begin{eqnarray*} \sH^{2}&\geq& -2-12(b+1)(b+2)+4eb+\frac{e}{2^{b-1}}+2(r-1)re+(r-1)^2(e-12)\\ &&+(2b+3)re+(2b+3)(r-1)(e-12)\\ &=& e(3r^2+2r+4br+2b-2)-12(b+1)(b+2r)-12r(r-1)+\frac{e}{2^{b-1}}-2. \end{eqnarray*} \qed \begin{remark}\label{mapleProgram} To get a global feel for the bound, we have found it helpful to graph the expression.
We include a short Maple V Release 5.1 program to plot the expression divided by part of the leading term. Varying the ranges of the rank $r$ and the Euler characteristic $e$, and the exact variant of \verb+lowerBound+ plotted, makes the scaled expression for the lower bound a useful guide. \begin{verbatim}
# b as a function of e: the integer b with 2^b*6+1 <= e <= 2^b*12
b := floor(log[2]((e-1)/6));
# the lower bound for c_1(E)^2 in terms of the rank r and e = e(S)
lowerBound := (r,e) -> e*(3*r^2+2*r+4*b*r+2*b-2)-12*(b+1)*(b+2*r)
-12*r*(r-1)+e/2^(b-1)-2;
# plot the bound scaled by part of its leading term
plot3d(lowerBound(r,e)/(r*e*(3*r+4*b)),r=1..20,e=13..100,style=PATCH,axes=BOXED);
\end{verbatim} \end{remark} \begin{remark} It is easily checked that the expression in $e$ and $r$ occurring in the lower bound is an increasing function of $e$ and $r$ for $e\ge 7$, $r\ge 1$. It is also easy to check using the above bound that $c_1(\sE)^2\ge 2r^2e(S)$ if $e(S)\ge 12$, and $c_1(\sE)^2\ge 3r^2e(S)$ if $e(S)\ge 6r+7$. Theorem (\ref{degree}) gives a strong asymptotic lower bound for $c_1^2$ as $e$ goes to $\infty$. For any fixed $c>0$, there will only be a finite number of possible pairs $(c_1^2,e)$ of numerical invariants for ample vector bundles $\sE$ on smooth toric surfaces $S$ with $c_1^2\le ce$. For example, $c_1^2\ge 2r^2e(S)$ as soon as $e(S)\ge 13$. This suggests that enumerating the pairs $(S,\sE)$ with $\sH^2\le cre(S)$, where $\sE$ is an ample vector bundle on a smooth toric surface $S$, and small $c>1$ should be a tractable classification problem with a nice answer. \end{remark} \begin{theorem}\label{applicationOfBogomolov} Let $\sE$ be an ample rank two vector bundle on a nonsingular toric surface $S$ with $2^{b}\cdot 12\ge e(S)\geq 2^{b}\cdot 6+1$ for some integer $b\ge 0$ and $e:=e(S)$. Then $$c_{2}(\sE)\geq -3(b+2)(b+3)+\frac{5b+7}{2}e+\frac{e}{2^{b+1}}-\frac{1}{2}.$$ \end{theorem} \proof If the inequality is not satisfied then using Theorem (\ref{degree}), $c_{1}(\sE)^{2}>4c_{2}(\sE)$, and thus the bundle would be unstable. The exact sequence (\ref{BS}) and the inequality (\ref{EQ1}) give $$c_{2}(\sE)\geq (\sH-\sA)^{2}+\sqrt{(\sH-\sA)^{2}}.$$ The divisor $\sH-\sA$ is ample, and thus by Theorem (\ref{degree}) applied with $r=1$, $$-3(b+2)(b+3)+\frac{(5b+7)}{2}e+\frac{e}{2^{b+1}}-\frac{1}{2}> c_{2}(\sE)\ge e(6b+3)-12(b+1)(b+2)+\frac{e}{2^{b-1}}-2+1,$$ which is equivalent to \ $18b^2+42b+13-7eb+e-3e/2^b>0$, which is impossible since $e\geq 2^{b}\cdot 6+1$.\qed \begin{remark} We expect that a generalization of Theorem \ref{applicationOfBogomolov} to ample vector bundles of arbitrary rank $r$ is true. Based on a strong dose of optimism, we conjecture that if $\sE$ is an ample rank $r$ vector bundle on a smooth toric projective surface $S$ with $2^{b}\cdot 12\ge e(S)\geq 2^{b}\cdot 6+1$ for some integer $b\ge 0$, then $$ c_2(\sE)\ge \frac{r-1}{2r}\left[e(S)(3r^2+2r+4br+2b-2)-12(b+1)(b+2r)- 12r(r-1)+\frac{e(S)}{2^{b-1}}-2\right]. $$ \end{remark} We now turn to the special case of rank two bundles where the inequality $\displaystyle c_{2}({\sE})> e(S)$ fails to be true. \begin{lemma}\label{generalBogRestriction} Let $\sE$ be an ample rank two vector bundle on a smooth toric projective surface $S$. If $\sE$ is Bogomolov unstable and $c_2(\sE) \le e(S)+\sqrt{e(S)}$, then $S$ is either $\pn 2$ or $\hirz \epsilon$ with $\epsilon\le 2$. \end{lemma} \proof Assume that $\sE$ is Bogomolov unstable. Consider the sequence (\ref{BS}) and the inequality: $$e(S)+\sqrt{e(S)}\geq c_{2}(\sE)=\sA\cdot (\sH-\sA)+\deg(\sZ)\geq (\sH-\sA)^2+\sqrt{(\sH-\sA)^{2}}$$ It follows that $(\sH-\sA)^{2}\le e$. We now apply Corollary \ref{rSquareEvectorBundleLowerBound} to the ample line bundle $\sH-\sA$.
\qed \begin{remark} Let $\delta := \min\{L^2 | L \text{ an ample line bundle on } S\}$. The above argument implies that any ample rank two vector bundle $\sE$ with $c_2(\sE)< \delta+ \sqrt{\delta}$ is Bogomolov stable. \end{remark} \begin{corollary}\label{Bog}Let $\sE$ be an ample rank two vector bundle on a smooth toric projective surface $S$. Assume that $c_{2}(\sE)\le e(S)$. If $\sE$ is not Bogomolov stable, then $(S,\sE)$ is contained in Table 1. \end{corollary} \proof Simply use Lemma \ref{generalBogRestriction} and the results for $\pn 2$ and the Hirzebruch surfaces from \S \ref{examples}. \qed \begin{proposition}\label{main} Let $\sE$ be an ample rank two vector bundle on a smooth projective toric surface $S$. If either $c_1(\sE)^2\le 4e(S)$ or $c_{2}(\sE) \le e(S)$, then $(S,\sE)$ is in Table 1. \end{proposition} \proof We can assume that $S$ is neither $\pn 2$ nor a Hirzebruch surface by the results of \S \ref{examples}. Thus $e(S)\ge 5$. Using Corollary \ref{easyLowerBoundForC1Square} and Lemma \ref{delPezzo}, we can assume without loss of generality that $c_1(\sE)^2>4e(S)$. If $c_2(\sE)\leq e$, then we are in the situation of Corollary \ref{Bog}. \qed
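\begin{remark} For convenience, the bound of Theorem \ref{degree} is also easy to evaluate outside of Maple. The following Python transcription (ours, equivalent to the snippet in Remark \ref{mapleProgram}) spot checks two of the sample consequences quoted above; the function name \verb+lower_bound+ is our own.
\begin{verbatim}
from math import floor, log2

def lower_bound(r, e):
    # b is the integer with 2^b * 6 + 1 <= e <= 2^b * 12
    b = floor(log2((e - 1) / 6))
    return (e * (3*r**2 + 2*r + 4*b*r + 2*b - 2)
            - 12*(b + 1)*(b + 2*r) - 12*r*(r - 1)
            + e / 2**(b - 1) - 2)

# c_1(E)^2 >= 3 r^2 e(S) for r <= 3 once e(S) >= 13, and
# c_1(E)^2 >= 2 r^2 e(S) once e(S) >= 12:
assert all(lower_bound(r, e) >= 3 * r**2 * e
           for r in (1, 2, 3) for e in range(13, 301))
assert all(lower_bound(r, e) >= 2 * r**2 * e
           for r in range(1, 21) for e in range(12, 301))
\end{verbatim}
\end{remark}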
\section{Gaussian wave packets/coherent states: saddle point conditions} \label{gwp} As mentioned briefly in the introduction, multidimensional Gaussian wave packets show up in many subfields of physics and have become extremely important tools for understanding a wide range of phenomena. In addition, the projection into configuration space of a coherent state describing a bosonic many-body system of the form \begin{equation} \label{cs} | z \rangle = \exp \left(-\frac{\left| z \right|^2}{2} + z \hat a^\dagger \right)| 0\rangle \end{equation} results in a Gaussian wave packet~\cite{Glauber63}, and the parameters of the coherent state are straightforwardly mapped onto those of the wave packet; see Appendix~\ref{cswp}. As Gaussian wave packets are extremely important in and of themselves, and it is possible to create a more general wave packet than the one that follows from this particular coherent state form, the development of the theory ahead is given in terms of the most general wave packet. If needed, translating all of the results back into the language of coherent states is possible in a straightforward way, i.e.~$z$ can be mapped onto momentum and position centroids, and the ground state determines the shape parameters. \subsection{Gaussian wave packets} \label{gwp1} A Gaussian wave packet has a number of parameters needed in order to specify it uniquely; we label the entire set with a Greek letter, such as $\alpha$ or $\beta$. Thus, the real mean momenta and positions are labelled $(\vec p_\alpha, \vec q_\alpha)$, and the matrix ${\bf b}_\alpha$ describes all the possible shape parameters. It must be a symmetric matrix, diagonalizable by an orthogonal matrix, with eigenvalues whose real parts are positive in order for the wave packet to be square integrable. If ${\bf b}_\alpha$ is complex, then the wave packet is sometimes called a ``chirped'' wave packet, i.e.~one in which the speed of phase oscillations linearly increases or decreases across its width. We choose the phase convention and $\hbar$-dependence such that \begin{eqnarray} \label{wavepacket} \phi_\alpha(\vec x) &=& \exp\left[ - \left(\vec x - \vec q_\alpha \right) \cdot \frac{{\bf b}_\alpha}{2\hbar} \cdot \left(\vec x - \vec q_\alpha \right) +\frac{i}{\hbar} \vec p_\alpha \cdot \left(\vec x - \vec q_\alpha \right)\right] \nonumber \\ && \times \left[\frac{{\rm Det}\left({\bf b}_\alpha+{\bf b}^*_\alpha\right)}{(2\pi\hbar)^N}\right]^{1/4} \end{eqnarray} which represents a different phase convention than that implied by Eq.~(\ref{cs}), but that is accounted for properly when applied to the Bose-Hubbard model ahead. Implicitly the right vectors are column vectors and the left vectors are row vectors. The $\hbar$ scaling chosen ensures that $\hbar$ determines the volume occupied by the wave packet, and its overall shape is completely independent of $\hbar$. The dual of this wave packet follows by complex conjugation of ${\bf b}_\alpha$ and a sign change in front of the momentum term. The notation for an evolving wave packet follows as $\phi_\alpha(\vec x;t)$, but in general, it ceases to maintain a Gaussian form for $t>0$. Assume the existence of a classical Hamiltonian, which can be analytically continued to complex phase space variables $H=H(\vec {\cal p},\vec {\cal q};t)$, and a well defined corresponding quantum Hamiltonian, $\hat H= \hat H(\frac{\hbar}{i}\partial /\partial \vec x, \vec x;t)$. They govern the classical and quantum dynamics, respectively.
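For concreteness, the conventions of Eq.~(\ref{wavepacket}) are collected below in a minimal numerical sketch (ours; the function name and the use of NumPy are illustrative assumptions, not part of the formal development):
\begin{verbatim}
import numpy as np

def phi_alpha(x, p_a, q_a, b_a, hbar=1.0):
    # Eq. (wavepacket): N-dimensional Gaussian wave packet with real
    # centroids (p_a, q_a) and complex symmetric shape matrix b_a,
    # whose eigenvalues must have positive real parts
    dx = x - q_a
    N = len(q_a)
    norm = (np.linalg.det(b_a + b_a.conj()).real
            / (2 * np.pi * hbar)**N) ** 0.25
    return norm * np.exp(-dx @ (b_a @ dx) / (2 * hbar)
                         + 1j * (p_a @ dx) / hbar)
\end{verbatim}
A chirped one-dimensional example would be \verb+phi_alpha(x, p, q, np.array([[1.0 + 0.5j]]))+, with the imaginary part of ${\bf b}_\alpha$ supplying the linear chirp.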
Two very basic dynamical quantities of interest are given by the evolving wave packet itself, $\phi_\alpha(\vec x;t)$, and so-called correlation functions \begin{eqnarray} \label{ac} {\cal A}_{\beta\alpha}(t) &=& \int_{-\infty}^\infty {\rm d}\vec x\ \phi^*_\beta(\vec x) \phi_\alpha(\vec x;t) \nonumber \\ {\cal C}_{\beta\alpha}(t) &=& \left| {\cal A}_{\beta\alpha}(t)\right|^2 \end{eqnarray} where, if the parameter sets labelled by $\beta$ and $\alpha$ are equal, then ${\cal C}_{\alpha\alpha}(t)$ is called the autocorrelation function. A matrix element of the Feynman path integral in a coherent state representation would be equivalent to the amplitude, ${\cal A}_{\beta\alpha}(t)$, of a correlation function. \subsection{Lagrangian manifolds} The Lagrangian manifold for a wave packet is the set of all complex positions and conjugate momenta $(\vec {\cal p}, \vec {\cal q})$ satisfying the equations~\cite{Huber88} \begin{equation} \label{constraints} {\bf b}_\alpha \cdot \left( \vec {\cal q} - \vec q_\alpha\right) + i \left( \vec {\cal p} - \vec p_\alpha\right) = 0 \end{equation} Notice that the manifold has no dependence on $\hbar$. This gives an $\hbar$-independent boundary value problem to solve, which explains the placement choice of $\hbar$ in Eq.~(\ref{wavepacket}). A dual wave packet with a possibly different parameter set leads to the modified Lagrangian manifold equations \begin{equation} \label{constraintsbra} {\bf b}^*_\beta \cdot \left( \vec {\cal q} - \vec q_\beta\right) - i \left( \vec {\cal p} - \vec p_\beta\right) = 0 \end{equation} The semiclassical approximation~\cite{Maslov81} relies on saddle points whose properties are given by trajectories with initial conditions, $(\vec {\cal p}_0, \vec {\cal q}_0)$, that lie on the initial manifold and after propagation of a time $t$, $(\vec {\cal p}_t, \vec {\cal q}_t)$, end up on the final manifold. Thus for correlation functions, the boundary value problem is to find all contributing solutions of the equations \begin{eqnarray} \label{sadcond} {\bf b}_\alpha \cdot \left( \vec {\cal q}_0 - \vec q_\alpha\right) + i \left( \vec {\cal p}_0 - \vec p_\alpha\right) &=& 0 \nonumber \\ {\bf b}^*_\beta \cdot \left( \vec {\cal q}_t - \vec q_\beta\right) - i \left( \vec {\cal p}_t - \vec p_\beta\right) &=& 0 \end{eqnarray} as a function of $t$. If interest is in the evolving wave packet in the configuration space representation, then the final Lagrangian manifold must be the one associated with $\langle \vec x|$, and the second set of equations is replaced by \begin{equation} \vec {\cal q}_t = \vec x \end{equation} where $\vec {\cal p}_t$ can be anything. We will call the trajectories satisfying these conditions saddle trajectories. Generally speaking, excluding harmonic oscillators (or rather systems with linear Hamilton's equations), there appears to be an infinity of solutions to these equations, almost all of which either must be excluded for reasons mentioned in the introduction, or are irrelevant because they contribute so little that they are vastly smaller than the errors involved in making a semiclassical approximation. The goal then is to find all the saddle trajectories that must be included and contribute sufficiently. The number of relevant saddles grows at least linearly with increasing time for integrable dynamical systems and exponentially for chaotic ones. If for no other reason, this gives a practical upper limit to the length of propagation time conceivable with semiclassical methods.
The domain around each saddle point for which the Newton-Raphson scheme can work shrinks accordingly. Eventually, the search has to be carried out on too fine a scale to be practical. Interestingly, for wave packets any initial condition $(\vec {\cal p}_0, \vec {\cal q}_0)$ on the Lagrangian manifold can play the role of the real centroid $(\vec p_\alpha, \vec q_\alpha)$ in Eq.~(\ref{wavepacket}), i.e.~the interchange leaves the spatial dependence of the wave packet invariant. However, the normalization constant has to be redefined to \begin{eqnarray} \label{norm1} {\cal N}_\alpha^0 &=& \left[\frac{{\rm Det}\left({\bf b}_\alpha+{\bf b}^*_\alpha\right)}{(2\pi\hbar)^N}\right]^{1/4} \exp \left[ \frac{i}{\hbar}\left( \vec{\cal p}_0 \cdot \vec{\cal q}_0 - \vec p_\alpha \cdot \vec q_\alpha\right) + \right. \nonumber \\ && \left. \vec {\cal q}_0 \cdot \frac{{\bf b}_\alpha}{2\hbar} \cdot \vec {\cal q}_0 - \vec q_\alpha \cdot \frac{{\bf b}_\alpha}{2\hbar} \cdot \vec q_\alpha \right] \end{eqnarray} in order to preserve the normalization and phase convention. The analogous substitution of the trajectory endpoint, for correlation functions, is given by \begin{eqnarray} \label{norm2} {\cal N}_\beta^t &=& \left[\frac{{\rm Det}\left({\bf b}_\beta+{\bf b}^*_\beta\right)}{(2\pi\hbar)^N}\right]^{1/4} \exp \left[ -\frac{i}{\hbar}\left( \vec{\cal p}_t \cdot \vec{\cal q}_t - \vec p_\beta \cdot \vec q_\beta\right) + \right. \nonumber \\ && \left. \vec {\cal q}_t \cdot \frac{{\bf b}^*_\beta}{2\hbar} \cdot \vec {\cal q}_t - \vec q_\beta \cdot \frac{{\bf b}^*_\beta}{2\hbar} \cdot \vec q_\beta \right] \end{eqnarray} This substitution and the modified normalization constants can be used to simplify the final form of the semiclassical (saddle point) approximation. \subsection{Real classical transport and saddle trajectories} It was shown in~\cite{Pal16} that there is a one-to-one correspondence between real classical transport pathways (bundles of like-behaving trajectories) and the relevant complex saddle trajectories. It suffices to start with a seed trajectory given by a single representative trajectory for a specific pathway and use a Newton-Raphson scheme to locate the corresponding and contributing saddle trajectory. This scheme has the three highly desirable main consequences mentioned in the introduction. Here, we give for completeness the equations that arise in the Newton-Raphson scheme~\cite{Pal16}. Considering the phase space in the neighborhood of a seed trajectory, it is useful to define $\delta \vec {\cal p}_t = \vec {\cal p} - \vec {\cal p}_t$ and $\delta \vec {\cal q}_t = \vec {\cal q} - \vec {\cal q}_t$. The stability matrix ${\bf M}_t$ describes how neighboring trajectories shift relative to this seed trajectory.
Thus, \begin{equation} \left( \begin{array}{c} \delta \vec {\cal p}_t \\ \delta \vec {\cal q}_t \end{array} \right) = \left( \begin{array}{cc} {\bf M_{11}} & {\bf M_{12}} \\ {\bf M_{21}} & {\bf M_{22}} \end{array} \right) \left( \begin{array}{c} \delta \vec {\cal p}_0 \\ \delta \vec {\cal q}_0 \end{array} \right) \label{delta} \end{equation} The seed orbit most likely does not satisfy the boundary value problem and in the case of correlation functions instead gives \begin{eqnarray} {\bf b}_\alpha \cdot \left( \vec {\cal q}_0 - \vec q_\alpha\right) + i \left( \vec {\cal p}_0 - \vec p_\alpha\right) &=& \vec {\cal c}_0 \nonumber \\ {\bf b}^*_\beta \cdot \left( \vec {\cal q}_t - \vec q_\beta\right) - i \left( \vec {\cal p}_t - \vec p_\beta\right) &=& \vec {\cal c}_t \end{eqnarray} Combining these and the stability equations, it is possible to solve for the change in initial conditions needed to approach the saddle trajectory. This gives \begin{equation} \label{nr} \begin{array}{l} \vec{\cal{p}}_0^\prime = \vec{\cal{p}}_0 + i{\bf b}_\alpha \cdot {\cal D} \cdot \left[ \left( {\bf b}^*_\beta \cdot{\bf M_{22}} - i {\bf M_{12}} \right) \cdot {\bf b}_\alpha^{-1} \cdot \vec {\cal c}_0 - \vec {\cal c}_t \right] \\ \vec{\cal{q}}_0^\prime = \vec{\cal{q}}_0 - {\cal D} \cdot \left[ \left( {\bf M_{11}}+ i {\bf b}^*_\beta \cdot {\bf M_{21}} \right) \cdot \vec {\cal c}_0 + \vec {\cal c}_t \right] \end{array} \end{equation} where \begin{equation} {\cal D}^{-1} = {\bf M_{11}}\cdot {\bf b}_\alpha + {\bf b}^*_\beta \cdot {\bf M_{22}} + i {\bf b}^*_\beta\cdot {\bf M_{21}}\cdot {\bf b}_\alpha - i{\bf M_{12}} \end{equation} If the interest is in calculating the propagating wave packet itself, as opposed to some correlation function, the equations are slightly simplified to give \begin{equation} \begin{array}{l} \vec{\cal{p}}_0^\prime = \vec{\cal{p}}_0 + i{\bf b}_\alpha \cdot {\cal D} \cdot \left[ {\bf M_{22}} \cdot {\bf b}_\alpha^{-1} \cdot \vec {\cal c}_0 - \vec {\cal c}_t \right] \\ \vec{\cal{q}}_0^\prime = \vec{\cal{q}}_0 -{\cal D} \cdot \left[ i {\bf M_{21}} \cdot \vec {\cal c}_0 + \vec {\cal c}_t \right] \end{array} \end{equation} with \begin{equation} \label{det2} {\cal D}^{-1} = {\bf M_{22}} + i {\bf M_{21}}\cdot {\bf b}_\alpha \qquad \vec {\cal q}_t - \vec x = \vec {\cal c}_t \end{equation} These equations are used iteratively to converge to a contributing saddle point. It suffices to find a single point within the domain of convergence for each saddle, which is what the seed trajectories provide. \subsection{Saddle families} \label{family} In a continuous time dynamical system, i.e.~as opposed to dynamical mappings, each saddle gives rise to a one parameter family of saddles. As $t$ changes continuously, the saddle trajectory's initial conditions change continuously as well. Barring orbit bifurcations and crossing Stokes surfaces (which becomes exceedingly unlikely in the $\hbar\rightarrow 0$ limit), it is possible to predict how the initial conditions change using Eq.~(\ref{sadcond}) and Hamilton's equations. Consider a saddle trajectory that contributes at exactly time $t$, thus satisfying Eq.~(\ref{sadcond}). Its initial condition lies on the initial Lagrangian manifold, and its propagated endpoint on the final one. If however, the propagation time is slightly (differentially) altered, the endpoint is no longer on the final manifold.
Using Hamilton's equations for a time shift $\delta t$, the altered endpoint is located at \begin{eqnarray} \vec {\cal q}_{t+\delta t} &=& \vec {\cal q}_t + \frac{\partial H}{\partial \vec {\cal p}_t} \delta t \nonumber \\ \vec {\cal p}_{t+\delta t} &=& \vec {\cal p}_t - \frac{\partial H}{\partial \vec {\cal q}_t} \delta t \end{eqnarray} The Newton-Raphson scheme of the previous section can be applied to find the shift in initial conditions that would restore the saddle point conditions for the new time. Since the initial point begins on the initial manifold, $\vec {\cal c}_0=0$, but the shift of the final point means that $\vec {\cal c}_t\ne 0$. Following the same kind of algebra leading to Eq.~(\ref{nr}) gives the initial condition expressions for the saddle trajectory, which contributes at $t+\delta t$, \begin{equation} \begin{array}{l} \label{tshift} \vec{\cal{p}}_0^{\{t+\delta t\}} = \vec{\cal{p}}_0^{\{t\}} - i{\bf b}_\alpha \cdot {\cal D} \cdot {\bf b}^*_\alpha \cdot \left( \frac{\partial H}{\partial \vec {\cal p}_t} +i \frac{\partial H}{\partial \vec {\cal q}_t} \right) \delta t \\ \vec{\cal{q}}_0^{\{t+\delta t\}} = \vec{\cal{q}}_0^{\{t\}} - {\cal D} \cdot {\bf b}^*_\alpha \cdot \left( \frac{\partial H}{\partial \vec {\cal p}_t} +i \frac{\partial H}{\partial \vec {\cal q}_t} \right) \delta t \end{array} \end{equation} A similar expression results for the case in which the quantity of interest is the propagating wave function with the matrix ${\bf b}^*_\alpha$ replaced by unity and the simpler determinant $\cal D$ of Eq.~(\ref{det2}). The structure of these equations involving the gradient of the Hamiltonian is linked to the fact that the direction of initial condition variation is along the maximal change of (perpendicular to) the energy surface in an autonomous dynamical system. \begin{figure}[tbh] \includegraphics[width=8.5 cm]{fig1.pdf} \caption{Typical saddle family characteristics. The oscillating curve in the upper panel is the real part of ${\cal A}(t)$ for one particular saddle family, and the envelope is the absolute value. The phase oscillation is faster at short times and slows as time increases, corresponding to changes in the complex saddle trajectory with time. Each saddle family member has an energy and total particle number ($n_T$) surface to which it belongs. At short times, the real parts of the energy and particle number of the saddle trajectory are greater than the energy and particle number expectation values of the wave packet, and at longer times they are less than the expectation values. The saddle family's peak contribution occurs near where the real parts of the energy and $n_T$ equal the energy and total particle number (here $\langle n_T\rangle=40$) expectation values of the wave packet. This saddle family is taken from an example of the Bose-Hubbard model defined ahead in Sect.~\ref{bhms}. \label{fig1}} \end{figure} The modified initial conditions of Eq.~(\ref{tshift}) can be used as a seed for the Newton-Raphson scheme of the previous section to construct the entire saddle trajectory family that forms a continuous time contribution to the evolving wave packet or correlation function. An example from the Bose-Hubbard model introduced in Sect.~\ref{bhms} is shown for illustration purposes in Fig.~\ref{fig1}. Generally speaking, there is a peak contribution time for a saddle family corresponding to a saddle trajectory possessing an energy and particle number close to the mean of the initial wave packet.
Earlier and later in time, the saddle trajectory moves further away from this energy and particle number surface and the contribution decays, thus creating a time window in which it contributes significantly. It suffices to search for a single real transport pathway seed on the energy and particle number surface of the trajectory defined by $(\vec p_\alpha, \vec q_\alpha)$, locate a saddle, and from there obtain the contribution of the entire family through repeated use of Eq.~(\ref{tshift}). In practice, the convergence appears to be superior (computationally faster and fewer convergence problems) when constructing the entire saddle family this way rather than by finding real seed trajectories as a continuous function of time. \section{Identifying real classical transport pathways} \label{transport} \subsection{Wigner transform} \label{wt} The key for identifying classical transport pathways is to start with the phase space image of a wave packet under the Wigner transform. This gives a multidimensional Gaussian density of phase points in a classical phase space to consider. This image is given by \begin{equation*} {\cal W}(\vec p, \vec q) = \frac{1}{(2\pi\hbar)^{N}} \int_{-\infty}^\infty {\rm d} \vec x \ {\rm e}^{i \vec p \cdot \vec x/\hbar} \phi_\alpha \left(\vec q-\frac{\vec x}{2}\right) \phi^*_\alpha \left(\vec q+\frac{\vec x}{2}\right) \end{equation*} \begin{equation} = \left(\pi \hbar \right)^{-N} \exp \left[ - \left(\vec p - \vec p_\alpha, \vec q - \vec q_\alpha \right) \cdot \frac{{\bf A}_\alpha}{\hbar} \cdot \left(\vec p - \vec p_\alpha, \vec q - \vec q_\alpha \right) \right] \end{equation} where ${\bf A}_\alpha$ is \begin{equation} \label{mvg} {\bf A}_\alpha = \left(\begin{array}{cc} {\bf c^{-1}} & {\bf c}^{-1} \cdot {\bf d} \\ {\bf d} \cdot {\bf c}^{-1} & {\bf c} + {\bf d} \cdot {\bf c}^{-1} \cdot {\bf d} \end{array} \right) \qquad {\rm Det}\left[ {\bf A}_\alpha \right] =1 \end{equation} with the association \begin{equation} \label{mvgwf} {\bf b}_\alpha = {\bf c} + i {\bf d} \end{equation} The $ 2N \times 2N$ dimensional matrix ${\bf A}_\alpha$ is real and symmetric. If ${\bf b}_\alpha$ is real, there are no covariances between $\vec p$ and $\vec q$, i.e.~the wave packet is not chirped, and the off-diagonal blocks of the matrix ${\bf A}_\alpha$ vanish. Ahead it is very useful to know that ${\bf A}_\alpha$ can be inverted analytically. The inverse is given by~\cite{Lu02} \begin{equation} {\bf A}_\alpha^{-1} = \left(\begin{array}{cc} {\bf c} + {\bf d} \cdot {\bf c}^{-1} \cdot {\bf d} & - {\bf d} \cdot {\bf c}^{-1} \\ - {\bf c}^{-1} \cdot {\bf d} & {\bf c^{-1}} \end{array} \right) \end{equation} Since it is necessary to calculate $\bf c^{-1}$ to determine ${\bf A}_\alpha$, its inverse is determined with no further effort. \subsection{Local evolution of Gaussian densities} \label{egd} Consider any constant density contour of the Wigner transform of the initial wave packet as a set of initial conditions. It must have some kind of hyper-elliptical shape described by the equation \begin{equation} \label{ellipse0} r^2 = \left( \delta \vec p_0 , \delta \vec q_0 \right) \cdot \frac{{\bf A}_\alpha}{\hbar} \cdot \left(\delta \vec p_0 , \delta \vec q_0 \right) \end{equation} where $(\delta \vec p_0, \delta \vec q_0) = (\vec p_0 - \vec p_\alpha, \vec q_0 - \vec q_\alpha)$, and $(\vec p_0, \vec q_0)$ belongs to the set of points on the hyper-elliptical surface satisfying the equation.
Locally, within a linearizable regime (small enough $r$), the dynamics to time $t$ distorts the hyper-ellipse to a new one \begin{equation} \label{ellipse} r^2 = \left( \delta \vec p_t , \delta \vec q_t \right) \cdot \frac{{\bf A}_\alpha (t)}{\hbar} \cdot \left(\delta \vec p_t , \delta \vec q_t \right) \end{equation} Recalling the information given by the stability matrix of the central trajectory $(\vec p_0, \vec q_0)=(\vec p_\alpha, \vec q_\alpha)$ identifies the evolution of ${\bf A}_\alpha$ with $t$. Inserting unity of the form $\mathbb{1}= {\bf M}_t^{-1} {\bf M}_t$ and its transpose appropriately into Eq.~(\ref{ellipse0}) gives \begin{eqnarray} r^2 &=& \left( \delta \vec p_0 , \delta \vec q_0 \right) \cdot {\bf M}_t^T \cdot{ {\bf M}_t^{-1}}^T \cdot \frac{{\bf A}_\alpha}{\hbar} \cdot {\bf M}_t^{-1} \cdot {\bf M}_t \cdot \left(\delta \vec p_0, \delta \vec q_0 \right) \nonumber \\ &=& \left( \delta \vec p_t , \delta \vec q_t \right) \cdot {{\bf M}_t^{-1}}^T \cdot \frac{{\bf A}_\alpha}{\hbar} \cdot {\bf M}_t^{-1} \cdot \left(\delta \vec p_t, \delta \vec q_t \right) \end{eqnarray} and, thus, necessarily one has the identification \begin{equation} {\bf A}_\alpha (t) = {{\bf M}_t^{-1}}^T \cdot {\bf A}_\alpha \cdot {\bf M}_t^{-1} \end{equation} This is a real symmetric matrix (also with unit determinant) which can be diagonalized by an orthogonal transformation. Its eigenvalues and eigenvectors contain all the information necessary to enable a targeted search for saddle trajectories. For convenience, we work with the inverse, which has the exact same set of eigenvectors, i.e. \begin{equation} \Lambda = {\cal O} {\bf A}_\alpha^{-1}(t) {\cal O}^{-1} = {\cal O} {\bf M}_t \cdot {\bf A}_\alpha^{-1} \cdot {\bf M}_t^T {\cal O}^{-1} \end{equation} and the set of inverse eigenvalues, $\{\lambda_{j,\pm}\}$. The determinant of ${\bf A}_\alpha^{-1}(t)$ is unity and the eigenvalues come in pairs here labelled by $j=1,\ldots,N$, one expanding, $\lambda_{j,+} > 1$, one contracting, $\lambda_{j,-} <1$ ($\lambda_{j,+}=\lambda_{j,-}^{-1}$). Similar constructs have been used in the calculation of the various Lyapunov exponents of a multidimensional chaotic dynamical system, where the process is discussed as a decomposition of the tangent space~\cite{Gaspard98,Ott02}. \subsection{Asymptotic structure} \label{as} For a large class of systems and initial states, it will turn out that most of the degrees of freedom do not need to be part of the search. Dynamical systems have a great deal of structural organization in their phase spaces that is revealed asymptotically in time by the ${\bf A}_\alpha^{-1}(t)$ matrix. Denote the eigenvectors corresponding to the set of $\lambda_{j,+}$ as $(\delta \vec p_t, \delta \vec q_t)_j$. Each eigenvector signifies the final direction of a set of initial conditions along a line, which separated at the rate $\lambda_{j,+}$. One wishes to know which set of initial conditions in the neighborhood of $(\vec p_\alpha, \vec q_\alpha)$ ends up evolving into the eigenvector $(\delta \vec p_t, \delta \vec q_t)_j$. Using the definition of the stability matrix, it turns out to be the direction of initial conditions given by \begin{equation} \left( \begin{array}{c} \delta \vec p_0 \\ \delta \vec q_0 \end{array} \right)_j = {\bf M}_t^{-1} \left( \begin{array}{c} \delta \vec p_t \\ \delta \vec q_t \end{array} \right)_j \label{delta2} \end{equation} The vector of initial conditions depends on the length of propagation time used to generate the ${\bf A}_\alpha^{-1}(t)$ matrix.
However, for large times, each vector of initial conditions converges to a stable direction, and becomes essentially independent of time. If too short a propagation time is selected, then the initial condition vectors will not have stabilized, i.e.~converged to the directions of interest. On the other hand, propagation that covers too long a time period risks losing accuracy and will eventually cause numerical problems. Here, we construct ${\bf A}_\alpha^{-1}(\tau)$, i.e.~$t=\tau$, for an intermediate time scale within the appropriate time range and use its eigenvectors to determine the most important degrees of freedom to sample. This is done once at the very beginning to initiate the process of finding real seed trajectories as a function of time. An indication of how to arrive at a reasonable time scale $\tau$ is given in the next subsection. Only the $N$ eigenvectors associated with the eigenvalues greater than unity need to be considered. Trajectories linked by a contracting direction only approach each other, and evolve similarly. If a trajectory belongs to a bundle corresponding to a classical pathway, so will all of its neighbors along the $N$ contracting degrees of freedom, i.e.~the $N$-dimensional manifold described by the $N$ contracting eigenvectors. \subsection{Distinguishing shearing and exponential stretching, and a reasonable value of $\tau$} \label{distinguish} It is not necessary to search in the direction that maximizes the change of energy. This is related to the saddle families discussed in Sect.~\ref{family}, and this direction is already accounted for by the technique described in that section. Thus, a targeted search for seed trajectories can be immediately reduced to an $N-1$ dimensional parameter search of initial conditions in a real phase space without any loss of generality (assuming the omission of contracting directions). The associated eigenvector needs to be identified in order to avoid sampling in that direction. As it must be associated with a shearing in the dynamics, it cannot be associated with the exponential stretching of instability. There is a simple trick that often suffices to identify this eigenvector quickly, and which helps identify whether one has reached a sufficiently asymptotic propagation time, $\tau$ (this does not work for a harmonic oscillator, where there is no shearing in the dynamics). The logic follows by considering free particle motion in a single degree of freedom. Let ${\bf A}_\alpha$ and the mass be unity and irrelevant for this purpose. The stability matrix times its transpose is \begin{equation} {\bf M}_\tau \cdot {\bf M}_\tau^T = \left( \begin{array}{cc} 1 & 0 \\ \tau & 1 \end{array} \right) \cdot \left( \begin{array}{cc} 1 & \tau \\ 0 & 1 \end{array} \right) = \left( \begin{array}{cc} 1 & \tau \\ \tau & 1 + \tau^2 \end{array} \right) \end{equation} with large eigenvalue \begin{equation} \lambda_+(\tau) = 1 + \frac{\tau^2}{2} +\frac{1}{2}\sqrt{\tau^4+4\tau^2} \approx \tau^2 \end{equation} where the approximate result applies only if $\tau$ is large enough. In a multidimensional system with more complicated dynamics, quadratic dependence of this eigenvalue is an indicator that the asymptotic structure of its Hamiltonian flow has emerged. Therefore, if one calculates the spectrum, $\{\lambda_{j,\pm}\}$, for times $\tau$ and $2\tau$ sufficiently large, there must be an eigenvalue for which $\lambda_{j,+} (2\tau) = 4\lambda_{j,+}(\tau)$. If there is only one, then its eigenvector must be perpendicular to the energy surface.
If there are none, then one has not reached the asymptotic regime desired and $\tau$ must be increased. If there are multiple eigenvalues respecting this relation, then one can calculate the energy along the associated multiple eigenvectors to determine which maximally shifts the energy, or calculate the gradient of the Hamiltonian at the wave packet centroid and compare it to the relevant eigenvectors. Unstable degrees of freedom behave very differently. As their eigenvalues behave exponentially in time, one expects fully unstable directions to satisfy $\lambda_{j,+} (2\tau) = \lambda_{j,+}^2(\tau)$. In practice, one finds a factor of unity (no stretching at all) or square relations as limiting possibilities, and the various eigenvalue behaviors lie in between these cases. In fact, in the calculations performed ahead, only one eigenvalue followed the factor four relation and it was unnecessary to calculate the gradient of the energy surface and compare it to an eigenvector. \subsection{Determining the initial condition sampling space} Of the remaining $N-1$ dimensional phase space of initial conditions of relevance to searching for classical transport pathways, consider the largest eigenvalue first; denote it $\lambda_{1,+}$ and its associated vector of initial conditions $(\delta \vec p_0, \delta \vec q_0)_1$. It gives a very particular coordinate direction of initial conditions in which to search for the earliest appearing real transport pathways. It should be emphasized that the range of initial conditions along this vector is chosen to fully span the breadth of the initial wave packet's Wigner transform Gaussian density, i.e.~as many standard deviations as desired. These initial conditions are not limited to the linearizable regime used to identify this direction. The line of initial conditions is propagated long enough in time to become highly stretched, nonlinear, and repeatedly folded into an extremely complicated shape, i.e.~it is used far beyond the linearizable regime that was used to identify the direction. If the second largest eigenvalue $\lambda_{2,+}$ is not too much smaller than $\lambda_{1,+}$, then it is likely necessary to add another search direction for saddle trajectories, i.e.~the phase space plane of initial conditions defined by the first and second vectors $(\delta \vec p_0, \delta \vec q_0)_1$ and $(\delta \vec p_0, \delta \vec q_0)_2$. One could continue in this way to successively higher dimensions until the most relevant initial conditions are included in the search. However, it appears that sometimes an unstable direction does not generate additional saddles for the dynamical quantity of interest. For example, concerning the autocorrelation function, this would mean that even though the various initial conditions lead to rapidly separating trajectories, away from the central trajectory along this direction they do not result in additional returning trajectories within the time frame of interest. In fact, for the Bose-Hubbard model of Sect.~\ref{bhms}, in some cases new saddles seem not to appear even when such a direction is combined with another part of the subspace, one which does generate transport pathways leading to saddles. This could be true for other dynamical systems as well. Therefore, one can check each expanding direction individually as an indicator of which collection of eigenvectors (subspace of initial conditions) is absolutely necessary for an exhaustive saddle search, and one can use this as a starting point for a minimal search subspace.
However, we are not currently aware of any guarantee that this is always going to turn out to be sufficient. We recognize that in practice it may not really be all that practical to continue beyond, say, $3$ dimensions. Nevertheless, for a broad class of dynamical systems and wave packets, even those possessing many degrees of freedom, this is sufficient for the purpose of constructing the semiclassical prediction for correlation functions. Some examples with up to $8$ degrees of freedom are shown in Sect.~\ref{bhms}. \subsection{Finding seed trajectories} \label{seed} With the sampling space determined, the goal is reduced to identifying a single seed trajectory for each unique pathway. One simple idea is to define a function of the initial conditions in the sampling space for which one can search for local minima. Consider the correlation function as a concrete example. The Wigner transform of the final state has a centroid $(\vec p_\beta, \vec q_\beta)$ and shape given by ${\bf A}_\beta$. A distance function can be defined that measures the number of standard deviations that the endpoint of a trajectory is away from the final wave packet centroid. It is given by \begin{equation} f_\beta(\vec p_0,\vec q_0;t) = (\delta \vec p_t, \delta \vec q_t) \cdot {\bf A}_\beta \cdot (\delta \vec p_t, \delta \vec q_t) \end{equation} where $(\delta \vec p_t,\delta \vec q_t) = (\vec p_t - \vec p_\beta, \vec q_t - \vec q_\beta)$. The trajectory endpoint $(\vec p_t, \vec q_t)$ clearly is a function of the initial conditions. As $(\vec p_0, \vec q_0)$ is varied, each isolated minimum corresponds to a unique classical pathway. However, these minima come in one parameter families with time and one only needs the local minima in time on the central energy surface, as previously discussed. An excellent predictor of how much a saddle can contribute to a correlation function is given by this distance function. Consider the sum of the initial and final distances of a seed trajectory, $\gamma$, \begin{equation} D_\gamma= f_\alpha(\vec p^\gamma_0,\vec q^\gamma_0;0) + f_\beta(\vec p^\gamma_0,\vec q^\gamma_0;t) \end{equation} All the hard work of finding complex saddle trajectories is thus reduced to finding the local minima of $D_\gamma$ in the reduced dimensional space determined by the properties of ${\bf A}^{-1}_\alpha (\tau)$ and ${\bf M}_\tau^{-1}$. For the seed trajectories identified by the minima of $D_\gamma$, the function ${\rm e}^{-D_\gamma}$ gives a fairly good rough estimate of the suppression of the semiclassical contribution due to the associated saddle trajectory's mismatch with the real centroids of the initial and final wave packets. Therefore, the matrices ${\bf A}_\alpha$ and ${\bf A}_\beta$ can be used to cut off the search space domains. Typically it is found that the contribution given by a saddle family tends to be rather insignificant if $D_\gamma\ge 10$. Notice that this provides a cut-off criterion that does not grow with increasing $N$. This has the consequence that for each single degree of freedom, the phase space coordinate of a relevant saddle trajectory tends to get closer to the central trajectory as $N$ increases. \subsection{The role of symmetry} The symmetries of a quantum Hamiltonian lead to an important role for the irreducible representations of the associated groups with respect to the properties of the eigenvalues and eigenfunctions.
The Hilbert space can be represented by a basis which separates into subspaces, each having specific transformation properties with respect to the actions of the associated group operators. If an initial state respects at least some part of the dynamical or fundamental symmetries of the system, it necessarily can be constructed from a subspace of the full Hilbert space. Quantities, such as the autocorrelation function, Eq.~(\ref{ac}), must have enhanced long time averages as a result. The enhancement depends on the ratio of the full Hilbert space dimensionality relative to the appropriate subspace. This, of course, must be reflected in the semiclassical theory. The dynamical effects are accounted for by the transformation properties of the saddle trajectories. It suffices to consider the saddle trajectories' initial conditions and how they transform under the group operations. If an operation returns the same initial condition, no multiplicity is implied; otherwise there must be a replica of the saddle trajectory given by the particular operation. Hence the rule: highly symmetric saddle trajectory initial conditions lead to low multiplicities, and low symmetry initial conditions lead to higher multiplicities. Depending on the Hamiltonian and initial and final states then, there will be a symmetry reduced fundamental domain in the phase space, which can be used to search for saddles. The saddles within the other domains follow by a symmetry operation. The precise domain boundaries depend on the subspace, i.e.~the set of necessary search directions, \begin{equation} \left\{\left( \begin{array}{c} \delta \vec p_0 \\ \delta \vec q_0 \end{array} \right)_j \right\}\ . \end{equation} They collectively define a volume, which can be decomposed into fundamental domains. This imposes a certain structure on the eigenvectors giving the search directions. If the search domain is composed of a single eigenvector, hence the eigenvalue is non-degenerate, the symmetry operation applied to the vector has to return the negative of the vector. The symmetry-imposed eigenvector structure in such a case is immediately visible at a cursory glance. However, in higher dimensional search spaces, and especially if there are search directions associated with degenerate eigenvalues (equal stretching rates, $\lambda_{j,+}$), it may happen (as seems rather likely) that the structure of the eigenvectors is somewhat hidden from view and it can be rather difficult to identify fundamental domain boundaries. In such a case, a rotation of the degenerate search directions can aid immensely in identifying the boundaries and the structure imposed on the eigenvectors. A non-trivial example is shown in Sect.~\ref{6s} where there is a $6$-fold symmetry in a $2$-dimensional space, but the fundamental domain cannot be selected as just any $60^\circ$ wedge in the plane. The eigenvectors that emerge from the stability analysis have to be rotated to identify the boundaries. Once the analysis is completed and the boundaries are properly identified though, the reduced domain can be used to accelerate the saddle search and the construction of a semiclassical approximation. As a final remark, note that there are significant symmetry effects on the dynamics of multidimensional quantum systems, which semiclassical theory is entirely capable of addressing in detail.
In particular, they affect the far-out-of-equilibrium dynamics of a many-body system such as that represented by the Bose-Hubbard model discussed in Sect.~\ref{bhms}, some of which is addressed in detail there. \section{Bose-Hubbard model saddle trajectories} \label{bhms} In recent studies calculating post-Ehrenfest quantum many-body interferences~\cite{Tomsovic18b,Ullmo18} and coherence effects~\cite{Schlagheck18}, this method was used to find saddle trajectories for a Bose-Hubbard model in a ring configuration. The quantum Hamiltonian contains tunable nearest neighbor hopping and two-body interaction terms, and can be expressed as \begin{equation} \label{bhm} \hat H = -J \sum_{j=1}^N \left(\hat a^\dagger_j \hat a_{j+1} + h.c.\right) + \frac{U}{2} \sum_{j=1}^N \hat n_j \left(\hat n_j - 1 \right) \end{equation} where $N$ is the number of sites in the ring and determines the number of degrees of freedom. $U$ is a measure of the strength of the two-body interaction, which depends on the s-wave scattering length. $J$ controls the tunneling amplitude, which depends on the well depth. There are two constants of the motion, the energy and the total number of particles, $\hat n_T=\sum_j \hat n_j$. A mean field analysis~\cite{Pitaevskii03,Castin98} leads to a corresponding classical Hamiltonian, which follows from the introduction of the quadrature operators $(\hat q_j, \hat p_j)$ defined as \begin{align*} \hat a_j &= \frac{\hat q_j + i \hat p_j}{\sqrt{2}} \\ \hat a_j^\dagger &= \frac{\hat q_j - i \hat p_j}{\sqrt{2}} \end{align*} and subsequent replacement by $c$-numbers. After accounting for operator ordering issues, this gives \begin{eqnarray} \label{hamiltonian} H_{cl} &=& -J \sum_{j=1}^N \left( q_j q_{j+1} + p_j p_{j+1} \right) + \frac{U}{2} \sum_{j=1}^N \left(\frac{q_j^2 + p_j^2}{2}\right)^2 \nonumber \\ && - U \sum_{j=1}^N \frac{q_j^2 + p_j^2}{2} \end{eqnarray} It is a quartic function of the phase space variables and is straightforwardly analytically continued to complex variables. The second constant of the motion is given by \begin{equation} n_{cl} = \sum_{j=1}^N \frac{q_j^2 + p_j^2}{2} \end{equation} and is the fixed total number of particles for a classical trajectory. \subsection{Quantum and classical symmetries} \label{qcsym} This Bose-Hubbard model, Eq.~(\ref{bhm}), has the following discrete symmetries: cyclic permutation, reverse index ordering (clockwise/counterclockwise ring), and time reversal invariance. The first two symmetries lead to groups of order $g_N=2N$ for $N\ge 3$. For $N\le2$, reversing the index ordering is identical to cyclic permutation and hence $g_N=N$ for $N=1,2$. Furthermore, the model has a continuous symmetry, $U(1)$, in which multiplying the set $\{\hat a_j\}$ by a phase ${\rm e}^{i\theta}$, and hence the $\{\hat a^\dagger_j\}$ by the complex conjugate phase, leaves the Hamiltonian invariant. This is equivalent to the rotation of the quadrature operators, \begin{equation} \label{quadrot} \left( \begin{array}{c} \hat p_j^\prime \\ \hat q_j^\prime \end{array} \right) = \left( \begin{array}{cc} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{array} \right) \left( \begin{array}{c} \hat p_j \\ \hat q_j \end{array} \right) \end{equation} and similarly for the $c$-numbers. Thus, all these symmetries are reflected in the classical dynamics and the initial conditions of the saddle trajectories. From a theoretical perspective, one can design a symmetry group of interest quite easily for a many-body system of a type akin to the Bose-Hubbard model of Eq.~(\ref{bhm}).
For example, if hopping connects all the sites equally, the maximum discrete group of the Hamiltonian would be the permutation group (the symmetric group), $S_N$. The actual constructive interference and long-time average enhancement factors would depend on the symmetry properties of the initial and final states. \subsection{Coherent state density waves} \label{csdw} A coherent state density wave is a useful initial state for our demonstration purposes~\cite{Tomsovic18b,Ullmo18}. Denote it \begin{equation} |{\bf n}\rangle = \prod_{j=1}^N \exp \left(-\frac{\left|b_j\right|^2}{2} + b_j a^\dagger_j \right)|{\bf 0}\rangle \end{equation} where each site $j$ of the ring potentially has a different mean number of particles $n_j=\left|b_j\right|^2$. A coherent state density wave is populated as follows: $|n,0,n,0,...,n,0\rangle$, where $n$ represents the {\bf mean} number of particles on that site (not to be confused with a Fock state density wave). This notation is incomplete in that the phase of each $b_j$ is not specified. Thus, we assume that the $\{b_j\}$ are all chosen real and positive if not indicated otherwise. An example of an initial state that does have alternating phases of the $\{b_j\}$ is discussed near the end of Sect.~\ref{regime}. In a configuration representation, initial coherent states appear as Gaussian wave packets \begin{eqnarray} \phi_\alpha(\vec x) &=& \pi^{-N/4} \exp\left[- \frac{\left(\vec x- \vec q_\alpha\right)^2}{2} + i \vec p_\alpha \cdot \left(\vec x - \vec q_\alpha\right) \right. \nonumber \\ && \left. + i \frac{\vec p_\alpha \cdot \vec q_\alpha}{2}\right] \end{eqnarray} where \begin{equation} \label{rotqp} \sqrt{2}\ \vec b = \vec q_\alpha + i \vec p_\alpha \end{equation} This is in the form of Eq.~(\ref{wavepacket}), as it must be, except with a different phase convention given by the last term of the equation. Its Wigner transform is \begin{equation} {\cal W}(\vec q,\vec p) = \pi^{-N} \exp\left[- \left(\vec q- \vec q_\alpha\right)^2 - \left(\vec p- \vec p_\alpha\right)^2\right] \end{equation} These equations define the wave packet centroid, phase convention, shape matrices ${\bf b}_\alpha =\mathbb{1}, {\bf A}_\alpha ={\bf A}_\beta =\mathbb{1}$, and $\hbar=1$. If the components of $\vec b$ are chosen real and positive, there is just a shift in the position centroids per site, \begin{equation} \phi_\alpha(\vec x) = \pi^{-N/4} \exp\left[- \sum_{j=1}^N \frac{\left(x_j-\sqrt{2n_j}\right)^2}{2} \right] \end{equation} This gives rise to the corresponding density operator Wigner transforms, \begin{equation} {\cal W}(\vec q,\vec p) = \pi^{-N} \prod_{j=1}^N \exp \left[ -\left(q_j-\sqrt{2n_j}\right)^2 - p_j^2 \right] \end{equation} \subsection{4-site coherent state density wave} \label{4s} Consider a 4-site ring with initial coherent state density wave $|20,0,20,0\rangle$ ($b_j$ chosen real and positive), and let the interaction strength be $U=0.5$. There are two time scales in the dynamics for the Bose-Hubbard model without hopping, given by \begin{equation} \tau_1 = \frac{2\pi}{U n_j} = 0.63 \qquad \tau_2 = \frac{2\pi}{U} = 4\pi= 12.57 \; , \end{equation} $\tau_1$ is a classical scale associated with the first return of classical trajectories, and $\tau_2$ is a quantum scale associated with the revival of the initial quantum state~\cite{Greiner02b}. We fix the hopping strength to be $J \! =\! 0.2$, which perturbs the dynamics, but leaves the system in the strong interaction regime.
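For concreteness, here is a minimal numerical sketch (Python) of the classical Hamiltonian of Eq.~(\ref{hamiltonian}), the conserved classical particle number, and the centroid corresponding to this initial state via Eq.~(\ref{rotqp}); the parameter values follow the text:
\begin{verbatim}
import numpy as np

J, U = 0.2, 0.5
nbar = np.array([20.0, 0.0, 20.0, 0.0])   # mean occupations of |20,0,20,0>

# Centroid from sqrt(2) b = q + i p with the b_j real and positive.
q_alpha = np.sqrt(2.0 * nbar)
p_alpha = np.zeros_like(q_alpha)

def H_cl(p, q):
    # Classical Bose-Hubbard Hamiltonian on a ring, Eq. (hamiltonian).
    qn, pn = np.roll(q, -1), np.roll(p, -1)       # site j+1, periodic
    n = 0.5 * (q**2 + p**2)
    return -J * np.sum(q * qn + p * pn) \
           + 0.5 * U * np.sum(n**2) - U * np.sum(n)

def n_cl(p, q):
    # Conserved total classical particle number.
    return np.sum(0.5 * (q**2 + p**2))

print(H_cl(p_alpha, q_alpha), n_cl(p_alpha, q_alpha))   # n_cl = 40 here
\end{verbatim}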
The autocorrelation function, Eq.~(\ref{ac}), constructed semiclassically using these saddles is pictured in the upper panel of Fig.~1 of Ref.~\cite{Tomsovic18b}, where it is seen to have quite a complicated set of oscillations; the semiclassical saddle point formulas can be found there and are not repeated here. The revivals and fractional revivals $(1/2,1/3,...)$ are reduced, but still visible. They require a great deal of delicately balanced quantum interference to reconstruct, but the semiclassical approximation does so. This could only happen if one has identified all or nearly all of the contributing saddles. \subsubsection{Search directions} \label{sd} The first step is to construct and diagonalize the matrix ${\bf M}_\tau\cdot{\bf M}_\tau^T$ (since ${\bf A}_\alpha = \mathbb{1}$) for the initial condition $(\vec p_\alpha,\vec q_\alpha)$ [the wave packet centroid] for a long enough propagation time that the eigenvectors have converged to their asymptotic directions; a value of $\tau$ on the order of $(1.5-2) \times \tau_2$ was sufficiently asymptotic. Of the $4$ eigenvalues greater than unity, one dominates, is at least somewhat exponentially unstable, and is given by $5.8 \times 10^{11}$ at $t=16$. The next largest eigenvalue is $1.4 \times 10^{5}$ ($10^6$ times smaller), and its eigenvector is associated with the direction of maximal change in energy, which as pointed out in subsection~\ref{distinguish} is not necessary to search. The final $2$ eigenvalues are nearly degenerate with value $1.8$, and are entirely irrelevant. Thus, this case can be reduced to a $1$-parameter search for saddles without losing any dynamical information on the time scale of $1-2$ revivals, say, less than $2\tau_2$; the straightforward search dimensionality for this case would have required $8$ parameters. The eigenvector associated with the largest eigenvalue is used in conjunction with Eq.~(\ref{delta2}) to determine the sole line of initial conditions necessary to search for saddles. \subsubsection{Saddles} \label{sadd} In Fig.~\ref{fig2}, the points where the distance function satisfies $D_\gamma\le 20$ are blackened and plotted as a function of time and initial \begin{figure}[tbh] \includegraphics[width=8.7 cm]{fig2.pdf} \caption{View of seed trajectory locations in time and initial conditions. For $4000$ initial conditions the distance function is calculated as a function of time. The initial conditions are chosen uniformly along the eigenvector mentioned in the text across the interval $[-4\sigma,4 \sigma]$ corresponding to the Wigner transform of the coherent state density wave. The points are blackened where $D_\gamma \le 20$. The initial conditions are labeled by an index on the $y$-axis. The full discrete symmetry is encapsulated by a reflection symmetry with respect to the $x$-axis. The multiplicity $1$ saddles are found using the $y=0$ line seed trajectories, and the rest have multiplicity $2$. \label{fig2}} \end{figure} conditions. The search direction is given by the vector $\left( \begin{array}{c} \delta \vec p_0 \\ \delta \vec q_0 \end{array} \right)_1$ of Eq.~(\ref{delta2}). The propagation time and initial condition of each seed trajectory are selected as those minimizing the distance within each isolated blackened region. After using the Newton-Raphson scheme, each region leads to a unique saddle trajectory family with a semiclassical contribution to ${\cal A}(t)$ similar to the one shown in Fig.~\ref{fig1}.
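A minimal sketch (Python, hypothetical integrator tolerances, grid, and propagation time) of this kind of one-parameter scan of the distance function; for simplicity the search line here is taken along the $(\delta q, 0, -\delta q, 0)$ direction anticipated further below, rather than the numerically computed eigenvector:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

J, U = 0.2, 0.5

def rhs(t, y):
    # Hamilton's equations for the classical Bose-Hubbard ring,
    # with phase space ordering y = (p_1..p_N, q_1..q_N).
    N = y.size // 2
    p, q = y[:N], y[N:]
    n = 0.5 * (q**2 + p**2)
    dq = -J * (np.roll(p, -1) + np.roll(p, 1)) + U * (n - 1.0) * p
    dp =  J * (np.roll(q, -1) + np.roll(q, 1)) - U * (n - 1.0) * q
    return np.concatenate([dp, dq])

def D_gamma(y0, yc, t):
    # Sum of initial and final squared distances from the centroid
    # (A_alpha = A_beta = identity, hbar = 1, and beta = alpha here).
    yt = solve_ivp(rhs, (0.0, t), y0, rtol=1e-10, atol=1e-10).y[:, -1]
    return np.sum((y0 - yc)**2) + np.sum((yt - yc)**2)

# Centroid of |20,0,20,0> and a unit search vector along the
# (dq, 0, -dq, 0) direction in the position block.
yc = np.concatenate([np.zeros(4),
                     np.sqrt(2.0 * np.array([20., 0., 20., 0.]))])
v = np.zeros(8); v[4], v[6] = 1.0, -1.0; v /= np.linalg.norm(v)

for s in np.linspace(-4.0, 4.0, 81):          # scan across the packet
    d = D_gamma(yc + s * v, yc, t=0.63)       # near the first return
    if d <= 20.0:
        print(s, d)                           # candidate seed region
\end{verbatim}
Each surviving $(s,t)$ region would then be handed to the Newton-Raphson scheme to converge the actual complex saddle.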
The total number of saddles found as a function of time is illustrated in Fig.~\ref{fig3}. \begin{figure}[tbh] \includegraphics[width=8.5 cm]{fig3.pdf} \caption{Total number of saddles as a function of time. The saddles counted are those found with seed trajectories with a $D_\gamma \le 20$. A dashed quadratic curve is shown as a guide. \label{fig3}} \end{figure} In this particular dynamical case, one can be fairly certain that all the saddle trajectory families have been found (up to a certain significance) due to the highly structured locations of the distance minima. Despite the high degree of instability in the largest eigenvalue, the saddle number is increasing similarly to that of a system in a near-integrable dynamical regime, i.e.~a linearly increasing density of saddles in time leads to a pure quadratic total count of saddles up to some fixed time. If the system were behaving as a purely chaotic dynamical system, the number of saddles found would increase exponentially. In addition, it is possible to see an approaching problem as time increases. Above and below the central horizontal axis $(0$-line) are regions approaching each other in pairs, which implies a coalescence of saddle points once they overlap. Beyond a certain time, to avoid singularities in the semiclassical theory, the coalescing saddles will require a uniformized approximation of the kind discussed in~\cite{Chester57}. If one uses the second largest eigenvector, $\left( \begin{array}{c} \delta \vec p_0 \\ \delta \vec q_0 \end{array} \right)_2$, one finds only the symmetric saddles that can be found using the single trajectory with initial condition $(\vec p_\alpha,\vec q_\alpha)$. As this vector is associated with the normal to the \begin{figure}[tbh] \includegraphics[angle=-90,width=8.5 cm]{fig4.pdf}\vskip -.5cm \caption{Equivalent of Fig.~\ref{fig2} for the eigenvector perpendicular to the energy surface. This eigenvector is associated with the second largest eigenvalue. For $1000$ initial conditions the distance function is calculated as a function of time. The initial conditions are chosen uniformly along the eigenvector across the interval $[-4\sigma,4 \sigma]$ corresponding to the Wigner transform of the coherent state density wave. The points are blackened where $D_\gamma \le 20$. The initial conditions are labeled by an index on the y-axis. \label{fig4}} \end{figure} energy surface, this direction preserves the symmetries of the central trajectory, and it does not lead to any new saddles beyond those found with the central trajectory; rather, it just generates the families of each of the fully symmetric saddles. It is nevertheless interesting to illustrate this point. This vector's equivalent of Fig.~\ref{fig2} is shown in Fig.~\ref{fig4}. In the strong interaction regime, there is strong shearing perpendicular to the energy surface and this makes each saddle trajectory family contribute over a wide range in time; recall Fig.~\ref{fig1}. It is possible to deduce from this figure the width of the semiclassically contributing time window for any particular symmetric saddle. Select one of the contiguous blackened regions, and fix any time that intersects it. There will be a minimum distance trajectory for that fixed time, somewhere near the middle of the fixed time vertical line's intersection with the region. It can be used to locate some particular symmetric saddle.
If one differentially shifts the time back and forth enough to intersect the entire chosen region, the continuous collection of saddles forms a saddle family exactly as discussed in Sects.~\ref{family},\ref{distinguish}. In fact, one could construct a saddle family this way with a large number of real seed trajectories, one for each fixed time, but the method discussed in Sects.~\ref{family},\ref{distinguish} is much more reliable and faster. It is better not to use this direction in the saddle searches, as mentioned earlier. The time interval that intersects the chosen region is the contributing time window of a saddle family, just as pictured in Fig.~\ref{fig1}. Therefore, the regions further to the right (increasing time), which are more horizontally tilted and correspond to later arriving saddle families, have saddle families that contribute to the autocorrelation function over wider time windows. It suffices to project any particular region seen in Fig.~\ref{fig4} onto the time axis to read off the width of that saddle family's contribution in time. \subsubsection{Symmetries} \label{symm} The initial condition associated with the coherent state density wave centroid, $\vec p_\alpha = \vec 0$ and $\vec q_\alpha = (\sqrt{40}, 0, \sqrt{40}, 0)$, is invariant under some of the symmetry operations that leave the Bose-Hubbard model invariant, i.e.~a double hop cyclic permutation and time reversal invariance. These symmetries have a number of consequences for the autocorrelation function defined in Eq.~(\ref{ac}). Two consequences are handled quickly. First, time reversal invariance ensures that ${\cal A}(-t) = {\cal A}^*(t)$, but does not otherwise lead to symmetry related saddles (multiplicity greater than 1) forward in time. Second, any choice of rotation via Eq.~(\ref{quadrot}) acting on the variables of $(\vec p_\alpha, \vec q_\alpha)$ of Eq.~(\ref{rotqp}) leaves the autocorrelation function invariant. This is reflected in a symmetry of the classical trajectories, whereby a rotation of initial conditions of this sort leads to a trajectory linked to the former by rotation. The remaining cyclic permutation and index reversal symmetry does lead to symmetry related saddles and this is visible in the symmetry of Fig.~\ref{fig2}. For this case, a saddle may be unique or duplicated elsewhere in phase space by the double cyclic permutation. To be unique, the initial condition of the saddle trajectory must have the same symmetry as $(\vec p_\alpha,\vec q_\alpha)$. In other words, if the initial condition position is the same for sites $1$ $\&$ $3$ (the full symmetry is there, but observing just those two site positions identifies it), it has multiplicity $1$, and if they are different, then it has multiplicity $2$. All of the initial conditions in the neighborhood of $(\vec p_\alpha,\vec q_\alpha)$ have lower symmetry than it does (excluding the direction of maximal change in energy). Thus, the only multiplicity $1$ saddles arise in the Newton-Raphson search from seed trajectories found using the initial condition $(\vec p_\alpha,\vec q_\alpha)$; this is effectively a zero parameter search of initial conditions. The regions straddling the central horizontal axis have multiplicity $1$. The rest of the saddles have multiplicity $2$ and in this case arise from a one parameter search. In fact, one can reduce the search regime to the region above the central axis and multiply the contributions of the saddles to ${\cal A}(t)$ by their multiplicity index.
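A minimal sketch (Python, hypothetical tolerance) of this multiplicity bookkeeping for the $4$-site ring, using the double hop cyclic permutation as the residual discrete symmetry of the centroid:
\begin{verbatim}
import numpy as np

def double_cyclic(p, q):
    # Shift the 4-site ring by two sites, the discrete symmetry
    # preserved by the density wave centroid (p_alpha, q_alpha).
    return np.roll(p, 2), np.roll(q, 2)

def multiplicity(p0, q0, tol=1e-8):
    # Multiplicity 1 if the saddle's initial condition is invariant
    # under the double hop cyclic permutation, otherwise 2.
    p2, q2 = double_cyclic(p0, q0)
    invariant = (np.allclose(p0, p2, atol=tol)
                 and np.allclose(q0, q2, atol=tol))
    return 1 if invariant else 2

# The centroid itself has multiplicity 1; a condition displaced along
# (dq, 0, -dq, 0) breaks the symmetry and has multiplicity 2.
q_alpha = np.sqrt(2.0 * np.array([20., 0., 20., 0.]))
p_alpha = np.zeros(4)
print(multiplicity(p_alpha, q_alpha))                       # -> 1
print(multiplicity(p_alpha, q_alpha + [0.5, 0, -0.5, 0]))   # -> 2
\end{verbatim}
The multiplicity found this way is the weight multiplying a symmetry reduced saddle's contribution to ${\cal A}(t)$.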
Since total particle number is a conserved quantity, and this example is in the strong interaction regime, the structure of the optimal vector search direction can be understood by simple arguments. For the moment, assume the hopping is turned off, and the classical dynamics are quasi-periodic. In order for a trajectory to return close to its initial conditions, as must be the case for an autocorrelation function, the periods of motion of the sites must be nearly integer multiples of each other. The shearing is strongest perpendicular to the energy surface for each site. Also, there is almost no frequency change for the unoccupied orbitals. Since for the coherent state density wave chosen, the periods of motion for sites 1 \& 3 are identical, the strongest change in their period ratio away from unity, while preserving the total particle number, is for site 1 to increase its occupancy and site 3 to decrease by the same amount, or vice versa. Furthermore, with $b_j$ real and positive, the perpendicular to the energy surface involves only $q_1$ or $q_3$, no momenta (the perpendicular vector at a point on a circle lies along the continuation of the radial line from the center to that point). Thus, the search direction incorporates vanishing changes in momenta, and a change in position $\delta \vec q = (\delta q, 0, -\delta q, 0)$. Even after turning the hopping term back on, the direction is dominated by these changes. It is clear why Fig.~\ref{fig2} has a reflection symmetry with respect to the central axis: $(\delta q, 0, -\delta q, 0)$ and $(-\delta q, 0, \delta q, 0)$ are related by double cyclic permutation or index reversal (with a shift). As the story gets more complicated for greater numbers of sites, we introduce a shorthand for this search direction, $(\delta q, 0, -\delta q, 0) \equiv (\delta n, -\delta n)$, ignoring the unoccupied sites or the difference between position and momentum; note that in this shorthand, the second direction, the one associated with the perpendicular to the energy surface and Fig.~\ref{fig4}, is denoted $(\delta n, \delta n)$. One implication of the irrelevance of the unoccupied sites and of momentum generally in the search directions is that in the strong interaction regime, it is never necessary to search more than $N/2-1$ dimensional spaces to find all the contributing complex saddles up to intermediate time scales. \subsection{6-site coherent state density wave} \label{6s} Consider next a 6-site ring with initial coherent state density wave $|10,0,10,0,10,0\rangle$, and let the interaction and hopping strengths, respectively, be $U=1.0$ and $J=0.2$. In this case, the largest eigenvalue is doubly degenerate, and the initial condition search directions correspond very roughly to $(\delta n, -\delta n, 0)$ and $(\delta n, \delta n, -2\delta n)$; the normalization is not given by the notation. \begin{figure}[tbh] \vskip .3 cm\includegraphics[angle=-90,width=9 cm]{fig5.pdf} \caption{$D_\gamma$ near the revival time. Each black spot gives rise to a single seed trajectory, and hence there is a one-to-one correspondence between spots and saddles. A $300 \times 300$ grid of initial conditions was used to calculate $D_\gamma\le 20$ at $\tau_2$. The $6$ dotted lines correspond to the $6$ symmetry related vectors of $(\delta n, -\delta n, 0)$. The $3$ long dashed and $3$ medium dashed lines correspond to cyclic permutations of $(\delta n, \delta n, -2\delta n)$ and its negative. They separate the plane into twelve domains.
The domains $I^-$ and $I^+$ are mirror images of each other about the dashed line between them, similarly for $II^-$ and $II^+$. The $3$ copies of each domain, $I^\pm$ and $II^\pm$, are related by $120^\circ$ rotation. One choice for a fundamental domain would be the sum of the regions $I^-$ and $II^+$ adjacent on the right side of the figure. \label{fig5}} \end{figure} The eigenvector of the next largest eigenvalue corresponds to the perpendicular to the energy surface, $(\delta n, \delta n, \delta n)$. No other search directions are even remotely relevant. This information was used to locate the roughly $5000$ saddle families up to $t=12$. The equivalent of Fig.~\ref{fig2} would be 3-dimensional. Instead, Fig.~\ref{fig5} shows where $D_\gamma \le 20.0$ in the plane of initial condition search directions, as a cross-section at fixed time equal to $\tau_2$ ($=2\pi$). The double degeneracy turns out to be necessary to accommodate the higher symmetry. For example, form the sum of the two vectors above. The resulting vector is equivalent to $(\delta n, 0, -\delta n)$, which is an odd permutation of the first vector. The difference gives an even (cyclic) permutation. In fact, using appropriate normalization and summing or subtracting (recall that ${\bf A}_\alpha=\mathbb{1}$), it is possible to construct in the plane of initial conditions all $6$ symmetry related versions of $(\delta n, -\delta n, 0)$, uniformly spread out with $60^\circ$ between them. Similarly, it is possible to build the $3$ cyclic permutations of $(\delta n, \delta n, -2\delta n)$, as well as the $3$ cyclic permutations of the negative, $(-\delta n, -\delta n, 2\delta n)$. Unlike $(\delta n, -\delta n, 0)$, which gives rise to a multiplicity of $6$, these two sets cannot be mapped onto each other by a symmetry operation and saddles associated with them only come in multiplicities of $3$. These twelve lines are indicated in Fig.~\ref{fig5}. They separate the fundamental domains, which can be mapped onto each other by either a cyclic permutation or index reversal. Thus, to find all the saddles, it is only necessary to search in $1/6^{th}$ of the initial condition plane. For example, the $60^\circ$ wedge encompassing areas $I^-$ and $II^+$ on the right hand side would give a complete set of saddles. Those emanating from the central initial condition have multiplicity $1$, those on the two symmetry lines at the bottom of $I^-$ and the top of $II^+$ have multiplicity $3$, and the rest have multiplicity $6$. For a highly symmetric point in phase space, typically it takes time for the neighboring lower symmetry trajectories to return. This can be seen in this example by calculating an intensity-weighted average multiplicity of the saddles as a function of time. It is given by \begin{equation} {\cal M}(t) = \frac{\sum_{j} g^2_j \left|{\cal A}_j(t)\right|^2}{\sum_{j} g_j \left|{\cal A}_j(t)\right|^2} \end{equation} where the index $j$ runs only over the symmetry reduced set of saddles. Figure~\ref{fig6} illustrates the \begin{figure}[tbh] \includegraphics[width=8 cm]{fig6.pdf} \caption{$\cal M$ as a function of time. The effective multiplicity of the saddles begins at $1$ and increases toward the maximum possible of $6$ in this case as time increases. At long times, it saturates at $6$ and remains there as these saddles come from a higher dimensional space of initial conditions than the others. \label{fig6}} \end{figure} result for the $6$-site ring example.
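A minimal sketch (Python, with hypothetical multiplicities and contribution values) of the intensity-weighted average multiplicity defined above:
\begin{verbatim}
import numpy as np

def effective_multiplicity(g, A_j):
    # Intensity-weighted average multiplicity M(t) at a single time.
    # g: multiplicities of the symmetry reduced saddles;
    # A_j: their complex contributions to the autocorrelation function.
    w = np.abs(A_j)**2
    return np.sum(g**2 * w) / np.sum(g * w)

# Hypothetical values: three reduced saddles of multiplicities 1, 3, 6.
g = np.array([1.0, 3.0, 6.0])
A_j = np.array([0.50 + 0.10j, 0.20 - 0.30j, 0.05 + 0.02j])
print(effective_multiplicity(g, A_j))
\end{verbatim}
Evaluated over a grid of times, this reproduces curves such as Fig.~\ref{fig6}, rising from $1$ toward $6$ as the high multiplicity saddles take over.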
The higher symmetry/lower multiplicity saddles dominate at short times and give way to dominance by the highest multiplicity saddles at longer times. \subsection{Remarks on the 8-site ring} \label{r8c} The $8$-site model has a new feature: the order-$4$ cyclic group has an order-$2$ cyclic subgroup. There will be saddles of degeneracies $(1,2,4,8)$. There are $3$ search directions necessary to construct the maximum $8$-fold saddle degeneracy, and a search for all the relevant saddles to long times will require a significant computational effort, but is quite possible to do. Nevertheless, compared with the $16$ dimensions required by the straightforward search method, this is a great advance. The fourth largest eigenvalue will be associated with the normal to the energy surface, $(\delta n, \delta n, \delta n, \delta n)$, and along with the remaining ones can be entirely ignored. In greater detail, consider the case for $J=0.5$ and $U=0.5$ and an initial coherent state density wave $|40, 0, 40, 0, 40, 0, 40, 0\rangle$. It turns out that the search direction associated with the most unstable eigenvector is $(\delta n, -\delta n, \delta n, -\delta n)$. This direction can capture the saddles of multiplicity $2$; as usual, multiplicity $1$ saddles require only the wave packet central orbit. The two choices for the fundamental search domain are either the positive half line or the negative half line along this direction. It is slightly more complicated to determine the fundamental search domain for the multiplicity $4$ saddles. The second and third most unstable directions are equally unstable (degenerate eigenvalues) and are roughly given by $(\delta n, 0, -\delta n, 0)$ and $(0, \delta n, 0, -\delta n)$. Actually, due to the degeneracy, the two eigenvectors that emerge from the calculations are not these two, but rather a linear combination that hides the simple structure of these two vectors. It is necessary to recognize that rotating the two calculated vectors generates the two above, which are then simpler and related by a cyclic permutation. With the vectors above, four choices for a fundamental search domain could be given by the full line along $(\delta n, -\delta n, \delta n, -\delta n)$ and either the positive or negative half line along either $(\delta n, 0, -\delta n, 0)$ or $(0, \delta n, 0, -\delta n)$. Another choice, though, could be the positive half lines of $(\delta n, -\delta n, \delta n, -\delta n)$ and $(\delta n, 0, -\delta n, 0)$ plus the negative half lines of $(\delta n, -\delta n, \delta n, -\delta n)$ and $(0, \delta n, 0, -\delta n)$. Some care must be exercised. The choice of the positive half line along $(\delta n, -\delta n, \delta n, -\delta n)$ and the full line along $(\delta n, 0, -\delta n, 0)$ would turn out to miss half of the possible multiplicity $4$ saddles entirely (those found would come with a symmetry related partner). A simple fundamental search domain for multiplicity $8$ saddles is the positive half lines of all three directions. One curious feature is that the largest eigenvalue (a variance) turns out to be approximately $90$ times greater than those of the next $2$ eigenvector directions ($\sqrt{90}$ times more unstable). This has some interesting consequences. First, a crude guess would be that the earliest saddle of degeneracy $4$ should show up on a time scale roughly $\sqrt{90}$ times the first return time, $\tau_1$. In fact, the first degeneracy-$4$ saddle appears at roughly $7.5\tau_1$.
Thus, there is a significant time separation of the initial appearance of saddles with multiplicities $(1,2)$ relative to saddles with multiplicities $(4,8)$. The first return is non-degenerate, but by just after the second return, the quantum dynamics quickly becomes dominated by doubly degenerate saddles. The situation remains this way until $7.5\tau_1$, when the first quadruply degenerate saddle arises. They are few and weakly contributing, and so it still takes quite a bit more time for the quantum dynamics to be dominated by the highest degeneracy saddles. If one is only interested in the initial interferences that arise in the dynamics, a $1$-dimensional search suffices for this $8$-site case, but to follow the dynamics long enough to see the emergence of the full symmetry enhancement in the autocorrelation function requires a full $3$-dimensional search. \section{Identifying dynamical regimes using ${\bf M}_\tau \cdot {\bf A}_\alpha^{-1} \cdot {\bf M}_\tau^T$ and ${\bf M}_\tau^{-1}$} \label{regime} Many Hamiltonian systems depend on parameters, which in many cases might be controllable, say, by varying external field strengths. For systems with many degrees of freedom, far out of equilibrium, it can be rather challenging to get a full understanding of the dynamics for an individual system, let alone for the range of dynamical possibilities of the system as a function of the parameters. The analysis using the spectrum of ${\bf M}_\tau \cdot {\bf A}_\alpha^{-1} \cdot {\bf M}_\tau^T$ and its associated eigenvectors (after mapping back with ${\bf M}_\tau^{-1}$) is ideally suited to elucidating the various dynamical regimes of such a system. The results depend naturally on the phase space region of interest, which is determined by the central trajectory of the wave packet or coherent state. For the Bose-Hubbard model of Sect.~\ref{bhms}, there are various transitions related to the relative strengths of the hopping ($J$-parameter) and interactions ($U$-parameter); one example is the much discussed superfluid-Mott insulator transition~\cite{Greiner02a}, another is the dynamical transition to more chaotic dynamics away from the pure hopping and pure interaction limits, which represent integrable systems~\cite{Kolovsky16}. To illustrate the idea, let $J=\cos\theta$ and $U=\sin\theta$ so that $J^2+U^2=1$. There is a complete rotation of the system from pure hopping dynamics to pure interaction dynamics covered \begin{figure}[tbh] \includegraphics[width=8.5 cm]{fig7.pdf} \caption{Spectrum, $\{\log_{10}\left(\lambda_{j,+}\right) \}$, of ${\bf M}_\tau \cdot {\bf A}_\alpha^{-1} \cdot {\bf M}_\tau^T$ as a function of $\theta$. The initial state is a coherent state density wave for a ring with $8$ sites whose parameters are given in the main text. All $8$ $\lambda_{j,+}$ are shown, but degeneracies make it appear as though fewer are plotted. There is a strong realignment of the associated eigenvectors, $\left\{\left( \begin{array}{c} \delta \vec p_0 \\ \delta \vec q_0 \end{array} \right)_j\right\}$, at the transition point in the spectrum. To the right of the transition, the eigenvectors essentially do not involve the initially unoccupied sites, whereas to the left, all the sites are involved and there is a double repetition around the ring in the structure of the eigenstates. \label{fig7}} \end{figure} by varying $\theta$ across the range $0\le \theta \le \pi/2$.
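A minimal, finite-difference sketch (Python, with hypothetical propagation time, displacement size, and $\theta$ grid) of such a parameter sweep of the expanding part of the spectrum:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def rhs(J, U):
    # Hamilton's equations for the classical Bose-Hubbard ring.
    def f(t, y):
        N = y.size // 2
        p, q = y[:N], y[N:]
        n = 0.5 * (q**2 + p**2)
        dq = -J * (np.roll(p, -1) + np.roll(p, 1)) + U * (n - 1.0) * p
        dp =  J * (np.roll(q, -1) + np.roll(q, 1)) - U * (n - 1.0) * q
        return np.concatenate([dp, dq])
    return f

def stability_matrix(y0, tau, J, U, eps=1e-6):
    # Central finite-difference estimate of M_tau = d y_tau / d y_0.
    def flow(y):
        return solve_ivp(rhs(J, U), (0.0, tau), y,
                         rtol=1e-12, atol=1e-12).y[:, -1]
    M = np.empty((y0.size, y0.size))
    for k in range(y0.size):
        dy = np.zeros(y0.size); dy[k] = eps
        M[:, k] = (flow(y0 + dy) - flow(y0 - dy)) / (2.0 * eps)
    return M

# Centroid of |5,0,5,0,5,0,5,0>; A_alpha = identity, so the relevant
# matrix is M_tau . M_tau^T, with J = cos(theta), U = sin(theta).
y0 = np.concatenate([np.zeros(8), np.sqrt(2.0 * np.tile([5.0, 0.0], 4))])
for theta in np.linspace(0.1, 1.5, 15):
    M = stability_matrix(y0, 10.0, np.cos(theta), np.sin(theta))
    lam = np.sort(np.linalg.eigvalsh(M @ M.T))[8:]   # expanding half
    print(round(theta, 2), np.round(np.log10(lam), 2))
\end{verbatim}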
For an $8$-site ring, Fig.~\ref{fig7} shows the expanding part of the spectrum (the base $10$ logarithm of all eight $\lambda_{j,+}$) as a function of $\theta$ for a density wave coherent state with populated sites of mean number $n=5$ and $b=\sqrt{5}$ (i.e.~$|5,0,5,0,5,0,5,0\rangle$). The spectrum is invariant with increasing particle number if the interaction strength $U$ is reduced in inverse proportion to the increase. Thus, for any occupancy of the populated sites, the coherent state density wave $|n,0,n,0,n,0,n,0\rangle$ generates the exact same $\theta$-dependent spectrum as Fig.~\ref{fig7} if one uses $U = \frac{5}{n}\sin\theta$, or rather $J^2+\left(\frac{n}{5}\right)^2 U^2=1$. Moving from left to right, the spectrum exhibits a seemingly discontinuous change in the dynamical properties of the system near $\theta=1.01219704$, where the spectrum abruptly shifts and the eigenvectors completely rearrange their orientations. This occurs at the same location independent of the number of sites in the ring. \begin{figure}[tbh] \includegraphics[width=8.5 cm]{fig8.pdf} \caption{Largest eigenvalue, $\log_{10}\left(\lambda_{1,+}\right)$, of ${\bf M}_\tau \cdot {\bf A}_\alpha^{-1} \cdot {\bf M}_\tau^T$ as a function of $\theta$. The initial state is a coherent state density wave for rings with $4,6,8,10,12,14,16,18$ sites with populated sites of mean number $n=5$. All $8$ cases of $\lambda_{1,+}$ are shown, but portions of the curves with fewer sites are copied in rings with greater numbers of sites. For example, the $12$-site ring follows partly the $4$-site ring and partly the $6$-site ring results. That makes it appear as though fewer examples are plotted. The upper panel is a magnification of the transition region, which is magnified again in the lower panel near the sharp peak. \label{fig8}} \end{figure} For example, Fig.~\ref{fig8} plots the largest eigenvalue for all rings with even numbers of sites from $4$ to $18$ and $n=5$ as in Fig.~\ref{fig7}. It turns out that although the transition is extremely abrupt, it is not discontinuous in either of the limits of the number of sites or the occupancies tending to infinity (assuming the appropriate scaling of $U$), and the peak occurs at a universal value of $nU/J=n\tan\theta\approx 8$ to an accuracy of better than one part in $10^6$. To the right of this transition, the eigenvalues of the ($4$-dimensional) subspace of initial conditions involving the initially unoccupied sites rapidly fall towards zero, meaning that those directions have no involvement in the production of saddles. Thus, the part of the initial conditions of saddle trajectories regarding those sites remains very nearly unoccupied for the entire time range that the semiclassical theory can be used to reconstruct the quantum dynamics. The next larger eigenvalue curve is mostly horizontal on the right side and is related to the shearing perpendicular to the energy surface. It has this general appearance in all the calculations regardless of site or particle numbers or initial conditions. The fact that there are eigenvalues several orders of magnitude above it is an indicator of the presence of at least some chaotic dynamics in the system. The next eigenvalue above is doubly degenerate and responsible for creating saddles that have multiplicities $4$ and $8$, just discussed in greater detail in Sect.~\ref{r8c}. The most unstable eigenvalue at the top is responsible for the multiplicity $2$ saddles.
The larger the gap between these two eigenvalues, the longer it takes for the effective saddle multiplicity, ${\cal M}(t)$, to transition from $1 \rightarrow 2 \rightarrow 4 \rightarrow 8$. On the left side of the transition, the most unstable eigenvectors involve the initially unoccupied sites strongly. They must satisfy the discrete symmetries of the ring, as must the eigenvectors on the right side, but that is accomplished in a very different way. They exhibit a pattern which is twice repeated in going around the ring once, unlike the eigenvectors to the right of the transition, which just do not involve half the sites (those initially unoccupied). In addition, there are ``level'' crossings where the association between eigenvectors and eigenvalues switches back and forth, and thus there is the possibility of transitions in the dynamics with regards to which subspaces dominate the production of saddles. On a final note, if one chooses an initial coherent state with all of its particles in a single site, there is a generally similar appearance to the spectral dependence on $\theta$. There do not appear to be qualitatively new dynamical features associated with initially occupying a single site relative to the density wave example. There are initial conditions, though, which do lead to new features. For example, it is straightforward to show using the mean field (Hamiltonian) equations of motion that the trajectory associated with equal site populations and phases of $b$ is stable for all values of $J,U$. Its spectrum must behave quite differently than the density wave. Consider a $4$-site ring populated $|20,20,20,20\rangle$ with all $b=\sqrt{20}$. Figure \ref{fig9} shows its spectrum in the upper panel and illustrates how different the behavior can be from the coherent state density wave example. It turns out that the largest eigenvalue is associated with the eigenvector perpendicular to the energy surface, which just represents the associated dynamical shearing. One of the eigenvalues is doubly degenerate and only $3$ curves are apparent. A \begin{figure}[tbh] \includegraphics[width=8.5 cm]{fig9.pdf} \caption{Spectrum, $\{\log_{10}\left(\lambda_{j,+}\right) \}$, of ${\bf M}_\tau \cdot {\bf A}_\alpha^{-1} \cdot {\bf M}_\tau^T$ as a function of $\theta$. The upper panel shows the $\{\log_{10}\left(\lambda_{j,+}\right) \}$ of a $4$-site ring with equal populations and phase relations; see text for details. The lower panel shows the $\{\log_{10}\left(\lambda_{j,+}\right) \}$ for an $8$-site ring with an alternating phase relationship from site to site; see text for details. There is no abrupt dynamical transition for these examples as there is for coherent state density waves. There is a transition to chaotic dynamics in the lower panel, and the greatest degree of instability seen for any trajectories. \label{fig9}} \end{figure} small change to this coherent state, i.e.~alternating the sign of the $b_j$, creates the most unstable dynamics that we have seen in calculations. This is such a strong effect that the mean site particle number had to be reduced to $2.5$ to prevent the instability from exceeding the precision available in the calculation. The example in the lower panel is for $8$ sites with $b_j=(-1)^{j+1}\sqrt{2.5}$ for the $j^{th}$ site. There is no abrupt transition for this initial state, but there are a number of level crossings where the dominant dynamical features are interchanged. There appear to be only $5$ eigenvalues because $3$ of them are doubly degenerate.
They are the ones which on the right side of the figure are interior to the highest and lowest eigenvalues. This is also where the lowest eigenvalue is the one associated with the normal to the energy surface and the most unstable one is associated with the creation of doubly degenerate saddles. \section{Summary} Gaussian wave packets and their intimately related counterparts, coherent states for bosonic many-body systems, have great importance in a wide variety of fields. With respect to their dynamics in systems far from equilibrium, i.e.~short wavelength or mesoscopic regimes, semiclassical methods are ideally suited to furnish excellent quantitative approximations and physical pictures of the essential physics. Nevertheless, they have rarely been applied completely to wave packet dynamics for systems with more than a couple of degrees of freedom. The dual problems of performing complex trajectory saddle point searches with many parameters, and of determining which ones must be kept due to Stokes phenomena, present formidable barriers to the development of practical techniques for implementing the theory fully. In this paper, a technique similar in spirit to the tangent space decomposition method for calculating Lyapunov exponents~\cite{Gaspard98,Ott02} and the anisotropic method~\cite{Sala16} is developed to identify the minimal search space. Beyond the minimal space, the method relies only on identifying real transport pathways and a Newton-Raphson scheme introduced earlier~\cite{Pal16}. Any system, independent of its number of degrees of freedom, with a small number of dominant expansion directions can be treated. With these techniques, it has been demonstrated that thousands of saddles can be located in individual systems possessing up to $8$ degrees of freedom. That particular Bose-Hubbard model case requires a minimal $3$-dimensional parameter search space; the high symmetry, low multiplicity saddles require even smaller dimensional searches. On the other hand, a straightforward search without the stability analysis would have required a $16$-dimensional parameter search space. That would have rendered the saddle search effectively impossible to carry out. Up to the propagation times considered, the set of saddles identified is essentially complete, which can be partly confirmed by comparing a Monte Carlo method applied to the classical transport with the diagonal approximation of the semiclassical quantities. Furthermore, with a complete knowledge of the saddles, it was shown in~\cite{Tomsovic18b} that a semiclassical theory could capture post-Ehrenfest interference phenomena in the context of the Bose-Hubbard model in a ring configuration extremely accurately. The existence of symmetries in the system dynamics imposes a significant structure on the locations and multiplicities of symmetry related saddle trajectories, depending on the choice of system state being propagated. Understanding the fundamental domains, which follows from the group operations involved and the eigenvectors of ${\bf A}^{-1}_\alpha (\tau)$ multiplied by ${\bf M}_\tau^{-1}$, allows one to reduce the search space further. Symmetry also has a strong influence on the dynamics. It turns out that high symmetry, low multiplicity saddles dominate the earliest return dynamics, later giving way to dominance by low symmetry, high multiplicity saddles. The high multiplicity saddles generate constructive interference, and enhance long time averages of quantities such as the autocorrelation function.
For the far-out-of-equilibrium dynamics of a many-body system such as the Bose-Hubbard model discussed, symmetry related saddles necessarily lead to constructive interference, and any enhancement factor is revealed over time, not immediately, depending on the time scales at which the various saddle multiplicities are dominant. An example was shown of the time dependence of the enhancement factors. There the transition of the enhancement factor from $1\rightarrow 3 \rightarrow 6$ occurred over just a few Ehrenfest times, but for other cases, such as the $8$-site case mentioned, and in other dynamical regimes, it can take much longer for the full enhancement to settle into the dynamics. All of this information is captured in a full semiclassical theory incorporating quantum interference through the properties of the saddles. The dynamical analysis relying on the spectrum of ${\bf A}^{-1}_\alpha (\tau)$ and the associated eigenvectors of initial conditions found with the application of ${\bf M}_\tau^{-1}$ can be turned into a powerful and quick way to investigate the various dynamical regimes and possibilities of multidimensional dynamical systems, especially those depending on tunable parameters. As illustrated with the Bose-Hubbard model, high degrees of instability or abrupt dynamical transitions are easily identified. Commonalities also appear evident, such as the similarities seen on varying site numbers or the scaling with particle numbers. The eigenvectors also must reflect the symmetries of the system, but there may be multiple ways of accommodating them in high dimensional spaces. Any transitions between such regimes are associated with spectral crossings that indicate where they occur in the parameter space. Building on the work here, there are a large number of directions in which future research could go. There are many other kinds of quantities of interest that can be pursued. There are other classes of states, such as Fock states, that would require modifying the implementation techniques. In addition, entanglement measures, out-of-time-ordered correlators, and questions regarding thermalization and relaxation in many-body systems would be of interest as well. There are also spectroscopic problems that could be addressed, such as found in molecular spectroscopy, femtosecond chemistry, or attosecond physics. The beginning would be to identify the equivalent Lagrangian manifolds associated with the quantities of interest, and to adapt the search methods to the relevant manifolds.
\begin{appendix} \section{Associating coherent state and wave packet parameter sets} \label{cswp} First consider the usual quantum harmonic oscillator in $1$ degree of freedom, \begin{equation} H(\hat p,\hat x) = \frac{\hat p^2}{2m} + \frac{m\omega^2}{2} \hat x^2 \end{equation} with the creation operator \begin{equation} \hat a^\dagger = \frac{1}{\sqrt{2\hbar}}\left( \sqrt{m\omega}\hat x - i \frac{\hat p}{\sqrt{m\omega}} \right) \end{equation} The projection of a coherent state into a configuration space representation follows as \begin{eqnarray} \label{csx} \langle x | z \rangle &=& \langle x| \exp \left(-\frac{\left| z \right|^2}{2} + z \hat a^\dagger \right)| 0\rangle \nonumber \\ &=& \exp \left(-\frac{\left| z \right|^2}{2}\right) \sum_{n=0}^\infty \frac{ z^n}{\sqrt{n!}} \langle x | n \rangle \nonumber \\ &=& \left(\frac{m\omega}{\pi\hbar}\right)^{\frac{1}{4}}{\rm e}^{-\frac{\left| z \right|^2}{2} -\frac{m\omega x^2}{2\hbar}} \sum_{n=0}^\infty \frac{ z^n}{\sqrt{2^n}n!} H_n\left(\sqrt{\frac{m\omega}{\hbar}}x\right)\nonumber \\ &=& \left(\frac{m\omega}{\pi\hbar}\right)^{\frac{1}{4}} \exp \left(-\frac{\left| z \right|^2}{2} -\frac{z^2}{2} -\frac{m\omega x^2}{2\hbar} +\sqrt{\frac{2m\omega}{\hbar}} xz\right) \nonumber \\ \end{eqnarray} where the $H_n(x)$ are Hermite polynomials, and the last line follows from an application of the definition of their generating function. Therefore, the application of the exponential of the creation operator is just a configuration space shift of the ground state multiplied by a global phase. The following association of parameters puts the wave packet and the position representation of the coherent state into the same form. Let \begin{equation} \label{assoc} m\omega = b_\alpha \ {\rm and }\ z=\sqrt{\frac{b_\alpha}{2\hbar}}\left(q_\alpha +i \frac{p_\alpha}{b_\alpha}\right) \end{equation} then the configuration space representation of the coherent state is \begin{eqnarray} \label{csx2} \langle x | z \rangle &=& \exp\left( -\frac{b_\alpha}{2\hbar} (x-q_\alpha)^2 + \frac{i}{\hbar}p_\alpha(x-q_\alpha) +\frac{i}{2\hbar}p_\alpha q_\alpha \right) \nonumber \\ && \left(\frac{b_\alpha}{\pi\hbar}\right)^{\frac{1}{4}} \end{eqnarray} which is to be compared to Eq.~(\ref{wavepacket}) reduced to its $1$ degree of freedom form, \begin{equation} \phi_\alpha(x) = \left(\frac{b_\alpha}{\pi\hbar}\right)^{\frac{1}{4}} \exp\left[ - \frac{b_\alpha}{2\hbar}\left(x - q_\alpha\right)^2+\frac{i}{\hbar} p_\alpha \left( x - q_\alpha \right)\right] \end{equation} With the parameter association of Eq.~(\ref{assoc}), the only distinction between the two states is the phase convention. The wave packet form does not include the phase $\exp[ip_\alpha q_\alpha/(2\hbar)]$, which is easily taken into account. Next consider an $N$-degree-of-freedom set of coupled harmonic oscillators, \begin{equation} H({\bf \hat p},{\bf \hat x}) = \frac{{\bf \hat p} \cdot {\bf \hat p} }{2m} + \frac{m}{2} {\bf \hat x} \cdot {\bf A} \cdot {\bf \hat x} \end{equation} As throughout the entire paper, vectors multiplying from the right are implicitly column vectors and vectors multiplying from the left are row vectors. 
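As an illustration only (not part of the original derivation), the association of Eq.~(\ref{assoc}) can be verified numerically; the following Python sketch assumes $\hbar=1$ and illustrative values for $b_\alpha$, $q_\alpha$, $p_\alpha$, and compares the closed form of Eq.~(\ref{csx}) with the wave packet multiplied by the global phase $\exp[ip_\alpha q_\alpha/(2\hbar)]$: \begin{verbatim}
import numpy as np

hbar = 1.0
b, q, p = 1.3, 0.7, -0.4                 # illustrative b_alpha, q_alpha, p_alpha
z = np.sqrt(b / (2*hbar)) * (q + 1j*p/b)  # Eq. (assoc), with m*omega = b

x = np.linspace(-8.0, 8.0, 2001)
norm = (b / (np.pi*hbar))**0.25

# last line of Eq. (csx), closed-form coherent state
cs = norm * np.exp(-abs(z)**2/2 - z**2/2 - b*x**2/(2*hbar)
                   + np.sqrt(2*b/hbar)*x*z)
# 1-dof wave packet times the global phase exp(i p q / (2 hbar))
wp = norm * np.exp(-b*(x - q)**2/(2*hbar) + 1j*p*(x - q)/hbar) \
          * np.exp(1j*p*q/(2*hbar))

print(np.max(np.abs(cs - wp)))  # agreement at machine precision
\end{verbatim}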
There is an orthogonal transformation to normal coordinates for the column vectors, \begin{equation} {\bf \hat x^\prime} = {\bf O}\cdot {\bf \hat x} \ \ {\rm and}\ \ {\bf \hat p^\prime} = {\bf O}\cdot {\bf \hat p} \end{equation} such that \begin{equation} H({\bf \hat p^\prime},{\bf \hat x^\prime}) = \frac{{\bf \hat p^\prime} \cdot {\bf \hat p^\prime} }{2m} + \frac{m}{2} {\bf \hat x^\prime} \cdot {\bf \Omega^2} \cdot {\bf \hat x^\prime} \end{equation} where ${\bf \Omega}$ is the diagonal matrix \begin{equation} {\bf \Omega} = \left( \begin{matrix} \omega_1 & 0 & 0 & \\ 0 & \omega_2 & 0 & \hdots\\ 0 & 0 & \omega_3 & \\ & \vdots & & \ddots\\ \end{matrix}\right) \end{equation} and \begin{equation} {\bf \Omega^2} = {\bf O}\cdot {\bf A} \cdot {\bf O}^T \end{equation} The ground state in normal coordinates is \begin{equation} \langle {\bf \hat x^\prime} |{\bf 0}\rangle = \left(\frac{m^N {\rm Det}(\Omega)}{\pi^N\hbar^N}\right)^{1/4}\exp \left( -\frac{m}{2\hbar} {\bf \hat x^\prime} \cdot {\bf \Omega} \cdot {\bf \hat x^\prime} \right) \end{equation} As ${\bf A}$ is symmetric and positive definite, it can be decomposed as ${\bf A = B^T \cdot B}$ (with ${\bf B = \Omega\cdot O}$). Thus, the ground state can also be written in the original coordinates as \begin{equation} \langle {\bf \hat x} |{\bf 0}\rangle = \left(\frac{m^N {\rm Det}(\Omega)}{\pi^N\hbar^N}\right)^{1/4}\exp \left( -\frac{m}{2\hbar} {\bf \hat x}\cdot {\bf B^T} \cdot {\bf \Omega}^{-1} \cdot {\bf B} \cdot {\bf \hat x} \right) \end{equation} From this equation, it is already clear that \begin{equation} \label{assocn} {\bf b}_\alpha = m {\bf B^T} \cdot {\bf \Omega}^{-1} \cdot {\bf B} \end{equation} since the action of the exponential of the creation operators is a displacement of the ground state, not a deformation. Assume the coherent state is defined in terms of the creation operators associated with the original coordinate system. Further, let us first project it onto the normal coordinates. Thus, the initial quantity to evaluate is \begin{equation} \langle {\bf \hat x^\prime} | {\bf z} \rangle = \langle {\bf \hat x^\prime} | \exp \left(-\frac{{\bf z}\cdot {\bf z}^\dagger}{2} + {\bf z}\cdot {\bf \hat a^\dagger} \right)| {\bf 0} \rangle \end{equation} Transforming the creation operators to those associated with the normal coordinates leads to the identifications, $({\bf \hat a^\dagger})^\prime = {\bf O}\cdot {\bf \hat a^\dagger}$ for the column vector and ${\bf z}^\prime = {\bf z} \cdot {\bf O}^T$ for the row vector. At this point, the action of the exponential of the $({\bf \hat a^\dagger})^\prime$ is just $N$ independent translations. This gives, \begin{equation} \label{assoc1} {\bf z}^\prime = \sqrt{\frac{m {\bf \Omega}}{2\hbar}}\cdot \left({\bf q}^\prime_{\alpha} +i (m{\bf \Omega})^{-1}\cdot{\bf p^\prime}_{\alpha}\right) \end{equation} or in component form \begin{equation} \label{assoc3} z_j^\prime = \sqrt{\frac{m \omega_j}{2\hbar}} \left(q^\prime_{\alpha,j} +i \frac{p_{\alpha,j}^\prime}{m\omega_j}\right) \end{equation} and that implies for the column vectors of the translations \begin{eqnarray} {\bf q}_\alpha &=& {\bf O^T} \cdot {\bf q^\prime}_\alpha \nonumber \\ {\bf p}_\alpha &=& {\bf O^T} \cdot {\bf p^\prime}_\alpha \end{eqnarray} which along with Eq.~(\ref{assocn}) associates the multidimensional coherent state parameters with the wave packet parameters, except for a global phase convention, which is not of great interest. 
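The multidimensional association of Eq.~(\ref{assocn}) can likewise be checked numerically. A minimal sketch follows, with an assumed symmetric, positive definite coupling matrix ${\bf A}$ and $m=1$; it also verifies the equivalent statement ${\bf b}_\alpha = m\,{\bf O}^T{\bf \Omega}{\bf O}$, i.e.~$m$ times the positive square root of ${\bf A}$: \begin{verbatim}
import numpy as np

m = 1.0
# assumed symmetric, positive definite coupling matrix A (3 dof)
A = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.5, 0.2],
              [0.0, 0.2, 1.0]])

w2, V = np.linalg.eigh(A)    # eigenvalues omega_j^2, eigenvector columns
O = V.T                      # orthogonal: Omega^2 = O A O^T
Omega = np.diag(np.sqrt(w2))

B = Omega @ O                # A = B^T B
b_alpha = m * B.T @ np.linalg.inv(Omega) @ B   # Eq. (assocn)

# equivalently, b_alpha = m O^T Omega O, the positive square root of A
assert np.allclose(b_alpha, m * O.T @ Omega @ O)
print(b_alpha)
\end{verbatim}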
Returning to just $1$ degree of freedom, there is the possibility of introducing chirps in wave packets as mentioned in the text, which corresponds to the introduction of a complex width $b_\alpha$. It is well known that free particle motion introduces a complex width parameter as a function of time. The classical essence of this effect is the linear canonical transformation describing the shearing taking place in the dynamics. As a linear canonical transformation can be associated with an exact unitary transformation in quantum mechanics, a natural way to introduce this effect into a coherent state is to consider the ground state of the Hamiltonian, \begin{eqnarray} \label{px} H(\hat p,\hat x) &=& \frac{(\hat p+\epsilon m \omega \hat x)^2}{2m} + \frac{m\omega^2}{2} \hat x^2 \nonumber \\ &=& \frac{\hat p^2}{2m} +\frac{\epsilon \omega}{2}\left( \hat x \hat p + \hat p \hat x \right) + \frac{1+\epsilon^2}{2} m\omega^2 \hat x^2 \nonumber \\ \end{eqnarray} The ground state energy remains $E_0=\hbar \omega/2$ and the eigenfunction a Gaussian, but the width becomes complex and it turns out that $b_\alpha = m\omega(1+i\epsilon)$. A phase convention can be absorbed into the expression for the normalization. A coherent state can be defined in exactly the same way as in Eq.~(\ref{csx}) with suitably transformed annihilation and creation operators possessing the same properties. The only change is the complexification of $b_\alpha$, which shows up in that equation through the replacement of $m\omega$ by $m \omega (1+i\epsilon)$, with the exception of the normalization factor, which is unchanged. Thus, $b_\alpha$ is replaced with $(b_\alpha +b_\alpha^*)/2$ in the normalization. The form of Eq.~(\ref{csx2}) emerges again, only with a complex $b_\alpha$, except that the global phase factor is more complicated. \end{appendix} \section*{Acknowledgments} The author gratefully acknowledges a very helpful critical reading of an early draft of the manuscript by D.~Ullmo and important discussions with D.~Ullmo, P.~Schlagheck, J.~D.~Urbina, K.~Richter, and L.~Kocia. The author also gratefully acknowledges support from the Vielberth Foundation and the UR International Presidential Visiting Fellowship 2016 during two extended stays at the Physics Department of Regensburg University.
\section{Conclusions and Future Work} \label{sec:conclusion} In this work we proposed a novel method using self-organizing maps for multi-label stream classification in scenarios with infinitely delayed labels. Experiments on synthetic and real datasets showed that our proposal was highly competitive in different stationary and concept drift scenarios in comparison with batch lower bounds and incremental upper bounds. Our method takes advantage of the SOM's topological neighborhood behavior, forcing neurons to move in accordance with each other in early stages of the training. This better exploitation of the search space, combined with our proposed updating and classification procedure, led to generally better results in comparison to MINAS-BR, up to now the only method which also considers infinitely delayed~labels. Our method also has the advantage of having only two parameters, the learning rate for updating in the online phase, and the neuron grid dimension. However, we obtained very competitive results with a fixed learning rate. Our proposal can also be easily updated to eliminate the $d$ parameter by using dynamic versions of the SOM such as \cite{Alahakoon2000} and \cite{Dittenbach2000}. As future work we will use dynamic self-organizing maps, and also extend our method to deal with concept evolution scenarios, where new classes can emerge over the stream. We also plan to investigate how to deal with structured streams, where classes are organized in topologies such as trees or graphs. \section{Experiments and Discussion} \label{sec:exp} \begin{figure*}[htbp] \centering \includegraphics[scale=0.76]{Figures/heatmap2.pdf} \caption{Methods' ranking and results of the Nemenyi statistical test. Both axes show all investigated methods.} \label{fig:heatmap} \end{figure*} Due to space restrictions, Figure~\ref{fig:results} shows the best SOM, upper bound and lower bound results, and MINAS-BR. We show multi-label macro f-measures (y-axis) across the entire stream over 50 evaluation windows (x-axis). The acronym Ea differentiates upper bound ensembles from the lower bound ones. SOM-$d$ refers to our proposal, with $d$ the dimension of the neuron grid (we used a hexagonal 2-$d$ grid in all experiments). We varied $d$ from 1 to 10 (1 to 100 neurons), executing each configuration 10 times in each dataset. We show the average results in each evaluation window, considering the SOM-$d$ with the highest averages over the 50 evaluation windows. All other methods are deterministic, and were executed once. The exceptions were BP-MLL and MINAS-BR, which were executed 10~times. All 42 methods were executed with their default parameter~values. The results for the MOA generated datasets (Figure~\ref{fig:results}(a-d)) show that the performance of our proposal increased over the stream compared to the lower bounds and MINAS-BR, resulting in the best macro f-measures by the end of the stream. In MOA-Spher-5C-2A and MOA-Spher-2C-2A, we obtained very competitive results compared with the upper bounds. Since the MOA datasets are spherical, a small grid was enough to provide a good approximation of the feature space. In the datasets with two features, the 2-$d$ grid could obtain a more faithful representation of the input instances. This better maintained the topological ordering of the maps, {\it i.e.}, the spatial location of a neuron corresponded better to a particular feature from the input space. The clusters are well-behaved, and in some datasets only one neuron was enough to model a class. 
These characteristics, combined with our proposed updating and kNN strategy, resulted in better adaptation to concept drift when compared to the lower bounds and~MINAS-BR. In the non-spherical datasets generated with the Read et al. generator (Figure~\ref{fig:results}(e-g)), our proposal performed similarly to the lower bounds and MINAS-BR. We obtained a slightly better performance in Mult-Non-Spher-HP, being also very competitive in Mult-Non-Spher-RT and Mult-Non-Spher-WF. In contrast to the spherical datasets, larger neuron grids were now necessary to better represent the input feature vectors. Although Mult-Spher-RB (Figure~\ref{fig:results}(h)) is spherical, its high number of features (80) combined with its high number of classes (22) harmed the performance of the SOMs. All methods, including the upper bounds, had generally worse performances in the Read et al. generated datasets compared to the MOA generated ones. The former have more classes, which strongly overlap, making the task much more difficult. All methods had difficulties in adapting to concept~drift. Similar to the results in the Read et al. generated datasets, the results in the real datasets were generally worse than in the well-behaved MOA generated ones. Since there is no concept drift in these datasets, the batch algorithms obtained performances competitive with the upper bounds in the majority of the datasets. Our method was very competitive, being able to approach the upper bounds in two datasets. Figure~\ref{fig:heatmap} presents a heat map comparing all 43 methods pairwise according to the post-hoc Nemenyi test~\cite{Demsar2006}, which was applied after the Friedman test returned a p-value = 6.181E-08. The figure also ranks the methods (x-axis left to right / y-axis bottom to top) according to their average macro f-measures over all datasets and evaluation windows. Our proposal was highly competitive with the baselines overall, being statistically equivalent to them. We obtained the fifth best performance, behind only the upper bounds EaCC, EaBR, CC and~BR. Very few statistically significant differences were detected, mainly between the top five methods of the ranking (including SOM) and the worst ranked ones, such as the BR transformation with SVM as base classifier. These differences are represented in Figure~\ref{fig:heatmap} by the black colored rectangles (p-values $<$ 0.05). \section{Introduction} Multi-label Classification (MLC) is a machine learning task which associates multiple labels to an instance \cite{Tsoumakas2010}. This is a reality in many real-world applications such as bioinformatics, images, documents, movies, and music classification. Several works have addressed MLC in batch scenarios \cite{read2011classifier,nam2014large,Pliakos2018,Cerri2019}. They usually assume a static probability distribution of data, and training instances being sufficiently representative of the problem. The decision model is built once and does not~evolve. Recent works in MLC bring a different scenario, where data flows continuously, at high speed, and with a non-stationary distribution. This is known as data streams (DS)~\cite{gama2007learning}, bringing new challenges to MLC. Among them is concept drift, where learned concepts evolve over time, requiring constant model updating. Also, given the high velocity and volume of data, storing and scanning it several times is impractical. 
Many works have been developed to address such issues~\cite{read2010efficient,read2012scalable,song2014new,trajdos2015multi}. The first works in MLC for DS have addressed concept drift proposing techniques to update the model as new data arrives using supervised learning~\cite{read2012scalable,shi2014efficient,osojnik2017multi}. However, they assume that true labels of instances are immediately available after classification, which is an unrealistic assumption in several scenarios. Few works have addressed infinitely delayed labels in MLC for DS \cite{wang2012mining,zhu2018multi,CostaJunior2019}. They usually use k-means clustering to detect the emergence of new classes, updating the models in an unsupervised fashion, or use other strategies such as active learning, which assume that some labels will become available at some point. Also, these works are more focused on identifying the appearance of novel classes than on concept drift. This work proposes a different strategy to avoid the previously mentioned drawbacks of the existing methods. Instead of using k-means, we rely on self-organizing maps (SOMs). The neighborhood characteristic of the SOMs better explores the search space, forcing neurons to move in accordance with each other, creating a topological ordering. As a result, the spatial location of a neuron corresponds to a particular domain or feature of the input instances. With this, we do not need to worry about the number of clusters, since the set of synaptic weights provides a good approximation of the input space~\cite{Haykin2009}. Our proposal detects concept drift by adjusting the weight vectors of the neurons which classify arriving instances. We also decide the number of predicted labels for an instance based on an adaptive label cardinality, a Bayes rule that considers the outputs of each neuron, and online adapting probabilities and conditional probabilities of the classes in the~stream. The method is totally unsupervised during the online arrival of instances. This paper is organized as follows. Section~\ref{sec:relWork} discusses the main related works. Section~\ref{sec:method} presents our proposal, and Section~\ref{sec:methodology} presents the experimental methodology. The results are discussed in Section~\ref{sec:exp}. Finally, Section~\ref{sec:conclusion} presents our conclusions and future research directions. \section{Methodology} \label{sec:methodology} Table~\ref{tab:data} presents our datasets, with number of numeric attributes ($A$), classes $(Y)$, and label cardinalities ($z$) for the initial labeled set. We generated four spherical ones using the MOA framework~\cite{bifet2010moa}. The classes are represented by possibly overlapping clusters, and any overlap of clusters is a multi-label assignment. We also used the \cite{read2012scalable} proposal to generate one spherical dataset and three non-spherical~ones. 
\begin{table}[htbp] \scriptsize \centering \setlength{\tabcolsep}{7pt} \caption{Characteristics of the used datasets.} \begin{tabular}{l l l l l l} \toprule {Name} & $|DS|$ & $|A|$ & $|Y|$ & $z$ & $sd$\\ \midrule Mult-Non-Spher-WF & 100,000 & 21 & 7 & 2.37 & --\\ Mult-Non-Spher-RT & 99,586 & 30 & 8 & 2.54 & --\\ Mult-Non-Spher-HP & 94,417 & 10 & 5 & 1.68 & --\\ Mult-Spher-RB & 99,911 & 80 & 22 & 2.24 & --\\ MOA-Spher-2C-2A & 96,907 & 2 & 2 & 1.06 & 1,000\\ MOA-Spher-5C-2A & 95,529 & 2 & 5 & 1.54 & 1,500\\ MOA-Spher-5C-3A & 94,667 & 3 & 5 & 1.37 & 1,500\\ MOA-Spher-3C-2A & 93,345 & 2 & 3 & 1.76 & 2,000\\ Mediamill & 41,442 & 120 & 15 & 3.78 & --\\ Nus-wide & 162,598 & 128 & 7 & 1.71 & --\\ Scene & 1,642 & 294 & 4 & 1.07 & --\\ Yeast & 2,364 & 103 & 9 & 4.15 & --\\ \bottomrule \end{tabular} \label{tab:data} \end{table} The MOA datasets were generated with a radial basis function, where clusters are smoothly displaced after $sd$ instances in the stream. The dataset MOA-Spher-2C-2A has the additional characteristic that its clusters are simultaneously rotated around the same axis, moving close and away from each other. We used four generators with the multi-label generator: wave-form (WF), random tree (RT), radial basis function (RB), and hyper plane (HP). We varied their label relationships, which can influence label cardinalities. Label relationships are closely related to label skew (where a label or a set of labels is dominant in data). Thus, $p(y_k|y_j)$ is high if $p(y_k)$ is high, and low when $p(y_k)$ is low. We divided the stream $DS$ into four sub-streams. To insert concept drift, $10\%$ of the $p(y_k|y_j)$ values in the second and third sub-streams receive normally distributed random numbers with $\mu = p(y_k)$ and $\sigma = 1.0$. A value of $30\%$ is used in the fourth sub-stream. In all synthetic datasets, the initial 10\% of the stream is used for training ($D_{tr}$). A detailed description of how the pairwise relationships are generated is given by~\cite{read2012scalable}. \begin{figure*}[htpb] \center \subfigure[refa][MOA-Spher-5C-2A]{\includegraphics[scale=0.5]{Figures/MOA-5C-7C-2D.pdf}} \hspace{0.3em} \subfigure[refb][MOA-Spher-5C-3A]{\includegraphics[scale=0.5]{Figures/MOA-5C-7C-3D.pdf}} \hspace{0.3em} \subfigure[refc][MOA-Spher-2C-2A]{\includegraphics[scale=0.5]{Figures/4CRE-V2.pdf}} \hspace{0.3em} \subfigure[refd][MOA-Spher-3C-2A]{\includegraphics[scale=0.5]{Figures/MOA-3C-5C-2D.pdf}} \hspace{0.3em} \subfigure[ref1][Mult-Non-Spher-HP]{\includegraphics[scale=0.5]{Figures/SynHyperPlane.pdf}} \hspace{0.3em} \subfigure[ref2][Mult-Non-Spher-RT]{\includegraphics[scale=0.5]{Figures/SynRTG.pdf}} \hspace{0.3em} \subfigure[ref2][Mult-Non-Spher-WF]{\includegraphics[scale=0.5]{Figures/SynWaveForm2.pdf}} \hspace{0.3em} \subfigure[ref2][Mult-Spher-RB]{\includegraphics[scale=0.5]{Figures/SynRBF2.pdf}} \hspace{0.3em} \subfigure[ref1][Mediamill]{\includegraphics[scale=0.5]{Figures/mediamill.pdf}} \hspace{0.3em} \subfigure[ref2][Scene]{\includegraphics[scale=0.5]{Figures/scene2.pdf}} \hspace{0.3em} \subfigure[ref2][Yeast]{\includegraphics[scale=0.5]{Figures/yeast2.pdf}} \hspace{0.3em} \subfigure[ref2][Nus-wide]{\includegraphics[scale=0.5]{Figures/nus-wide.pdf}} \caption{Best results for all investigated datasets. Macro f-measure values over 50 evaluation windows.} \label{fig:results} \end{figure*} The four real datasets are from the Mulan website\footnote{http://mulan.sourceforge.net/datasets-mlc.html}. 
They are originally stationary, and were pre-processed to remove labels with fewer than 5\% of positive instances. The training set was constructed with 10\% of the data, trying to keep the same number of instances for each class. We used 42 multi-label methods as baselines, with 31 being batch offline from Mulan~\cite{Mulan2011}, and 10 being online incremental from the MOA framework~\cite{bifet2010moa}. The Mulan methods are considered lower bounds, since they are trained with the offline dataset and are never updated. The MOA methods are considered upper bounds, since they are always incrementally updated using the true labels of the arriving instances. We also included MINAS-BR~\cite{CostaJunior2019}, up to now the only multi-label method in the literature which truly considers infinitely delayed labels. We used problem transformations as lower bounds: Binary Relevance (BR), Label Powerset (LP), Random k-Labelsets (Rakel), Classifier Chains (CC), Pruned Sets (PS), Ensemble of PS (EPS), Ensemble of CC (ECC), and Hierarchy of Multi-label Classifier (Homer), all with J48, SVM, and KNN as base classifiers. We also used algorithm adaptations: Multi-label KNN (ML-KNN), Multi-label Instance-Based Learning by Logistic Regression (IBLR-ML and IBLR-ML+), and Backpropagation for Multi-label Learning~(BP-MLL). BR, CC and PS with their ensembles were also used as upper bounds. We also used Multilabel Hoeffding Tree with PS (MLHT) and Incremental Structured Output Prediction Tree (ISOPTree), with their ensembles. They all use incremental Hoeffding Trees as base classifiers. \section{Our Proposal} \label{sec:method} Our proposal is divided into two phases: \textit{i)} offline, using a labeled dataset to train models, and \textit{ii)} online, classifying arriving instances in a completely unsupervised~fashion. In our offline phase, $n$ SOM maps with $d \times d$ neurons are trained to represent each of the $n$ known classes. Each training instance is formed by a tuple $({\bf x}_i,Y_i)$, with ${\bf x}_i$ representing the feature vector of instance $i$, and $Y_i$ its corresponding set of classes. After calculating the training set label cardinality, we compute two $n \times n$ matrices $P$ and $T$. $T$ has the total number of instances classified in a class $y_j$ and in a pair of classes $(y_j,y_n)$. Matrix $T$ is used to compute $P$, which has the relationships between classes. $P$ stores class probabilities $p(y_j)$ and class conditional probabilities $p(y_j|y_n)$ for each one of the $n$ known classes. Positions $T[j,n]$ and $P[j,n]$ have, respectively, the number of instances classified in the pair $(y_j,y_n)$, and the conditional probabilities $p(y_j|y_n)$. Similarly, $T[j,j]$ and $P[j,j]$ have, respectively, the number of instances classified in $y_j$, and the probability $p(y_j)$. These matrices are used in the online phase for classification of instances in the stream. To compute $p(y_j|y_n)$, we use the conditional distribution between $y_j$ and $y_n$ based on Bayes' theorem: \begin{equation} p(y_j|y_n) = \frac{p(y_j,y_n)}{p(y_n)} = \frac{f(y_j,y_n)}{f(y_n)} \label{eq:condProb} \end{equation} In Equation~\ref{eq:condProb}, $f(y_n)$ and $f(y_j,y_n)$ are obtained from the labeled dataset (matrix $T$), with $f(y_n)$ the number of instances classified in class $y_n$, and $f(y_j,y_n)$ the number of instances classified in both classes $y_j$ and $y_n$ (a sketch of this computation is given below). The next step constructs $n$ subsets $X_{y_j}$, each one with the instances classified in class $y_j$. 
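As an aside, the following minimal Python sketch shows how the matrices $T$ and $P$ of Equation~\ref{eq:condProb} could be assembled from a toy labeled set; the helper name \textit{class\_stats} and the toy data are ours, for illustration only, and are not part of the method's actual implementation: \begin{verbatim}
import numpy as np

def class_stats(Y, n):
    """Build totals matrix T and probability matrix P from a labeled
    set, following Eq. (condProb). Y is a list of label sets."""
    T = np.zeros((n, n))
    for labels in Y:
        for j in labels:
            T[j, j] += 1                      # f(y_j)
            for k in labels:
                if k != j:
                    T[j, k] += 1              # f(y_j, y_k)
    P = np.zeros((n, n))
    N = len(Y)
    for j in range(n):
        P[j, j] = T[j, j] / N                 # p(y_j)
        for k in range(n):
            if k != j and T[k, k] > 0:
                P[j, k] = T[j, k] / T[k, k]   # p(y_j | y_k)
    return T, P

# toy labeled stream with n = 3 classes (assumed data)
Y_train = [{0, 1}, {0}, {1, 2}, {0, 1}, {2}]
T, P = class_stats(Y_train, n=3)
print(P)
\end{verbatim}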
Returning to the subsets $X_{y_j}$, we build a SOM map for each of them, applying the well-known batch implementation of the Kohonen maps~\cite{Kohonen2013}. It is recommended for practical applications, since it does not require a learning rate, and converges faster and more safely than the stepwise recursive version~\cite{Kohonen2013}. The batch algorithm first compares each of the ${\bf x}_i$ vectors to all $d \times d$ neurons of the map, which had their weight vectors ${\bf m}$ randomly initialized. Then, a copy of ${\bf x}_i$ is stored into a sub-list associated with its best matching neuron $n_b$ according to the Euclidean distance: \begin{equation} n_b = \underset{b}\argmin{||{\bf x}_i - {\bf m}_b||} \label{eq:bestUnit} \end{equation} Given $N_b$ as the neighborhood set of a neuron $n_b$, we compute a new vector ${\bf m}_b$ as the mean of all ${\bf x}_i$ that have been copied into the union of all sub-lists in $N_b$. This is performed for every neuron of the SOM grid. Old values of ${\bf m}_b$ are replaced by their respective means. This has the advantage of allowing the concurrent computation of the means and updating over all neurons. This cycle is repeated, cleaning the sub-lists of all neurons and redistributing the input vectors to their best matching neurons. Training stops when no changes are detected in the weight vectors in successive iterations. To avoid empty neurons, or neurons with very few mapped instances, we discard the ones with fewer than four mapped~instances. The next step associates an average output and a threshold value to each neuron of the SOM maps. The average output is obtained by mapping $X_{y_j}$ to $map_{y_j}$. For each neuron $n_b$, we get the $X_b$ instances mapped to it, and then calculate the average of the discriminant functions $averOut_b$ over these instances: \begin{equation} averOut_b = \sum_{i \in X_b}exp(-||{\bf x}_i - {\bf m}_b||) / |X_b| \label{eq:discriminantFunction} \end{equation} Having the average output of a neuron $n_b$, we compute its threshold value, which is used in the online phase to decide if a new instance is classified in the class associated with the map containing $n_b$. For this, we consider that an instance mapped to $n_b$ was already classified in all the other classes, except class $y_j$ associated with $map_{y_j}$. This is calculated using the Bayes rule: \begin{equation} p(y_j|Y,X_b) = p(y_j) \times \prod_{y_k \in Y}p(y_k|y_j) \times p(X_b|y_j) \label{eq:condProbaThreshold} \end{equation} As already seen, we obtain $p(y_j)$ and $p(y_k|y_j)$ from data. Since $p(X_b|y_j)$ is the probability of observing $X_b$ given $y_j$, we have $p(X_b|y_j) = averOut_b$ (Equation~\ref{eq:discriminantFunction}). We thus avoid manually setting a threshold to decide when a neuron classifies an instance. If $p(y_k|y_j) = 0$ in matrix $P$, we do not consider this value in the calculation, otherwise we would have $p(y_j|Y,X_b) = 0$. The online (classification) phase is detailed in Algorithm~\ref{alg:online}. Given an incoming unlabeled instance ${\bf x}_i$, we map it to each $map_{y_j}$. For each map, we retrieve a sorted list with the closest neurons to ${\bf x}_i$ (Algorithm~\ref{alg:online}, step~\ref{alg:online:map}). We also store the index of the closest neuron for each map, together with its corresponding discriminant function output (Algorithm~\ref{alg:online}, steps~\ref{alg:online:win1} to \ref{alg:online:win2}). \begin{algorithm}[tb!] 
\scriptsize \DontPrintSemicolon \Input{Multi-label data stream ($DS$)\; \hspace{1.3cm}Label cardinality $z$\; \hspace{1.3cm}List with $n$ SOM maps ($MAP$)\; \hspace{1.3cm}Class probability matrix ($P$)\; \hspace{1.3cm}Class totals matrix ($T$)\; \hspace{1.3cm}Total number of instances ($N$)\; \hspace{1.3cm}Average neuron outputs ($ANO$)\; \hspace{1.3cm}Neurons thresholds ($NT$)\;} \Output{Updated $z$, $MAP$, $P$, $T$, $NT$, $N$, $ANO$\; } \Begin{ $kn \leftarrow minNumberNeuronsMAP(MAP)$\; \ForEach{instance ${\bf x}_i$ in $DS$}{ $Y_i \leftarrow \emptyset$ \tcp{new prediction} $N \leftarrow N+1$\; $NrSort \leftarrow ListOfLists[[L_1],\dots,[L_n]]$\; $WinNr \leftarrow Array[1,\dots,n]$\; $outputWinNr \leftarrow Array[1,\dots,n]$\; \For{$j=1$ to $size(MAP)$}{ $map_{y_j} \leftarrow MAP[j]$\; $NrSort[[j]] \leftarrow sortNeurons(map_{y_j},{\bf x}_i)$\;\label{alg:online:map} $WinNr[j] \leftarrow NrSort[[j]][1]$\;\label{alg:online:win1} ${\bf m} \leftarrow getWeightVector(WinNr[j])$\; $outputWinNr[j] \leftarrow exp(-||{\bf x}_i - {\bf m}||)$\;\label{alg:online:win2} } $WinClasses \leftarrow getKNN(NrSort,kn)$\;\label{algo:online:knn} $c \leftarrow WinClasses[1]$\;\label{algo:online:winClass1} $Y_i \leftarrow Y_i \cup y_{c}$\;\label{algo:online:winClass2} \For{$k=2$ to $\ceil*{z}$}{\label{alg:online:classification1} $c \leftarrow WinClasses[k]$\; $p(y_c) \leftarrow P[c,c]$\; $p({\bf x}_i|y_c) \leftarrow outputWinNr[c]$\; $p(y_d|y_c) \leftarrow 1$\; \For{$l=1$ to $k-1$}{ $d \leftarrow WinClasses[l]$\; \uIf{$y_d$ in $Y_i$}{ $p(y_d|y_c) \leftarrow p(y_d|y_c) \times P[d,c]$\; } } $p(y_c|y_d,{\bf x}_i) \leftarrow p(y_c) \times p(y_d|y_c) \times p({\bf x}_i|y_c)$\; $tr \leftarrow NT[[c]][WinNr[c]]$\; \uIf{$p(y_c|y_d,{\bf x}_i) \geq tr$}{ $Y_i \leftarrow Y_i \cup y_c$\; } }\label{alg:online:classification2} $classifyInstance({\bf x}_i,Y_i)$\; $MAP \leftarrow updateMAPs(MAP,Y_i)$\; $z \leftarrow updateLabelCardinality(z,Y_i,N)$\; $ANO \leftarrow updateAverNrOutputs(ANO,Y_i)$\; $T \leftarrow updateClassTotals(T,N,Y_i)$\; $P \leftarrow updateClassProbability(P,T,N,Y_i)$\; $NT \leftarrow updateThresholds(NT,P,ANO)$\; } \KwRet{$(MAP, P, T, NT, ANO, N, z)$}\; } \caption{Online phase.}\label{alg:online} \end{algorithm} \setlength{\textfloatsep}{10pt} Given sorted lists $NrSort$ with the closest neurons to instance ${\bf x}_i$, we use a $k$-nearest neighbors strategy to retrieve a sorted list $WinClasses$ with the indexes of the winner classes of ${\bf x}_i$. Figure~\ref{fig:sortClasses} illustrates this (Algorithm~\ref{alg:online}, step~\ref{algo:online:knn}) for three maps with a maximum of nine neurons (grid dimension = 3), in a problem with three classes ($y_1, y_2, y_3$). In our proposal we always set $k$ as the number of neurons of the smallest map in $MAP$. If $k$ is even, we subtract 1 to guarantee an odd~number. In Figure~\ref{fig:sortClasses}, instance ${\bf x}_i$ is represented by a star ($\bigstar$). The other symbols represent the weight vectors of the neurons from $map_{y_1} (\newmoon)$, $map_{y_2} (\blacksquare)$, and $map_{y_3} (\blacktriangle)$. To get the winner class, we retrieve the $k=5$ nearest neurons from ${\bf x}_i$. We see that three of the closest neurons are from $map_{y_1}$, two from $map_{y_3}$, and one from $map_{y_2}$. From majority voting, class $y_1$ is the winner class. Neurons from $map_{y_1}$ are not considered anymore. It is easy to see now that from the five other closest neurons, three are from $map_{y_3}$ and two from $map_{y_2}$. 
The list $WinClasses$ then has the indexes 1, 3, 2 in this order. Now, ${\bf x}_i$ is classified in its closest class (Algorithm~\ref{alg:online}, steps~\ref{algo:online:winClass1} and \ref{algo:online:winClass2}), and the label cardinality $z$ is used to decide in which other classes to classify ${\bf x}_i$. We again use the Bayes rule and the class probabilities and conditional probabilities. Given a set $\hat{Y}_i$ with the classes in which ${\bf x}_i$ was already classified, the probability of classifying ${\bf x}_i$ in a new class $y_c$ is given by: \begin{equation} p(y_c|\hat{Y}_i,{\bf x}_i) = p(y_c) \times \prod_{y_k \in \hat{Y}_i}p(y_k|y_c) \times p({\bf x}_i|y_c) \label{eq:condProbaClassify} \end{equation} \begin{figure}[t!] \centering \includegraphics[scale=1.1]{Figures/KNN.pdf} \caption{KNN procedure to select winning classes.} \label{fig:sortClasses} \end{figure} We again obtain $p(y_c)$ and $p(y_k|y_c)$ from data. The probability $p({\bf x}_i|y_c)$ is given by $exp(-||{\bf x}_i - {\bf m}_b||)$, which is the output of the best matching neuron $n_b$ from $map_{y_c}$. If $p(y_c|\hat{Y}_i,{\bf x}_i)$ is greater than or equal to the threshold associated with $n_b$ (Equation~\ref{eq:condProbaThreshold}), ${\bf x}_i$ is classified in $y_c$. This whole procedure is shown in Algorithm~\ref{alg:online}, steps~\ref{alg:online:classification1} to~\ref{alg:online:classification2}. After classifying the $N$th instance, we update the maps of the $|Y_N|$ classes where ${\bf x}_N$ was classified. For each map, the weight vector of the best matching unit to ${\bf x}_N$ is updated with a fixed learning rate $\eta = 0.05$: \begin{equation} {\bf m}_{b_{N}} = {\bf m}_{b_{N-1}} + \eta \times ({\bf x}_N - {\bf m}_{b_{N-1}}) \label{eq:updateWeight} \end{equation} The label cardinality $z_{N}$ of the stream is also updated after classifying the $N$th instance (Equation~\ref{eq:updateLC}). Recall that $N$ is the total number of instances processed so far in the stream. \begin{equation} z_N = \frac{1}{N}\sum_{i=1}^N |Y_i| = \frac{1}{N}((N-1) \times z_{N-1} + |Y_N|) \label{eq:updateLC} \end{equation} The average output $averOut_b$ of the best matching neuron ${\bf m}_b$ in each map corresponding to the classes in $Y_N$ is also updated: \begin{equation} averOut_{b_N} = averOut_{b_{N-1}} + exp(-||{\bf x}_N - {\bf m}_{b_N}||) \label{eq:uptadeANO} \end{equation} We then update matrix $T$, and use it to update $P$ according to Equation~\ref{eq:condProb}. Finally, we update the threshold values for each neuron using Equation~\ref{eq:condProbaThreshold}. \section{Related Work} \label{sec:relWork} \cite{osojnik2017multi} adapted a multi-target regression method, but it poorly adapts to concept drifts and has a high computational complexity. \cite{sousa2018multi} proposed the same strategy with two problem transformation methods, ML-AMR and ML-RR, but also with high computational complexity. To deal with computational complexity and high memory consumption, \cite{ahmadi2018label} proposed a label compression method combining dependent labels into single pseudo labels. A classifier is then trained for each one of~them. \cite{Nguyen2019b} proposed an incremental weighted clustering with a decay mechanism to detect changes in data, decreasing the weights associated with each instance over time, focusing more on newly arrived instances. For the classification, only clusters with weights greater than a threshold are used to assign labels to instances. 
The method also uses the Hoeffding inequality and the label cardinality to decide the number of labels to be predicted for an instance. However, the clusters are updated considering that the ground truth labels arrive with the instances in the stream. To our knowledge, \cite{wang2012mining} was the first work to deal with delayed labels. It is a label-based ensemble with Active Learning to select the most representative instances to continually refine class boundaries. The authors argue that, by using an ensemble and updating the classifiers individually, they preserve information about classes which do not change when concept drift is detected. However, label dependencies are not considered. \cite{zhu2018multi} proposed an anomaly detection method for concept evolution and infinitely delayed labels. It has three processes: \textit{i)} classification, using pairwise label ranking, binary linear classifiers, and a function to minimize the pairwise label ranking loss; \textit{ii)} detection, using Isolation Forest together with a clustering procedure in order to detect instances which may represent the emergence of new classes; and \textit{iii)} updating, building a classifier for each new class according to an optimization function. Although able to detect new classes, the method has difficulties with concept drifts, since changes in the streams are considered~anomalies. \cite{CostaJunior2019} proposed MINAS-BR, a clustering-based method using k-means for novelty detection. Offline labeled instances induce an initial decision model. This model classifies new online unlabeled instances, which are used to update the model in an unsupervised fashion. The method also considers that instances that are outside the radius of the existing clusters represent novelty classes, and new models must be constructed for them. Although promising, focusing on novelty detection can generate many false positives, harming the performance for concept drift detection. From all methods reviewed here, only \cite{wang2012mining}, \cite{zhu2018multi} and \cite{CostaJunior2019} consider infinitely delayed labels. Wang et al., however, use Active Learning, and thus consider that, at some point, labeled instances will be available. Zhu et al. show promising results for anomaly detection, but fail to detect concept~drift. The method of Costa Júnior et al., although focused on novelty detection, also addresses concept drift. Thus, we included MINAS-BR in our experiments, only considering its concept drift detection~strategy.
\section{Introduction} In conventional distribution systems, the lack of sufficient real-time measurements in a distribution grid hinders the ability to obtain situational awareness. With increasing penetration of PV and EVs, more extensive real-time monitoring and control is required for effective operation of the system and for good quality of service to the customers \cite{baran1994state}. To ensure situational awareness, classic distribution system state estimation (DSSE) uses pseudo-measurements with conventional weighted least squares approaches \cite{manitsas2012distribution}. Recently, sparsity-aware DSSE techniques \cite{9247106, dahale2021joint, rout2022dynamic} have been proposed to deal with the issue of low observability. In parallel, utilities are upgrading their distribution systems to cope with the low-observability issue. This has led to a significant increase in the availability of real-time measurements to the control center. For instance, the installation of smart meters and supervisory control and data acquisition (SCADA) sensors has increased to improve the measurement redundancy. Furthermore, new generations of phasor measurement units (PMU) and IEDs (Intelligent Electronic Devices) enhance situational awareness, protection and control functions in substations. However, aggregating the measurements from different sources presents some challenges. Firstly, the measurements are unevenly sampled, i.e., they are obtained at different rates. For example, the smart meter measurements are typically sampled at 15-min intervals while the SCADA sensors are sampled at 1-sec to 1-min intervals. Furthermore, these measurements may be intermittent or corrupt due to communication network impairments. It is therefore important to reconcile the multi time-scale measurements for situational awareness in a power distribution grid. \subsection{Related work} Research efforts have focused on aggregating two time-scale measurements using a linear interpolation/extrapolation based weighted least squares (WLS) approach \cite{gomez2014state}. However, this approach fails to exploit the spatio-temporal relationships in the time-series data and performs poorly in the case of intermittent measurements. The issue of irregular sensor sampling and random communication delays was addressed in \cite{stankovic2017hybrid}. A multi-task Gaussian process (GP) framework to reconcile heterogeneous measurements was proposed in \cite{9637824}, \cite{dahale2022bayesian}. This approach is computationally expensive as it requires inverting the kernel matrix to impute or predict the slow-rate measurements. An exponential moving average method to extrapolate the slow-rate measurements was proposed in \cite{karimipour2015extended}. Authors in \cite{alcaide2017electric} use PMU and SCADA measurements for state estimation. This approach performs DSSE by incorporating a subset of PMU measurements available at time $t$ along with the predicted SCADA measurements. The future predictions of the SCADA measurements are obtained using the information from the previous state estimates. It suffers from large measurement redundancy requirements (around 1.7), which makes it impractical for low-observable distribution systems. Furthermore, \cite{alcaide2017electric} fails to take into account any missing-measurement scenarios that could occur while aggregating measurements over finite bandwidth communication networks. 
A recursive Gaussian process based framework is proposed in \cite{9878081} that sequentially aggregates measurements batch-wise or in real-time. The proposed approach requires careful selection of the hyper-parameters of the GP function as well as inversion of the kernel matrix at the initial time step. In this paper, we propose a novel approach that leverages spatio-temporal dependencies in time-series data without involving matrix inverse operations. The proposed approach uses neural ordinary differential equations \cite{chen2018neural} that are ideal for imputing and predicting time-series measurements collected at non-uniform intervals. \subsection{Contributions} The main contributions of this paper are summarized below: \begin{itemize} \item For the first time, we propose a latent neural ordinary differential equations (LODE) approach to reconcile the multi time-scale measurements in a distribution grid. The proposed approach is capable of performing imputations as well as predictions at any desired time instant. \item The proposed approach is computationally efficient and performs fast predictions once the model is trained offline. \item Simulation results for the three-phase unbalanced IEEE 37 bus system reveal the superior performance of the proposed approach. The proposed approach provides smooth imputations and predictions with high fidelity. \end{itemize} \section{Background: Neural ODE} The measurements obtained from multiple grid sensors are obtained at different sampling rates. The multivariate time-series sensor data with $D$ variables and of length $N$ can be written as, \begin{equation} X = [x_1, x_2, \ldots, x_N]^\top \in \mathbb{R}^{N \times D}, \quad x_t \in \mathbb{R}^D \label{eq:1} \end{equation} This data may contain missing values due to the sensor sampling rate or communication impairments. A mask $M \in \mathbb{R}^{N \times D}$ identifies the missing measurements. An entry $m_t^d$ of $M$ is set to 1 if the corresponding measurement $x_t^d$ is observed, and to 0 otherwise. The goal is to reconcile the unevenly sampled measurements at the finest time resolution. Learning a generative model for the multivariate time-series will help accomplish this goal. Generative models based on deep neural networks are typically built on the concept of a fixed number of \textit{layers}. In the forward pass, each network consists of a stack of $L$ transformations, where $L$ is the depth of the model. In order to update these models, a backpropagation algorithm is run through the same $L$ layers via the chain rule. This process necessitates that we store the intermediate values of the layers. Thus, training standard deep neural networks is computationally challenging, as the memory required to store these intermediate quantities grows with the model depth. Furthermore, only a limited number of transformations can be performed due to the fixed number of layers. Neural ODE is a recent novel framework that is effective for modeling irregularly-sampled time series commonly encountered in various real-world applications, including smart grid and medical data. It combines deep neural network principles with ordinary differential equations, and is thus more effective than conventional time series models. In particular, in this work, Neural ODE is used for learning generative models for multivariate time series data from the distribution grid. 
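To make the data layout of Eq.~\eqref{eq:1} and the mask $M$ concrete, the following minimal Python sketch aligns a fast (1-min) and a slow (15-min) series on a common 1-min grid; the series lengths and values are illustrative, not the actual dataset: \begin{verbatim}
import numpy as np

# one hour at 1-min resolution, D = 2 sensors
N, D = 60, 2
t = np.arange(N)

X = np.zeros((N, D))
M = np.zeros((N, D))

scada = np.random.rand(N)           # sampled every minute (assumed values)
X[:, 0], M[:, 0] = scada, 1.0

meter_idx = t[::15]                 # smart meter: one sample every 15 min
X[meter_idx, 1] = np.random.rand(len(meter_idx))
M[meter_idx, 1] = 1.0               # m_t^d = 1 only where x_t^d is observed
\end{verbatim}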
Neural ODE offers a continuous-time transformation of variables from the input state to the final predictions, unlike a standard deep neural network, which only performs a limited number of transformations determined by the number of layers. The transformed values (or intermediate values) are obtained via ODE solvers by providing the initial state and dynamics as inputs. The dynamics of the transformation function is determined by a neural network as shown below: \begin{equation} \frac{dz_t}{dt} = f(z_t, \theta) \label{eq:2} \end{equation} where, $f$ is a neural network parameterized by $\theta$ that defines the ODE dynamics. $z_t$ is the hidden state of the Neural ODE. Thus, starting from an initial point $z(t_0)$, the transformed state at any time $t_i$ is obtained by integrating the ODE forward in time, \begin{equation} \begin{aligned} z_i = z_0 + \int_{t_0}^{t_i} \frac{dz_t}{dt} dt \\ z_i = ODESolve (f, z_0, t_0, t_i, \theta) \end{aligned} \label{eq:3} \end{equation} Equation~\eqref{eq:3} can be solved numerically using any ODE solver (e.g., Euler's method). In order to train the parameters of the ODE function $f$, an adjoint sensitivity approach is proposed in \cite{chen2018neural}. This approach computes the derivatives of the loss function with respect to the model parameters $\theta$ by solving a second augmented ODE backwards in time. Some of the advantages of using Neural ODE solvers over other conventional approaches are: (1) \textit{Memory efficiency:} The adjoint sensitivity approach allows us to train the model with constant memory cost, independent of the depth of the ODE function $f$; (2) \textit{Adaptive computation:} In deep neural networks, the number of layers is fixed and, therefore, so is the number of function evaluations. However, the number of layers in a neural ODE is the number of steps an adaptive ODE solver decides to take. This means that the neural ODE can effectively adapt the number of layers on the fly for different datasets and take adaptive steps wherever necessary to determine the solution with the desired accuracy; (3) \textit{Effective formulation:} In addition to the above two advantages, the continuously defined dynamics can naturally incorporate data which arrives at arbitrary times. Therefore, we propose to use the Neural ODE for reconciling the unevenly sampled distribution grid data. \section{Proposed Approach} The basic neural ODEs evaluate the hidden state values at any desired time instants. However, such models are hard to interpret, especially for multi-time scale power measurements, due to the combined dynamics of the power system and the ODE solver. Therefore, we propose to use the Latent ODE (LODE) approach for reconciling the multi-time scale power measurements. The LODE approach is a continuous-time generative process for integrating multi-time scale measurements. This approach combines Neural ODEs and a variational autoencoder within a single framework \cite{rubanova2019latent}. LODE has two key advantages over neural ODE: First, it explicitly decouples the dynamics of the power system, the likelihood of observations, and the recognition model so that each component can be analyzed separately. Second, the posterior distribution over an initial latent state provides a measure of uncertainty, which further increases the reliability of our predictions. The proposed framework (LODE) has three different modules, namely an encoder, a decoder, and the ODE solver. The architecture of LODE for a smart distribution grid is illustrated in Fig. \ref{fig:architecture}. 
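To make Eqs.~\eqref{eq:2} and \eqref{eq:3} concrete, a minimal sketch using the Torchdiffeq package of \cite{chen2018neural} follows; the state dimension, hidden width, and time grid are illustrative choices, not the paper's exact configuration: \begin{verbatim}
import torch
import torch.nn as nn
from torchdiffeq import odeint   # ODE solvers with adjoint-based training

class ODEFunc(nn.Module):
    """Neural network f(z, theta) defining the dynamics dz/dt, Eq. (2)."""
    def __init__(self, dim=4, hidden=100):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim))
    def forward(self, t, z):     # t unused: time-invariant dynamics
        return self.net(z)

f = ODEFunc()
z0 = torch.randn(1, 4)                       # initial state z(t_0)
times = torch.tensor([0.0, 0.25, 1.0, 3.0])  # arbitrary, unevenly spaced t_i
z = odeint(f, z0, times)                     # Eq. (3): z_i at each requested t_i
print(z.shape)                               # (4, 1, 4): one state per time point
\end{verbatim}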
\begin{figure*}[h!] \centering \includegraphics[width= 0.85\textwidth]{architecture.png} \caption{Architecture of the proposed Latent ODE approach for a smart distribution grid} \label{fig:architecture} \end{figure*} Each module of LODE is described in the forthcoming subsections. \subsection{Encoder: Recognition network} This module encodes the input measurements from the data space and transforms them into a latent space. Encoding is typically carried out via a recurrent neural network since it is effective in capturing long-term dependencies of time series data. The encoder takes $\{ x_i, t_i\}_{i=1}^{N}$ as an input, where $x_i$ represents the observations and $t_i$ represents the corresponding observation times. The data is processed backward in time from time $t_N$ to $t_0$. An approximate posterior over the initial state $q_\phi(z_0|\{ x_i, t_i\}_{i=1}^{N})$ is computed from the last hidden layer of the encoder network. In our approach, the mean and standard deviation of the approximate posterior $q_\phi(z_0|\{ x_i, t_i\}_{i=1}^{N})$ are functions of the final hidden state of the encoder network, characterized by, \begin{equation} q_\phi(z_0|\{ x_i, t_i\}_{i=1}^{N}) = \mathcal{N}(\mu_{z_0}, \sigma_{z_0}) \end{equation} where, $\mu_{z_0}, \sigma_{z_0} = g(RNN_{\phi}(\{ x_i, t_i\}_{i=1}^{N}))$. Here, the function $g$ represents a neural network layer, translating the final hidden state of the encoder into the mean and variance of $z_0$. \subsection{ODE Solver} Once an approximate posterior distribution $q(z_0|\{ x_i, t_i\}_{i=1}^{N})$ is obtained from the encoder, an initial latent state ($z_0$) for the ODE solver is sampled from the corresponding distribution. The initial latent state serves as an input to the ODE solver together with the ODE dynamics function $f$. Then, an ODE solver is used to obtain latent space observations for all the given times, \begin{equation} z_1, z_2, ..., z_N = ODESolve(f, z_0, \theta, \{ t_0, t_1,..., t_N\}) \label{eq:5} \end{equation} Thus, given observation times $t_0, t_1, ... , t_N$ and an initial state $z_0$, an ODE solver produces $ z_1, ..., z_N$, which describe the latent state at each observation time. The ODE solver is capable of imputing historical time points as well as forecasting future values by providing appropriate latent states at desired time instants. As the function $f$ is time-invariant, a unique latent trajectory can be defined given the initial latent state. \subsection{Decoder} The decoder transforms the latent trajectory defined at the various time instants back into the data space using a neural network. A standard multi-layer perceptron can be used in the decoder network, since we only need a mapping between the two spaces. \subsection{Training of LODE} The training of the proposed LODE framework is similar to that of a variational autoencoder, and it is end-to-end. The encoder-decoder model is trained by maximizing the Evidence Lower Bound (ELBO) given as, \begin{equation} \begin{aligned} ELBO(\phi, \theta) = \mathop{\mathbb{E}}_{q_{\phi} (z_0| \{x_i, t_i \}_{i=0}^{N})} [log(p_{\theta} (x_0, ...,x_N))] \\ - KL(q_{\phi} (z_0| \{x_i, t_i \}_{i=0}^{N}) || p(z_0)) \end{aligned} \end{equation} The first term in the ELBO represents the log probability of the decoder estimates. The second term denotes the KL divergence or degree of ``dissimilarity'' between the two distributions $q_{\phi} (z_0| \{x_i, t_i \}_{i=0}^{N}) $ and $p(z_0)$. Here, the prior over latent states $p(z_0)$ is chosen as $\mathcal{N}(0,1)$. 
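A minimal sketch of this objective, assuming a Gaussian observation likelihood with an illustrative noise level (the function name \textit{elbo} and the value of \textit{noise\_std} are ours, for illustration only): \begin{verbatim}
import math
import torch

def elbo(x, x_hat, mu, log_sigma, noise_std=0.1):
    # Gaussian log-likelihood of the decoder estimates (assumed noise level)
    log_p = (-0.5 * ((x - x_hat) / noise_std)**2).sum() \
            - 0.5 * x.numel() * math.log(2 * math.pi * noise_std**2)
    # analytic KL( N(mu, sigma^2) || N(0, 1) ), summed over latent dims
    kl = 0.5 * (mu**2 + (2 * log_sigma).exp() - 2 * log_sigma - 1).sum()
    return log_p - kl        # maximize this (minimize its negative)
\end{verbatim}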
The overall pipeline of the proposed Latent ODE approach is summarized in Algorithm 1. \begin{algorithm} \KwInput{Datapoints $\{ x_i\}_{i=1}^{N}$ and the corresponding times $\{ t_i\}_{i=1}^{N}$ \\} \begin{algorithmic}[1] \STATE $h = RNN(\{ x_i\}_{i=1}^{N})$ \STATE $\mu_{z_0}, \sigma_{z_0} = g(h)$ \STATE $z_{0} \sim \mathcal{N}(\mu_{z_0}, \sigma_{z_0}) = q_{\phi} (z_0| \{x_i, t_i \}_{i=0}^{N})$ \STATE $z_{1}, z_{2},..., z_{N} = ODESolve(f, \theta, z_{0}, (t_0,...,t_N))$ \STATE $\hat{x}_i = OutputNN\{ z_{i} \}$ \STATE \textbf{return} $\hat{x}_i$ \end{algorithmic} \caption{Latent ODE Approach} \end{algorithm} \section{Simulation results} This section evaluates the efficacy of the proposed framework for the imputation and prediction tasks related to a smart distribution grid. Experiments are carried out on the standard IEEE 37 bus system. \subsection{Data processing} We consider measurements from smart meter and SCADA sensors. The smart meter measurements consist of 24-hr active and reactive power injection time-series data aggregated at the primary nodes. This 24-hr load profile consists of a mixture of load profiles, i.e., industrial/commercial load profiles \cite{carmona2013fast}, and residential loads \cite{al2016state}. Reactive power profiles are obtained by assuming a power factor of $0.9$ lagging. The SCADA measurements are obtained by executing load flows on the test network. The SCADA measurements consist of the voltage magnitude measurements at a subset of node locations. The aggregated smart meter data are averaged over 15-min intervals while the SCADA measurements are sampled at a 1-min interval. Gaussian noise with zero mean and standard deviation equal to $10\%$ of the actual power values is added to the smart meter data to mimic real-world patterns. The smart meter and SCADA measurements constitute our training dataset. Once the distribution grid's measurements are obtained, we represent the dataset as a list of records. Each record represents the information about the time-series data with the format given as, \textit{record = [measurement type, values, times, mask]}. Here, time-series data at each node of the IEEE 37 bus network represents one \textit{record}. The \textit{measurement type} denotes the sensor type, i.e., $P,Q,$ or $V$. \textit{Values} $\in \mathbb{R}^{N \times 1}$ represents the sensor measurements with \textit{times} $\in \mathbb{R}^{N}$ as the corresponding time instants. \textit{Mask} $\in \mathbb{R}^{N \times 1}$ represents the availability of the corresponding measurements. The dataset is further normalized to the [0,1] interval. We take the union of all time points across different nodes in the dataset that are irregularly sampled. This is needed to perform batching during training. \subsection{Model Specifications} The encoder is a gated recurrent unit (GRU) \cite{cho2014properties}. We consider 40-dimensional hidden states of the encoder with tanh activation functions. The ODE function is a feedforward neural network with three layers and $100$ units on each layer. The ODE solver is a fifth-order `dopri5' solver. The decoder consists of a feedforward neural network with a single layer. Here, we consider an adaptive learning rate with an initial value of 0.01. We use a batch size of $10$, and report loss as mean squared error (MSE) and negative ELBO. The model is trained using stochastic gradient descent through the Adam optimizer for $200$ iterations. 
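The following sketch mirrors Algorithm 1 and the above specifications; the latent dimension, batch shapes, and single-layer GRU are illustrative assumptions, not the exact implementation: \begin{verbatim}
import torch
import torch.nn as nn
from torchdiffeq import odeint

class LatentODE(nn.Module):
    """Sketch of Algorithm 1: GRU encoder -> q(z0|x) -> ODE solve -> decoder.
    40-d encoder state, 3-layer/100-unit ODE function, 1-layer decoder."""
    def __init__(self, obs_dim=1, latent_dim=10, enc_hidden=40):
        super().__init__()
        self.gru = nn.GRU(obs_dim, enc_hidden, batch_first=True)
        self.to_moments = nn.Linear(enc_hidden, 2 * latent_dim)  # g(.)
        self.f = nn.Sequential(                                  # ODE dynamics
            nn.Linear(latent_dim, 100), nn.Tanh(),
            nn.Linear(100, 100), nn.Tanh(),
            nn.Linear(100, latent_dim))
        self.decoder = nn.Linear(latent_dim, obs_dim)

    def forward(self, x, times):
        # steps 1-3: encode backward in time, sample z0 ~ q(z0 | x, t)
        _, h = self.gru(torch.flip(x, dims=[1]))
        mu, log_sigma = self.to_moments(h[-1]).chunk(2, dim=-1)
        z0 = mu + log_sigma.exp() * torch.randn_like(mu)
        # step 4: latent trajectory at all requested times
        zs = odeint(lambda t, z: self.f(z), z0, times)
        # step 5: map the latent states back into the data space
        return self.decoder(zs).permute(1, 0, 2), mu, log_sigma

model = LatentODE()
x = torch.randn(10, 96, 1)          # batch of 10 series, 96 samples each
t = torch.linspace(0.0, 24.0, 96)   # a 24-hour window at 15-min steps
x_hat, mu, log_sigma = model(x, t)
print(x_hat.shape)                  # (10, 96, 1)
\end{verbatim} Training would then maximize the ELBO of the previous subsection (e.g., using the \textit{elbo} sketch above) with the Adam optimizer.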
All the experiments are conducted on a system with an Intel i9 core processor, 32 GB RAM, and an 8 GB GPU. The entire framework is coded in Python with the support of the PyTorch-based Torchdiffeq package \cite{chen2018neural}. The results of our experiments are discussed in the following subsections. \subsection{Imputation} In this task, the smart meter measurements are interpolated at a 1-min interval. The training is performed using the observed 15-minute interval data. In order to perform interpolation, the encoder runs backward in time to compute the approximate posterior distribution at the initial time $t_0$. Fig. \ref{fig:ami_node0} demonstrates the imputation performance using the proposed LODE approach. Table~\ref{table1} shows the comparison of the LODE approach with the linear interpolation approach \cite{gomez2014state} and the recursive GP with graphs (RGP-G) approach \cite{9878081}. As seen from Table~\ref{table1}, the proposed approach is accurate with 0.2\% MSE error on the test data. The trajectories of the MSE and negative ELBO on the test data are illustrated in Fig. \ref{fig:testing_loss_mse} and Fig. \ref{fig:testing_loss_ELBO}, respectively. The convergence of the losses demonstrates the effective training of the model. \begin{table} \centering \caption{MSE performance of the proposed LODE, linear interpolation and recursive GP approach} \label{table1} \begin{tabular}{|l|l|} \hline \textbf{Approach} & \textbf{MSE (\%)} \\ \hline Latent ODE & 0.2\% \\ \hline \begin{tabular}[c]{@{}l@{}}Linear \\ Interpolation\end{tabular} & 2\% \\ \hline \begin{tabular}[c]{@{}l@{}}Recursive Gaussian~\\process\end{tabular} & 0.7\% \\ \hline \end{tabular} \end{table} \begin{figure}[h!] \centering \includegraphics[width= 0.55\textwidth]{ami_node0.png} \caption{{Imputation at node 1 using Latent ODE approach}} \label{fig:ami_node0} \end{figure} \begin{figure}[h!] \centering \includegraphics[width= 0.55\textwidth]{testing_loss_mse.png} \caption{{MSE on the test dataset using Latent ODE approach}} \label{fig:testing_loss_mse} \end{figure} \begin{figure}[h!] \centering \includegraphics[width= 0.55\textwidth]{testing_loss_ELBO.png} \caption{{Negative ELBO on the test dataset using Latent ODE approach}} \label{fig:testing_loss_ELBO} \end{figure} \subsection{Prediction} In this task, we split the time-series into two halves, $t_0$ to $t_{N/2}$ and $t_{N/2}$ to $t_{N}$. The model is trained by conditioning on the observations in the first 12 hrs of the time-series data and reconstructing the other half, i.e., the training loss is computed on the second half. Once the model is trained, it can perform predictions for any desired time horizon. As seen from Fig. \ref{fig:extrapolation_node12}, the model only observes the first 12 hours of measurement data (blue in color) and extrapolates the next 12 hours (red in color). The predictions on the testing data are accurate, with an MSE of 0.72\%. \begin{figure}[h!] \centering \includegraphics[width= 0.55\textwidth]{extrapolation_node12.png} \caption{{Predictions at node 12 using Latent ODE approach}} \label{fig:extrapolation_node12} \end{figure} \section{Conclusion and Future work} This paper proposes a Latent ODE approach for integrating heterogeneous measurements in a smart distribution grid. The proposed approach uses neural ODEs for learning generative models, which can provide accurate imputations and predictions at any desired time instant. Future work involves developing a latent ODE approach that is robust against outliers in the measurement data. \bibliographystyle{IEEEtran}
\section{Introduction} \noindent Ref.~\cite{Meade:2008wd} introduced a novel framework suitable for discussing and analysing general models of gauge mediation in a model-independent way. The so-called General Gauge Mediation (GGM) paradigm~\cite{Meade:2008wd} is defined by the requirement that the Minimal Supersymmetric Standard Model (MSSM) becomes decoupled from the hidden SUSY-breaking sector in the limit where the three MSSM gauge couplings $\alpha_{i=1,2,3}$ are set to zero. Since no other parameters participate in the coupling of the two sectors, we call this strict interpretation of gauge mediation `general pure gauge mediation' or {\em pure} GGM. This framework is broad enough to include everything from weakly coupled models with explicit messengers to strongly coupled theories with direct mediation. Preliminary investigations of the phenomenology of GGM have been made in Refs.~\cite{Carpenter:2008he,Rajaraman:2009ga,Abel:2009ve}. In particular Ref.~\cite{Abel:2009ve} concentrated on the pure GGM scenario which we shall be adopting here. This is in a sense the most minimal assumption because it obviates the need for an additional sector just to generate the bilinear $B_\mu$ parameter for the higgses. To summarise the approach, in addition to the supersymmetric interaction, \begin{equation} \label{mudef} {\cal L}_{eff}\supset\int d^2 \theta \,\, \mu \,{\cal H}_u{\cal H}_d \ , \end{equation} the Higgs-sector effective Lagrangian also includes soft supersymmetry-breaking terms. All of the latter must be generated by the SUSY-breaking sector, since there would be little merit in a model of dynamical SUSY-breaking which generates only a subset of the SUSY-breaking terms in the effective SM Lagrangian. There are quadratic terms \begin{equation} \label{quaddef} m_u^2 |H_u|^2 + m_d^2 |H_d|^2 +(B_\mu H_uH_d + c.c.)~, \end{equation} as well as cubic $a$-terms \begin{equation} \label{Atermsdef} a_u^{ij} H_u Q^i \bar u^j + a_d^{ij} H_d Q^i \bar d^j + a_L^{ij} H_d L^i \bar E^j~, \end{equation} in the MSSM. As is well-known, a phenomenologically acceptable electroweak symmetry breaking in the supersymmetric SM occurs if $\mu^2$ and the soft masses in \eqref{quaddef} at the low scale (i.e. the electroweak scale) are of the same order, $\mu^2 \sim B_\mu \sim m_{soft}^2\sim M^2_{W}$. In pure GGM we have no direct couplings of the SUSY-breaking sector to the Higgs sector, and therefore must have $B_{\mu}\approx 0$ at the messenger scale. From this starting point, i.e. taking $B_{\mu}\approx 0$ at the high scale $M_{mess}$, a small but viable value of $B_\mu$ is generated radiatively at the electroweak scale \cite{Rattazzi:1996fb,Babu:1996jf}. Electroweak symmetry breaking then determines the values of $\tan\beta$ and $\mu$. Since $B_\mu$ is small, $\tan\beta$ is generally large (between $20$ and $70$). This setup where $B_{\mu}\approx 0$ is an input and $\tan \beta$ is an output \cite{Abel:2007nr,Abel:2009ve} is in contrast to the common approach where $\tan\beta$ is taken as an arbitrary input and $B_{\mu}$ at the high scale is obtained from it. The main free parameters are the gaugino and scalar masses as well as the messenger scale. 
For simplicity we restrict ourselves to a single effective scale $\L_G$ for the gaugino masses and a single scale $\L_S$ for the scalars\footnote{Of course in each specific GGM model, the parameters $\L_G$ and $\L_S$ determining gaugino and scalar masses at the messenger scale are computed and expressed in terms of the scales of the SUSY-breaking sector, and details of the messenger fields. As such, $\L_G$ and $\L_S$ (together with $M_{mess}$) characterise a point in the pure GGM parameter space and can be treated as input parameters. }. Thus at the messenger scale $M_{mess}$ the soft supersymmetry breaking gaugino masses are \begin{equation} \label{gauginosoft} M_{\tilde{\lambda}_i}(M_{mess}) =\, k_i \,\frac{\alpha_i(M_{mess})}{4\pi}\,\Lambda_G \end{equation} where $k_i = (5/3,1,1)$, $k_i\alpha_i$ (no sum) are equal at the GUT scale and $\alpha_i$ are the gauge coupling constants. The scalar squared masses are \begin{equation} \label{scalarsoft} m_{\tilde{f}}^2 (M_{mess}) =\, 2 \sum_{i=1}^3 C_i k_i \,\frac{\alpha_i^2(M_{mess})}{(4\pi)^2}\, \Lambda_S^2 \end{equation} where the $C_i$ are the quadratic Casimir operators of the gauge groups. Ordinary gauge mediation scenarios (see Ref.~\cite{Giudice:1998bp} for a review) live on the restricted parameter space $\L_G\simeq\L_S$. We have implemented these boundary conditions in a modified version of \texttt{Softsusy}~\cite{Allanach:2001kg}, which takes $B_{\mu}$ as an input and predicts $\tan\beta$ using the electroweak symmetry breaking conditions. Outside the confines of \emph{ordinary} gauge mediation the parameter space is populated by many models that predict different values of the ratio of gaugino to scalar masses, $\L_G/\L_S$. In models with explicit messengers one expects this ratio to be close to one, while for direct mediation models the gaugino masses are often suppressed relative to the scalar masses~\cite{Izawa:1997gs,Kitano:2006xg,Csaki:2006wi,Abel:2007jx,Abel:2007nr,Abel:2008gv}. Ref.~\cite{Komargodski:2009jf} provided a general argument that linked the gaugino mass to the existence of lower lying minima at tree-level. Indeed hybrid models can easily be constructed which interpolate between these two cases by bringing lower lying minima in from infinity~\cite{Abel:2009ze}. It is also possible to achieve values $\L_G/\L_S>1$ by increasing the ``effective number of messengers'' in the context of extraordinary gauge mediation models~\cite{Cheung:2007es}. Naively this ``gaugino mediation'' region of parameter space corresponds to strong coupling, but explicit and calculable models are possible in the context of extra dimensional models \cite{Mirabelli:1997aj,Kaplan:1999ac,Chacko:1999mi,Csaki:2001em,McGarrie:2010kh,McGarrie:2010qr} or electric/magnetic duality~\cite{Green:2010ww} or some other mechanism which can screen the scalar mass contributions (see the latter reference for a more complete review). The broad relation of the underlying physics to the values of $\L_G$ and $\L_S$ is shown in Figure~\ref{phenoland}. It is striking that the phenomenology of GGM probes the vacuum structure so directly. We also show for later reference the exclusions from various phenomenological constraints discussed in detail in Ref.~\cite{Abel:2009ve} for a messenger scale of $10^{10}$\mbox{\,GeV\,}.
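As a concrete illustration of these boundary conditions, the following Python snippet evaluates Eqs.~\eqref{gauginosoft} and \eqref{scalarsoft} for given $\Lambda_G$ and $\Lambda_S$. The gauge couplings at the messenger scale and the Casimirs below are placeholder inputs chosen purely for illustration; in practice they come from the RG evolution performed by \texttt{Softsusy}.
\begin{verbatim}
# Sketch of the soft-mass boundary conditions at M_mess.
# alpha_i(M_mess) and the Casimirs C_i below are assumed,
# illustrative values; Softsusy supplies the real ones.
from math import pi

k = (5.0 / 3.0, 1.0, 1.0)       # k_i (GUT-normalised U(1))
alpha = (0.017, 0.033, 0.070)   # assumed alpha_{1,2,3}(M_mess)

def gaugino_masses(lam_g):
    """M_i = k_i alpha_i/(4 pi) Lambda_G."""
    return [ki * ai / (4 * pi) * lam_g
            for ki, ai in zip(k, alpha)]

def scalar_mass(lam_s, casimirs):
    """m_f = sqrt(2 sum_i C_i k_i (alpha_i/(4 pi))^2) Lambda_S."""
    m2 = 2 * sum(ci * ki * (ai / (4 * pi)) ** 2
                 for ci, ki, ai in zip(casimirs, k, alpha))
    return lam_s * m2 ** 0.5

# Benchmark-like inputs Lambda_G = 5e4 GeV, Lambda_S = 2.5e5 GeV;
# Casimirs roughly those of a left-handed squark (C_1 neglected).
print(gaugino_masses(5.0e4))
print(scalar_mass(2.5e5, casimirs=(0.0, 0.75, 4.0 / 3.0)))
\end{verbatim}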
\begin{figure} \begin{center} \begin{picture}(190,180) \includegraphics[viewport= 160 70 380 380, width=6cm]{10exclusion.eps} \Text(-155,135)[c]{\scalebox{1}[1]{\large \JJ{direct}}} \Text(-145,119)[c]{\scalebox{1}[1]{\large \JJ{hybrid}}} \Text(-110,105)[c]{\scalebox{1}[1]{\large \JJ{ordinary}}} \Text(-75,60)[c]{\scalebox{1}[1]{\large \JJ{Gaugino}}} \Text(-70,45)[c]{\scalebox{1}[1]{\large \JJ{mediation}}} \end{picture} \vspace*{1cm} \end{center} \begin{center} \caption{ The underlying mediation physics corresponding to different regions of the $\Lambda_G$,~$\Lambda_S$ parameter space. In the extreme $\Lambda_G\ll \Lambda_S$ region we have direct gauge mediation with no lower lying tree-level minima. Outside this region lies the hybrid region with lower lying minima being brought in from infinity. The red dotted line indicates the ordinary gauge mediation line where $\Lambda_G = \Lambda_S$, which can be reproduced in metastable set-ups with high messenger scales such as those in Ref.~\cite{Murayama:2006yf}. Below the ordinary gauge mediation line we find the ``many effective messenger'' $\Lambda_G \gg \Lambda_S$ region, which is where some mechanism screens the contributions to the scalar masses. We also show the allowed region for intermediate messenger scales, $ M_{Mess} = 10^{10}$~GeV with the dominant constraints excluding various areas indicated as follows: yellow (pale grey) means the point is excluded by the presence of tachyons in the spectrum, while the black region falls foul of the direct search limits. In the blue (dark grey) region SoftSUSY has not converged and in the green (light grey) region a coupling reaches a Landau pole during RG evolution.} \label{phenoland} \end{center} \end{figure} GGM also allows different $\Lambda_G^{(i)}$ and $\Lambda_S^{(j)}$ for the different species of gauginos and sfermions, although certain sum-rules still apply~\cite{Meade:2008wd}. However the general parameter space is prohibitively large for an exhaustive survey, and moreover most perturbative models (for example the direct mediation models, or the hybrid models of \cite{Abel:2009ze}) correspond only to single $\L_G$, $\L_S$ and $M_{mess}$ scales. This is especially true if one wishes to maintain gauge coupling unification, which is most easily achieved by keeping an $SU(5)$ structure for the mediating sector. In this sense the set of models defined by single $\L_G$, $\L_S$ and $M_{mess}$ scales are the gauge mediation equivalent of the canonical mSUGRA\footnote{We use the more common term minimal Supergravity (mSUGRA); Constrained MSSM (CMSSM) would be more accurate.} scenario, with $\L_G$ and $\L_S$ playing the role of the parameters $m_{1/2}$ and $m_0$ in those models\footnote{Note that our approach is orthogonal to that taken in Ref.~\cite{Carpenter:2008he} which has $\Lambda^i_G=\Lambda^i_S=\Lambda^i$, but a different $\Lambda^i$ for each gauge group.}. The LHC is currently operating at 7~TeV centre-of-mass energy and, it is hoped, will collect 1~fb$^{-1}$ of data by the end of 2011. It is thus relevant to ask what models and regions of parameter space might be discovered in the next year. Recent work on this subject includes~\cite{Altunkaynak:2010we,Baer:2010tk,Alves:2010za} and has focussed on the mSUGRA scenario. The goal of this paper is to investigate the signatures and the discovery potential of pure GGM models at the early LHC stage, focussing on collisions at 7~TeV. In section~\ref{sec:benchmarks} we shall analyse the available parameter space relevant for this regime.
We will proceed to construct a pair of benchmark points with relatively light gluinos. For these we compute the total $2 \to 2$ production cross-sections, the low-energy spectrum of superpartners and the branching ratios. The NLSP particles in this region of the parameter space are neutralinos. We continue in section~\ref{sec:NLSP} with a more general survey of the NLSP phenomenology, which is also very relevant for early stage LHC searches, and analyse other regions of the pure GGM parameter space, complementary to that of section~\ref{sec:benchmarks}. These will include benchmark points in the stau and co-NLSP regions, and a benchmark point in the $\L_G\gg \L_S$ region. \section{Benchmark points with light gluinos for early LHC discovery} \label{sec:benchmarks} We begin our investigation of the discovery potential of these models at the early LHC stage by focussing on two explicit benchmark points. The parameter space of pure GGM models was first investigated in our earlier work \cite{Abel:2009ve}, which also excluded regions due to various constraints. These are shown for the example of a $10^{10}$\mbox{\,GeV\,} messenger scale in Fig.~\ref{phenoland}. We will be exploring the allowed regions of parameter space where either gluino or squark masses are likely to be sufficiently light to be discovered with a centre of mass energy of up to 7~TeV and integrated luminosity of order $1$\,fb$^{-1}$. As a guideline, note that Ref.~\cite{Baer:2010tk} has argued that in mSUGRA, when $m_{\tilde{g}} \sim m_{\tilde{q}}$, the $1$\,fb$^{-1}$ reach is approximately 1.1~\mbox{\,TeV\,}. Our first two benchmark points will be chosen to have a slightly split spectrum with $m_{\tilde{q}} \sim 2-4\, m_{\tilde{g}}$ to allow lighter gluinos. Three scans of the parameter space of pure GGM are shown in Figure~\ref{nessie}, one at $M_{mess}=10^{8}\mbox{\,GeV\,}$, one at $M_{mess}=10^{10}\mbox{\,GeV\,}$ and one at $M_{mess}=10^{14}\mbox{\,GeV\,}$\footnote{It should be noted that lower values of messenger scales restrict the parameter space significantly because of our assumption that $B_\mu$ is generated radiatively, and the fact that low messenger scales reduce the range of RG running.}. In each figure stop mass contours of 500\mbox{\,GeV\,} and 1\mbox{\,TeV\,} are indicated as dotted lines, and the 500\mbox{\,GeV\,} and 1\mbox{\,TeV\,} gluino contours are indicated as solid lines\footnote{In the $M_{mess}= 10^8$\mbox{\,GeV\,} scenario the single dotted contour is for 1\mbox{\,TeV\,} stop masses.}. Furthermore, the diagonal dotted red line corresponds to the boundary between neutralino and slepton NLSP. (Note that this line is similar to but distinct from the ordinary gauge mediation line of Figure~\ref{phenoland}.) The figures are also marked with a variety of benchmark points. The circular blobs are benchmark points with a neutralino NLSP, the triangular points have a stau NLSP, and the stau-neutralino co-NLSP point is indicated by a star. The square blob corresponds to a gaugino mediated point with stau NLSP and slepton NNLSP.
\begin{figure} \begin{center} \vspace*{-0.6cm} \subfigure[]{ \includegraphics[bb= 142 75 500 400,clip,width=6.5cm]{Mmess8earlyLHC.eps} } \end{center} \vspace*{-1.2cm} \begin{center} \subfigure[]{ \includegraphics[bb= 142 75 500 400,clip,width=6.5cm]{Mmess10earlyLHC.eps} } \end{center} \vspace*{-1.2cm} \begin{center} \subfigure[]{ \includegraphics[bb= 142 75 500 400,clip,width=6.5cm]{Mmess14earlyLHC.eps} } \end{center} \begin{center} \caption{ The $\Lambda_G$,~$\Lambda_S$ parameter space for $M_{mess}=10^{8}\mbox{\,GeV\,}$ (upper panel), $M_{mess}=10^{10}\mbox{\,GeV\,}$ (middle panel) and $M_{mess}=10^{14}\mbox{\,GeV\,}$ (lower panel). Stop mass contours (500\mbox{\,GeV\,} and 1\mbox{\,TeV\,}) are indicated as dotted lines, and the 500\mbox{\,GeV\,} and 1\mbox{\,TeV\,} gluino lines are solid. The NLSP is neutralino above the dotted red line and stau below. The marked points are the benchmark points discussed in the text: circular for neutralino NLSP (PGM1a middle panel, PGM1b bottom panel), triangular for stau NLSP (PGM2), a star for stau-neutralino co-NLSP (PGM3) on the bottom panel and finally a square for PGM4 which has stau NLSP and slepton NNLSP.} \label{nessie} \end{center} \end{figure} As will be seen from the $\chi^2$-analysis in Fig.~\ref{fig:fits}, the region where the squark masses are below 500~GeV is somewhat disfavored by already existing data. Therefore, in this section we will concentrate on the region of the parameter space with light gluinos -- benchmark points PGM1a and PGM1b. The triangular, square and star-shaped points with stau NLSP and stau-neutralino co-NLSP will be discussed in Section~\ref{sec:PGM23}. We chose PGM1a and PGM1b in the light gluino region, to the left of the 500\mbox{\,GeV\,}\ gluino line, marked as circular blobs in each figure. The first point (PGM1a) is for a medium to low messenger mass of $10^{10}$~\mbox{\,GeV\,} (middle panel of Fig.~\ref{nessie}). The second (PGM1b) is for a high messenger mass of $10^{14}$~\mbox{\,GeV\,} (lower panel of Fig.~\ref{nessie}). These are typical light gluino points, and as we have said correspond to phenomenology of the ``mildly split'' variety (in which the low energy spectrum is the Standard Model with only fermionic superpartners) found in the direct gauge mediation models analysed in Refs.~\cite{Abel:2007jx,Abel:2007nr,Abel:2008gv}. To some degree these points are quite generic: we chose them to be to the left of the 500\mbox{\,GeV\,} gluino line but we have not tried to optimize the production cross section. The benchmark points are located in regions of parameter space which are in good agreement with currently known experimental constraints. The experimental constraints were discussed in detail in Ref.~\cite{Abel:2009ve}. They are quantified by a total $\chi^2$ which, it is pleasing to note, is indeed low in these regions. As shown in Figure~\ref{fig:fits}, where the 68\% and 95\% confidence regions are indicated as black lines, both benchmark points lie well within the 95\% confidence region.
\begin{figure} \begin{center} \subfigure[]{ \begin{picture}(190,180) \includegraphics[bb= 142 80 510 410,width=6.5cm]{10chi2-pointless.eps} \Vertex(-140.5,109){2} \end{picture} } \hspace*{1cm} \subfigure[]{ \begin{picture}(190,180) \includegraphics[bb= 142 80 510 410,width=6.5cm]{14chi2-pointless.eps} \Vertex(-140.5,109){2} \Text(-116,84)[c]{\scalebox{1}[1]{$\star$}} \Text(-116,66)[c]{\scalebox{1}[1]{$\blacktriangleup$}} \end{picture} } \end{center} \vspace*{-1cm} \begin{center} \caption{Figures (a,b) show the $\chi^2_{tot}$ distribution in the $\L_G$-$\L_S$ plane for $M_{mess}=10^{10}$ and $10^{14}$~GeV, respectively. The black lines denote the 68\% and 95\% confidence regions, and we also show the benchmark points following the same notation as before. The benchmark points are all inside the 95\% confidence regions.} \label{fig:fits} \end{center} \end{figure} The spectra of the two benchmark points are given in Table~\ref{tab:sp}, and the neighbourhood of the chosen benchmark points leads to similar spectra. The main features of the spectrum at these points are that they have light gluinos with masses below $500$~\mbox{\,GeV\,} and that the NLSP is a bino-like neutralino\footnote{The LSP is the gravitino, as is standard in gauge mediation.}. A detailed discussion of other possibilities for NLSP phenomenology in the early stages of the LHC in PGGM will be presented in section~\ref{sec:NLSP}. For the rest of the spectrum in Table~\ref{tab:sp} we note that the first two neutralinos are light, while the Higgsino-like third and fourth neutralinos are much heavier, at the TeV scale. A similar story holds for the charginos: one is quite light, approximately 135~GeV, and is wino-like, while the other is higgsino-like and at the TeV scale. The left-handed sleptons are at the TeV scale, while the right-handed ones vary from 400 to 700~GeV depending on the point and sparticle type. As usual, the right-handed staus are the lightest of the sleptons due to mixing proportional to $\tan\beta$ and the relatively large size of $\lambda_{\tau}$. Finally, the squarks all have masses above 1~TeV. Thus for these benchmark points the dominant production channel at the LHC is gluino pair production.
\begin{table} \begin{center} \begin{tabular}{|c|c|c|} \hline Benchmark point& PGM1a& PGM1b\\\hline \hline $M_{mess}$~(GeV)& $10^{10}$ & $10^{14}$ \\ \hline\hline $\L_G$~(GeV) & $5\times 10^4$ & $5\times 10^4$ \\ \hline $\L_S$~(GeV) & $2.5\times 10^5$ & $2.5\times 10^5$ \\ \hline $\tan\beta$ & 46.6 & 41.2 \\ \hline\hline $\chi_1^0$ & {\bf 67} & {\bf 67} \\ \hline $\chi_2^0$ & 136 & 133 \\ \hline $\chi_3^0$ & 1038 & 936 \\ \hline $\chi_4^0$ & 1039 & 938 \\ \hline $\chi_1^{\pm}$ & 136 & 134 \\ \hline $\chi_2^{\pm}$ & 1039 & 937\\ \hline $\tilde{g}$ & {\bf 458} & {\bf 453} \\ \hline\hline $\tilde{e}_L,\tilde{\mu}_L$ & 927 & 1013 \\ \hline $\tilde{e}_R,\tilde{\mu}_R$ & 540 & 712\\ \hline $\tilde{\tau}_1$ & 392 & 544 \\ \hline $\tilde{\tau}_2$ & 898 & 964 \\ \hline $\tilde{\nu}_{1,2}$ & 925 & 1011 \\ \hline $\tilde{\nu}_3$ & 889 & 958 \\ \hline\hline $\tilde{t}_1$ & 1418 & 1050\\ \hline $\tilde{t}_2$ & 1729 & 1471 \\ \hline $\tilde{b}_1$ & 1578 & 1287 \\ \hline $\tilde{b}_2$ & 1731 & 1471\\ \hline $\tilde{u}_L,\tilde{c}_L$ & 2011 & 1760 \\ \hline $\tilde{u}_R,\tilde{c}_R$ & 1803 & 1520 \\ \hline $\tilde{d}_L,\tilde{s}_L$ & 1983 & 1734 \\ \hline $\tilde{d}_R,\tilde{s}_R$ & 1774 & 1460\\ \hline\hline $h_0$ & 116.9 & 115.3 \\ \hline $A_0, H_0$ & 944 & 1032 \\ \hline $H^{\pm}$ & 947 & 1035 \\ \hline\hline \end{tabular} \end{center} \begin{center} \caption{Spectra for the two benchmark points with light gluinos. All masses are in GeV. The NLSP and the lightest coloured super-particle (gluino) are shown in bold in each case. These spectra and all other relevant details can be obtained in SLHA format at \href{http://www.ippp.dur.ac.uk/~SUSY}{\bf http://www.ippp.dur.ac.uk/$\sim$SUSY}} \label{tab:sp} \end{center} \end{table} We have computed the total production cross-sections at NLO using PROSPINO~\cite{Beenakker:1996ed,prospino}. The total gluino production cross sections in pp collisions at 7~TeV are \begin{eqnarray} {\rm PGM1a}:\quad\quad \sigma_{pp\rightarrow\tilde{g}\tilde{g}}=4.09\,{\rm pb}\quad\quad@7\,{\rm TeV}\\\nonumber {\rm PGM1b}:\quad\quad \sigma_{pp\rightarrow\tilde{g}\tilde{g}}=4.34\,{\rm pb}\quad\quad@7\,{\rm TeV} \end{eqnarray} We present cross-sections in femtobarns for various channels in Table~\ref{tab:xsections}. Since before shutdown the early-stage LHC is expected to accumulate approximately 1~fb$^{-1}$ of integrated luminosity, the entries in the table also give the number of SUSY events expected before then. The largest contribution to the total production cross-section comes from gluino production, as the gluinos are both relatively light and strongly interacting. Since for our benchmark points the sfermions are significantly heavier than the gauginos, production processes involving the squarks are suppressed relative to those only involving gluinos. Weak gaugino pair production also makes a large contribution to the total cross-section. Since $\chi_2^0$ is wino-like, the cross-sections for $\chi_2^0 \chi_1^{\pm}$ production are much higher than for the same process with $\chi_2^0$ replaced by the bino-like $\chi_1^0$. Di-chargino production, with a cross-section of 1.32 (1.39)~pb for PGM1a (PGM1b), also makes an important contribution. All other cross-sections, such as $pp\to \tilde{g} \tilde{q}$, also shown in Table~\ref{tab:xsections}, are nearly two orders of magnitude smaller than these. We have also investigated all the other possibilities, $pp\rightarrow \chi_i^0 \chi_j^{\pm}$, in this family of processes.
The Higgsino nature of $\chi_{3,4}^0$ and $\chi_2^{\pm}$ means that production of these particles is negligible. Even though the lightest neutralino has $m_{\chi_1^0}=67$\mbox{\,GeV\,}, it is not directly produced in any great numbers. Of course, these features will change in regions where the hierarchy between the sfermions and gauginos is less pronounced, and also when the centre of mass energy is raised from 7 to $14$~TeV. The decays of the lightest chargino are dominated by $\chi_1^{+} \to \chi_1^0 q_{u} \bar{q}_{d}$, which occurs 69\% (70\%) of the time. The rest of the branching ratio is taken up by $\chi_1^{+} \to \chi_1^0 l^+ \nu_l$, where $l= (e,\mu,\tau)$, with the tau-component taking a somewhat larger share of 19\% (20\%). The wino-like neutralino $\chi^{0}_{2}$ decays predominantly to $\chi_1^0 q \bar{q}$, 71\% (87\%), and to $\chi_1^0 \tau^+ \tau^-$, 23\% (5\%), with the remaining channels being a combination of $\chi_1^0 l_{e,\mu}^+ l_{e,\mu}^-$ and $\chi_1^0 \nu \bar{\nu}$. The upshot of this analysis is that in both $\chi_1^{+}\chi_1^{-}$ and $\chi^{\pm}_1\chi_1^0$ production the standard $4j$+MET analysis should be useful for probing supersymmetry this year. One might wonder about the existing strong constraints on NLSP neutralino and chargino masses from the Tevatron (see \cite{Meade:2009qv} for an overview). The strongest constraints, resulting in a high lower bound on neutralino and chargino masses, originate from two potential signals. The first one is a di-photon signature studied most recently in Refs.~\cite{Aaltonen:2009tp,Abazov:2010us}. In this case one considers production and subsequent decay $p\bar{p}\rightarrow \chi^{+}_{1}\chi^{-}_{1}\rightarrow 2\chi^{0}_{1} +\ldots\rightarrow2\gamma+2\tilde{G}+\ldots$ or $p\bar{p}\rightarrow \chi^{0}_{2}\chi^{\pm}_{1}\rightarrow 2\chi^{0}_{1} +\ldots\rightarrow2\gamma+2\tilde{G}+\ldots$. The (unobserved) signal is two photons plus missing transverse energy. However, for such bounds to hold the last decay stage of a neutralino NLSP into a photon and a gravitino must happen promptly (at the very least inside the detector). In general prompt NLSP decays occur only for sufficiently low messenger masses. As we will see in more detail in Sect.~\ref{sec:NLSP} (see Fig.~\ref{fig:nessie-decaylength}), the NLSP decays happen well outside the detector for our benchmark points PGM1a,b. The second signature analysed at the Tevatron is a tri-lepton signal. The production would follow from $p\bar{p}\rightarrow \chi^{0}_{2}\chi^{\pm}_{1}\rightarrow 2\chi^{0}_{1} +\ell \bar{\ell}+\ell^{\prime}\nu$. This signal has been analysed in Refs.~\cite{Abazov:2009zi,Forrest:2009gm} in the context of mSUGRA with a low value of $\tan\beta=3$, setting a new lower limit on chargino masses of 164~GeV. However, the value of this limit depends quite strongly on the choice made for $\tan\beta$ as well as on other model dependent considerations. Therefore it will be different in gauge mediation. In particular all our predictions obtained in the pure GGM setup have much higher values of $\tan\beta$. This increases the branching fraction to $\tau$s, which are more difficult to reconstruct. Overall the branching ratios to leptons are quite small in our scenarios, as can be seen from the red and blue segments of the outer circles in Fig.~\ref{fig:pies}. This makes the current constraints inconclusive for the pure GGM predictions analyzed here. This, of course, could be changed by an analysis of (existing) larger sets of Tevatron data, which would be very interesting.
\begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Benchmark Point & $\sigma_{pp\rightarrow\tilde{g}\tilde{g}}$ & $\sigma_{pp\rightarrow \chi_2^0 \chi_1^{\pm}}$ & $\sigma_{pp\rightarrow \chi_1^{+} \chi_1^{-}}$ & $\sigma_{pp\rightarrow\tilde{g}\tilde{q}}$ \\ \hline PGM1a & 4090 & 2682 & 1320 & 18.9 \\ \hline PGM1b & 4340 & 2835 & 1390 & 58.7 \\ \hline \end{tabular} \vspace{0.3cm} \caption{Production rates for the most important processes for the two benchmark points under consideration at the LHC with $\sqrt{s}=7$~TeV. All cross-sections are in femtobarns.} \label{tab:xsections} \end{center} \end{table} \noindent We now focus on $pp\rightarrow \tilde{g}\tilde{g}$ and discuss the main decay avenues to the final states including NLSPs. It can be seen from this analysis that the gluino decays dominantly into a chargino plus a quark and an antiquark. Subsequently the chargino decays into a neutralino plus either a quark and an antiquark, or a lepton and a neutrino, as discussed above. An alternative interesting channel is that each gluino decays directly into a neutralino and a quark-antiquark pair. In all of these processes the two gluinos will decay into a total of 4 or more coloured particles and two neutralinos (plus leptons in some cases). In Fig.~\ref{fig:pies} the branching ratios of the gluino and the daughter sparticle decays are represented graphically, with the PGM1a benchmark point shown on the left panel and the PGM1b point on the right. Decay chains with branching ratios of less than 5\% are not shown\footnote{For example, for PGM1a the $\chi_2^0$ decays 23\% of the time into $\tau$'s but less than 4\% into other leptons, and thus the latter are not shown on the left panel in Fig.~\ref{fig:pies}. For PGM1b the $\chi_2^0$ decays 5\% into $\tau$'s and nearly 4\% into other leptons; these two contributions are combined and collectively called ``leptons'' on the right panel in Fig.~\ref{fig:pies}.}. \begin{figure} \begin{center} \includegraphics[bb= 182 0 560 600,width=3.2cm]{PGM1a.eps} \hspace*{5cm} \includegraphics[bb= 182 0 560 600,width=4.0cm]{PGM1b.eps} \vspace*{-0.4cm} \end{center} \begin{center} \caption{Pie charts giving a rough impression of the gluino decay chains/branching ratios, with the PGM1a benchmark point on the left panel and PGM1b on the right. In the first step the gluino decays into the products depicted in the inner ring; in the next step the daughter sparticle decays into the products given in the outer ring (for simplicity we only write down the additional decay products for this last decay). We do not display those chains with a branching ratio less than 5\%.} \label{fig:pies} \end{center} \end{figure} The full set of branching ratios (as well as the spectra in SLHA format) for these benchmark points can be found at {\vspace{-0.5cm} \centering{ \href{http://www.ippp.dur.ac.uk/~SUSY}{\bf http://www.ippp.dur.ac.uk/$\sim$SUSY} }\\\vspace{0.4cm}} In the following section we will present a more general overview of the NLSP phenomenology. We shall then perform a complementary analysis in regions of the parameter space where the NLSP is a stau or a light slepton, or where there are co-NLSPs (in practice these are areas where the stau and neutralino are nearly degenerate in mass). Again we focus on areas that may be relevant to the early LHC searches. \section{Survey of NLSP phenomenology} \label{sec:NLSP} In gauge mediated models the Lightest Supersymmetric Particle (LSP) is always the gravitino \cite{Giudice:1998bp}.
There is much interest therefore in the phenomenology of the {\em Next-to}-LSP (NLSP), as this is the metastable state into which any produced superpartner will decay before ultimately decaying to the gravitino. It is therefore instructive to map out the NLSP phenomenology in the whole $\L_G$, $\L_S$ parameter space, and to describe in more detail some of the top-down models that correspond to the different regions. For the assumptions we outlined above, the NLSP is either a slepton or a neutralino. The NLSP phenomenology is of great interest for two reasons~\cite{Giudice:1998bp}. First, it is typically very long lived -- its decay to the gravitino is suppressed: $\Gamma \propto \, m_{NLSP}^5/F_0^2$, where $m_{NLSP}$ is its mass and $F_0$ is the intrinsic scale of supersymmetry breaking in the hidden sector (i.e. the potential is $\langle V\rangle =F_0^2$). Typically, depending on how the SUSY breaking encoded by $F_0$ is mediated, $\Gamma$ represents many orders of magnitude of suppression. If it is sufficiently long lived the NLSP will exit the detector as missing energy, or leave a muon-like track if it is charged (e.g. if it is a stau). Second, for certain values of the parameters (which we discuss presently) the particle can decay inside the detector, possibly allowing one to resolve a displaced decay vertex. Moreover such a measurement would give direct information about the SUSY breaking in the hidden sector, $F_0$, rather than that seen in the visible sector, which depends heavily on the particular type of (gauge) mediation. \begin{figure} \begin{center} \vspace*{-0.6cm} \subfigure[]{ \includegraphics[bb= 142 75 500 400,clip,width=6.5cm]{Mmess8_NLSPidentity.eps} } \end{center} \vspace*{-1.2cm} \begin{center} \subfigure[]{ \includegraphics[bb= 142 75 500 400,clip,width=6.5cm]{Mmess10_NLSPidentity.eps} } \end{center} \vspace*{-1.2cm} \begin{center} \subfigure[]{ \includegraphics[bb= 142 75 500 400,clip,width=6.5cm]{Mmess14_NLSPidentity.eps} } \end{center} \begin{center} \caption{The NLSP regions in the $\Lambda_G$,~$\Lambda_S$ parameter space for $M_{mess}=10^{8}\mbox{\,GeV\,}$ (top figure), $M_{mess}=10^{10}\mbox{\,GeV\,}$ (middle figure) and $M_{mess}=10^{14}\mbox{\,GeV\,}$ (bottom figure). The NLSP is $\chi_1^0$ in the green region, $\chi_1^0 /\tilde{\tau}$ co-NLSP in the red region and $\tilde{\tau}$ in the blue region.} \label{fig:nessie-nlsp} \end{center} \end{figure} The compositions of the NLSP in different regions of parameter space are shown in Figure~\ref{fig:nessie-nlsp}, again one at $M_{mess}=10^{8}\mbox{\,GeV\,}$ (top), one at $M_{mess}=10^{10}\mbox{\,GeV\,}$ (middle) and one at $M_{mess}=10^{14}\mbox{\,GeV\,}$ (bottom). In each figure stop mass contours of 500\mbox{\,GeV\,} and 1\mbox{\,TeV\,} are again indicated as dotted lines, and the 500\mbox{\,GeV\,} gluino contour is indicated as a solid line. We have indicated three different NLSP regions on the figures, each giving quite distinct experimental signatures: \begin{itemize} \item Neutralino NLSP (Marked in green): no ionization track and either missing energy or a displaced vertex, with decay predominantly to a photon ($\chi^0_1 \rightarrow \tilde{G} \gamma$) or jet/lepton pairs ($ \chi^0_1 \rightarrow \tilde{G} Z\rightarrow \tilde{G} +jets/l{\bar l}$). \item Stau NLSP (Marked in blue): ionization track plus a possible displaced vertex, with decay predominantly to jets ($ \tilde{\tau}_R \rightarrow \tilde{G} \tau\rightarrow \tilde{G}\nu_\tau +jets/l'{\bar l}$).
\item Neutralino/stau co-NLSP (Marked in red): if the mass difference between the neutralino and the stau is less than $m_\tau$, then the NNLSP is unable to decay to the NLSP, and each component behaves effectively as a separate NLSP. One expects a mix of the previous two cases. \end{itemize} We can treat the decay length of the NLSP as follows. First consider the decays: they go through the interaction term, which for on-shell particles is~\cite{Giudice:1998bp} \begin{equation} \label{int} {\cal L} = \frac{1}{F_0} \left( (m_f^2-m_{\tilde{f}}^2) \bar{f}_L \tilde{f} + \frac{M_{\tilde{\lambda}_i}}{4\sqrt{2}} \bar{\tilde{\lambda}}_i \sigma^{\mu\nu}F^i_{\mu\nu} \right) \tilde{G} + h.c. \end{equation} where $\tilde{G}$ is the Goldstino and, as we have already stated, $F_0$ is the absolute scale of supersymmetry breaking. The decay length derived from Eq.\eqref{int} is given by \begin{equation} \label{f0-eq} L_{decay} = \frac{1}{\kappa} \left( \frac{100\mbox{\,GeV\,} }{m_{NLSP}} \right)^5 \left( \frac{F_0 }{(100\mbox{\,TeV\,})^2 } \right)^2 0.1\, {\rm mm} \end{equation} where the factor $\kappa$ is a calculable number depending on the mixing in the NLSP, and is of order unity (precisely unity for the stau in fact). The interesting case is when the decay takes place inside the detector, which conservatively requires $L_{decay}< 10$\,m. For NLSP masses less than 500\mbox{\,GeV\,}, this translates into \begin{equation} \label{eq-fbound} \sqrt{F_0} \lesssim 10^4 \mbox{\,TeV\,} \, . \end{equation} Thus $F_0$ will be at the lower end of the possible range. In order to get more precise information we need to consider the relation between $F_0$ and $\Lambda_G$ or $\Lambda_S$. This is very model dependent, but simplifies if we take there to be only one source of supersymmetry breaking (i.e. one potential Goldstino) and one dominant source of mediation for gauginos or scalars. Under this assumption the relation between the $\Lambda$'s and $F_0$ can be expressed with two parameters $k_G$ and $k_S$ as \begin{equation} \label{eqn:10} \Lambda_G = k_G F_0/M_{mess} \,\, ; \, \, \Lambda_S = k_S F_0/M_{mess} \, . \end{equation} In GGM, $k_G$ and $k_S$ are independent parameters which encode the difference between the gaugino and scalar mass scales $\L_G$ and $\L_S$. In ordinary gauge mediation, $k_G=k_S$, and this corresponds to a simple one-scale special case of GGM. In general, as will be reviewed shortly, the range of values for $k_G$ and $k_S$ is highly model-dependent. In order to present model-independent information it is useful to express $F_0$ with reference to $\Lambda_G$: i.e. we replace $F_0 = k_G^{-1}\L_G M_{mess}$. The decay length $L_{decay}$ derived from Eq.\eqref{int} then becomes \begin{equation} \label{eq-withk} {k_G^{2}} L_{decay} = \frac{1}{\kappa} \left( \frac{100\mbox{\,GeV\,} }{m_{NLSP}} \right)^5 \left( \frac{\sqrt{\Lambda_G M_{mess}} }{100\mbox{\,TeV\,} } \right)^4 0.1\, {\rm mm} \end{equation} and we plot contours of ${k_G^{2}} L_{decay}$. The reason that this is a most useful parameterization is that in the regions where $\Lambda_G>\Lambda_S$ the NLSP is mainly a slepton, as can be seen from Fig.~\ref{fig:nessie-nlsp}, and its mass is dominated by renormalization group contributions from the gauginos (except when $\Lambda_G/\Lambda_S \sim \mathcal{O}(1-10)$). Thus $m_{NLSP}$ is mainly a function of $\L_G$ (just as the stop mass is in fact). On the other hand in the regions where $\Lambda_G<\Lambda_S$ the NLSP is mainly a bino-like neutralino, and again its mass is expected to be dominated by $\L_G$. Hence the RHS of Eq.~\eqref{eq-withk} is predominantly a function of $\Lambda_G$.
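To make the size of these decay lengths concrete, Eq.~\eqref{f0-eq} is simple enough to evaluate directly. The following Python snippet is a minimal sketch, assuming $\kappa=1$ (exact for the stau, of order unity otherwise); the function and its inputs are illustrative rather than part of our analysis code.
\begin{verbatim}
# Decay length of the NLSP from the formula above, with kappa = 1.
def decay_length_mm(m_nlsp_gev, sqrt_f0_tev, kappa=1.0):
    """L = (1/kappa) (100 GeV/m)^5 (F0/(100 TeV)^2)^2 * 0.1 mm."""
    f0 = sqrt_f0_tev ** 2              # F0 in TeV^2
    return ((1.0 / kappa) * (100.0 / m_nlsp_gev) ** 5
            * (f0 / 1.0e4) ** 2 * 0.1)

# A 500 GeV NLSP with sqrt(F0) = 10^4 TeV gives L of about 3 m,
# i.e. decay inside the detector, as in the bound quoted above.
print(decay_length_mm(500.0, 1.0e4) / 1000.0, "m")
\end{verbatim}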
We show the results for the decay lengths $\log_{10}(k_G^2 L_{decay})$ in Figure~\ref{fig:nessie-decaylength} for the three values of the messenger mass. We see that the contours follow a vertical, then horizontal, then again vertical pattern, which we now explain. Starting at the top of the figures, when $\Lambda_S$ is large the NLSP is the neutralino, and the decay length does not change with decreasing $\Lambda_S$ as both $m_{NLSP}$ and $\Lambda_G$ are constant. When the NLSP species changes from the neutralino to the lightest stau, there is a kink in the contour. This is partly due to the change in $\kappa$, and also to the change in the behaviour of the NLSP mass with $\Lambda_G$ and $\Lambda_S$. In this regime the stau mass is dominated by $\Lambda_S$ and, although $k_G^2 L_{decay}$ is proportional to $\Lambda_G^2$, the factor of $1/m_{\tilde{\tau}}^5$ means that $k_G^2 L_{decay}$ is proportional to $1/\Lambda_S^5$. When these two parameters are of the same order of magnitude the contour thus appears flat in $\Lambda_S$. Finally, when $\Lambda_G / \Lambda_S \sim 10$ the stau mass begins to be dominated by $\Lambda_G$, being generated mostly through RG running, and so the contour is again approximated by a line of constant $\Lambda_G$. \begin{figure} \begin{center} \vspace*{-0.6cm} \subfigure[]{ \includegraphics[bb= 142 75 500 400,clip,width=6.5cm]{Mmess8_DecayLength.eps} } \end{center} \vspace*{-1.2cm} \begin{center} \subfigure[]{ \includegraphics[bb= 142 75 500 400,clip,width=6.5cm]{Mmess10_DecayLength.eps} } \end{center} \vspace*{-1.2cm} \begin{center} \subfigure[]{ \includegraphics[bb= 142 75 500 400,clip,width=6.5cm]{Mmess14_DecayLength.eps} } \end{center} \begin{center} \caption{ This figure shows the logarithm of the decay length in meters of the NLSP, $\log_{10}(k_G^2 L_{decay})$, for $M_{mess}= 1\times 10^{8}$~GeV (top), $M_{mess}= 1\times 10^{10}$~GeV (middle) and $M_{mess}= 1\times 10^{14}$~GeV (bottom), as well as contours for each case.} \label{fig:nessie-decaylength} \end{center} \end{figure} It is instructive now to consider the values of $k_G$ that one expects in various different top-down scenarios, in order to see whether decays inside the detector are a possibility: \begin{itemize} \item{Ordinary mediation}: Here one has only one messenger, $\Lambda_G=\L_S$, and $k_G$ is the coupling of the messenger to the SUSY-breaking $F$-term. Typically one takes $k_G\sim 1$. In this case Figure~\ref{fig:nessie-decaylength} gives directly the decay lengths of the NLSP. Evidently low messenger scales are required for decay inside the detector. For $M_{mess}=10^8\mbox{\,GeV\,}$ decays can happen inside or outside the detector, depending on the region of parameter space. Comparing Fig.~\ref{fig:nessie-decaylength} with Fig.~\ref{nessie} we see that decay inside the detector happens when $m_{\tilde{g}} \geq 1 $\mbox{\,TeV\,}. Intermediate scales $M_{mess}=10^{10}\mbox{\,GeV\,}$ would require high values of $\Lambda_G,\,\L_S$, which lead to very high masses outside the early discovery region. \item{Suppressed ordinary gauge mediation}: Ref.~\cite{Murayama:2006yf} presented a simple scheme for gauge mediation in which a single messenger field was coupled to a metastable SUSY-breaking sector of the type introduced in Ref.~\cite{Intriligator:2006dd}.
In these models the Goldstino superfield is a composite particle (a ``meson'') and hence the effective coupling to the messenger fields is suppressed by a factor $k_G\sim k_S \sim \frac{\Lambda_{comp}}{M_X} \ll 1$, where $M_X$ is some high fundamental scale which might be $M_{\rm Pl}$, and $\Lambda_{comp}$ is the scale of compositeness. The general expectation is that $k_G,~k_S\ll 1$, and indeed phenomenological viability demands it. For example the values chosen in Ref.~\cite{Murayama:2006yf} give $k_G,~k_S \sim 10^{-7}$. Hence decay inside the detector (or indeed the Solar system) is clearly impossible for any values of $M_{mess}$ or $\Lambda_G$,~$\Lambda_S$. \item{Mildly split spectrum}: phenomenology of the ``mildly split'' variety (in which the low energy spectrum is the Standard Model plus only the fermionic superpartners) was found in the direct gauge mediation models analysed in Refs.~\cite{Abel:2007jx,Abel:2007nr,Abel:2008gv}. This type of phenomenology is in fact characteristic of models that have no {\em tree-level} metastability, due to a theorem by Komargodski and Shih \cite{Komargodski:2009jf} stating that tree-level gaugino masses are equivalent to the existence of some point in moduli space where there is a tachyon. These models have $\Lambda_G\ll \L_S\lesssim F_0$ and hence correspond to $k_S\sim 1$ and $k_G\sim 10^{-(1-2)}$. Since the NLSP mass is governed by $\Lambda_G$, viable phenomenology requires larger values of $F_0$ and a commensurately slower NLSP decay. For example low messenger scales $M_{mess}=10^8\mbox{\,GeV\,}$ can just give decay within the detector, whereas already intermediate scales $M_{mess}=10^{10}\mbox{\,GeV\,}$ do not allow decay within the detector at all. \item{Many messenger/strong coupling limit}: In ordinary gauge mediation, the gaugino mass scale $\Lambda_G$ is proportional to the number of messengers $N_{mess}$, while the mediated squark mass scale $\Lambda_S$ is proportional to $\sqrt{N_{mess}}$. Thus in the ``many effective messengers'' limit we access the $\Lambda_G\gg \Lambda_S$ region of the parameter space, and moreover we can effectively have $k_G\gg 1$, so the NLSP decays more rapidly. For example when $k_G\sim 30$ even intermediate messenger masses, $M_{mess}=10^{10}\mbox{\,GeV\,}$, allow NLSP decays to take place within the detector with reasonably low masses for the coloured sparticles (i.e. below 1~TeV). Of course naively adding many messengers leads to strong coupling in the visible sector: as discussed in the introduction, calculable models in this region require some mechanism to screen the scalar mass contribution. \end{itemize} To summarise the discussion arising from Fig.~\ref{fig:nessie-decaylength}: in most cases NLSP decay happens well outside the detector. Decays inside the detector are only possible for relatively low messenger masses, high SUSY breaking scales and/or quite strong coupling $k_{G}$ to the hidden sector. \subsection{Stau and co-NLSP Benchmark points} \label{sec:PGM23} As can be seen from Figure~\ref{fig:nessie-nlsp}, the stau NLSP and co-NLSP regions both have $m_{\tilde{g}}>500\mbox{\,GeV\,}$ and $m_{\tilde{q}}>500\mbox{\,GeV\,}$. The low-mass parts of these regions are also disfavoured according to the analysis of supersymmetric contributions to Standard Model observables in~\cite{Abel:2009ve}. Accordingly the production cross-sections in these cases are lower than for the neutralino NLSP.
However, in the stau NLSP case (and also possibly in the co-NLSP scenario) with higher messenger scales the stau is stable on collider length- and time-scales. The signatures from such charged massive metastable particles (CHAMPS) are unique enough that early SUSY discovery may be feasible even with the smaller cross-sections in this scenario. We have therefore selected two benchmark points, PGM2 with a stau NLSP and PGM3 with a stau-neutralino co-NLSP (both shown on the bottom panel in Fig.~\ref{nessie}), and performed a preliminary analysis of their phenomenology. In addition we have for completeness chosen a fifth benchmark point, PGM4, at low messenger scales in the $\L_G\gg\L_S$ region (with $\L_G=3.4\times 10^5$\mbox{\,GeV\,} and $\L_S= 10^4$\mbox{\,GeV\,}). As for the light gluino points, the SLHA files are available on the pure GGM website mentioned above. The spectra are shown in Table~\ref{tab:sp2}. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|} \hline Benchmark point& PGM2& PGM3 & PGM4\\\hline \hline $M_{mess}$~(GeV)& $10^{14}$ & $10^{14}$ & $10^8$ \\ \hline\hline $\L_G$~(GeV) & $1.2\times10^5$ & $1.2\times10^5$ & $3.4\times10^5$ \\ \hline $\L_S$~(GeV) & $1.6\times10^4$ & $4.76\times10^4$ & $10^4$ \\ \hline $\tan\beta$ & 19.0 & 20.5 & 34.4 \\ \hline\hline $\chi_1^0$ & 156 & {\bf 157} & 456 \\ \hline $\chi_2^0$ & 292 & 296 & 723 \\ \hline $\chi_3^0$ & 461 & 489 & 743 \\ \hline $\chi_4^0$ & 479 & 504 & 897 \\ \hline $\chi_1^{\pm}$ & 291 & 295 & 720 \\ \hline $\chi_2^{\pm}$ & 480 & 505 & 898 \\ \hline $\tilde{g}$ & 879 & 887 & 2239 \\ \hline\hline $\tilde{e}_L,\tilde{\mu}_L$ & 246 & 305 & 406 \\ \hline $\tilde{e}_R,\tilde{\mu}_R$ & 129 & 182 & 163\\ \hline $\tilde{\tau}_1$ & {\bf 100} & {\bf 157} & {\bf 110} \\ \hline $\tilde{\tau}_2$ & 254 & 310 & 423\\ \hline $\tilde{\nu}_{1,2}$ & 234 & 296 & 401 \\ \hline $\tilde{\nu}_3$ & 232 & 293 & 401 \\ \hline\hline $\tilde{t}_1$ & 618 & 650 & 1459\\ \hline $\tilde{t}_2$ & 786 & 823 & 1601\\ \hline $\tilde{b}_1$ & 726 & 769 & 1557\\ \hline $\tilde{b}_2$ & 761 & 802 & 1596\\ \hline $\tilde{u}_L,\tilde{c}_L$ & 804 & 860 & 1682\\ \hline $\tilde{u}_R,\tilde{c}_R$ & 766 & 810 & 1621\\ \hline $\tilde{d}_L,\tilde{s}_L$ & 795 & 850 & 1658\\ \hline $\tilde{d}_R,\tilde{s}_R$ & 765 & 805 & 1621\\ \hline\hline $h_0$ & 113.3 & 113.4 & 118\\ \hline $A_0, H_0$ & 493 & 539 & 781 \\ \hline $H^{\pm}$ & 499 & 545 & 785\\ \hline\hline \end{tabular} \end{center} \begin{center} \caption{ Spectra for three benchmark points with stau NLSP. PGM2 has slepton NNLSP and a high messenger scale and PGM3 has stau-neutralino co-NLSP also at a high messenger scale. PGM4 is at low messenger scale with slepton NNLSP. All masses are in GeV. The NLSP is shown in bold in each case. These spectra and all other relevant details can be obtained in SLHA format at \href{http://www.ippp.dur.ac.uk/~SUSY}{\bf http://www.ippp.dur.ac.uk/$\sim$SUSY}} \label{tab:sp2} \end{center} \end{table} Let us first consider the stau NLSP case, PGM2. Due to the constraint from the Higgs mass, it is not possible to have very light squarks in this case. The point we have chosen has $\Lambda_G=1.2\times10^5$ and $\Lambda_S = 1.6\times10^4$, which corresponds to a moderately large value of $\tan\beta = 19$. The squark masses for this benchmark point are in the range $750-800\mbox{\,GeV\,}$, while the mass of the lightest stop is 618\mbox{\,GeV\,}. The gluino mass is slightly heavier at 880\mbox{\,GeV\,}.
The lightest stau mass is 100\mbox{\,GeV\,}, just above the bound from direct searches, and the lightest neutralino mass is 156\mbox{\,GeV\,}. The stau-smuon splitting is 29\mbox{\,GeV\,}. We now turn to the production cross-sections for this point. As the gluino mass in PGM2 is nearly double that of the neutralino NLSP points PGM1a and PGM1b, the $pp\to\tilde{g}\tilde{g}$ cross-section is much smaller. The processes with the largest production cross-sections for the stau NLSP benchmark point PGM2 are shown in Table~\ref{tab:stau-xsections} in femtobarns. While the squark production cross-sections are higher than for the PGM1 scenarios, for this point the total number of SUSY events will be about 600, when one includes the processes with smaller contributions. While we have not performed a detailed simulation, the PGM2 point should just be within the range of discovery of the ATLAS detector in the first year of operation~\cite{Raklev:2009mg}. In the stau NLSP scenario one does not expect any missing $E_T$, since the pair-produced staus will turn up in the calorimeters at the end of the SUSY cascade. From the strong production channels $pp\to\tilde{g}\tilde{g}$ and $pp\to\tilde{g}\tilde{q}$ we expect $\geq 2$ jets plus two muon-like objects. In addition we also have significant $\tilde{\tau}$ pair production, which should just give two muon-like objects. Together these channels should provide good chances for early SUSY discovery in these scenarios. Finally, single production of neutralinos and charginos in conjunction with a gluino or a squark is negligible. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Benchmark Point & $\sigma_{pp\rightarrow\tilde{g}\tilde{g}}$ & $\sigma_{pp\rightarrow\tilde{q}\tilde{q}}$ & $\sigma_{pp\rightarrow\tilde{g}\tilde{q}}$ & $\sigma_{pp\rightarrow\tilde{q}\bar{\tilde{q}}}$ & $\sigma_{pp\rightarrow \tilde{\tau}_i \tilde{\tau}_j}$ & $\sigma_{pp\rightarrow \chi_2^0 \chi_1^{\pm}}$ \\ \hline PGM2 & 17 & 190 & 164 & 54 & 91 & 49 \\ \hline PGM3 & 16 & 133 & 128 & 34 & 17 & 50 \\ \hline \end{tabular} \end{center} \begin{center} \caption{This table shows the production rates for the most important processes for the stau (PGM2) and co-NLSP (PGM3) benchmark points at the LHC with $\sqrt{s}=7$~TeV. All cross-sections are in femtobarns.} \label{tab:stau-xsections} \end{center} \end{table} Next, we discuss the possibility of a stau-neutralino co-NLSP. If we were to decrease $\Lambda_G$ very much, this would lead to an unacceptable decrease in the Higgs mass. Therefore we must increase $\Lambda_S$ in order to achieve $m_{\tilde{\tau}} \sim m_{\chi_1^0}$. The co-NLSP point PGM3 has $\Lambda_G=1.2\times 10^5$, $\Lambda_S = 4.76\times 10^4$ and $\tan\beta=20.5$. The point we have selected has $m_{\tilde{\tau}_1}=157\mbox{\,GeV\,}$ and $m_{\chi_1^0} = 157\mbox{\,GeV\,}$, with the neutralino marginally heavier than the stau. As the scalar mass parameter $\Lambda_S$ has increased somewhat, the squark masses at this point are heavier by around 50\mbox{\,GeV\,} compared with the stau NLSP point. The slepton masses are also higher, and the light smuon and selectron masses are 182\mbox{\,GeV\,}. The production cross-sections are broadly similar to the stau NLSP case, but somewhat smaller due to the higher masses and the more compressed spectrum in this case. Finally we discuss the stau NLSP point in the many messenger limit, PGM4. This point has the interesting feature that the lightest neutralino is heavier than all the sleptons and sneutrinos.
The phenomenology of this scenario has been explored in~\cite{DeSimone:2008gm,DeSimone:2009ws}, and includes the presence of many leptons from decay chains leading to the NLSP. It is not possible in PGGM to achieve low enough coloured sparticle masses to have large gluino and squark production cross-sections. The reason for this is as follows. The scalar masses in this gaugino mediated region are generated predominantly by RG running, and take the form \begin{equation} \delta m^2_{\tilde{f}} \sim \frac{\alpha}{4\pi} \Lambda_G^2 \end{equation} where a summation over the gauge groups is implied. The main constraint on the value of $\Lambda_G$ in the gaugino mediated region comes from the direct search limits, specifically the constraint on the mass of the stau. The staus are only weakly interacting, and thus require relatively large values of $\Lambda_G$ to evade the direct search constraints. This large $\Lambda_G$ is what causes the strongly interacting sparticles to have such large masses. In the full GGM parameter space with three independent gaugino masses one could increase coloured sparticle production by keeping $\Lambda_G^{1,2}$ fixed and decreasing $\Lambda_G^3$. This would leave the slepton, neutralino and chargino masses fixed while decreasing the squark and gluino masses. Accordingly sparticle production at PGM4 at LHC7 is mostly due to direct production of the stau NLSP, with a cross-section of 62~fb. Almost all the produced staus are the NLSP (the cross-section into these being 61.8~fb). Thus the leptogenic signals due to heavy stau or neutralino decay described in Ref.~\cite{DeSimone:2009ws} will not be a feature of the LHC at 7~TeV in the pure GGM scenario, and will only appear at higher energies. The main signal in this region for the moment will be an excess of di-muon events, and possibly the displaced vertex signals of NLSP decay inside the detector. \section{Conclusions} We have made a survey of the phenomenology of pure General Gauge Mediation -- i.e. in which the $B_{\mu}$ parameter is generated radiatively -- with a particular emphasis on its testability in early LHC searches (at 7~TeV). Five benchmark points were presented: two corresponding to light gluino regions ($m_{\tilde{g}}\lesssim 500$~GeV with a bino-like neutralino NLSP), two to a stau NLSP and one to a stau/neutralino co-NLSP. These benchmark points are representative of the different phenomenology that can occur in the regions of parameter space. We presented a preliminary analysis of the spectrum, production cross sections and branching ratios, which suggests that all of these points can be discovered in the first year of LHC running with appropriate selection cuts. The full set of data in SLHA format for these benchmark points can be found at { \centering{ \href{http://www.ippp.dur.ac.uk/~SUSY}{\bf http://www.ippp.dur.ac.uk/$\sim$SUSY} }\\\vspace{0.5cm}} \noindent We also surveyed and discussed NLSP phenomenology in this set-up, focussing on the possibility of NLSP decays inside the detector in various different schemes of SUSY breaking. Pure GGM with medium to low messenger masses ($10^{6-10}$~GeV) can give detectable decays with displaced vertices inside the detector, and hence direct knowledge of the fundamental scale of SUSY breaking. \subsection*{Acknowledgements} We thank Yuri Gershtein and Zohar Komargodski for interesting discussions. MJD thanks St John's College, the CET and EPSRC for financial support.
SAA and VVK are in receipt of Leverhulme Research Fellowships.
\section{Introduction} \label{sec:introduction} Structural T1-weighted (T1w) magnetic resonance imaging (MRI) is useful for the diagnosis of various brain disorders, in particular neurodegenerative diseases \citep{frisoni_clinical_2010, harper2016mri}. Such images have thus often been used as inputs of machine learning (ML) algorithms for computer-aided diagnosis (CAD) \citep{falahati2014multivariate, koikkalainen2016differential, rathore2017review, burgos2020machine}. Most ML methods are trained and validated on high-quality research data \citep{noor2019detecting,choi2019deep,punjabi2019neuroimaging}: protocols for image acquisition are standardized and a strict quality control is applied \citep{jack2008alzheimer,littlejohns2020uk}. However, to be applied in the clinic, ML methods need to be validated on clinical routine images. In recent years, hospitals have constituted clinical data warehouses that can contain medical images from 100,000 to 1,000,000 patients \citep{daniel2020hospital,amara2020design}. The quality of such images can vary greatly (see Figure~\ref{fig:labelbrain}), since the acquisition protocols are not standardized, scanners may not be recent and patients may have moved during the acquisition. All these factors can prevent algorithms from working properly \citep{reuter2015head,gilmore2019variations}. Quality control (QC) is thus a fundamental step before training and evaluating ML approaches on clinical routine data. \begin{figure}[!t] \begin{center} \includegraphics[width=1\linewidth]{QC_figures/label_brain.png} \caption{Examples of T1w brain images from the clinical data warehouse and the corresponding labels. A1: Image of good quality (tier 1), without gadolinium; A2: Good quality (tier 1), with gadolinium; B1: Medium quality (tier 2), without gadolinium (noise grade 1); B2: Medium quality (tier 2), with gadolinium (contrast grade 1); C1: Bad quality (tier 3), without gadolinium (contrast grade 2, motion grade 2); C2: Bad quality (tier 3), with gadolinium (contrast grade 2, motion grade 1); D1: Straight rejection (segmented); D2: Straight rejection (cropped).} \label{fig:labelbrain} \end{center} \end{figure} Manual QC takes time and is thus not always feasible, especially in the context of ML-based CAD, where a large number of training samples is needed. Typically, clinical data warehouses can contain hundreds of thousands of samples. Even if web-based systems facilitate annotation \citep{kim2019loni,keshavan2018mindcontrol}, the task remains unfeasible for very large datasets. In this context, automatic QC is needed. Several works have been proposed to enable automatic QC. The Preprocessed Connectomes Project developed a Quality Assessment Protocol\footnote{\url{http://preprocessed-connectomes-project.org/quality-assessment-protocol}}. The package enables the extraction of several image quality metrics (IQMs), such as the signal-to-noise ratio, the contrast-to-noise ratio or the volume of the gray and white matter. IQMs are then compared to a normative distribution obtained from three research datasets, ABIDE \citep{di2014autism}, CoRR\footnote{\url{http://fcon_1000.projects.nitrc.org/indi/CoRR/html/index.html}} and NFB\footnote{\url{http://fcon_1000.projects.nitrc.org/indi/enhanced/}}. Approaches in the same spirit \citep{esteban2017mriqc,alfaro2018image,raamana2020visual} propose to use the IQMs as input of a classifier for automatic QC.
\cite{esteban2017mriqc} and \cite{alfaro2018image} developed pipelines for the automatic QC of 3D brain T1w MRI; the former has the advantage of being open-source software (called MRIQC). \cite{raamana2020visual} developed another open-source software, called VisualQC, whose aim is the visualisation and rating of the FreeSurfer cortical segmentation output. The pipelines proposed by these works are very extensive, as they require registration and segmentation steps to extract features. It cannot be assumed a priori that these steps will perform well on a new, unseen clinical dataset. On the contrary, it is likely that the segmentation will fail for the lowest-quality images, thus making it impossible to apply the QC tool. Moreover, the extracted features may not be representative of the problems affecting clinical routine data. As proposed by \cite{sujit2019automated}, convolutional neural networks (CNNs) are a good option for automatic QC because they can learn features without knowing a priori which are the most adapted. A further limitation of these works is that they rely on images acquired following a well-defined research protocol. The pipeline presented in \citep{alfaro2018image} was developed for the large, but well-standardized, UK Biobank dataset containing mostly healthy volunteers. \cite{esteban2017mriqc} and \cite{sujit2019automated} trained their algorithms on ABIDE, a research multicenter study including patients with autism and control subjects, and used another research dataset for testing. Thus, to the best of our knowledge, there is currently no automatic QC approach dedicated to large clinical datasets. Our work was done using a clinical data warehouse that assembles all MRI data from all hospitals of the greater Paris area. Images come from different sites and different machines with no homogenization of the acquisition parameters, and the acquisitions span several decades. Patients may have any disease for which a brain MRI exam is required. None of these factors are present in the approaches already proposed in the literature: even when images come from different sites, the acquisition protocol is harmonized, the number of machines is limited and the images are usually acquired within a few years, avoiding intrinsic quality problems due to progress in the technology. Additionally, the presence of different diseases, such as neurodegenerative diseases, stroke, multiple sclerosis or brain tumours, is typical of clinical datasets: they can strongly alter the structure of the brain, and it may be difficult to use a specific set of features to characterize the quality of the images independently of the disease. In addition, for security reasons, images from the data warehouse cannot be uploaded to a web server, and we had to work in a restricted IT environment \citep{daniel2020hospital}. The objective of our work was to develop a method for the automatic QC of T1w brain MRI in large clinical data warehouses. The specific objectives were to: 1) discard images which are not proper T1w brain MRI; 2) identify images with gadolinium; 3) recognise images of bad, medium and good quality. We used 5000 images for training/validation and 500 for testing. To train and validate the models, the data were annotated by two trained raters. To that purpose, we introduced an original visual QC protocol that is applicable to clinical data warehouses.
\section{Materials and methods} \subsection{Dataset description} This work relies on a large clinical routine dataset containing all the T1w brain MR images of adult patients scanned in hospitals of the Greater Paris area (Assistance Publique-Hôpitaux de Paris [AP-HP]). The data were made available by the data warehouse of the AP-HP and the study was approved by the Ethical and Scientific Board of the AP-HP. According to French regulation, consent was waived as these images were acquired as part of the routine clinical care of the patients. The images were selected according to DICOM attributes. A first query on the PACS was performed to list the DICOM attributes corresponding to MRI. For all the MR images, we listed the ``series descriptions'', ``body parts examined'' and ``study descriptions'' DICOM attributes. A neuroradiologist manually selected all the attribute values that may refer to 3D T1w brain MRI (e.g. ``T1 EG 3D MPR'', ``SAG 3D BRAVO'', ``3D T1 EG MPRAGE'', ``IRM cranio'', ``Brain T1W/FFEGADO''), resulting in 3736 relevant attribute values. This manual selection was necessary because several DICOM tags are filled in by radiographers, and are thus not homogeneous across images acquired in the 39 hospitals of the AP-HP over several decades. These attribute values were used to select the images of interest. Among all the 3D T1w brain MRI of the AP-HP, a first batch of about 11,000 images was delivered by the data warehouse. We excluded all the images having fewer than 40 slices, because they correspond to 2D brain images even if the corresponding DICOM attributes refer to 3D. For the present study, we randomly selected 5500 images, corresponding to 4177 patients. The images were acquired on various scanners from four manufacturers: Siemens Healthineers ($n=3752$), GE Healthcare ($n=1710$), Philips ($n=33$) and Toshiba ($n=5$). Among all the images, 3229 were acquired with 3~Tesla machines and 2271 with 1.5~Tesla machines. Table~\ref{tab:machines_name} in the Supplementary Material reports all the scanner models present in our dataset with the corresponding magnetic field. \subsection{Image preprocessing} The T1w MR images were converted from DICOM to NIfTI using the dcm2niix software \citep{li2016first} and organized using the Brain Imaging Data Structure (BIDS) standard \citep{gorgolewski2016brain}. Images with a voxel dimension smaller than 0.9~mm were resampled using a 3rd-order spline interpolation to obtain 1~mm isotropic voxels. To facilitate annotation, we applied the following preprocessing using the `t1-linear' pipeline of Clinica \citep{routier_clinica_2021}, which is a wrapper of the ANTs software \citep{avants2014insight}. Bias field correction was applied using the N4ITK method \citep{tustison2010n4itk}. An affine registration to MNI space was performed using the SyN algorithm \citep{avants2008symmetric}. The registered images were further rescaled based on the min and max intensity values, and cropped to remove the background, resulting in images of size 169$\times$208$\times$179 with 1~mm isotropic voxels \citep{wen2020convolutional}. One should note that we only aimed to obtain a rough alignment and intensity rescaling to facilitate annotation. \subsection{Manual labeling of the dataset} In this section, we introduce the visual QC protocol. We describe the different characteristics noted on the images and how we created the final label for the automatic QC. Images were labeled by two trained raters and the annotation protocol was designed with the help of a radiologist.
\subsubsection{Quality criteria} Five characteristics were manually annotated. The first two (straight rejection and gadolinium) are binary flags, while the other three (motion, contrast and noise) are assessed with a three-level grade. \begin{itemize} \item \textbf{Straight rejection (SR)}: images not containing a T1w MRI of the whole brain (for instance images of segmented tissues or truncated images). Note that these images still have DICOM attributes corresponding to T1w brain MRI and thus were not removed by the selection step based on DICOM attributes. \item \textbf{Gadolinium}: presence of a gadolinium-based contrast agent. \item \textbf{Motion}: 0: no motion; 1: some motion, but the structures of the brain are still distinguishable; 2: severe motion, the cortical and subcortical structures are difficult to distinguish. \item \textbf{Contrast}: 0: good contrast; 1: medium contrast (gray matter and white matter are difficult to distinguish in some parts of the image); 2: bad contrast (gray matter and white matter are difficult to distinguish everywhere in the brain). \item \textbf{Noise}: 0: no noise; 1: presence of noise that does not prevent identifying structures; 2: severe noise that prevents identifying structures. \end{itemize} Gadolinium injection, motion, contrast and noise were noted for all the images that were not labeled as SR. According to the grades given to the motion, contrast and noise characteristics, we determined three tiers corresponding to images of good, medium and bad quality. The tiers, along with the rules used to define them, are described in Table~\ref{tab:summary_tier}. \begin{table}[!h] \centering \renewcommand{\arraystretch}{1.5} \begin{tabular}{m{15mm} m{40mm} m{60mm}} \toprule \bfseries Tier & \bfseries Description & \bfseries Determination rule\\ \hline\hline Tier 1 & 3D T1w brain MRI of good quality & Grade 0 for motion, contrast and noise \\ \hline Tier 2 & 3D T1w brain MRI of medium quality & At least one characteristic among motion, contrast and noise with grade 1 and none with grade 2 \\ \hline Tier 3 & 3D T1w brain MRI of bad quality & At least one characteristic among motion, contrast and noise with grade 2 \\ \bottomrule \end{tabular} \caption{Description and determination rules of the proposed quality control tiers.} \label{tab:summary_tier} \end{table} \subsubsection{Annotation set-up} Our aim was to annotate the largest possible number of images in an efficient manner, while being restricted to the environment of the data warehouse, which only included a Jupyter notebook and a command-line interface. We thus implemented a graphical interface in a Jupyter notebook. This interface displayed only the central axial, sagittal and coronal slices of the brain. Indeed, loading the whole 3D volume to inspect all the slices was unfeasible in the data warehouse environment due to the above-mentioned restrictions. Specifically, from the NIfTI format, we saved a screenshot of the central slice of each view (sagittal, coronal, axial) in PNG format. This allowed fast loading of the images to annotate. Each image was labeled by two trained raters. The interface was flexible: it was possible to go back and label an image again, and after labelling all the noted characteristics were displayed. The procedure was optimized to reduce the workload of the raters to a minimum. \subsubsection{Consensus label} The final label used to train and validate the automatic QC is a consensus between the two raters.
When the two raters labeled the image characteristics differently, we applied a procedure to define a consensus label. We distinguished two types of disagreement: one regarding the SR status and one regarding the other characteristics, on which the tier assignment is based. When the two raters disagreed on the SR status, we manually set the consensus label: the two raters reviewed the images and decided together whether to keep the SR label or assign the alternative label. In case of disagreement regarding the other characteristics, the consensus was chosen as follows. The objective was to be as conservative as possible: we wanted to retain all the imperfections that may have been seen by one annotator and not by the other. For a given characteristic, the consensus grade was chosen as the maximum of the two grades of the observers. The tier was then recomputed accordingly. \subsection{Automatic quality control method} We developed an automatic QC method based on CNNs trained to perform several classification tasks: 1) discard images which were not proper T1w brain MRI (SR: yes vs no); 2) identify images with gadolinium (gadolinium: yes vs no); 3) differentiate images of bad quality from images of medium and good quality (tier 3 vs tiers 2-1); 4) differentiate images of medium quality from images of good quality (tier 2 vs tier 1). \subsubsection{Network architecture} The proposed network was composed of five convolutional blocks and three fully connected layers. The convolutional blocks were made of one convolutional layer, one batch normalization layer, one ReLU and one max pooling layer. Details of the architecture are shown in Figure~\ref{fig:conv5fc3}. All the details about the parameters of the layers, i.e. the filter size, the number of filters/neurons, the stride, the padding size and the dropout rate, are given in Table~\ref{tab:parameters} of the Supplementary Material. In the following, we refer to this architecture as Conv5\textunderscore FC3 (a minimal sketch is given at the end of this subsubsection). The models were trained using the cross-entropy loss, which was weighted according to the proportion of images per class for each task. We used the Adam optimizer with a learning rate of 1e-4. We implemented early stopping and all the models were trained for a maximum of 50 epochs. The batch size was set to 2. The model with the lowest loss was saved as the final model. Implementation was done using PyTorch. This architecture has previously been used and validated in \citep{wen2020convolutional}. It is available in the ClinicaDL software on GitHub: \url{https://github.com/aramis-lab/AD-DL}. \begin{figure}[!t] \begin{center} \includegraphics[width=1\linewidth]{QC_figures/conv5fc3_pdf.pdf} \caption{Architecture of the 3D CNN called Conv5\textunderscore FC3. Five convolutional blocks (composed sequentially of a convolutional layer, a batch normalization layer, a ReLU and a max pooling layer) are followed by a dropout and three fully connected layers.} \label{fig:conv5fc3} \end{center} \end{figure} We compared this network to more sophisticated CNN architectures. In particular, we implemented a modified 3D version of Google's incarnation of the Inception architecture \citep{szegedy2016rethinking}. We also implemented a 3D ResNet (a CNN with residual blocks) inspired by \citep{jonsson2019brain}. More details about these architectures are given in Figures~\ref{fig:inception} and~\ref{fig:resnet}.
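As an illustration, a minimal PyTorch sketch of a Conv5\textunderscore FC3-like network is given below. It is not the exact implementation (which is available in ClinicaDL); in particular, the numbers of filters, the kernel sizes, the layer widths and the dropout rate are plausible placeholders, the exact values being those listed in Table~\ref{tab:parameters} of the Supplementary Material.
\begin{verbatim}
# Minimal PyTorch sketch of a Conv5_FC3-like network; filter counts,
# kernel sizes, layer widths and dropout rate are placeholders (the
# exact values are given in the supplementary material).
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # convolutional block: convolution, batch norm, ReLU, max pooling
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm3d(c_out),
        nn.ReLU(),
        nn.MaxPool3d(2),
    )

class Conv5FC3(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(1, 8), conv_block(8, 16), conv_block(16, 32),
            conv_block(32, 64), conv_block(64, 128),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(0.5), nn.Flatten(),
            nn.LazyLinear(1300), nn.ReLU(),  # infers the flattened size
            nn.Linear(1300, 50), nn.ReLU(),
            nn.Linear(50, n_classes),
        )

    def forward(self, x):  # x: (batch, 1, 169, 208, 179)
        return self.classifier(self.features(x))

model = Conv5FC3()
# class-weighted cross entropy and Adam with lr = 1e-4, as in the text
weights = torch.tensor([0.3, 0.7])   # illustrative class proportions
loss_fn = nn.CrossEntropyLoss(weight=weights)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
\end{verbatim}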
Both the Inception and the ResNet models were trained using the cross-entropy loss weighted according to the proportion of images per class, the Adam optimizer with a learning rate of 1e-4, and a batch size of 2. These two models were used in \citep{couvy2020ensemble} to predict brain age from 3D T1w MRI. For that specific task, they achieved a higher performance than the 5-layer CNN mentioned above. Their implementation is openly available on GitHub (\url{https://github.com/aramis-lab/pac2019}) and all the parameters of the CNNs are listed in the supplementary materials of \citep{couvy2020ensemble}. \subsubsection{Experiments} Before starting the experiments, we defined a test set by randomly selecting 500 images that respected the same distribution of tiers as the images in the training/validation set. We also verified that the distribution of the manufacturers and of the different scanner models was respected. The remaining 5000 images were split into training and validation sets using a 5-fold cross-validation (CV). The separation between training, validation and test sets was made at the patient level to avoid data leakage. For each of the four tasks considered (SR, gadolinium, tier 3 vs tiers 2-1, tier 2 vs tier 1), the five models trained in the CV were evaluated on the test set. We also studied the influence of the size of the training set on the performance by computing learning curves. We compared the output of each classifier with the consensus label. To put the automatic QC results in perspective, we computed the balanced accuracy (BA) for the raters, defined as the average of the BAs between each rater and the consensus. \begin{table}[!t] \renewcommand{\arraystretch}{1.25} \begin{center} \begin{tabular}{cc} \toprule \bfseries Characteristic & \bfseries Weighted Cohen's kappa\\ \hline\hline SR (yes vs no) & 0.88\\ \hline Gadolinium injection (yes vs no) & 0.89\\ \hline Contrast (0 vs 1 vs 2) & 0.79\\ \hline Motion (0 vs 1 vs 2) & 0.68\\ \hline Noise (0 vs 1 vs 2) & 0.70\\ \bottomrule \end{tabular} \caption{Weighted Cohen's kappa between the two annotators.}% \label{tab:weighted kappa} \end{center} \end{table} \section{Results} \subsection{Manual quality control} The inter-rater agreement was evaluated using the weighted Cohen's kappa \citep{watson2010method} between the two annotators for each of the characteristics (a minimal computation sketch is given below). Results are presented in Table~\ref{tab:weighted kappa}. The agreement is strong for the SR label and the gadolinium injection (0.88 and 0.89) and moderate for the other characteristics (from 0.68 to 0.79). \begin{figure}[!t] \begin{center} \includegraphics[width=0.75\linewidth]{QC_figures/consensus_5500_pdf.pdf} \caption{Distribution of the consensus labels for the whole dataset of 5500 images. Outermost circle: images in SR and in the different tiers. For every tier, we distinguish images with and without gadolinium injection. For each injection status, we show the grade distribution of the contrast, motion and noise characteristics.} \label{fig:consensus5500} \end{center} \end{figure} The distribution of the consensus labels for the 5500 images is shown in Figure~\ref{fig:consensus5500}. 26\% of the images are labeled as SR, 16\% as tier 1, 28\% as tier 2, and 30\% as tier 3. Figure~\ref{fig:labelbrain} shows some representative examples of T1w brain images with the corresponding labels.
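For reference, the agreement statistic used above can be computed with standard tooling, as in the minimal sketch below; the linear weighting scheme and the toy grade sequences are our own assumptions.
\begin{verbatim}
# Minimal sketch of the inter-rater agreement computation.
# The weighting scheme (linear) and the toy grades are assumptions.
from sklearn.metrics import cohen_kappa_score

rater_1 = [0, 1, 2, 1, 0, 2, 2, 1]   # e.g. motion grades from rater 1
rater_2 = [0, 1, 1, 1, 0, 2, 2, 2]   # same images, rated by rater 2
print(cohen_kappa_score(rater_1, rater_2, weights="linear"))
\end{verbatim}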
As expected, the proportion of images with gadolinium increased as the quality decreased (proportion of images with gadolinium: 41\% in tier 1, 53\% in tier 2, 76\% in tier 3; $p<2.13\times 10^{-8}$; $\chi^2$ test). A vast majority of tier 3 images had a contrast grade of 2 (90\%) and were acquired with gadolinium (70\%). If we analyse the relationships between characteristics, we note that 73\% of the images with a grade 2 for motion also have a grade 2 for contrast. Unsurprisingly, strong motion has a severe impact on contrast. On the other hand, images with a grade 2 for contrast present a more balanced distribution of motion grades 0, 1 and 2 (40\%, 34\% and 26\%, respectively). Figure~\ref{fig:tesla_consensus} displays the distribution into SR or the different tiers for images acquired at 3T and 1.5T, respectively. One can observe that 3T images are more often in the SR category than 1.5T images (31.5\% vs 19.3\%). The most likely explanation is that 3T scanners are more often equipped with image segmentation tools, which leads to a larger number of segmented images. On the other hand, the image quality tended to be higher for 3T than for 1.5T images, which was expected. \begin{figure}[!t] \begin{center} \includegraphics[width=0.8\linewidth]{QC_figures/tesla_tier_pdf.pdf} \caption{Proportion of images that fall into the different tiers or are labelled as SR depending on the field strength (3T or 1.5T).} \label{fig:tesla_consensus} \end{center} \end{figure} DICOM attributes often contain information regarding the injection of gadolinium. However, it is well known to radiologists that such information is often unreliable because it is manually entered by the MRI radiographer. We aimed to assess the extent to which such information was unreliable. We thus analysed the ``study description'' and ``series description'' DICOM attributes of the images to check whether the presence of a gadolinium injection was noted. We considered that it was noted if at least one of the words `gado', `inj' or `iv' was present in the value of one of the attributes. Among the 2416 images that were manually annotated as with gadolinium, 2033 images had the information in the DICOM attributes. Among the 1629 images that were manually annotated as without gadolinium, 987 were noted as images with gadolinium injection according to the DICOM attributes. Since our manual annotation of gadolinium injection is highly reproducible and was designed with the guidance of an experienced neuroradiologist, we conclude that, as expected, DICOM attributes do not provide reliable information regarding the presence of gadolinium. This highlights the importance of being able to detect it using an automatic QC tool. \subsection{Automatic quality control} Results obtained for the four tasks of interest by the proposed Conv5\textunderscore FC3 classifier are presented in Table~\ref{tab:results_batch_5500}. We report the BA of the annotators for comparison. For the recognition of SR images, we used all the images available in the training/validation set ($n=5000$); for the gadolinium and tier 3 vs tiers 2-1 tasks, the training/validation set does not include SR images ($n=3770$); and for the tier 2 vs tier 1 task, the training/validation set does not include SR and tier 3 images ($n=2182$).
\begin{table}[!t] \renewcommand{\arraystretch}{1.25} \begin{footnotesize} \begin{tabular}{lcccc} \toprule \bfseries Metric & \makecell{\bfseries SR \\\bfseries(yes vs no)} & \makecell {\bfseries Gadolinium injection \\\bfseries(yes vs no)} & \makecell {\bfseries Tier 3 vs \\\bfseries tiers 2-1} & \makecell {\bfseries Tier 2 vs \\ \bfseries tier 1} \\ \hline\hline BA annotators & 97.13 & 96.10 & 91.56 & 88.27\\\hline BA classifiers & 93.76 $\pm$ 0.57 & 97.14 $\pm$ 0.34 & 83.51 $\pm$ 0.93 & 71.65 $\pm$ 2.15 \\\hline F1 score & 94.85 $\pm$ 0.41 & 97.04 $\pm$ 0.31 & 84.07 $\pm$ 1.02 & 74.10 $\pm$ 1.35\\\hline MCC & 85.71 $\pm$ 1.11 & 94.00 $\pm$ 0.64 & 67.38 $\pm$ 2.13 & 42.10 $\pm$ 3.25\\\hline AUC & 93.76 $\pm$ 0.57 & 97.14 $\pm$ 0.34 & 83.51 $\pm$ 0.93 & 71.65 $\pm$ 2.15\\\hline Sensitivity & 91.83 $\pm$ 1.18 & 96.45 $\pm$ 0.34 & 79.88 $\pm$ 3.06 & 77.39 $\pm$ 4.29\\\hline Specificity & 95.69 $\pm$ 0.53 & 97.82 $\pm$ 0.62 & 87.14 $\pm$ 3.14 & 65.92 $\pm$ 7.47\\\hline PPV & 86.44 $\pm$ 1.43 & 98.33 $\pm$ 0.46 & 81.93 $\pm$ 3.36 & 83.20 $\pm$ 2.31\\\hline NPV & 97.51 $\pm$ 0.35 & 95.39 $\pm$ 0.42 & 85.83 $\pm$ 1.49 & 57.78 $\pm$ 2.63\\ \bottomrule \end{tabular} \end{footnotesize} \caption{Results of the CNN classifier for all the tasks. We report the BA of the annotators and, for every metric of the CNN, the mean and the empirical standard deviation across the five folds. BA: balanced accuracy; MCC: Matthews correlation coefficient; AUC: area under the receiver operating characteristic curve; PPV: positive predictive value; NPV: negative predictive value.} \label{tab:results_batch_5500} \end{table} The balanced accuracy for SR and gadolinium is excellent (94\% and 97\%). For SR, the CNN is slightly below the annotators. For gadolinium, the CNN is as good as the raters. For tier 3 vs tiers 2-1, the classifier BA is good but lower than that of the annotators. For tier 2 vs tier 1, the CNN BA is low (71\%) and much lower than that of the raters (88\%). \begin{figure}[!t] \begin{center} \includegraphics[width=1\linewidth]{QC_figures/learning_curves_pdf.pdf} \end{center} \caption{Learning curves for the SR (yes vs no), gadolinium injection (yes vs no), tier 3 vs tiers 2-1 and tier 2 vs tier 1 tasks. Blue: balanced accuracy of the classifier across the five folds. Violet: balanced accuracy of the annotators on the testing set.} \label{fig:tier_4_gado_tier3} \end{figure} The influence of the size of the training set on the performance is shown in Figure~\ref{fig:tier_4_gado_tier3}. For SR, the performance increases with the sample size, even though it is already good with few examples (90\% for 500 images) because of the relative easiness of the task. For gadolinium, the performance is very high regardless of the sample size. For tier 3 vs tiers 2-1, adding more training samples helps the classifier, while this is not the case for tier 2 vs tier 1. For tier 3 vs tiers 2-1 and tier 2 vs tier 1, we compared the proposed architecture, Conv5\textunderscore FC3, with the Inception and ResNet architectures. For both tasks, the balanced accuracies obtained with the different networks are comparable: for tier 3 vs tiers 2-1 it is slightly higher with the ResNet (85.82 $\pm$ 0.95) than with the Conv5\textunderscore FC3 (83.51 $\pm$ 0.93) and the Inception (82.40 $\pm$ 1.2), while for tier 2 vs tier 1 it is slightly higher with the Conv5\textunderscore FC3 (71.65 $\pm$ 2.15) than with the ResNet (68.08 $\pm$ 1.6) or Inception (69.27 $\pm$ 2.05) architectures.
For both tasks, the performances of the different classifiers were not statistically different (for tier 3 vs tiers 2-1: $p>0.21$, McNemar's test; for tier 2 vs tier 1: $p>0.12$, McNemar's test). All the metrics are reported in Table~\ref{tab:inception_resnet}. \begin{table}[!b] \begin{center} \renewcommand{\arraystretch}{1.5} \begin{small} \bigskip \textbf{A. Tier 3 vs tiers 2-1}\\ \bigskip \begin{tabular}{l|ccc} \toprule \bfseries Metric & \bfseries Conv5\textunderscore FC3 & \bfseries Inception & \bfseries ResNet \\\hline\hline BA & 83.51 $\pm$ 0.93 & 82.41 $\pm$ 1.28 & 85.82 $\pm$ 0.95 \\\hline Sensitivity & 79.88 $\pm$ 3.06 & 75.53 $\pm$ 2.68 & 80.75 $\pm$ 3.24 \\\hline Specificity & 87.14 $\pm$ 3.14 & 89.29 $\pm$ 3.45 & 90.89 $\pm$ 2.22 \\\hline F1 score & 84.07 $\pm$ 1.02 & 83.38 $\pm$ 1.44 & 86.57 $\pm$ 0.81 \\\hline MCC & 67.38 $\pm$ 2.13 & 66.08 $\pm$ 3.02 & 72.52 $\pm$ 1.70 \\\hline AUC & 83.51 $\pm$ 0.93 & 82.41 $\pm$ 1.28 & 85.82 $\pm$ 2.81 \\\hline PPV & 81.93 $\pm$ 3.36 & 83.80 $\pm$ 3.93 & 86.58 $\pm$ 2.43 \\\hline NPV & 85.83 $\pm$ 1.49 & 83.58 $\pm$ 1.20 & 86.85 $\pm$ 1.76 \\ \bottomrule \end{tabular} \\ \bigskip\bigskip \textbf{B. Tier 2 vs tier 1}\\ \bigskip \begin{tabular}{l|ccc} \toprule \bfseries Metric & \bfseries Conv5\textunderscore FC3 & \bfseries Inception & \bfseries ResNet \\\hline\hline BA & 71.65 $\pm$ 2.15 & 69.28 $\pm$ 2.81 & 68.08 $\pm$ 1.63\\\hline Sensitivity & 77.39 $\pm$ 4.29 & 76.86 $\pm$ 4.76 & 82.35 $\pm$ 2.90\\\hline Specificity & 65.92 $\pm$ 7.47 & 61.69 $\pm$ 10.01 & 53.80 $\pm$ 4.99\\\hline F1 score & 74.10 $\pm$ 1.35 & 72.28 $\pm$ 1.13 & 72.94 $\pm$ 1.18 \\\hline MCC & 42.10 $\pm$ 3.25 & 37.74 $\pm$ 4.10 & 37.13 $\pm$ 2.73 \\\hline AUC & 71.65 $\pm$ 2.15 & 69.28 $\pm$ 2.81 & 68.08 $\pm$ 1.62 \\\hline PPV & 83.20 $\pm$ 2.32 & 81.51 $\pm$ 3.08 & 79.40 $\pm$ 1.34 \\\hline NPV & 57.78 $\pm$ 2.63 & 55.49 $\pm$ 1.70 & 58.77 $\pm$ 2.40 \\ \bottomrule \end{tabular} \end{small} \caption{Results of three 3D CNN architectures (Conv5\textunderscore FC3, Inception and ResNet) for the rating of the overall image quality. We report the mean and the empirical standard deviation across the five folds for all the metrics. BA: balanced accuracy; MCC: Matthews correlation coefficient; AUC: area under the receiver operating characteristic curve; PPV: positive predictive value; NPV: negative predictive value.} \label{tab:inception_resnet} \end{center} \end{table} \clearpage \section{Discussion} In this work, we developed a method for the automatic QC of T1w brain MRI for a large clinical data warehouse. Our approach allows: i) discarding images which are of no interest (SR), ii) recognising gadolinium injection, iii) rating the overall image quality. To this end, different CNNs were trained and evaluated thanks to the manual annotation of 5500 images by two raters. In the last decades, many computer-aided diagnosis systems using machine learning methods have been proposed for the detection of lesions or tumours, or for the classification of neurodegenerative or psychiatric diseases \citep{rathore2017review,icsin2016review,burgos2021deep}. Algorithms were mainly developed and tested using research images \citep{samper2018reproducible,noor2019detecting,cuingnet2011automatic}, or clinical datasets of limited size \citep{morin2020accuracy,zhang2019three,campese2019psychiatric,oh2019classification}. Their validation on large realistic clinical datasets is crucial.
To that aim, clinical data warehouses, which may gather millions of clinical routine images, offer fantastic opportunities. However, they also present considerable challenges. In particular, selecting adequate images for a given analysis task can be very difficult: DICOM attributes may be unreliable, images may be of the wrong type or truncated, and their quality is extremely variable. Therefore, automatic curation and QC methods are needed to fully exploit the potential of clinical data warehouses. Important efforts and achievements have been made by the scientific community to propose protocols and automatic tools for QC. MRIQC \citep{esteban2017mriqc} and VisualQC \citep{raamana2020visual} are two tools developed for the QC of T1w brain MRI data: they propose the extraction of image quality metrics for the detection of outliers, and a graphical interface to check the images. \cite{alfaro2018image} proposed a pipeline for the UK Biobank dataset. \cite{sujit2019automated} trained a CNN using the research dataset ABIDE. Other works focused on the QC of processing results (segmentation) rather than of raw data \citep{keshavan2018mindcontrol,klapwijk2019qoala}. However, all these tools were designed for research data. Even if the data come from multiple sites, they do not cover all the images existing in a clinical PACS: they do not include images with gadolinium, and the patients present with a limited number of diseases. On the contrary, in a clinical data warehouse, we may find images with or without gadolinium injection, ``research quality'' images, and images that are segmented, cropped or affected by so much motion that it is impossible to distinguish the brain. This heterogeneity makes it impossible to use the QC tools present in the literature. To the best of our knowledge, we are the first to propose an automatic QC framework for clinical data warehouses. To train our automatic QC algorithm, we had to manually annotate a large sample of images from the data warehouse. It was not possible to use existing protocols and software tools. In addition to the limitations mentioned above, we were also constrained by the environment of the data warehouse, which only included a Jupyter notebook and a command-line interface. While constraints may vary from one data warehouse to another, it is very common that the data cannot be downloaded and thus have to be used within a specific informatics set-up \citep{daniel2020hospital}. We thus developed a dedicated visual QC protocol, with the assistance of a resident radiologist. We compared annotation using 3D images and 2D slices, and concluded that three 2D slices were sufficient and represented a good compromise for our objectives, chief among them being the exclusion of bad-quality images that would compromise further analyses. Manual annotation results showed that our protocol is reproducible across all tasks, even though the agreement was weaker for the more challenging characteristics. The inter-rater agreement was strong for the SR label and the gadolinium injection, and moderate for the other characteristics. Manual annotation also provides interesting information on the variability of image quality in a clinical routine data warehouse. As many as 26\% of the images are totally unusable (SR), and almost a third have a very low quality (tier 3). We also confirmed that gadolinium has a strong impact on image quality, hence the critical importance of detecting it accurately, the DICOM attributes being unreliable in that regard.
For detecting straight rejection, our CNN had an excellent performance (BA greater than 90\%). Even though the task is relatively easy, this is very important in order to automatically discard images in a very large scale study. This was also the case for the detection of gadolinium, an important characteristic that strongly impacts the behavior of many image analysis methods. For the rating of image quality, the situation was different for identifying tier 3 (low quality) images and for separating tier 2 (medium quality) from tier 1 (high quality). The proposed CNN classifier identified low quality images (tier 3) with a high accuracy (83\%). This is important because these are typically the images on which image processing algorithms could fail. Differentiating images of high and medium quality could also be useful but is less important, as both categories can likely lead to reliable diagnostic predictions. We thus believe that these tools can be reliably used on the rest of this large data warehouse and already have an important practical impact. We compared several more sophisticated CNN architectures to our simple network based on five convolutional and three fully connected layers. However, these more complex networks (3D Inception and 3D ResNet) did not provide any significant improvement in performance. Thanks to the large number of hospitals in the AP-HP consortium (39 hospitals) and to the huge amount of images collected over the years (1980--now), we strongly believe that this dataset is representative of the 3D T1w brain MRI that may be acquired in other hospitals. Consequently, the use of our QC framework could be generalized, and it represents a first important step towards the use of clinical data warehouses for the design of computer-aided diagnosis systems. The main limitations of our study concern the annotation process. By analysing only three slices, we limit the chances of noticing localised artefacts. Another consequence is that it may be difficult to properly distinguish the characteristics when an image is degraded: in particular, motion and noise may be confused. This is also reflected by the moderate values of the weighted Cohen's kappa obtained for these two characteristics. Additionally, even if we believe that the CNN models trained on data from the AP-HP data warehouse can be applied to other clinical datasets, thanks to the large number of hospitals and scanner models involved in the study and to the extended period of time covered, it would be beneficial to apply them to a public dataset for benchmarking. \section{Conclusion} In this work, we proposed a framework for the automatic quality control of 3D T1w brain MRI for a large clinical data warehouse. Thanks to the manual annotation of 5500 images, we trained and validated different convolutional neural networks on 5000 images with a 5-fold CV and tested them on an independent test set of 500 images. The classifier was as efficient as manual rating for the classification of images which are not proper 3D T1w brain MRI (i.e. truncated or segmented images) and for the identification of images for which gadolinium was injected. In addition, the classifier was able to recognise low quality images with good accuracy. \clearpage \singlespacing \section*{Acknowledgments} \noindent The research was done using the Clinical Data Warehouse of the Greater Paris University Hospitals.
The authors are grateful to the members of the AP-HP WIND and URC teams, and in particular Stéphane Bréant, Florence Tubach, Jacques Ropers, Antoine Rozès, Camille Nevoret, Christel Daniel, Martin Hilka, Yannick Jacob, Julien Dubiel and Cyrina Saussol. They would also like to thank the ``Collégiale de Radiologie of AP-HP'' as well as, more generally, all the radiology departments from AP-HP hospitals. Finally, the authors are very appreciative of the support and guidance they have received from Quentin Vanderbecq when setting up the visual quality control protocol.\\ The research leading to these results has received funding from the Abeona Foundation (project Brain@Scale), from the French government under management of Agence Nationale de la Recherche as part of the ``Investissements d'avenir'' program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute) and reference ANR-10-IAIHU-06 (Agence Nationale de la Recherche-10-IA Institut Hospitalo-Universitaire-6). \section*{Authors contribution} \noindent Study concepts and study design: OC, NB, DD, SB\\ Acquisition, analysis or interpretation of data: all authors\\ Manuscript drafting or manuscript revision for important intellectual content: all authors\\ Approval of final version of submitted manuscript: all authors\\ Literature research: SB, NB, OC\\ Statistical analysis: SB\\ Obtained funding: OC, NB\\ Administrative, technical, or material support: AM\\ Study supervision: OC, NB, DD\\ \section*{Disclosure statement} \noindent Competing financial interests related to the present article: none to disclose for all authors.\\ Competing financial interests unrelated to the present article: OC reports having received consulting fees from AskBio (2020), having received fees for writing a lay audience short paper from Expression Santé (2019). Members from his laboratory have co-supervised a PhD thesis with myBrainTechnologies (2016-2019) and with Qynapse (2017-present). OC’s spouse is an employee and holds stock-options of myBrainTechnologies (2015-present). O.C. holds a patent registered at the International Bureau of the World Intellectual Property Organization (PCT/IB2016/0526993, Schiratti J-B, Allassonniere S, Colliot O, Durrleman S, A method for determining the temporal progression of a biological phenomenon and associated methods and devices) (2017). 
\section*{APPRIMAGE Study Group} \noindent Olivier Colliot, Ninon Burgos, Simona Bottani {$^{1}$} \\ Didier Dormont {$^{1,2}$}, Samia Si Smail Belkacem, Sebastian Ströer {$^{2}$}\\ Nathalie Boddaert {$^{3}$} \\ Farida Benoudiba, Ghaida Nasser, Claire Ancelet, Laurent Spelle {$^{4}$}\\ Hubert Ducou-Le-Pointe{$^{5}$}\\ Catherine Adamsbaum{$^{6}$}\\ Marianne Alison{$^{7}$}\\ Emmanuel Houdart{$^{8}$}\\ Robert Carlier {$^{9,17}$}\\ Myriam Edjlali{$^{9}$}\\ Betty Marro{$^{10,11}$}\\ Lionel Arrive{$^{10}$}\\ Alain Luciani{$^{12}$}\\ Antoine Khalil{$^{13}$}\\ Elisabeth Dion{$^{14}$}\\ Laurence Rocher{$^{15}$}\\ Pierre-Yves Brillet{$^{16}$}\\ Paul Legmann, Jean-Luc Drape {$^{18}$}\\ Aurélien Maire, Stéphane Bréant, Christel Daniel, Martin Hilka, Yannick Jacob, Julien Dubiel, Cyrina Saussol {$^{19}$}\\ Florence Tubach, Jacques Ropers, Antoine Rozès, Camille Nevoret {$^{20}$}\\ \begin{small} \noindent $^{1}$ Paris Brain Institute (ICM), Inserm U 1127, CNRS UMR 7225, Sorbonne Université, Inria, Aramis project-team, F-75013, Paris, France \\ $^{2}$ AP-HP, Hôpital de la Pitié Salpêtrière, Department of Neuroradiology, F-75013, Paris, France \\ $^{3}$ AP-HP, Hôpital Necker, Department of Radiology, F-75015, Paris, France \\ $^{4}$ AP-HP, Hôpital Bicêtre, Department of Radiology, F-94270, Le Kremlin-Bicêtre, France \\ $^{5}$ AP-HP, Hôpital Armand-Trousseau, Department of Radiology, F-75012, Paris, France \\ $^{6}$ AP-HP, Hôpital Bicêtre, Department of Pediatric Radiology, F-94270, Le Kremlin-Bicêtre, France \\ $^{7}$ AP-HP, Hôpital Robert-Debré, Department of Radiology, F-75019, Paris, France \\ $^{8}$ AP-HP, Hôpital Lariboisière, Department of Neuroradiology, F-75010, Paris, France \\ $^{9}$ AP-HP, Hôpital Raymond-Poincaré, Department of Radiology, F-92380, Garches, France \\ $^{10}$ AP-HP, Hôpital Saint-Antoine, Department of Radiology, F-75012, Paris, France \\ $^{11}$ AP-HP, Hôpital Tenon, Department of Radiology, F-75020, Paris, France \\ $^{12}$ AP-HP, Hôpital Henri-Mondor, Department of Radiology, F-94000, Créteil, France \\ $^{13}$ AP-HP, Hôpital Bichat, Department of Radiology, F-75018, Paris, France \\ $^{14}$ AP-HP, Hôpital Hôtel-Dieu, Department of Radiology, F-75004, Paris, France \\ $^{15}$ AP-HP, Hôpital Antoine-Béclère, Department of Radiology, F-92140, Clamart, France \\ $^{16}$ AP-HP, Hôpital Avicenne, Department of Radiology, F-93000, Bobigny, France \\ $^{17}$ AP-HP, Hôpital Ambroise Paré, Department of Radiology, F-92100, Boulogne-Billancourt, France \\ $^{18}$ AP-HP, Hôpital Cochin, Department of Radiology, F-75014, Paris, France \\ $^{19}$ AP-HP, WIND department, F-75012, Paris, France \\ $^{20}$ AP-HP, Unité de Recherche Clinique, Hôpital de la Pitié Salpêtrière, Department of Neuroradiology, F-75013, Paris, France \\ \end{small} \newpage \bibliographystyle{elsarticle-harv}
\section{Introduction} The extent to which the non-classical properties of one-mode field states survive in the presence of noise and losses has been investigated since the early years of quantum optics \cite{VW,MW}. A routine operation like the transmission of light beams through an optical fiber can produce a substantial degradation of their non-classical properties. As an example, it was found that squeezing properties are altered by admixture with thermal noise and disappear completely for values of the thermal mean photon occupancy exceeding the threshold $1/2$ \cite{PT1996}. From a more recent quantum-information perspective, much work has concentrated on correlations such as entanglement and discord in multi-partite systems \cite{QC1,QC2,QC3}. While the correlations associated with entanglement \cite{QC1} are defined in connection with global transformations of bipartite quantum states, the concept of quantum discord arises from local (marginal) actions and measurements performed on one subsystem \cite{OZ,HV}. Its definition contains an optimization over the set of all one-party measurements, which in the case of mixed states can be a challenging problem. Note that in the pure-state case, entanglement and discord coincide and therefore measure the total amount of correlations. In the mixed-state case, quantum discord is a measure of quantumness whose relation to entanglement is not a simple one. A survey of recent progress and applications of classical and quantum correlations quantified by quantum discord and other measures can be found in Refs. \cite{QC2,QC3}. When the states of a two-party quantum system interact with a noisy channel, a drastic modification of their quantum correlations is expected to occur \cite{Yu,Damp}. For instance, it was found that quantum and classical correlations for a system of two qubits evolving in Markovian dephasing channels can display different dynamics \cite{sabrina}. Quite recently, the effect of local noisy channels on quantum correlations in finite-dimensional quantum systems was investigated \cite{bruss}. It was found that while entanglement does not increase under local channels, other correlations can become larger when the input state is not pure. In the continuous-variable setting, a similar behaviour was recently noticed for the Gaussian discord of two-mode mixed states under single-mode Gaussian dissipative channels \cite{Ciccarello}. Local Gaussian thermal and phase-sensitive reservoirs modify the entanglement properties of two-mode Gaussian states, as is interestingly pointed out in Refs. \cite{serafini,goyal,souza}. Evolutions of more general Gaussian correlations were also investigated in Refs. \cite{buono,barbosa,isar,madsen}. In this work we analyze the decay of the quantum correlations of the field in a two-mode Gaussian state due to the interaction of the modes with separate thermal baths. We focus on two measures of quantum correlations, namely, the entanglement of formation (EF) and the quantum discord, and evaluate them for a damped two-mode squeezed thermal state (STS) \cite{PTH2001,PTH2003}. Our choice of this important particular class of Gaussian states is motivated by the recent result \cite{PSBCL} that for an STS, the exact discord according to its original definition \cite{OZ,HV} is achieved with an optimal measurement which is Gaussian. Consequently, what was called the Gaussian discord and was derived in Refs. \cite{PG,AD} is actually the {\em exact} discord.
On the other hand, in the interesting paper \cite{OP} it is shown that for an STS, a Gaussian character of the discord implies that the EF is also Gaussian. Accordingly, the Gaussian EF written explicitly in our paper \cite{PT2008} is equally an exact result. As such, when dealing with a dissipative evolution that preserves the STSs, we have the rare privilege of fully describing the decay of two types of correlations by analytic means. Our paper is structured as follows. In Section 2 we recapitulate several properties of an STS: the covariance matrix, the Simon separability criterion \cite{Simon}, the entanglement of formation \cite{PT2008} as a measure of inseparability, and the quantum discord as derived in Refs. \cite{PG,AD}. In Section 3 the system of interest (field $+$ environment) is specified in order to write and solve the quantum optical master equation. Special attention is then paid to the solution for an input STS. Section 4 deals with a damped STS for modes coupled to two identical local thermal baths. We derive there the time at which the entanglement sudden death occurs and study the evolution of the discord. Section 5 is dedicated to another interesting configuration of the system: only one mode is in contact with a thermal bath. We find that all the pure Gaussian states lose their entanglement at the same time, which depends on the reservoir only. In the case of a mixed input state, the discord defined by local measurements on the attenuated mode is increased above its initial value. Our conclusions are drawn in Sec. 6. \section{Quantum correlations in a two-mode squeezed thermal state} In this section we briefly review the above-mentioned examples of Gaussian correlations, with emphasis on those of an STS. We consider two-mode Gaussian states and denote the photon annihilation operators of the modes by $\hat a_1$ and $\hat a_2$. As shown in Refs. \cite{PTH2001,PTH2003}, an STS is the result of the action of a two-mode squeeze operator, \begin{eqnarray*} \hat S_{12}(r,\phi):=\exp{\left[ \,r \left( {\rm e}^{i\phi} \hat a^{\dag}_1 \hat a^{\dag}_2-{\rm e}^{-i\phi} \hat a_1 \hat a_2 \right) \right] },\quad \left( r>0,\;\; \phi\in (-\pi,\pi] \right), \end{eqnarray*} on a two-mode thermal state with the mean photon occupancies $\bar{n}_1$ and $\bar{n}_2$: \begin{eqnarray} \hat \rho_{ST}=\hat S_{12}(r, \phi)\hat \rho_T (\bar n_1,\bar n_2) \hat S^{\dag}_{12}(r, \phi). \label{sts} \label{STS} \end{eqnarray} Its covariance matrix (CM) has the following block structure \cite{PTH2001,PTH2003}: \begin{equation} {\cal V}=\left( \begin{array}{cc} b_1\, \mathbb{I}_2&{\cal C}\\ {\cal C}&b_2\, \mathbb{I}_2 \end{array} \right), \quad \left( b_1>\frac{1}{2},\; b_2>\frac{1}{2}\right ). \label{cv} \end{equation} In Eq.\ (\ref{cv}), $\mathbb{I}_2$ denotes the 2$\times $2 identity matrix and ${\cal C}$ is the 2$\times $2 symmetric matrix \begin{equation} {\cal C}=c \left( \begin{array}{cc} \cos{\phi} & \sin{\phi} \\ \sin{\phi} & -\cos{\phi} \end{array} \right), \quad (c>0). \label{mc} \end{equation} Recall that the CM of an STS, Eqs.\ (\ref{cv}) and\ (\ref{mc}), has the standard-form parameters \cite{PTH2001,PTH2003}: \begin{eqnarray} b_1&=&\left( \bar{n}_1+\frac{1}{2}\right) [\cosh(r)]^2 +\left( \bar{n}_2+\frac{1}{2}\right) [\sinh(r)]^2, \nonumber \\ b_2&=&\left( \bar{n}_1+\frac{1}{2}\right)[\sinh(r)]^2+\left( \bar{n}_2 +\frac{1}{2}\right) [\cosh(r)]^2, \nonumber \\ c&=&(\bar{n}_1+\bar{n}_2+1)\sinh(r) \cosh(r).
\label{par} \end{eqnarray} In many applications one can take advantage of a formal definition of an STS as an undisplaced and unscaled two-mode Gaussian state described by three standard-form parameters: $b_1>\frac{1}{2}, \; b_2>\frac{1}{2}, \; c>0$. If $b_1\geqq b_2$, then these parameters must fulfill the uncertainty inequality \begin{equation} \left( b_1+\frac{1}{2}\right) \left( b_2-\frac{1}{2}\right)-c^2 \geqq 0. \label{state} \end{equation} If $b_1 < b_2$, then one has to interchange the parameters $b_1$ and $b_2$ in Eq.\ (\ref{state}) \cite{PTH2003}. The standard form of the CM is given by Eq.\ (\ref{cv}) with the $2\times 2$ matrix ${\cal C}$ written for $\phi=0$, i.e., proportional to the Pauli matrix $\sigma_3$: ${\cal C}= c\,\sigma_3$. Within this formal treatment, Eq.\ (\ref{sts}) and its companions, Eqs.\ (\ref{cv})--(\ref{par}), represent a parametrization of an STS with a clear experimental relevance. It is known that, according to Williamson's theorem \cite{Wil}, the CM of a two-mode Gaussian state can be diagonalized by a symplectic transformation. We thus get an important ingredient in describing the state, namely, the symplectic eigenvalues of the CM. For an STS they are \cite{PTH2003}: \begin{eqnarray} \kappa_{\pm}=\frac{1}{2}\left[ \sqrt{(b_1+b_2)^2-4c^2}\pm (b_1-b_2)\right]. \label{se} \end{eqnarray} In the parametrization\ (\ref{par}), we get $\kappa_{\pm}=\bar n_{1,2} +\frac{1}{2}$. It is worth recalling Simon's separability criterion for two-mode Gaussian states \cite{Simon}. It was proven that the preservation of the non-negativity of the density matrix under partial transposition is not only a necessary \cite{Peres}, but also a sufficient condition for the separability of two-mode Gaussian states \cite{Simon}. Accordingly, a two-mode Gaussian state is separable when the condition $\tilde \kappa_-\geqq \frac{1}{2}$ is met. We have denoted by $\tilde \kappa_{\pm}$ the symplectic eigenvalues of the CM corresponding to the partial transpose of the density matrix. For an STS one finds: \begin{eqnarray} \tilde \kappa_{\pm}&=&\frac{1}{2}\left[b_1+b_2 \pm\sqrt{(b_1-b_2)^2 +4c^2}\right]. \label{pse} \end{eqnarray} A simplified form of the separability condition for an STS that we shall use in what follows reads \cite{PTH2001,PTH2003}: \begin{eqnarray} \left({b_1}-\frac{1}{2}\right) \left({b_2}-\frac{1}{2}\right)-{c}^2\geqq 0. \label{sc} \end{eqnarray} Before proceeding, let us note that, apart from the vacuum state, the only undisplaced and unscaled pure two-mode Gaussian states are the squeezed vacuum ones, $|{\psi}_{SV}\rangle\langle {\psi}_{SV}|$, and they belong to the set of STSs. The pure-state case is characterized by the identities $b_1=b_2=:b$ and $b^2-c^2=\frac{1}{4}$. Equations\ (\ref{se}) and\ (\ref{pse}) now give $\kappa_{\pm}=\frac{1}{2},\;\; \tilde \kappa_{\pm}=b\pm c$. In the following, we recall two measures of quantum correlations for an STS, namely, the EF and the quantum discord. The EF is defined as an optimization over all the pure-state decompositions of the given state \cite{Bennett}: \begin{eqnarray} E_F(\hat \rho ):=\inf \left[ \sum_kp_k\, E\left( |\psi_k \rangle \langle \psi_k|\right) \; | \; \hat \rho =\sum_kp_k |\psi_k \rangle \langle \psi_k|\right]. \label{od} \end{eqnarray} In the expression above, we have denoted by $E\left( |\psi_k \rangle \langle \psi_k|\right)$ the entanglement of the pure bipartite state $|\psi_k\rangle\langle \psi_k|$.
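As a brief numerical illustration of the structural formulas recalled so far, the following sketch evaluates the standard-form parameters\ (\ref{par}), the symplectic eigenvalues\ (\ref{se}) and the simplified separability condition\ (\ref{sc}); the values of $\bar n_1$, $\bar n_2$ and $r$ are arbitrary.
\begin{verbatim}
# Numerical sketch of an STS: standard-form parameters, symplectic
# eigenvalues and the simplified Simon separability test.
# The values of n1, n2, r are arbitrary.
import numpy as np

def sts_params(n1, n2, r):
    ch, sh = np.cosh(r), np.sinh(r)
    b1 = (n1 + 0.5) * ch**2 + (n2 + 0.5) * sh**2
    b2 = (n1 + 0.5) * sh**2 + (n2 + 0.5) * ch**2
    c = (n1 + n2 + 1.0) * sh * ch
    return b1, b2, c

b1, b2, c = sts_params(n1=10.0, n2=0.1, r=2.0)
s = np.sqrt((b1 + b2)**2 - 4 * c**2)
kappa_plus, kappa_minus = 0.5 * (s + b1 - b2), 0.5 * (s - b1 + b2)
print(kappa_plus, kappa_minus)   # equal to n1 + 1/2 and n2 + 1/2
# simplified Simon condition: the state is separable iff this is >= 0
print((b1 - 0.5) * (b2 - 0.5) - c**2 >= 0)   # False: entangled
\end{verbatim}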
We focus here on the case of an STS $\hat \rho_{ST}$ and recall that an expression for its EF can be obtained by restricting the optimization in Eq.\ (\ref{od}) to Gaussian pure-state decompositions only. For further convenience, we introduce the entropic function \begin{equation} h(x):=\left( x+\frac{1}{2}\right) \, \ln \left( x+\frac{1}{2}\right)-\left( x-\frac{1}{2}\right) \, \ln \left( x-\frac{1}{2}\right). \label{h} \end{equation} It was proven that the Gaussian EF can be expressed in terms of the function $h(x)$ as: \begin{equation} E_F( \hat \rho_{ST})=h(x_m). \label{expr-EF} \end{equation} In Eq.\ (\ref{expr-EF}), the parameter $x_m$ is given in terms of the entries of the CM \cite{PT2008}: \begin{equation} x_m=\frac{\left(b_1+b_2\right) \left(b_1b_2-c^2+\frac{1}{4}\right) -2c\sqrt{{\cal D}}}{\left(b_1+b_2 \right)^2-4\, c^2}. \label{xm} \end{equation} Here $${\cal D}:=\left( b_1b_2-c^2\right) ^2-\frac{1}{4}\, \left( b_1^2+b_2^2-2\, c^2\right)+\frac{1}{16}\geqq 0$$ is the main symplectic invariant. For any squeezed vacuum state, ${\cal D}=0$, so that $x_m=b$, and thus its EF is equal to the von Neumann entropy of the reduced one-mode thermal state, i.e., $E_F\left( |{\psi}_{SV} \rangle \langle {\psi}_{SV}|\right)=h(b)$. The difference between two classically equivalent definitions of the mutual information provides another measure of the total amount of quantum correlations in a quantum state, called discord \cite{OZ,HV}. Let us consider a bipartite state $\hat \rho_{AB}$ and write down its quantum mutual information, \begin{equation} I(\hat \rho_{AB}):=S(\hat \rho_A)+S(\hat \rho_B)-S(\hat \rho_{AB}), \end{equation} with $S(\hat \rho)$ being the von Neumann entropy of the state $\hat \rho$. Another quantum analogue of the mutual information is more complicated and depends on the influence on the first subsystem $A$ of the measurements made on the second subsystem $B$. Let us denote by $\{\hat \Pi^B_k \}$ a quantum measurement performed on the system $B$. The final state of the subsystem $A$ after such a measurement on the subsystem $B$ leading to the outcome $j$ is \begin{equation} \hat \rho_{A|\hat \Pi^B_j}=\frac{1}{p_j}\, \mbox{Tr}_B(\hat\rho_{AB}\, \hat \mathbb{I}_A\otimes \hat \Pi^B_j). \label{meas-state} \end{equation} In Eq.\ (\ref{meas-state}), $p_j$ is the probability of the outcome $j$: $p_j=\mbox{Tr}(\hat \rho_{AB}\, \hat \mathbb{I}_A\otimes \hat \Pi^B_j)$. The quantum conditional entropy, given the non-selective measurement $\{ \hat\Pi^B_j\}$, is a convex sum of the von Neumann entropies of the post-measurement states\ (\ref{meas-state}), taken over all the possible outcomes: \begin{equation} S(\hat \rho_{A|\{ \hat\Pi^B_j\}})=\sum_jp_j\, S(\hat\rho_{A|\hat\Pi^B_j}). \end{equation} The quantum information gained about the subsystem $A$ by taking into account the minimal disturbance produced on it by all the possible measurements performed on the subsystem $B$ is the difference \cite{OZ} \begin{equation} {\cal J}(\hat\rho_{AB})|_{\{\hat\Pi^B_j\}}:=S(\hat\rho_A) -\inf_{\{\hat \Pi^B_j\} } S(\hat\rho_{A|\{ \hat \Pi^B_j\}}). \end{equation} The quantum $A$-discord is then defined as follows \cite{OZ}: \begin{equation} D_1(\hat\rho_{AB}):=I(\hat \rho_{AB}) -{\cal J}(\hat \rho_{AB})|_{\{\hat\Pi^B_j\}}\geqq 0. \label{d1} \end{equation} Similarly, the quantum $B$-discord, which considers the local quantum measurements performed on the first subsystem, is \begin{equation} D_2(\hat\rho_{AB}):=I(\hat \rho_{AB}) -{\cal J}(\hat \rho_{AB})|_{\{\hat\Pi^A_j\}}\geqq 0.
\label{d2} \end{equation} Quite recently, the above-defined discord \cite{OZ,HV} has been calculated for two-mode Gaussian states under the approach of limiting the set of all one-party quantum measurements to the Gaussian ones \cite{PG,AD}. We were thus provided with an analytic formula for what is called the Gaussian discord. Moreover, according to Ref. \cite{PSBCL}, at least for the states analyzed here, namely the STSs, the Gaussian discord is the {\em exact} discord. The quantum discords\ (\ref{d1}) and\ (\ref{d2}) thus turn out to have very simple expressions in terms of one-mode von Neumann entropies: \begin{eqnarray} D_1^{STS}&=&h(b_2)-h(\kappa_+)-h(\kappa_-)+h(y), \nonumber\\ D_2^{STS}&=&h(b_1)-h(\kappa_+)-h(\kappa_-)+h(z). \label{discord} \end{eqnarray} Here $h$ is the entropic function\ (\ref{h}) and the symplectic eigenvalues $\kappa_+, \kappa_-$ are given in Eq.\ (\ref{se}). In addition, we have used the notations: \begin{equation} y:=b_1-\frac{c^2}{b_2+\frac{1}{2}}, \quad z:=b_2-\frac{c^2}{b_1 +\frac{1}{2}}. \end{equation} Note that, for symmetric STSs ($b_1=b_2=:b$), the identity $y=z$ holds and therefore $D_1=D_2$. Moreover, for pure two-mode Gaussian states, we get $y=z=\frac{1}{2}$ and $D_1=D_2=h(b)$, i.e., the discord and the entanglement coincide, as expected \cite{QC2}. \section{Evolution of a two-mode state with two local thermal reservoirs} We consider an arbitrary two-mode field state having the annihilation operators $\hat a_1,\hat a_2,$ and the density operator $\hat \rho$. Each mode is in contact with a local thermal bath. We denote the mean photon occupancies of the two thermal reservoirs by ${\bar n}_{Rj},\, (j=1, 2)$, and the corresponding damping rates by $\gamma_j,\, (j=1, 2)$. In the interaction picture, the quantum optical master equation which describes this type of coupling is \begin{eqnarray} &&\frac{\partial\hat \rho}{\partial t}= \frac{\gamma_1}{2}(2\hat a_1 \hat \rho \hat a_1^{\dagger}-\hat a_1^{\dag}\hat a_1 \hat \rho -\hat \rho \hat a_1^{\dagger}\hat a_1)+\gamma_1 \bar{n}_{R1} (\hat a_1^{\dagger}\hat \rho \hat a_1+\hat a_1\hat \rho \hat a_1^{\dag}-\hat a_1^{\dagger} \hat a_1 \hat \rho -\hat \rho \hat a_1 \hat a_1^{\dagger}) \nonumber \\ &&+ \frac{\gamma_2}{2}(2\hat a_2 \hat \rho \hat a_2^{\dagger}-\hat a_2^{\dag}\hat a_2 \hat \rho -\hat \rho \hat a_2^{\dagger}\hat a_2)+\gamma_2 \bar{n}_{R2} (\hat a_2^{\dagger}\hat \rho \hat a_2+\hat a_2\hat \rho \hat a_2^{\dag}-\hat a_2^{\dagger} \hat a_2 \hat \rho -\hat \rho \hat a_2 \hat a_2^{\dagger}).\nonumber\\ \label{me} \end{eqnarray} As in our recent work \cite{MGM} for the one-mode case, instead of the master equation\ (\ref{me}), we employ the equivalent differential equation for the two-mode characteristic function $\chi(\lambda_1,\lambda_2,t):= \mbox{Tr} \{[\hat D_1(\lambda_1) \otimes \hat D_2(\lambda_2)]\hat \rho(t)\}$. Here $\hat D_1(\lambda_1)$ and $\hat D_2(\lambda_2)$ are the Weyl displacement operators of the modes: $\hat D_j(\lambda_j):=\exp(\lambda_j\hat a^{\dagger}_j -\lambda^{\ast}_j\hat a_j), \, (j=1, 2).$ We finally find the solution: \begin{eqnarray} \chi(\lambda_1,\lambda_2,t)&=&\chi \left (\lambda_1 e^{-\frac{1}{2}\gamma_1 t},\, \lambda_2e^{-\frac{1}{2}\gamma_2 t},0 \right) \, \exp \left[ -\left(\bar{n}_{R1}+\frac{1}{2}\right)\left( 1-e^{-\gamma_1t}\right) |\lambda_1|^2\right] \nonumber\\ &&\times \exp \left[ -\left(\bar{n}_{R2}+\frac{1}{2}\right)\left( 1-e^{-\gamma_2t}\right) |\lambda_2|^2\right].
\label{cf} \end{eqnarray} Let us inspect the asymptotic behaviour of the solution\ (\ref{cf}) of the master equation\ (\ref{me}). When we take $t\rightarrow \infty$ in Eq.\ (\ref{cf}), we get the characteristic function of the two-mode thermal state imposed by the two reservoirs: \begin{equation} \lim_{t \to \infty}\chi(\lambda_1,\lambda_2,t)= \exp \left[ -\left(\bar{n}_{R1}+\frac{1}{2}\right) |\lambda_1|^2 -\left(\bar{n}_{R2}+\frac{1}{2}\right) |\lambda_2|^2\right]. \label{asympcf} \end{equation} Note that this two-mode steady state, which is independent of the input state, is a product state without any correlations between the modes. Given the structure of the time-dependent characteristic function\ (\ref{cf}), any input Gaussian state preserves its Gaussian form at any time during the mode damping. In particular, an initial STS remains an STS at any subsequent time. Its evolving CM has the following standard-form entries: \begin{eqnarray} b_1(t)&=&b_1\, e^{-\gamma_1\, t}+\left( \bar{n}_{R1}+\frac{1}{2} \right) \left( 1-e^{-\gamma_1\, t} \right), \nonumber \\ b_2(t)&=&b_2\, e^{-\gamma_2\, t}+\left( \bar{n}_{R2}+\frac{1}{2} \right) \left( 1-e^{-\gamma_2\, t} \right), \nonumber \\ c(t)&=& c\, \exp \left[-\frac{1}{2}\left(\gamma_1+\gamma_2 \right)t \right]. \label{spt} \end{eqnarray} In view of Eqs.\ (\ref{spt}), the CM of the damped STS becomes asymptotically diagonal: \begin{equation} \lim_{t \to \infty}{\cal V}(t)=\left({\bar n}_{R1} + \frac{1}{2} \right) \mathbb{I}_2 \oplus \left({\bar n}_{R2}+\frac{1}{2} \right) \mathbb{I}_2. \label{asympCM} \end{equation} This means that the two-mode steady state is a product one, whose factors are precisely the single-mode thermal states conditioned by the corresponding reservoirs. We thus recover the previous general conclusion in the special case of an initial STS. To sum up, any measure of correlations in the Gaussian approach available for an STS, such as the entanglement of formation \cite{PT2008} or the quantum discord \cite{PG,AD}, can readily be applied to a decaying STS on account of Eqs.\ (\ref{spt}). \section{Evolution of a two-mode squeezed thermal state with two identical local thermal reservoirs} What do we expect to occur when a two-mode quantum state is subjected to a dissipative interaction as described by the master equation\ (\ref{me})? In general, a substantial reduction of the non-classical properties of the state, which entails a decrease of its quantum correlations such as entanglement and discord. More specifically, in the important particular case of two-mode Gaussian states, we can notice from the very beginning an important difference between the ways in which these two measures of quantum correlations actually decay. Indeed, on the one hand, according to condition\ (\ref{sc}), the entanglement of the input state is expected to vanish at a finite time. This process has been called the {\em entanglement sudden death} in the case of qubits \cite{Yu,Damp}. On the other hand, it is known that the only zero-discord two-mode Gaussian states are the product ones \cite{PG,AD}. Taking into account the time-dependent two-mode characteristic function\ (\ref{cf}), as well as its steady-state form\ (\ref {asympcf}), we infer that only the latter describes a product state without any correlations between the modes. Therefore, it is reasonable to believe that a damped two-mode Gaussian state loses all its correlations, both quantum and classical, as measured by the Gaussian discord, only asymptotically.
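These expectations can be probed numerically. The sketch below propagates the standard-form entries\ (\ref{spt}) of a damped STS and evaluates the EF of Eqs.\ (\ref{expr-EF})--(\ref{xm}) together with the discord $D_1$ of Eq.\ (\ref{discord}); the input state (a two-mode squeezed vacuum with $r=1$) and the bath parameters are arbitrary choices of ours.
\begin{verbatim}
# Sketch: decay of the EF and of the discord D1 for a damped STS.
# Input: two-mode squeezed vacuum (r = 1); bath parameters arbitrary.
import numpy as np

def h(x):
    if x <= 0.5:                       # h(1/2) = 0 by continuity
        return 0.0
    return (x + 0.5) * np.log(x + 0.5) - (x - 0.5) * np.log(x - 0.5)

def damped(b1, b2, c, nR1, g1, nR2, g2, t):
    e1, e2 = np.exp(-g1 * t), np.exp(-g2 * t)
    return (b1 * e1 + (nR1 + 0.5) * (1 - e1),
            b2 * e2 + (nR2 + 0.5) * (1 - e2),
            c * np.exp(-0.5 * (g1 + g2) * t))

def ef(b1, b2, c):
    if (b1 - 0.5) * (b2 - 0.5) - c**2 >= 0:    # separable: EF = 0
        return 0.0
    D = max((b1 * b2 - c**2)**2
            - 0.25 * (b1**2 + b2**2 - 2 * c**2) + 1.0 / 16, 0.0)
    xm = ((b1 + b2) * (b1 * b2 - c**2 + 0.25) - 2 * c * np.sqrt(D)) \
        / ((b1 + b2)**2 - 4 * c**2)
    return h(xm)

def discord_1(b1, b2, c):
    s = np.sqrt((b1 + b2)**2 - 4 * c**2)
    kp, km = 0.5 * (s + b1 - b2), 0.5 * (s - b1 + b2)
    return h(b2) - h(kp) - h(km) + h(b1 - c**2 / (b2 + 0.5))

b, c = 0.5 * np.cosh(2.0), 0.5 * np.sinh(2.0)   # b^2 - c^2 = 1/4
for t in (0.0, 0.3, 1.0, 3.0):
    p = damped(b, b, c, nR1=0.5, g1=1.0, nR2=0.25, g2=0.5, t=t)
    # the EF vanishes at a finite time, the discord only asymptotically
    print(t, ef(*p), discord_1(*p))
\end{verbatim}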
For the sake of simplicity and in order to get versatile analytic results, we consider here the particular case when the two local reservoirs are identical: $\gamma_1=\gamma_2=:\gamma$ and $\bar{n}_{R1}=\bar{n}_{R2}=:\bar{n}_R$. In this case, the CM of an arbitrary damped two-mode Gaussian state reads: \begin{equation} {\cal V}(t)={\rm e}^{-{\gamma}t}{\cal V}(0)+\left({\bar n}_{R} + \frac{1}{2} \right) \left( 1-{\rm e}^{-{\gamma}t} \right) \mathbb{I}_4. \label{CM} \end{equation} Here ${\cal V}(0)$ is the input CM and $\mathbb{I}_4$ denotes the $4\times 4$ identity matrix. Equation\ (\ref{CM}) tells us that an input state with no local squeezing does not change its character during damping: for instance, a symmetric state remains symmetric and an STS evolves as a damped STS. We restrict ourselves now to this latter case. When employing the entries of the time-dependent CM\ (\ref{CM}) in the separability condition\ (\ref{sc}), one finds a simple expression for the time required by a damped STS to reach the separability threshold: \begin{equation} t_s=\frac{1}{\gamma}\ln \left(1+\frac{\frac{1}{2}-\tilde \kappa_{-}}{\bar n_R}\right), \quad \left( \tilde \kappa_{-}<\frac{1}{2}\right). \label{ESD} \end{equation} Here $\tilde \kappa_{-}$ is the smallest symplectic eigenvalue of the CM of the partially transposed input density matrix, which is given by Eq.\ (\ref{pse}). We see that in the special case of zero-temperature baths, the entanglement disappears only asymptotically. In all other cases, the quantum-classical transition occurs at a finite time, i.e., a {\em sudden death of entanglement} takes place \cite{Yu}. \begin{figure}[h] \center \includegraphics[width=5cm]{doua-rez-disc-ent-r2-n-05.eps} \includegraphics[width=5cm]{doua-rez-disc-ent-r2-n-0.eps} \includegraphics[width=5cm]{doua-rez-st-pura-disc-ent-r2-n-05.eps} \caption{(Color online) Evolution of the EF (dot-dashed blue line) and of the discords $D_1$ (black line) and $D_2$ (dashed red line) for an input STS in interaction with two local identical thermal reservoirs. We have employed the following parameters. (a) The STS is characterized by the parameters $\bar{n}_1=10$, $\bar{n}_2=0.1$, $r=2$ and the reservoir by $\bar{n}_R=0.5$. (b) For the same input state we use $\bar{n}_R=0$. (c) We plot the EF and the discord $D_1=D_2$ (dashed red line) for an input pure state having the squeeze parameter $r=2$. The reservoir is noisy with $\bar{n}_R=0.5$. } \label{fig1} \end{figure} In Figs. \ref{fig1}(a) and \ref{fig1}(b) we plot the evolution of the entanglement of formation, Eq.\ (\ref{expr-EF}), as well as that of the quantum discords $D_1$ and $D_2$, Eqs.\ (\ref{d1}) and\ (\ref{d2}), for an asymmetric mixed STS. The plots closely follow our above remarks on the robustness of discord against noise in comparison with the fragility of entanglement. Note that the discords $D_1$ and $D_2$ are very close and can be distinguished only for very different values of the thermal mean photon occupancies $\bar n_1$ and $\bar n_2$. When the reservoirs are noisy ($\bar n_R>0$), both the EF and the discords $D_1, D_2$ are strongly diminished. Contact with zero-temperature baths, as in Fig. \ref{fig1}(b), produces a slower decay of all correlations, which in this case disappear only asymptotically. In Fig. \ref{fig1}(c) we consider an input pure state, namely, a two-mode squeezed vacuum state in contact with a noisy bath. At the time $t=0$ the entanglement and discord coincide, but their time developments look very different.
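The separability time\ (\ref{ESD}) used in these plots can be evaluated along the same lines; in the sketch below, the smallest symplectic eigenvalue $\tilde \kappa_{-}$ of the partially transposed CM [Eq.\ (\ref{pse})] is assumed to take its standard form for a standard-form CM, and the parameter names are ours.
\begin{verbatim}
import numpy as np

def kappa_tilde_minus(b1, b2, c):
    # Smallest symplectic eigenvalue of the partially transposed CM
    # (standard form assumed, cf. Eq. (pse)); partial transposition
    # flips the sign of det C, so the invariant is b1^2 + b2^2 + 2 c^2.
    delta_pt = b1**2 + b2**2 + 2.0*c**2
    root = np.sqrt(delta_pt**2 - 4.0*(b1*b2 - c**2)**2)
    return np.sqrt((delta_pt - root)/2.0)

def separability_time(b1, b2, c, nR, gamma):
    # Eq. (ESD): finite only for entangled inputs and noisy baths.
    km = kappa_tilde_minus(b1, b2, c)
    if km >= 0.5:
        return 0.0          # input already separable
    if nR == 0.0:
        return np.inf       # zero-temperature baths: asymptotic decay
    return np.log(1.0 + (0.5 - km)/nR)/gamma
\end{verbatim}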
Notice that, in view of Eq.\ (\ref{CM}), a thermalized two-mode squeezed vacuum state evolves into a symmetric STS having $D_1=D_2$. \section{Evolution of a two-mode squeezed thermal state with a single local thermal reservoir} For finite-dimensional quantum systems it was recently found that, while entanglement does not increase under local channels, other correlations such as discord can become larger when the input state is not pure \cite{bruss}. In continuous-variable settings, a similar behaviour was noticed for the Gaussian discord of mixed two-mode states under one-mode Gaussian dissipative channels \cite{Ciccarello,madsen}. To investigate such an interaction here by analytic means, we consider an input STS having only mode 1 in contact with a thermal reservoir. We specialize the master equation\ (\ref{me}) to the values $\gamma_1=:\gamma,\; \bar n_{R1}=:\bar n_{R},\; \gamma_2=0,\; \bar n_{R2}=0,$ so that the standard-form entries\ (\ref{spt}) of the damped CM become: \begin{eqnarray} b_1(t)&=&b_1 {\rm e}^{-\gamma t}+\left( \bar{n}_R +\frac{1}{2}\right) \left( 1-{\rm e}^{-\gamma t}\right),\nonumber \\ b_2(t)&=&b_2,\nonumber \\ c(t)&=&c\, \exp\left(-\frac{1}{2}{\gamma t}\right). \label{time-dep} \end{eqnarray} By insertion of the time-dependent parameters\ (\ref{time-dep}) into the separability condition\ (\ref{sc}), one finds the time at which the EF of a damped STS vanishes: \begin{equation} t_{s}=\frac{1}{\gamma}\ln \left[1-\frac{(b_1-\frac{1}{2})(b_2-\frac{1}{2}) -c^2}{\bar n_R (b_2-\frac{1}{2})}\right], \quad \left( b_1-\frac{1}{2}\right) \left( b_2-\frac{1}{2}\right)-c^2 <0. \label{ts1} \end{equation} We specialize Eq.\ (\ref{ts1}) to the case of a pure Gaussian input $\left(b_1=b_2=:b,\;b^2-c^2=\frac{1}{4}\right).$ The time of the death of entanglement\ (\ref{ts1}) is then independent of the input two-mode squeezed vacuum state, being determined only by the field-reservoir coupling: \begin{equation} t_{c}=\frac{1}{\gamma}\ln \left(1+\frac{1}{\bar n_R}\right). \label{tc} \end{equation} We have also checked that the time of the entanglement death has the same expression\ (\ref{tc}) for an input squeezed vacuum state with additional local squeezings on both modes. Moreover, in our recent paper \cite{MGM}, we found that, for some classes of one-mode states displaying initially certain negativities of their Glauber-Sudarshan $P$ representation, $t_{c}$ is the ultimate time at which the $P$ function becomes positive due to the field interaction with a thermal reservoir. A sudden quantum-classical transition therefore occurs at the time\ (\ref{tc}) for some types of one-mode states, as well as for any two-mode squeezed vacuum state. As regards the evolution of the Gaussian discord, we expect it to decay eventually very slowly and to vanish only asymptotically. Indeed, according to Eqs.\ (\ref{time-dep}), the CM of the damped STS has an asymptotically diagonal form: \begin{equation} \lim_{t \to \infty}{\cal V}(t)=\left({\bar n}_{R} + \frac{1}{2} \right) \mathbb{I}_2 \oplus b_2\, \mathbb{I}_2. \label{asympCM1} \end{equation} The steady state of the field is therefore the product of two single-mode thermal states: the state of the damped mode 1, which is imposed by the thermal reservoir owing to their interaction, and that of the freely-evolving mode 2, which is its reduced state, remaining constant in time and thus equal to its input at $t=0$. The case of a pure-state input deserves additional remarks.
According to Eqs.\ (\ref{time-dep}), although at the moment $t=0$ the three measures of quantum correlations EF, $D_1$ and $D_2$ coincide, they behave quite differently afterwards, because an input two-mode squeezed vacuum state evolves into an asymmetric STS. Figure \ref{fig2}(c) displays the evolution of the EF, as well as those of both discords $D_1$ and $D_2$, which are all monotonic, as predicted in Ref. \cite{bruss}. However, the discord $D_2$, corresponding to local measurements performed on the damped mode 1, survives much longer than both the EF and the discord $D_1$. The case of an initial mixed state can be tackled by using Eqs.\ (\ref{expr-EF}), (\ref{discord}), and (\ref{time-dep}) for obtaining the expressions of the EF and the discords $D_1$ and $D_2$. We plot in Figs. \ref{fig2}(a) and \ref{fig2}(b) their time evolution for the same input state, but with a noisy bath (a) and a zero-temperature reservoir (b). An enhancement of $D_2$ is noticed in both panels (a) and (b). The discord $D_2$ presents a clear maximum in the latter situation and is much enhanced with respect to its value at the moment $t=0$. This can be interpreted as a creation of quantum correlations similar to those first explored for finite-dimensional systems \cite{bruss}. Moreover, in the recent Ref. \cite{Ciccarello} it was found that an enhancement of the discord $D_2$ can be noticed even when the input Gaussian state is separable. \begin{figure} \center \includegraphics[width=5cm]{disc-ent-r2-n1-05.eps} \includegraphics[width=5cm]{disc-ent-r2-n1-0.eps} \includegraphics[width=5cm]{st-pura-r2-n1-05.eps} \caption{Evolution of the EF (dot-dashed blue line), $D_1$ (black line) and $D_2$ (dashed red line) in a thermal bath acting on mode 1. The input state is characterized by the squeeze parameter $r=2$. The input thermal mean photon occupancies are $\bar{n}_1=10$, $\bar{n}_2=7$ (left and central panels) and the reservoir has (a) $\bar{n}_R=0.5$ and (b) $\bar{n}_R=0$. The state considered in the panel (c) is pure, while the reservoir has $\bar{n}_R=0.5$.} \label{fig2} \end{figure} \section{Concluding remarks} In order to draw some conclusions on the effects produced by local dissipation on the quantum correlations of an STS, we now compare the decay of entanglement and discord for the two situations studied above. Figure \ref{fig3} displays our results for the EF (blue curves) and $D_2$ (black curves) in the cases of both one and two local identical reservoirs for a mixed STS (panels (a) and (b)) and for a pure Gaussian state (c). \begin{figure}[h] \center \includegraphics[width=5cm]{comparatie-r2-n-0.eps} \includegraphics[width=5cm]{comparatie-r2-n-05.eps} \includegraphics[width=5cm]{comparatie-r2-n-05-st-pura.eps} \caption{Comparison of decays of the EF (blue curves) and of the discord $D_2$ (black curves) for one local bath (dashed lines) and two local identical baths (full lines) for an input state with the squeeze parameter $r=2$. The other parameters are: (a) $\bar{n}_1=10$, $\bar{n}_2=7$, $\bar{n}_R=0$; (b) $\bar{n}_1=10$, $\bar{n}_2=7$, $\bar{n}_R=0.5$; (c) an input pure state and a noisy bath with $\bar{n}_R=0.5$.} \label{fig3} \end{figure} We represent here also the case of zero-temperature reservoirs (panel (a)) to show a better preservation of all correlations in comparison with the noisy bath considered in panel (b). We can see that in all cases both Gaussian discords $D_1$ and $D_2$ survive longer than the EF.
This is expected because the damped Gaussian state becomes a product one only asymptotically. Thus the Gaussian discord, which measures the whole amount of quantum and classical correlations, proves to be quite robust against dissipation in all the above-mentioned situations. However, an enhancement of the discord $D_2$ occurs only for the configuration with one local thermal bath. Since $D_2$ can then become larger than the discord of the input state, the field-reservoir interaction generates quantum and classical correlations of the discord type. A final conclusion arising from Fig. \ref{fig3} is quite interesting. In all the analyzed situations (mixed or pure input states, noisy or zero-temperature reservoirs), the configuration with one local thermal bath performs better than that with two local identical baths. This holds both for the magnitude of the correlations and for their preservation in time. \ack{This work was supported by the Romanian National Authority for Scientific Research through Grant PN-II-ID-PCE-2011-3-1012 for the University of Bucharest.}
\section{Introduction} \label{sec.intro} The concept of {\it materials informatics} based on {\it big data science} has attracted recent interest in the context of discovering and exploring novel materials \cite{Ikebata2017}. Achieving high efficiency in obtaining {\it data}, namely experimental measurements and analysis of materials, is necessary to accelerate the cycle of the exploration. XRD (X-ray diffraction) analysis is quite commonly used to identify the crystal structures responsible for material properties~\cite{Hongo2018}. The analysis has been accelerated by improvements in X-ray intensities as well as in measurement environments~\cite{Kawaguchi2017}. Typical efforts to achieve efficiency in the analysis include studies applying machine-learning techniques to series of XRD data from systematic observations ({\it e.g.}, dependences on concentration, temperature, {\it etc.}) to extract significant information~\cite{Park2017}. While materials informatics approaches combined with XRD data have recently been used to distinguish different phases (i.e., {\it inter-phase} identifications)~\cite{2014KUE, 2017SUR, Iwasaki2017, 2018SHA, 2018STA, 2018XIN, 2018OSE}, no attempt has been made so far to tackle {\it intra-phase} ones. In this context, the present study aims to provide a framework which can predict the concentrations of atomic substituents introduced in the main phase of polycrystalline magnetic alloys. \vspace{2mm} SmFe$_{12-x}$Ti$_x$ with the ThMn$_{12}$-type crystal structure~(Fig.~\ref{fig_cystalstructure}) has been regarded as one of the candidates for the main phase in rare-earth permanent magnets~\cite{KOBAYASHI2017}. The origin of the intrinsic properties emerging at high temperature, as well as that of the phase stability, has not yet been well clarified. Introducing Ti and Zr to substitute Fe and Sm is found to improve the magnetic properties and the phase stability, as described in detail in Sec.~'Samples and Experiments'. To clarify the mechanism by which the substitutions improve the properties, it is desirable to identify the substituted sites and their amounts quantitatively, preferably with high-throughput efficiency to accelerate the materials tuning. In this work, we have developed a machine-learning clustering technique to distinguish powder XRD patterns and obtain such microscopic identifications of the atomic substitutions. \vspace{2mm} {\it Ab initio} calculations are used to generate supervising references for the machine learning of XRD patterns: We prepared several possible model structures with substituents located on different sites over a range of substitution fractions. Geometrical optimizations for each model give structures that differ slightly from each other. We then generated many XRD patterns calculated from each structure. We found that the DTW (dynamic time warping) analysis can capture the slight shifts in XRD peak positions corresponding to the differences between the relaxed structures, distinguishing the fractions and positions of substituents. We have established a clustering technique using Ward's analysis on top of DTW, capable of sorting simulated XRD patterns based on this distinction. \vspace{2mm} The established technique can hence learn the correspondence between XRD peak shifts and microscopic structures with substitutions over many supervising simulated data.
Since the {\it ab initio} simulation can also give several properties, such as the magnetization, for each structure, the learned correspondence can further predict functional properties of materials when applied to experimental XRD patterns, beyond distinguishing the atomic substitutions. The machine-learning technique for XRD patterns developed here therefore has a wide range of applications, not limited to magnets but extending to any materials whose properties are tuned by atomic substitutions. \section{Results} \label{sec.results} For our target system, [Sm$_{(1-y)}$Zr$_y$]~Fe$_{12-x}$Ti$_x$, we examined the ranges for $x$ and $y$ shown in Table~\ref{table.searching}, which are accessible by the experiments. For a given concentration, several possible configurations for substituents exist. They are sorted into identical subgroups in terms of the crystalline symmetries, as described in Sec.~'Computational Details'. Tables~\ref{table.SmZr} and~\ref{table.FeTi} summarize the possible space groups of the substituted alloy structures (used as initial structures for computations) for given concentrations of Sm/Zr and Fe/Ti, respectively. \begin{figure}[htb] \begin{center} \includegraphics[scale=0.4]{unitcell_rotate.pdf} \end{center} \caption{The tetragonal ($Imm2$) crystal structure of SmFe$_{11}$Ti. Note that the labels are Wyckoff sites of the space group before substitution by Ti ($I4/mmm$).} \label{fig_cystalstructure} \end{figure} \begin{table}[h] \caption{ The numbers of inequivalent configurations of Sm$_{(1-y)}$Zr$_y$Fe$_{12-x}$Ti$_x$ to be considered. The numbers in brackets indicate structures constructed from the $2\times2\times2$ supercell (Sm/Zr), while the rest are from the $2\times2\times1$ supercell (Fe/Ti).} \begin{center} \begin{tabular}{c|rrrrr} $y\backslash x$ & 0.0 & 0.5 & 1.0 & 1.5 & 2.0 \\ \hline 0.000 & 1 & 13 & 22 (1) & 27 & 61 \\ 0.125 & - & - & (2) & - & - \\ 0.250 & - & - & (7) &- & - \\ 0.375 & - & - & (6) &- & - \\ 0.500 & - & - & (10) &- &- \\ \end{tabular} \end{center} \label{table.searching} \end{table} \begin{table}[t] \caption{Space groups of the initial structures for the substitution models (Sm/Zr) at each concentration. For $y=0.5$, for instance, the configurations of substituted sites fall into six inequivalent symmetry classes in total. The number given in parentheses represents the number of degenerate configurations within each symmetry at the initial structures for further lattice relaxations. This amounts to 26 configurations in total for generating simulated XRD patterns. } \begin{center} \begin{tabular}{llllll} \toprule &\multicolumn{5}{c}{ $y$ ( space group/number of configurations)}\\ \hline & 0.000 & 0.125 & 0.250 & 0.375 & 0.500 \\ \midrule &\textit{Imm2} (1) & \textit{Imm2} (2) & \textit{Amm2} (4) & \textit{Imm2} (2) & \textit{Imm2} (1) \\ & & & \textit{Cmm2} (2) & \textit{Cm} (2) & \textit{Ima2} (2) \\ & & & \textit{P1} (1) & \textit{C2} (2) & \textit{Cm} (2) \\ & & & & & \textit{C2} (2) \\ & & & & & \textit{Pmm2} (2) \\ & & & & & \textit{P1} (1) \\ \bottomrule \end{tabular} \end{center} \label{table.SmZr} \end{table} \begin{table}[t] \caption{Space groups of SmFe$_{12-x}$Ti$_{x}$ with inequivalent sites of Ti substitutions. SmFe$_{12}$ ($I4/mmm$) is used as the initial structure. The number given in parentheses represents the number of degenerate configurations within each symmetry at the initial structures for further lattice relaxations.
This amounts to 124 configurations in total for generating simulated XRD patterns. } \begin{center} \begin{tabular}{llllll} \toprule &\multicolumn{5}{c}{ $x$ ( space group/number of configurations)}\\ \hline & 0.0 & 0.5 & 1.0 & 1.5 & 2.0 \\ \midrule & \textit{I4/mmm} (1) &\textit{Cm} (1) & \textit{Imm2} (2) & \textit{Cm} (2) & \textit{Fmmm} (2) \\ & & \textit{C2} (4) & \textit{C2/m} (12) & \textit{C2} (8) & \textit{Immm} (4) \\ & & \textit{P-1} (8) & \textit{C2/c} (6) & \textit{P-1} (16) & \textit{Fmm2} (1) \\ & & & \textit{Cm} (1) & \textit{P1} (1) & \textit{Imm2} (1) \\ & & & \textit{P1} (1) & & \textit{C2/m} (20) \\ & & & & & \textit{C2/c} (4) \\ & & & & & \textit{Cm} (4) \\ & & & & & \textit{Cc} (1) \\ & & & & & \textit{C2} (8) \\ & & & & & \textit{P-1} (14) \\ & & & & & \textit{P1} (2) \\ \bottomrule \end{tabular} \end{center} \label{table.FeTi} \end{table} \vspace{2mm} After applying lattice relaxations to the initial structures via {\it ab initio} geometrical optimizations, we calculate the XRD patterns of the relaxed lattices. The detailed procedures are given in Sec.~'Computational Details'. We could therefore generate 'simulated XRD patterns' as above, {\it e.g.}, 26 patterns for the Sm/Zr substitution, which are the data for the clustering by unsupervised learning. We examine whether the clustering can correctly sort them back according to their concentration. \vspace{2mm} The resultant (simulated) XRD patterns coincide fairly well with the experimental ones, as shown in Fig.~\ref{fig.xrdComparison}. We see that the patterns keep the overall shape almost completely, with just slight variations in the inter-peak distances depending on the concentrations. To capture only such slight variations, DTW is expected to perform well for the following reason: the method is designed to be applied to signals given along an axis ({\it e.g.}, a time-dependent signal $y(t)$) so that it extracts only the {\it shape} of the signal, ignoring uniform shifts along the axis. The method scores the {\it dissimilarity} between signals $i$ and $j$ in terms of the DTW-distance, DTW$(i,j)$. \begin{figure}[htb] \begin{center} \includegraphics[width=\linewidth]{tempF3.pdf} \end{center} \caption{Comparison of simulated XRD patterns (bottom) of SmFe$_{11}$Ti and experimental XRD patterns (top) of Sm$_{1.05}$Fe$_{10.75}$Ti$_{1.25}$. The inset numbers are the main-phase peak positions.} \label{fig.xrdComparison} \end{figure} \vspace{2mm} A clustering framework is generally specified by a combination of methods, $a\otimes b$, where '$a$' scores the dissimilarity and '$b$' makes linkages between elements to form clusters based on the given dissimilarity. In the present work, we employed the framework [Normalized Constrained DTW (NC-DTW)]~$\otimes$~[Ward linkage method], using the implementations in the 'Scipy' package~\cite{Scipy}. The descriptions of the linkage and dissimilarity-measure methods used in this work can be found in the Scipy documentation, except for the DTW dissimilarity measures, which were calculated with the fastDTW~\cite{Salvador2007} package. The framework is found to achieve the clustering that distinguishes the concentration of Sm/Zr substitutions with sufficiently high accuracy, 96.2\% (one failure among 26 XRD patterns), as shown in Fig.~\ref{fig.SmZrClustering}.
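For concreteness, a minimal Python sketch of this [NC-DTW]~$\otimes$~[Ward] pipeline is given below. The fastDTW search radius and the normalization of the DTW distance by the warping-path length are our assumptions, since the precise settings are not spelled out here; the clustering threshold plays the role of the horizontal broken line in the dendrogram of Fig.~\ref{fig.SmZrClustering}.
\begin{verbatim}
import numpy as np
from fastdtw import fastdtw
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def nc_dtw_matrix(patterns, radius=10):
    # patterns: list of 1D intensity arrays on a common 2-theta grid.
    n = len(patterns)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d, path = fastdtw(patterns[i], patterns[j], radius=radius)
            D[i, j] = D[j, i] = d/len(path)  # normalization (assumed)
    return D

def cluster_patterns(patterns, threshold):
    # Ward linkage on the condensed NC-DTW dissimilarity matrix;
    # cutting the dendrogram at 'threshold' yields the cluster labels.
    D = nc_dtw_matrix(patterns)
    Z = linkage(squareform(D, checks=False), method='ward')
    return fcluster(Z, t=threshold, criterion='distance')
\end{verbatim}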
\begin{figure}[htb] \begin{center} \includegraphics[scale=0.4]{SmZrClustering.pdf} \end{center} \caption{Clustering over the XRD peak patterns~(26 in total) of [Sm$_{1-y}$~Zr$_y$]~Fe$_{11}$Ti, performed by DTW (dynamic time warping) scoring and the Ward linkage method. Putting the threshold around 1,000 for the dissimilarity (horizontal broken line), the patterns are clustered into four groups, each sharing almost the same number of substitutions by Zr. The red arrows at the bottom show the errors where a 'zero substitution' is wrongly sorted into the group with 'one substitution', {\it etc.} } \label{fig.SmZrClustering} \end{figure} \section{Discussions} \subsection{Limitation of the DTW-dissimilarity} When the same method (Ward$\otimes$DTW) as in the Sm/Zr case is applied to Fe/Ti, the success rate of the recognition is reduced to 33.1\%. We can identify the reason why the success rate for Fe/Ti is worse than that for Sm/Zr from the dependence shown in Fig.~\ref{fig.degenerate}. Since XRD reflects the lattice constants in its peak positions, we can take the unit cell volume, $v$, as a representative quantity to be captured by the clustering recognition in a situation where the cell symmetry is kept unchanged. The DTW dissimilarity, DTW$(i,j)$, can then be regarded as scaling roughly with the difference of $v$. The recognition can therefore be regarded as a framework performing an {\it inverse inference} from the 'difference of $v$' to identify the 'difference of $x$' on the dependence $v(x)$, as shown in Fig.~\ref{fig.degenerate}. For Sm/Zr, the 'trace-back mapping' from $v$ to $x$ is one-to-one, while for Fe/Ti this is not the case due to the {\it degeneracy}, in the sense that different values of $x$ can give almost the same $v$. Under such a {\it degeneracy}, it is impossible to provide correct inferences of the 'difference in $x$' from a given 'difference in $v$'. Such a difficulty occurring for the Fe/Ti case leads to the worse success rate of the clustering recognition. \vspace{2mm} The problem can be resolved by exploiting the advantage of {\it ab initio} methods, namely that they can provide several other quantities besides the optimized lattice parameters. Even when ${\rm DTW}(i,j)\sim \left|{v(x_i)-v(x_j)}\right|$ does not work well due to the degeneracy in $v(x)$, other quantities such as the magnetization $M(v)$ can be non-degenerate (as shown in Fig.~\ref{fig.mag}) and hence useful to resolve the difficulty. Using magnetizations is especially practical because the quantity is available from both experiments and simulations. We also note that the dependence in Fig.~\ref{fig.degenerate} (left panel) is consistent with the experimental fact~\cite{2016KUN} that the magnetization per volume increases as the Zr concentration increases. We therefore introduce a weight $W(i,j)$ constructed from the magnetization difference and modify the dissimilarity as \begin{equation} {\rm Dissimilarity}(i,j) = {\rm DTW}(i,j)\times W(i,j) \ , \label{improvedW} \end{equation} so that the problem due to the degeneracy is avoided. We have confirmed that the success rate is actually improved from 33.1\% to 99.19\%, as shown in Fig. \ref{fig.FeTiWeight}, by using the weight as above. \begin{figure}[htb] \begin{center} \includegraphics[scale=0.4]{Volume_trend.pdf} \end{center} \caption{Dependences of the unit cell volume ($v$) on the concentration $x$ for Fe/Ti~(blue) and Sm/Zr~(red) substitutions. Several points with the same color at the same $x$ correspond to the different symmetries given in Tables~\ref{table.SmZr} and \ref{table.FeTi}.
Rectangular enclosures on the blue dependence show the {\it degeneracy}, {\it i.e.}, different $x$ may give the same $v$. } \label{fig.degenerate} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=0.45\textwidth]{mag2.pdf} \caption{Magnetizations depending on the concentrations for Sm/Zr (Sm$_{8-x}$Zr$_x$Fe$_{88}$Ti$_8$) [left panel] and Fe/Ti structures (Sm$_2$Fe$_{24-x}$Ti$_x$) [right panel]. } \label{fig.mag} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[scale=0.32]{FeTi_MW.pdf} \end{center} \caption{ Clustering over the XRD peak patterns~(124 in total) of SmFe$_{12-x}$Ti$_{x}$, performed by DTW (dynamic time warping) scoring and the Ward linkage method. The weight function calculated from the magnetization was used to improve the dissimilarity measure. } \label{fig.FeTiWeight} \end{figure} \subsection{How to treat experimental XRD} As shown in Fig.~\ref{fig.xrdComparison}, simulated XRD patterns ($s$) well reproduce the experimental ones ($e$). The consistency is sufficient for the {\it direct} comparison via the DTW distance, DTW$(e,s)$, to make sense for the clustering (usually, some pre-processing of the raw data, '$e$' or '$s$', is required to obtain corrected data, '$\tilde e$' or '$\tilde s$', and one evaluates DTW$(\tilde e,\tilde s)$ in order to fill the gap between the idealized simulations and reality). By preparing simulated XRDs, $\left\{ s_j\right\}_{j=1}^{N}$, in advance, we can identify, for a given $e$, the $s_k$ which gives the smallest distance, $\left|e-s_k \right|$. The simulated $s_k$ is accompanied by several quantities, $\left\{q_\alpha \right\}$, such as the formation energy, the magnetization, and the local geometrical configuration of substituents, all evaluated by the {\it ab initio} method. The $\left\{q_\alpha \right\}$ can then be the theoretical predictions for the observed $e$, serving as a machine-learning framework for XRD patterns assisted by {\it ab initio} simulations. \begin{figure}[htb] \begin{center} \includegraphics[scale=0.4]{esDist4.pdf} \end{center} \caption{ DTW-dissimilarities between an experimental XRD ($e$) and simulated XRDs ($\left\{ s_j\right\}$), correlating with the composition similarity. The $e$ is taken at the composition Sm$_{1.05}$Zr$_{0.0}$Fe$_{10.75}$Ti$_{1.25}$, compared with $\left\{ s_j\right\}$ in terms of DTW$(e,s_j)$ (vertical variable). The horizontal variable is the normalized 'composition similarity' defined by Eq.~(\ref{CompDist}). } \label{esDist} \end{figure} Fig.~\ref{esDist} shows that such a distance, $\left|e-s_k \right|$, works fairly well, taking as an example an $e$ at the composition Sm$_{1.05}$Zr$_{0.0}$Fe$_{10.75}$Ti$_{1.25}$. For the general composition, Sm$_{c_1}$Zr$_{c_2}$Fe$_{c_3}$Ti$_{c_4}$, we can define the 'composition similarity' between $e$ and $\left\{ s_j\right\}$ as \begin{equation} D = \sum\limits_{\alpha = 1}^4 {{{\left( {c_\alpha ^{\left( e \right)} - c_\alpha ^{\left( s \right)}} \right)}^2}} \ . \label{CompDist} \end{equation} In Fig.~\ref{esDist}, we see that DTW$(e,s_j)$ (vertical variable) correlates well with the 'composition similarity'. The closest $s_k$, giving the shortest DTW$(e,s_k)$ (black filled circle in the figure), indeed has the closest composition, Sm$_{1.0}$Zr$_{0.0}$Fe$_{11.0}$Ti$_{1.0}$, among the $\left\{ s_j \right\}$. The prediction accuracy improves simply by increasing the number of simulated data, $\left\{ s_j\right\}_{j=1}^{N}$.
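A hedged sketch of this identification step, using the composition similarity of Eq.~(\ref{CompDist}), is given below; the DTW scorer is assumed to be the normalized one sketched above, and the component ordering of the composition vectors is hypothetical.
\begin{verbatim}
import numpy as np

def composition_similarity(c_e, c_s):
    # Eq. (CompDist); c_e, c_s = (c_Sm, c_Zr, c_Fe, c_Ti) for the
    # experimental and simulated samples (assumed ordering).
    return float(np.sum((np.asarray(c_e) - np.asarray(c_s))**2))

def closest_simulation(e_pattern, s_patterns, dtw_dist):
    # Identify the s_k minimizing DTW(e, s_j); dtw_dist is any DTW
    # scorer, e.g. the normalized fastDTW distance sketched earlier:
    # k, d = closest_simulation(e, s_list, lambda a, b: fastdtw(a, b)[0])
    dists = [dtw_dist(e_pattern, s) for s in s_patterns]
    k = int(np.argmin(dists))
    return k, dists[k]
\end{verbatim}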
A straightforward way to increase the number of simulated data is to take a denser grid on $x$, but this requires larger supercells and hence more computational power. For the present grid resolution, the experimental XRD patterns with Zr\% = 0.0, 10.4, and 31.8 are identified to be closest to Zr\% (simulated) = 0, 25, and 37.5, respectively, which is the best performance possible at this resolution. \vspace{2mm} In the case with the degeneracy (Fig.~\ref{fig.degenerate} for the Fe/Ti substitution), the DTW distance is not capable of performing the clustering for $\left\{ s_j \right\}$, and hence is quite unlikely to be capable of identifying the closest $s_k$ for a given $e$ based on $\left|e-s_k \right|$. The strategy with $W(i,j)$ (the weight by the magnetization) introduced in the previous section does not work in this case because, for $e$ (experimental XRD patterns), the accompanying quantities such as magnetizations are not always available. A possible remedy to distinguish $e$ would be as follows, using a plucked set $\tilde A \subset A=\left\{ s_j \right\}$: Since $A$ is generated by simulations, each element is accompanied by quantities like the magnetization, the formation energy, {\it etc.} By using the formation energies, we can pluck the degenerate candidates ({\it e.g.}, $P$ and $Q$ in Fig.~\ref{tempF8}), excluding the ones with higher energies ($P$ in Fig.~\ref{tempF8}), to form the plucked subset $\tilde A$. The degeneracy is now excluded from $\tilde A$, which can hence be used as a pool of references from which the closest $s_k$ to a given $e$ is identified based on the DTW distance, $\left|e-s_k \right|$. The identified $s_k$ is accompanied by the physical quantities evaluated by the simulations, and these could serve as estimates for the sample giving the experimental XRD, $e$. \begin{figure}[htb] \begin{center} \includegraphics[scale=0.4]{weight3.pdf} \end{center} \caption{ A schematic picture explaining the difficulty of making proper distinctions in the clustering due to the 'degeneracy'. The horizontal axis corresponds to the substitution concentrations in our case, while the vertical one corresponds to the lattice parameters that characterize an XRD pattern. The 'errorbar-like' symbols represent the data spreading along the vertical axis shown in Fig.~\ref{fig.degenerate}. When an XRD pattern is given as a point on the vertical axis (blue arrow), several points ($P$ and $Q$) correspond to it with almost the same lattice parameters but with different internal alignments of defects, leading to the difficulty of mixing up the possibilities [1] and [2] as the possible 'explanation variable' (the concentration in this case). Red arrows beside the 'errorbars' indicate that a weight, such as the formation energy, can be put over the spreading. } \label{tempF8} \end{figure} \subsection{Significance of using DTW} The clustering package 'Scipy'~\cite{Scipy} used here includes several algorithms other than our DTW$\otimes$Ward choice. It is interesting to compare their performance, as shown in Tables~\ref{table.score1}-\ref{table.score2}, with detailed explanations given in \S\ref{SI}.D. Although NC-DTW does not show the best performance there, its tolerance of peak shifts is required when we consider treating experimental data. We note that the simulated XRDs reflect structures at zero temperature, while the experimental ones are subject to thermal effects at finite temperature.
These effects would lead to the broadening of peaks due to thermal vibrations, as well as to peak shifts due to thermal expansion. Since we are looking at changes within a phase (not inter-phase changes), the shifts are expected to be almost uniform, not modifying the inter-peak distances significantly, because the expansion occurs almost evenly for every lattice degree of freedom. Such uniform shifts are, by design, not detected by DTW, and hence the scoring works well, unaffected by the thermal effects. This provides robustness against thermal noise in the experimental data, enabling the direct comparison with zero-temperature simulation data to evaluate $\left|e-s_k\right|$. Based on the above observations, we deliberately use DTW even though it does not achieve the best performance for simulated data, as seen in Tables~\ref{table.score1}-\ref{table.score2}. Evidence for this point has also been given in the preceding study~\cite{Iwasaki2017}, where NC-DTW showed the best performance, among various techniques, in sorting out the various phases from experimental data. \vspace{2mm} Several preceding works have applied DTW to analyse XRD patterns~\cite{Baumes2008,Iwasaki2017}. While these studies applied it to distinguish phases ({\it i.e.}, inter-phase identifications), the present study works on {\it intra-phase} identifications. In the former, DTW is used to distinguish {\it major} differences of peak positions that occur drastically when the phase changes~\cite{Iwasaki2017}. In this study, on the other hand, we clarified a new capability of DTW, namely, that it can distinguish even very tiny changes of the inter-peak distances occurring within a target phase. With this capability, we can explore a new framework that enables identification of the microscopic geometries of the substituents introduced in a target phase, assisted by machine-learning techniques. \section{Conclusion} \label{sec.conc} We have developed a clustering framework that can be applied to XRD patterns of alloys to distinguish the concentrations of substituents. We found that the clustering works quite well to identify the concentrations when applied to the patterns of magnetic alloys based on SmFe$_{12}$. Supercell models for the substitutions are found to work well with {\it ab initio} lattice relaxations, reproducing XRD patterns in sufficient coincidence with experiments. The implementation of the clustering with [DTW dissimilarity scoring]$\otimes$[Ward linkage method] is found to achieve a success rate of around 90\% for distinguishing the concentrations. The main reason for the failures in the clustering is identified as the {\it degeneracy}, namely the situation where different concentrations give almost the same lattice constants. By incorporating quantities predicted by {\it ab initio} methods into the weight used for the dissimilarity scoring, such degeneracies are lifted, preventing the clustering from failing. The sufficiently good coincidence between simulated and experimental XRD patterns enables the framework to be used to predict unknown concentrations of the substituents introduced in the main phase of alloys from their XRD patterns. The framework established here is applicable not only to the system treated in this work but broadly to systems whose properties are tuned by atomic substitutions within a phase.
Beyond identifying the concentrations, the framework has the larger potential to predict further properties from observed XRD patterns, in that it can provide properties evaluated from the predicted microscopic local structure (positions of substitutions, {\it etc.}), including magnetic moments, optical spectra, {\it etc.} \section{Acknowledgments} The computations in this work have been performed using the facilities of the Research Center for Advanced Computing Infrastructure at JAIST. R.M. is grateful for financial supports from MEXT-KAKENHI (17H05478 and 16KK0097), from FLAGSHIP2020 (project nos. hp180206 and hp180175 at K-computer), from Toyota Motor Corporation, from I-O DATA Foundation, and from the Air Force Office of Scientific Research (AFOSR-AOARD/FA2386-17-1-4049). K.H. is grateful for financial supports from FLAGSHIP2020 (project nos. hp180206 and hp180175 at K-computer), KAKENHI grant (17K17762), a Grant-in-Aid for Scientific Research on Innovative Areas (16H06439), PRESTO (JPMJPR16NA) and the ``Materials research by Information Integration Initiative" (MI$^2$I) project of the Support Program for Starting Up Innovation Hub from Japan Science and Technology Agency (JST). R.H. is grateful for financial support from the Development and Promotion of Science and Technology Talents Project (DPST) for a scholarship to study at Faculty of Science, Mahidol University, and a research internship at JAIST. \section{Supplemental Information} \label{SI} \subsection{Samples and Experiments} The X-ray diffraction (XRD) measurements for the powdered Sm-Fe-Ti were performed at the beamline BL02B2 in SPring-8 (Proposal Nos. 2016B1618 and 2017A1602). A CeO$_2$ diffraction pattern was used to determine the X-ray energy of 25~keV. The diffraction intensities were collected using a sample rotator system and a high-resolution one-dimensional semiconductor detector (multiple MYTHEN system) with a step size of 2$\theta$ = 0.006~[deg.]~\cite{Kawaguchi2017}. The samples were powderized from strip-cast alloys, and the powder was put into a quartz capillary and encapsulated under a negative pressure of Ar gas. \subsection{Computational Details} To obtain the structures of the target alloys, [Sm$_{(1-y)}$Zr$_y$]~Fe$_{12-x}$Ti$_x$, we first constructed a tetragonal ($I4/mmm$) crystal structure of SmFe$_{12}$ using the experimental lattice parameters, $a=0.856$ nm and $c=0.480$ nm ($b=a$), of SmFe$_{11}$Ti~\cite{experiment-lattice} as an initial setting for further optimizations. For Zr substitutions replacing Sm sites (ranging from 1 to 4 atoms), we constructed a $2\times2\times2$ supercell, containing 104 atoms, from the primitive cell of tetragonal ($Imm2$) SmFe$_{11}$Ti (Fig.~\ref{fig_cystalstructure}). All possible configurations were considered to cover the randomness of experimental substitutions, and symmetry-equivalent configurations were discarded using the FINDSYM software~\cite{FINDSYM}. Finally, we considered only the 26 supercells (Table~\ref{table.SmZr}) that possess different space groups and Wyckoff site occupations. \vspace{2mm} For the {\it ab initio} calculations, we used spin-polarized density functional theory (DFT) implemented in the 'Vienna {\it ab initio} simulation package~(VASP)'~\cite{VASP1,VASP2,VASP3}. For systems like our target, which include transition-metal and rare-earth elements, it is generally known that the predictions are critically influenced by the choice of the exchange-correlation (XC) potential used in DFT~\cite{Hongo2017,Ichibha2017,Hongo2015,Hongo2013}.
For the present case, it has been found that DFT+$U$ is essentially inevitable if we treat the $f$-orbitals as valence states~\cite{Larson2003,YEHIA2008,LIU2011,Pang2009,Cheng2012}. It has also been found that the GGA (generalized gradient approximation) works well if the 4$f$ states are treated as core states described by pseudopotentials~\cite{GGA-largecore,GGA-largecore2,GGA-largecore3,Ismail2011,Puchala2013}. We therefore used the revised Perdew-Burke-Ernzerhof (RPBE)~\cite{RPBE} functional for the GGA-XC, upon confirming that RPBE gives optimized lattice parameters closer to the experimental ones than PBE~\cite{PBE} does. Pseudopotentials based on the projector augmented wave (PAW)~\cite{VASP-PAW} method were used. The $s$ and $p$ semi-core states are included as valence states, except for Sm, resulting in 12, 16, and 12 valence electrons for Zr, Fe, and Ti, respectively. The structural relaxations were performed until the force on each ion was smaller than 0.01 eV~\AA$^{-1}$. A plane-wave cutoff energy of 400~eV and $5\times5\times5$ Monkhorst-Pack grids were used, which was large enough to converge the total energy. The lattice relaxations with the above choice applied to SmFe$_{11}$Ti are confirmed to give the lowest total energy with Ti at the 8i site, which is consistent with experiments~\cite{isite-experiment1,KOBAYASHI2017} and {\it ab initio} calculations~\cite{isite-simulation1} of RFe$_{11}$Ti-type magnetic compounds. The optimized lattice parameters, $a$ and $c$, were 0.851 and 0.473~nm, which are in good agreement with the experiments~\cite{experiment-lattice}. These comparisons confirm that our model is sufficiently reasonable. With Ti substitution at the 8i site, the $I4/mmm$ space group is broken and becomes $Imm2$, as shown in Fig. \ref{fig_cystalstructure}. \subsection{Validation of simulated XRD patterns} To validate the simulations, the simulated XRD patterns were compared to the experimental XRD patterns. The X-ray diffraction (XRD) patterns of the optimized structures were theoretically calculated by the powder diffraction pattern utility in the VESTA~\cite{Momma2011} software. An X-ray wavelength of 0.496~\AA{} was used, matching the value used in experiment. The isotropic atomic displacement parameter ($B$) was set to 1.00~\AA$^2$. Normalized XRD patterns with 2$\theta$ from 1 to 120 degrees at 0.01 degree intervals were obtained. \vspace{2mm} The simulated XRD pattern of SmFe$_{11}$Ti agrees very well with the experimental XRD pattern of Sm$_{1.05}$Fe$_{10.75}$Ti$_{1.25}$ (Fig.~\ref{fig.xrdComparison}), although the main-phase peak position is slightly different: 13.41 deg. in experiment versus 13.48 deg. in simulation. This is due to the fact that the peak shifts if the lattice expands or contracts, and we found that the optimized lattice parameters from DFT are underestimated, which accounts for the difference. The underestimated lattice parameters introduce only a systematic shift of the peak positions, while the XRD profiles remain unchanged. When the Zr concentration increases, the main-phase peak position of the simulated XRD patterns shifts to larger 2$\theta$, in accordance with the experimental results. \subsection{Hierarchical clustering analysis} Hierarchical clustering analysis (HCA) was used to classify the simulated XRD patterns. All clustering analyses were carried out using the Scipy package~\cite{Scipy}.
The descriptions of the linkage and dissimilarity-measure methods used in this work can be found in the Scipy documentation, except for the DTW dissimilarity measures, which were calculated with the fastDTW~\cite{Salvador2007} package. \begin{table*}[htb] \caption{Adjusted Rand index of the clustering results for Sm$_{1-y}$Zr$_{y}$Fe$_{11}$Ti (Sm/Zr) structures.} \begin{center} \begin{tabular}{l|ccccccc} & Single & Complete & Average & Weighted & Centroid & Median & Ward \\ \hline NC-DTW & 1.00 & 0.80 & 1.00 & 0.82 & 0.82 & 0.82 & 0.91 \\ Cityblock & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\ Euclidean & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\ Cosine & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\ Correlation & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\ \end{tabular} \end{center} \label{table.score1} \end{table*} \begin{table*}[htb] \caption{Adjusted Rand index of the clustering results for SmFe$_{12-x}$Ti$_{x}$ (Fe/Ti) structures.} \begin{center} \begin{tabular}{l|ccccccc} & Single & Complete & Average & Weighted & Centroid & Median & Ward \\ \hline NC-DTW & -0.04 & -0.10 & -0.08 & -0.08 & -0.04 & -0.04 & 0.01 \\ Cityblock & -0.06 & 0.49 & 0.45 & 0.39 & 0.27 & 0.40 & 0.41 \\ Euclidean & -0.03 & 0.28 & 0.47 & 0.49 & 0.27 & 0.28 & 0.28 \\ Cosine & -0.01 & 0.35 & 0.51 & 0.55 & 0.33 & 0.28 & 0.35 \\ Correlation & -0.01 & 0.34 & 0.51 & 0.55 & 0.33 & 0.33 & 0.35 \\ \end{tabular} \end{center} \label{table.score2} \end{table*} The package provides a variety of methods other than the present choice, DTW$\otimes$Ward, as shown in Tables~\ref{table.score1}-\ref{table.score2}. The tables compare the performance achieved by the various choices for the identification of Sm/Zr and Fe/Ti, respectively. The performance is evaluated in terms of the ARI~(adjusted Rand index), which measures the similarity between the true labels and the predicted labels, with maximum and minimum scores of 1 and -1, respectively. The ARI calculations were done using the 'Scikit-learn' package~\cite{Pedregosa2011}. In the tables, several dissimilarity measures, NC-DTW, Euclidean, Cityblock, Cosine, and Correlation, combined with various linkage methods, Single, Complete, Average, Weighted, Centroid, Median, and Ward, are compared. The perfect score of 1 is reached by all methods except NC-DTW for the Sm/Zr structures, while the best methods for the Fe/Ti structures are Cosine and Correlation with Ward linkage, scoring 0.55. The NC-DTW method provides lower performance than the other methods for both structures, since NC-DTW omits the peak-shift information while the rest are peak-position-based dissimilarity measures. With the NC-DTW dissimilarity measure, the Ward linkage method shows a good performance among all linkage methods, with ARI of 0.91 and 0.01 for the Sm/Zr and Fe/Ti structures, respectively. Therefore, in this work, we focus on NC-DTW with the Ward linkage method.
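As a minimal sketch of how such ARI scores can be computed for a given precomputed dissimilarity matrix, the following may be useful; the cluster-extraction criterion and the number of clusters are our assumptions.
\begin{verbatim}
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import adjusted_rand_score

def clustering_ari(D, true_labels, method='ward', n_clusters=5):
    # D: precomputed dissimilarity matrix (e.g. NC-DTW distances);
    # true_labels: the known substituent concentrations.
    Z = linkage(squareform(D, checks=False), method=method)
    predicted = fcluster(Z, t=n_clusters, criterion='maxclust')
    return adjusted_rand_score(true_labels, predicted)
\end{verbatim}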
\section{Introduction} The leading power transverse momentum dependent factorization theorem introduces eight quark transverse momentum-dependent distributions (TMDs) \cite{Mulders:1995dh}, which are listed in table \ref{tab:tmds}. Altogether, these eight TMDs provide a comprehensive description of the nucleon's three-dimensional spin-orbital structure in momentum space. Some of these TMDs (primarily the unpolarized ones) are studied very well theoretically and experimentally (for recent developments, see \cite{Scimemi:2019cmh, Bacchetta:2022awv}). However, several of these TMDs are still almost unexplored. This paper is devoted to the study of the Sivers, Boer-Mulders, worm-gear-T, and worm-gear-L (also known as Kotzinian-Mulders) functions in the limit of small-$b$ (or, equivalently, large transverse momentum) within QCD perturbation theory. TMDs are nonperturbative functions of two kinematic variables, $x$ and $b$, where $x$ is the collinear momentum fraction and $b$ is a transverse vector Fourier-conjugate to the transverse momentum. Different ranges of $x$ and $b$ correspond to different physical pictures, relevant for different processes. In particular, in the limit of small $b$, TMDs turn into ordinary one-dimensional collinear parton distributions. Schematically, this relation has the form \begin{eqnarray}\label{into1} F(x,b)=C(x,\ln(\mu b))\otimes f(x,\mu)+\mathcal{O}(b^2), \end{eqnarray} where $F$ is a TMD, $f$ is a collinear distribution, $C$ is a perturbative coefficient function, and $\otimes$ is an integral convolution. The expansion (\ref{into1}) (also known as the ``matching relation'' \cite{Collins:2011zzd}) follows from the operator product expansion (OPE) and can be derived systematically order-by-order in the coupling constant and powers of $b^2$ \cite{Moos:2020wvd}. Small-$b$ expansions for TMDs have been intensively studied during the last decade. Naturally, the main efforts were devoted to the unpolarized distribution $f_1$, for which the coefficient function is known at next-to-next-to-next-to-leading order (N$^3$LO) in the QCD coupling constant \cite{Ebert:2020qef,Luo:2020epw}. For the other distributions, the analysis is less developed. Thus, the transversity $h_1$ and the linearly-polarized gluon TMD $h^g_1$ are known up to NNLO \cite{Gutierrez-Reyes:2018iod, Gutierrez-Reyes:2019rug}. The helicity $g_1$ is known at NLO \cite{Gutierrez-Reyes:2017glx, Buffing:2017mqm, Bacchetta:2013pqa}. All these TMDs are special because their small-$b$ asymptotics contain only collinear distributions of twist-two. Therefore, their computation is relatively straightforward and can be done with standard techniques. However, the majority of TMDs match collinear distributions of higher twists, making their study more cumbersome. Thus, for Boer-Mulders $h_1^\perp$, worm-gear-T $g_{1T}$, and worm-gear-L $h_{1L}^\perp$ the small-$b$ expansion is known only at LO \cite{Kanazawa:2015ajw, Scimemi:2018mmi}, and for the Sivers function $f_{1T}^\perp$ at NLO \cite{Scimemi:2019gge}. The pretzelocity distribution $h_{1T}^\perp$ differs from the other TMDs. Its leading term is given by a twist-four operator, while the matching is known only for the twist-three part \cite{Moos:2020wvd}. In table \ref{tab:tmds} we indicate the twists of collinear distributions that appear as the leading-power term in eqn.~(\ref{into1}).
\begin{table}[tb] \begin{center} \begin{tabular}{|c||c|c|c|} \hline & U & H & T \\\hline U & $f_1$ (tw2) & & \cellcolor{blue!25} $h_1^\perp$ (tw3) \\\hline L & & $g_1$ (tw2) & \cellcolor{blue!25}$h_{1L}^\perp$ (tw2 \& tw3) \\\hline \multirow{2}{*}{T} & \cellcolor{blue!25} $f_{1T}^\perp$ (tw3) & \cellcolor{blue!25} $g_{1T}$ (tw2 \& tw3) & $h_1$ (tw2) \\ &\cellcolor{blue!25}&\cellcolor{blue!25} & $h_{1T}^\perp$ (tw3 \& tw4) \\\hline \end{tabular} \caption{\label{tab:tmds} Quark TMDs sorted with respect to polarization properties of both the operator (columns) and the hadron (rows). The labels U, H, L, and T are for the unpolarized, helicity, longitudinal, and transverse polarizations. In brackets, we indicate the twist of collinear distributions to which TMDs match at small-$b$. The blue color highlights TMDs that are investigated in this work.} \end{center} \end{table} The usage of matching relations is essential for practical applications. It allows incorporating the already-known parton distribution functions into TMDs, essentially increasing the predictive power of the formalism. In fact, all modern phenomenological extractions of TMDs are based on these relations (see, for instance, \cite{Bacchetta:2017gcc, Scimemi:2017etj, Scimemi:2019cmh, Bacchetta:2022awv, Cammarota:2020qcw, Echevarria:2020hpy, Bury:2022czx}). The twist-two part of the matching relation (the so-called Wandzura-Wilczek-like (WW-like) approximation) is known to work very well for many cases \cite{Cammarota:2020qcw, Bhattacharya:2021twu}. Also, matching relations can be inverted and used to determine collinear distributions from TMDs. For example, the knowledge of the Sivers function provides an essential constraint on the Qiu-Sterman twist-three distribution \cite{Bury:2021sue, Bury:2020vhj}. Finally, the relation (\ref{into1}) links the TMD factorization theorem to the resummation approach \cite{Collins:1981uk}, which is vital for the description of high-energy data. In all these cases, it is critical to employ at least NLO expressions to fix the scaling properties of distributions. This contribution aims to close the remaining gap in the theoretical description of polarized TMDs and compute the small-$b$ expansion for TMDs with leading twist-three contributions at NLO. This includes the Sivers, Boer-Mulders, worm-gear-T, and worm-gear-L functions, highlighted in table \ref{tab:tmds}. There are several approaches to compute higher-twist contributions to the small-$b$ asymptotics of TMDs \cite{Kanazawa:2015ajw, Sun:2013hua, Dai:2014ala, Scimemi:2018mmi, Scimemi:2019gge, Moos:2020wvd}. Among them, the most practical for the present case is the method used in ref.~\cite{Scimemi:2019gge}, i.e. the background-field method with collinear counting. This method is a generalization of the classical approach to deep-inelastic scattering (DIS) \cite{Balitsky:1987bk}. It has been used recently for many higher-twist computations including quasi- and pseudo-distributions \cite{Braun:2021aon, Braun:2021gvv}, leading and sub-leading power TMDs \cite{Scimemi:2019gge, Vladimirov:2021hdn, Rodini:2022wki}. In many aspects, the work presented here is a straightforward generalization of the computation performed in ref. \cite{Scimemi:2019gge} to different polarizations (we also recompute the Sivers function as a cross-check). Therefore, we do not provide a detailed description of the method, which can be found in refs. \cite{Scimemi:2019gge, Braun:2021aon} together with computational examples.
Instead, we provide a general discussion, emphasizing the present case's particularities, and present the final expression. The paper is structured as follows. In section \ref{sec:definitions}, we collect the definitions of TMDs and collinear distributions -- the main subjects of the present work. In section \ref{sec:details}, we provide the essential details on the computation method (referring, for an extended discussion, to \cite{Scimemi:2019gge,Braun:2021aon}). The generalization of $\gamma^5$ to $d$ dimensions and the definition of the gluon correlator are described in more detail in sections \ref{sec:gamma5} and \ref{sec:FDF}, respectively. In section \ref{sec:results}, we present NLO expressions for the Sivers, Boer-Mulders, and worm-gear functions in momentum-fraction space. The position space expressions (split into contributions from the different diagrams) are given in appendix \ref{app:pos}. Appendix \ref{app:evol} collects the expressions for the twist-three evolution kernels used as a cross-check of our computation. \section{Definition of distributions} \label{sec:definitions} In this work, we deal with many parton distributions. For clarity, we collect their definitions and important properties in this section. \subsection{TMD distributions} The quark TMDs are defined for the Drell-Yan process, taken as an example, by the following matrix element \begin{eqnarray}\label{def:TMD-position} \Phi^{[\Gamma]}(x,b)=\frac{1}{2}\int \frac{d z}{2\pi} e^{-ix zp^+} \langle p,S|\bar q(zn+b)[zn+b,-\infty n+b]\Gamma [-\infty n,0]q(0)|p,S\rangle, \end{eqnarray} where $n$ is the light-like vector ($n^2=0$) associated with the large component of the hadron momentum $p$, $b$ is the vector transverse to the $(p,n)$ plane, and $\Gamma$ is a Dirac matrix. $[x,y]$ is the straight Wilson line from $x$ to $y$, \begin{eqnarray} [a_1n+b,a_2n+b]=P\exp\(ig\int_{a_2}^{a_1}d\sigma n^\mu A_\mu(\sigma n+b)\). \end{eqnarray} The standard parameterization of the matrix element (\ref{def:TMD-position}) can be found in ref.~\cite{Mulders:1995dh}. It reads \begin{eqnarray}\label{def:TMDs:1:g+} \Phi^{[\gamma^+]}(x,b)&=&f_1(x,b)+i\epsilon^{\mu\nu}_T b_\mu s_{T\nu}M f_{1T}^\perp(x,b), \\\label{def:TMDs:1:g+5} \Phi^{[\gamma^+\gamma^5]}(x,b)&=&\lambda g_{1}(x,b)+i(b \cdot s_T)M g_{1T}(x,b), \\\label{def:TMDs:1:s+} \Phi^{[i\sigma^{\alpha+}\gamma^5]}(x,b)&=&s_T^\alpha h_{1}(x,b)-i\lambda b^\alpha M h_{1L}^\perp(x,b) \\\nonumber && +i\epsilon^{\alpha\mu}b_\mu M h_1^\perp(x,b)-\frac{M^2 b^2}{2}\(\frac{g_T^{\alpha\mu}}{2}-\frac{b^\alpha b^\mu}{b^2}\)s_{T\mu}h_{1T}^\perp(x,b), \end{eqnarray} where $b^2<0$. Here, \begin{eqnarray} g_T^{\mu\nu}=g^{\mu\nu}-n^\mu \bar n^\nu-\bar n^\mu n^\nu,\qquad \epsilon^{\mu\nu}_T=\bar n_\alpha n_\beta \epsilon^{\alpha\beta\mu\nu}=\epsilon^{-+\mu\nu}, \end{eqnarray} where $\bar n^\mu$ is the light-cone vector ($\bar n^2=0$) associated with the small component of the hadron momentum, i.e. $p^\mu=p^+ \bar n^\mu+M^2 n^\mu/(2p^+)$ with $p^+=(n\cdot p)$. The relative normalization is $(n\cdot\bar n)=1$. The Levi-Civita tensor and $\gamma^5$-matrix are defined in $4$ dimensions as \begin{eqnarray} \epsilon^{0123}=+1,\qquad \gamma^5=-\frac{i}{4!}\epsilon^{\mu\nu\alpha\beta}\gamma_\mu \gamma_\nu\gamma_\alpha\gamma_\beta. \end{eqnarray} Consequently, $\epsilon_T^{12}=\epsilon_{T,12}=+1$.
The variables $\lambda$ and $s_T$ are the longitudinal and transverse components of the spin vector \begin{eqnarray} s^\mu=\lambda\frac{p^+ \bar n^\mu}{M}-\lambda\frac{n^\mu M}{2p^+}+s_T^\mu, \end{eqnarray} where $M$ is the mass of the hadron. This implies $\lambda=M s^+/p^+$. All TMDs are dimensionless real functions that depend on $b^2$ (the argument $b$ is used for brevity). In this work, we consider only the Sivers ($f_{1T}^\perp$), Boer-Mulders ($h_1^\perp$), worm-gear-T ($g_{1T}$) and worm-gear-L ($h_{1L}^\perp$) functions. The definition (\ref{def:TMD-position}) in a SIDIS-like process has the Wilson line pointing to $+\infty n$ \cite{Boer:2003cm} instead of $-\infty n$. The T-even TMDs (in the present context, these are the worm-gear functions, $g_{1T}$ and $h_{1L}^\perp$) are independent of the direction of the staple contour due to the T-invariance of QCD. They are the same for Drell-Yan-like and SIDIS-like cases. In contrast, the T-odd TMDs (Sivers $f_{1T}^\perp$ and Boer-Mulders $h_{1}^\perp$ functions) depend on the direction of the staple contour. One has \cite{Collins:2002kn} \begin{eqnarray}\label{sign-change} f_{1T}^\perp(x,b)\Big|_{\text{DY}}=-f_{1T}^\perp(x,b)\Big|_{\text{SIDIS}},\qquad h_{1}^\perp(x,b)\Big|_{\text{DY}}=-h_{1}^\perp(x,b)\Big|_{\text{SIDIS}}. \end{eqnarray} Apart from the sign change, the TMDs are identical in both cases. In the following, we assume the DY-like definition, unless otherwise specified. The bare TMDs contain two types of divergences -- ultraviolet and rapidity divergences. Both types of divergences are multiplicatively renormalizable \cite{Vladimirov:2017ksc}. As a consequence, the renormalized TMD depends on two scales $\mu$ and $\zeta$. These dependencies are described by the evolution equations \begin{eqnarray}\label{TMD-evol} \mu^2 \frac{d F(x,b;\mu,\zeta)}{d\mu^2}=\frac{\gamma_F(\mu,\zeta)}{2}F(x,b;\mu,\zeta), \qquad \zeta \frac{d F(x,b;\mu,\zeta)}{d\zeta}=-\mathcal{D}(b,\mu)F(x,b;\mu,\zeta), \end{eqnarray} where $F$ is any TMD, $\gamma_F$ is the TMD anomalous dimension, and $\mathcal{D}$ is the Collins-Soper kernel. At LO, these kernels are \cite{Aybat:2011zv} \begin{eqnarray} \gamma_F(\mu,\zeta)=a_s(\mu) \(4C_F\mathbf{l}_\zeta+6C_F\)+\mathcal{O}(a_s^2), \qquad \mathcal{D}(b,\mu)=a_s(\mu) 2C_F\mathbf{L}_b+\mathcal{O}(a_s^2,b^2), \end{eqnarray} where \begin{eqnarray}\label{def:logs} a_s(\mu)=\frac{g^2}{(4\pi)^2},\qquad \mathbf{l}_\zeta=\ln\(\frac{\mu^2}{\zeta}\),\qquad \mathbf{L}_b=\ln\(\frac{(-b^2)\mu^2}{4e^{-2\gamma_E}}\), \end{eqnarray} with $g$ the QCD coupling constant and $\gamma_E$ the Euler-Mascheroni constant. In the following text, we often omit the scales $(\mu,\zeta)$ to simplify notation. These scales can be reconstructed from the context. The relation between momentum and position space TMDs is \begin{eqnarray} \Phi^{[\Gamma]}(x,k_T)=\int \frac{d^2b}{(2\pi)}e^{-i(b\cdot k_T)}\Phi^{[\Gamma]}(x,b), \end{eqnarray} where $k_T$ is the transverse momentum ($k_T^2<0$). The transformations for individual TMDs can be found in refs. \cite{Boer:2011xd, Scimemi:2018mmi}. The momentum-space definition is less convenient for theoretical computations. Therefore, in the following, we use only position space TMDs. \subsection{Collinear distributions of twist-two} The collinear distributions of twist-two are defined as follows (see e.g.
\cite{Jaffe:1996zw}) \begin{eqnarray}\label{def:f1-coll} \langle p,S|\bar q(zn)[zn,0] \gamma^+q(0)|p,S\rangle &=& 2 p^+ \int_{-1}^1 dx e^{i xzp^+}f_1(x), \\\label{def:g1-coll} \langle p,S|\bar q(zn)[zn,0] \gamma^+\gamma^5 q(0)|p,S\rangle &=& 2 \lambda p^+ \int_{-1}^1 dx e^{i xzp^+}g_1(x), \\\label{def:h1-coll} \langle p,S|\bar q(zn)[zn,0] i\sigma^{\alpha +}\gamma^5 q(0)|p,S\rangle &=& 2 s_T^\alpha p^+ \int_{-1}^1 dx e^{i xzp^+}h_1(x), \end{eqnarray} where $\alpha$ is a transverse index. These distributions are known as the unpolarized ($f_1$), helicity ($g_1$) and transversity ($h_1$) distributions. They are defined for $x\in[-1,1]$ and are zero for $|x|>1$. The distributions with negative $x$ are usually interpreted as distributions of antiquarks, \begin{eqnarray}\nonumber f_1(x)&=&\theta(x)f_{1,q}(x)-\theta(-x)f_{1,\bar q}(-x), \\\label{definite-flavor} g_1(x)&=&\theta(x)g_{1,q}(x)+\theta(-x)g_{1,\bar q}(-x), \\\nonumber h_1(x)&=&\theta(x)h_{1,q}(x)-\theta(-x)h_{1,\bar q}(-x). \end{eqnarray} In the present work, the unpolarized distribution does not appear, and is presented here only for comparison. Note that the notation $f_1$, $g_1$ and $h_1$ is the same for TMD distributions and collinear distributions. We distinguish these functions by their arguments, which are $(x,b)$ for TMDs and $(x)$ for collinear distributions. The gluon collinear distributions are defined as \begin{eqnarray}\label{def:gluon-coll} \langle p,S|F_{\mu+}(zn)[zn,0] F_{\nu+}(0)|p,S\rangle &=&(p^+)^2 \int_{-1}^1 dx e^{i xzp^+} \frac{x}{2}\(-g^{\mu\nu}_T f_g(x)-i\epsilon^{\mu\nu}_T \Delta f_g(x)\), \end{eqnarray} where $f_g$ and $\Delta f_g$ are the unpolarized and helicity gluon distributions. Gluon distributions satisfy the relation \begin{eqnarray} f_g(-x)=-f_g(x),\qquad \Delta f_g(-x)=+\Delta f_g(x). \end{eqnarray} In dimensional regularization (with $d=4-2\epsilon$) the definition of gluon distributions (\ref{def:gluon-coll}) is modified and takes the form \begin{eqnarray}\label{def:gluon-coll-d} \langle p,S|F_{\mu+}(zn)[zn,0] F_{\nu+}(0)|p,S\rangle &=&(p^+)^2 \int_{-1}^1 dx e^{i xzp^+} \frac{x}{2}\(-\frac{g^{\mu\nu}_Tf_g(x)}{1-\epsilon} -\frac{i\epsilon^{\mu\nu}_T\Delta f_g(x)}{(1-\epsilon)(1-2\epsilon)} \), \end{eqnarray} where $\epsilon_T^{\mu\nu}$ is the $d$-dimensional generalized Levi-Civita tensor (see sec. \ref{sec:gamma5}). The $\epsilon$-dependent factors are chosen such that the contraction of the correlator's matrix element with $g_T^{\mu\nu}$ or $\epsilon_T^{\mu\nu}$ yields the same result in any dimension. The scale-dependence of a twist-two distribution $F$ is given by the DGLAP-type equation \begin{eqnarray}\label{evol-tw2} \mu^2 \frac{d F_f(x,\mu)}{d\mu^2} = \sum_{f'}\int_x^1 \frac{dy}{y} P_{f\leftarrow f'}(y) F_{f'}\(\frac{x}{y},\mu\), \end{eqnarray} where $f$ labels the parton flavor, and $P$ is the evolution kernel. In this work we need only LO expressions for $P$, which can be found, e.g., in \cite{Jaffe:1996zw}.
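Equation (\ref{evol-tw2}) is straightforward to implement numerically once the plus-prescription is written in subtracted form. The following minimal sketch (ours; the toy input distribution, the value of the coupling and the step size are assumptions) performs a single evolution step with the standard LO non-singlet kernel $P_{q\leftarrow q}(y)=C_F\big[\big(\frac{1+y^2}{1-y}\big)_+ +\frac{3}{2}\delta(1-y)\big]$, where we pull the overall factor $a_s$ out of the kernel:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

CF = 4.0 / 3.0

def f(x):                              # toy non-singlet quark distribution
    return x**0.5 * (1.0 - x)**3

def Pqq_conv(x):
    # plus-prescription in subtracted form:
    # int_0^1 dy [g(y)]_+ h(y) = int_0^1 dy g(y) (h(y) - h(1))
    reg, _ = quad(lambda y: (1 + y*y)/(1 - y) * (f(x/y)/y - f(x)), x, 1.0)
    below = -x - x*x/2 - 2.0*np.log1p(-x)   # int_0^x dy (1+y^2)/(1-y)
    return CF * (reg - f(x)*below + 1.5*f(x))

a_s, dlnmu2 = 0.02, 0.1                # illustrative coupling and step in ln(mu^2)
for x in (0.1, 0.3, 0.5, 0.7):
    print(x, f(x), f(x) + a_s * dlnmu2 * Pqq_conv(x))
\end{verbatim}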
\subsection{Collinear distributions of twist-three} The twist-three distributions parametrize the three-point light-cone operators. The quark-gluon-quark distributions are defined as \begin{eqnarray} && \langle p,S|g\bar q(z_1n)F^{\mu+}(z_2n) \gamma^+q(z_3n)|p,S\rangle \\\nonumber && \qquad\qquad = 2 \epsilon^{\mu\nu}_T s_\nu (p^+)^2 M \int[dx] e^{-ip^+(x_1z_1+x_2z_2+x_3z_3)}T(x_1,x_2,x_3), \\ &&\langle p,S|g\bar q(z_1n)F^{\mu+}(z_2n) \gamma^+\gamma^5q(z_3n)|p,S\rangle \\\nonumber && \qquad\qquad = 2i s_T^\mu (p^+)^2 M \int[dx] e^{-ip^+(x_1z_1+x_2z_2+x_3z_3)}\Delta T(x_1,x_2,x_3), \\ &&\langle p,S|g\bar q(z_1n)F^{\mu+}(z_2n) i\sigma^{\nu+}\gamma^5q(z_3n)|p,S\rangle \\\nonumber && \qquad\qquad = 2 (p^+)^2 M \int[dx] e^{-ip^+(x_1z_1+x_2z_2+x_3z_3)} \(\epsilon^{\mu\nu}_T E(x_1,x_2,x_3)+i\lambda g^{\mu\nu}_T H(x_1,x_2,x_3)\), \end{eqnarray} where $F_{\mu\nu}$ is the gluon field-strength tensor, and we have omitted the Wilson links $[z_1n,z_2n]$ and $[z_2n,z_3n]$ for brevity. The integral measure \begin{eqnarray}\label{def:[dx]} \int [dx]=\int_{-1}^1 dx_1 d x_2 dx_3 \delta(x_1+x_2+x_3), \end{eqnarray} reflects momentum conservation. Note that in the above definitions, by convention, the phase of the exponential has the opposite sign compared to the twist-two distributions. The quark-gluon-quark distributions are real-valued functions that satisfy the symmetry relations \begin{align}\label{def:sym-quark} T(x_1,x_2,x_3)&=T(-x_3,-x_2,-x_1),& \qquad \Delta T(x_1,x_2,x_3)&=-\Delta T(-x_3,-x_2,-x_1), \\\nonumber E(x_1,x_2,x_3)&=E(-x_3,-x_2,-x_1),& \qquad H(x_1,x_2,x_3)&=-H(-x_3,-x_2,-x_1). \end{align} Often it is convenient to use the following combination \begin{eqnarray} S^\pm(x_1,x_2,x_3)=\frac{-T(x_1,x_2,x_3)\pm \Delta T(x_1,x_2,x_3)}{2}. \end{eqnarray} In the literature one can find different notations for these distributions. For example, ref. \cite{Kang:2008ey} defines $\widetilde{\mathcal{T}}_{q,F}(x_3,-x_1)=MT(x_1,-x_1-x_3,x_3)$, and $\widetilde{\mathcal{T}}_{\Delta q,F}(x_3,-x_1)=M\Delta T(x_1,-x_1-x_3,x_3)$, and ref. \cite{Scimemi:2018mmi} defines $\delta T_\epsilon=E$ and $\delta T_g=H$. A dictionary between the different notations is provided by ref. \cite{Scimemi:2018mmi}. For the three-gluon distributions, a standard definition has not yet been established. Here we follow the convention of ref. \cite{Scimemi:2019gge}, in which the three-gluon correlators are parametrized as \begin{eqnarray} && \langle p,S|igf^{ABC}F^{\mu+}_A(z_1n)F^{\nu+}_B(z_2n) F^{\rho+}_C(z_3n)|p,S\rangle \\\nonumber && \qquad\qquad = (p^+)^3 M \int[dx] e^{-ip^+(x_1z_1+x_2z_2+x_3z_3)}\sum_{i}t_i^{\mu\nu\rho}F_i^+(x_1,x_2,x_3), \\ && \langle p,S|gd^{ABC}F^{\mu+}_A(z_1n)F^{\nu+}_B(z_2n) F^{\rho+}_C(z_3n)|p,S\rangle \\\nonumber && \qquad\qquad = (p^+)^3 M \int[dx] e^{-ip^+(x_1z_1+x_2z_2+x_3z_3)}\sum_{i}t_i^{\mu\nu\rho}F_i^-(x_1,x_2,x_3), \end{eqnarray} where $f^{ABC}$ and $d^{ABC}$ are the anti-symmetric and symmetric structure constants of SU($N_c$). There are six tensor structures $t_i$. Their complete derivation and classification is given in appendix A of ref. \cite{Scimemi:2019gge}. Only three structures are non-vanishing for $d=4$.
These are \begin{eqnarray}\nonumber t_2^{\mu\nu\rho}&=& s^\alpha_T \epsilon^{\mu\alpha}_T g_T^{\nu\rho} +s^\alpha_T \epsilon^{\nu\alpha}_T g_T^{\rho\mu} +s^\alpha_T \epsilon^{\rho\alpha}_T g_T^{\mu\nu}, \\\label{def:tensor-t} t_4^{\mu\nu\rho}&=& -s^\alpha_T \epsilon^{\mu\alpha}_T g_T^{\nu\rho} +2s^\alpha_T \epsilon^{\nu\alpha}_T g_T^{\rho\mu} -s^\alpha_T \epsilon^{\rho\alpha}_T g_T^{\mu\nu}, \\\nonumber t_6^{\mu\nu\rho}&=& s^\alpha_T \epsilon^{\mu\alpha}_T g_T^{\nu\rho} -s^\alpha_T \epsilon^{\rho\alpha}_T g_T^{\mu\nu}. \end{eqnarray} The other structures (i.e. $t_{3,5,7}^{\mu\nu\rho}$) parametrize evanescent operators. In general, these contributions are non-zero in dimensional regularization and should be taken into account during the renormalization procedure \cite{Dugan:1990df}. However, in the present calculation they do not contribute to the pole part, and thus decouple. For that reason these functions can be set to zero in $d=4$. The three-gluon functions are defined as \cite{Scimemi:2019gge} \begin{eqnarray}\label{def:F24} F_2^\pm(x_1,x_2,x_3)=-\frac{G_\pm(x_1,x_2,x_3)}{2(2-\epsilon)}, \qquad F_4^\pm(x_1,x_2,x_3)=-\frac{Y_\pm(x_1,x_2,x_3)}{2(1-2\epsilon)}. \end{eqnarray} The distribution $F_6$ can be expressed via $Y_\pm$ \begin{eqnarray}\label{def:F6} F_6^\pm(x_1,x_2,x_3)=\pm\frac{Y_\pm(x_1,x_3,x_2)-Y_\pm(x_2,x_1,x_3)}{2(1-2\epsilon)}. \end{eqnarray} As in the twist-two case (\ref{def:gluon-coll-d}), the $\epsilon$-dependent factors are chosen such that most of the $\epsilon$-dependence at NLO cancels. The distributions $G_\pm$ and $Y_\pm$ satisfy the following symmetry relations \begin{eqnarray}\nonumber &&G_\pm(x_1,x_2,x_3)= G_\pm(-x_3,-x_2,-x_1)= \mp G_\pm(x_2,x_1,x_3)= \mp G_\pm(x_1,x_3,x_2), \\\label{def:sym-gluon} &&Y_\pm(x_1,x_2,x_3)= Y_\pm(-x_3,-x_2,-x_1)= \mp Y_\pm(x_3,x_2,x_1), \\\nonumber &&Y_\pm(x_1,x_2,x_3)+Y_\pm(x_2,x_3,x_1)+Y_\pm(x_3,x_1,x_2)=0. \end{eqnarray} These relations constrain the internal structure of three-gluon distributions \cite{Scimemi:2019gge}. For a comparison of our convention with others see ref. \cite{Scimemi:2019gge}. All twist-three distributions are functions of two variables, since the third variable is fixed by the momentum conservation condition $x_1+x_2+x_3=0$. Nevertheless, we use the three-variable notation for its convenience, since in this notation the symmetry transformations (\ref{def:sym-quark}, \ref{def:sym-gluon}) are more transparent. Also, each sector $(x_i\lessgtr 0)$ has a special interpretation in the parton picture \cite{Jaffe:1983hp}, which is harder to see in the two-variable notation. The set of parton distributions $\{T, \Delta T, E, H, G_\pm, Y_\pm\}$ evolves autonomously under a change of renormalization scale $\mu$ \cite{Balitsky:1987bk, Braun:2009vc}, \begin{eqnarray}\label{evol-tw3} \mu^2\frac{d F_1(x_1,x_2,x_3;\mu)}{d\mu^2}=\sum_{F_2}\int [dy] K_{F_1\leftarrow F_2}(x_1,x_2,x_3;y_1,y_2,y_3;a_s)F_2(y_1,y_2,y_3;\mu), \end{eqnarray} where $F_{1,2}\in \{T, \Delta T, E, H, G_\pm, Y_\pm\}$. Moreover, the chiral-odd distributions $E$ and $H$ do not mix with other distributions. The expressions for the evolution kernels $K_{F_1\leftarrow F_2}$ are rather long, and not explicitly needed in the present work. For the reader's convenience we present them in position space in appendix \ref{app:evol}. The momentum space expressions are much more cumbersome \cite{Ji:2014eta}.
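For later numerical checks it is useful to note that the measure (\ref{def:[dx]}) reduces to a two-dimensional integral once the $\delta$-function is solved for $x_3=-x_1-x_2$. A minimal sketch (ours; the toy model for $G_+$ is an assumption) implementing the measure and verifying the `$+$' case of the relations (\ref{def:sym-gluon}):
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad

def int_dx(F):
    """int [dx] F(x1,x2,x3) with x1+x2+x3 = 0 and |x_i| <= 1."""
    integrand = lambda x2, x1: F(x1, x2, -x1 - x2) if abs(x1 + x2) <= 1 else 0.0
    val, _ = dblquad(integrand, -1, 1, lambda x1: -1, lambda x1: 1)
    return val

# toy G_+ obeying (def:sym-gluon): totally antisymmetric, even under
# (x1,x2,x3) -> (-x3,-x2,-x1)
Gp = lambda x1, x2, x3: (x1 - x2) * (x2 - x3) * (x3 - x1)

pts = np.random.default_rng(1).uniform(-0.5, 0.5, (100, 2))
for x1, x2 in pts:
    x3 = -x1 - x2
    assert np.isclose(Gp(x1, x2, x3),  Gp(-x3, -x2, -x1))
    assert np.isclose(Gp(x1, x2, x3), -Gp(x2, x1, x3))
    assert np.isclose(Gp(x1, x2, x3), -Gp(x1, x3, x2))
print("int[dx] Gp =", int_dx(Gp))   # vanishes by antisymmetry
\end{verbatim}
The sharp support boundary makes the quadrature slow, but it is adequate for illustration.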
The set of parton distributions $\{T, \Delta T, E, H, G_\pm, Y_\pm\}$ is complete in the sense that all other twist-three distributions can be expressed in this basis (and, possibly, twist-two distributions). For example, the twist-three distributions $g_T$, $h_L$ and $e$ \cite{Jaffe:1996zw} can be expressed in terms of $\{T,\Delta T\}$, $H$ and $E$ (see e.g. \cite{Scimemi:2018mmi, Braun:2021aon, Braun:2021gvv}). \section{Evaluation of small-$b$ expansion} \label{sec:details} The NLO computation presented in this work has been done using the background-field method. It is a very well developed method for the computation of perturbative corrections involving higher-twist operators. A detailed explanation of the method can be found in refs. \cite{Balitsky:1987bk, Scimemi:2019gge, Braun:2021aon, Braun:2021gvv, Vladimirov:2021hdn}. We skip the detailed description of the computation process, which can be found in refs. \cite{Scimemi:2019gge, Braun:2021aon}. In this section, we present a general discussion, and focus on the particularities of the current case. \subsection{General structure of small-$b$ expansion} In the small-$b$ regime the TMD operator can be expressed as a series of light-cone operators with increasing dimensions, \begin{eqnarray}\label{OPE1} \Phi^{[\Gamma]}(x,b)=\phi^{[\Gamma]}(x)+b^\mu \phi_{\mu}^{[\Gamma]}(x)+b^\mu b^\nu \phi_{\mu\nu}^{[\Gamma]}(x)+...~. \end{eqnarray} Here, the leading terms are \begin{eqnarray}\label{phi:tw2} \phi^{[\Gamma]}(x)&=&\frac{1}{2}\int \frac{dz}{2\pi}e^{-ixzp^+}\langle p,S|\bar q(zn)[zn,0]\Gamma q(0)|p,S\rangle, \\\label{phi:tw3} \phi^{[\Gamma]}_{\mu}(x)&=&\frac{1}{2}\int \frac{dz}{2\pi}e^{-ixzp^+}\langle p,S|\bar q(zn)[zn,-\infty n]\overleftarrow{D_\mu}[-\infty n,0]\Gamma q(0)|p,S\rangle, \end{eqnarray} where $D_\mu$ is the QCD covariant derivative. The series (\ref{OPE1}) is a particular application of the light-cone OPE and can also be written as a series of local operators \cite{Moos:2020wvd}. The matrix element (\ref{phi:tw2}) can be expressed in terms of collinear parton distributions of twist-two, while (\ref{phi:tw3}) involves distributions of twist-two and twist-three. The higher dimension matrix elements involve higher-twist distributions. There is no simple correspondence between the twist of TMDs and the twist of the leading contribution of its small-$b$ series. The factors $b^\mu$ in the parametrization of TMDs (\ref{def:TMDs:1:g+} -- \ref{def:TMDs:1:s+}) spoil the counting, and thus the series for individual TMDs start with terms of different twist\footnote{ The coefficients in the parametrization of TMDs are not the only cause of the spoiled counting. There can also be singular contributions $\sim b^{-2}$ that appear in loop diagrams \cite{Rodini:2022wki}. However, this happens only for TMDs of higher twist. }. So, the small-$b$ series for the TMDs $f_1$, $g_1$ and $h_1$ start with (\ref{phi:tw2}) and have leading contributions of twist-two \cite{Bacchetta:2013pqa, Echevarria:2015uaa, Gutierrez-Reyes:2017glx}. The small-$b$ series for the TMDs $f_{1T}^\perp$, $g_{1T}$, $h_{1L}^\perp$ and $h_1^\perp$ start with operators of type (\ref{phi:tw3}) and involve twist-three distributions \cite{Boer:2003cm, Kang:2011mr, Kanazawa:2015ajw, Scimemi:2018mmi}. Finally, the pretzelosity distribution $h_{1T}^\perp$ starts with $\phi_{\mu\nu}(x)$ and its leading term already contains twist-four contributions \cite{Moos:2020wvd}. The expression (\ref{OPE1}) is a tree-level expression.
Accounting for quantum corrections modifies (\ref{OPE1}) by terms $\sim a_s=\alpha_s/4\pi$. These terms can be absorbed into the coefficient functions, which enter in convolution with collinear distributions. For example, the twist-two term turns into \begin{eqnarray}\label{OPE-coef-f} \phi_f^{[\Gamma]}(x)\to \sum_{f'} \int_x^1 \frac{dy}{y} C_{f\leftarrow f'}(y,\ln b^2;\mu,\zeta;\mu_{\text{OPE}})\phi_{f'}^{[\Gamma]}\(\frac{x}{y},\mu_{\text{OPE}}\), \end{eqnarray} where the indices $f$ label contributions of different parton content. The coefficient function explicitly contains the dependence on $(\mu,\zeta)$. It also contains the $\mu_{\text{OPE}}$-scale, which is the scale of the OPE. The whole expression (\ref{OPE-coef-f}) is independent of $\mu_{\text{OPE}}$. Using the TMD evolution equations (\ref{TMD-evol}) and the evolution equation for collinear distributions (\ref{evol-tw2}), one can deduce the part of the coefficient function proportional to logarithms (see e.g. \cite{Echevarria:2016scs}). In what follows, we set $\mu_{\text{OPE}}=\mu$ for simplicity, such that the coefficient function depends only on $(a_s(\mu), \mathbf{L}_b, \mathbf{l}_\zeta)$. Therefore, the small-$b$ expansion for the TMDs $F\in\{f_1, g_1, h_1\}$ takes the form \begin{eqnarray} F_f(x,b;\mu,\zeta)=\sum_{f'} \int_x^1 \frac{dy}{y} C^{F}_{f\leftarrow f'}(y;\mathbf{L}_b,\mathbf{l}_\zeta)f_{f'}\(\frac{x}{y},\mu\)+\mathcal{O}(b^2), \end{eqnarray} with $f$ being collinear distributions of twist-two. The expressions for twist-three have a similar general structure, but a more involved form. Generally, for $F\in \{f_{1T}^\perp, g_{1T}, h_{1L}^\perp, h_1^\perp\}$ one has \begin{eqnarray} F_f(x,b;\mu,\zeta)&=&\sum_{f'} \int_x^1 \frac{dy}{y} C^{F,\text{tw2}}_{f\leftarrow f'}(y;\mathbf{L}_b,\mathbf{l}_\zeta)f_{f'}\(\frac{x}{y},\mu\) \\\nonumber && +\sum_{f'} \int [dx] C^{F,\text{tw3}}_{f\leftarrow f'}(x,x_1,x_2,x_3;\mathbf{L}_b,\mathbf{l}_\zeta)t_{f'}(x_1,x_2,x_3;\mu), \end{eqnarray} where $f$ and $t$ are distributions of twist-two and twist-three, respectively. Note that in the case of the Sivers and Boer-Mulders functions $C^{\text{tw2}}=0$. The coefficient functions for the Sivers function are known at NLO \cite{Scimemi:2019gge}. For the other functions they are known at LO \cite{Boer:2003cm, Kang:2011mr, Kanazawa:2015ajw, Scimemi:2018mmi}, and are computed here at NLO. \subsection{Computation} In a nutshell, the computation within the background-field method consists of the following steps. \begin{enumerate} \item The matrix element for a TMD is presented in a functional-integral form. Then the QCD fields are split into the quantum and background modes ($q(x)=q_{\text{quan.}}(x)+q_{\text{back.}}(x)$), with corresponding momentum counting. \item The quantum modes are (functionally) integrated using both the perturbative expansion and the expansion in the number of background fields. The Lagrangian of the quantum-to-background fields interaction can be found in ref. \cite{Abbott:1981ke}. As a result of the integration, one obtains the effective operator. \item The effective operator is decomposed in the basis of definite-twist operators using equations of motion and algebraic manipulations. \end{enumerate} During this procedure one expects that the hadron is composed of the low-energy fields only, and thus that the highly energetic quantum modes do not contribute to its wave function. Therefore, the computation is done on the level of the operator itself, without any reference to the hadron state.
For a detailed discussion of each step in the concrete application to TMD operators (the Sivers function) we refer to \cite{Scimemi:2019gge}. \begin{figure}[t] \begin{center} \includegraphics[width=0.9\textwidth]{TMDtw3Diags} \caption{\label{fig:diags} Diagrams contributing to the NLO effective operator at the twist-two and twist-three level. The dashed lines show the half-infinite Wilson lines. The mirror diagrams to (A, C, D, E) should be added.} \end{center} \end{figure} At the twist-three level one has to compute all diagrams of mass-dimension four. They are shown in fig.~\ref{fig:diags}. The diagrams with two external fields (A, B, G) have to be computed up to a single transverse-derivative contribution. These diagrams contain twist-two and twist-three parts, which can be identified using the QCD equations of motion. The diagrams with three external fields (C, D, E, F, H) contain only twist-three terms. The diagrams have been evaluated in position space. It is the preferred representation for dealing with higher-twist operators, because the resulting expressions are much shorter than in momentum space. Examples of diagram computations in this technique can be found in the appendices of refs. \cite{Scimemi:2019gge, Braun:2021aon, Vladimirov:2021hdn}. The final expressions in position space are presented in appendix \ref{app:pos}. The subsequent Fourier transformation to momentum space is laborious but straightforward. As a by-product of the computations for diagrams A and B, we obtained the NLO matching coefficients for the TMDs $f_1$, $g_1$ and $h_1$. Our expressions coincide with the well-known results \cite{Bacchetta:2013pqa, Echevarria:2015uaa, Gutierrez-Reyes:2017glx, Buffing:2017mqm}. This served as an intermediate check of our computation. The computation is done for the bare operators and requires renormalization. Schematically, the renormalization has the form \begin{eqnarray} \Phi_{\text{renor.}}(\mu,\zeta)= Z^{-1}_{UV}(\mu,\zeta)R^{-1}(\zeta)\Phi_{\text{bare}} = Z^{-1}_{UV}(\mu,\zeta)R^{-1}(\zeta)\(C_{\text{bare}}\otimes \phi_{\text{bare}}+...\), \end{eqnarray} where in the last equality we inserted the bare small-$b$ expansion. Here, $Z_{UV}$ is the ultraviolet renormalization factor, and $R$ is the rapidity renormalization factor. We also renormalize the collinear distribution and obtain \begin{eqnarray} \Phi_{\text{renor.}}(\mu,\zeta)=C_{\text{renor.}}(\mu,\zeta,\mu_{\text{OPE}})\otimes \phi_{\text{renor.}}(\mu_{\text{OPE}}), \end{eqnarray} where \begin{eqnarray} C_{\text{renor.}}(\mu,\zeta,\mu_{\text{OPE}})=Z^{-1}_{UV}(\mu,\zeta)R^{-1}(\zeta)C_{\text{bare}}\otimes Z_{\phi}(\mu_{\text{OPE}}), \end{eqnarray} with $Z_\phi$ the renormalization factor for the collinear distribution $\phi$. The function $C_{\text{renor.}}$ is finite. To regularize divergences we use a combination of dimensional regularization and $\delta$-regularization (for rapidity divergences), which has been used in many TMD-related computations (see e.g. refs. \cite{Echevarria:2016scs, Echevarria:2012js, Buffing:2017mqm}).
Collecting expressions for the LO renormalization factors \cite{Aybat:2011zv, Echevarria:2015byo}, we derive the following pocket formula for the renormalization of the NLO coefficient functions \begin{eqnarray}\nonumber C^{\text{NLO}}_{\text{renorm}}&=&\mu^{2\epsilon}e^{\epsilon\gamma_E} C_{\text{bare}}^{\text{NLO}} +\Big[ \mu^{2\epsilon}e^{\epsilon\gamma_E}2 \(\frac{-b^2}{4}\)^\epsilon C_F \Gamma(-\epsilon)\(\mathbf{L}_b-\mathbf{l}_\zeta+2\ln\(\frac{\delta^+}{p^+}\)-\psi(-\epsilon)-\gamma_E\) \\\label{renormalization} && -C_F\(\frac{2}{\epsilon^2}+\frac{3+2\mathbf{l}_\zeta}{\epsilon}\)-\frac{a_s}{\epsilon}\mathbb{H}\otimes\Big]C^{\text{LO}}, \end{eqnarray} where the factors $\mu^{2\epsilon}e^{\epsilon \gamma_E}$ are the usual factors of the $\overline{\text{MS}}$-scheme, $\delta^+$ is the parameter of the $\delta$-regularization, $\epsilon$ is the parameter of the dimensional regularization ($d=4-2\epsilon$), and $\mathbb{H}$ is the LO evolution kernel for the corresponding collinear distribution. The cancellation of divergences in this combination is a very sensitive check of the computation. \subsection{Treatment of $\gamma_5$} \label{sec:gamma5} The $\gamma^5$ matrix requires additional treatment in dimensional regularization. In our computation we use the ``Larin+''-scheme introduced in ref. \cite{Gutierrez-Reyes:2017glx}. This is based on the four-dimensional identity \begin{eqnarray}\label{larin+} \gamma^+\gamma^5=\frac{i}{2!}\epsilon_T^{\mu\nu} \gamma^+\gamma_{\mu}\gamma_\nu. \end{eqnarray} The anti-symmetric tensor $\epsilon_T^{\mu\nu}$ is generalized to an arbitrary number of dimensions by means of the identity \begin{eqnarray}\label{ee=gg} \epsilon_T^{\mu_1\mu_2}\epsilon_T^{\nu_1\nu_2}=g_T^{\mu_1\nu_1}g_T^{\mu_2\nu_2}-g_T^{\mu_1\nu_2}g_T^{\mu_2\nu_1}. \end{eqnarray} This generalization is different from the ordinary Larin-scheme\footnote{ In the Larin scheme, one uses the identity $\gamma^+\gamma^5=i\epsilon^{+\mu\nu\rho} \gamma_{\mu}\gamma_\nu \gamma_\rho/3!$, and defines the four-index $\epsilon^{\mu\nu\rho\lambda}$ using the identity $\epsilon^{\mu_1\mu_2\mu_3\mu_4}\epsilon^{\nu_1\nu_2\nu_3\nu_4}=-g^{\mu_1\nu_1}g^{\mu_2\nu_2}g^{\mu_3\nu_3}g^{\mu_4\nu_4}+...$~. Therefore, the Larin-scheme treats all directions of the space-time on an equal footing, whereas the ``Larin+''-scheme (\ref{larin+}) specifically identifies two light-cone directions. } \cite{Larin:1993tq}. The ``Larin+''-scheme is preferable to the Larin-scheme, because it preserves the TMD-twist of an operator \cite{Gutierrez-Reyes:2017glx, Rodini:2022wki}, and consequently, the structure of its divergences. The generalization of the $\gamma^5$ matrix to $d$ dimensions could also involve a multiplication by a scheme-dependent factor $Z_5$. However, there is no necessity to introduce such a factor for the TMD operators, because their renormalization is independent of the $\Gamma$-structure (as long as it preserves the TMD-twist). The factor $Z_5$ in the ``Larin+''-scheme has been computed in ref. \cite{Gutierrez-Reyes:2017glx} by demanding the equality between helicity and unpolarized coefficient functions, \begin{eqnarray}\label{larin-condition} Z_5\otimes C_{q\leftarrow q}^{[\Gamma=\gamma^+\gamma^5]}=C_{q\leftarrow q}^{[\Gamma=\gamma^+]}. \end{eqnarray} Unfortunately, up to now, no accurate generalization of this scheme to the twist-three case exists. In this work, we use the following procedure, which allows us to (partially) bypass the problems associated with the definition of $\gamma^5$.
First of all, we note that the problem exists only for the worm-gear-T function $g_{1T}$. For the chiral-odd operators with $\Gamma=i\sigma^{\alpha+}\gamma^5$, the $\gamma^5$-factor is illusory since $i\sigma^{\alpha+}\gamma^5=-\epsilon_T^{\alpha\beta}\sigma_{\beta+}$. The twist-two part of the function $g_{1T}$ can be computed using the standard definition. For the twist-three part of $g_{1T}$, we distinguish quark and gluon contributions. For the pure quark contributions we use an anti-commuting $\gamma^5$ (which is equivalent to implementing condition (\ref{larin-condition})). For the gluon contributions (diagrams G and H) we compute the trace using (\ref{larin+}) and (\ref{ee=gg}). The result of this procedure (at NLO for the coefficient function) is equivalent to an $\overline{\text{MS}}$ twist-two computation. The deviations appear in terms suppressed by $\epsilon$ and at NNLO. It is straightforward to prove that the current scheme is equivalent at NLO to the 't Hooft-Veltman-Breitenlohner-Maison \cite{tHooft:1972tcz, Breitenlohner:1977hr} scheme. \subsection{Twist-decomposition of the $F_{\mu+}D_\alpha F_{\nu+}$ operator} \label{sec:FDF} The diagrams A, B, and G result in two-point operators of generic twist-three. Such operators must be rewritten in terms of definite twist-two and twist-three operators, which can be accomplished by using Dirac algebra and equations of motion. For the diagrams A and B, these operators have the form $\bar q(zn)[zn,0] \Gamma_T q(0)$ where $\Gamma \in \{\gamma^\mu, \gamma^\mu\gamma^5, \sigma^{\mu\nu}\}$ (with $\mu$ and $\nu$ being transverse indices), and $\bar q(zn)[zn,0] \Gamma_+ D_\mu q(0)$. The decomposition of such operators can be found in the literature, e.g. in refs. \cite{Balitsky:1987bk, Scimemi:2018mmi, Moos:2020wvd}. A typical relation has the form \begin{eqnarray}\label{gT} \langle p,S|\bar q(zn)[zn,0]\gamma^\mu \gamma^5q(0)|p,S\rangle &=& 2s_T^\mu M\int_{-1}^1 dx e^{ix \zeta }g_T(x) \\\nonumber &=& 2s_T^\mu M\Big(\int_0^1 d\alpha \widehat{g_1}(\zeta)+2\zeta^2 \int_0^1 d\alpha \int_0^{\bar \alpha} d\beta \beta \widehat{S}^+(\bar \alpha \zeta,\beta \zeta,0)\Big), \end{eqnarray} where $\zeta=zp_+$, $\bar \alpha=1-\alpha$, and $\widehat{g_1}$ and $\widehat{S}^+$ are the Fourier transforms of the corresponding collinear distributions (\ref{Fourier:tw2}, \ref{Fourier:tw3}). The first term in eqn. (\ref{gT}) gives the celebrated Wandzura-Wilczek relation \cite{Wandzura:1977qf}. For the diagram G the operator is $\mathbb{O}^{\mu\alpha\nu}(z)$, which comes from the expansion in $b$ of the leading-twist gluon TMD operator \begin{eqnarray} &&\mathbb{O}^{\mu\alpha\nu}(z)=F^{\mu+}(zn+b)[zn ,\pm\infty n]\lDer{D}^\alpha[\pm\infty n,0]F^{\nu+}(0) \end{eqnarray} where all indices are transverse and the sign $\pm$ depends on the process. We have not found the decomposition of this operator in the literature and, therefore, perform it here. To derive the decomposition, we have used the technique based on the spinor-helicity formalism developed in ref. \cite{Moos:2020wvd}. This formalism naturally yields the result in the form of a Fourier transform of the momentum-space representation. The operator $\mathbb{O}^{\mu\alpha\nu}$ has twist-two and twist-three parts \begin{eqnarray} \mathbb{O}^{\mu\alpha\nu}(z)= [\mathbb{O}^{\mu\alpha\nu}(z)]_{\text{tw2}} + [\mathbb{O}^{\mu\alpha\nu}(z)]_{\text{tw3}}.
\end{eqnarray} For the twist-two part we found \begin{eqnarray} \langle p,S|\[\mathbb{O}^{\mu\alpha\nu}(z)\]_{\text{tw2}}|p,S\rangle & =& \frac{\epsilon_T^{\mu\nu}s_T^\alpha M}{2(1-\epsilon)(1-2\epsilon)} \text{FDF}^{\text{tw2}}(z) \label{FDF_tw2}\\ & =& \frac{\epsilon_T^{\mu\nu}s_T^\alpha M}{2(1-\epsilon)(1-2\epsilon)} \int_0^1 d\alpha \int_{-\infty}^\infty dy e^{iy\alpha p^+ z}(\alpha p^+ y)^2 \Delta f_g(y), \end{eqnarray} where $\Delta f_g$ is the gluon-helicity distribution (\ref{def:gluon-coll-d}). The twist-three term contains three tensor structures, \begin{eqnarray} \langle p,S|\[\mathbb{O}^{\mu\alpha\nu}(z)\]_{\text{tw3}}|p,S\rangle & = & t_2^{\mu\alpha\nu} M \ \text{FDF}^{\text{tw3}}_{2}(z)+ t_4^{\mu\alpha\nu} M \ \text{FDF}^{\text{tw3}}_{4}(z)+ t_6^{\mu\alpha\nu} M \ \text{FDF}^{\text{tw3}}_{6}(z), \label{FDF_tw3} \end{eqnarray} where \begin{align*} \text{FDF}^{\text{tw3}}_{2}(z) &= \mp ip_+^2 \pi \int_{-1}^1 dy F^+_2(-y,0,y)e^{iy p_+z},\\ \text{FDF}^{\text{tw3}}_{4}(z) &= \mp ip_+^2 \pi \int_{-1}^1 dy F^+_4(-y,0,y)e^{iyp_+z},\\ \text{FDF}^{\text{tw3}}_{6}(z) &= p_+^2 \int [dx] g^+(x_1,x_2,x_3) \int_0^1 du \left( \frac{3x_1+2x_3}{x_2^2}u^2 e^{-iux_1 p^+z} + \frac{x_3}{x_2^2}u^2e^{iux_3p^+z} \right) \\ & +p_+^2 \sum_q \int[dx] 2T_q(x_1,x_2,x_3)\int_0^1 du u^2 e^{-ip_+zux_2}, \end{align*} with $g^+=(2F_2^++F_4^++F_6^+)$. The tensors $t_i^{\mu\nu\rho}$ and the functions $F_{2,4,6}$ are defined in eqns.(\ref{def:tensor-t}, \ref{def:F24}, \ref{def:F6}). The last term in $\text{FDF}^{\text{tw3}}_{6}$ is a consequence of the QCD equations of motion, and gives the singlet-quark contribution. (Note the sum over all active flavors.) The signs $\mp$ depend on the defining process, and are ``$-$'' (``$+$'') for SIDIS (Drell-Yan). \section{Results} \label{sec:results} In this section, we present the results for the Sivers, Boer-Mulders and worm-gear TMDs in the small-$b$ regime at NLO. The expression for the Sivers function has been computed in ref. \cite{Scimemi:2019gge}. In this paper, we have re-evaluated it as a cross-check and present it here for completeness. The intermediate results of our computation, which could be of interest for theoretical investigations, are presented in appendix \ref{app:pos}. In the formulas presented below we employ the notation for the logarithms defined in eqn.(\ref{def:logs}). The bar-variables are $\bar \alpha=1-\alpha$, $\bar y=1-y$, etc. The color factors are $C_F=(N_c^2-1)/2N_c$, $C_A=N_c$. For simplicity of presentation we use the delta-function form of the Mellin convolution \begin{eqnarray} \int_{-1}^1 dy \int_0^1 d\alpha \delta(x-\alpha y) f(\alpha,y)= \left\{ \begin{array}{lc}\displaystyle \int_{x}^1 \frac{dy}{y} f\(\frac{x}{y},y\), & x>0, \\\displaystyle \int_{-x}^1 \frac{dy}{y} f\(\frac{-x}{y},-y\), & x<0. \end{array}\right. \end{eqnarray} The ``plus''-distribution is defined as usual \begin{eqnarray} (f(\alpha))_+=f(\alpha)-\delta(\bar \alpha)\int_0^1 d\beta f(\beta).
\end{eqnarray} For all distributions the NLO expression has the following general form \begin{eqnarray}\label{matching-general} F(x,b;\mu,\zeta)&=&F^{(0)}(x)+ a_s\Bigg\{C_F\(-\mathbf{L}_b^2+2\mathbf{L}_b \mathbf{l}_\zeta+3\mathbf{L}_b-\frac{\pi^2}{6}\)F^{(0)}(x) \\\nonumber && -2\mathbf{L}_b \mathbb{H}\otimes F^{(0)}(x) +F^{(1)}(x)\Bigg\}+\mathcal{O}(a_s^2,b^2), \end{eqnarray} where $F^{(0)}$ is the tree-level expression, $F^{(1)}(x)$ is the finite part of the coefficient function, and $\mathbb{H}\otimes F^{(0)}$ contains the evolution kernel for the corresponding distribution, \begin{eqnarray} \mu^2 \frac{d F^{(0)}(x)}{d \mu^2}=2a_s \mathbb{H}\otimes F^{(0)}(x). \end{eqnarray} The parts proportional to the logarithms follow from the evolution equations (\ref{TMD-evol}, \ref{evol-tw2}, \ref{evol-tw3}). In each case, we found agreement between our results and the known evolution equations, see appendix \ref{app:evol}. For practical applications, it is convenient to use the so-called optimal TMDs \cite{Scimemi:2018xaf, Scimemi:2017etj}. They are defined at $\zeta=\zeta(b,\mu)$, where $\zeta(b,\mu)$ is a null-evolution curve that passes through the saddle point of the $(\gamma_F,\mathcal{D})$ field \cite{Scimemi:2018xaf}. To obtain the coefficient function for optimal TMDs at NLO, it is enough to set $\mathbf{l}_\zeta$ according to \begin{eqnarray} -\mathbf{L}_b^2+2\mathbf{L}_b\mathbf{l}_\zeta+3\mathbf{L}_b=0. \end{eqnarray} Note that the remaining dependence on $\mu$ is compensated by the evolution of the collinear distributions, and thus the remaining $\mu$ is the OPE scale $\mu_{\text{OPE}}$. \subsection{Sivers function $f_{1T}^\perp$} The NLO expression for the Sivers function reads \begin{eqnarray}\label{sivers:nlo} f_{1T,q}^\perp(x,b;\mu,\zeta)&=&\pm \pi T_q(-x,0,x)\pm\pi a_s\Big\{C_F\(-\mathbf{L}_b^2+2\mathbf{L}_b \mathbf{l}_\zeta+3\mathbf{L}_b-\frac{\pi^2}{6}\)T_q(-x,0,x) \\\nonumber && -2\mathbf{L}_b \mathbb{H}\otimes T_q(-x,0,x) +\mathbf{\delta f}_{1T}^\perp(x)\Big\}+\mathcal{O}(a_s^2,b^2). \end{eqnarray} The finite part is \begin{eqnarray}\label{sivers:finite} \mathbf{\delta f}_{1T}^\perp(x)&=& \int_{-1}^1 dy \int_0^1 d\alpha \delta(x-\alpha y) \Big[ \\\nonumber && \(C_F-\frac{C_A}{2}\)2\bar \alpha T_q(-y,0,y)+\frac{3 \alpha\bar \alpha}{2}\frac{G_+(-y,0,y)+G_-(-y,0,y)}{y}\Big]. \end{eqnarray} The action of the evolution kernel on the function $T(-x,0,x)$ is \begin{eqnarray}\label{sivers:H} &&\mathbb{H}\otimes T_q(-x,0,x)=\int_{-1}^1 dy \int_0^1 d\alpha \delta(x-\alpha y) \Bigg\{ \\\nonumber && \qquad \(C_F-\frac{C_A}{2}\)\Big[ \(\frac{1+\alpha^2}{1-\alpha}\)_+T_q(-y,0,y)+(2\alpha-1)_+T_q(-x,y,x-y)-\Delta T_q(-x,y,x-y)\Big] \\\nonumber && \qquad +\frac{C_A}{2}\Big[\(\frac{1+\alpha}{1-\alpha}\)_+T_q(-x,x-y,y)+\Delta T_q(-x,x-y,y)\Big] \\\nonumber && \qquad +\frac{1-2\alpha\bar \alpha}{4}\frac{G_+(-y,0,y)+Y_+(-y,0,y)+G_-(-y,0,y)+Y_-(-y,0,y)}{y}\Bigg\}. \end{eqnarray} The choice of the sign $\pm$ is related to the process: for the Drell-Yan definition the ``$+$'' sign should be taken, while for the SIDIS definition the ``$-$'' sign should be taken. In the present form, the NLO matching for the Sivers function (\ref{sivers:nlo}) was first computed in ref. \cite{Scimemi:2019gge}. The logarithmic part (\ref{sivers:H}) has been derived in ref. \cite{Braun:2009mi}. The quark and gluon contributions to the finite part (\ref{sivers:finite}) were derived earlier in \cite{Sun:2013hua} and \cite{Dai:2014ala}, respectively, by performing fixed-order computations for the SSA cross-sections.
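As an illustration of how expressions of this type are evaluated in practice, the following sketch (ours) computes the finite part (\ref{sivers:finite}) for toy profiles of $T_q(-y,0,y)$ and $G_\pm(-y,0,y)$ (the model shapes are assumptions), using the delta-function form of the Mellin convolution given above:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

CF, CA = 4.0/3.0, 3.0

tau  = lambda y: y * (1 - abs(y))**2      # toy T_q(-y, 0, y)
gsum = lambda y: y**2 * (1 - abs(y))**2   # toy G_+(-y,0,y) + G_-(-y,0,y)

def conv(f, x):
    # int_{-1}^1 dy int_0^1 da delta(x - a*y) f(a, y), piecewise in sign(x)
    if x > 0:
        return quad(lambda y: f(x/y,  y)/y,  x, 1)[0]
    return quad(lambda y: f(-x/y, -y)/y, -x, 1)[0]

def delta_f1Tperp(x):
    kern = lambda a, y: ((CF - CA/2) * 2*(1 - a) * tau(y)
                         + 1.5 * a * (1 - a) * gsum(y) / y)
    return conv(kern, x)

for x in (-0.5, -0.2, 0.2, 0.5):
    print(f"x = {x:+.1f}:  delta f_1T^perp = {delta_f1Tperp(x):+.4e}")
\end{verbatim}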
The detailed comparison of (\ref{sivers:nlo}) with earlier work is given in ref. \cite{Scimemi:2019gge}. In this contribution we have reproduced the results of \cite{Scimemi:2019gge}, which served as a check of our computation. \subsection{Worm-gear-T function $g_{1T}$} The expression for the worm-gear-T function is the most cumbersome in this work. It is convenient to split it into twist-two and twist-three contributions \begin{eqnarray} g_{1T,q}(x,b;\mu,\zeta)= g_{1T,q}^{\text{tw2}}(x,b;\mu,\zeta) + g_{1T,q}^{\text{tw3}}(x,b;\mu,\zeta). \end{eqnarray} It is convenient to present the twist-two part in the form \begin{eqnarray}\label{wgt:tw2} g_{1T,q}^{\text{tw2}}(x,b;\mu,\zeta)&=& x\int_x^1 \frac{dy}{y} \Big[ C_{1T,q\leftarrow q}^{\text{tw2}}\(\frac{x}{y}\)g_{1q}(y) + C_{1T,q\leftarrow g}^{\text{tw2}}\(\frac{x}{y}\)\Delta f_{g}(y)\Big], \end{eqnarray} where \begin{eqnarray}\nonumber C_{1T,q\leftarrow q}^{\text{tw2}}(x)&=&1+a_sC_F\[-\mathbf{L}_b^2+2\mathbf{L}_b \mathbf{l}_\zeta -2\mathbf{L}_b\(-\bar x+2\ln\bar x-\ln x\) -2\bar x-2\ln x-\frac{\pi^2}{6} \]+\mathcal{O}(a_s^2), \\ C_{1T,q\leftarrow g}^{\text{tw2}}(x)&=& \frac{a_s}{2}\[-2\mathbf{L}_b(2\bar x+\ln x)+2\bar x+\ln x\]+\mathcal{O}(a_s^2). \end{eqnarray} These expressions can be used as the Wandzura-Wilczek approximation for the worm-gear-T function. The logarithmic part of eqn. (\ref{wgt:tw2}) coincides with the one predicted by the evolution equations for helicity distributions (see e.g. \cite{Moch:2014sna}). The twist-three part is complicated. We split it into a number of terms \begin{eqnarray}\label{wgt:tw3:main} &&g_{1T,q}^{\text{tw3}}(x,b;\mu,\zeta)=g_{1T,q}^{(0),\text{tw3}}(x)+a_s\Big\{C_F\(-\mathbf{L}_b^2+2\mathbf{L}_b \mathbf{l}_\zeta+3\mathbf{L}_b-\frac{\pi^2}{6}\)g_{1T,q}^{(0),\text{tw3}}(x) \\\nonumber && \qquad -2\mathbf{L}_b \(\mathbb{H}_{NS}+\mathbb{H}_{G}+\sum_{q'}\mathbb{H}^{q'}_{S}\)\otimes g_{1T,q}^{\perp,(0),\text{tw3}}(x) +\mathbf{\delta g}_{NS}(x) +\mathbf{\delta g}_{G}(x) \Big\}+\mathcal{O}(a_s^2,b^2). \end{eqnarray} We emphasize that the singlet-quark contribution to the finite part vanishes. At tree level \begin{eqnarray}\label{wgt:tree} g_{1T,q}^{(0),\text{tw3}}(x)= 2x\int [dy] \int_0^1 d\alpha \delta(x-\alpha y_3)\(\frac{\Delta T_q(y_{1,2,3})}{y_2^2}+\frac{T_q(y_{1,2,3})-\Delta T_q(y_{1,2,3})}{2y_2y_3}\), \end{eqnarray} where $(y_{i,j,k})$ is a shorthand notation for $(y_i,y_j,y_k)$. In this form the expression (\ref{wgt:tree}) has been derived in ref. \cite{Scimemi:2018mmi}. The same result (but in a different basis) has also been derived in ref. \cite{Kanazawa:2015ajw}.
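The twist-two part (\ref{wgt:tw2}) is simple enough to evaluate directly. A minimal sketch (ours; the toy collinear inputs and the values of $a_s$, $\mathbf{L}_b$, $\mathbf{l}_\zeta$ are assumptions):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

CF = 4.0/3.0
a_s, Lb, lz = 0.02, 1.3, 0.0            # illustrative values of a_s, L_b, l_zeta

def C_qq(x):
    xb = 1.0 - x
    return 1.0 + a_s*CF*(-Lb**2 + 2*Lb*lz
                         - 2*Lb*(-xb + 2*np.log(xb) - np.log(x))
                         - 2*xb - 2*np.log(x) - np.pi**2/6)

def C_qg(x):
    xb = 1.0 - x
    return 0.5*a_s*(-2*Lb*(2*xb + np.log(x)) + 2*xb + np.log(x))

g1  = lambda y: y**0.6 * (1 - y)**3     # toy helicity quark distribution
Dfg = lambda y: y**0.4 * (1 - y)**4     # toy gluon helicity distribution

def g1T_tw2(x):
    # logarithms are integrable at the endpoint y = x
    integrand = lambda y: (C_qq(x/y)*g1(y) + C_qg(x/y)*Dfg(y)) / y
    return x * quad(integrand, x, 1, limit=200)[0]

for x in (0.1, 0.3, 0.6):
    print(f"x = {x}:  g_1T^tw2 = {g1T_tw2(x):+.5e}")
\end{verbatim}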
The finite parts for eqn.(\ref{wgt:tw3:main}) are \begin{eqnarray}\nonumber \mathbf{\delta g}_{NS}(x)&=& 2\int[dy]\int_0^1 d\alpha \Bigg\{ \(C_F-\frac{C_A}{2}\)\(-\frac{\bar \alpha}{y_3}T+\frac{\bar \alpha(1-2\alpha)}{y_3}\Delta T\)\delta(x-\alpha y_2) \\ &&\qquad \label{wgt:1} +\delta(x-\alpha y_3)\Bigg[ \(-C_F\frac{\alpha \ln \alpha}{y_2}+\(C_F-\frac{C_A}{2}\)\frac{\bar \alpha y_3}{y_1 y_2}\)T \\\nonumber &&\qquad +\(C_F \frac{\alpha \ln\alpha(y_2-2y_3)-2\bar \alpha y_3}{y_2^2}+\(C_F-\frac{C_A}{2}\)\( \frac{\bar \alpha (1-2\alpha) y_3}{y_1y_2}+\frac{2\bar \alpha^2y_3}{y_2^2}\)\)\Delta T\Bigg]\Bigg\}, \\ \mathbf{\delta g}_{G}(x)&=& \int[dy] \int_0^1 d\alpha \delta(x-\alpha y_3)\Bigg\{ \\\nonumber && \qquad \alpha(\ln\alpha-2\bar \alpha)\(\frac{G_+(y_{1,2,3})-4Y_+(y_{2,3,1})}{y_2 y_3}+2\frac{Y_+(y_{2,3,1})-Y_+(y_{3,1,2})}{y_2^2}\) \\\nonumber && \qquad +\alpha \bar \alpha \( 8\frac{Y_+(y_{2,3,1})-Y_+(y_{3,1,2})}{y_2^2}-18\frac{Y_{+}(y_{2,3,1})}{y_2y_3}\) \\\nonumber && \qquad +\bar \alpha\(1-\frac{3}{8}\alpha\)\frac{-G_+(y_{1,2,3})+G_-(y_{1,2,3})+2Y_+(y_{1,2,3})+2Y_-(y_{1,2,3})}{y_1y_2}\Bigg\}, \end{eqnarray} where we use the shortened notation $T=T_q(y_1,y_2,y_3)$, $\Delta T=\Delta T_q(y_1,y_2,y_3)$ for the quark-gluon-quark distributions. Notice that the singlet quark contribution (summed over flavors) does not appear in the finite part. The logarithmic parts are \begin{eqnarray} \mathbb{H}_{NS}\otimes g_{1T,q}^{(0),\text{tw3}}(x)&=& \int [dy]\int_0^1 d\alpha \Bigg\{ \delta(x-\alpha y_3)\Bigg[ \\\nonumber && 2x C_F\Big\{\(\frac{1}{2}+\alpha-\ln\alpha+2\ln\bar \alpha\)\(\frac{T-\Delta T}{2y_2y_3}+\frac{\Delta T}{y_2^2}\)-\frac{\Delta T}{y_2^2}\Big\} \\\nonumber && +\(C_F-\frac{C_A}{2}\) \(\alpha \frac{(2-\alpha)T-(4-3\alpha)\Delta T}{y_2}-\bar \alpha \frac{T-(1-2\alpha)\Delta T}{y_1}\) \\\nonumber && +\frac{C_A}{2}\Big\{\(\frac{\alpha \bar \alpha-2}{y_2}-\frac{1}{x+y_1}\)\(T-\Delta T-2y_3 \frac{\Delta T}{y_2}\)-2(1-2\alpha)y_3\frac{\Delta T}{y_2^2}\Big\}\Bigg] \\\nonumber && +\delta(x-\alpha y_2)\(C_F-\frac{C_A}{2}\)\(-\alpha +\bar \alpha^2\frac{y_2}{y_3}\)\frac{T+(1-2\alpha)\Delta T}{x+y_1} \\\nonumber && +\delta(x-y_2-\alpha y_3)\(C_F-\frac{C_A}{2}\)\frac{1}{y_2}\[ T+\(1+2\frac{\alpha y_3}{y_2}\)\Delta T\] \Bigg\}, \end{eqnarray} \begin{eqnarray} \mathbb{H}_{G}\otimes g_{1T,q}^{(0),\text{tw3}}(x)&=& -\int[dy] \int_0^1 d\alpha \delta(x-\alpha y_3)\Bigg\{\frac{\alpha\bar \alpha}{2} \frac{Y_+(y_{2,3,1})-Y_+(y_{3,1,2})}{y_2 y_3} \\\nonumber && \qquad +\alpha(2\bar \alpha+\ln\alpha)\(\frac{G_+(y_{1,2,3})-2Y_+(y_{2,3,1})}{y_2 y_3}+\frac{Y_+(y_{2,3,1})-Y_+(y_{3,1,2})}{y_2^2}\) \\\nonumber && \qquad +\frac{\bar \alpha}{4}\frac{G_+(y_{1,2,3})-G_-(y_{1,2,3})}{y_1y_2} -\frac{\bar \alpha (1-3\alpha)}{2}\frac{Y_+(y_{3,1,2})-Y_-(y_{3,1,2})}{y_1y_2} \\\nonumber && \qquad +\frac{\bar \alpha (1-2\alpha)}{2}\frac{Y_+(y_{2,3,1})-Y_+(y_{3,1,2})-Y_-(y_{2,3,1})+Y_-(y_{3,1,2})}{y_2^2}\Bigg\}, \\\label{wgt:2} \mathbb{H}^{q'}_{S}\otimes g_{1T,q}^{\perp,(0),\text{tw3}}(x) &=& 2\int[dy]\int_0^1 d\alpha \delta(x-\alpha y_2)(\alpha\bar \alpha +\alpha\ln \alpha)\frac{T_{q'}(y_1,y_2,y_3)}{y_2}, \end{eqnarray} where we use the shortened notation $T=T_q(y_1,y_2,y_3)$, $\Delta T=\Delta T_q(y_1,y_2,y_3)$ for the quark-gluon-quark distributions, and $(y_{i,j,k})=(y_i,y_j,y_k)$ for three-gluon distributions. To simplify these expressions we have used the symmetry relations (\ref{def:sym-quark}) and (\ref{def:sym-gluon}). 
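As a concrete example of how these three-variable convolutions are handled, the singlet term (\ref{wgt:2}) for $x>0$ reduces to a two-dimensional integral after solving the $\delta$-function for $\alpha=x/y_2$ (which requires $y_2\ge x$ and produces a Jacobian $1/y_2$). A sketch (ours; the toy $T_{q'}$ is an assumption):
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad

def T(y1, y2, y3):    # toy T_q', respecting T(y1,y2,y3) = T(-y3,-y2,-y1)
    return (y1 + y3) * y2 * np.exp(-2*(y1*y1 + y2*y2 + y3*y3))

def HS_on_g(x):       # eqn. (wgt:2) for x > 0; K(a) = a(1-a) + a ln(a)
    K = lambda a: a*(1 - a) + a*np.log(a)
    def integrand(y1, y2):                # outer variable: y2 in [x, 1]
        y3 = -y1 - y2
        if abs(y3) > 1:                   # support of the measure [dy]
            return 0.0
        return 2 * K(x/y2) * T(y1, y2, y3) / y2**2   # 1/y2 Jacobian included
    val, _ = dblquad(integrand, x, 1, lambda y2: -1, lambda y2: 1)
    return val

print(HS_on_g(0.3))
\end{verbatim}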
The logarithmic part coincides with the prediction given by the renormalization group equation \cite{Braun:2009mi, Braun:2009vc} (see appendix \ref{app:evol}). It provides a strong check of our computation. The comparison has been made in position space (see appendix \ref{app:pos}). The integrands of eqns. (\ref{wgt:1} -- \ref{wgt:2}) are finite for $y_i\to0$. Also, we observed the cancellation of various undesirable terms such as $\ln^2\alpha$ and $\ln \bar\alpha/\alpha$ that appear in the individual diagrams. Altogether, these observations provide extra confidence in the result. \subsection{Boer-Mulders function $h_{1}^\perp$} The Boer-Mulders function is in many aspects similar to the Sivers function, which is a consequence of their T-oddness. We have \begin{eqnarray} h_{1,q}^{\perp}(x,b;\mu,\zeta)&=&\mp \pi E_q(-x,0,x)\mp \pi a_s\Big\{C_F\(-\mathbf{L}_b^2+2\mathbf{L}_b \mathbf{l}_\zeta+3\mathbf{L}_b-\frac{\pi^2}{6}\)E_q(-x,0,x) \\\nonumber && -2\mathbf{L}_b \mathbb{H}\otimes E_q(-x,0,x) \Big\}+\mathcal{O}(a_s^2,b^2), \end{eqnarray} where the $\mp$ identifies the process under consideration. For DY (SIDIS) the upper (lower) sign should be taken. For the Boer-Mulders function, we have found that the finite part (besides the $\pi^2/6$ contribution) exactly vanishes, i.e. \begin{eqnarray}\label{bm:finite} \mathbf{\delta h}_{1,f}^{\perp}(x) &=& 0. \end{eqnarray} For the evolution kernel, we have \begin{eqnarray} && \mathbb{H}\otimes E_q(-x,0,x) =-\frac{C_F}{2}E_q(-x,0,x)+\int_0^1 d\alpha\int dy \delta(x-\alpha y) \Big\{ \\\nonumber &&\qquad 2\(C_F-\frac{C_A}{2}\) \Big[\(\frac{\alpha}{1-\alpha}\)_+E_q(-y,0,y) -\bar \alpha E_q(-x,y,x-y)\Big] +C_A\frac{E_q(-x,x-y,y)}{(1-\alpha)_+}\Big\}. \end{eqnarray} In general, the expression for the Boer-Mulders function has the simplest form among all TMD distributions that match onto twist-three operators. The expression for the evolution kernel agrees with the general kernel for the twist-three functions \cite{Braun:2009vc, Braun:2021gvv}, see also appendix \ref{app:evol}. \subsection{Worm-gear-L function $h_{1L}^\perp$} It is convenient to split the expression for the worm-gear-L function into twist-two and twist-three contributions \begin{eqnarray} h_{1L,q}^\perp(x,b;\mu,\zeta)= h_{1L,q}^{\perp,\text{tw2}}(x,b;\mu,\zeta) + h_{1L,q}^{\perp,\text{tw3}}(x,b;\mu,\zeta). \end{eqnarray} The twist-two part can be written in the form \begin{eqnarray}\label{wgl:tw2} h_{1L,q}^{\perp,\text{tw2}}(x,b;\mu,\zeta)&=& -x^2\int_x^1 \frac{dy}{y} C_{1L,q\leftarrow q}^{\perp,\text{tw2}}\(\frac{x}{y}\)h_{1}(y), \end{eqnarray} where \begin{eqnarray}\nonumber C_{1L,q\leftarrow q}^{\perp,\text{tw2}}(x)&=&1+a_sC_F\[-\mathbf{L}_b^2+2\mathbf{L}_b \mathbf{l}_\zeta -4\mathbf{L}_b\(\ln x-\ln\bar x\)-\frac{\pi^2}{6} \]+\mathcal{O}(a_s^2). \end{eqnarray} These expressions can be used as the Wandzura-Wilczek-like approximation for the worm-gear-L function. The logarithmic part of eqn. (\ref{wgl:tw2}) coincides with the one predicted by the evolution equations for transversity distributions (see e.g. \cite{Vogelsang:1997ak}). The finite part contains only the trivial contribution $\pi^2/6$. The non-trivial part vanishes (see the diagram $\mathbf{B}$ in sec.~\ref{app:diagrams:wgh}).
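In parallel to the worm-gear-T case, the WW-like part (\ref{wgl:tw2}) can be evaluated directly; a minimal sketch (ours; the toy transversity input and the scale values are assumptions, mirroring the earlier $g_{1T}$ example):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

CF, a_s, Lb, lz = 4.0/3.0, 0.02, 1.3, 0.0   # illustrative values

def C_1L(x):
    return 1.0 + a_s*CF*(-Lb**2 + 2*Lb*lz
                         - 4*Lb*(np.log(x) - np.log(1 - x)) - np.pi**2/6)

h1 = lambda y: y**0.8 * (1 - y)**3          # toy transversity distribution

def h1L_tw2(x):                             # eqn. (wgl:tw2)
    return -x*x * quad(lambda y: C_1L(x/y)*h1(y)/y, x, 1, limit=200)[0]

for x in (0.1, 0.3, 0.6):
    print(f"x = {x}:  h_1L^perp,tw2 = {h1L_tw2(x):+.5e}")
\end{verbatim}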
The twist-three part is \begin{eqnarray}\label{wgl:tw3:main} h_{1L,q}^{\perp,\text{tw3}}(x,b;\mu,\zeta)&=&h_{1L,q}^{\perp,(0),\text{tw3}}(x)+a_s\Big\{C_F\(-\mathbf{L}_b^2+2\mathbf{L}_b \mathbf{l}_\zeta+3\mathbf{L}_b-\frac{\pi^2}{6}\)h_{1L,q}^{\perp,(0),\text{tw3}}(x) \\\nonumber && -2\mathbf{L}_b \mathbb{H} \otimes h_{1L,q}^{\perp,(0),\text{tw3}}(x) +\mathbf{\delta h}(x) \Big\}+\mathcal{O}(a_s^2,b^2). \end{eqnarray} At tree level it is \begin{eqnarray} h_{1L,q}^{\perp,(0),\text{tw3}}(x)= -2x \int_0^1 d\alpha \int[dy] \alpha \delta(x-\alpha y_3) H_q(y_1,y_2,y_3) \frac{y_3-y_2}{y_2^2y_3}. \end{eqnarray} This expression has been derived in refs. \cite{Scimemi:2018mmi, Kanazawa:2015ajw}. Note that the integral is finite for $y_2\to0$, since $H(-y,0,y)=0$. The finite and logarithmic parts of the twist-three expression are \begin{eqnarray} &&\mathbf{\delta h}(x)=-4\int [dy]H_q(y_1,y_2,y_3) \Bigg\{ \\\nonumber &&\qquad \int_0^1 d\alpha \Big[ \(C_F-\frac{C_A}{2}\)\alpha \bar \alpha \(\frac{\delta(x-\alpha y_2)}{y_3}-\frac{\delta(x-\alpha y_3)}{y_1}\) +\frac{C_A}{2} \bar \alpha (\alpha y_2+\bar \alpha y_3)\frac{\delta(x-\alpha y_3)}{y_2^2}\Big] \\\nonumber &&\qquad +\int_0^1 d\alpha \int_0^1 d\beta \frac{\alpha}{x+y_1}\(\frac{C_A}{2}\delta(x+\alpha y_1+\alpha \beta y_2)-\(C_F-\frac{C_A}{2}\)\delta(x+\alpha y_1+\alpha \beta y_3)\)\Bigg\}. \\ && \mathbb{H} \otimes h_{1L,q}^{\perp,(0),\text{tw3}}(x) =-2\int [dy]H_q(y_1,y_2,y_3) \Bigg\{ \\\nonumber &&\qquad \int_0^1 d\alpha C_F \alpha x \(\frac{3}{2}+2 \ln \bar \alpha-2\ln \alpha\)\frac{y_3-y_2}{y_2^2 y_3} \delta(x-\alpha y_3) \\\nonumber &&\qquad +\int_0^1 d\alpha \int_0^1 d\beta \frac{\alpha (y_2-x)}{y_2(x+y_1)}\(\frac{C_A}{2}\delta(x+\alpha y_1+\alpha \beta y_2)-\(C_F-\frac{C_A}{2}\)\delta(x+\alpha y_1+\alpha \beta y_3)\)\Bigg\}. \end{eqnarray} The double-integrals in the last lines of these equations can be integrated over one of the variables, but the resulting expressions have a complicated form. \section{Conclusion} We have computed the leading small-$b$ asymptotics for the Sivers ($f_{1T}^\perp$), Boer-Mulders ($h_{1}^\perp$) and worm-gear functions ($g_{1T}$ and $h_{1L}^\perp$) at NLO in perturbation theory. These functions are expressed in terms of twist-two and twist-three collinear distributions. The computation is performed using the well-established background-field method, which was also used for similar computations in refs. \cite{Scimemi:2019gge, Braun:2021aon, Braun:2021gvv}. The result is presented both in position (appendix \ref{app:pos}) and momentum-fraction (section \ref{sec:results}) space. The logarithmic parts of the obtained expressions agree with the predictions of the renormalization group equations. The result for the Sivers function coincides with the one computed in ref. \cite{Scimemi:2019gge}. With the results of this work, the knowledge of small-$b$ expressions for TMDs of leading twist is complete at NLO (or even higher, see refs. \cite{Luo:2020epw, Gutierrez-Reyes:2018iod}), with the sole exception of the pretzelosity distribution, which has leading twist-four contributions at small $b$ \cite{Moos:2020wvd}. The perturbative expansions for the Sivers and Boer-Mulders functions on the one hand, and the worm-gear functions on the other, are drastically different, which is a consequence of the T-parity properties of these functions. Thus, the Sivers and Boer-Mulders functions at LO have the Qiu-Sterman form of quark-antiquark correlators with a zero-momentum gluon field \cite{Qiu:1991pp}, $T(-x,0,x)$ and $E(-x,0,x)$.
The NLO expressions for these distributions contain only twist-three distributions and are relatively simple (in particular, the finite part of the Boer-Mulders function is trivial (\ref{bm:finite})). The global sign of the small-$b$ expression depends on the orientation of the gauge link. In contrast, the worm-gear functions have involved forms. Already at LO, they are expressed by convolution integrals of twist-two and twist-three distributions, which lead to bulky NLO expressions. The expression for the worm-gear-T distribution is especially cumbersome, since it contains mixtures with a three-gluon correlator and a singlet-quark contribution. Unfortunately, we have not found any significant simplifications for these distributions. At the moment, the most practically important result for the worm-gear functions is the part proportional to twist-two distributions, because it can be used as an approximation for these functions (the Wandzura-Wilczek-like approximation). The derived NLO expressions are important for the phenomenology of TMDs and twist-three distributions. They provide the leading logarithmic terms, and thus allow one to properly include QCD evolution effects in data analyses. This will certainly be important for the next generation of high-precision polarized experiments, such as the EIC \cite{AbdulKhalek:2021gbh}. \acknowledgments We thank Vladimir Braun for discussions. A.V. is funded by the \textit{Atracci\'on de Talento Investigador} program of the Comunidad de Madrid (Spain) No. 2020-T1/TIC-20204. This work was partially supported by DFG FOR 2926 ``Next Generation pQCD for Hadron Structure: Preparing for the EIC'', project number 430824754.
\section{Introduction} \label{sec:introduction} The study of fast time variability of X-ray emission from black-hole binaries (BHB) has received a considerable boost during the sixteen years of operation of the Rossi X-ray Timing Explorer (RXTE) mission, which provided millions of seconds of high-sensitivity observations of both transient and persistent systems. In addition to broad-band noise components and Low-Frequency Quasi-Periodic Oscillations (LFQPOs, in the frequency range 0.01-30 Hz), which had already been observed in the past with previous missions such as EXOSAT and Ginga, RXTE discovered High-Frequency QPOs (HFQPOs) at frequencies above 30 Hz, with a current highest measured frequency of 450 Hz (in GRO J1655-40, see Remillard et al. 1999). HFQPOs are important as their frequency is in the range expected for Keplerian motion of matter in the vicinity of the black hole and can be a direct way of exploring space-time near a collapsed object. Unfortunately, to date we have very few detections of them, obtained with RXTE and all corresponding to intervals when source fluxes were very high. This could be either because the signal is present only during those high-flux states or because a high count rate is needed to reach a significant detection (see Belloni, Sanna \& M\'endez 2012 and references therein). Moreover, a clear identification with a physical time scale in the accretion flow, whether associated with accretion properties or with General Relativity, can only come from the detection of multiple frequencies. While in very few cases double HFQPO peaks have been detected, these features appear to be visible only when LFQPOs are not detected. The only two cases of multiple detections have been analyzed by Motta et al. (2014a,b) and identified with the Relativistic Frequencies predicted by the Relativistic-Precession Model (RPM, Stella \& Vietri 1998,1999), leading to a measurement of the mass and spin of the black hole in one case and of the spin in the other. An important exception to the parsimoniousness of black-hole binaries with HFQPOs is the bright source GRS 1915+105. The source appeared as a very bright transient in 1992 and since then it has remained bright (see Fender \& Belloni 2004 for a review). In addition to its high flux, which can reach the Eddington level, GRS 1915+105 displays very peculiar variability on time scales longer than a second, with structured patterns that repeat even after many years (see Belloni et al. 2000; Belloni 2010 for reviews). These variations, which have been associated with disk instabilities (see e.g. Belloni et al. 1997a,b; Janiuk, Czerny \& Siemiginowska 2000), involve major spectral and intensity changes and have been classified into a dozen separate classes (Belloni et al. 2000; Klein-Wolt et al. 2002; Hannikainen et al. 2005). The first HFQPO was discovered in the early RXTE data of GRS 1915+105 (Morgan, Remillard \& Greiner 1997) at a frequency of $\sim$65-67 Hz, with a low fractional rms around 1\%, increasing to $\sim$10\% at high energies. Additional transient high-frequency peaks were later discovered in selected observations (see Belloni \& Altamirano 2013a and references therein). A systematic analysis of the full set of RXTE observations of GRS 1915+105, for a total of more than $5\times 10^6$ s exposure, was performed by Belloni \& Altamirano (2013a). From this work, a total of 51 HFQPO peaks were detected, most of them in the 65-70 Hz range.
The demise of the RXTE satellite left us without an instrument capable of efficiently detecting HFQPOs such as its Proportional Counter Array (PCA). The key necessary feature, in addition to high time resolution, is a large collecting area at energies above a few keV, where HFQPOs are more intense. The launch of the \textsc{Astrosat} mission in September 2015 filled this gap. We report here the results of the analysis of a series of \textsc{Astrosat} observations of GRS 1915+105 made in July-September 2017, when a clear HFQPO was detected. \section{Observations and data analysis} \label{sec:observations} We analyzed a set of observations of GRS 1915+105 taken with the LAXPC instrument on board \textsc{Astrosat} (Singh et al. 2014). The data were obtained from the \textsc{Astrosat} public archive (https://astrobrowse.issdc.gov.in/astro\_archive/archive/Home.jsp). The LAXPC is an X-ray proportional counter array operating in the range 3-80 keV. The timing resolution of the instrument is 10 $\mu$s, with a dead time of 42 $\mu$s. It consists of three identical detectors (referred to as LX10, LX20 and LX30, respectively), with a combined effective area of 6000 cm$^2$ (Yadav et al. 2016; Antia et al. 2017). For each observation, full information about individual photons is available. In order to obtain photon lists, we started from level1 production files from the archive and converted them to level2 using the LaxpcSoft tools provided by the \textsc{Astrosat} mission (see http://astrosat-ssc.iucaa.in). The tools provide a channel-to-energy conversion using the appropriate detector response matrices for the three instruments, minimizing effects due to gain changes. Therefore, the energy selection was not based on channels, but on the energies estimated with the tools. Net light curves, including those for the production of hardness ratios (see below), were obtained by subtracting the background estimated with the same tools. No deadtime correction was applied. We analyzed six observations, listed in Tab. \ref{tab:observations}. They cover a time period from 2017 July 9 16:46UT to 2017 September 12 04:54UT, for a total of 92.293 ks of net exposure, consisting of several satellite orbits. The observations were selected in order to cover a similar variability behaviour, uninterrupted by observations of a different type. Most of the observations see the source in its $\omega$ variability class (see Belloni et al. 2000, Klein-Wolt et al. 2002), although a secular evolution in the properties towards class $\mu$ is observed (see top panels in Fig. \ref{fig:licu}). \begin{table} \centering \caption{Log of the six observations analyzed in this work (2017 July-September).} \label{tab:observations} \begin{tabular}{lcc} \hline Obs. ID & T$_{start}$ & T$_{end}$\\ \hline G07\_028T01\_9000001370 &July 9 16:46&July 9 22:47\\ G07\_046T01\_9000001374 &July 11 20:07&July 12 11:59\\ G07\_028T01\_9000001406 &July 27 09:21&July 27 14:13\\ G07\_046T01\_9000001408 &July 27 14:18&July 28 04:45\\ G07\_028T01\_9000001500 &August 30 02:15&August 30 09:54\\ G07\_046T01\_9000001534 &September 11 13:38&September 12 05:17\\ \hline \end{tabular} \end{table} \begin{figure} \includegraphics[width=\columnwidth]{licu.pdf} \caption{(a) Light curve of the start of the observation (bin size 1.024s, start time MJD 57943.71869). (b) Light curve of the end of the observation (bin size 1.024s, same start time).
(c) Color-color diagram for the full observation, before the shift/rotation described below. (d) Hardness-intensity diagram for the full observation, before the shift/rotation described below. } \label{fig:licu} \end{figure} Using the GHATS analysis package, developed at INAF-OAB for the analysis of variability from X-ray datasets (http://www.brera.inaf.it/utenti/belloni/GHATS\_Package/Home.html), we extracted light curves with a 1.024s bin size for the energy bands 3-80 keV (band I), 3-5 keV (band A), 5-10 keV (band B) and 10-20 keV (band C), summing all three units and all instrument layers. No background subtraction was applied, as it would alter the statistical properties of the data without providing any advantage. From these we produced two X-ray hardness parameters HR1=B/A and HR2=C/A. The full color-color diagram (CCD: HR1 vs. HR2) has a well-defined but broad shape, while the hardness-intensity diagram (HID: I vs. HR2) is even broader, as can be seen in Fig. \ref{fig:licu}. Using GHATS, we extracted Power Density Spectra (PDS) from data stretches 1.024s long in the 3-80 keV band, corresponding to the times of the points in the diagrams, adding all detectors and all layers. The PDS were normalized following Leahy et al. (1983) and extended to a Nyquist frequency of 500 Hz. We then averaged the PDS in small regions of the HID and searched for HFQPOs in the 30-200 Hz band. Clear peaks around 70 Hz were seen in the different HID regions with variable frequency, but no systematic variations could be determined, and broad or multiple peaks appeared. In order to ascertain whether these variations were due to secular variations in the HID shape, we produced the HID corresponding to six time intervals separated by large time gaps in the data. Intervals E and F are closer in time, but as there are differences in the time evolution we decided to keep them separate. The time limits of the intervals are shown in Tab. \ref{tab:intervals}. \begin{table} \centering \caption{Log of the six data intervals in 2017. The last column reports the number of 1.024s data stretches contained in the interval.} \label{tab:intervals} \begin{tabular}{lccc} \hline Interval & T$_{start}$ & T$_{end}$&N$_{PDS}$\\ \hline A & July 9 16:46 & July 9 22:17 & 9639 \\ B & July 11 19:39 & July 12 11:28 & 18430 \\ C & July 27 08:39 & July 28 04:16 & 28601 \\ D & August 30 02:02 & August 30 09:24 & 9903 \\ E & September 11 13:37 & September 11 17:25 & 6722 \\ F & September 11 18:54 & September 12 04:54 & 11592 \\ \hline \end{tabular} \end{table} The HIDs of the six intervals appear shifted with respect to one another. In order to ascertain the best shift values, noticing that the main direction of the HID 2D distribution is diagonal, we renormalised the count rate $I$ as $J=I/17500$ and rotated the HID counter-clockwise by 45 degrees, obtaining two new coordinates $H'$ and $I'$. The resulting H$'$I$'$D has the shape of a mirrored L. With this rotation, it became clear that the shift between different intervals was only in the vertical direction $I'$. The marginal distributions in $I'$ for the six intervals are shown in Fig. \ref{fig:marginals}. \begin{figure} \includegraphics[width=\columnwidth]{marginals.pdf} \caption{Marginal distributions in $I'$ for the six intervals defined in Tab. \ref{tab:intervals}. The black lines are the best-fit Gaussian models to the leftmost peak, plotted over the range used for each fit. } \label{fig:marginals} \end{figure} We fitted a Gaussian function to the leftmost peak in the distributions in Fig.
\ref{fig:marginals}, limiting the range in abscissa in order to obtain a good fit without the need to include other peaks (see Fig. \ref{fig:marginals}). We then shifted the H$'$I$'$Ds for the six intervals to coalign the peak of the Gaussians. Upon visual inspection, there were still differences between the shifted H$'$I$'$Ds; we therefore grouped the six intervals into three groups: $\alpha$ (A,B,C), $\beta$ (D) and $\gamma$ (E,F), obtaining three much better defined H$'$I$'$D distributions, which can be seen in Fig. \ref{fig:hid}. \begin{figure} \includegraphics[width=\columnwidth]{alpha.pdf} \includegraphics[width=\columnwidth]{beta.pdf} \includegraphics[width=\columnwidth]{gamma.pdf} \caption{The three H$'$I$'$Ds for the three groups (see text). The extraction regions 1 through 6 for each group are shown. } \label{fig:hid} \end{figure} For each of the three groups, we identified six regions to cover the main part of the H$'$I$'$D. The regions can be seen in Fig. \ref{fig:hid}. They were chosen in order to cover the track and include the bulk of the points, but to leave out stragglers, which are rather distant from the main track. Since the valley between the peaks in regions 2 and 3 is not very deep (see Fig. \ref{fig:marginals}), we left a gap between these two regions. \section{Results} For each of the six regions in each of the three groups, we averaged the PDS, obtaining 18 final PDS. We fitted each PDS in the 30-200 Hz region with a model consisting of a power law (to account for Poissonian noise) and a Lorentzian peak. A significant (more than 3$\sigma$) detection of a $\sim$70 Hz QPO peak is found for sixteen of the eighteen PDS, the two exceptions being $\beta$5 and $\gamma$6. The QPO parameters are shown in Tab. \ref{tab:qpo}. For all three groups, the centroid frequency of the QPO is not constant and shows the same evolution as a function of region number, shown in Fig. \ref{fig:centroid}. The frequency increases from region 1, peaks at region 3 (region 4 for $\beta$), then stabilizes around 69.5 Hz for all three groups, overall varying between 67.4 Hz and 72.3 Hz. The quality factor (defined as the ratio of the centroid frequency to the FWHM of the Lorentzian peak) is between 12 and 28, without clear trends. The integrated fractional rms of the QPO peak (in the 3-80 keV energy band) is shown in Fig. \ref{fig:rms}. As an example, the PDS for group $\gamma$ are plotted in Fig. \ref{fig:pds}, where the changes in centroid frequency are evident. We produced cross spectra, over the same 1.024s stretches used for the PDS, between the counts in the 5-10 keV energy band and those in the 10-20 keV and 20-30 keV bands, leaving out the 3-5 keV band, as the signal there is too faint (low rms and low count rate) to yield significant results. We averaged the cross spectra over the same eighteen regions as the PDS. From each averaged cross spectrum we averaged the complex values in the frequency range $\nu_0-\Delta/2$ -- $\nu_0+\Delta/2$, where $\nu_0$ and $\Delta$ are the centroid frequency and the FWHM of the QPO, and from the average we calculated the phase lag at the QPO (see M\'endez et al. 2013). In order to account for cross-channel talk, we subtracted from the complex value an average value computed in the 110-190 Hz band, where only Poissonian noise is present in the PDS. The phase lags as a function of region for the three groups are shown in Fig. \ref{fig:lags}. For both energy ranges, the phase lags are positive (hard lags soft) and decrease with region number. In the 20-30 keV case, region 6 reaches a negative value (soft lags hard).
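To make the lag-estimation procedure explicit, the following minimal sketch (in Python with NumPy; a simplified stand-in for the GHATS implementation, operating on hypothetical input arrays, with the function name and demo numbers chosen only for illustration) computes the phase lag at the QPO from two simultaneous light curves, including the subtraction of the cross-channel-talk offset estimated in the 110-190 Hz band: \begin{verbatim}
import numpy as np

def phase_lag_at_qpo(ref, comp, dt, nu0, delta):
    # ref, comp: arrays of shape (n_stretches, n_bins) with counts per bin
    # in the reference (soft) and comparison (hard) bands; dt: bin size (s);
    # nu0, delta: QPO centroid frequency and FWHM (Hz).
    freq = np.fft.rfftfreq(ref.shape[1], d=dt)
    # Cross spectrum averaged over the stretches; with this convention a
    # positive phase means that the comparison band lags the reference band.
    cross = np.mean(np.fft.rfft(ref, axis=1) *
                    np.conj(np.fft.rfft(comp, axis=1)), axis=0)
    qpo = (freq >= nu0 - delta / 2) & (freq <= nu0 + delta / 2)
    noise = (freq >= 110.0) & (freq <= 190.0)  # Poissonian-noise-only band
    # Subtract the cross-channel-talk offset estimated in the noise band.
    return np.angle(np.mean(cross[qpo]) - np.mean(cross[noise]))

# Demo with synthetic data: a 70 Hz signal present in both bands, delayed
# by one 1 ms bin in the hard band; expected lag ~ 2*pi*70*0.001 ~ 0.44 rad.
rng = np.random.default_rng(1)
t = np.arange(1024) * 1.0e-3
base = 200.0 + 50.0 * np.sin(2.0 * np.pi * 70.0 * t)
soft = rng.poisson(base, size=(2000, 1024)).astype(float)
hard = rng.poisson(np.roll(base, 1) / 3.0, size=(2000, 1024)).astype(float)
print(phase_lag_at_qpo(soft, hard, dt=1.0e-3, nu0=70.0, delta=4.0))
\end{verbatim}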
\begin{figure} \includegraphics[width=\columnwidth]{centroid.pdf} \caption{Centroid frequency of the HFQPO as a function of H$'$I$'$Ds region for the three selection groups. All frequencies have an error bar, often within the symbol.} \label{fig:centroid} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{rms.pdf} \caption{Integrated fractional rms of the HFQPO as a function of H$'$I$'$Ds region for the three selection groups.} \label{fig:rms} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{pds.pdf} \caption{PDS from group $\gamma$ together with their best fit. The changes in centroid frequency are evident.} \label{fig:pds} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{lags1.pdf} \includegraphics[width=\columnwidth]{lags2.pdf} \caption{Phase lags at the HFQPO as a function of H$'$I$'$Ds region for the three selection groups. All values have an error bar, often within the symbol. Top: 10-20 keV vs. 5-10 keV. Bottom: 20-30 keV vs. 5-10 keV. } \label{fig:lags} \end{figure} \begin{table} \centering \caption{QPO parameters for the regions shown in Fig. \ref{fig:hid}} \label{tab:qpo} \begin{tabular}{lccc} \hline Region & $\nu_0$ (Hz) & FWHM (Hz) & \% rms\\ \hline $\alpha$1 & 68.70 $\pm$ 0.14& 3.84 $\pm$ 0.35& 1.59 $\pm$ 0.05\\ $\alpha$2 & 69.85 $\pm$ 0.10& 4.15 $\pm$ 0.22& 1.96 $\pm$ 0.04\\ $\alpha$3 & 70.92 $\pm$ 0.09& 5.33 $\pm$ 0.24& 2.39 $\pm$ 0.04\\ $\alpha$4 & 69.87 $\pm$ 0.10& 4.65 $\pm$ 0.26& 2.00 $\pm$ 0.04\\ $\alpha$5 & 69.57 $\pm$ 0.29& 5.11 $\pm$ 0.70& 1.52 $\pm$ 0.08\\ $\alpha$6 & 69.72 $\pm$ 0.12& 3.41 $\pm$ 0.25& 1.78 $\pm$ 0.05\\ \hline $\beta$1 & 67.39 $\pm$ 0.14& 2.45 $\pm$ 0.26& 1.40 $\pm$ 0.06\\ $\beta$2 & 68.00 $\pm$ 0.10& 3.08 $\pm$ 0.20& 2.04 $\pm$ 0.04\\ $\beta$3 & 70.00 $\pm$ 0.10& 3.20 $\pm$ 0.23& 2.27 $\pm$ 0.06\\ $\beta$4 & 70.44 $\pm$ 0.37& 4.38 $\pm$ 1.24& 1.93 $\pm$ 0.17\\ $\beta$5 & - & - & - \\ $\beta$6 & 69.32 $\pm$ 0.35& 2.53 $\pm$ 0.77& 1.77 $\pm$ 0.18\\ \hline $\gamma$1 & 69.15 $\pm$ 0.27& 5.42 $\pm$ 0.62& 1.22 $\pm$ 0.06\\ $\gamma$2 & 71.45 $\pm$ 0.05& 2.99 $\pm$ 0.11& 1.81 $\pm$ 0.02\\ $\gamma$3 & 72.29 $\pm$ 0.13& 3.36 $\pm$ 0.35& 2.14 $\pm$ 0.07\\ $\gamma$4 & 70.61 $\pm$ 0.20& 3.78 $\pm$ 0.55& 2.07 $\pm$ 0.11\\ $\gamma$5 & 69.17 $\pm$ 0.75& 5.34 $\pm$ 2.16& 2.12 $\pm$ 0.30\\ $\gamma$6 & - & - & - \\ \hline \end{tabular} \end{table} \section{Discussion} \label{sec:discussion} We have analyzed a set of observations of GRS 1915+105 obtained with the LAXPC instrument on board \textsc{Astrosat} and found for the first time evidence for variations in the centroid frequency of the HFQPO, correlated with the position in the hardness-intensity diagram and therefore with spectral variations. Detailed spectral analysis from these data is difficult, given the complex selection of data points. Clearly, spectral variations along the CCD are strong (see Fig. \ref{fig:licu}), but at this stage our spectral analysis is still too uncertain and we will have to defer it to a future paper. Before \textsc{Astrosat}, the HFQPOs in GRS 1915+105 had been observed only with RXTE, starting from the initial discovery by Morgan et al. (1997). Additional peaks, simultaneous with the $\sim$70 Hz one and at different frequencies, have been detected previously (27 Hz: Belloni, M\'endez \& S\'anchez-Fern\'andez 2001; 41 Hz: Strohmayer 2001; 34 Hz: Belloni \& Altamirano 2013b). No additional peaks have been detected in our data. Belloni \& Altamirano (2013a) have detected a HFQPO in 51 RXTE observations.
Three of them belonged to variability class $\mu$ and nine to variability class $\omega$. In this work, we analyzed data that belong to these two classes, in which GRS 1915+105 reaches the very particular region in the HID that corresponds to HFQPO detections (see Belloni \& Altamirano 2013a). Notice that the frequencies that we observe here are on the high side compared to those in Belloni \& Altamirano (2013a), where HFQPOs from class $\omega$ were also systematically higher in frequency. The distributions of frequencies for RXTE and this work are shown in Fig. \ref{fig:comparison}. This indicates that there are secular variations, but it is remarkable that, years apart, the frequencies are still rather close. The phase lags we detect are compatible in value with those reported by M\'endez et al. (2013), but we observe for the first time an evolution: higher region numbers have systematically lower hard lags. This is particularly true for the 20-30 keV vs. 5-10 keV lags, which decrease to the point of becoming negative for region 6. Contrary to the centroid frequency, the dependence of the lags on region number is monotonic, indicating a more complex relationship with X-ray hardness. A full spectral analysis is required to link these changes to physical parameters in the accretion flow. \begin{figure} \includegraphics[width=\columnwidth]{comparison.pdf} \caption{Top: distribution of the HFQPOs in Tab. \ref{tab:qpo}. The dots indicate the single values. Bottom: distribution of the 51 HFQPOs detected with RXTE from Belloni \& Altamirano 2013a. } \label{fig:comparison} \end{figure} HFQPOs are very elusive signals and few detections are available, all of them, before the present work, from the RXTE satellite (see Belloni, Sanna \& M\'endez 2012, Belloni \& Altamirano 2013a,b and references therein). Theoretical models have been put forward, but given the scarcity of data they cannot be tested against each other. The models have been extensively explored in the recent past. The relativistic-precession model (RPM; Stella \& Vietri 1998, 1999) associates the high-frequency signals to either the Keplerian frequency or the periastron precession frequency at a certain radius of the accretion flow. If the radius at which the frequencies are produced varies, the frequency will vary correspondingly (see e.g. Motta et al. 2014a,b). The changes observed here are of the order of 6\%, which would correspond to a rather small change in radius, most likely not measurable with the current spectral uncertainties. However, we notice two things. The first is that with the current best mass estimate ($12.4^{+2.0}_{-1.8}$ M$_\odot$, Reid et al. 2014), even assuming zero spin, the lowest frequency reachable by the Keplerian frequency at the innermost stable circular orbit (ISCO) would be higher than 70 Hz. This means that this feature cannot be associated to that physical frequency at that radius (nor to the periastron precession frequency at the same radius, which has the same value). Of course it can be either of the two frequencies at a larger radius, which however would need to be rather stable throughout the years. The second is that our analysis shows that the HFQPO frequency increases with hardness, as the six regions identified in the H$'$I$'$D have the real measured hardness increasing from region 1 to region 3 and decreasing again towards region 6 (see Fig. \ref{fig:beta_HID}, in which the HID for the $\beta$ group is shown, with the points from the six regions highlighted).
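For reference, the ISCO argument above can be made quantitative: for a non-rotating black hole of mass $M$, the Keplerian frequency at the innermost stable circular orbit $r_{\rm ISCO} = 6GM/c^2$ is $$\nu_{\rm ISCO} = \frac{1}{2\pi}\frac{c^3}{6^{3/2}\,G M} \simeq \frac{2198\ {\rm Hz}}{M/M_\odot},$$ which for $M = 12.4$ M$_\odot$ gives $\nu_{\rm ISCO} \simeq 177$ Hz, indeed well above the observed $\sim$70 Hz.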
Low-frequency QPOs, very common in black-hole binaries, always show a centroid frequency that decreases with hardness. Within the RPM, these low-frequency features are associated to the Lense-Thirring precession at the same radius at which the other two frequencies are produced and therefore are always positively correlated with them. An opposite correlation with spectral hardness does not point towards such a connection, although here no LFQPOs are detected and a direct comparison cannot be made. However, Yadav et al. (2018) have shown that in \textsc{Astrosat} observations of GRS 1915+105 during the $\chi$ variability class the LFQPO frequency is positively correlated with hardness. The difference between this source and the other BHBs could be that the disk component in GRS 1915+105 is considerably hotter, leading to a significant disk contribution to the hard band. A comparison with the spectral parameters in future work will clarify this dependence. An alternative model is the Epicyclic Resonance Model (Abramowicz and Kluzniak 2001), which associates the observed frequencies to relativistic frequencies at special radii when these are in resonance and therefore assume values in simple integer ratios. In this case we only observe one frequency, which prevents a measurement of a frequency ratio. We also observe 6\% variations in the centroid frequency, which in the model should remain constant as it is associated to a constant radius, determined solely by the mass and spin of the central black hole. However, the observed variations are small and it might be possible to reconcile them within the model. \begin{figure} \includegraphics[width=\columnwidth]{beta_HID.pdf} \caption{Original hardness-intensity diagram for the points in group $\beta$ (black dots). The larger grayscale circles correspond to regions 1 (lighter) through 6 (darker). } \label{fig:beta_HID} \end{figure} \section{Conclusions} We have measured for the first time variations in the centroid frequency of the HFQPO in GRS 1915+105, which was observed to vary between 67.4 and 72.3 Hz. The variations were observed to be correlated with the position on the hardness-intensity diagram, where both hardness and count rate varied by almost one order of magnitude. Systematic variations in the hard lags at the QPO frequency were also measured. Future work will deal with the challenging task of extracting and fitting energy spectra, but it is clear that the HFQPO frequency increases as the spectrum hardens, while no monotonic variation is observed with count rate, despite the large variation of the latter. These results confirm that GRS 1915+105 is the best object available up to now to study HFQPOs, given the difficulty of observing them in other transients, both because of the faintness of the signal and because of its transient behaviour. The observed changes, while they cannot yet be applied to specific theoretical models, indicate that future timing missions with larger collecting area, such as eXTP, will be able to follow in detail how the HFQPO varies as a function of spectral parameters and will shed light on the nature of these elusive features. \section*{Acknowledgements} This work makes use of data from the \textsc{Astrosat} mission of the Indian Space Research Organisation (ISRO), archived at Indian Space Science Data Centre (ISSDC).
This work has been supported by the Executive Programme for Scientific and Technological cooperation between the Italian Republic and the Republic of India for the years 2017-2019 under project IN17MO11 (INT/Italy/P-11/2016 (ER)). TMB acknowledges financial contribution from the agreement ASI-INAF n.2017-14-H.0. We thank an anonymous referee for his/her constructive comments.
{\Large{\section{Introduction}}} \bigskip\large In these notes we give a brief introduction to decomposition theory and we summarize some classical and well-known results. The main question is the following: if a partitioning of a topological space (in other words a \emph{decomposition}) is given, what is the topology of the quotient space? The main result is that an \emph{upper semi-continuous} decomposition yields a homeomorphic decomposition space if the decomposition is \emph{shrinkable} (i.e.\ there exist self-homeomorphisms of the space which shrink the partitions into arbitrarily small sets in a controllable way). This is called the \emph{Bing shrinkability criterion} and it was introduced in \cite{Bi52, Bi57}. It is applied in major $4$-dimensional results: in the disk embedding theorem and in the proof of the $4$-dimensional topological Poincar\'e conjecture \cite{Fr82, FQ90, BKKPR}. It is extensively applied in constructing approximations of manifold embeddings in dimension $\geq 5$, see \cite{AC79} and Edwards's cell-like approximation theorem \cite{Ed78}. If a decomposition is shrinkable, then a decomposition element has to be \emph{cell-like} and \emph{cellular}. Also the quotient map is approximable by homeomorphisms. A \emph{cell-like map} is a map where the point preimages are similar to points, while a \emph{cellular map} is a map where the point preimages can be approximated by balls. There is an essential difference between the two types of maps: ball approximations always give cell-like sets, but in a smooth manifold the complement of a cell-like set $C$ has to be simply connected in a nbhd of $C$ in order for $C$ to be cellular. Finding conditions for a decomposition to be shrinkable is one of the main goals of the theory. For example, cell-like decompositions are shrinkable if the non-singleton decomposition elements have codimension $\geq 3$, that is, any maps of disks can be made disjoint from them \cite{Ed16}. In many constructions Cantor sets (sets of uncountably many points which can be cut out of the real line so that we are left with a manifold) arise as limits of sequences of sets defining the decomposition. The interesting fact is that a limit Cantor set can be non-standard and it can have properties very different from the usual middle-third Cantor set in $[0,1]$. An example for such a non-standard Cantor set is given by Antoine's necklace but many other explicit constructions are studied in the subsequent sections. The present notes will cover the following: upper semi-continuous decompositions, defining sequences, cellular and cell-like sets, examples like Whitehead continuum, Antoine's necklace and Bing decomposition, shrinkability criterion and near-homeomorphism, approximating by homeomorphisms and shrinking countable upper semi-continuous decompositions. We prove for example that every cell-like subset in a $2$-dimensional manifold is cellular, that Antoine's necklace is a wild Cantor set, that in a complete metric space a usc decomposition is shrinkable if and only if the decomposition map is a near-homeomorphism and that every manifold has collared boundary. \large \bigskip{\Large{\section{Decompositions}}} \bigskip\large A neighborhood (nbhd for short) of a subset $A$ of a topological space $X$ is an open subset of $X$ which contains $A$. \begin{defn} Let $X$ be a topological space. A set $\mathcal D \subset \mathcal P (X)$ is a \emph{decomposition} of $X$ if the elements of $\mathcal D$ are pairwise disjoint and $\bigcup \mathcal D = X$.
An element of $\mathcal D$ which consists of one single point is called a \emph{singleton}. A non-singleton decomposition element is called \emph{non-degenerate}. The elements of $\mathcal D$ are the \emph{decomposition elements}. The set of non-degenerate elements is denoted by $\mathcal H_{\mathcal D}$. \end{defn} If $f \co X \to Y$ is an arbitrary (not necessarily continuous) map between the topological spaces $X$ and $Y$, then the set $$\{ f^{-1}(y) : y \in Y,\ f^{-1}(y) \neq \emptyset \}$$ is a decomposition of $X$. A decomposition defines an equivalence relation on $X$ as usual, i.e.\ $a, b \in X$ are equivalent iff $a$ and $b$ are in the same element of $\mathcal D$. \begin{defn} If $\mathcal D$ is a decomposition of $X$, then the \emph{decomposition space} $X_\mathcal D$ is the space $\mathcal D$ with the following topology: the subset $U \subset \mathcal D$ is open exactly if $\pi^{-1} (U)$ is open. Here $\pi \co X \to \mathcal D$ is the \emph{decomposition map} which maps each $x \in X$ into its equivalence class. \end{defn} In other words $X_\mathcal D$ is the quotient space with the quotient topology and $$\pi \co X \to X_\mathcal D$$ is just the quotient map. Recall that by well-known statements $X_\mathcal D$ is compact, connected and path-connected if $X$ is compact, connected and path-connected, respectively. Obviously $\pi$ is continuous. \begin{prop} The decomposition space is a $T_1$ space if the decomposition elements are closed. \end{prop} \begin{proof} We have to show that the points in the space $X_{\mathcal D}$ are closed. If $U$ is the complement of a point in $X_{\mathcal D}$, then $\pi^{-1}(U)$ is the complement of a decomposition element, which is open so $U$ is also open. \end{proof} We would like to construct and study such decompositions which have especially nice properties concerning the behavior of the sequences of decomposition elements. \begin{defn} Let $f \co X \to \mathbb R$ be a function. It is \emph{upper semi-continuous} (resp.\ \emph{lower semi-continuous}) if for every $x \in X$ and $\varepsilon > 0$ there is a nbhd $V_x$ such that $f(V_x) \subset (-\infty, f(x) + \varepsilon)$ (resp.\ $f(V_x) \subset (f(x) - \varepsilon, \infty))$. \end{defn} For us, upper semi-continuous functions will be important. These are functions where, along a sequence $x_n \rightarrow x$, the values $f(x_n)$ can be at most $f(x) + \varepsilon_n$, where $\varepsilon_n \geq 0$ and $\varepsilon_n \rightarrow 0$. Let $f \co \mathbb R \to \mathbb R$ be an upper semi-continuous, positive function and consider the following decomposition of $\mathbb R^2$. Take the vertical segments of the form \begin{equation}\label{usc_example} A_x = \{ (x, y) : y \in [0, f(x) ]\} \end{equation} for each $x \in \mathbb R$. Together with the points in $\mathbb R^2$ which are not in these segments (these points are the so-called singletons) this gives a decomposition of $\mathbb R^2$. This has an interesting property: let $y \in [0, f(x) ]$ for some $x \in \mathbb R$ and let $(x_n)$ be a sequence in $\mathbb R$ (which is not necessarily convergent). If every nbhd of the point $(x,y)$ intersects all but finitely many segments $A_{x_n}$, then the points $(u, v) \in \mathbb R^2$ each of whose nbhds intersects all but finitely many $A_{x_n}$ are in $A_x$ as well, see Figure~\ref{uscfunction}. The set of the points $(u, v)$ is called the \emph{lower limit} of the sequence $A_{x_n}$. In other words, if an $A_x$ intersects the lower limit of a sequence $A_{x_n}$, then the whole lower limit is a subset of $A_x$.
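Before formalizing these limit notions, we record a standard one-dimensional example of upper semi-continuity: the step function given by $f(x) = 1$ for $x \leq 0$ and $f(x) = 0$ for $x > 0$ is upper semi-continuous, since for a suitable nbhd of any $x$ the values of $f$ stay below $f(x) + \varepsilon$, but it is not lower semi-continuous at $0$, since for $x_n > 0$ converging to $0$ we have $f(x_n) = 0 < f(0) - \varepsilon$ whenever $\varepsilon < 1$.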
More generally we have the following. \begin{figure}[h!] \begin{center} \epsfig{file=uscfunction.eps, height=3.7cm} \end{center} \caption{The graph of an upper semi-continuous function $f$ and some segments $A_x$. If the segments $A_{x_n}$ ``converge'' to a segment $A_x$, then $f(x_n)$ converges to a number $\leq f(x)$.} \label{uscfunction} \end{figure} \begin{defn} Let $A_n$ be a sequence of subsets of the space $X$. The \emph{lower limit} of $A_n$ is the set of the points $p \in X$ each of whose nbhds intersects all but finitely many $A_{n}$. It is denoted by $\liminf A_n$. The \emph{upper limit} of $A_n$ is the set of the points $p \in X$ each of whose nbhds intersects infinitely many $A_{n}$s. It is denoted by $\limsup A_n$. \end{defn} Note that $\liminf A_n \subset \limsup A_n$ is always true. In the previous example the sets $A_{x_n}$ could approach the set $A_x$ only in a manner determined by the function $f$. This leads to the following general definition. \begin{defn} Let $\mathcal D$ be a decomposition of a space $X$ such that all elements of $\mathcal D$ are closed and compact, and they can converge to each other only in the following way: if $A \in \mathcal D$, then for every nbhd $U$ of $A$ there is a nbhd $V$ of $A$ with the property $V \subset U$ such that if some element $B \in \mathcal D$ intersects $V$, then $B \subset U$, i.e.\ the set $B$ is completely inside the nbhd $U$. Then $\mathcal D$ is an \emph{upper semi-continuous decomposition} (\emph{usc} decomposition for short). If all the decomposition elements are closed but not necessarily compact, then we say it is a \emph{closed upper semi-continuous decomposition}. \end{defn} For example, the decomposition defined in (\ref{usc_example}) is usc. \begin{lem}\label{saturated} Let $\mathcal D$ be a decomposition of the space $X$ such that each decomposition element is closed. The following are equivalent: \begin{enumerate}[\rm (1)] \item $\mathcal D$ is a closed usc decomposition, \item for every $D \in \mathcal D$ and every nbhd $U$ of $D$ there is a saturated nbhd $W \subset U$ of $D$, that is, an open set $W$ which is a union of decomposition elements, \item for each open subset $U \subset X$, the set $\cup\{ D \in \mathcal D : D \subset U \}$ is open, \item for each closed subset $F \subset X$, the set $\cup\{ D \in \mathcal D : D \cap F \neq \emptyset \}$ is closed, \item the decomposition map $\pi \co X \to X_{\mathcal D}$ is a closed map. \end{enumerate} \end{lem} \begin{proof} Suppose $\mathcal D$ is usc and $U$ is a nbhd of $D$. Let $W$ be the union of all decomposition elements which are subsets of $U$. Then $D \subset W$ obviously. To see that $W$ is open, let $x \in W$; then $x \in D'$ for some decomposition element $D' \subset U$. By the usc property there is a nbhd $V$ of $D'$ with $V \subset U$ such that every decomposition element intersecting $V$ is contained in $U$, hence also in $W$; since every point of $V$ lies in such an element, the nbhd $V$ of $x$ is contained in $W$. This shows that (1) implies (2). Suppose (2) holds. If $U$ is an open set, then for each decomposition element $D \subset U$ a saturated nbhd $W$ of $D$ is also in $U$ and also in $\cup\{ D \in \mathcal D : D \subset U \}$. This means that the set $\cup\{ D \in \mathcal D : D \subset U \}$ is a union of open sets, which proves (3). We have that (3) and (4) are equivalent because we can take the complement of a given closed set $F$ or an open set $U$.
We have that (4) and (5) are equivalent: from (4) we can show (5) by taking an arbitrary closed set $F \subset X$; then $\cup\{ D \in \mathcal D : D \cap F \neq \emptyset \}$ is closed, its complement is a saturated open set whose $\pi$-image is open, so $\pi(F)$ is closed. If we suppose (5), then for a closed set $F \subset X$ the set $$\pi^{-1} ( \pi(F)) = \cup\{ D \in \mathcal D : D \cap F \neq \emptyset \}$$ is closed so we get (4). Finally (3) implies (1): if $D \in \mathcal D$ and $U$ is a nbhd of $D$, then let $V$ be the open set $\cup\{ D \in \mathcal D : D \subset U \}$, this is a nbhd of $D$, it is in $U$ and if a $D' \in \mathcal D$ intersects $V$, then it is in $V$ and hence also in $U$. \end{proof} There is also the notion of \emph{lower semi-continuous decomposition}: a decomposition $\mathcal D$ of a metric space is lower semi-continuous if for every element $A \in \mathcal D$ and for every $\varepsilon > 0$ there is a nbhd $V$ of $A$ such that if some decomposition element $B$ intersects $V$, then $A$ is in the $\varepsilon$-nbhd of $B$. A decomposition of a metric space is \emph{continuous} if it is upper and lower semi-continuous, see Figure~\ref{lscdecomp}. We will not study decompositions which are only lower semi-continuous. \begin{figure}[h!] \begin{center} \epsfig{file=lscdecomp.eps, height=11cm} \end{center} \caption{A lower semi-continuous, an upper semi-continuous and a continuous decomposition. In each of the cases the non-degenerate decomposition elements are line segments, which converge to other line segments. The dots indicate convergence. Only the non-singleton decomposition elements are sketched. The lower semi-continuous decomposition consists of decomposing the area under the graph of a lower semi-continuous function into vertical line segments, there are no singletons among the decomposition elements and the decomposed space itself is not closed. The upper semi-continuous and continuous decompositions are decompositions of the rectangle. Only the upper semi-continuous decomposition has singletons.} \label{lscdecomp} \end{figure} \begin{thm} Let $X$ be a $T_3$ space and let $\mathcal D$ be a closed usc decomposition. If $A_n \in \mathcal D$ is a sequence of decomposition elements and $A \in \mathcal D$ are such that $A \cap \liminf A_n \neq \emptyset$, then $\limsup A_n \subset A$. \end{thm} \begin{proof} Suppose there is a point $x \in A$ such that $x \in \liminf A_n$ as well. By contradiction suppose that $\limsup A_n \nsubseteq A$; this means that there is a point $y \in \limsup A_n$ with $y \notin A$. Since $y \in D$ for a decomposition element, we get $D \neq A$ so $D$ is disjoint from the decomposition element $A$. The space $X$ is $T_3$, the sets $D$ and $\{x\}$ are closed so there is a nbhd $U$ of $D$ and a nbhd $V$ of $x$ which are disjoint from each other. We also have a nbhd $W \subset U$ of $D$ which is a union of decomposition elements by Lemma~\ref{saturated}. Since $x \in \liminf A_n$, we have that for an integer $k$ the sets $A_k, A_{k+1}, \ldots$ intersect $V$. The nbhd $W$ is saturated and disjoint from $V$; this implies that no decomposition element intersects both $W$ and $V$. So $A_k, A_{k+1}, \ldots$ are disjoint from $W$. This contradicts the fact that $W$ is a nbhd of $y$, since infinitely many $A_n$ have to intersect $W$ because $y \in \limsup A_n$. \end{proof} Another example of a usc decomposition is given by the equivalence relation on $S^n$ defined by $x \sim -x$.
Here the decomposition elements are not connected and the decomposition space is the projective space ${\mathbb R}{P}^n$. Or another example is the closed usc decomposition of $\mathbb R^2$, where the two non-singleton decomposition elements are the two arcs of the graph of the function $x \mapsto 1/x$, all the other decomposition elements are singletons. The decomposition space is homeomorphic to $$A \cup_{\phi} B \cup_{\psi} A',$$ where $A$ and $A'$ are open disks, each of them with one additional point in its frontier denoted by $a$ and $a'$ respectively. The space $B$ is an open disk with two additional points $b, b'$ in its frontier and the gluing homeomorphisms are $\phi \co \{a\} \to \{b\}$ and $\psi \co \{a'\} \to \{b'\}$. If a decomposition is given, then we would like to understand the decomposition space as well. \begin{prop} The decomposition space of a closed usc decomposition of a normal space is $T_4$. \end{prop} \begin{proof} We have to show that if $\mathcal D$ is a closed usc decomposition of a normal space $X$, then any two disjoint closed sets in the space $X_{\mathcal D}$ can be separated by open sets. Let $A, B$ be disjoint closed sets in $X_{\mathcal D}$. Then $\pi^{-1}(A)$ and $\pi^{-1}(B)$ are disjoint closed sets and, since $X$ is normal, by Lemma~\ref{saturated} they have disjoint saturated nbhds $U_1$ and $U_2$. Taking $\pi(U_1)$ and $\pi(U_2)$ we get disjoint nbhds of $A$ and $B$. The decomposition elements are closed so $X_{\mathcal D}$ is $T_1$, which finally implies that $X_{\mathcal D}$ is $T_4$. \end{proof} If a space $X$ is not normal, then it is easy to define a closed usc decomposition whose decomposition space is not even $T_2$. Take two disjoint closed sets $A, B$ in $X$ which cannot be separated by open sets. For example, the direct product of the Sorgenfrey line with itself is not normal; choose the points with rational and irrational coordinates in the antidiagonal, respectively, to obtain two closed sets $A$ and $B$. These two sets are the two non-singleton elements of the decomposition $\mathcal D$, other elements are singletons. Then $\mathcal D$ is closed usc but $X_{\mathcal D}$ is not $T_2$ because $\pi(A)$ and $\pi(B)$ cannot be separated by open sets. \begin{defn} Let $\mathcal D$ be a decomposition of the space $X$. A decomposition is \emph{finite} if it has only finitely many non-degenerate elements and \emph{countable} if it has countably many non-degenerate elements. A decomposition is \emph{monotone} if every decomposition element is connected. If $X$ is a metric space, then a decomposition is \emph{null} if the decomposition elements are bounded and for every $\varepsilon > 0$ there is only a finite number of elements whose diameter is greater than $\varepsilon$. \end{defn} \begin{prop} Let $\mathcal D$ be a decomposition and suppose that all elements are closed. If $\mathcal D$ is finite, then it is a closed usc decomposition. \end{prop} \begin{proof} Let $C \subset X$ be a closed subset, then $\pi^{-1}(\pi(C))$ is closed because it is the finite union of the closed set $C$ and the non-degenerate elements which intersect $C$. Then by Lemma~\ref{saturated} (4) the statement follows. \end{proof} \begin{prop} If $\mathcal D$ is a closed and null decomposition of a metric space, then it is usc. \end{prop} \begin{proof} Denote the metric by $d$. All the decomposition elements are compact because they are closed and bounded (here we tacitly assume that closed and bounded subsets of our metric space are compact, as is the case in $\mathbb R^n$).
Let $U$ be a nbhd of a $D \in \mathcal D$; then there is an $\varepsilon >0$ such that the $\varepsilon$-nbhd of $D$ is in $U$. Since $\mathcal D$ is null, there are only finitely many decomposition elements $D_1, \ldots, D_n$ whose diameter is greater than $\varepsilon/4$ and $D_i \neq D$. Let $\delta$ be the minimum of $\varepsilon/4$ and the distances between $D$ and the $D_i$s. If $D' \in \mathcal D$ is such that the distance between $D'$ and $D$ is less than $\delta$, then $D'$ is in the $\varepsilon$-nbhd of $D$: there are $x \in D$ and $y \in D'$ such that $d(x, y) < \delta$ so for every $a \in D'$ \begin{multline*} \inf \{ d(a, b) : b \in D \} \leq d(a, y) + d(y, x) + \inf \{ d(x, b) : b \in D \} = d(a, y) + d(y, x) \leq \\ {\mathrm {diam}} \thinspace D' + \delta \leq \varepsilon/2, \end{multline*} which means that $D'$ is in the $\varepsilon$-nbhd of $D$ so $D' \subset U$. \end{proof} \begin{prop} Let $\mathcal D$ be a usc decomposition of a space $X$. \begin{enumerate}[\rm (1)] \item If $X$ is $T_2$, then $X_{\mathcal D}$ is $T_2$ as well. \item If $X$ is regular, then $X_{\mathcal D}$ is $T_3$. \end{enumerate} \end{prop} \begin{proof} The decomposition elements are compact so every $\pi^{-1}(a)$ and $\pi^{-1}(b)$ for different $a, b \in X_{\mathcal D}$ can be separated by open sets. The statement follows easily. \end{proof} \begin{prop} Let $\mathcal D$ be a usc decomposition of a $T_2$ space $X$. The decomposition $\mathcal D'$ whose elements are the connected components of the elements of $\mathcal D$ is a monotone usc decomposition. \end{prop} \begin{proof} Take an element $D' \in \mathcal D'$ and denote by $D$ the decomposition element in $\mathcal D$ which contains $D'$. Suppose $D \neq D'$ (the case $D' = D$ is straightforward). Then $D - D'$ is closed in $D$ so it is closed in $X$. Let $U$ be a nbhd of $D'$. Then there exists a nbhd $U' \subset U$ of $D'$ which is disjoint from a nbhd $U''$ of the closed set $D - D'$. By the usc property we can find a nbhd $V$ of $D$ such that $V \subset U' \cup U''$ and if a $C \in \mathcal D$ intersects $V$, then $C \subset U' \cup U''$. If $C' \in \mathcal D'$ intersects $V \cap U'$, then the element $C \in \mathcal D$ which contains $C'$ as a connected component intersects $V$ hence $C \subset U' \cup U''$. Since $U'$ and $U''$ are disjoint, the component $C'$ of $C$ is in $U'$ because it intersects $U'$. We get that $C' \subset U$. \end{proof} For example, it follows that the decomposition of a compact $T_2$ space $X$ whose elements are the connected components of the space is a usc decomposition. To see this, at first take the decomposition $\mathcal D$, where $\mathcal H_{\mathcal D} = \{ X \}$ and hence the decomposition has no singletons. This is usc so we can apply the previous proposition. \begin{prop} If $X$ is a metric space and $\mathcal D$ is its usc decomposition, then $X_{\mathcal D}$ is metrizable. If $X$ is separable, then $X_{\mathcal D}$ is also separable. \end{prop} \begin{proof} By \cite{St56} if there is a continuous closed map $f$ of a metric space onto a space $Y$ such that for every $y \in Y$ the closed set $f^{-1}(y) - {\mathrm {int}}\thinspace f^{-1}(y)$ is compact, then $Y$ is metrizable. But for every $y \in X_{\mathcal D}$ the set $\pi^{-1}(y)$ and so its closed subset $\pi^{-1}(y) - {\mathrm {int}}\thinspace \pi^{-1}(y)$ are compact hence $X_{\mathcal D}$ is metrizable.
Moreover if $X$ is separable, then there is a countable subset $S \subset X$ intersecting every open set, which gives the countable set $\pi(S)$ intersecting every open set in $X_{\mathcal D}$. \end{proof} \bigskip{\Large{\section{Examples and properties of decompositions}}} \bigskip\large Usually, we are interested in the topology of the decomposition space if a decomposition of $X$ is given. Especially stimulating are those situations where the decomposition space turns out to be homeomorphic to $X$. Let $X = \mathbb R$ and let $\mathcal D$ be a decomposition such that $\mathcal H_{\mathcal D}$ consists of countably many disjoint compact intervals. Then this is a usc decomposition: any open interval $U \subset \mathbb R$ contains at most countably many compact intervals of $\mathcal H_{\mathcal D}$ and the infimum of the left endpoints of these intervals could be in $U$ or it could be the left boundary point of $U$. Similarly, we have this for the right endpoints. In all cases the union of the decomposition elements contained in $U$ is open. For an arbitrary open set $U \subset \mathbb R$ we have the same; this means we have a usc decomposition. Later we will see that the decomposition space $X_{\mathcal D}$ is homeomorphic to $\mathbb R$. Moreover the decomposition map $\pi \co X \to X_{\mathcal D}$ is approximable by homeomorphisms, which means there are homeomorphisms from $\mathbb R$ to $\mathbb R$ arbitrarily close to $\pi$ in the sense of the uniform metric. For example, let $X = \mathbb R$ and consider the infinite Cantor set-like construction by taking iteratively the middle-third compact intervals in the interval $[0,1]$. These are countably many intervals and define the decomposition ${\mathcal D}$ so that the non-degenerate elements are these intervals. We can obtain this decomposition ${\mathcal D}$ by taking the connected components of $[0,1] - \mbox{Cantor set}$ and then taking their closures. This is usc and we will see that the decomposition space is $\mathbb R$. If $X = \mathbb R^2$, then an analogous decomposition is one where $\mathcal H_{\mathcal D}$ consists of countably many compact line segments. More generally, let $\mathcal H_{\mathcal D}$ be countably many \emph{flat} arcs, that is, such subsets $A$ of $\mathbb R^2$ for which there exist self-homeomorphisms $h_A$ of $\mathbb R^2$ mapping $A$ onto the standard compact interval $\{ (x, 0) \in \mathbb R^2 : 0 \leq x \leq 1 \}$. Such a decomposition is not necessarily usc, for example take the function $f \co [0,1) \to \mathbb R$, $f(x) = 1+x$, and the sequence $x_n = 1 -1/n$. Define the decomposition by $\mathcal H_{\mathcal D} = \{ \{ (x_n, y) : y \in [0, f(x_n) ] \} : n \in \mathbb N \}$ and the singletons are all the other points of $\mathbb R^2$. Then $\mathcal H_{\mathcal D}$ consists of countably many straight line segments but this decomposition is not usc: consider the singleton $\{ (1, 3/2) \} \in \mathcal D$ and its $\varepsilon$-nbhds for small $\varepsilon>0$. These intersect infinitely many non-degenerate decomposition elements, but none of the elements is a subset of any of these $\varepsilon$-nbhds, since each non-degenerate element has diameter at least $1$. The decomposition space is not $T_2$: the points $\pi((1, y))$, where $0 \leq y \leq 2$, cannot be separated by disjoint nbhds because the sequence $\pi((x_n, 0))$ converges to all of them.
However, if $\mathcal D$ is such a decomposition of $\mathbb R^2$ that $\mathcal H_{\mathcal D}$ consists of countably many {flat} arcs and further we suppose that $\mathcal D$ is usc, then the decomposition space $X_{\mathcal D}$ is homeomorphic to $\mathbb R^2$ and again $\pi$ can be approximated by homeomorphisms, we will see this later. We get another interesting example by taking a smooth function with finitely many critical values on a closed manifold $M$. Then the decomposition elements are defined to be the connected components of the point preimages of the function. This is a monotone decomposition ${\mathcal D}$ and it is usc because the decomposition map $\pi \co M \to M_{\mathcal D}$ is a closed map: in $M$ a closed set is compact, its $\pi$-image is compact as well, and $M_{\mathcal D}$ is $T_2$ because it is a graph \cite{Iz88, Re46, Sa20}, so this $\pi$-image is also closed. If $X$ is $3$-dimensional, then the possibilities increase tremendously. This is illustrated by the following surprising statement. \begin{prop}\label{line_decomp} For every compact metric space $Y$ there exists a monotone usc decomposition of the compact ball $D^3$ such that $Y$ can be embedded into the decomposition space. \end{prop} \begin{proof} Recall that by the Alexandroff-Hausdorff theorem the Cantor set in the $[0,1]$ interval can be mapped surjectively and continuously onto every compact metric space. Let $T$ be a tetrahedron in $D^3$, denote two of its non-intersecting edges by $e$ and $f$. Identify these edges linearly with $[0,1]$ and let $C_1$ and $C_2$ be the Cantor sets in $e$ and $f$, respectively. For $i=1, 2$ denote the existing surjective maps of $C_i$ onto $Y$ by $\psi_i \co C_i \to Y$. For every $x \in Y$ take the union of all the line segments in $T$ connecting all the points of $\psi_1^{-1}(x)$ to all the points of $\psi_2^{-1}(x)$. Denote this subset of $T$ by $D_x$, see Figure~\ref{tetrahedron}. They are compact and connected for all $x \in Y$ and they are pairwise disjoint because all the lines in $T$ connecting points of $e$ and $f$ are pairwise disjoint. So we have a monotone usc decomposition with $\mathcal H_{\mathcal D} = \{ D_x : x \in Y \}$. Define the embedding $i$ of $Y$ into $D^3_{\mathcal D}$ by $i (x ) = \pi( \psi_1^{-1}(x))$. This map is injective; it is closed because $\pi$ is closed, and it is continuous because $\psi_1$ is a closed continuous surjection (hence a quotient map) and $i \circ \psi_1 = \pi|_{C_1}$ is continuous. \end{proof} \begin{figure}[h!] \begin{center} \epsfig{file=tetrahedron.eps, height=9cm} \put(-1.2, 1.7){$f$} \put(-9, 5.5){$e$} \end{center} \caption{The tetrahedron $T$, the edges $e$ and $f$ and a set $D_x$ pictured in blue.} \label{tetrahedron} \end{figure} To see further examples in $\mathbb R^3$ let us introduce some notions. \begin{defn}[Defining sequence] Let $X$ be a connected $n$-dimensional manifold. A \emph{defining sequence} for a decomposition of $X$ is a sequence $$C_1, C_2, \ldots, C_n, \ldots$$ of compact $n$-dimensional submanifolds-with-boundary in $X$ such that $C_{n+1} \subset \mathrm{int} \thinspace C_n$. The decomposition elements of the defined decomposition are the connected components of $\cap_{n=1}^{\infty} C_n$ and the other points of $X$ are singletons. \end{defn} Obviously a decomposition defined in this way is monotone. The set $\cap_{n=1}^{\infty} C_n$ is closed and compact so its connected components are closed and compact as well. Also the space $\cap_{n=1}^{\infty} C_n$ is $T_2$ hence its decomposition into its connected components is usc.
Then adding all the points of $X - \cap_{n=1}^{\infty} C_n$ to this decomposition as singletons yields our decomposition. This is usc: the only point which is not completely obvious is whether the usc conditions are satisfied in a nbhd of an added singleton. But $\cap_{n=1}^{\infty} C_n$ is closed, its complement is open so every such singleton has a nbhd disjoint from $\cap_{n=1}^{\infty} C_n$. \begin{prop} If each $C_n$ in a defining sequence is connected, then $\cap_{n=1}^{\infty} C_n$ is connected. \end{prop} \begin{proof} Let $C$ denote the set $\cap_{n=1}^{\infty} C_n$, which is non-empty because the $C_n$ form a nested sequence of non-empty compact sets. Suppose $C$ is not connected, this means there are disjoint closed non-empty subsets $A, B \subset C$ such that $A \cup B = C$. These $A$ and $B$ are closed in the ambient manifold $X$ as well, so there exist disjoint nbhds $U$ of $A$ and $V$ of $B$ in $X$. It is enough to show that for some $n \in \mathbb N$ we have $C_n \subset U \cup V$, because then $C_n \cap U \neq \emptyset$, $C_n \cap V \neq \emptyset$ imply that $C_n$ is not connected, which is a contradiction. If we suppose that for every $n \in \mathbb N$ we have $C_n \cap (X - (U \cup V)) \neq \emptyset$, then for every $n$ we have $C_n \cap (X - U) \cap (X-V) \neq \emptyset$, i.e.\ the closed set $F = (X - U) \cap (X-V)$ and each element of the nested sequence $C_1, C_2, \ldots$ satisfy $$C_n \cap F \neq \emptyset.$$ Of course $$C_{n+1} \cap F \subset C_n \cap F$$ which implies that $$F \cap C = F \cap (\cap_{n=1}^{\infty} C_n) = \cap_{n=1}^{\infty} (C_n \cap F) \neq \emptyset$$ because the sets $C_n \cap F$ are non-empty, nested and closed in the compact space $C_1$. But $F \cap C \neq \emptyset$ contradicts $C \subset U \cup V$. \end{proof} The $\pi$-image of the union of non-degenerate elements of a decomposition associated to a defining sequence is closed, and it is totally disconnected because any two decomposition elements have disjoint saturated nbhds, which yield disjoint nbhds of their $\pi$-images. \subsection{The Whitehead continuum} One of the most famous such decompositions is related to the so-called Whitehead continuum. Its defining sequence consists of solid tori embedded into each other in such a way that $C_{i+1}$ is a thickened Whitehead double of the center circle of $C_i$, see Figure~\ref{whitehead_decomp}. The intersection $\cap_{i = 1}^{\infty} C_i$ is a compact subset of $\mathbb R^3$, this is the Whitehead continuum, which we denote by $\mathcal W$. The decomposition consists of the connected components of $\mathcal W$ and the singletons in the complement of them. If the diameters $d_i$ of the meridians of the tori $C_i$ converge to $0$ as $i$ goes to $\infty$, then $\mathcal W$ intersects the vertical sheet $S$ in Figure~\ref{whitehead_decomp} in a Cantor set: $C_i \cap S$ consists of $2^{i-1}$ disks of diameter $d_i$, nested inside the disks of the previous stage. The intersection $S \cap (\cap_{i = 1}^{\infty} C_i)$ is then a Cantor set. The Whitehead continuum $\mathcal W$ is connected because the $C_i$ tori are connected but it is not path-connected. We will see later that the decomposition space $\mathbb R^3_{\mathcal W}$ is not homeomorphic to $\mathbb R^3$ but taking its direct product with $\mathbb R$ we get $\mathbb R^4$. An important property is that $S^3 - \mathcal W$ is a contractible open $3$-manifold which is not homeomorphic to $\mathbb R^3$, as we will see below. For understanding further properties of this decomposition, we are going to define some notions. \begin{figure}[h!]
\begin{center} \epsfig{file=whitehead_decomp.eps, height=16cm} \put(0.2, 4){$S$} \end{center} \caption{A sketch of the defining sequence of the Whitehead decomposition. The first figure shows the solid torus $C_1$ and the Whitehead double of its center circle. The second figure shows the Whitehead double of the center circle of $C_2$. The torus $C_2$ is not shown but we get it by thickening the Whitehead double in $C_1$. Then thicken the knot in the second figure (so we get the solid torus $C_3$) and take its center circle. Take the Whitehead double of this circle and so we get the knot embedded in $C_3$ in the third figure. In the third figure we can see the intersection of $C_3$ with a vertical sheet $S$, which is four small disks. This vertical sheet $S$ intersects the Whitehead continuum in a Cantor set.} \label{whitehead_decomp} \end{figure} \begin{defn}[Cellular set, cell-like set] Let $X$ be an $n$-dimensional manifold and $C \subset X$ be a subset of $X$. The set $C$ is \emph{cellular} if there is a sequence $B_1, B_2, \ldots, B_n, \ldots$ of closed $n$-dimensional balls in $X$ such that $B_{n+1} \subset \mathrm{int} \thinspace B_n$ and $C = \cap_{n=1}^{\infty} B_n$. A compact subset $C$ of a topological space $X$ is \emph{cell-like} if for every nbhd $U$ of $C$ there is a nbhd $V$ of $C$ in $U$ such that the inclusion map $V \to U$ is homotopic in $U$ to a constant map. Similarly, a decomposition is called cellular or cell-like if each of its decomposition elements is cellular or cell-like, respectively. \end{defn} For example the ``topologist's sine curve'' in $\mathbb R^2$ is cellular. A cellular set is compact and also connected but not necessarily path-connected. It is also easy to see that every compact contractible subset of a manifold is cell-like. Also a compact and contractible metric space is cell-like in itself. A cell-like set $C$ is connected because if there were two open subsets $U_1$ and $U_2$ in $X$ separating some connected components of $C$, then it would not be possible to contract any nbhd $V \subset U_1 \cup U_2$ of $C$ to one single point. \begin{prop}\label{Wnotcell} The set $\mathcal W$ is cell-like but not cellular. \end{prop} \begin{proof} Let $U$ be a nbhd of $\mathcal W$. Then there is an $n$ such that $C_i \subset U$ for all $i \geq n$. Let $V$ be a small tubular nbhd of $C_{n+1}$ which lies inside $C_n$. Then since the Whitehead double of the center circle of $C_n$ is null-homotopic in the solid torus $C_n$, the thickened Whitehead double $C_{n+1}$ and its nbhd $V$ are also null-homotopic in $C_n$, hence the map $V \to U$ is homotopic in $U$ to a constant map. \begin{lem} The $3$-manifold $S^3 - \mathcal W$ is not simply connected at infinity. \end{lem} \begin{proof} We have to show that there is a compact subset $C \subset S^3 - \mathcal W$ such that for every compact set $D \subset S^3 - \mathcal W$ containing $C$ the induced homomorphism $$ \varphi \co \pi_1(S^3 - \mathcal W - D) \to \pi_1(S^3 - \mathcal W - C) $$ is not the trivial homomorphism. Let $C$ be the closure of $S^3 - C_1$. If $D$ is a compact set in $S^3 - \mathcal W$ containing $C$, then $S^3 - D$ is a nbhd of $\mathcal W$ in $C_1$. Then there is an $n$ such that $C_i \subset S^3 - D$ for all $i \geq n$. Consider the commutative diagram \begin{equation*} \begin{CD} \pi_1(S^3 - C_n - D) @>>> \pi_1(S^3 - C_n - C) \\ @VVV @VV \alpha V \\ \pi_1(S^3 - \mathcal W - D) @> \varphi >> \pi_1(S^3 - \mathcal W - C).
\end{CD} \end{equation*} By \cite{NW37} the generator of the group $\pi_1(S^3 - C_n - C)$ represented by the meridian of the torus $C_n$ is mapped by $\alpha$ into a generator of $\pi_1(S^3 - \mathcal W - C)$. Since this meridian also represents an element of $\pi_1(S^3 - C_n - D)$, we get that $\varphi$ is not the trivial homomorphism. \end{proof} Let us continue the proof of Proposition~\ref{Wnotcell}. If $\mathcal W$ is cellular, then there are $B_1, B_2, \ldots, B_n, \ldots$ closed $3$-dimensional balls in $S^3$ such that $B_{n+1} \subset \mathrm{int} \thinspace B_n$ and $\mathcal W = \cap_{n=1}^{\infty} B_n$. This would imply that $S^3 - \mathcal W$ is simply connected at infinity because if $C \subset S^3 - \mathcal W$ is a compact set, then take a $B_n \subset S^3 - C$ and a loop in $\mathrm {int} B_n - \mathcal W$, then there is a $B_m \subset \mathrm {int} B_n$ not containing this loop and the loop is null-homotopic in $\mathrm {int} B_n-B_m$ because $\pi_1 (\mathrm {int} B_n-B_m ) = 0$. Hence we obtain that $\mathcal W$ is not cellular. \end{proof} With more effort we could show that $S^3- \mathcal W$ is contractible so it is homotopy equivalent to $\mathbb R^3$ but by the previous statement it is not homeomorphic to $\mathbb R^3$. It is known that the set $\mathcal W \times \{ 0 \}$ is cellular in $\mathbb R^3 \times \mathbb R$ and the decomposition space of the decomposition of $\mathbb R^3 \times \mathbb R$ whose only non-degenerate element is $\mathcal W \times \{ 0 \}$ is homeomorphic to $\mathbb R^4$. This fact is the starting point of the proof of the $4$-dimensional Poincar\'e conjecture. Being cell-like often does not depend on the ambient space. To understand this, we have to introduce a new notion. \begin{defn}[Absolute nbhd retract] A metric space $Y$ is an \emph{absolute nbhd retract} (or \emph{ANR} for short) if for an arbitrary metric space $X$ and its closed subset $A$ every map $f$ from $A$ to $Y$ extends to a nbhd of $A$. In other words, the nbhd $U$ and the dashed arrow exist in the following diagram and make the diagram commutative. \begin{center} \begin{graph}(6,4.5) \graphlinecolour{1}\grapharrowtype{2} \textnode {A}(0.5,1.5){$A$} \textnode {X}(5.5, 1.5){$X$} \textnode {U}(3, 0){$U$} \textnode {Y}(5.5, 4){$Y$} \diredge {A}{Y}[\graphlinecolour{0}] \diredge {A}{U}[\graphlinecolour{0}] \diredge {U}{X}[\graphlinecolour{0}] \diredge {A}{X}[\graphlinecolour{0}] \diredge {U}{Y}[\graphlinecolour{0}\graphlinedash{4}] \freetext (3,3.2){$f$} \freetext (3,1.2){$\subseteq$} \freetext (1.2, 0.6){$\subseteq$} \freetext (4.8, 0.6){$\subseteq$} \end{graph} \end{center} \end{defn} This is equivalent to saying that for every metric space $Z$ and embedding $i \co Y \to Z$ such that $i(Y)$ is closed there is a nbhd $U$ of $i(Y)$ in $Z$ which retracts onto $i(Y)$, that is $r|_{i(Y)} = \mathrm {id}_{i(Y)}$ for some map $r \co U \to i(Y)$. It is a fact that every manifold is an ANR. The property of cell-likeness is independent of the ambient space as long as the ambient space is an ANR, as the following statement shows. \begin{prop} If $C \subset X$ is a compact cell-like set in a metric space $X$, then the embedded image of $C$ in an arbitrary ANR is also cell-like. \end{prop} \begin{proof} Suppose $e \co C \to Y$ is an embedding into an ANR $Y$. We have to show that $e(C)$ is cell-like. Let $U$ be a nbhd of $e(C)$. Since $Y$ is an ANR, there is a nbhd $\tilde V$ of $C$ in $X$ such that $e$ extends to an $\tilde e \co \tilde V \to Y$.
Let $V \subset X$ be the open set $\tilde V \cap \tilde e^{-1}(U)$, it is a nbhd of $C$. There is a nbhd $W$ of $C$ such that $C \subset W \subset V$ and there is a homotopy of the inclusion $W \subset V$ to the constant in $V$ since $C$ is cell-like; denote this homotopy by $\varphi \co W \times [0,1] \to V$. Then $\varphi|_{C \times [0,1]}$ is a homotopy of the inclusion $C \subset V$ to the constant. Take $$\tilde e \circ \varphi|_{C \times [0,1]} \circ (e^{-1}|_{e(C)} \times \mathrm {id}_{[0,1]}),$$ this is a homotopy of the inclusion $e(C) \subset U$ to the constant in $U$. The space $e(C) \times [0,1]$ is compact in $Y \times [0,1]$ and the homotopy maps it into $Y$, which is ANR. This implies that there is a nbhd $\tilde U \subset U$ of $e(C)$ such that the inclusion $\tilde U \subset U$ is homotopic to constant in $U$. \end{proof} For example, this shows that a compact and contractible metric space is cell-like if we embed it into any ANR. In practice, we do not consider cell-like sets as subsets in some ambient space but rather as compact metric spaces which are cell-like if we embed them into an arbitrary ANR. It is clear that every cellular set $C$ is cell-like, because every nbhd $U$ of $C$ contains some ball $B_n \supset C$, and contracting $B_n$ inside itself shows that the inclusion $\mathrm{int} \thinspace B_n \to U$ is homotopic to a constant map. Also, we have seen that the Whitehead continuum is cell-like but not cellular. In order to compare cell-like and cellular sets we introduce the notion of cellularity criterion. \begin{defn}[Cellularity criterion] A subset $Y \subset X$ satisfies the \emph{cellularity criterion} if for every nbhd $U$ of $Y$ there is a nbhd $V$ of $Y$ such that $V \subset U$ and every loop in $V - Y$ is null-homotopic in $U - Y$. \end{defn} The cellularity criterion and being cellular measure how wildly a subset is embedded into a space. The next theorem compares cell-like and cellular sets in a PL manifold. We omit its difficult proof here. \begin{thm} Let $C$ be a cell-like subset of a PL $n$-dimensional manifold, where $n \geq 4$. Then $C$ is cellular if and only if $C$ satisfies the cellularity criterion. \end{thm} In dimension $2$ we have a simpler statement: \begin{thm} Every cell-like subset in a $2$-dimensional manifold $X$ is cellular. \end{thm} \begin{proof} At first suppose $X = \mathbb R^2$ and $C \subset \mathbb R^2$ is a cell-like set. Let $U$ be a bounded nbhd of $C$ and let $V \subset U$ be a nbhd of $C$ such that the inclusion $V \to U$ is homotopic to constant. Choose another nbhd $W \subset V$ of $C$ as well such that $\mathrm{cl} \thinspace W \subset V$. Take a compact smooth $2$-dimensional manifold $H \subset V$ such that $C \subset \mathrm{int} H$, $\partial H \subset V - \mathrm{cl} \thinspace W$ and $\mathrm{int} H$ is connected. Such an $H$ can be obtained by taking a Morse function $f \co V \to [0,1]$ which maps the nbhd $W$ of $C$ to $0$ and a small nbhd of $\mathbb R^2 - V$ (intersected with $V$) to $1$. Then the preimage of a regular value $r$ close to $1/2$ is a smooth $1$-dimensional submanifold of $\mathbb R^2$ and the preimage of $(-\infty, r]$ is a compact subset containing $W$ and $C$; denote this $f^{-1}((-\infty, r])$ by $H$. Then $H$ is a compact smooth $2$-dimensional submanifold of $\mathbb R^2$, see Figure~\ref{twomanifoldconstruct}. Take its connected component (this is also a path-connected component because $H$ is a manifold) which contains $C$ and denote this by $H$ as well. \begin{figure}[h!]
\begin{center} \epsfig{file=twomanifoldconstruct.eps, height=8cm} \put(-6, 3.7){$C$} \put(-0.6, 3.1){$U$} \put(-2.4, 5.2){$V$} \put(-7.8, 3.3){$W$} \put(-6.6, 5.7){${\textcolor[rgb]{0,0,1}{H_2}}$} \put(-9, 5.7){${\textcolor[rgb]{0,0,1}{H_1}}$} \end{center} \caption{The compact manifold $H = H_1 \cup H_2$. Its component $H_2$ contains $C$. Since $H_2 - C$ is path-connected, there is a path (dashed in the figure) in $H_2$ connecting two different components of the boundary of $H_2$. } \label{twomanifoldconstruct} \end{figure} We show that $H - C$ is connected. For this consider the commutative diagram \begin{equation*} \begin{CD} H_1 ( H; \mathbb Z_2 ) @>>> H_1 ( H, H - C; \mathbb Z_2 ) @>>> H_0 ( H - C; \mathbb Z_2 ) @>>> H_0 ( H ) @>>> 0 \\ @VVV @VVV @VVV @VVV \\ H_1 ( \mathbb R^2 ; \mathbb Z_2 ) @>>> H_1 ( \mathbb R^2, \mathbb R^2 - C; \mathbb Z_2 ) @>>> H_0 ( \mathbb R^2 - C; \mathbb Z_2 ) @> i_*>> H_0 ( \mathbb R^2 ) @>>> 0 \end{CD} \end{equation*} coming from the long exact sequences and the inclusion $( H, H - C ) \subset ( \mathbb R^2, \mathbb R^2 - C )$. This is just the diagram \begin{equation*} \begin{CD} H_1 ( H; \mathbb Z_2 ) @>>> H_1 ( H, H - C; \mathbb Z_2 ) @>>> H_0 ( H - C; \mathbb Z_2 ) @>>> \mathbb Z_2 @>>> 0 \\ @VVV @VV \cong V @VVV @VV \cong V \\ 0 @>>> H_1 ( \mathbb R^2, \mathbb R^2 - C; \mathbb Z_2 ) @>>> H_0 ( \mathbb R^2 - C; \mathbb Z_2 ) @> i_* >> \mathbb Z_2 @>>> 0 \end{CD} \end{equation*} If the group $H_0 ( \mathbb R^2 - C; \mathbb Z_2 )$ is $\mathbb Z_2$, i.e.\ the manifold $\mathbb R^2 - C$ is connected, then $i_*$ is an isomorphism, so by exactness of the bottom row $H_1 ( \mathbb R^2, \mathbb R^2 - C; \mathbb Z_2 ) = 0$, hence by the excision isomorphism $H_1 ( H, H - C; \mathbb Z_2 ) = 0$, and exactness of the top row implies that $H_0 ( H - C; \mathbb Z_2 ) \cong \mathbb Z_2$ so $H - C$ is connected. To show that $\mathbb R^2 - C$ is connected, we apply \cite[Theorem~VI.5, page~86]{HW41}, which implies that if $C$ is a closed subset of a space $D$ and $f, g$ are homotopic maps of $C$ into $S^1$ such that $f$ extends to $D$, then $g$ extends to $D$ and the extensions are homotopic. Suppose the open set $\mathbb R^2 - C$ is not connected; then it is the disjoint union of two non-empty open sets $A$ and $B$. At least one of these is bounded because for large enough $s$ the set $\mathbb R^2 - [-s, s]^2$ is disjoint from $C$ and it is connected hence it is in $A$ or $B$ but then $[-s, s]^2$ contains $B$ or $A$, respectively. Suppose $A$ is bounded, $p \in A$ and $q \in B$. For a subset $S \subset \mathbb R^2$ and point $x \in \mathbb R^2$ denote by $\pi_{S,x} \co S - \{x\} \to S^1$ the radial projection of $S - \{x\}$ to the circle $S^1$ of radius $1$ centered at $x$. Then $\pi_{C,q}$ extends to $\mathbb R^2 - \{q\}$ so also to $A \cup C$ but $\pi_{C,p}$ does not extend to $A \cup C$ because such an extension would extend, by radial projection, to a much larger disk $P$ centered at $p$ as well, and then a retraction of $P$ onto its boundary (if we identify it with the target circle of $\pi_{C,p}$) would exist. Consequently $\pi_{C,q}$ and $\pi_{C,p}$ are not homotopic and so at least one of them is not homotopic to constant. This means if $\mathbb R^2 - C$ is not connected, then there is a map $C \to S^1$ which is not homotopic to constant. But since the inclusion $V \subset U$ and then also $C \subset U$ are homotopic to constant, we get that $\mathbb R^2 - C$ is connected. Finally, we get that $H - C$ is a path-connected smooth $2$-dimensional manifold with boundary. Hence if the number of components of $\partial H$ is larger than one, then there exists a smooth curve transversal to $\partial H$, disjoint from $C$ and connecting different components of $\partial H$.
We can cut $H$ along this curve, and by repeating this process we end up with $\partial H$ being a single circle. By the Jordan--Schoenflies theorem $H$ is then a compact $2$-dimensional disk. In this way we get $$C \subset W \subset \mathrm { int} H \subset H \subset V \subset U.$$ Since in $\mathbb R^2$ every compact set is the intersection of a decreasing sequence of open sets, we have $C = \cap_{n = 1}^{\infty} U_n$ for some open sets $U_1 \supset U_2 \supset \cdots \supset U_n \supset \cdots$. We can also assume that for each $n$ we have $\mathrm{cl}\thinspace U_{n+1} \subset U_n$. We obtain countably many compact $2$-dimensional disks $H_1, H_2, \ldots$ by the previous construction, which satisfy $$C \subset U_{n+1} \subset \mathrm { int} H_n \subset H_n \subset V \subset U_n.$$ Hence $C = \cap_{n = 1}^{\infty} H_n$ so $C$ is cellular. In the case where $X$ is an arbitrary $2$-dimensional manifold, since $C$ is cell-like, there exists a nbhd of $C$ whose inclusion is homotopic to constant, so $C$ is contained in a simply-connected $2$-dimensional manifold nbhd, which is homeomorphic to $\mathbb R^2$. Hence a similar argument gives that $C$ is cellular. \end{proof} \begin{prop} If $C$ is cell-like in a smooth $n$-dimensional manifold $X$, where $n \geq 3$, then $C \times \{ 0 \}$ is cellular in $X \times \mathbb R^3$. \end{prop} \begin{proof} It is easy to see that $C \times \{ 0 \}$ is cell-like in $X \times \mathbb R^3$, so it is enough to show that $C \times \{ 0 \}$ satisfies the cellularity criterion. Let $U$ be a nbhd of $C \times \{ 0 \}$ in $X \times \mathbb R^3$. It is obvious that there is a nbhd $V \subset U$ of $C \times \{ 0 \}$ such that every loop $\gamma \co [0,1] \to V$ is null-homotopic in $U$. Let $\gamma$ be an arbitrary loop in $V - C \times \{ 0\}$; it is homotopic to a smooth loop $\tilde \gamma$ in $V - C \times \{ 0\}$ by a homotopy $H$. A homotopy of $\tilde \gamma$ to constant can be approximated by a smooth map $\tilde H \co D^2 \to U$, where $\tilde H|_{\partial D^2} = \tilde \gamma$. In the subspace $X \times \{ 0 \}$ of $X \times \mathbb R^3$ let $W$ be a nbhd of $C \times \{ 0 \}$ which is disjoint from the boundary curve $\tilde H|_{\partial D^2} = \tilde \gamma$. Perturb $\tilde H$ keeping $\tilde H|_{\partial D^2}$ fixed to get a map transversal to the $n$-dimensional manifold $W$ in $U$; since $2 + n < n + 3$, transversality here means that the perturbed disk is disjoint from $W \supset C \times \{ 0 \}$, hence $\gamma$ is null-homotopic in $U - C \times \{ 0 \}$. So the cellularity criterion holds for $C \times \{ 0 \}$. \end{proof} \subsection{Antoine's necklace} Take the defining sequence where \begin{itemize} \item $C_1$ is a solid torus, \item $C_2$ is a finite number of solid tori embedded in $C_1$ in such a way that each torus is unknotted and linked to its neighbour as in a usual chain, \item $C_3$ is again a finite number of similarly linked solid tori, \end{itemize} \ldots, etc., see Figure~\ref{antoine}. \begin{figure}[h!] \begin{center} \epsfig{file=antoine.eps, height=11cm} \end{center} \caption{A sketch of the defining sequence of Antoine's necklace. We can see the solid torus $C_1$, the linked tori $C_2$ and some linked tori from the collection $C_3$, etc. The number of components of $C_{n+1}$ in $C_n$ is large enough to make the diameters of the tori converge to $0$.} \label{antoine} \end{figure} We always consider at least three tori in each $C_n$. We require that the maximal diameter of the tori in $C_n$ converges to $0$ as $n \to \infty$. The set $\cap_{n = 1}^{\infty} C_n$ is called Antoine's necklace and denoted by $\mathcal A$.
It is easy to see that each of its components is cell-like. Unlike the Whitehead continuum, the components of $\mathcal A$ are cellular because every component of $C_{n+1}$ is inside a ball in $C_n$. Recall that the Cantor set is the topological space $$D_1 \times D_2 \times \cdots \times D_n \times \cdots$$ with the product topology, where every space $D_n$ is a finite discrete metric space with $|D_n| \geq 2$. \begin{prop} The space $\cap_{n = 1}^{\infty} C_n$ is homeomorphic to the Cantor set. \end{prop} \begin{proof} Denote the number of tori embedded in $C_1$ by $m_1$; these tori are $$C_{2, 1}, \ldots, C_{2, m_1}$$ whose disjoint union is $C_2$. For $1 \leq i_1 \leq m_1$ take the $i_1$-th torus $C_{2, i_1}$ and denote the number of tori embedded into it by $m_{2, i_1}$; these tori are $$C_{3, i_1, 1}, \ldots, C_{3, i_1, m_{2, i_1}}$$ whose disjoint union is $C_3$. Again for $1 \leq i_2 \leq m_{2, i_1}$ take the $i_2$-th torus $C_{3, i_1, i_2}$ and denote the number of tori embedded into it by $m_{3, i_1, i_2}$; these tori are $$C_{4, i_1, i_2, 1}, \ldots, C_{4, i_1, i_2, m_{3, i_1, i_2}}$$ whose disjoint union is $C_4$. In general in the $n$-th step for $1 \leq i_n \leq m_{n, i_1, \ldots, i_{n-1}}$ take the $i_n$-th torus $C_{n+1, i_1, \ldots, i_n}$ and denote the number of tori embedded into it by $m_{n+1, i_1, \ldots, i_n}$; these tori are $$C_{n+2, i_1, \ldots, i_n, 1}, \ldots, C_{n+2, i_1, \ldots, i_n, m_{n+1, i_1, \ldots, i_n}}$$ whose disjoint union is $C_{n+2}$. Now we construct a Cantor set $\mathcal C$ in the interval $[0,1]$. Divide $[0,1]$ into $2m_{1} -1$ closed intervals $$I_{2, 1}, \ldots, I_{2, 2m_{1} -1} \subset [0,1]$$ of equal length with disjoint interiors; the odd-indexed intervals correspond to the tori $C_{2,1}, \ldots, C_{2,m_1}$. Then for $1 \leq i_1 \leq m_1$ divide the interval $I_{2, 2i_1-1}$ into $2m_{2, i_1} -1$ closed intervals $$I_{3, 2i_1-1, 1}, \ldots, I_{3, 2i_1-1, 2m_{2, i_1}-1}$$ of equal length. Then for $1 \leq i_2 \leq m_{2, i_1}$ divide the interval $I_{3, 2i_1-1, 2i_2-1}$ into $2m_{3, i_1, i_2} -1$ closed intervals $$I_{4, 2i_1-1, 2i_2-1, 1}, \ldots, I_{4, 2i_1-1, 2i_2-1, 2m_{3, i_1, i_2}-1}$$ of equal length. In the $n$-th step divide each interval $I_{n+1, 2i_1-1, \ldots, 2i_n-1}$ into the closed intervals $$I_{n+2, 2i_1-1, \ldots, 2i_n-1, 1}, \ldots, I_{n+2, 2i_1-1, \ldots, 2i_n-1, 2m_{n+1, i_1, \ldots, i_n}-1}$$ of equal length and so on. So all the intervals $I_{n+1, 2i_1-1, \ldots, 2i_n-1 }$ have length $$\frac{1}{(2m_{1} -1)\cdots(2m_{n, i_1, \ldots, i_{n-1}}-1)}.$$ Then let $$\mathcal C = \bigcap_{n=1}^{\infty} \bigcup_{ {\begin{smallmatrix} 1 \leq i_1 \leq m_1 \\ 1 \leq i_2 \leq m_{2, i_1} \\ \cdots \\ 1 \leq i_n \leq m_{n, i_1, \ldots, i_{n-1}} \end{smallmatrix}} } I_{n+1, 2i_1-1, \ldots, 2i_n-1 }.$$ Assign to a point $x \in \cap_{n = 1}^{\infty} C_n$ the point $$\bigcap_{n=1}^{\infty} I_{n+1, 2i_1(x)-1, \ldots, 2i_n(x)-1 },$$ where the indices are defined by $x \in C_{n+1, i_1(x), \ldots, i_n(x)}$ for every $n$; this is the intersection of the closed intervals corresponding to the tori containing $x$. This defines a map $$f \co \cap_{n = 1}^{\infty} C_n \to \mathcal C,$$ which is clearly surjective. It is injective as well because if $x \neq x'$, then for large $n$ they are in different components of $C_n$ (as the diameters of the tori converge to $0$), so they are mapped into different intervals as well. The map $f$ is continuous because if $x$ and $x'$ lie in the same component of $C_n$ for all $n$ up to some large enough index, then they are mapped into the same intervals up to a large index, so $f(x)$ and $f(x')$ are close enough. Then $f$ is a homeomorphism since it is a continuous bijection from a compact space to a $T_2$ space. \end{proof} Of course the components of $\mathcal A$ are points so the decomposition space is obviously $\mathbb R^3$.
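For example, if each torus contains exactly two smaller tori, that is, all the numbers $m_{n, i_1, \ldots, i_{n-1}}$ are equal to $2$ (as happens for the Bing decomposition below), then in the proof above every interval is divided into $2 \cdot 2 - 1 = 3$ equal parts and the two odd-indexed ones are kept, so the intervals of the $n$-th stage have length $$\frac{1}{(2 \cdot 2 - 1)^n} = \frac{1}{3^n}$$ and $\mathcal C$ is the standard middle-thirds Cantor set in $[0,1]$.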
An important property of $\mathcal A$ is that it is \emph{wild}, i.e.\ there is no self-homeomorphism of $\mathbb R^3$ mapping $\mathcal A$ onto the standard Cantor set in a line segment. To prove this, we study the local behaviour of the complement of $\mathcal A$. \begin{defn} Let $k \geq 0$. A closed subset $A$ of a space $X$ is locally $k$-co-connected ($k$-LCC for short) if for every point $a \in A$ and for every nbhd $U$ of $a$ in $X$ there is a nbhd $V \subset U$ of $a$ in $X$ such that if $\varphi \co \partial D^{k+1} \to V-A$ is a map of the $k$-sphere, then $\varphi$ extends to a map of $D^{k+1}$ into $U- A$. \end{defn} \begin{prop} The set $\mathcal A$ in $\mathbb R^3$ is not $1$-LCC. \end{prop} \begin{proof}[Sketch of the proof] At first we show that if $\alpha \co S^1 \to C_1$ is a meridian of the torus $C_1$, then every smooth embedding $\tilde \alpha \co D^2 \to \mathbb R^3$ extending $\alpha$ is such that $\tilde \alpha( D^2)$ intersects $\mathcal A$. If this were not true, then $\tilde \alpha( D^2)$ would intersect only finitely many of the stages $C_1, \ldots, C_n$ and it would be possible to perturb $\tilde \alpha$ to get a smooth embedding transversal to the boundaries of the tori in these stages. Then it is possible to show that there is a disk $D_1 \subset D^2$ such that $\tilde \alpha (\partial D_1)$ is a meridian of some torus $\partial C_{2, i_1}$. Inductively, $\tilde \alpha( D^2)$ has to intersect some torus $\partial C_{m, i_1, \ldots, i_{m-1}}$ for arbitrarily large $m > n$, which is a contradiction. Suppose that $\mathcal A$ is $1$-LCC. Let $\beta \co D^2 \to \mathbb R^3$ be a smooth embedding such that $\beta (\partial D^2)$ is a meridian of $C_1$. Cover $\beta( D^2 ) \cap \mathcal A$ by open sets $\{ U_{\gamma} \}_{\gamma \in \Gamma}$ around each of its points; then there is a covering $\{ V_{\gamma} \}_{\gamma \in \Gamma}$ such that for all $\gamma \in \Gamma$ we have $V_{\gamma} \subset U_{\gamma}$ and each map $\partial D^2 \to (\mathbb R^3 - \mathcal A) \cap V_{\gamma}$ can be extended to a map $D^2 \to (\mathbb R^3 - \mathcal A) \cap U_{\gamma}$. We can also suppose that $\cup_{\gamma} U_{\gamma}$ is disjoint from $\beta(\partial D^2)$. By the Lebesgue lemma there is a subdivision of $D^2$ into finitely many small disks with disjoint interiors such that each of their boundary circles is mapped by $\beta$ into some $V_{\gamma}$. After a small perturbation we can suppose that each of the $\beta$-images of these boundary circles is disjoint from $C_n$ for some common large $n$ while still lying in some $V_{\gamma}$. Now change $\beta$ on each of the small disks to get a map into $(\mathbb R^3 - \mathcal A) \cap U_{\gamma}$. By Dehn's lemma the small disks can even be embedded into $(\mathbb R^3 - \mathcal A) \cap U_{\gamma}$. In this way we get an embedded disk, with boundary circle the meridian $\beta(\partial D^2)$, which is disjoint from $\mathcal A$. This contradicts the fact that every embedded disk $D^2 \subset \mathbb R^3$ with boundary circle being a meridian of $C_1$ intersects $\mathcal A$. \end{proof} The standard Cantor set $C \subset \mathbb R\times \{ 0 \} \times \{ 0 \} \subset \mathbb R^3$ is $1$-LCC: a small loop in its complement $\mathbb R^3 - C$ can be approximated by a small smooth loop in $\mathbb R^3 - C$ transversal to, hence disjoint from, the line $\mathbb R \times \{ 0 \} \times \{ 0 \}$ (transversality to a $1$-dimensional submanifold of $\mathbb R^3$ means empty intersection since $1 + 1 < 3$).
Then deform this loop by compressing it in a direction parallel to $\mathbb R\times \{ 0 \} \times \{ 0 \}$ until the loop sits in the plane $\{ x \} \times \mathbb R^2$ for some number $x \in \mathbb R$ with $\{ x \} \times \mathbb R^2$ disjoint from $C$. After this the loop can easily be contracted inside this plane to a point in $\mathbb R^3 - C$. Since being $1$-LCC is preserved by self-homeomorphisms of $\mathbb R^3$, this implies that Antoine's necklace is a wild Cantor set in $\mathbb R^3$. \subsection{Bing decomposition} If in the construction of Antoine's necklace there are always two torus components of $C_{n+1}$ in each component of $C_n$, then we call the arising decomposition a Bing decomposition. A priori there could be many different Bing decompositions depending on how the solid tori are embedded into each other. It is not obvious that the components of $C_n$ can be embedded in such a way that $\cap_{n = 1}^{\infty} C_n$ is a Cantor set, which would follow if the maximal diameter of the tori in $C_n$ converged to $0$. A random defining sequence can be seen in Figure~\ref{bing_random}. \begin{figure}[h!] \begin{center} \epsfig{file=bing_random.eps, height=11cm} \end{center} \caption{A sketch of a defining sequence of the Bing decomposition. We can see the torus $C_1$, the two torus components of $C_2$ and the four torus components of $C_3$. The maximal diameter of tori in $C_n$ does not necessarily converge to $0$.} \label{bing_random} \end{figure} Now we construct a defining sequence where the maximal diameter of the tori in $C_n$ converges to $0$. For this, consider the following way to define a finite sequence of finite sequences of embeddings: $$D_{0},$$ $$D_{0} \supset D_{2,1},$$ $$D_{0} \supset D_{3,1} \supset D_{3,2},$$ $$D_{0} \supset D_{4, 1} \supset \cdots \supset D_{4,3},$$ \begin{figure}[h!] \begin{center} \epsfig{file=bing_deform.eps, height=17.1cm} \put(-12.5, 16.1){$D_{0}$} \put(0, 16.1){$D_{2,1}$} \put(0, 13.8){$D_{3,2}$} \put(0.5, 10.6){$D_{4,3}$} \put(-0.5, 2.1){$D_{5,4}$} \end{center} \caption{A sketch of constructing the tori $D_{n+1, n}$. Instead of the solid tori we just draw their center circles. We always take the previously obtained linked tori $D_{n, n-1}$, squeeze them to become ``flat'' as the figure shows, then curve them a little and link them with another copy at the two ``endings''. Hence we get $D_{n+1, n}$. The sequence of embeddings $D_{0} \supset D_{n+1, 1} \supset \cdots \supset D_{n+1,n}$ can be traced by checking all the smaller linkings.} \label{bing_deform} \end{figure} \ldots, etc., where $D_{n, 0} = D_{0}$ is a solid torus, $D_{n,k}$ is a disjoint union of $2^{k}$ copies of solid tori and the components of $D_{n,k}$ are pairwise embedded in the components of $D_{n,k-1}$; moreover, these pairs are linked just like in the defining sequence of a Bing decomposition, for further subtleties see Figure~\ref{bing_deform}. Arriving at the tori $D_{n+1,n}$ and assuming that their meridional size is small enough, we obtain a regular $2n$-gon-like arrangement of $2^{n}$ copies of solid tori as Figure~\ref{bing_deform} shows. Two conditions are satisfied: the meridional size of all the tori is small and an ``edge'' of this $2n$-gon is also small. This means that if this $D_{n+1,n}$ is embedded into a torus (as the figure suggests) whose meridional size is small, then the maximal diameter of the torus components of $D_{n+1,n}$ is small if $n$ is large. \begin{prop} There is a defining sequence $C_1, \ldots, C_n, \ldots$ of a Bing decomposition where the maximal diameter of the tori in $C_n$ converges to $0$.
Hence $\cap_{n=1}^{\infty} C_n$ is homeomorphic to the Cantor set. \end{prop} \begin{proof} Let $\varepsilon_n > 0$ be a sequence whose limit is $0$. Let $n_1$ be so large that in $C_{n_1}$ in a defining sequence the meridional size of the tori is smaller than $\varepsilon_1$. Let $m_1$ be so large that we can embed $D_{m_1+1,m_1}$ into the torus components of $C_{n_1}$ so that the maximal diameter of the tori in the obtained $C_{n_1 + m_1}$ is smaller than $\varepsilon_1$. Then let $n_2 > n_1 + m_1$ be so large that in a continuation of the defining sequence in $C_{n_2}$ the meridional size of the tori is smaller than $\varepsilon_2$. Let $m_2$ be so large that we can embed $D_{m_2+1,m_2}$ into the torus components of $C_{n_2}$ so that the maximal diameter of the tori in the obtained $C_{n_2 + m_2}$ is smaller than $\varepsilon_2$. And so on. It is easy to see that the maximal diameter of the tori converges to $0$. \end{proof} This implies that the decomposition space of this decomposition is $\mathbb R^3$. For an arbitrary defining sequence the space $\cap_{n=1}^{\infty} C_n$ may not be a Cantor set; however, the decomposition space can still be homeomorphic to the ambient space $\mathbb R^3$. It is a very important observation that the embedding of the tori in $D_{n+1,n}$ can be obtained by an isotopy of $C_1 \subset \cdots \subset C_{n+1}$ in any defining sequence, see \cite{Bi52}. By such an isotopy, for a given defining sequence we can achieve something similar to the previous statement: if $n$ is large enough, then the meridional size of the torus components in $C_n$ is smaller than a given $\varepsilon > 0$. Then apply the required isotopy for $C_{n+1}, \ldots, C_{n+k}$ for some large $k$ to make the maximal diameter of the torus components of $C_{n+k}$ smaller than $\varepsilon$. Note that since $n$ is large enough and the whole isotopy happens inside $C_n$, it happens inside an arbitrarily small nbhd of $\cap_{n=1}^{\infty} C_n$. This means that for every $\varepsilon > 0$ there is a self-homeomorphism $h$ of $\mathbb R^3$ with support in $C_1$ such that ${\mathrm{diam}}\thinspace h( \mathcal D ) < \varepsilon$ for every decomposition element $\mathcal D \subset \cap_{n=1}^{\infty} C_n$ and also $\pi \circ h( \mathcal D)$ stays in the $\varepsilon$-nbhd of $\pi ( \mathcal D)$ for some metric on the decomposition space. This condition is called the shrinkability criterion and it implies that the decomposition space is homeomorphic to the ambient space $\mathbb R^3$ as we will see in the next section. \bigskip{\Large{\section{Shrinking}}} \bigskip\large Let $X$ be a topological space and $\mathcal D$ a decomposition of $X$. An open cover $\mathcal U$ of $X$ is called $\mathcal D$-saturated if every $U \in \mathcal U$ is a union of decomposition elements. \begin{defn}[Bing shrinkability criterion] Let $\mathcal D$ be a usc decomposition of the space $X$. We say $\mathcal D$ is \emph{shrinkable} if for every open cover $\mathcal V$ and $\mathcal D$-saturated open cover $\mathcal U$ there is a self-homeomorphism $h$ of $X$ such that for every $D \in \mathcal D$ the set $h(D)$ is in some $V \in \mathcal V$ and for every $x \in X$ there is a $U \in \mathcal U$ such that $x, h(x) \in U$. In other words, $h$ shrinks the elements of $\mathcal D$ to arbitrarily small sets and $h$ is $\mathcal U$-close to the identity. We say $\mathcal D$ is \emph{strongly shrinkable} if for every open set $W$ containing all the non-degenerate elements of $\mathcal D$ the decomposition $\mathcal D$ is shrinkable so that the support of $h$ is in $W$. \end{defn} In other words, $\mathcal D$ is shrinkable if its elements can be made small simultaneously in such a way that the shrinking process does not move the points of $X$ too far when distances are measured in the decomposition space.
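As a simple illustration, consider the decomposition $\mathcal D$ of $\mathbb R^2$ whose only non-degenerate element is the segment $D = [0,1] \times \{ 0 \}$. Given an open cover $\mathcal V$ and a $\mathcal D$-saturated open cover $\mathcal U$, note that a saturated open set meeting $D$ contains the whole of $D$, so we can fix a $U \in \mathcal U$ with $D \subset U$ and a $V \in \mathcal V$ with $V \cap D \neq \emptyset$. A self-homeomorphism $h$ of $\mathbb R^2$ supported in a small nbhd $N$ of $D$ with $N \subset U$ can squeeze the segment along itself into $V$. Every point is either fixed by $h$ or moved inside $N \subset U$, so $h$ is $\mathcal U$-close to the identity and $h(D)$ lies in an element of $\mathcal V$; hence $\mathcal D$ is shrinkable. This is a special case of the countable decompositions into flat arcs appearing in Theorem~\ref{shrinking_thm} below.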
If $X$ has a shrinkable decomposition, then we expect that the local structure of $X$ is similar to the structure of the nbhds of the decomposition elements. \begin{prop} Let $X$ be a regular space and let $\mathcal D$ be a shrinkable usc decomposition of $X$. If every $x \in X$ has arbitrarily small nbhds satisfying a fixed topological property, then every $D \in \mathcal D$ has arbitrarily small nbhds satisfying the same property. \end{prop} \begin{proof} Let $W$ be an arbitrary nbhd of an element $D \in \mathcal D$. Then there is a saturated nbhd $\tilde U_1$ of $D$ such that $\tilde U_1 \subset W$. Let $U_1$ denote $\pi(\tilde U_1)$. Since $X_{\mathcal D}$ is regular, there are open sets $U_2$ and $U_3$ such that $$\pi(D) \subset U_3 \subset \mathrm{cl}\thinspace U_3 \subset U_2 \subset \mathrm{cl}\thinspace U_2 \subset U_1.$$ Then take the sets $$ \pi^{-1}(U_3),\mbox{\ } \pi^{-1}( U_2) - D,\mbox{\ } \pi^{-1}( U_1 - \mathrm{cl}\thinspace U_3 ),\mbox{\ and\ } X - \pi^{-1}(\mathrm{cl}\thinspace U_2 ),$$ see Figure~\ref{localprop}. These yield a $\mathcal D$-saturated open cover $\mathcal U$ of $X$. Let $\mathcal V$ be an open cover of $X$ which refines $\mathcal U$ and consists of open sets with our fixed property. Since $\mathcal D$ is shrinkable, we have a homeomorphism $h \co X \to X$ such that $h(D) \subset V$ for some $V \in \mathcal V$ and $h$ is $\mathcal U$-close to the identity. Then $D \subset h^{-1}(V)$ so it is enough to show that $$h^{-1}(V) \subset W.$$ Suppose there exists some $x \in h^{-1}(V) - W$; then $x \in h^{-1}(V) - \pi^{-1}(U_1)$ since $\pi^{-1}(U_1) \subset W$. Hence among the sets in $\mathcal U$ only $X - \pi^{-1}(\mathrm{cl}\thinspace U_2 )$ contains $x$, so by $\mathcal U$-closeness the point $h(x) \in V$ has to be in $X - \pi^{-1}(\mathrm{cl}\thinspace U_2 )$ as well. This implies that $V \subset X - \pi^{-1} ( \mathrm{cl}\thinspace U_3 )$ because $V$ is a subset of some element of $\mathcal U$, and the only elements of $\mathcal U$ meeting $X - \pi^{-1}(\mathrm{cl}\thinspace U_2 )$ are $\pi^{-1}( U_1 - \mathrm{cl}\thinspace U_3 )$ and $X - \pi^{-1}(\mathrm{cl}\thinspace U_2 )$, which are both contained in $X - \pi^{-1} ( \mathrm{cl}\thinspace U_3 )$. On the other hand, the only element of $\mathcal U$ containing points of $D$ is $\pi^{-1}(U_3)$, so $\mathcal U$-closeness also gives $h(D) \subset \pi^{-1}(U_3)$. This means that $h(D) \subset V \subset X - \pi^{-1} ( \mathrm{cl}\thinspace U_3 )$ cannot hold, a contradiction; so $h^{-1}(V) \subset W$. \end{proof} \begin{figure}[h!] \begin{center} \epsfig{file=localprop.eps, height=6cm} \put(-1, 5.2){$X$} \put(-5.5, 3){$D$} \end{center} \caption{The set $D$ and the $\pi$-preimages of the sets $U_3 \subset U_2 \subset U_1$ in $X$. In the figure the sets $\pi^{-1}(U_3)$ and $\pi^{-1}( U_2) - D$ are shaded. } \label{localprop} \end{figure} For example, every decomposition element of a shrinkable decomposition of a manifold is cellular. It is often not too difficult to check whether a decomposition of a space $X$ is shrinkable. A corollary of shrinkability is that the decomposition space is homeomorphic to $X$. This is often applied when we want to construct embedded manifolds and the construction uses mismatched pieces, which we eliminate by taking them as the decomposition elements and then looking at the decomposition space. \begin{defn}[Near-homeomorphism, approximating by homeomorphism] Let $X$ and $Y$ be topological spaces.
A surjective map $f \co X \to Y$ is a \emph{near-homeomorphism} if for every open covering $\mathcal W$ of $Y$ there is a homeomorphism $h \co X \to Y$ such that for every $x \in X$ the points $f(x)$ and $h(x)$ are in some $W \in \mathcal W$, in other words $h$ is \emph{$\mathcal W$-close} to $f$. \end{defn} If $(Y, \varrho)$ is a metric space, then $f$ being a near-homeomorphism implies that $f$ can be approximated by homeomorphisms in the possibly infinite-valued metric $d(f, g) = \sup_{x \in X} \varrho(f(x), g(x))$. Notice that if $f \co X \to Y$ is a near-homeomorphism, then $X$ and $Y$ are actually homeomorphic. The main result is that a usc decomposition yields a decomposition space homeomorphic to the original space if the decomposition is shrinkable. This is applied in major $4$-dimensional results: in the disk embedding theorem and in the proof of the $4$-dimensional topological Poincar\'e conjecture \cite{Fr82, BKKPR}. It is extensively applied in constructing approximations of manifold embeddings in dimension $\geq 5$; see \cite{AC79} and Edwards's cell-like approximation theorem. For an open cover $\mathcal W$ of a space $X$ and a subset $A \subset X$ let $St(A, \mathcal W)$ denote the subset $$ \bigcup \{ W \in \mathcal W : W \cap A \neq \emptyset \}. $$ This is called the \emph{star} of $A$ and it is a nbhd of $A$. Of course if $A \subset B$, then $St(A, \mathcal W) \subset St(B, \mathcal W)$. If $\mathcal W'$ is an open cover which is a refinement of the open cover $\mathcal W$, then obviously $St(A, \mathcal W') \subset St(A, \mathcal W)$. We will often use that if the covering $\mathcal W'$ is a \emph{star-refinement} of the covering $\mathcal W$, that is, the collection $$\left\{ St(W_{\alpha}, \mathcal W') : W_{\alpha} \in \mathcal W' \right\}$$ of stars of elements of $\mathcal W'$ is a refinement of $\mathcal W$, then for every point $x \in X$ we have $ St(\{ x \}, \mathcal W' ) \subset W$ for some $W \in \mathcal W$. The following theorem requires a complete metric on the space $X$; for example, the statement holds for an arbitrary manifold. \begin{thm}\label{shrinkinglocallycompact} Let $\mathcal D$ be a usc decomposition of a space $X$ admitting a complete metric. Then the following are equivalent: \begin{enumerate}[\rm (1)] \item the decomposition map $\pi \co X \to X_{\mathcal D}$ is a near-homeomorphism, \item $\mathcal D$ is shrinkable. \end{enumerate} If additionally $X$ is also locally compact and separable, then shrinkability is equivalent to \begin{enumerate}[\rm (3)] \item if $C \subset X$ is an arbitrary compact set, $\varepsilon > 0$ and $\mathcal U$ is a $\mathcal D$-saturated open cover of $X$, then there is a homeomorphism $h \co X \to X$ such that ${\mathrm{diam}}\thinspace h(D) < \varepsilon$ for every $D \in \mathcal D$ with $D \subset C$, and $h$ is $\mathcal U$-close to the identity. \end{enumerate} \end{thm} \begin{proof} {\textbf{Near-homeomorphism (1) implies shrinking (2) and (3).}} Of course (2) implies (3), so we are going to prove only that (1) implies (2). At first, suppose that the decomposition map $\pi \co X \to X_{\mathcal D}$ is a near-homeomorphism. We have to show that $\mathcal D$ is shrinkable by finding an appropriate homeomorphism $h$. We know that since $X$ is metric, the decomposition space $X_{\mathcal D}$ is metrizable, hence paracompact. (To show that $\mathcal D$ is shrinkable, we will use only that the space $X$ is paracompact and $T_4$.) Let $\mathcal V$ be an open cover and let $\mathcal U$ be a $\mathcal D$-saturated open cover of $X$.
Take the open covering $\{ \pi ( U) : U \in \mathcal U \}$ of $X_{\mathcal D}$. Since $X_{\mathcal D}$ is paracompact, this covering has a star-refinement $\mathcal W_0$, i.e.\ $\mathcal W_0$ is a covering such that the collection $$\left\{ St(W_{\alpha}, \mathcal W_0) : W_{\alpha} \in \mathcal W_0 \right\}$$ of stars of its elements is a refinement of $\{ \pi ( U) : U \in \mathcal U \}$; see \cite[Section~8.3]{Du66}. Similarly $\mathcal W_0$ has a star-refinement covering $\mathcal W_1$. Then there is a homeomorphism $$ h_1 \co X \to X_{\mathcal D} $$ which is $\mathcal W_1$-close to $\pi$ because $\pi$ is a near-homeomorphism. Take the open cover $$ \mathcal W_1 \bigcap h_1(\mathcal V) = \{ W \cap h_1(V) : W \in \mathcal W_1, V \in \mathcal V \} $$ and a star-refinement $\mathcal W_2$ of it. Of course $\mathcal W_2$ is a star-refinement of $\mathcal W_1$ and of $h_1(\mathcal V)$ as well. There is a homeomorphism $$ h_2 \co X \to X_{\mathcal D} $$ which is $\mathcal W_2$-close to $\pi$. Let $h \co X \to X$ be the composition $$ h_1^{-1} \circ h_2. $$ At first we show that $h$ shrinks every decomposition element $D \in \mathcal D$ into some $V \in \mathcal V$. Let $D \in \mathcal D$. It is enough to show that $h_2 (D) \subset h_1(V)$ for some $V \in \mathcal V$. Since $h_2$ is $\mathcal W_2$-close to $\pi$ and $\pi(x) = \pi(D)$ for every $x \in D$, the points $\pi(D)$ and $h_2(x)$ are in the same $W_x \in \mathcal W_2$, so $$ h_2(D) \subset St( \{ \pi(D) \}, \mathcal W_2 ) \subset h_1(V) $$ for some $V \in \mathcal V$ because $\mathcal W_2$ is a star-refinement of $h_1(\mathcal V)$. Now we show that $h$ is $\mathcal U$-close to the identity. We have that for every $x \in D$ the points $\pi(D)$ and $h_1(x)$ are in the same $W_x \in \mathcal W_1$ because $h_1$ is $\mathcal W_1$-close to $\pi$, so $$ h_1(D) \subset St( \{ \pi(D) \}, \mathcal W_1 ).$$ Since $\mathcal W_2$ is a refinement of $\mathcal W_1$, we have $$ h_2(D) \subset St( \{ \pi(D) \}, \mathcal W_2 ) \subset St( \{ \pi(D) \}, \mathcal W_1 ).$$ These imply that $$ h_1(D) \cup h_2(D) \subset St( \{ \pi(D) \}, \mathcal W_1 ) \subset W_0 $$ for some $W_0 \in \mathcal W_0$ because $\mathcal W_1$ is a star-refinement of $\mathcal W_0$. Hence for every $D \in \mathcal D$ we have $$ D \cup h(D) = h_1^{-1} \circ h_1 ( D \cup h(D) ) = h_1^{-1} ( h_1 ( D ) \cup h_2 ( D ) ) \subset h_1^{-1} ( W_0 ) $$ so if we show that $$h_1^{-1} ( W_0 ) \subset U$$ for some $U \in \mathcal U$, then we prove the statement. Since $h_1$ and $\pi$ are $\mathcal W_1$-close, they are $\mathcal W_0$-close as well. This means that for every $x \in X$ the points $\pi(x)$ and $h_1(x)$ are in the same $W_x \in \mathcal W_0$. So if $x \in h^{-1}_1 ( W_0 )$, then $$ \pi(x) \in St( W_0, \mathcal W_0 ),$$ which gives that $$ \pi( h^{-1}_1 ( W_0 ) ) \subset St( W_0, \mathcal W_0 ) \subset \pi(U) $$ for some $U \in \mathcal U$ because $\mathcal W_0$ is a star-refinement of $\{ \pi ( U) : U \in \mathcal U \}$. Then the statement follows because $$ h^{-1}_1 ( W_0 ) \subset \pi^{-1} \circ \pi (h^{-1}_1 ( W_0 ) ) \subset \pi^{-1} \circ \pi (U) = U. $$ {\textbf{Shrinking (2) or (3) implies near-homeomorphism (1).}} In the case of (3), observe first that if $X$ is locally compact and separable, then $X$ is $\sigma$-compact, so $X$ is the union $\cup_{n=1}^{\infty} C_n$ of countably many compact sets $$C_1 \subset C_2 \subset \cdots \subset C_n \subset \cdots.$$ We may also assume that every $C_n$ is $\mathcal D$-saturated and has non-empty interior. Let $\mathcal W$ be an arbitrary open cover of $X_{\mathcal D}$.
We have to construct a homeomorphism $h \co X \to X_{\mathcal D}$ which is $\mathcal W$-close to $\pi$. At first, we construct a sequence $$ \mathcal U_0, \mathcal U_1, \ldots, \mathcal U_n, \ldots $$ of $\mathcal D$-saturated open covers of $X$ and a sequence $$ h_0, h_1, \ldots, h_n, \ldots $$ of self-homeomorphisms of $X$ with some useful properties. Let $\mathcal U_0$ be a $\mathcal D$-saturated open cover of $X$ such that the collection of the closures of the elements of $\mathcal U_0$ refines the open cover $\pi^{-1}(\mathcal W)$. This obviously exists because $X_{\mathcal D}$ is regular, so around every point of $X_{\mathcal D}$ there is a small closed nbhd contained in some element of $\mathcal W$. Let $h_0$ be the identity homeomorphism. Let $\varepsilon_n > 0$ be a decreasing sequence converging to $0$. Define $\varepsilon_0$ to be $\infty$. Denote the metric on $X$ by $d$. Suppose inductively that we have already constructed the covers $ \mathcal U_0, \ldots, \mathcal U_n $ and the homeomorphisms $ h_0, \ldots, h_n $ with the following properties: \begin{enumerate}\label{egyesketto} \item \begin{enumerate} \item $\mathcal U_{i+1}$ is a $\mathcal D$-saturated open cover, which refines $\mathcal U_{i}$ for $0 \leq i \leq n-1$, \item for all $0 \leq i \leq n$ the cover $\mathcal U_i$ refines the collection of $\varepsilon_i$-nbhds of the elements of $\mathcal D$ and also refines the collection $\{ \pi^{-1}(B_{\varepsilon_i}(y)) : y \in X_{\mathcal D} \}$, where $B_{\varepsilon_i}(y)$ is the open ball of radius $\varepsilon_i$ around $y$, \end{enumerate} \item \begin{enumerate} \item for every $0 \leq i \leq n-1$ every $D \in \mathcal D$ has a nbhd $U \in \mathcal U_i$ such that for every $U' \in \mathcal U_{i+1}$ which contains $D$ we have $$ h_i(U' ) \cup h_{i+1}(U') \subset h_i(U),$$ \item[(b)] for every $0 \leq i \leq n$ the diameter of each $h_i(U)$, $U \in \mathcal U_i$, is smaller than $\varepsilon_i$, \item[(b$'$)] in the case where $X$ is $\sigma$-compact we require only that for every $0 \leq i \leq n$ and for every nbhd $U \in \mathcal U_i$ such that $U\cap C_i \neq \emptyset$ the diameter of $h_i(U)$ is smaller than $\varepsilon_i$. \end{enumerate} \end{enumerate} There will be some important corollaries of these constructions. Part (a) of (2) implies that every $D \in \mathcal D$ has a nbhd $U \in \mathcal U_{i}$ such that for every $k \geq 1$ and $U' \in \mathcal U_{i+k}$ which contains $D$ we have \begin{equation}\label{tart} h_{i+k}(U') \subset h_i(U). \end{equation} For $k=1$ this is immediate from (2)(a) and for $k \geq 2$ this follows by a simple induction. This means that once we have $\mathcal U_n$ and $h_n$ for every $n \in \mathbb N$ satisfying (1) and (2), the sequence $h_n$ is a Cauchy sequence in the sense of local uniform convergence in the space of maps of $X$ into $X$. Indeed, if $x \in X$, then for some $D \in \mathcal D$ we have $x \in D$ and then $D$ has a nbhd $U \in \mathcal U_{n}$ for every $n$ such that by applying (\ref{tart}) for all $k \in \mathbb N$ \begin{equation}\label{tart2} h_{n+k}(D) \subset h_n(U), \end{equation} which means that $d( h_n(x), h_{n+k}(x) ) < \varepsilon_n$ for all $x \in X$ by (2)(b). In the case where $X$ is $\sigma$-compact we have that for some $m \in \mathbb N$ the intersection $D \cap C_n$ is non-empty for $n \geq m$, hence by (2)(b$'$) we have $\mathrm{diam}\thinspace h_n(U) < \varepsilon_n$ for every $n \geq m$ and nbhd $U \in \mathcal U_{n}$ of $D$.
This implies that for all $n \geq m$ we get $d( h_n(x), h_{n+k}(x) ) < \varepsilon_n$ for all $k$ and $x \in D$, where $D \subset C_n$. Since $(X, d)$ is complete, the sequence $h_n$ converges locally uniformly to a continuous map $$\chi \co X \to X,$$ which will be a good candidate for obtaining our desired near-homeomorphism. {\textbf{Defining $\mathcal U_{n+1}$ and $h_{n+1}$.}} So let us return to the definition of the covers $\mathcal U_n$ and homeomorphisms $h_n$. Suppose inductively that we have already constructed the covers $ \mathcal U_0, \ldots, \mathcal U_n $ and the homeomorphisms $ h_0, \ldots, h_n $ with the properties (1) and (2). We are going to define $\mathcal U_{n+1}$ and $h_{n+1}$. The metrizable space $X_{\mathcal D}$ is paracompact, so the open cover $\pi ( \mathcal U_n )$ has a star-refinement whose $\pi$-preimage $\mathcal U_n'$ is a $\mathcal D$-saturated open cover of $X$, which star-refines $\mathcal U_n$. Let $\mathcal V$ be an open cover of $X$ such that the diameter of each of its elements is smaller than $\varepsilon_{n+1}$. Then we have two possibilities. \begin{itemize} \item If $\mathcal D$ is shrinkable, then there is a self-homeomorphism $H$ of $X$ which is $h_n( \mathcal U_n' )$-close to the identity and shrinks the elements of $h_n(\mathcal D)$ into the sets of $\mathcal V$. Let $$ h_{n+1} = H \circ h_n.$$ Clearly the diameter of each $h_{n+1} (D)$, where $D \in \mathcal D$, is smaller than $\varepsilon_{n+1}$. \item If, as we suppose in (3) of the statement of Theorem~\ref{shrinkinglocallycompact}, we can shrink only those elements of $\mathcal D$ which lie in a chosen compact set, then there is a homeomorphism $H_0 \co X \to X$ such that the elements of $\mathcal D$ in the compact set $C_{n+1}$ are mapped by $H_0$ into some element of $h_n^{-1}(\mathcal V)$ and $H_0$ is $\mathcal U_n'$-close to the identity. This implies that $h_n \circ H_0 \circ h_n^{-1}$ is a self-homeomorphism of $X$ which maps the elements of $h_n(\mathcal D)$ lying in $h_n(C_{n+1})$ into the sets of $\mathcal V$ and is $h_n( \mathcal U_n' )$-close to the identity. Denote $h_n \circ H_0 \circ h_n^{-1}$ by $H$. Then let $$ h_{n+1} = H \circ h_n.$$ So $h_{n+1}$ maps every $D \in \mathcal D$ with $D \subset C_{n+1}$ into a set of diameter smaller than $\varepsilon_{n+1}$. \end{itemize} The definition of $\mathcal U_{n+1}$ is a little more complicated.
For every $U_n' \in \mathcal U_n'$ $$ h_{n+1}(U_n') \subset h_n(St(U_n', \mathcal U_n' ))$$ because $$ h_{n+1}(U_n') = H \circ h_{n}(U_n') \subset St( h_n (U_n'), h_n ( \mathcal U_n' ) ) $$ since $H$ is $h_n( \mathcal U_n' )$-close to the identity, and also $$ St( h_n (U_n'), h_n ( \mathcal U_n' ) ) = h_n(St(U_n', \mathcal U_n' )).$$ The covering $\mathcal U_n'$ star-refines $\mathcal U_n$, so for every $U_n' \in \mathcal U_n'$ there is a $U_n \in \mathcal U_n$ such that $$ h_n(St(U_n', \mathcal U_n' )) \subset h_n(U_n),$$ which obviously implies that for every $U_n' \in \mathcal U_n'$ there is a $U_n \in \mathcal U_n$ such that $$ h_n (U_n') \cup h_{n+1}(U_n') \subset h_n(St(U_n', \mathcal U_n' )) \subset h_n(U_n).$$ Let $\mathcal S$ be a $\mathcal D$-saturated open cover of $X$ with the following properties: \begin{enumerate}[\rm (i)] \item the elements of $\mathcal S$ are nbhds of the elements of $\mathcal D$ such that the diameter of each $h_{n+1} (S)$, where $S \in \mathcal S$, is smaller than $\varepsilon_{n+1}$ (required only for those $S$ with $S \cap C_{n+1} \neq \emptyset$ in the case where $X$ is $\sigma$-compact), \item $\mathcal S$ refines the collection of $\varepsilon_{n+1}$-nbhds of the elements of $\mathcal D$, \item $\mathcal S$ also refines the $\mathcal D$-saturated coverings \begin{enumerate} \item $\mathcal U_n'$ and \item the collection $\{ \pi^{-1}(B_{\varepsilon_{n+1}}(y)) : y \in X_{\mathcal D} \}$, \end{enumerate} \item for every $S \in \mathcal S$ there is a $U_n \in \mathcal U_n$ such that $$ h_n (S) \cup h_{n+1}(S) \subset h_n(U_n).$$ \end{enumerate} Let $\mathcal U_{n+1}$ be the $\pi$-preimage of an open cover of $X_{\mathcal D}$ which star-refines the open cover $\pi(\mathcal S)$. It follows that $\mathcal U_{n+1}$ star-refines $\mathcal S$. Having defined $\mathcal U_{n+1}$ and $h_{n+1}$, let us check that $ \mathcal U_0, \ldots, \mathcal U_{n+1} $ and $ h_0, \ldots, h_{n+1} $ satisfy the conditions (1) and (2) on page~\pageref{egyesketto}. The cover $\mathcal U_{n+1}$ refines the cover $\mathcal U_{n}$ because $\mathcal U_{n}'$ refines $\mathcal U_{n}$, $\mathcal S$ refines $\mathcal U_{n}'$ by (iii)(a) and $\mathcal U_{n+1}$ refines $\mathcal S$. So (1)(a) holds. Also (1)(b) holds because of (ii) and (iii)(b). To prove (2)(a) observe that for every $D \in \mathcal D$ the set $St(D, \mathcal U_{n+1})$ is a subset of $St(U, \mathcal U_{n+1})$ for some $U \in \mathcal U_{n+1}$ (take any $U$ containing $D$). Then $St(D, \mathcal U_{n+1}) \subset S$ for some $S \in \mathcal S$ since $\mathcal U_{n+1}$ star-refines $\mathcal S$. By (iv) there exists a $U \in \mathcal U_n$ such that $$ h_n (S) \cup h_{n+1}(S) \subset h_n(U)$$ so $$ h_n (St(D, \mathcal U_{n+1}) ) \cup h_{n+1}( St(D, \mathcal U_{n+1}) ) \subset h_n(U).$$ But every $U' \in \mathcal U_{n+1}$ which contains $D$ is in $St(D, \mathcal U_{n+1})$, so we have $$ h_n (U') \cup h_{n+1}(U') \subset h_n(U),$$ which proves (2)(a). Finally, the diameter of each $h_{n+1}(U)$, $U \in \mathcal U_{n+1}$, is smaller than $\varepsilon_{n+1}$ (if $U \cap C_{n+1} \neq \emptyset$, in the case where $X$ is $\sigma$-compact), because $\mathcal U_{n+1}$ refines $\mathcal S$ and we can apply (i). {\textbf{Constructing the near-homeomorphism.}} After having these infinitely many $\mathcal D$-saturated open coverings $$ \mathcal U_0, \mathcal U_{1}, \ldots $$ and homeomorphisms $$ h_0, h_{1}, \ldots $$ take the map $$ \chi \co X \to X$$ obtained by applying (\ref{tart2}), defined as the pointwise limit of the sequence $h_n$. At first, we show that $\chi$ is surjective.
Let $x \in X$ and $x_n = h_{n+1}^{-1}(x)$. Let $D_n \in \mathcal D$ be such that $x_n \in D_n$; then by (2)(a) we get a nbhd $U_n \in \mathcal U_n$ of $D_n$ such that for every $U' \in \mathcal U_{n+1}$ containing $D_n$ we have $$ h_n(U' ) \cup h_{n+1}(U') \subset h_n(U_n).$$ In this way we get a decreasing sequence $$ U_1 \supset U_2 \supset \cdots \supset U_n \supset \cdots $$ because of the following. It is enough to show that $D_n \subset U_{n+1}$; then by (2)(a) we obtain $h_n(U_{n+1}) \subset h_n(U_n)$ so $U_{n+1} \subset U_n$. But $D_n \subset U_{n+1}$ because by (2)(a) for every $V \in \mathcal U_{n+2}$ containing $D_{n+1}$ we have $$ h_{n+2}(V) \subset h_{n+1}(U_{n+1}), $$ so we also have $$ h_{n+2} (x_{n+1}) \in h_{n+2} (D_{n+1}) \subset h_{n+2} ( V),$$ which implies $$ x \in h_{n+1} ( U_{n+1} ). $$ Hence $$ h_{n+1}(x_n) = x \in h_{n+1} ( U_{n+1} )\mbox{\ \ \ \ and so\ \ \ \ }x_n \in U_{n+1},$$ but $U_{n+1}$ is $\mathcal D$-saturated, hence also $D_n \subset U_{n+1}$. The sequence $(x_n)$ has a Cauchy (hence convergent) subsequence: since $x_ n \in U_n$, for all $k \geq 0$ $$ x_{n+k} \in U_n $$ and for every $\varepsilon > 0$ there is $\varepsilon_n < \varepsilon$ such that $U_n$ is in the $\varepsilon$-nbhd of some $D \in \mathcal D$ by (1)(b). Since the metric space $X$ is complete, there is an $x_0 \in X$ such that a subsequence $(x_{n_k})$ of $(x_n)$ converges to $x_0$. All of these imply that because of the definition of $\chi$ and the locally uniform convergence of $h_n$ we have $$ \chi(x_0) = \lim_{k \to \infty} h_{n_k + 1}( x_{n_k}) = \lim_{k \to \infty} x = x. $$ This means $\chi$ is surjective. In general $\chi$ is not injective, so it is itself not a homeomorphism. However, the composition $\pi \circ \chi^{-1}$ of the relation $\chi^{-1}$ and the decomposition map $\pi$ is a homeomorphism. To see this, we show that the sets $\chi^{-1}(x)$, where $x \in X$, are exactly the decomposition elements of $\mathcal D$. By (\ref{tart2}) for every $n \in \mathbb N$ and $D \in \mathcal D$ there is a nbhd $U \in \mathcal U_n$ of $D$ such that for every $k \geq 0$ $$ h_{n+k} ( D) \subset h_n(U)$$ hence $$ \chi ( D ) = \lim_{k \to \infty} h_{n+k}( D ) \subset \mathrm{cl}\thinspace h_n(U). $$ It is a fact that ${\mathrm {diam}}\thinspace \mathrm{cl}\thinspace A = {\mathrm {diam}}\thinspace A$ for an arbitrary subset $A$ of a metric space, so by (2)(b) we obtain ${\mathrm {diam}}\thinspace \chi(D) < \varepsilon_n$ for each $n$, and by (2)(b$'$) for some $C_{m} \supset D$ we obtain ${\mathrm {diam}}\thinspace \chi(D) < \varepsilon_n$ for each $n \geq m$, which implies that $\chi(D)$ is a point. To show that the $\chi$-preimage of a point is not bigger than a decomposition element, observe that for different elements $D_1$ and $D_2$ and for large enough $n$ by (1)(b) there are $U, V \in \mathcal U_n$ containing $D_1$ and $D_2$ which lie in the small $\varepsilon_n$-nbhds of $D_1$ and $D_2$, respectively; hence for small enough $\varepsilon_n$ the sets $U$ and $V$ have disjoint closures. Then similarly to the above, $$ \chi ( D_1 ) \subset \mathrm{cl}\thinspace h_n(U)\mbox{\ \ \ \ and\ \ \ \ }\chi ( D_2 ) \subset \mathrm{cl}\thinspace h_n(V), $$ which implies that $\chi ( D_1 )$ and $\chi ( D_2 )$ are different, so the sets $\chi^{-1}(x)$, where $x \in X$, are exactly the decomposition elements of $\mathcal D$. This means that $\pi \circ \chi^{-1}$ is a bijection. Its inverse is continuous because $\chi$ is continuous and $\pi$ is a closed map since the decomposition is usc. To prove that $\pi \circ \chi^{-1}$ is continuous it is enough to show that $\chi$ is a closed map.
Let $A \subset X$ be a closed set and observe that a point $y \in X$ is in $X - \chi (A)$ if and only if $\chi^{-1}(y) \cap \chi^{-1} ( \chi ( A ) ) = \emptyset$, which holds exactly if $\chi^{-1}(y) \cap \pi^{-1} ( \pi ( A )) = \emptyset$. This means that in order to show that $\chi (A)$ is closed it is enough to prove that for any decomposition element $D$ such that $D \cap \pi^{-1} ( \pi ( A )) = \emptyset$ the point $\chi (D)$ is an inner point of $X - \chi (A)$. If $\varepsilon_n$ is small enough, then since $D \cap \pi^{-1} ( \pi ( A )) = \emptyset$, by (1)(b) for every $U_n \in \mathcal U_n$ containing $D$ we have $$ \mathrm{cl}\thinspace U_n \cap \mathrm{cl}\thinspace St(A, \mathcal U_n) = \emptyset.$$ By (\ref{tart2}) we have $\chi ( D ) \in h_n ( \mathrm{cl}\thinspace U_n )$ and obviously $$ \chi ( A ) \subset h_n ( \mathrm{cl}\thinspace St(A, \mathcal U_n) ) = \mathrm{cl}\thinspace h_n ( St(A, \mathcal U_n) ) $$ so finally we get $$ \chi (D) \in X - \mathrm{cl}\thinspace h_n ( St(A, \mathcal U_n) ) \subset X - \chi ( A ) $$ implying that $\chi (D)$ is an inner point of $X - \chi (A)$. As a consequence the map $\pi \circ \chi^{-1}$ is a homeomorphism. We have to prove that it is $\mathcal W$-close to $\pi$. By (\ref{tart}) for every $D$ and for all $n$ there exist nbhds $U_n \in \mathcal U_n$ of $D$ such that $$ U_0 = h_0 ( U_0 ) \supset h_1 (U_1) \supset \cdots \supset h_n (U_n) \supset \cdots. $$ So $h_n(D) \subset U_0$ for every $n$ and then $\chi(D) \in \mathrm{cl}\thinspace U_0$. Since the collection of the closures of the elements of $\mathcal U_0$ refines the cover $\pi^{-1}(\mathcal W)$, both of $D$ and $\chi(D)$ are in the same set $\pi^{-1}(W)$, where $W \in \mathcal W$. This implies that if we denote $\chi(D)$ by $x$, then both of $\chi^{-1} ( x )$ and $x$ are in $\pi^{-1}(W)$. As a result $\chi ( \chi^{-1} ( x )) = x$ and $\chi(x)$ are in $\chi (\pi^{-1}(W))$, so by applying the map $\pi \circ \chi^{-1}$ we get that $$ \pi \circ \chi^{-1} (x)\mbox{\ \ \ \ and\ \ \ \ }\pi \circ \chi^{-1} ( \chi(x) ) = \pi(x)$$ are in $W$. This shows that $\pi \circ \chi^{-1}$ is $\mathcal W$-close to $\pi$. \end{proof} The goal of most of the applications of shrinking is to obtain some kind of embedding of a manifold by the process of approximating a given map. Let $\mathbb R^d_+$ denote the closed halfspace in $\mathbb R^d$. \begin{defn}[Flat subspace and locally flat embedding] Let $A \subset X$ be a chosen subspace of a topological space $X$. We say that the subspace $B \subset X$ homeomorphic to $A$ is \emph{flat} if there is a homeomorphism $h \co X \to X$ such that $h( B ) = A$. Let $X$ be an $n$-dimensional manifold. An embedding $e \co B \to X$ of a $d$-dimensional manifold $B$ is \emph{locally flat} if every point $e(b)$ has a nbhd $U$ in $X$ such that the pair $$(U, e(B) \cap U)\mbox{\ is homeomorphic to\ } \left\{ \begin{array}{ccc} (\mathbb R^n, \mathbb R^d) & \mbox{if $b$ is an inner point of $B$} \\ (\mathbb R^n, \mathbb R^d_+) & \mbox{if $b$ is a boundary point of $B$}. \end{array} \right. $$ \end{defn} \begin{defn}[Collared and bicollared subspaces] The subspace $A \subset X$ is \emph{collared} if there is an embedding $f \co A \times [0,1) \to X$ onto an open subspace of $X$ such that $f(a, 0) = a$. The subspace $A \subset X$ is \emph{bicollared} if there is an embedding $f \co A \times (-1,1) \to X$ such that $f(a, 0) = a$. The subspace $A \subset X$ is \emph{locally collared} (or \emph{locally bicollared}) if every $a \in A$ has a nbhd $U$ in $X$ such that $A \cap U$ is collared (resp.\ bicollared). \end{defn}
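For instance, the boundary sphere $S^{n-1} = \partial D^n$ is collared in the closed disk $D^n$: one possible collar is the embedding $$f \co S^{n-1} \times [0,1) \to D^n, \qquad f(a, t) = \left( 1 - \frac{t}{2} \right) a,$$ whose image is the open subspace $\{ x \in D^n : |x| > 1/2 \}$ and which satisfies $f(a, 0) = a$. Similarly, the equator sphere $S^{n-1} \subset S^n$ is bicollared via $(a, t) \mapsto (\sqrt{1 - t^2}\, a, t)$, where we regard $S^n \subset \mathbb R^{n+1} = \mathbb R^n \times \mathbb R$.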
A typical application of shrinking is the following. \begin{thm} Let $X$ be an $n$-dimensional manifold with boundary $\partial X$. Then $\partial X$ is collared in $X$. \end{thm} \begin{proof} Attach the manifold $\partial X \times [0, 1]$ to $X$ along $\partial X \subset X$ by the identification $$\varphi \co \partial X \times \{ 0 \} \to X,$$ $$\varphi (x, 0) = x.$$ In this way we get a manifold $\tilde X$, which contains the attached $\partial X \times [0, 1]$ as a subset. The boundary of $\tilde X$ is $\partial X \times \{ 1 \}$ and so the boundary $\partial \tilde X$ is obviously collared. Let $\mathcal D$ be the decomposition of $\tilde X$ into the intervals $\{ \{ x \} \times [0,1] : x \in \partial X \}$ and the singletons in $\tilde X - \partial X \times [0, 1]$. Then $X$ and the quotient space $\tilde X_{\mathcal D}$ are homeomorphic by the map $$\alpha \co X \to \tilde X_{\mathcal D},$$ $$\alpha(x) = [x],$$ where $[x]$ denotes the equivalence class of $x$. Indeed, $\alpha$ is a bijection mapping $X - \partial X$ to the classes consisting of single points and mapping the boundary points $x \in \partial X$ to the class $[x]$. It is easy to see that $\alpha$ and also $\alpha^{-1}$ are continuous, so $\alpha$ is a homeomorphism. If we prove that $\tilde X$ is also homeomorphic to the decomposition space $\tilde X_{\mathcal D}$ by a map $\beta$ as the diagram \begin{center} \begin{graph}(6,2) \graphlinecolour{1}\grapharrowtype{2} \textnode {X}(0.5,1.5){$X$} \textnode {T}(5.5, 1.5){$\tilde X$} \textnode {Q}(3, 0){$\tilde X_{\mathcal D}$} \diredge {X}{Q}[\graphlinecolour{0}] \diredge {T}{Q}[\graphlinecolour{0}] \freetext (1.9,1){$\alpha$} \freetext (4, 1){$\beta$} \end{graph} \end{center} shows, then we obtain that $X$ and $\tilde X$ are homeomorphic through the map $\beta^{-1} \circ \alpha$, which finishes the proof. A homeomorphism $\beta$ exists if we prove that ${\mathcal D}$ is shrinkable, because then $\pi \co \tilde X \to \tilde X_{\mathcal D}$ is a near-homeomorphism. Let $\mathcal V$ be an arbitrary open cover of $\tilde X$ and let $\mathcal U$ be a $\mathcal D$-saturated open cover of $\tilde X$. Let $\mathcal W$ be a refinement of $\mathcal V$ such that $\mathcal W$ contains all the small nbhds of the form $U_{x} \times (1- \varepsilon_{x}, 1]$ for all ${(x,1)} \in \partial \tilde X$, for some appropriate $\varepsilon_{x} > 0$ and relative nbhd $U_{x} \subset \partial X$. We also suppose that the other elements of $\mathcal W$ do not intersect $\partial \tilde X$. We will apply Theorem~\ref{shrinkinglocallycompact}. Let $C \subset \tilde X$ be a compact set and let $E \subset \tilde X$ be a compact set containing the attached $\{x\} \times [0, 1]$ for all $(x, 1) \in \partial \tilde X$ such that $\{x\} \times [0, 1]$ intersects $C$. Since $E$ is compact, there are finitely many nbhds in $\mathcal W$ and also in $\mathcal U$ which cover $E$. Let us restrict ourselves to these finitely many nbhds. Let $\varepsilon>0$ be such that $\varepsilon < \varepsilon_x$ for all the finitely many chosen nbhds of the form $U_x \times (1- \varepsilon_x, 1]$.
Let $U$ be the union of the chosen finitely many nbhds in $\mathcal U$ and let $\delta > 0$ be such that for a metric on $\tilde X$ the $\delta$-nbhd of $$\bigcup_{(x,1) \in \partial \tilde X \cap E} \{x\} \times [0, 1]$$ is inside $U$. Then define a homeomorphism $h \co \tilde X \to \tilde X$ which maps $$\bigcup_{(x,1) \in \partial \tilde X \cap E} \{x\} \times [0, 1]$$ into $$\bigcup_{(x,1) \in \partial \tilde X \cap E} \{x\} \times (1- \varepsilon, 1]$$ by mapping each arc $\{x\} \times [0, 1]$, where $(x, 1) \in \partial \tilde X$, into itself. We suppose that the support of $h$ is inside the $\delta/2$-nbhd of $\bigcup_{(x,1) \in \partial \tilde X \cap E} \{x\} \times [0, 1]$. This $h$ satisfies (3) of Theorem~\ref{shrinkinglocallycompact}, so $\pi$ is a near-homeomorphism which yields the claimed homeomorphism $\beta$. \end{proof} \bigskip{\Large{\section{Shrinkable decompositions}}} \bigskip\large The following notions are often used to describe types of decompositions which turn out to be shrinkable. \begin{defn} Let $\mathcal D$ be a usc decomposition of $\mathbb R^n$. \begin{itemize} \item $\mathcal D$ is \emph{cell-like} if every decomposition element is cell-like, \item $\mathcal D$ is \emph{cellular} if every decomposition element is cellular, \item the decomposition elements are \emph{flat arcs} if for every $D \in \mathcal D$ there is a homeomorphism $h \co \mathbb R^n \to \mathbb R^n$ such that $h(D)$ is a straight line segment, \item $\mathcal D$ is \emph{starlike} if every decomposition element $D$ is a starlike set, that is, $D$ is a union of compact straight line segments with a common endpoint $x_0 \in \mathbb R^n$, \item $\mathcal D$ is \emph{starlike-equivalent} if for every $D \in \mathcal D$ there is a homeomorphism $h \co \mathbb R^n \to \mathbb R^n$ such that $h(D)$ is starlike, \item $\mathcal D$ is \emph{thin} if for every $D \in \mathcal D$ and every nbhd $U$ of $D$ there is an $n$-dimensional ball $B \subset \mathbb R^n$ such that $D \subset B \subset U$ and $\partial B$ is disjoint from the non-degenerate elements of $\mathcal D$, \item $\mathcal D$ is \emph{locally shrinkable} if for each $D \in \mathcal D$ we have that for every nbhd $U$ of $D$ and open cover $\mathcal V$ of $\mathbb R^n$ there is a homeomorphism $h \co \mathbb R^n \to \mathbb R^n$ with support $U$ such that $h(D) \subset V$ for some $V \in \mathcal V$, \item $\mathcal D$ \emph{inessentially spans} the disjoint closed subsets $A, B \subset \mathbb R^n$ if for every $\mathcal D$-saturated open cover $\mathcal U$ of $\mathbb R^n$ there is a homeomorphism $h \co \mathbb R^n \to \mathbb R^n$ which is $\mathcal U$-close to the identity and no element of $\mathcal D$ meets both of $h(A)$ and $h(B)$, \item the decomposition element $D$ has \emph{embedding dimension} $k$ if for every $(n-k-1)$-dimensional smooth submanifold $M$ of $\mathbb R^n$ and open cover $\mathcal V$ of $\mathbb R^n$ there is a homeomorphism $h \co \mathbb R^n \to \mathbb R^n$ which is $\mathcal V$-close to the identity with $h (M) \cap D = \emptyset$, and this is not true for $(n-k)$-dimensional submanifolds. \end{itemize} \end{defn}
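For example, every compact convex set is starlike with respect to each of its points, and the cone $\{ t u : u \in K, \ t \in [0,1] \}$ over an arbitrary compact set $K \subset S^{n-1}$ is starlike with respect to the origin even though it is in general not convex. An arc obtained from a straight segment by a self-homeomorphism of $\mathbb R^n$ is a flat arc, and in particular starlike-equivalent, although typically not starlike itself.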
Most of these notions have the corresponding versions in arbitrary manifolds or spaces. A condition that is obviously satisfied by at least $5$-dimensional Euclidean spaces is the following; indeed, in $\mathbb R^n$ with $n \geq 5$ two maps of the $2$-dimensional disk can be put into general position so that their images become disjoint, since $2 + 2 < n$. \begin{defn}[Disjoint disks property] The metric space $X$ has the \emph{disjoint disks property} if for arbitrary maps $f_1$ and $f_2$ from $D^2$ to $X$ and for every $\varepsilon > 0$ there are approximating maps $g_i$ from $D^2$ to $X$ $\varepsilon$-close to $f_{i}$, $i = 1, 2$, such that $g_1(D^2)$ and $g_2(D^2)$ are disjoint. \end{defn} The next theorem \cite{Ed78} is one of the fundamental results of decomposition theory; we omit its proof here. \begin{thm} Let $X$ be an at least $5$-dimensional manifold and let $\mathcal D$ be a cell-like decomposition of $X$. Then $\mathcal D$ is shrinkable if and only if $X_{\mathcal D}$ is finite dimensional and has the disjoint disks property. \end{thm} Recall the inductive definition of dimension for separable metric spaces: the empty set has dimension $-1$ by definition, a space has dimension at most $n$ if every point has arbitrarily small nbhds whose frontiers have dimension at most $n-1$, and a space is finite dimensional if it has dimension at most $n$ for some $n$. For example, a manifold is finite dimensional. In the following statement we enumerate several conditions which imply that a (usc) decomposition is shrinkable. \begin{thm}\label{shrinking_thm} The following decompositions are strongly shrinkable: \begin{enumerate}[\rm (1)] \item cell-like usc decompositions of a $2$-dimensional manifold, \item countable usc decompositions of $\mathbb R^n$ if the decomposition elements are flat arcs, \item countable and starlike usc decompositions of $\mathbb R^n$, \item countable and starlike-equivalent usc decompositions of $\mathbb R^3$, \item null and starlike-equivalent usc decompositions of $\mathbb R^n$, \item thin usc decompositions of $3$-manifolds, \item countable and thin usc decompositions of $n$-dimensional manifolds, \item countable and locally shrinkable usc decompositions of a complete metric space if $\cup \mathcal H_{\mathcal D}$ is $G_{\delta}$, \item monotone usc decompositions of $n$-dimensional manifolds if $\mathcal D$ inessentially spans every pair of disjoint, bicollared $(n-1)$-dimensional spheres, \item null and cell-like decompositions of smooth $n$-dimensional manifolds if the embedding dimension of every $D \in \mathcal D$ is $\leq n-3$. \end{enumerate} \end{thm} Before proving Theorem~\ref{shrinking_thm} let us make some observations and preparations. At first, note that there are usc decompositions of $\mathbb R^3$ into straight line segments which are not shrinkable: in the proof of Proposition~\ref{line_decomp} for any given compact metric space $Y$ we constructed a decomposition of $\mathbb R^3$ into straight line segments and singletons such that $Y$ is a subspace of the decomposition space. Since $\mathbb R^3$ is a complete metric space and the decomposition is usc, the decomposition is shrinkable if and only if $\pi$ is approximable by homeomorphisms. This means that if $Y$ cannot be embedded into $\mathbb R^3$, then the decomposition space cannot be homeomorphic to $\mathbb R^3$, so this decomposition is not shrinkable. If the decomposition is countable, then we can shrink the decomposition elements successively, provided there is a guarantee that shrinking one element does not expand an already shrunken one. The next proposition is a technical tool for this process. \begin{prop}\label{shrinking_lemma} Let $\mathcal D$ be a countable usc decomposition of a locally compact metric space $X$.
Suppose for every $D \in \mathcal D$, for every $\varepsilon > 0$ and for every homeomorphism $f \co X \to X$ there exists a homeomorphism $h \co X \to X$ such that \begin{enumerate}[\rm (1)] \item outside of the $\varepsilon$-nbhd of $D$ the homeomorphism $h$ is the same as $f$, \item $\mathrm {diam} \thinspace h(D) < \varepsilon$ and \item for every $D' \in \mathcal D$ we have $\mathrm {diam} \thinspace h(D') < \varepsilon + \mathrm {diam} \thinspace f ( D')$. \end{enumerate} Then $\mathcal D$ is strongly shrinkable. \end{prop} \begin{proof}[Sketch of the proof] Let $\varepsilon > 0$ and let $\mathcal U$ be a $\mathcal D$-saturated open cover of $X$. We enumerate the non-degenerate elements of $\mathcal D$ which have diameter at least $\varepsilon/2$ as $D_1, D_2, \ldots$. We can find $\mathcal D$-saturated open sets $U_1, U_2, \ldots$ such that for all $n$ we have $D_n \subset U_n$ and all sets $U_n$ are pairwise disjoint or coincide. These $U_n$ are subsets of sets in $\mathcal U$ and they will ensure $\mathcal U$-closeness. We produce a sequence ${\mathrm {id}}=h_0, h_1, \ldots$ of self-homeomorphisms of $X$ and a sequence $C_1, C_2, \ldots$ of $\mathcal D$-saturated closed nbhds of $D_1, D_2, \ldots$, respectively, such that a couple of conditions are satisfied for every $n \geq 1$: \begin{enumerate}[(a)] \item $h_n |_{X - U_n} = h_{n-1}|_{X - U_n}$, \item ${\mathrm {diam}}\thinspace h_n ( D_n) < \varepsilon$, \item for every $D \in \mathcal D$ we have ${\mathrm {diam}}\thinspace h_n ( D) < (1 - \frac{1}{2^n})\frac{\varepsilon}{2} + {\mathrm {diam}} \thinspace D$, \item $h_{n+1}|_{C_1 \cup \cdots \cup C_n} = h_{n}|_{C_1 \cup \cdots \cup C_n}$, \item if some $D \in \mathcal D$ is in $C_n$, then ${\mathrm {diam}}\thinspace h_n ( D) < \varepsilon$ and \item $h_n = h_{n-1}$ if ${\mathrm {diam}}\thinspace h_{n-1} ( D_n) < \varepsilon$. \end{enumerate} The sets $C_n$ serve as protective buffers in which no further motion will occur. For $n=1$ by the conditions (1), (2) and (3) in the statement of Proposition~\ref{shrinking_lemma} with the choice $f={\mathrm {id}}$ we can find a homeomorphism $h_1 \co X \to X$ satisfying (a), (b) and (c) and also an appropriate $C_1$ such that (d) and (e) are satisfied as well. If $h_k$ and $C_k$ are defined already for $1 \leq k \leq n$, then we find $h_{n+1}$ and $C_{n+1}$ as follows. If ${\mathrm {diam}}\thinspace h_{n} ( D_{n+1}) < \varepsilon$, then let $h_{n+1} = h_n$. If the diameter of $h_{n} ( D_{n+1} )$ is at least $\varepsilon$, then by the conditions (1), (2) and (3) with the choice $f=h_n$ we can find a homeomorphism $h_{n+1} \co X \to X$ satisfying \begin{enumerate}[(i)] \item $h_{n+1}|_{X - U_{n+1}} = h_{n}|_{X - U_{n+1}}$ \item ${\mathrm {diam}}\thinspace h_{n+1} ( D_{n+1}) < \varepsilon/ 2^{n+2}$, \item for every $D \in \mathcal D$ we have ${\mathrm {diam}}\thinspace h_{n+1} ( D) < \varepsilon/{2^{n+2}} + {\mathrm {diam}} \thinspace h_n(D)$ \end{enumerate} furthermore (iii) and (c) imply that for every $D \in \mathcal D$ we have $$ {\mathrm {diam}}\thinspace h_{n+1} ( D) <\varepsilon/{2^{n+2}} + \left(1 - \frac{1}{2^n}\right)\frac{\varepsilon}{2} + {\mathrm {diam}} \thinspace D = \left(1 - \frac{1}{2^{n+1}}\right)\frac{\varepsilon}{2} + {\mathrm {diam}} \thinspace D $$ so (a), (b) and (c) are satisfied. It is not too difficult to get (d) and (e) with some $C_{n+1}$ as well. 
After having all $h_1, h_2, \ldots$ and $C_1, C_2, \ldots$ with properties (a)-(f), it is easy to see by (d), (e) and (f) that every $D \in \mathcal D$ which is in $C_1 \cup \cdots \cup C_n$ is shrunk by $h_n$ to size smaller than $\varepsilon$, and no later $h_{n+i}$ modifies this. If some $D \in \mathcal D$ had diameter smaller than $\varepsilon/2$ originally, then (c) implies that its diameter is smaller than $\varepsilon$ during the whole process. These facts imply that the sequence $h_1, h_2, \ldots$ is locally stationary and converges to a shrinking homeomorphism $h$. \end{proof} We are going to give a sketch of the proof of Theorem~\ref{shrinking_thm}. For the detailed proof of (1) see \cite{Mo25}, for the proofs of (2) and (3) see \cite{Bi57}, for the proof of (4) see \cite{DS83}, for (5) see \cite{Be67}, for (6) see \cite{Wo77} and for (7), (8), (9) and (10) see \cite{Pr66}, \cite{Bi57}, \cite{Ca78} and \cite{Ca79, Ed16}, respectively. \begin{proof}[Sketch of the proof of Theorem~\ref{shrinking_thm}] (1) follows from the fact that in a $2$-dimensional manifold $X$ a cell-like decomposition is thin. The reason for this is that an arbitrarily small $2$-dimensional disk nbhd $B$ with the property $\partial B \cap (\cup \mathcal H_{\mathcal D}) = \emptyset$ can be obtained by finding the circle $\partial B$ in $X$ as a limit of a sequence of maps $f_n \co S^1 \to X$ avoiding smaller and smaller decomposition elements. A thin usc decomposition of a $2$-dimensional manifold is shrinkable if the points of $\pi(\bigcup \mathcal H_{\mathcal D})$ do not converge to each other in too complicated a way. Since the quotient space $X_{\mathcal D}$ can be filtered in a way which implies this, the decomposition map $\pi$ can be successively approximated by maps which are homeomorphisms on the induced filtration in $X$. (2)-(5) follow from Proposition~\ref{shrinking_lemma}: the flat arcs, starlike sets and starlike-equivalent sets can be shrunk successively for geometric reasons. To prove (6) and (7) we also use Proposition~\ref{shrinking_lemma}. Let $D \in \mathcal D$ be a non-degenerate decomposition element, $U$ a nbhd of $D$ and let $B$ be a ball such that $D \subset B \subset U$ and $\partial B$ is disjoint from the non-degenerate elements of $\mathcal D$. After applying a self-homeomorphism of $X$, we can suppose that $B$ is the unit ball. Let $k$ be a large enough integer and let $1 > \delta_0 > \delta_1 > \cdots > \delta_{k-1} > 0$ be such that if $D' \in \mathcal D$ intersects the $\delta_{n+1}$-nbhd of $\partial B$, then $D'$ is inside the $\delta_{n}$-nbhd of $\partial B$. Define a homeomorphism $f \co B \to B$ which is the identity on $\partial B$, keeps the center of $B$ fixed and, on each radius, maps the point at distance $\delta_n$ from $\partial B$, where $1 \leq n \leq k-1$, to the point at distance $n/k$ from the center. We require that the homeomorphism $f$ is linear between these points (a numerical sketch of this radial profile is given after the proof). After applying this homeomorphism, every $D' \in \mathcal D$ in $B$ is shrunk to sufficiently small size. In the proof of (8) we enumerate the non-degenerate decomposition elements and we construct a sequence of homeomorphisms of the ambient space which shrink the decomposition elements successively using the locally shrinkable property.
To prove (9), for a given $\varepsilon > 0$ we cover the manifold by two collections $\{ B_{\alpha} \}_{\alpha \in A}$ and $\{ B_{\alpha}' \}_{\alpha \in A}$ of $n$-dimensional balls such that $B_{\alpha} \subset \mathrm {int} \thinspace B_{\alpha}'$ and $\mathrm {diam} \thinspace B_{\alpha}' < \varepsilon$. Then the closed sets $\pi (\partial B_{\alpha})$ and $\pi ( \partial B_{\alpha}')$ are made disjoint by applying homeomorphisms $h_{\alpha}$ successively. This implies that the homeomorphism $h$ obtained by composing all the homeomorphisms $h_{\alpha}$ is such that for every $D \in \mathcal D$ the set $h(D)$ is fully contained in some ball $B_{\alpha}'$, so its diameter is smaller than $\varepsilon$. In the proof of (10) we first obtain that every decomposition element $D$ is cellular, for the following reason. By assumption $D$ is cell-like and it behaves like an at most $(n-3)$-dimensional submanifold, so the $2$-skeleton of the ambient manifold is disjoint from $D$. This means that $D$ satisfies the cellularity criterion, since the $2$-skeleton carries the fundamental group. Hence $D$ is cellular, which implies that it is contained in an $n$-dimensional ball and also in a starlike-equivalent set $C$ of embedding dimension $\leq n-2$. Now it is possible to use an argument similar to the proof of Proposition~\ref{shrinking_lemma}: we can shrink $C$ to become smaller than a given $\varepsilon > 0$ by successively compressing $C$, in each iteration carefully controlling and avoiding other decomposition elements close to $C$ which would otherwise become too large during the compression procedure. \end{proof}
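To make the radial compression used in the proof of (6) and (7) explicit, the following is a minimal numerical sketch of its profile on the unit ball. The function name and the sample values of $k$ and the $\delta_n$ are ours; only $\delta_1,\dots,\delta_{k-1}$ enter the profile, the actual homeomorphism is obtained by applying this profile to $|x|$ along each radius, and the proof only needs it to be a monotone piecewise-linear bijection of $[0,1]$.
\begin{verbatim}
import numpy as np

def radial_profile(r, deltas, k):
    """Radial profile of the shrinking homeomorphism f: B -> B on the unit ball.

    r      : array of radii in [0, 1] (distances from the center of B)
    deltas : decreasing sequence delta_1 > ... > delta_{k-1} > 0
    k      : number of subdivisions

    The center r = 0 and the boundary r = 1 are fixed; the point at distance
    delta_n from the boundary (i.e., at radius 1 - delta_n) is sent to radius
    n/k, and the map is linear in between.  Everything inside radius
    1 - delta_1 is thus compressed into the ball of radius 1/k.
    """
    breakpoints = [0.0] + [1.0 - d for d in deltas] + [1.0]
    images = [0.0] + [n / k for n in range(1, k)] + [1.0]
    return np.interp(r, breakpoints, images)

# Example with k = 4 and deltas = (0.6, 0.4, 0.2):
# radius 0.4 = 1 - delta_1 is sent to 1/4, and the boundary stays fixed.
print(radial_profile(np.array([0.0, 0.4, 0.6, 0.8, 1.0]), [0.6, 0.4, 0.2], 4))
\end{verbatim}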
\section{Introduction}\label{int} Gradient Ricci solitons are important in understanding Hamilton's Ricci flow \cite{Hamilton2}. They often arise as singularity models of the Ricci flow, which is why understanding them is an important question in the field. In view of their importance, it is natural to seek classification results for gradient Ricci solitons. Indeed, the classification of gradient Ricci solitons has been a subject of interest for many researchers. We refer the reader to an excellent survey by Cao \cite{caoALM11} and references therein for a nice overview of the subject. A Riemannian metric $g$ on a smooth manifold $M^n$ is called a {\it gradient Ricci soliton} if there exists a smooth potential function $f$ on $M^n$ such that the Ricci tensor ${\rm Ric}$ of the metric $g$ satisfies the following equation \begin{equation} \label{maineq} {\rm Ric}+{\rm Hess}\,f=\lambda g, \end{equation} for some constant $\lambda.$ Here, ${\rm Hess}\,f$ denotes the Hessian of $f.$ Clearly, when $f$ is constant, a gradient Ricci soliton is simply an Einstein manifold. If $\nabla f$ is replaced by a general vector field $V$, then the above equation defines a Ricci soliton. The Ricci soliton is called {\it expanding}, {\it steady} or {\it shrinking} if $\lambda<0,\,\lambda=0$ or $\lambda>0$, respectively. Ricci solitons are also of interest to physicists, who refer to them as quasi-Einstein metrics (Friedan \cite{Friedan}). A classical topic in this subject is to classify gradient Ricci solitons with constant scalar curvature. It is already known that compact Ricci solitons with constant scalar curvature are Einstein. Moreover, it is also easy to check that a steady gradient Ricci soliton with constant scalar curvature must be Ricci-flat. Petersen and Wylie \cite{PW2} showed that a shrinking (respectively, expanding) gradient Ricci soliton with constant scalar curvature satisfies $0\leq R\leq n\lambda$ (respectively, $n\lambda\leq R\leq 0$). Recently, Fern\'andez-L\'opez and Garc\'ia-R\'io \cite{MG} showed that if a gradient Ricci soliton $(M^{n},\,g)$ has constant scalar curvature $R,$ then $R\in\{0,\,\lambda,\,\ldots,\,(n-1)\lambda,\,n\lambda\}.$ They also showed the rigidity of gradient Ricci solitons with constant scalar curvature under the assumption that the Ricci operator has constant rank. Despite this progress, the complete classification of gradient Ricci solitons with constant scalar curvature still remains open. In order to make our approach more understandable, we need to recall some terminology. A smooth vector field $X$ on a Riemannian manifold $(M^n,g)$ is conformal if $\mathcal{L}_X g=2\psi g$ for some smooth function $\psi\in C^\infty(M),$ where $\mathcal{L}_X g$ is the Lie derivative of $g$ with respect to $X.$ In this case, the function $\psi$ is called the conformal factor of $X.$ A particular case of a conformal vector field $X$ is that for which \begin{equation} \label{closedEq} \nabla_{Y} X=\psi Y,\,\,\hbox{for any}\,\,Y\in\mathfrak{X}(M); \end{equation} in this situation we say that $X$ is {\it closed}. Such vector fields are also known as concircular vector fields (Fialkow \cite{Fialkow}) and appeared in the study of conformal mappings preserving geodesic circles; they have interesting applications in general relativity, e.g., trajectories of timelike concircular fields in the de Sitter model determine the world lines of receding or colliding galaxies satisfying the Weyl hypothesis (cf. \cite{Takeno}).
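As a standard example illustrating (\ref{closedEq}) (the example is ours and is not part of the classification below), the position vector field on Euclidean space is closed and conformal:
\begin{equation*}
X=\sum_{i=1}^{n}x^{i}\partial_{i}\quad\Longrightarrow\quad \nabla_{Y} X=Y,\,\,\hbox{for any}\,\,Y\in\mathfrak{X}(\Bbb{R}^{n}),
\end{equation*}
so $X$ satisfies (\ref{closedEq}) with constant conformal factor $\psi\equiv 1$, i.e., $X$ is homothetic.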
Chen \cite{Chen} provided a simple characterization of generalized Robertson-Walker spacetimes in terms of a timelike concircular vector field. More particularly, a closed and conformal vector field $X$ is said to be {\it parallel} if $\psi$ vanishes identically. For more details, we refer the reader to \cite{Amur/Hegde,Caminha,Obata,Obata/Yanno,Tanno/Webber} and \cite{Tashiro}. We recall that the Gaussian Ricci soliton is given by $\Bbb{R}^n$ with the canonical metric and the potential function $f(x) = \frac{\lambda}{2}|x|^2,$ where $\lambda$ is an arbitrary constant. It has constant scalar curvature and carries a closed conformal vector field. In \cite{DRS}, Di\'ogenes, Ribeiro Jr. and Silva Filho were able to show that a complete gradient Ricci soliton carrying a non-parallel closed conformal vector field must either be locally conformally flat for $n=3$ or $n=4,$ or have harmonic Weyl tensor for $n\ge 5.$ But, to our knowledge, there are no classification results for steady and expanding gradient Ricci solitons with harmonic Weyl tensor; the shrinking case was classified by Cao, Wang and Zhang in \cite{CWZ}, by Fern\'{a}ndez-L\'opez and Garc\'ia-R\'io in \cite{FLGR} and by Munteanu and Sesum \cite{MS}. It was shown by Jauregui and Wylie \cite{JW} that a gradient Ricci soliton admitting a non-homothetic conformal vector field $V$ that preserves the gradient $1$-form $df$ (i.e. $\nabla_{V} f$ is constant) is Einstein. Recently, Sharma \cite{Sharma} showed that a gradient Ricci soliton with constant scalar curvature admitting a non-homothetic conformal vector field leaving the potential vector field invariant is Einstein. Motivated by these historical developments, \textit{we provide a classification result for gradient Ricci solitons with constant scalar curvature and carrying a non-parallel closed conformal vector field}. More precisely, we establish the following result. \begin{theorem}\label{thmA} Let $\big(M^n,\,g,\, f,\,\lambda\big),$ $n\geq3,$ be a complete gradient Ricci soliton with constant scalar curvature and carrying a non-parallel closed conformal vector field. Then, $(M^n,\,g)$ is isometric to either \begin{enumerate} \item the Euclidean space $\Bbb{R}^{n}$, or \item a round sphere $\Bbb{S}^n$, or \item a negatively Einstein warped product of the real line with a complete non-positively Einstein manifold. \end{enumerate} In dimension $4$, class (3) is a quotient of the hyperbolic space $\Bbb{H}^4$. \end{theorem} We also obtain the following corollaries. \begin{corollary} If a complete gradient Ricci soliton $\big(M^n,\,g,\, f, \lambda\big),$ $n\geq3,$ with constant scalar curvature admits a non-homothetic closed conformal vector field of bounded norm, then it is isometric to a round sphere. \end{corollary} \begin{corollary} A complete gradient shrinking Ricci soliton $\big(M^n,\,g,\, f, \lambda\big),$ $n\geq3,$ with constant scalar curvature and admitting a non-parallel closed conformal vector field is isometric to either a Euclidean space or a round sphere. \end{corollary} A key ingredient in establishing the proof of Theorem \ref{thmA} is a collection of identities pointed out by Ros and Urbano in \cite[Lemma 1]{RosUrbano}; see also \cite{Castro/Montealegre/Urbano}. It is worthwhile to remark that Theorem \ref{thmA} does not require any sign for the Ricci soliton constant $\lambda.$ At the same time, it does not require any relation between the gradient of the potential function and the assumed closed conformal vector field.
Thus, our assumption is clearly weaker than the previous ones used in \cite{DRS,JW} and \cite{Sharma}. Besides, when $M^n$ is isometric to the Euclidean space $\Bbb{R}^{n}$ in Theorem \ref{thmA}, it follows from Theorem 1.3 of Pigola, Rigoli, Rimoldi and Setti \cite{PRRS} that $f(x)=\frac{\lambda}{2}|x|^{2}+\langle b,\,x\rangle+c,$ for some $b\in\Bbb{R}^n$ and $c\in \Bbb{R}.$ \begin{remark} The Bryant soliton admits a closed conformal vector field, but its scalar curvature is not constant. Hence, the existence of a closed conformal vector field on a gradient Ricci soliton does not imply the constancy of the scalar curvature. \end{remark} \section{Background} In this section we review some basic facts that will be useful in the proof of the main result. We begin by recalling some important features of gradient Ricci solitons (cf. Hamilton \cite{Hamilton2}). \begin{lemma} \label{lem1} Let $\big(M^n,\,g,\,f,\,\lambda\big)$ be a gradient Ricci soliton. Then we have: \begin{enumerate} \item $R+\Delta f=n\lambda.$ \item $\frac{1}{2}\nabla R={\rm Ric}(\nabla f).$ \item $\Delta R=\langle \nabla R,\nabla f\rangle +2\lambda R-2|Ric|^{2}.$ \item $R+|\nabla f|^{2}=2\lambda f$ (after a possible rescaling). \end{enumerate} \end{lemma} It follows from Lemma \ref{lem1} (1) and the Hopf maximum principle that a compact Ricci soliton with constant scalar curvature is necessarily Einstein. Moreover, constant scalar curvature is a very restrictive condition for steady gradient Ricci solitons. Indeed, by Lemma \ref{lem1} (3), steady gradient Ricci solitons with constant scalar curvature are Ricci-flat. The same conclusion holds for gradient Ricci solitons with zero scalar curvature. According to Petersen and Wylie \cite{PW1}, a gradient Ricci soliton is said to be {\it rigid} if it is isometric to a quotient of $N\times \Bbb{R}^k,$ where $N$ is an Einstein manifold and $f=\frac{\lambda}{2}|x|^2$ on the Euclidean factor. Any rigid gradient Ricci soliton has constant scalar curvature. A special family of manifolds with constant scalar curvature is given by the homogeneous ones. In this context, it is known that any homogeneous gradient Ricci soliton is rigid (see \cite{PW1}). However, as previously mentioned, the complete classification of gradient Ricci solitons with constant scalar curvature still remains open. The next lemma summarizes some useful well-known results on the theory of closed conformal vector fields (cf. Ros and Urbano \cite[Lemma 1]{RosUrbano}, see also Castro, Montealegre and Urbano \cite{Castro/Montealegre/Urbano}). \begin{lemma} \label{lemConf} Let $(M^n,\,g)$ be a Riemannian manifold and $X$ a closed conformal vector field. Then the following assertions hold: \begin{itemize} \item[(1)] The set $\mathcal{Z}(X)$ of zeros of $X$ is a discrete set. \item[(2)] $\nabla|X|^2=2\psi X.$ \item[(3)] $\nabla^2|X|^2=2\psi^2g+2d\psi\otimes X^\flat.$ \item[(4)] $|X|^{2}\nabla \psi=X(\psi) X.$ \item[(5)] The Ricci tensor satisfies ${\rm Ric}(X)=-(n-1)\nabla \psi .$ \end{itemize} \end{lemma} In \cite{joao}, Silva showed that every Ricci soliton endowed with a non-parallel homothetic closed conformal vector field is a gradient Ricci soliton and has zero scalar curvature. Hence, by using Lemma \ref{lem1} (3) we obtain the following result. \begin{proposition} \label{propA} Every Ricci soliton (not necessarily gradient) carrying a non-parallel homothetic closed conformal vector field is Ricci-flat. \end{proposition} \section{Proof of Theorem \ref{thmA}} Throughout this section we will present the proof of Theorem \ref{thmA}.
To begin with, we are going to obtain the following characterization of gradient Ricci solitons carrying a non-parallel closed conformal vector field; see also \cite{Silva1}. \begin{lemma}\label{lemA} Let $(M^n, g, \nabla f, \lambda)$ be a gradient Ricci soliton admitting a non-parallel closed conformal vector field $X.$ Then one of the following assertions holds: \begin{enumerate} \item The gradient of $f$ satisfies $|X|^2\nabla f = X(f)X.$ \item $(M^n, g)$ is isometric to the Euclidean space $\Bbb{R}^n$. \end{enumerate} \end{lemma} \begin{proof} First, we claim that $$T = |X|^2d\psi \otimes df$$ is a symmetric $2$-tensor. Indeed, for an arbitrary vector field $Y$ on $M^n$ we have \begin{eqnarray*} Y(X(f)) &=& \langle \nabla_Y \nabla f, X \rangle + \langle \nabla f, \nabla_Y X\rangle \\ &=& Hess\,f(X, Y) + \psi\langle \nabla f, Y\rangle\\ &=& \lambda \langle X, Y\rangle + (n-1)\langle \nabla \psi, Y\rangle + \psi\langle \nabla f, Y\rangle, \end{eqnarray*} where we used the fundamental equation (\ref{maineq}) jointly with Lemma \ref{lemConf} (5). The last equation shows that \begin{eqnarray*} \nabla X(f) = \lambda X + (n-1) \nabla \psi + \psi\nabla f. \end{eqnarray*} Differentiating it gives \begin{eqnarray*} \hspace{0.5cm} Hess\,X(f) = \lambda\psi g + (n-1)Hess\,\psi + \psi Hess\, f + d\psi\otimes df, \end{eqnarray*} which immediately shows that $T$ is a symmetric $2$-tensor. Using the fourth item of Lemma \ref{lemConf}, one can re-write $T$ as \begin{eqnarray*} T = X(\psi)X^{\flat} \otimes df. \end{eqnarray*} Moreover, taking into account that $T$ is symmetric, we infer \begin{eqnarray*} X(\psi)\Big(|X|^2\langle \nabla f, Y\rangle - X(f)\langle X, Y \rangle\Big) = 0, \end{eqnarray*} which reduces to \begin{eqnarray*} X(\psi)\Big(|X|^2\nabla f - X(f) X\Big) = 0. \end{eqnarray*} Next, recalling that every gradient Ricci soliton has an analytic metric (Ivey \cite[Theorem 1]{Ivey}), we conclude that either \begin{eqnarray}\label{P1.1} X(\psi) = 0 \end{eqnarray} or \begin{eqnarray}\label{P1.2} |X|^2\nabla f = X(f)X, \end{eqnarray} where (\ref{P1.2}) is exactly the first assertion. Now we must pay attention to (\ref{P1.1}). Indeed, in conjunction with Lemma \ref{lemConf} (4), it implies \begin{eqnarray*} |X|^2\nabla\psi = 0, \end{eqnarray*} and consequently \begin{eqnarray*} \nabla\psi = 0 \end{eqnarray*} at all points where $X$ does not vanish. Thus, from Lemma \ref{lemConf} (1) we conclude, by continuity, that the last identity holds everywhere and therefore $\psi$ is constant. Finally, taking into account that $X$ is non-parallel, we define the function $h: M^n \rightarrow \Bbb{R}$ by \begin{eqnarray*} h = \frac{1}{2\psi}|X|^2. \end{eqnarray*} A straightforward computation using this function, in conjunction with Lemma \ref{lemConf} (3), shows that \begin{eqnarray*} Hess\, h = \psi g. \end{eqnarray*} Hence, we invoke Theorem 2 of \cite{Tashiro} to conclude that $(M^n, g)$ is isometric to Euclidean space, completing the proof of the lemma. \end{proof} \vspace{0.4cm} Now we are ready to conclude the proof of Theorem \ref{thmA}. First, we suppose that $M^n$ is a gradient Ricci soliton carrying a non-parallel closed conformal vector field $X$. Then, it follows by Lemma \ref{lemA} that \begin{equation} \label{eq1p} |X|^{2}\nabla f=X(f)X, \end{equation} or $(M^n,\,g)$ is isometric to the Euclidean space $\Bbb{R}^n$.
Hereafter, since $M^n$ has constant scalar curvature, we use Lemma \ref{lem1} (2) to infer $$Ric(|X|^{2}\nabla f)=|X|^{2}Ric(\nabla f)=\frac{1}{2}|X|^{2}\nabla R=0.$$ Using this consequence in (\ref{eq1p}) provides $X(f)Ric(X)=0$. Consequently, $$|X|^{2}X(f)Ric(X)=0.$$ Now, using Lemma \ref{lemConf} (5) we obtain $|X|^{2}X(f)\nabla \psi=0$, and then by Lemma \ref{lemConf} (4) it follows that $X(f)X(\psi)X=0.$ So, it follows from (\ref{eq1p}) that $$X(\psi)|X|^{2}\nabla f=0.$$ This implies that $X(\psi)\nabla f=0$ in $M\setminus\mathcal{Z}(X),$ where $\mathcal{Z}(X)$ is the set of zeros of $X.$ Taking into account that $\mathcal{Z}(X)$ is a discrete set, we conclude \begin{equation} \label{eqAq} X(\psi)\nabla f=0\,\,\,\hbox{on}\,\,\,M^n. \end{equation} Indeed, (\ref{eqAq}) is trivially satisfied at the zeros of $X.$ Proceeding further, since any gradient Ricci soliton is analytic in harmonic coordinates, $\nabla f$ cannot vanish on any nonempty open subset of $M^n$ unless it vanishes identically, in which case $f$ is constant and the soliton is Einstein. Setting this Einstein case aside for the moment, we use (\ref{eqAq}) to conclude that $X(\psi)=0$. Using Lemma \ref{lemConf} (4) again, we deduce that $\nabla \psi=0,$ which forces $\psi$ to be constant. Now, we define the function $\varphi: M^n \rightarrow \Bbb{R}$ by \begin{eqnarray*} \varphi = \frac{1}{2\psi}|X|^2. \end{eqnarray*} Computing its Hessian and using Lemma \ref{lemConf} (3), we obtain \begin{eqnarray*} Hess\, \varphi = \psi g. \end{eqnarray*} Thus, it suffices to apply Theorem 2 of \cite{Tashiro} to conclude that $(M^n,\,g)$ is isometric to the Euclidean space $\Bbb{R}^n.$ Finally, we turn our attention to the case when $M^n$ is Einstein and $f$ is constant. By Theorem G of Kanai \cite{Kanai}, we know for the positively Einstein case that $M^n$ is isometric to a Euclidean sphere, and for the Ricci-flat case that it is isometric to the Euclidean space $\Bbb{R}^n$. For the negatively Einstein case, i.e. $R < 0,$ let us define $k$ by $R=n(n-1)k,$ so that $k < 0$. By virtue of part (ii) of Theorem G of \cite{Kanai}, we find that $(M^n,g)$ is isometric to the warped product $(\bar{M},\bar{g})_{\xi}\times (\Bbb{R},g_0)$ of a complete Einstein manifold $(\bar{M},\bar{g})$ of constant scalar curvature $4(n-1)(n-2)kc_1 c_2$ and the real line $(\Bbb{R},g_0)$, warped by $\xi(t)=c_1 e^{\sqrt{-k}t}+c_2 e^{-\sqrt{-k}t},$ where $c_1$ and $c_2$ are non-negative constants. In particular, in dimension $4$ the fiber $\bar{M}$ is $3$-dimensional, and hence has zero Weyl tensor. By the Gauss and Codazzi equations, it follows by a straightforward calculation that $M$ is conformally flat, and as it is negatively Einstein, it has constant negative curvature. This completes the proof. \subsection{Proof of Corollary 1} \begin{proof} As $X$ is non-homothetic, case (1) of Theorem \ref{thmA} is ruled out. So it suffices to rule out case (3). Let us decompose the closed conformal vector field $X$ orthogonally as $\bar{X}+\alpha T,$ where $\bar{X}$ is the component along $\bar{M}$ and $T$ is the lift of $\frac{d}{dt}$ on $M^n$. The component $\nabla_{T}X=\psi T$ of the closed conformal equation, the warped product formulas $\nabla_T \bar{X}=\frac{\dot \xi}{\xi}\bar{X}$ and $\nabla_T T = 0$, and comparison of components tangent to $\bar{M}$ and $\Bbb{R}$ lead to $\dot{\alpha}=\psi$ and $\dot \xi \bar{X}= 0$. So, either $\bar{X}=0$ or $\dot{\xi}=0$ on an open sub-interval of $\Bbb{R}$, the latter being impossible because $\xi(t)=c_1 e^{\sqrt{-k}t}+c_2 e^{-\sqrt{-k}t}$. Hence $\bar{X}=0$ and thus $X = \alpha T$.
Now, using $\nabla_{\bar{Y}}X = \psi \bar{Y}$, where $\bar{Y}$ is an arbitrary vector field tangent to $\bar{M}$, and comparing components along $\bar{M}$ and $\Bbb{R}$ shows that $\alpha$ is a function of $t$ alone, and $\psi = \alpha \frac{\dot{\xi}}{\xi}$. Consequently, we find $\dot{\alpha}=\alpha \frac{\dot{\xi}}{\xi},$ which easily integrates as $\alpha = \xi$ (the constant factor due to integration can be absorbed by the constants in $\xi$). Therefore, we obtain $X = \xi T$, and so the norm of $X$ is $\xi$, which is unbounded; this contradicts our assumption that $X$ has bounded norm. This completes the proof. \end{proof} \subsection{Proof of Corollary 2} \begin{proof} If $\psi$ is a non-zero constant, then, as shown in the proof of Theorem \ref{thmA}, $M^n$ is isometric to a Euclidean space. If $\psi$ is non-constant, then, as shown in the proof of Theorem \ref{thmA}, $f$ is constant and $g$ is Einstein. The constancy of $R$ and Lemma \ref{lem1} (3) imply that $|Ric|^{2}=\lambda R$. Thus, the hypothesis $\lambda > 0$ implies that $R > 0$, because $R = 0$ would imply Ricci flatness and $\lambda =0$, contradicting our hypothesis. As $g$ is Einstein, Lemma \ref{lemConf} (5) provides $X = -\frac{n(n-1)}{R}\nabla \psi$, i.e. $X$ is gradient. Differentiating the foregoing equation, we obtain $Hess \ \psi = -\frac{R}{n(n-1)}\psi\, g$. This implies, by Obata's theorem \cite{Obata}, that $(M^n, g)$ is isometric to the round sphere $\Bbb{S}^n$, completing the proof. \end{proof} \section{K\"ahler Gradient Ricci Soliton} When the underlying manifold is a complex manifold, we have the corresponding notion of a K\"ahler-Ricci soliton. Recall that a complex $n$-dimensional (real $2n$-dimensional) Riemannian manifold $(M,\,g)$ with a complex structure $J: TM \rightarrow TM$ satisfying $J^2 = -I$ is a K\"ahler manifold provided the metric $g$ is $J$-invariant (Hermitian), i.e. $g(JY,JZ) = g(Y,Z)$ for arbitrary real vector fields $Y,Z$, and $J$ is parallel with respect to the Riemannian connection $\nabla$ of $g$, i.e. $\nabla J = 0$. The metric $g$ is called the K\"ahler metric and its Ricci tensor is $J$-invariant, i.e. $Ric \circ J = J \circ Ric$. Following Chow et al. (p. 97, \cite{Chow}), we state the following definition. \begin{definition}A K\"ahler-Ricci soliton is a K\"ahler manifold $(M,g,J)$ such that the soliton structure equation \begin{equation}\label{T2.1} \mathcal{L}_V g +2Ric=2\lambda g \end{equation} holds for a real constant $\lambda$ and a vector field $V$ that is an infinitesimal automorphism of the complex structure $J$, i.e. \begin{equation}\label{T2.2} \mathcal{L}_V J = 0. \end{equation} \end{definition} One may note here that the condition (\ref{T2.2}) is automatically satisfied for a gradient K\"ahler-Ricci soliton (for which $V = \nabla f$ for a smooth function $f$ on $M$). Now we state our next result as follows. \begin{proposition} If $(M^{2n},\, g,\, \nabla f,\, \lambda),$ $2n\geq 4,$ is a gradient K\"ahler-Ricci soliton with a non-parallel closed conformal real vector field $X$, then (i) $X$ is homothetic and an infinitesimal automorphism of the complex structure, and (ii) $M^{2n}$ is Ricci-flat, and flat when $2n=4$. \end{proposition} Its proof uses the following known result (Goldberg \cite[p. 265]{Goldberg}). \begin{lemma} A closed conformal vector field on a K\"ahler manifold of real dimension $2n \ge 4$ is homothetic and an infinitesimal automorphism of the complex structure.
\end{lemma} \textbf{Proof of the Proposition.} As $X$ is homothetic, we invoke Proposition \ref{propA} to conclude that $Ric = 0.$ Hence, it follows from (\ref{closedEq}) that the curvature tensor annihilates $X,$ and therefore the Weyl conformal tensor $W$ annihilates $X.$ So, if the real dimension of $M$ is $4,$ then a combinatorial computation using the symmetries and the complete tracelessness of $W$ implies that $W = 0$ at all points other than the zeros of $X.$ But the zeros of $X$ are discrete, and hence using the continuity of $W$ we conclude that $W = 0$ on $M$. Since $Ric = 0$ as well, $M$ is flat. This completes the proof. \begin{remark}A Ricci-flat K\"ahler manifold (for which the first Chern class vanishes) is known as a Calabi-Yau manifold, which has applications in superstring theory based on a $10$-dimensional manifold that is the product of the $4$-dimensional space-time and a $6$-dimensional Calabi-Yau manifold (see Candelas et al. \cite{Candelas}). \end{remark} \section{A constant scalar curvature condition for Ricci solitons} Let $(M,g,V,\lambda)$ be a Ricci soliton (not necessarily gradient). Then \begin{equation}\label{5.1} (\mathcal{L}_V g)(Y,Z)+2Ric(Y,Z)=2\lambda g(Y,Z) \end{equation} for arbitrary $Y,Z \in \mathfrak{X}(M)$. Decomposing $g(\nabla_Y V,Z)$ into symmetric and skew-symmetric parts, and using (\ref{5.1}), we have \begin{equation}\label{5.2} g(\nabla_Y V,Z)=-Ric(Y,Z)+\lambda g(Y,Z) +(dv)(Y,Z) \end{equation} where $dv$ is the exterior derivative of the 1-form $v$ metrically equivalent to $V$, i.e. $v(Y)=g(V,Y)$. Let us set $(dv)(Y,Z)=g(FY,Z)$. Then $F$ is a skew-symmetric (1,1)-tensor field, i.e. $g(FY,Z)=-g(Y,FZ)$. Factoring $Z$ out of (\ref{5.2}) gives \begin{equation}\label{5.3} \nabla_Y V=-Ric(Y)+\lambda Y +FY. \end{equation} Tracing it provides \begin{equation}\label{5.4} \delta v = -div(V)=R-n\lambda, \end{equation} where $\delta$ is the co-differential operator. We establish the following result that provides a condition for a Ricci soliton to have constant scalar curvature. \begin{theorem} Let $(M,g,V,\lambda)$ be a Ricci soliton. If the 1-form $v$ metrically equivalent to $V$ is harmonic, then $M$ has constant scalar curvature, and the Ricci operator annihilates $V$. \end{theorem} \begin{proof}As $v$ is harmonic, \begin{equation}\label{5.5} \Delta v=d\delta v +\delta dv = 0, \end{equation} where $\Delta$ is the Hodge Laplacian operator. Also, $(dv)(Y,Z)=g(FY,Z)$ shows that $\delta dv=div(F)$. This, in conjunction with equation (\ref{5.4}), shows that equation (\ref{5.5}) assumes the form \begin{equation}\label{5.6} dR+div(F)=0. \end{equation} A straightforward computation using (\ref{5.3}) leads to \begin{equation}\label{5.7} Ric(V)=\frac{1}{2}\nabla R+(div(F))^{*} \end{equation} where $(div(F))^{*}$ denotes the vector field metrically equivalent to the 1-form $div(F)$. At this point, we recall the following result (Stepanov and Shelepova \cite{S-S}): if $(M,g,V,\lambda)$ is a Ricci soliton, then $\square V=0$, where $\square$ is the Yano operator transforming a vector field $V$ into the vector field with components $(\square V)^i =-(\nabla^j \nabla_j V^i+R^i _j V^j)$. By our hypothesis, $\Delta v=0$, which is equivalent to (Yano \cite{Yano}) \begin{equation*} -(\nabla^j \nabla_j V^i-R^i _j V^j)=0, \end{equation*} and hence, in conjunction with the aforementioned result, yields $Ric(V)=0$. The use of this consequence in equations (\ref{5.6}) and (\ref{5.7}) immediately shows that $R$ is constant, completing the proof.
\end{proof} \begin{remark} For a gradient Ricci soliton ($v =df$), we note that the constant scalar curvature condition is easily seen to be equivalent to $\Delta v=0$, and also to $Ric(V)=0$, in view of Lemma \ref{lem1} (2). \end{remark} The authors have no conflicts of interest to declare that are relevant to the content of this article.
\section{Introduction} \label{sec:intro} Relation extraction (RE) extracts semantic relations between entities from plain text. For instance, ``\textbf{Jon Robin Baitz}$_{head}$, born in \textbf{Los Angeles}$_{tail}$ ...'' expresses the relation \emph{/people/person/place\_of\_birth} between the two head-tail entities. Extracted relations are then used for several downstream tasks such as information retrieval~\cite{corcoglioniti2016knowledge} and knowledge base construction~\cite{al2018extracting}. RE has been widely studied using fully supervised learning~\cite{nguyen-grishman-2015-relation,miwa-bansal-2016-end,zhang-etal-2017-position,zhang-etal-2018-graph} and distantly supervised approaches~\cite{mintz-etal-2009-distant,riedel2010modeling,lin-etal-2016-neural}. Unsupervised relation extraction (URE) methods have not been explored as much as fully or distantly supervised learning techniques. URE is promising, since it requires neither manually annotated data nor human-curated knowledge bases (KBs), which are expensive to produce. Therefore, it can be applied to domains and languages where annotated data and KBs are not available. Moreover, URE can discover new relation types, since it is not restricted to specific relation types in the same way as fully and distantly supervised methods. One might argue that Open Information Extraction (OpenIE) can also discover new relations. However, OpenIE identifies relations based on textual surface information. Thus, similar relations with different textual forms may not be recognised. Unlike OpenIE, URE groups similar relations into clusters. Despite these advantages, there are only a few attempts tackling URE using machine learning (ML) \cite{hasegawa-etal-2004-discovering, banko2007open, yao-etal-2011-structured,marcheggiani-titov-2016-discrete, simon-etal-2019-unsupervised}. Similarly to other unsupervised learning tasks, a challenge in URE is how to evaluate results. Recent approaches~\cite{yao-etal-2011-structured,marcheggiani-titov-2016-discrete,simon-etal-2019-unsupervised} employ a widely used data generation setting in distantly supervised RE, i.e., aligning a large amount of raw text against triplets in a curated KB. A standard metric score is computed by comparing the output relation clusters against the automatically annotated relations. In particular, the \mbox{NYT-FB} dataset~\cite{marcheggiani-titov-2016-discrete}, which is used for evaluation, has been created by mapping relation triplets in Freebase~\cite{bollacker2008freebase} against plain text articles in the New York Times (NYT) corpus~\cite{sandhaus2008new}. Standard clustering evaluation metrics for URE include B$^3$~\cite{bagga1998algorithms}, V-measure~\cite{rosenberg-hirschberg-2007-v}, and ARI~\cite{hubert1985comparing}. Although the above-mentioned experimental setting can be created automatically, there are three challenges to overcome. Firstly, the development and test sets are silver, i.e., they include noisily labelled instances, since they are not human-curated. Secondly, the development and test sentences are part of the training set, i.e., a transductive setting. It is thus unclear how well the existing models perform on unseen sentences. Finally, \mbox{NYT-FB} can be considered highly imbalanced, since only 2.1\% of the training sentences can be aligned with Freebase's triplets. Due to the noisy nature of silver data (\mbox{NYT-FB}), evaluation on silver data will not accurately reflect the system performance.
We also need unseen data during testing to examine the system generalisation. To overcome these challenges, we will employ the test set of \mbox{TACRED}~\cite{zhang-etal-2017-position}, a widely used manually annotated corpus. Regarding the imbalanced data, we will demonstrate that in fact around 60\% (instead of 2.1\%) of instances in the training set express relation types defined in Freebase. In this work, we present a simple URE approach relying only on entity types that obtains improved performance compared to current methods. Specifically, given a sentence consisting of two entities and their corresponding entity types, e.g., PERSON and LOCATION, we induce relations as the combination of entity types, e.g., PERSON-LOCATION. It should be noted that we employ only entity types because their combinations form reasonably coarse relation types (e.g., PERSON-LOCATION covers \emph{/people/person/place\_of\_birth} defined in Freebase). We further discuss our improved performance in \cref{sec:discussion}. Our contributions are as follows: (i) We perform experiments on both automatically/manually-labelled datasets, namely \mbox{NYT-FB} and TACRED, respectively. We show that two methods using only entity types can outperform the state-of-the-art models, including both feature-engineering and deep learning approaches. The surprising results raise questions about the current state of unsupervised relation extraction. (ii) For model design, we show that the link predictor provides a good signal for training a URE model (Fig.~1). We also illustrate that entity types are a strong inductive bias for URE (\cref{tab:results}). \section{Methods for URE} The goal of URE is to predict the relation $r$ between two entities $e_{head}$ and $e_{tail}$ in a sentence $s$. We will describe three recent ML-based methods tackling URE and our own methods. We divide the ML-based methods into two main approaches: generative and discriminative. \subsection{Generative Approach} \citet{yao-etal-2011-structured} extended topic modelling -- Latent Dirichlet Allocation (LDA)~\cite{blei2003latent} -- for RE, developing two models, herewith \textbf{RelLDA} and \textbf{RelLDA1}. In both models, a sentence and an entity pair play the role of a document in topic modelling, while a relation type corresponds to a topic. RelLDA uses three features, i.e., the shortest dependency path between two entities and the two entity mentions. RelLDA1 extends RelLDA with five more features, i.e., the entity types, words and part-of-speech tags between the two entities. \subsection{Discriminative Approaches} \citet{marcheggiani-titov-2016-discrete} proposed a discrete-state variational autoencoder (VAE) to tackle URE (herewith \textbf{March}). Their model consists of two components: a \emph{relation classifier} and a \emph{link predictor}. The \emph{relation classifier}, which is discriminative, takes entity types and several linguistic features (e.g., dependencies) as input to predict the relation $r$. The \emph{link predictor} then uses the (soft) predicted relation $r$ to predict the missing entity $e_i$ in a specific position $\{\text{head}, \text{tail}\}$, given the other entity $e_{-i}$, where if $i=\text{head}$ then $-i=\text{tail}$ and vice versa. In other words, entity prediction, in a self-supervised manner, provides training signals to learn the relation classifier. However, by using only entity prediction, only a few relation types are chosen. They thus used \emph{entropy} over all relations as a regulariser.
The maximisation of the \emph{entropy} regulariser encourages a uniform relation distribution and allows more relations to be predicted. Another discriminative method is that of \citet{simon-etal-2019-unsupervised} (herewith \textbf{Simon}), which differs from March in the following ways: a) firstly, its relation classifier employs a piece-wise convolutional network (PCNN) using only the surface form, without requiring hand-crafted features; b) secondly, they replaced \emph{entropy} with two regularisers: $L_s$ (\emph{skewness}), to encourage the relation classifier to be confident in its prediction, and $L_d$ (\emph{dispersion}), to ensure several relation types are predicted over a minibatch. Note that $L_s$ is equivalent to the negation of the \emph{entropy} used in March. \subsection{Our Methods} We introduce two entity-based methods, herewith \textbf{EType} and \textbf{EType+}. Our motivation is that entity types are helpful for RE, as mentioned in \newcite{zhang-etal-2017-position} for supervised learning and \newcite{ren2017cotype} for distantly supervised learning. In URE, \newcite{yao-etal-2011-structured, marcheggiani-titov-2016-discrete} also used entity types. We therefore propose EType, which induces coarse relation clusters from the entity types. In particular, given two entity types $t_{e_{head}}$, $t_{e_{tail}}$ as input, EType outputs their concatenation $t_{e_{head}}$-$t_{e_{tail}}$ as the relation. One problem with EType is that the number of relation types is determined by the number of entity types. For instance, 4 entity types lead to $4^2=16$ relation types. To extract an arbitrary number of relation types, we build a relation classifier that consists of a one-layer feed-forward network taking entity type combinations as input: \[ r=\mathbf{FFN}(t_{e_{head}}\text{-}t_{e_{tail}}),\] where $t_{e_{head}}$-$t_{e_{tail}}$ is the one-hot vector of the entity type pair. We then employ the link predictor used in March and the two regularisers used in Simon to produce a new method, herewith EType+ (a schematic sketch of EType and EType+ is given at the end of the appendix). \section{Experiments and Results} \label{sec:exps} \paragraph{Evaluation metrics} We use the following evaluation metrics for our analysis: a) B$^3$~\cite{bagga1998algorithms}, used in previous work, which is the harmonic mean of precision and recall for the clustering task; b) V-measure~\cite{rosenberg-hirschberg-2007-v}; and c) ARI~\cite{hubert1985comparing}, used in \citet{simon-etal-2019-unsupervised}.~\footnote{We used the sklearn.metrics package to compute V-measure and ARI.} V-measure is analysed in terms of homogeneity and completeness, while ARI measures the similarity between two clusterings. We note that V-measure is sensitive to the dependency between the number of clusters and the number of instances. A relatively small number of clusters compared to the number of instances should be used to maintain comparability when using V-measure. More precisely, we evaluated the V-measure of the trivial homogeneity, where there are only singleton clusters (i.e., each instance is its own cluster). The V-measure of the trivial homogeneity on \mbox{NYT-FB} reached 43.77\%, which is higher than that of all the implemented methods in \cref{tab:results}. Meanwhile, neither B$^3$ nor ARI encounters this problem. \vspace{-2.5mm} \paragraph{Datasets} We employed \mbox{NYT-FB} for training and evaluation following previous work~\cite{yao-etal-2011-structured,marcheggiani-titov-2016-discrete, simon-etal-2019-unsupervised}.
Because only 2.1\% of the sentences in \mbox{NYT-FB} were aligned against Freebase's triplets, we were concerned about whether this dataset contains enough sentences for a model to learn relation types from Freebase. We thus examined $100$ randomly chosen instances from the 1.86m non-aligned sentences. We found that 61\% of them (or 60\% of the whole dataset) express relation types in Freebase. This suggests that the \mbox{NYT-FB} dataset can be employed to train a relation extractor. However, there are two further issues when evaluating URE methods on \mbox{NYT-FB}. Firstly, the development and test sets are all aligned sentences without human curation, which means that they include wrongly/noisily labelled instances. In particular, we found that 35 out of 100 randomly chosen sentences were given incorrect relations. Secondly, the two validation sets are part of the training set. This setting is obviously not inductive, as it does not evaluate how a model performs on unseen sentences. Therefore, we \emph{additionally evaluate} all methods (except topic modelling) on the test set of \mbox{TACRED}~\cite{zhang-etal-2017-position}, a widely used manually annotated corpus for supervised RE. The statistics of both \mbox{NYT-FB} and \mbox{TACRED} are provided in~\cref{secapp:datasets}. \vspace{-2.5mm} \begin{table}[t!] \small \centering \begin{tabular}{lcccc} \toprule \multicolumn{2}{l}{\textbf{Model}} & \textbf{B\textsuperscript{3}} & \textbf{V} & \textbf{ARI} \\ \midrule \multicolumn{5}{c}{\mbox{NYT-FB}} \\ \midrule RelLDA & \multirow{5}{*}{$c=10$} & 29.1 & 30.0 & 13.3 \\ RelLDA1 & & 36.9 & 34.7 & 24.2 \\ March ($L_s$+$L_d$) && 37.5 & 38.7 & 27.6 \\ Simon && 39.4 & 38.3 & \textbf{33.8} \\ EType+ && \textbf{41.9} & 40.6 & 30.7 \\ \cmidrule{2-5} March\textsuperscript{$\diamond$} ($L_s$+$L_d$) && 36.9 & 37.4 & 28.1 \\ EType & \multirow{2}{*}{$c=16$} & 41.7 & \textbf{42.1} & 30.7 \\ EType+ && 41.5 & 41.3 & 30.5 \\ \cmidrule{2-5} RelLDA1 & \multirow{2}{*}{$c=100$} & 29.6 & - & - \\ March && 35.8 & - & - \\ \midrule \multicolumn{5}{c}{\mbox{TACRED}}\\ \midrule March\textsuperscript{$\diamond$} ($L_s$+$L_d$) & \multirow{3}{*}{$c=10$} & 31.0 & 43.8 & 22.6\\ Simon\textsuperscript{$\diamond$} && 15.7 & 17.1 & 6.1 \\ EType+ && 43.3 & 59.7 & 25.7\\ \cmidrule{2-5} March\textsuperscript{$\diamond$} ($L_s$+$L_d$) & \multirow{3}{*}{$c=16$} & 34.6 & 47.6 & 23.2 \\ EType && \textbf{48.3} & \textbf{64.4} & \textbf{29.1} \\ EType+ && 46.1 & 62.0 & 27.4 \\ \cmidrule{2-5} March\textsuperscript{$\diamond$} & $c=100$ &33.13 & 43.63& 20.21 \\ \bottomrule \end{tabular} \caption{\label{tab:results} Average results (\%) across three runs of different models (except the rule-based EType) on \mbox{NYT-FB} and \mbox{TACRED}. $c$ indicates the number of clusters in each method. \textsuperscript{$\diamond$} indicates our implementation of the corresponding model. We note that all methods were trained on \mbox{NYT-FB} and evaluated on the test set of both \mbox{NYT-FB} and \mbox{TACRED}. } \end{table} \paragraph{Hyper-parameters} We examine three models, RelLDA1, March, and Simon, using the reported hyper-parameters~\cite{yao-etal-2011-structured,marcheggiani-titov-2016-discrete, simon-etal-2019-unsupervised}. For comparison, we also evaluate March with the two regularisers of Simon, namely \textbf{March ($L_s+L_d$)}. To evaluate on \mbox{TACRED}, we employed the original March with $n=100$ using the published repository\footnote{\href{https://github.com/diegma/relation-autoencoder}{github.com/diegma/relation-autoencoder}}.
Meanwhile, for March ($L_s$+$L_d$) and Simon, we reimplemented these models and evaluated them on \mbox{TACRED}. Regarding our methods, EType does not have hyper-parameters, while EType+ uses the same optimiser and entity type dimension as in Simon. All the hyper-parameters used in our experiments are listed in~\cref{secapp:hyper}. \vspace{-2.5mm} \paragraph{Results} \cref{tab:results} shows the average performance of our methods across three runs in comparison with the three ML models on \mbox{NYT-FB} and \mbox{TACRED}. Our models outperform the best performing system of~\newcite{simon-etal-2019-unsupervised} on both datasets, except for ARI on \mbox{NYT-FB}. ARI has been shown to be suitable when there are large equal-sized clusters~\cite{romano2016adjusting}, while relation datasets are generally imbalanced (both \mbox{NYT-FB} and \mbox{TACRED} in this study; please refer to~\cref{secapp:datasets} for the detailed statistics). For this reason, ARI might not be appropriate for evaluating URE systems. In addition, the ML methods consistently exhibit lower performance on \mbox{TACRED} than on \mbox{NYT-FB}. The full results are shown in~\cref{secapp:results}. \label{sec:discussion} \begin{figure}[t!] \centering \includegraphics[width=0.9\linewidth]{images/lcurve.pdf} \caption{Average negative log likelihood losses across three runs of the link predictor on the training data (not including negative instances). Each line corresponds to a different relation input setting. } \label{fig:lp} \end{figure} \section{Discussion} The results of our evaluation demonstrate that our models outperform previous methods, despite being simpler. These results lead us to the following findings. \paragraph{Do ML models employ proper inductive biases?} In common with other unsupervised learning approaches, there is no guarantee that a URE model will learn the relation types in the KBs and/or annotated data used. A common solution is to employ inductive biases~\cite{wagstaff2000refining} to guide the learning process towards desired relation types. Inductive biases can emanate from pre-processed data. Since our models outperform other methods, we conclude that entity type information alone constitutes a better bias than the biases employed by existing ML models. Indeed, entity types constitute a useful bias for this task. Among the topic modelling based methods, RelLDA1 outperforms RelLDA, which does not employ entity types. In a separate experiment, we found that adding entity types to the Simon model helped to achieve higher performance than the original version, i.e., 42.74\% vs. 39.4\% F1 B$^3$ on the \mbox{NYT-FB} test set. However, although both RelLDA1 and March also employ entity types, their performance is still lower than ours. This may be because the other syntactic and word features used in these two models cancel out the useful bias of entity types. (More details are in the last paragraph of this section.) Inductive biases can also emanate from training signals. March and Simon are trained via a link predictor, which provides indirect signals to train a relation classifier.
Hence, the question here is \emph{``can the link predictor induce good training signals?''} To answer this, we examine the link predictor with alternative settings: \begin{itemize}[noitemsep,nosep] \item \textbf{Rand10} randomly assigns one of 10 relation types to each entity pair; \item \textbf{Rand10 with silver frequencies}, similar to \emph{Rand10}, randomly generates relation types but follows the silver relation distribution; \item \textbf{One relation} assumes all entity pairs share the same relation type; \item \textbf{EType} uses 16 relation types induced from 4 coarse entity types; \item \textbf{Silver relations (10)} takes the top 9 most frequent relation types and groups the rest together to form the tenth relation type; \item \textbf{Silver relations (full)} considers the full (silver) annotated relations, i.e., 262 types. \end{itemize} \cref{fig:lp} illustrates the average loss values obtained using these settings. If high quality relations were critical for training the link predictor, we would expect lower losses when using annotated relations. Indeed, the loss curve of using 10 correct relation types is consistently below all the others. This implies that the link predictor is able to provide reasonable signals for training a relation classifier. So why are the Simon and March models outperformed by our models? As pointed out by \newcite{simon-etal-2019-unsupervised}, the link predictor itself cannot be trained without a good relation classifier. This suggests that the relation classifiers in both methods need to be improved. Empirical evidence shows that both the Simon and March models are outperformed (in B$^3$ and V) by our EType+, which uses the same link predictor. We also notice that \textit{One relation} and \textit{EType} end up with similar losses. This might imply that we only need one relation (matrix) to predict head/tail entities, as the link predictor is very expressive. However, the silver relations are clearly helpful, as during the first 15 epochs their losses are much lower than the others. \paragraph{Why was the performance on \mbox{TACRED} lower?} Despite the fact that \mbox{TACRED} shares similar relation types with Freebase, we observed that both the March and Simon models consistently perform worse on the \mbox{TACRED} dataset. More precisely, the Simon model results in significantly worse performance on TACRED, with 15.7\% in terms of B\textsuperscript{3}, less than half of its score on \mbox{NYT-FB} (39.4\%). This performance drop might be attributed to the distributional shift between the two datasets: variation and semantic shift in vocabulary and language structure over time, since NYT was collected long before TACRED. \begin{table}[t!] \centering \begin{tabular}{lllll} \toprule \multicolumn{2}{l}{\textbf{Model}} & \textbf{B\textsuperscript{3}} & \textbf{V} & \textbf{ARI} \\ \midrule \multicolumn{2}{l}{EType+} & 42.5 & 40.1 & 29.2 \\ & +Entity & 40.5 & 39.9 & 28.6 \\ & +BOW & 37.7 & 38.0 & 20.5 \\ & +DepPath & 41.4 & 39.4 & 26.7 \\ & +POS & 41.6 & 40.4 & 27.8 \\ & +Trigger & 41.7 & 41.3 & 29.0 \\ & +PCNN & 40.8 & 39.6 & 27.1 \\ \bottomrule \end{tabular} \caption{\label{tab:combination}Study of \mbox{EType+} in combination with different features.
The results are averages across three runs on the development set.} \end{table} \paragraph{How is the performance when combining entity types with other features?} Our experiments using only entity types surprisingly achieve higher performance than the previous state-of-the-art methods, including feature-engineering and deep learning models. However, we know that context information is crucial for distinguishing the relation between two entities, and many RE studies have integrated context information to improve RE performance. We conduct experiments combining entity types with common features for RE in~\cref{tab:combination}. The list of features includes: (i) Entity: the textual surface form of the two entities, (ii) BOW: bag of words between the two entities, (iii) DepPath: words on the dependency path between the two entities, (iv) POS: the part-of-speech tag sequence between the two entities, and (v) Trigger: DepPath without stop words. In general, naively combining entity types with other features did not improve the model performance. Additionally, the BOW feature had a negative effect on RE performance. This indicates that the bag of words between two entities often includes uninformative and redundant words, i.e., noise, which is difficult to eliminate using simple neural architectures. While (i)-(v) are widely used hand-crafted features for RE, we also incorporated a neural context encoder, PCNN, which combines \emph{Simon}'s PCNN encoder with the entity masking and position-aware attention proposed in \cite{zhang-etal-2017-position}. However, the performance when combining PCNN is also lower than with entity types only. \section{Conclusion} \label{sec:conclusion} We have shown the importance of entity types in URE. Our methods use only entity types, yet they yield higher performance than previous work on both \mbox{NYT-FB} and TACRED. We have investigated the current experimental setting, concluding that a strong inductive bias is required to train a relation extraction model without labelled data. URE remains challenging and requires improved methods to deal with silver data. We also plan to use different types of labelled data, e.g., domain-specific data sets, to ascertain whether entity type information is more discriminative in sub-languages. \section*{Acknowledgments} We would like to thank the reviewers for their comments, Diego Marcheggiani for sharing his dataset with us, and \'Etienne Simon for sharing the hyperparameters. The first author thanks the University of Manchester for the Research Impact Scholarship Award. This work is also funded by Lloyd’s Register Foundation, Discovering Safety Programme, Thomas Ashton Institute. \section{Datasets} \label{secapp:datasets} \cref{tab:statistics} shows the statistics of the NYT-FB~\cite{marcheggiani-titov-2016-discrete} and TACRED~\cite{zhang-etal-2017-position} datasets. We followed the same data split and pre-processing described in~\citet{marcheggiani-titov-2016-discrete}. For all methods, we trained on \mbox{NYT-FB} and evaluated them on both \mbox{NYT-FB} and \mbox{TACRED}. \cref{fig:rel-stat} illustrates the relation distributions of the two datasets, \mbox{NYT-FB} and TACRED. We can see that the 15/253 most frequent relations account for 82.97\% of the total number of instances in \mbox{NYT-FB}. Meanwhile, 15/41 relations sum up to 74.94\% of the total number of instances in \mbox{TACRED}. \section{Hyper-parameter Settings} \label{secapp:hyper} We used the development set to stop the training process.
For every model, we conducted three runs with different parameter initialisations and computed the average performance. We list the hyper-parameters of different models in \cref{tab:hyperparams}. \section{Detailed Results} \label{secapp:results} \cref{tab:details} presents the average test scores of three runs on the \mbox{NYT-FB} and \mbox{TACRED} datasets. We note that the two models proposed by \citet{marcheggiani-titov-2016-discrete} and \citet{simon-etal-2019-unsupervised} are sensitive to the hyper-parameters and thus difficult to train. We could not replicate the performance of Simon on the \mbox{NYT-FB} dataset. \begin{table}[th!] \small \centering \begin{tabular}{lrrrr} \toprule & & \textbf{Train} & \textbf{Dev} & \textbf{Test} \\ \midrule \multicolumn{5}{c}{NYT-FB (\#$r=262$)} \\ \midrule \multicolumn{2}{l}{Raw instances} & 1,950,557 & 389,819 & 1,560,738 \\ & Positive & 41,685 & 7,793 & 33,808 \\ \midrule \multicolumn{5}{c}{\mbox{TACRED} (\#$r=41$)} \\ \midrule \multicolumn{2}{l}{Raw instances} & 68,124 & 22,631 & 15,509 \\ & Positive & 13,012 & 5,436 & 3,325 \\ \bottomrule \end{tabular} \caption{\label{tab:statistics} The statistics of the \mbox{NYT-FB} and the \mbox{TACRED} datasets. \#$r$ indicates the number of relation types in each dataset.} \end{table} \begin{table}[th!] \begin{subtable}[t]{0.45\textwidth} \centering \begin{tabular}{lrr} \toprule \textbf{Parameter} & \textbf{$L_s$} & \textbf{$L_s+L_d$} \\ \midrule Optimiser & \multicolumn{2}{c}{AdaGrad} \\ Number of epochs & \multicolumn{2}{c}{10} \\ Batch size & \multicolumn{2}{c}{100} \\ L2 regularisation & \multicolumn{2}{c}{1e-7} \\ Feature dimension & \multicolumn{2}{c}{10} \\ Learning rate & 0.1 & 0.005 \\ $L_s$ coefficient & 0.1 & 0.01 \\ $L_d$ coefficient & -- & 0.02 \\ \bottomrule \end{tabular} \caption{\label{tab:march} \citet{marcheggiani-titov-2016-discrete}'s model.} \end{subtable} \vfill \begin{subtable}[t]{0.45\textwidth} \centering \begin{tabular}{lr} \toprule \textbf{Parameter} & \textbf{Value} \\ \midrule Optimiser & Adam \\ Learning rate & 0.005 \\ Learning rate annealing & $0.5^{0.25}$ \\ Batch size & 100 \\ Early stop patience & 10 \\ L2 regularisation & 2e-11 \\ Word dimension & 50 \\ Entity type dimension & 10 \\ $L_s$ coefficient & 0.01 \\ $L_d$ coefficient & 0.02 \\ \bottomrule \end{tabular} \caption{\label{tab:simon}\citet{simon-etal-2019-unsupervised}'s model.} \end{subtable} \vfill \begin{subtable}[t]{0.45\textwidth} \centering \begin{tabular}{lr} \toprule \textbf{Parameter} & \textbf{Value} \\ \midrule Optimiser & Adam \\ Learning rate & 0.001 \\ Batch size & 100 \\ Early stop patience & 10 \\ L2 regularisation & 1e-5 \\ Entity type dimension & 10 \\ $L_s$ coefficient & 0.0001 \\ $L_d$ coefficient & 0.02 \\ \bottomrule \end{tabular} \caption{\label{tab:etype}EType+.} \end{subtable} \caption{\label{tab:hyperparams}Hyper-parameter values used in our experiments.} \end{table} \begin{table*}[t!]
\small \centering \begin{tabular}{lcccccccccc} \toprule \multicolumn{2}{l}{\multirow{2}{*}{\textbf{Model}}} & \multicolumn{3}{c}{\textbf{B\textsuperscript{3}}} & & \multicolumn{3}{c}{\textbf{V-measure}} & & \multicolumn{1}{c}{\multirow{2}{*}{\textbf{ARI}}} \\ \multicolumn{2}{l}{} & \multicolumn{1}{c}{\textbf{F1}} & \multicolumn{1}{c}{\textbf{P}} & \multicolumn{1}{c}{\textbf{R}} & & \multicolumn{1}{c}{\textbf{F1}} & \multicolumn{1}{c}{\textbf{Hom.}} & \multicolumn{1}{c}{\textbf{Comp.}} & & \multicolumn{1}{c}{} \\\midrule \multicolumn{11}{c}{NYT-FB} \\\midrule RelLDA & \multicolumn{1}{c}{\multirow{7}{*}{$n=10$}} & 29.1 & 24.8 & 35.2 & & 30.0 & 26.1 & 35.1 & & 13.3 \\ RelLDA1 & \multicolumn{1}{c}{} & 36.9 & 30.4 & 47.0 & & 37.4 & 31.9 & 45.1 & & 24.2 \\ March ($L_s$+$L_d$) & \multicolumn{1}{c}{} & 37.5 & 31.1 & 47.4 & & 38.7 & 32.6 & 47.8 & & 27.6 \\ March ($L_s$+$L_d$)$\ddagger$ & \multicolumn{1}{c}{} & 38.7 & 30.9 & 51.7 & & 37.6 & 31.0 & 47.7 & & 26.1 \\ Simon & \multicolumn{1}{c}{} & 39.4 & 32.2 & 50.7 & & 38.3 & 32.2 & 47.2 & & 33.8 \\ Simon$\ddagger$ & \multicolumn{1}{c}{} & 32.6 & 28.2 & 38.9 && 30.5 & 26.1 & 36.8 && 23.8 \\ EType+ & \multicolumn{1}{c}{} & 41.9 & 31.3 & 63.7 & & 40.6 & 31.8 & 56.2 & & 30.7 \\\midrule March ($L_s$+$L_d$)$\ddagger$ & \multirow{3}{*}{$n=16$} & 36.9 & 32.0 & 43.7 & & 37.4 & 32.6 & 43.9 & & 28.1 \\ EType & & 41.7 & 32.5 & 58.0 & & 42.1 & 34.7 & 53.6 & & 30.7 \\ EType+ & & 41.5 & 32.0 & 59.0 & & 41.3 & 33.6 & 53.9 & & 30.5 \\\midrule RelLDA1 & \multirow{3}{*}{$n=100$} & 29.6 & -& - & & -& -& -& & -\\ March & & 35.8 & -& -& & - & - & -& & -\\ March$\ddagger$ & & 34.8 & 24.4 & 62.4 & & 25.9 & 18.7 & 42.7 & & 13.1 \\\midrule \multicolumn{11}{c}{TACRED} \\\midrule March ($L_s$+$L_d$)$\ddagger$ & \multirow{3}{*}{$n=10$} & 31.0 & 21.7 & 54.9 & & 43.8 & 35.5 & 57.2 & & 22.6 \\ Simon$\ddagger$ && 15.7 & 12.1 & 22.4 && 17.1 & 14.6 & 20.6 && 6.1 \\ EType+ & & 43.3 & 28.0 & 96.9 & & 59.7 & 43.4 & 96.0 & & 25.7 \\\midrule March ($L_s$+$L_d$)$\ddagger$ & \multirow{3}{*}{$n=16$} & 34.6 & 24.3 & 61.3 & & 47.6 & 38.9 & 61.4 & & 23.2 \\ EType & & 48.3 & 32.3 & 96.3 & & 64.4 & 48.6 & 95.6 & & 29.1 \\ EType+ & & 46.1 & 30.3 & 96.9 & & 62.0 & 45.8 & 96.1 & & 27.4 \\\midrule March$\ddagger$ & $n=100$ & 33.13 & 21.83 & 69.20 & & 43.63 & 32.96 & 64.66 & & 20.21 \\ \bottomrule \end{tabular} \caption{\label{tab:details}Average results (\%) across three runs of different models (except the rule-based EType) on two datasets: the distant supervision \mbox{NYT-FB} and the large supervised dataset TACRED. The model of \citet{marcheggiani-titov-2016-discrete} is March and the model of \citet{simon-etal-2019-unsupervised} is Simon. $\ddagger$ indicates our implementation of the corresponding model.} \end{table*} \begin{figure*}[t!] \centering \begin{subfigure}[t]{0.48\textwidth} \centering \includegraphics[width=\linewidth]{images/nyt_rel_dev_horizon.pdf} \caption{NYT-FB has 253 relation types in total \label{fig:nyt}} \end{subfigure} \quad \begin{subfigure}[t]{0.48\textwidth} \centering \includegraphics[width=\linewidth]{images/tacred_rel_dev_horizon.pdf} \caption{TACRED has 41 relation types in total\label{fig:tacred}} \end{subfigure} \caption{\label{fig:rel-stat} Relation distribution of \mbox{NYT-FB} and \mbox{TACRED} (\%).} \end{figure*}
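Finally, as announced in the main text, the following is a minimal PyTorch-style sketch of EType and of the EType+ relation classifier. The class and function names are ours, and the link-predictor loss together with the $L_s$ and $L_d$ regularisers used to train EType+ is omitted; the sketch only illustrates the structure of the two classifiers.
\begin{verbatim}
import torch
import torch.nn as nn

def etype(head_type: str, tail_type: str) -> str:
    # EType: the induced relation is simply the entity type combination,
    # e.g. ("PERSON", "LOCATION") -> "PERSON-LOCATION".
    return f"{head_type}-{tail_type}"

class ETypePlus(nn.Module):
    """One-layer feed-forward relation classifier over one-hot
    entity type pairs.

    n_types:     number of coarse entity types (e.g. 4)
    n_relations: number of relation clusters c (e.g. 10)
    Training signals come from the link predictor plus the L_s and L_d
    regularisers, which are not shown here.
    """
    def __init__(self, n_types: int, n_relations: int):
        super().__init__()
        self.ffn = nn.Linear(n_types * n_types, n_relations)

    def forward(self, pair_one_hot: torch.Tensor) -> torch.Tensor:
        # pair_one_hot: (batch, n_types ** 2) one-hot encoding of the
        # entity type pair (t_head, t_tail).
        return torch.softmax(self.ffn(pair_one_hot), dim=-1)  # soft relation r
\end{verbatim}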
\section{Introduction} Advances in many areas, such as genomics, signal processing, image analysis and finance, call for new approaches to handle high dimensional data problems. Consider the multiple linear regression model: \begin{equation} \boldsymbol{y} = \boldsymbol{X}\boldsymbol{\theta}+\boldsymbol{\varepsilon}, \label{RegY} \end{equation} where $\boldsymbol{X}=(\boldsymbol{X}_1,\ldots,\boldsymbol{X}_p)\in\mathbb{R}^{n\times p}$ is the design matrix that collects $n$ independently and identically distributed (IID) observations $\boldsymbol{x}_i\in\mathbb{R}^p$ ($i=1,\ldots,n$) as its rows, $\boldsymbol{y}\in\mathbb{R}^{n}$ collects the $n$ responses and $\boldsymbol{\varepsilon}\in\mathbb{R}^n$ is the noise term. The model is called ultra-high dimensional if the number of variables $p$ grows exponentially with the number of observations $n$ ($p\gg n$). In (ultra-)high dimensional settings it is common to assume that only very few predictors contribute to the response. In other words, the coefficient vector $\boldsymbol{\theta}$ is assumed to be sparse, meaning that most of its elements are equal to zero. A major goal is then to identify all the important variables that actually contribute to the response. Variable selection plays an essential role in modern statistics. Widely used classical variable selection techniques are based on the Akaike~\citep{Akaike1973,Akaike1974} and Bayesian information criteria~\citep{Schwarz1978}. However, they are unsuitable for high dimensional data due to their high computational cost. Penalized least squares (PLS) methods have gained a lot of popularity in the past decades, such as the nonnegative garrote~\citep{Garrote1995,Garrote2007}, the least absolute shrinkage and selection operator (Lasso)~\citep{Lasso1996,Lasso2006}, the adaptive Lasso~\citep{AdaptLasso2006}, bridge regression~\citep{Bridge1998,Bridge2007}, the elastic net~\citep{Elastic2005} and the smoothly clipped absolute deviation (SCAD)~\citep{SCAD2001,SCAD2007} among others. Many of these methods are variable selection consistent under the condition that the sample size $n$ is larger than the dimension $p$. Although it has been proven that Lasso-type estimators can also select variables consistently for ultra-high dimensional data, this was studied under the irrepresentable condition on the design matrix~\citep{Lasso2006,Garrote2007}. As pointed out in~\citep{Lasso2006}, correct model selection for Lasso cannot be reached in ultra-high dimensions for all error distributions, e.g. when higher moments of the error distribution do not exist. Moreover, all these techniques have super-linear (in $p$) computational complexity, which makes them computationally prohibitive in ultra-high dimensional settings~\citep{ExSIS}. Sure Independence Screening (SIS) is a very fast variable selection technique for ultra-high dimensional data~\citep{SIS}. SIS has the sure screening property, which means that under certain assumptions all the important variables can be selected with probability tending to 1. The basic idea is to apply univariate least squares regression for each predictor variable separately, to measure its marginal contribution to the response variable. Define $\mathcal{M}_F=\{1,\ldots,p\}$ as the full model, $\mathcal{M}_T=\{j:\theta_{j}\neq 0\}$ as the true model, and $\mathcal{M}_{q^*}=\{j_1,\ldots,j_{q^*}\} \subset \mathcal{M}_F$ as a candidate model of size $|\mathcal{M}_{q^*}|=q^*$. Denote by $\hat{\theta}_j$ the $j$th simple regression coefficient estimate, i.e.
\begin{equation} \hat{\theta}_j = (\boldsymbol{X}_j^\text{T}\boldsymbol{X}_j)^{-1}\boldsymbol{X}_j^\text{T}\boldsymbol{y}. \nonumber \end{equation} SIS then selects a model of size $q$ as \begin{equation} \mathcal{M}_{q}=\{1\leqslant j\leqslant p:|\hat{\theta}_j|\text{ is among the first $q$ largest of all}\}. \nonumber \end{equation} The model size $q$ is usually of order $\mathcal{O}(n)$. When the variables are standardized componentwise, the regression coefficient estimate $\hat{\theta}_j$ equals the marginal correlation between $\boldsymbol{X}_j$ and $\boldsymbol{y}$. Hence, SIS is also called correlation screening. SIS can reduce the dimensionality from a large scale (e.g. $\mathcal{O}(\exp{(n^{\xi})})$ with $0<\xi<1$) to a moderate scale (e.g. $\mathcal{O}(n)$) while retaining all the important variables with high probability, which is called the sure screening property. Applying variable selection or penalized regression on this reduced set of variables rather than the original set then largely improves the variable selection results. To guarantee the sure screening property for a reduced model of moderate size, SIS assumes that the predictors are independent, which is a strong assumption in high dimensional settings. In case of correlation among the predictors, the number of variables that are falsely selected by SIS can increase dramatically. As shown in~\citet{Tilt}, in this case the estimate $\hat{\theta}_j$ can be written as $\theta_j$ plus a bias term (assuming the predictors are standardized such that $\boldsymbol{X}_j^\text{T}\boldsymbol{X}_j=1$) \begin{equation} \hat{\theta}_j = \boldsymbol{X}_j^\text{T}\boldsymbol{y} = \boldsymbol{X}_j^\text{T}\left( \sum_{k=1}^{p}\theta_k\boldsymbol{X}_k + \boldsymbol{\varepsilon} \right) = \theta_j + \underset{\text{bias}}{\underbrace{\underset{k\in\mathcal{M}_T\backslash \{ j\}}{\sum} \theta_k \boldsymbol{X}_j^\text{T}\boldsymbol{X}_k +\boldsymbol{X}_j^\text{T}\boldsymbol{\varepsilon}}}. \label{Bias} \end{equation} Hence, the higher the correlation of $\boldsymbol{X}_j$ with other important predictors, the larger the bias of $\hat{\theta}_j$. Moreover, correlation between $\boldsymbol{X}_j$ and the error $\boldsymbol{\varepsilon}$ introduces bias on $\hat{\theta}_j$ as well. Even when the predictors are IID Gaussian variables, so-called spurious correlations can be non-ignorable in high dimensional settings~\citep{SIS}. To handle correlated predictors, several methods have been developed, such as Iterative SIS~\citep{SIS}, Tilted Correlation Screening (TCS)~\citep{Tilt}, Factor Profiled Sure Screening (FPSIS)~\citep{FPSIS}, Conditional SIS~\citep{CSIS}, and High Dimensional Ordinary Least Squares Projection (HOLP)~\citep{HOLP}. A common feature shared by these methods is that they try to remove the correlation among the predictors before estimating their marginal contribution to the response. Although the aforementioned methods work well on clean data, none of them can resist the adverse influence of potential outliers. On the other hand, robust regression estimators, such as M-estimators~\citep{Huber1981}, S-estimators~\citep{RousseeuwYohai1984}, MM-estimators~\citep{Yohai1987} and the LTS-estimator~\citep{Rousseeuw1984}, cannot be applied when $p>n$.
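To make the marginal screening step concrete, the following minimal sketch implements correlation screening with plain numpy operations. It is only an illustration of the scheme described above, not code from the original SIS proposal, and all function and variable names are ours.
\begin{verbatim}
import numpy as np

def sis_screen(X, y, q):
    # Standardize componentwise so that the marginal least squares slope
    # equals the marginal correlation with y (correlation screening).
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    theta_hat = Xs.T @ ys / len(ys)         # marginal coefficients
    order = np.argsort(-np.abs(theta_hat))  # most important first
    return order[:q]                        # indices of the model M_q

# Toy example: p = 1000 predictors, only columns 5 and 17 are active.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 1000))
y = 3 * X[:, 5] - 2 * X[:, 17] + rng.standard_normal(100)
print(sis_screen(X, y, q=10))
\end{verbatim}
On such a toy example the two active columns typically appear at the top of the ranking, while the remaining selected indices are spurious.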
To handle contamination in high dimensional regression problems, penalized robust estimators such as penalized M-estimators~\citep{Geer2008,Li2011}, penalized S-estimators~\citep{Maronna2011}, penalized MM-estimators~\citep{Maronna2011,Smucler2015}, LAD-Lasso~\citep{LAD-Lasso}, LTS-lasso~\citep{Alfons2013}, the enet-LTS estimator~\citep{Filzmoser2018}, and the Penalized Elastic Net S-Estimator (PENSE)~\citep{Smucler2018} have been proposed, as well as a robustified LARS algorithm~\citep{Khan2007}. Similarly to their classical counterparts, these methods cannot handle ultra-high dimensional problems. To deal with ultra-high dimensional regression problems with outliers, robust variable screening methods have been developed. Robust rank correlation screening (RRCS)~\citep{RRCS2012} replaces the classical correlation measure in SIS with Kendall's $\tau$ estimator. In~\citep{Filzmoser2014}, a trimmed SIS-SCAD, called TSIS-SCAD, was proposed, which replaces the maximum likelihood and the penalized maximum likelihood estimator in SIS-SCAD with their trimmed versions. An iterative algorithm which combines SIS and the C-step for the LTS regression estimator~\citep{Rousseeuw2006} has been developed in~\citep{TWang2018}. Although iterative versions of RRCS and TSIS-SCAD have been introduced for the case of correlated predictors, similarly to iterated SIS they may fail when a considerable proportion of the predictors are correlated. In this paper, we propose a fast robust procedure for ultra-high dimensional regression analysis based on FPSIS, called Robust Factor Profiled Sure Independence Screening (RFPSIS). FPSIS can be seen as a combination of factor profiling and SIS. It assumes that the predictors can be represented by a few latent factors. If these factors can be obtained accurately, then the correlations among the predictors can be profiled out by projecting all the variables onto the orthogonal complement of the subspace spanned by the latent factors. Performing SIS on the profiled variables rather than the original variables then improves the screening results. FPSIS possesses the sure screening property and even variable selection consistency~\citep{FPSIS}. However, the method can break down with even a small amount of contamination in the data. Different types of outliers can be defined based on the factor model and the regression model. To avoid the impact of potential outliers on the factor model, RFPSIS estimates the latent factors using a Least Trimmed Squares method proposed in~\citep{Maronna2005}. Based on the robustly estimated low-dimensional factor space, we identify vertical outliers and four types of potential leverage points in the multiple regression model, and examine their roles in the marginal factor profiled regressions. After removing bad leverage points, the marginal regression coefficients are estimated using a 95\% efficient MM-estimator. Finally, a modified BIC criterion is used to determine the final model. The rest of this paper is organized as follows. In Section 2, we first review the factor profiling procedure and the LTS method to estimate the factor space. We study the effect of different types of outliers on the models and introduce the Robust FPSIS method. We then compare SIS, FPSIS and RFPSIS by simulation. In Section 3, we consider several modified BIC criteria for final model selection and compare their performance. Section 4 presents the analysis of a real ultra-high dimensional dataset, while Section 5 contains conclusions.
\section{Robust FPSIS} \label{sec:RFPSIS} \subsection{Factor profiling} FPSIS aims to construct decorrelated predictors. It assumes that the correlation structure of the predictors can be represented by a few latent factors. We now summarize the model proposed in~\citep{FPSIS}. The factor model for the predictors is given by \begin{equation} \boldsymbol{X} = \boldsymbol{Z}\boldsymbol{B}^\text{T} + \widetilde{\boldsymbol{X}}, \label{RegX} \end{equation} under the constraint $\boldsymbol{Z}^\text{T}\boldsymbol{Z}=\boldsymbol{I}_d$, where $\boldsymbol{Z}\in\mathbb{R}^{n\times d}$ collects the $d$-dimensional latent factor scores as its rows and $\boldsymbol{B}\in\mathbb{R}^{p\times d}$ is the factor loading matrix which specifies the linear combinations of the factors involved in each of the predictors $\boldsymbol{X}_j$. Finally, $\widetilde{\boldsymbol{X}}=(\widetilde{\boldsymbol{X}}_{1},\ldots,\widetilde{\boldsymbol{X}}_{p})\in\mathbb{R}^{n\times p}$ contains the information in $\boldsymbol{X}$ which is missed by $\boldsymbol{Z}$. It is assumed that $E(\boldsymbol{y})=E(\boldsymbol{X}_j)=E(\widetilde{\boldsymbol{X}}_j)=0$ and $\text{var}(\boldsymbol{y})=\text{var}(\boldsymbol{X}_j)=1\geqslant\tilde{\sigma}_j^2=\text{var}(\widetilde{\boldsymbol{X}}_j)$. Moreover, it is assumed that $\text{cov}(\widetilde{\boldsymbol{X}})$ is a diagonal matrix, so $\text{cov}(\widetilde{\boldsymbol{X}}_{j_1},\widetilde{\boldsymbol{X}}_{j_2})=0$ for $j_1\neq j_2\in\{1,\ldots,p\}$. The error term is allowed to be correlated with the predictors, but only through the latent factors. It is modeled by \begin{equation} \boldsymbol{\varepsilon} = \boldsymbol{Z}\boldsymbol{\alpha}+\tilde{\boldsymbol{\varepsilon}}, \label{RegE} \end{equation} where $\boldsymbol{\alpha}\in\mathbb{R}^d$ is a $d$-dimensional vector and $\tilde{\boldsymbol{\varepsilon}}$ is independent of both $\boldsymbol{Z}$ and $\widetilde{\boldsymbol{X}}$. The two factor models \eqref{RegX} and \eqref{RegE} allow us to profile out the correlations introduced by the latent factors, both among the predictors and with the error term. The resulting $\widetilde{\boldsymbol{X}}_j$'s and $\tilde{\boldsymbol{\varepsilon}}$ are called the profiled predictors and error variable, respectively. By writing $\boldsymbol{\gamma}=\boldsymbol{B}^\text{T}\boldsymbol{\theta}+\boldsymbol{\alpha}\in\mathbb{R}^d$, one can define the profiled response variable as $\widetilde{\boldsymbol{y}}=\boldsymbol{y}-\boldsymbol{Z}\boldsymbol{\gamma}$. Using equations \eqref{RegX}-\eqref{RegE}, the regression model~\eqref{RegY} can then be modified to \begin{equation} \widetilde{\boldsymbol{y}}=\boldsymbol{y}-\boldsymbol{Z}\boldsymbol{\gamma}=\widetilde{\boldsymbol{X}}\boldsymbol{\theta}+\tilde{\boldsymbol{\varepsilon}}, \label{RegFP} \end{equation} which has uncorrelated predictors and error term. \subsection{Robustly fitting the factor model} \label{sec:FS} To estimate the latent factors $\boldsymbol{Z}$, in~\citep{FPSIS} the least squares type objective function \begin{equation} \mathcal{O}(\boldsymbol{Z},\boldsymbol{B}) = \|\boldsymbol{X}-\boldsymbol{Z}\boldsymbol{B}^\text{T}\|_E^2\,, \label{LSFP} \end{equation} is minimized under the constraint $\boldsymbol{Z}^\text{T}\boldsymbol{Z}=\boldsymbol{I}_d$, where $\|\cdot\|_E$ denotes the Euclidean (Frobenius) norm. Let $\widehat{\boldsymbol{Z}}$ and $\widehat{\boldsymbol{B}}$ denote minimizers of \eqref{LSFP}.
Then $\boldsymbol{X}$ can be approximated by $\widehat{\boldsymbol{X}} = \widehat{\boldsymbol{Z}}\widehat{\boldsymbol{B}}^\text{T}$, which is a low-dimensional approximation of $\boldsymbol{X}$ in a $d$-dimensional subspace. The optimal solution to \eqref{LSFP} is not unique, but one solution is given by $\widehat{\boldsymbol{Z}} =(\hat{U}_1,\ldots,\hat{U}_d )\in\mathbb{R}^{n\times d}$, where $\hat{U}_j$ is the $j$th leading eigenvector of the matrix $\boldsymbol{X}\boldsymbol{X}^\text{T}$~\citep[see][]{FPSIS}. Note that minimization of objective function~\eqref{LSFP} is closely related to dimension reduction by principal component analysis. Indeed, the first $d$ principal components of the centered matrix $\boldsymbol{X}$ are obtained by minimizing \begin{equation} \mathcal{O}(\boldsymbol{T},\boldsymbol{V}) = \|\boldsymbol{X}-\boldsymbol{T}\boldsymbol{V}^\text{T}\|_E^2\,, \label{LSPCA} \end{equation} under the constraint $\boldsymbol{V}^\text{T}\boldsymbol{V} = \boldsymbol{I}_{d}$, where $\boldsymbol{V}\in \mathbb{R}^{p\times d}$ contains the PC loadings as its columns and $\boldsymbol{T}\in\mathbb{R}^{n\times d}$ is the corresponding PC score matrix. Clearly, the objective functions in~\eqref{LSFP} and~\eqref{LSPCA} are the same, but this objective function is optimized under different constraints in both cases. The constraint $\boldsymbol{Z}^\text{T}\boldsymbol{Z}=\boldsymbol{I}_d$ for~\eqref{LSFP} yields uncorrelated latent factors, while for~\eqref{LSPCA} the constraint $\boldsymbol{V}^\text{T}\boldsymbol{V} = \boldsymbol{I}_{d}$ yields uncorrelated principal components. Both solutions can immediately be derived from a singular value decomposition of the matrix $\boldsymbol{X}$ and yield the same approximation $\widehat{\boldsymbol{X}}$ of $\boldsymbol{X}$. It is well-known that LS-estimation is very sensitive to outliers. Observations that lie far away from the true subspace may pull the estimated subspace toward them if least squares is applied. Using the notation $r_{ij}=x_{ij}-\hat{x}_{ij}$, the objective function \eqref{LSFP} can be written as \begin{equation} \mathcal{O}(\boldsymbol{Z},\boldsymbol{B})= \sum_{i=1}^{n}\|\boldsymbol{r}_i\|_E^2, \label{ResCl} \end{equation} with $\boldsymbol{r}_i = (r_{i1},\ldots,r_{ip})^\text{T}$. To downweight the influence of potential outliers, the LS objective function in \eqref{ResCl} can be replaced by a Least Trimmed Squares (LTS) objective function~\citep{Maronna2005}. The LTS objective function is the sum of squared residuals over the $h$ observations with the smallest residual norms $\|\boldsymbol{r}_i\|_E$. That is, \begin{equation} \mathcal{O}(\boldsymbol{Z},\boldsymbol{B},\boldsymbol{\mu})=\sum_{i=1}^{h}(\|\boldsymbol{r}_i\|_E^2)_{i:n}=\sum_{i=1}^{h}(\|\boldsymbol{x}_i-\boldsymbol{B}\boldsymbol{z}_i-\boldsymbol{\mu}\|_E^2)_{i:n}, \label{ResMLTS1} \end{equation} with $[(n-d+2)/2]\leqslant h < n$, where $\boldsymbol{z}_i$ is the $i$th row of $\boldsymbol{Z}$, $\boldsymbol{\mu}$ is a robust location estimator, and $(\cdot)_{i:n}$ denotes the $i$th smallest value of an ordered sequence. To obtain the latent factors, we first minimize \eqref{ResMLTS1} without the constraint and orthogonalize $\widehat{\boldsymbol{Z}}$ afterwards. To solve~\eqref{ResMLTS1}, we use a computationally efficient algorithm that has been developed recently, see~\citep{Cevallos2016,MLTS-PCA}. A brief summary of this LTS algorithm can be found in the Supplemental Material.
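The alternating structure of this minimization can be sketched as follows. This is a schematic illustration under our own simplifications (a plain C-step-type iteration from a single random start), not the actual algorithm of~\citep{Cevallos2016,MLTS-PCA}, and the orthogonalization of $\widehat{\boldsymbol{Z}}$ mentioned above is omitted.
\begin{verbatim}
import numpy as np

def lts_subspace(X, d, h, n_iter=50, seed=0):
    # Alternate between (i) a least squares subspace fit on the current
    # h-subset and (ii) re-selecting the h observations with the smallest
    # residual norms (a C-step-type iteration).
    rng = np.random.default_rng(seed)
    n, p = X.shape
    subset = rng.choice(n, size=h, replace=False)
    for _ in range(n_iter):
        mu = X[subset].mean(axis=0)
        # d leading right singular vectors of the centred h-subset
        _, _, Vt = np.linalg.svd(X[subset] - mu, full_matrices=False)
        B = Vt[:d].T                        # (p, d) loading matrix
        Z = (X - mu) @ B                    # scores for all n observations
        R = X - mu - Z @ B.T                # orthogonal residuals
        new_subset = np.argsort(np.linalg.norm(R, axis=1))[:h]
        if set(new_subset) == set(subset):  # objective cannot decrease further
            break
        subset = new_subset
    return Z, B, mu
\end{verbatim}
In practice several random starts would be used and the solution with the smallest trimmed objective kept, as is customary for trimmed estimators.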
As in~\citep{robpca}, to further speed up the procedure we use a singular value decomposition to represent the data matrix $\boldsymbol{X}$ in the subspace spanned by the $n$ observations before estimating the factor subspace using the LTS algorithm. We thus first reduce the data space $\boldsymbol{X}$ to the affine subspace of dimension $r=\text{rank}(\boldsymbol{X}-\boldsymbol{1}_n\bar{\boldsymbol{x}}^\text{T})$, where $\bar{\boldsymbol{x}}$ is the columnwise mean of $\boldsymbol{X}$. Denote the new matrix as $\boldsymbol{X}^* \in\mathbb{R}^{n\times r}$. By applying the LTS algorithm on $\boldsymbol{X}^*$, we obtain estimates $(\widehat{\boldsymbol{Z}}^*,\widehat{\boldsymbol{B}}^*, \hat{\boldsymbol{\mu}}^*)$, with $\widehat{\boldsymbol{Z}}^*\in\mathbb{R}^{n\times d}$, $\widehat{\boldsymbol{B}}^*\in\mathbb{R}^{r\times d}$ and $\hat{\boldsymbol{\mu}}^*\in\mathbb{R}^r$. The final solution is given by $(\widehat{\boldsymbol{Z}}^*, \boldsymbol{P}\widehat{\boldsymbol{B}}^*, \boldsymbol{P}\hat{\boldsymbol{\mu}}^*+\bar{\boldsymbol{x}})$, where $\boldsymbol{P}\in \mathbb{R}^{p\times r}$ is the projection matrix from the initial singular value decomposition. To simplify the notation, we write the final output of the LTS algorithm as $(\widehat{\boldsymbol{Z}}, \widehat{\boldsymbol{B}}, \hat{\boldsymbol{\mu}})$. To refine the estimation of the factor model, we apply two reweighting steps to the initial solution obtained by the LTS algorithm. The first step improves the estimation of the low dimensional subspace spanned by the latent factors and the second step increases the accuracy of the robust location estimate $\hat{\boldsymbol{\mu}}$. For these reweighting steps, we need to identify outliers in the data with respect to the assumed factor model~\eqref{RegX}. Therefore, following~\citet{robpca}, we first introduce two distances of an observation with respect to a given subspace. The orthogonal distance (OD) of an observation $\boldsymbol{x}_i$ measures the distance of that observation to the subspace. It is thus given by $\text{OD}_i = \|\boldsymbol{r}_i\|_E$. On the other hand, the score distance (SD) of an observation $\boldsymbol{x}_i$ measures the distance from its approximation $\widehat{\boldsymbol{x}}_i$ within the subspace to the center of the subspace and is given by $\text{SD}_i =\|\boldsymbol{z}_i\|_E$. Based on the orthogonal distance we can identify {\it OC outliers}, which are observations that lie far from the subspace and thus are outlying in the orthogonal complement (OC) of the subspace, i.e. the OC subspace~\citep{She2016}. Based on the score distance within the subspace we can identify {\it score outliers}, also called {\it PC outliers} in~\citep{She2016}, which are observations that lie far from the center within the subspace. Following~\citep{robpca}, we call a score outlier a {\it good leverage point} if it is outlying within the subspace, but does not lie far from the subspace. A score outlier is called a {\it bad leverage point} if it is not only outlying within the subspace, but at the same time is an OC outlier. The plots in Figure~\ref{PC_OC_outlier} show examples of PC and OC outliers in case of bivariate data and a one-dimensional subspace.
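In code, the two distances are immediate to compute from a fitted factor model (a small sketch with our own naming; the cutoff values used to flag outliers are introduced below):
\begin{verbatim}
import numpy as np

def outlier_distances(X, Z, B, mu):
    R = X - mu - Z @ B.T            # residuals w.r.t. the fitted subspace
    OD = np.linalg.norm(R, axis=1)  # orthogonal distances
    SD = np.linalg.norm(Z, axis=1)  # score distances (standardised scores)
    return OD, SD
\end{verbatim}
A good leverage point then corresponds to a large SD combined with a small OD, and a bad leverage point to a large SD combined with a large OD.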
\begin{figure}[ht!] \centering \begin{minipage}{0.45\textwidth} \centering \footnotesize (a) PC outlier \end{minipage} \begin{minipage}{0.45\textwidth} \centering \footnotesize (b) OC outlier \end{minipage}\\ \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=5.5 cm]{PC_outlier.eps} \end{minipage} \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=5.5 cm]{OC_outlier.eps} \end{minipage} \caption{\small PC outliers and OC outliers: (a) Normal data ($\protect \bullet$) and PC outliers (``+''); (b) Normal data ($\protect \bullet$) and OC outliers (``o'').} \label{PC_OC_outlier} \end{figure} {\bf Reweighted subspace estimation}. The $n-h$ observations with the largest squared residuals are excluded in the least trimmed squares objective function~\eqref{ResMLTS1}. Smaller values of $h$ yield more robustness, but also a lower efficiency because many observations are excluded. To increase the statistical efficiency, we identify the OC outliers and re-estimate the factor subspace by applying least squares on the subset of observations that is obtained by removing the OC outliers. Unfortunately, the distribution of the orthogonal distances for the regular data is generally not known, so it is not straightforward to define a cutoff value to identify OC outliers. To overcome this issue we use a robust version of the Yeo-Johnson transformation~\citep{Yeo2000}, proposed in~\citep{RobT2010}. The orthogonal distances are first standardized robustly by using their median and $Q_n$ scale estimate, that is \[ d_i=\frac{\|\boldsymbol{r}_i\|_E-\text{med}_j\|\boldsymbol{r}_j\|_E}{Q_n(\|\boldsymbol{r}_1\|_E,\ldots,\|\boldsymbol{r}_n\|_E)}. \] Then, we apply the Yeo-Johnson transformation \begin{equation} \psi(\lambda,d) = \begin{cases} ((d+1)^\lambda - 1)/\lambda & \text{if } \lambda \neq 0 \text{ and } d\geqslant 0, \\ \log(d+1) & \text{if } \lambda = 0 \text{ and } d\geqslant 0, \\ -((-d+1)^{2-\lambda}-1)/(2-\lambda) & \text{if } \lambda \neq 2 \text{ and } d <0, \\ -\log(-d+1) & \text{if } \lambda =2 \text{ and } d<0, \end{cases} \end{equation} to the standardized orthogonal distances $d_i$ for a grid of $\lambda$ values. The optimal value of $\lambda$ is selected by maximizing the trimmed likelihood \begin{equation} L_\text{Trim}(\lambda) = \sum_{i=1}^{h}l(\lambda;d_i)_{i:n}, \end{equation} where $l(\lambda;d_i)$ measures the contribution of the $i$th observation to the likelihood, given by \begin{equation} l(\lambda;d_i) = -\frac{1}{2}\log(2\pi)-\log(\hat{\sigma}_{\lambda}) - \frac{1}{2\hat{\sigma}_{\lambda}^2}(\psi(\lambda,d_i)-\hat{\mu}_{\lambda})^2 + (\lambda - 1)\text{sign}(d_i)\log(|d_i|+1), \end{equation} where $\hat{\mu}_{\lambda}$ and $\hat{\sigma}_{\lambda}$ are the median and $Q_n$ estimates of the transformed observations $\psi(\lambda,d_i)$ ($i=1,\ldots,n$), respectively. Here, we use the same value of $h$ as in the LTS estimation of the factor space. The optimal value of $\lambda$ is searched over the grid $[0,1]$ with step size 0.02. $\lambda$ values exceeding 1 are not considered, to avoid a swamping effect when the chosen contamination level (through $h$) in the LTS algorithm is much larger than the actual level in the data. Finally, observations whose transformed orthogonal distance $\psi(\lambda_{\text{opt}},d_i)$ exceeds the cutoff $\Phi^{-1}(0.975)$ are flagged as OC outliers. After re-estimating the factor subspace, we update the orthogonal distance of each observation and flag the OC outliers.
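The selection of $\lambda$ can be sketched as follows. This is our own simplified transcription: $Q_n$ is computed in its basic $\mathcal{O}(n^2)$ pairwise form with the Gaussian consistency constant, and the trimmed likelihood is interpreted as the sum of the $h$ largest contributions.
\begin{verbatim}
import numpy as np

def qn_scale(x):
    # Basic O(n^2) Q_n: an order statistic of the pairwise distances,
    # times the Gaussian consistency constant 2.2219.
    n = len(x)
    diffs = np.abs(x[:, None] - x[None, :])[np.triu_indices(n, k=1)]
    h = n // 2 + 1
    k = h * (h - 1) // 2
    return 2.2219 * np.partition(diffs, k - 1)[k - 1]

def yeo_johnson(lam, d):
    d = np.asarray(d, dtype=float)
    out = np.empty_like(d)
    pos = d >= 0
    out[pos] = (np.log1p(d[pos]) if lam == 0
                else ((d[pos] + 1.0) ** lam - 1.0) / lam)
    out[~pos] = (-np.log1p(-d[~pos]) if lam == 2
                 else -((1.0 - d[~pos]) ** (2.0 - lam) - 1.0) / (2.0 - lam))
    return out

def select_lambda(d, h):
    best_lam, best_ll = 1.0, -np.inf
    for lam in np.arange(0.0, 1.0 + 1e-12, 0.02):
        t = yeo_johnson(lam, d)
        mu, sig = np.median(t), qn_scale(t)
        ll = (-0.5 * np.log(2 * np.pi) - np.log(sig)
              - (t - mu) ** 2 / (2 * sig ** 2)
              + (lam - 1.0) * np.sign(d) * np.log(np.abs(d) + 1.0))
        trimmed = np.sort(ll)[-h:].sum()  # keep the h largest contributions
        if trimmed > best_ll:
            best_lam, best_ll = lam, trimmed
    return best_lam
\end{verbatim}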
{\bf Reweighting within the subspace}. The LTS method is designed to downweight the adverse influence of OC outliers when estimating the low-dimensional subspace. However, there may be score outliers as well. These outliers do not influence the estimation of the subspace, but they affect the factor scores and the estimate of the subspace center. Therefore, we re-estimate the location and scatter of the scores and update the estimates of $\boldsymbol{\mu}$ and $\boldsymbol{Z}$ accordingly. As in~\citep{robpca}, we first estimate the location and scatter of the scores $\hat{\boldsymbol{z}}_i$ using the reweighted MCD estimator~\citep{Rousseeuw1984} and calculate the corresponding robust distances $\text{RD}_i$ of the observations $\hat{\boldsymbol{z}}_i$, that is, the Mahalanobis distances of the scores $\hat{\boldsymbol{z}}_i$ with respect to these reweighted MCD estimates. The reweighted estimate of the center of the scores then becomes $\hat{\boldsymbol{\mu}}_{\boldsymbol{z}}=\sum_{i=1}^{n}\tilde{w}_i\hat{\boldsymbol{z}}_i/\sum_{i=1}^{n}\tilde{w}_i$, where $\tilde{w}_i = I(\text{RD}_i\leqslant c_\text{RD} \text{ and } \text{OD}_i \leqslant c_\text{OD})$ with $c_\text{RD}=\sqrt{\chi_{d;0.975}^2}$, and $I(\cdot)$ denotes the indicator function. Similarly, the scatter estimate $\hat{\boldsymbol{\Sigma}}_{\boldsymbol{z}}$ of the scores is given by the covariance matrix of the scores computed from the observations with weight $\tilde{w}_i =1$. Note that to minimize the bias due to outlying observations, both the PC and OC outliers are downweighted when re-estimating the location and scatter of the scores. Finally, we update the location estimate in the original space and the score matrix, i.e. $\hat{\boldsymbol{\mu}} \gets \hat{\boldsymbol{\mu}}+\hat{\boldsymbol{B}}\hat{\boldsymbol{\mu}}_{\boldsymbol{z}}$, $\hat{\boldsymbol{Z}} \gets (\hat{\boldsymbol{Z}} - \boldsymbol{1}_n\hat{\boldsymbol{\mu}}_{\boldsymbol{z}}^\text{T}) \hat{\boldsymbol{\Sigma}}_{\boldsymbol{z}}^{-1/2}$ and $\hat{\boldsymbol{B}}\gets\hat{\boldsymbol{B}} \hat{\boldsymbol{\Sigma}}_{\boldsymbol{z}}^{1/2}$. Then, we recompute the score distance for each observation $i$ by $\text{SD}_i = \|\widehat{\boldsymbol{z}}_i\|_E$, and flag it as a score outlier if $\text{SD}_i > c_\text{SD}$, with $c_\text{SD}=\sqrt{\chi_{d;0.975}^2}$. {\bf Estimating $d$.} In practice, the dimension $d$ of the factor subspace is unknown. To estimate the dimension $d$, we use the criterion in~\citep{Bai2002} which determines the number of factors by minimizing \begin{eqnarray} \label{PC_Select} \text{PC}(d) & = & \tilde{n}_d^{-1}p^{-1}\text{tr}\{(\boldsymbol{X}-\boldsymbol{1}_{\tilde{n}_d}\hat{\boldsymbol{\mu}}_d^\text{T}-\widehat{\boldsymbol{Z}}_d\widehat{\boldsymbol{B}}_d^\text{T})^\text{T}W^d(\boldsymbol{X}-\boldsymbol{1}_{\tilde{n}_d}\hat{\boldsymbol{\mu}}^\text{T}_d-\widehat{\boldsymbol{Z}}_d\widehat{\boldsymbol{B}}_d^\text{T})\} \\ &+& {\tilde{n}_d}^{-1} p ^{-1}\text{tr}((\boldsymbol{X}-\boldsymbol{1}_{\tilde{n}_d}\hat{\boldsymbol{\mu}}_d^\text{T})^\text{T}W^d(\boldsymbol{X}-\boldsymbol{1}_{\tilde{n}_d}\hat{\boldsymbol{\mu}}_d^\text{T}))\left\{d\left(\frac{\tilde{n}_d+p}{\tilde{n}_d p}\right)\log\left(\frac{\tilde{n}_d p}{\tilde{n}_d+p}\right)\right\}, \nonumber \end{eqnarray} with respect to $d$. Here, $\hat{\boldsymbol{\mu}}_d$, $\widehat{\boldsymbol{Z}}_d$ and $\widehat{\boldsymbol{B}}_d$ are the estimates obtained by the procedure outlined above when the number of factors is fixed at $d$.
$W^d$ is a diagonal matrix with the weights $w_i^d = I(\text{OD}^d_i \leqslant c_\text{OD} \text{ and } \text{SD}^d_i \leqslant c_\text{SD})$ on its diagonal, where $\text{OD}_i^d$ and $\text{SD}_i^d$ are computed with $\hat{\boldsymbol{\mu}}_d$, $\widehat{\boldsymbol{Z}}_d$ and $\widehat{\boldsymbol{B}}_d$. Finally, $\tilde{n}_d = \sum_{i=1}^{n}w_i^d$. To control the computation time we fix $d_{\max}$, the maximal number of factors. The estimated dimension of the subspace is then given by $\hat{d} = \arg\min_{1\leqslant d \leqslant d_{\max}}\text{PC}(d)$, which yields the final estimates $\hat{\boldsymbol{\mu}}_{\hat{d}}$, $\widehat{\boldsymbol{Z}}_{\hat{d}}$ and $\widehat{\boldsymbol{B}}_{\hat{d}}$. To simplify notation we will drop the subscript $\hat{d}$ in the remainder of the paper. \subsection{Robust Variable Screening} \label{sec:robust screening} In FPSIS, the profiled variables are obtained by projecting the original variables onto the orthogonal complement of the subspace spanned by the latent factors. However, each profiled observation is then a linear combination of all the original observations. If there are outliers in the data, this implies that all the profiled observations would become contaminated, which would make them useless. To avoid this, we instead calculate the profiled variables directly by using \eqref{RegX}-\eqref{RegE}. The profiled predictors are obtained as \begin{equation} \widehat{\widetilde{\boldsymbol{X}}} = \boldsymbol{X} - \boldsymbol{1}_n\hat{\boldsymbol{\mu}}^\text{T} - \widehat{\boldsymbol{Z}}\,\widehat{\boldsymbol{B}}^\text{T}. \end{equation} To obtain the profiled response variable, we robustly regress $\boldsymbol{y}$ on $\widehat{\boldsymbol{Z}}$. We use the 95\% efficient MM-estimator~\citep{Yohai1987} with bisquare loss function for this purpose. The resulting slope estimates are denoted by $\hat{\boldsymbol{\gamma}}$, while the estimated intercept is denoted by $\hat{\mu}_y$ since it provides a robust estimate of the center of $\boldsymbol{y}$. The corresponding profiled response is given by \begin{equation} \widehat{\widetilde{\boldsymbol{y}}} = \boldsymbol{y} - \hat{\mu}_y\boldsymbol{1}_n-\widehat{\boldsymbol{Z}}\hat{\boldsymbol{\gamma}}. \end{equation} Variable screening is conducted on the profiled variables by using marginal regression models. Before applying variable screening, we first investigate which types of outliers can occur in the data with respect to the different regression and factor models. As discussed in Section \ref{sec:FS}, in the factor model for the predictors we may have two types of outliers: OC outliers and PC outliers. Since PC outliers are only outlying in the scores $\widehat{\boldsymbol{Z}}$, they become non-outlying observations in the profiled predictors once the effect of $\widehat{\boldsymbol{Z}}$ is profiled out. However, OC outliers are outlying with respect to the low-dimensional subspace itself, so factor profiling cannot remove their outlyingness. Therefore, these observations remain outliers in the profiled predictor matrix $\widehat{\widetilde{\boldsymbol{X}}}$.
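The profiling step just described amounts to only a few lines of code once the factor model has been fitted. The sketch below uses an ordinary least squares fit of $\boldsymbol{y}$ on $\widehat{\boldsymbol{Z}}$ as a stand-in for the MM-estimator of the text, and all names are ours.
\begin{verbatim}
import numpy as np

def profile_variables(X, y, Z, B, mu):
    # Profiled predictors: remove the part of X explained by the factors.
    X_tilde = X - mu - Z @ B.T
    # Profiled response: OLS of y on (1, Z) stands in here for the 95%
    # efficient MM-estimator used in the text.
    Z1 = np.column_stack([np.ones(len(y)), Z])
    coef, *_ = np.linalg.lstsq(Z1, y, rcond=None)
    mu_y, gamma = coef[0], coef[1:]
    y_tilde = y - mu_y - Z @ gamma
    return X_tilde, y_tilde
\end{verbatim}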
For the multiple regression model~\eqref{RegY} based on the original variables, there can be vertical outliers, good leverage points, and bad leverage points. Vertical outliers are only outlying in the response variable $\boldsymbol{y}$. Good leverage points are outlying in the predictor space $\boldsymbol{X}$, but do follow the regression model, while bad leverage points are not only outlying in $\boldsymbol{X}$ but also have responses that deviate from the regression model of the majority. By combining the types of outliers that can occur in the multiple linear regression model (LM) and the factor model for the predictors, we can have the following five types of outliers: \begin{itemize}[topsep= -6 pt,itemsep=-0.5 ex,partopsep=-0.5 ex, parsep= 1ex] \item[1.] LMV: vertical outlier in the multiple regression; \item[2.] PC+LMG: good leverage point due to a PC outlier in the predictors; \item[3.] PC+LMB: bad leverage point due to a PC outlier in the predictors; \item[4.] OC+LMG: good leverage point due to an OC outlier in the predictors; \item[5.] OC+LMB: bad leverage point due to an OC outlier in the predictors. \end{itemize} Each outlier type may affect the multiple regression model for the profiled variables as well as the corresponding marginal regression models. To illustrate the effect of the different types of leverage points on these regression models, we consider a regression example with only two predictors and one factor. A set of clean observations ($\boldsymbol{X}_\text{clean}$, $\boldsymbol{y}_\text{clean}$) is generated according to $\boldsymbol{X}_\text{clean} = \boldsymbol{z}\boldsymbol{B}^\text{T} + \tilde{\boldsymbol{X}}$ and $\boldsymbol{y}_\text{clean} = \boldsymbol{X}_\text{clean}\boldsymbol{\beta} + \boldsymbol{\varepsilon}$, where $\boldsymbol{\beta} = (2,1)^\text{T}$, $\boldsymbol{z} \sim N(0,1)$, $\tilde{\boldsymbol{X}}\sim N_2(0,\boldsymbol{I}_2)$, $\boldsymbol{B} = (1/\sqrt{2},1/\sqrt{2})^\text{T}$, and $\boldsymbol{\varepsilon}\sim N(0,1)$. For the factor model we generate PC outliers by $\boldsymbol{X}_\text{PC} = \boldsymbol{z}_\text{PC}\boldsymbol{B}^\text{T} + \tilde{\boldsymbol{X}}$ with $\boldsymbol{z}_\text{PC} \sim N(10,1)$, and OC outliers by $\boldsymbol{X}_\text{OC}= \boldsymbol{z}_\text{OC}\boldsymbol{B}_\text{OC}^\text{T} + \tilde{\boldsymbol{X}}$ with $\boldsymbol{z}_\text{OC} \sim N(10,1)$ and $\boldsymbol{B}_\text{OC} = (-1/\sqrt{2},1/\sqrt{2})^\text{T}$. Observations according to the four types of leverage points are then obtained as follows. \begin{itemize}[topsep= -6 pt,itemsep=-0.5 ex,partopsep = -0.5 ex,parsep= 1 ex] \item[1.] PC+LMG: ($\boldsymbol{X}_\text{PC}$,$\boldsymbol{y}_\text{PC+LMG}$), where $\boldsymbol{y}_\text{PC+LMG}$ = $\boldsymbol{X}_\text{PC}\boldsymbol{\beta} + \boldsymbol{\varepsilon}$; \item[2.] PC+LMB: ($\boldsymbol{X}_\text{PC}$,$\boldsymbol{y}_\text{PC+LMB}$), where $\boldsymbol{y}_\text{PC+LMB}\sim N(50,1)$; \item[3.] OC+LMG: ($\boldsymbol{X}_\text{OC}$,$\boldsymbol{y}_\text{OC+LMG}$), where $\boldsymbol{y}_\text{OC+LMG}$ = $\boldsymbol{X}_\text{OC}\boldsymbol{\beta} + \boldsymbol{\varepsilon}$; \item[4.] OC+LMB: ($\boldsymbol{X}_\text{OC}$,$\boldsymbol{y}_\text{OC+LMB}$), where $\boldsymbol{y}_\text{OC+LMB}\sim N(50,1)$. \end{itemize} For a generated dataset ($\boldsymbol{X}$,$\boldsymbol{y}$) with $\boldsymbol{X} = (\boldsymbol{X}_1,\boldsymbol{X}_2)$ we can obtain $\boldsymbol{z}$ by $(\boldsymbol{X}-\widetilde{\boldsymbol{X}})\boldsymbol{B}$ in this case, because $\boldsymbol{B}$ is known and there are only two predictors.
It follows that the profiled predictors and response are given by ${\widehat{\boldsymbol{X}}} = (\widehat{\boldsymbol{X}}_1,\widehat{\boldsymbol{X}}_2) = \boldsymbol{X} - \hat{\boldsymbol{z}}\boldsymbol{B}^\text{T}$ and ${\hat{\boldsymbol{y}}} = \boldsymbol{y}-\hat{\boldsymbol{z}}\boldsymbol{B}^\text{T}\boldsymbol{\beta}$. The scatter plots of the original variables ($\boldsymbol{X}_1,\boldsymbol{X}_2$) and ($\boldsymbol{X},\boldsymbol{y}$), the profiled variables ($\widehat{\boldsymbol{X}},\hat{\boldsymbol{y}}$) as well as ($\widehat{\boldsymbol{X}}_1,\hat{\boldsymbol{y}}$) and ($\widehat{\boldsymbol{X}}_2,\hat{\boldsymbol{y}}$) are shown in the five rows of Figure \ref{leverage2}. The four columns in Figure \ref{leverage2} correspond to the cases of PC+LMG, PC+LMB, OC+LMG and OC+LMB leverage points, respectively. Since PC outliers become regular observations after factor profiling, i.e. they are non-outlying in the factor profiled predictors, PC+LMG leverage points become regular observations in the multiple regression model~\eqref{RegFP} based on the factor profiled variables, as can be seen in panel a3 of Figure \ref{leverage2}, while PC+LMB leverage points become vertical outliers in this model (see Figure \ref{leverage2}, b3). On the other hand, OC outliers remain outlying in the factor profiled predictors. Hence, OC+LMG leverage points remain good leverage points (see Figure \ref{leverage2}, c3), while OC+LMB leverage points remain bad leverage points (see Figure \ref{leverage2}, d3) in the multiple regression model with factor profiled variables. Let us now look at the marginal regression models based on the profiled variables. The PC+LMG leverage points become regular observations in the multiple model and thus remain regular observations for the marginal models (see Figure \ref{leverage2}, a4 and a5). Similarly, the PC+LMB leverage points remain vertical outliers in the marginal models (see Figure \ref{leverage2}, b4 and b5). On the other hand, while the OC+LMG leverage points remain good leverage points for the multiple model~\eqref{RegFP}, they in general become bad leverage points in the marginal models (see Figure \ref{leverage2}, c4 and c5). Finally, the OC+LMB leverage points remain bad leverage points in the marginal models as well (see Figure \ref{leverage2}, d4 and d5). To avoid the adverse effect of outliers, our procedure downweights all types of leverage points in an initial variable screening step. Since outlying scores will affect the estimates of the profiled response variable, we first estimate the profiled response variable based on the observations with non-outlying predictors. Then, we check whether a PC outlier is outlying in the profiled response as well or not, i.e. whether it is a good or a bad leverage point in the regression models. The PC+LMG leverage points will not be downweighted anymore, and both the profiled response and the marginal coefficients will be re-estimated by including these good leverage points to increase efficiency. Finally, we give an overview of the proposed robust factor profiled sure independence screening (RFPSIS) procedure. The RFPSIS procedure consists of the following steps: \textbf{Step 1. Profiled predictors.}\\ Standardize each of the original variables using its median and $Q_n$ estimates. Fit the factor model to the scaled data robustly by using the least trimmed squares method discussed in Subsection~\ref{sec:FS} to obtain the factor profiled predictors $\widehat{\widetilde{\boldsymbol{X}}}$.
Then, identify the PC and OC outliers. Denote by $\mathcal{I}_1$ the index set of the regular observations, i.e. the observations with non-outlying predictors according to the factor model. Let $\widehat{\boldsymbol{Z}}_{\mathcal{I}_1}$ denote the sub-matrix of $\widehat{\boldsymbol{Z}}$ which collects the rows corresponding to $\mathcal{I}_1$. \begin{figure}[H] \centering \begin{minipage}{0.24\textwidth} \centering \footnotesize (a) PC + LMG \end{minipage} \begin{minipage}{0.24\textwidth} \centering \footnotesize (b) PC + LMB \end{minipage} \begin{minipage}{0.24\textwidth} \centering \footnotesize (c) OC + LMG \end{minipage} \begin{minipage}{0.24\textwidth} \centering \footnotesize (d) OC + LMB \end{minipage}\\ \begin{minipage}{0.24\textwidth} \centering \includegraphics[width=3.5 cm]{out_type_xs_ori.eps} \end{minipage} \begin{minipage}{0.24\textwidth} \centering \includegraphics[width=3.5 cm]{out_type_xs2_ori.eps} \end{minipage} \begin{minipage}{0.24\textwidth} \centering \includegraphics[width=3.5 cm]{out_type_xo_ori.eps} \end{minipage} \begin{minipage}{0.24\textwidth} \centering \includegraphics[width=3.5 cm]{out_type_xo2_ori.eps} \end{minipage}\\ \begin{minipage}{0.24\textwidth} \centering \includegraphics[width=3.5 cm]{out_type_gs_ori.eps} \end{minipage} \begin{minipage}{0.24\textwidth} \centering \includegraphics[width=3.5 cm]{out_type_bs_ori.eps} \end{minipage} \begin{minipage}{0.24\textwidth} \centering \includegraphics[width=3.5 cm]{out_type_go_ori.eps} \end{minipage} \begin{minipage}{0.24\textwidth} \centering \includegraphics[width=3.5 cm]{out_type_bo_ori.eps} \end{minipage}\\ \begin{minipage}{0.24\textwidth} \centering \includegraphics[width=3.5 cm]{out_type_gs_pf.eps} \end{minipage} \begin{minipage}{0.24\textwidth} \centering \includegraphics[width=3.5 cm]{out_type_bs_pf.eps} \end{minipage} \begin{minipage}{0.24\textwidth} \centering \includegraphics[width=3.5 cm]{out_type_go_pf.eps} \end{minipage} \begin{minipage}{0.24\textwidth} \centering \includegraphics[width=3.5 cm]{out_type_bo_pf.eps} \end{minipage}\\ \begin{minipage}{0.24\textwidth} \centering \includegraphics[width=3.5 cm]{out_type_gs_pf_x1.eps} \end{minipage} \begin{minipage}{0.24\textwidth} \centering \includegraphics[width=3.5 cm]{out_type_bs_pf_x1.eps} \end{minipage} \begin{minipage}{0.24\textwidth} \centering \includegraphics[width=3.5 cm]{out_type_go_pf_x1.eps} \end{minipage} \begin{minipage}{0.24\textwidth} \centering \includegraphics[width=3.5 cm]{out_type_bo_pf_x1.eps} \end{minipage}\\ \begin{minipage}{0.24\textwidth} \centering \includegraphics[width=3.5 cm]{out_type_gs_pf_x2.eps} \end{minipage} \begin{minipage}{0.24\textwidth} \centering \includegraphics[width=3.5 cm]{out_type_bs_pf_x2.eps} \end{minipage} \begin{minipage}{0.24\textwidth} \centering \includegraphics[width=3.5 cm]{out_type_go_pf_x2.eps} \end{minipage} \begin{minipage}{0.24\textwidth} \centering \includegraphics[width=3.5 cm]{out_type_bo_pf_x2.eps} \end{minipage} \caption{\small The scatter plots of the original variables ($\protect \boldsymbol{X}_1,\protect \boldsymbol{X}_2$) and ($\protect \boldsymbol{X},\protect \boldsymbol{y}$), the profiled variables ($\protect \widehat{\boldsymbol{X}},\protect \hat{\protect\boldsymbol{y}}$), ($\protect \widehat{\boldsymbol{X}}_1,\protect\hat{\protect \boldsymbol{y}}$) and ($\protect \widehat{\boldsymbol{X}}_2,\protect\hat{\protect\boldsymbol{y}}$) for the datasets with the PC+LMG, PC+LMB, OC+LMG and OC+LMB leverage points.
The regular observations and the leverage points are plotted by {\color{blue} $\protect\bullet$} and {\color{red} $\protect\blacktriangle$}, respectively.} \label{leverage2} \end{figure} \textbf{Step 2. Profiled response.} \begin{itemize} \item[2a.] \textit{Initial profiling.}\\ Regress $\boldsymbol{y}_{{ \mathcal{I}_1}}$ on $\widehat{\boldsymbol{Z}}_{\mathcal{I}_1}$ robustly to obtain the estimated slope $\hat{\boldsymbol{\gamma}}^o$ and intercept $\hat{\mu}_y^o$. By default we use a $95\%$ efficient regression MM-estimator. Let $\hat{\sigma}_y^o$ denote the estimated error scale. An initial estimate of the profiled response is then obtained by $\hat{y}_i^o = y_i - \hat{\mu}^o_y- \hat{\boldsymbol{z}}_i^\text{T}\hat{\boldsymbol{\gamma}}^o$, $i=1,\ldots,n$. \item[2b.] \textit{Outlier identification.}\\ Denote by $\mathcal{I}_s$ the index set of the PC outliers identified in Step 1. For each of these PC outliers, check whether it is a vertical outlier or a regular observation based on its standardized residual $\hat{t}_i = \hat{y}_i^o/\hat{\sigma}_y^o$ corresponding to the regression model in Step 2a. Define an enlarged index set $\mathcal{I}_2 =\mathcal{I}_1\cup \{ i \in \mathcal{I}_s: \hat{t}_i^2\leqslant \chi^2_{1;0.975}\}$. \item[2c.] \textit{Updated profiling.}\\ Calculate updated estimates $\hat{\mu}_y$, $\hat{\boldsymbol{\gamma}}$ and $\widehat{\boldsymbol{y}}_{\mathcal{I}_2}$ by regressing $\boldsymbol{y}_{{ \mathcal{I}_2}}$ on $\widehat{\boldsymbol{Z}}_{\mathcal{I}_2}$ using the MM-estimator (by default). The updated estimate of the profiled response is then given by $\hat{y}_i = y_i - \hat{\mu}_y-\hat{\boldsymbol{z}}_i^\text{T}\hat{\boldsymbol{\gamma}}$, $i=1,\ldots,n$. \end{itemize} \textbf{Step 3. Variable screening.}\\ Regress $\widehat{\boldsymbol{y}}_{\mathcal{I}_2}$ robustly on each of the corresponding profiled predictors $(\widehat{\widetilde{\boldsymbol{X}}}_{\mathcal{I}_2})_j$ ($j=1,\ldots,p$), using a $95\%$ efficient simple regression MM-estimator by default. Let $\hat{\boldsymbol{\theta}}=(\hat{\theta}_1,\ldots,\hat{\theta}_p)^\text{T}\in\mathbb{R}^p$ be the marginal slope estimates. Sort these estimates according to decreasing absolute value to obtain the solution path $\mathbb{M} = \{\mathcal{M}_{(k)}: k=0,\ldots,p\}$, with $\mathcal{M}_{(0)}=\emptyset$ and $\mathcal{M}_{(k)}=\{j: |\hat{\theta}_{j}| \text{ belongs to the } k \text{ largest values}\}$, $k=1,\ldots,p$. \subsection{Empirical performance study} \label{sec:perf} To investigate the performance of RFPSIS, we generate regular data as in~\citep{FPSIS}. The predictors are obtained by $\boldsymbol{X} = \boldsymbol{Z}\boldsymbol{B}^\text{T} + \widetilde{\boldsymbol{X}}$, where $\boldsymbol{Z}\sim N_{d}(\boldsymbol{0},\boldsymbol{I}_d)$, $\boldsymbol{B}\sim N_{d}(\boldsymbol{0},\boldsymbol{I}_d)$ and $\widetilde{\boldsymbol{X}}\sim N_p(\boldsymbol{0},\boldsymbol{I}_p)$. The response is generated as $\boldsymbol{y} = \boldsymbol{X}\boldsymbol{\theta}_0+\boldsymbol{\varepsilon}$ with coefficients \begin{equation} \label{theta} \theta_{0j}= \begin{cases} (-1)^{R_{aj}}(4n^{-1/2}\log n +|R_{bj}|) & \text{for}\ j=1,\ldots,|\mathcal{M}_\text{T}|, \\ 0 & \text{otherwise}, \end{cases} \end{equation} where $R_{aj}\sim B(1,0.4)$, $R_{bj}\sim N(0,1)$ and $|\mathcal{M}_\text{T}|=8$. Hence, there are 8 important variables in the model.
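For reference, the coefficient vector in \eqref{theta} can be generated as in the following sketch (a direct transcription of the design; the function name is ours):
\begin{verbatim}
import numpy as np

def gen_theta(n, p, n_active=8, seed=0):
    # First n_active coefficients: (-1)^{R_a} (4 n^{-1/2} log n + |R_b|),
    # with R_a ~ Bernoulli(0.4) and R_b ~ N(0, 1); the rest are zero.
    rng = np.random.default_rng(seed)
    theta = np.zeros(p)
    Ra = rng.binomial(1, 0.4, size=n_active)
    Rb = rng.standard_normal(n_active)
    theta[:n_active] = (-1.0) ** Ra * (4 * np.log(n) / np.sqrt(n)
                                       + np.abs(Rb))
    return theta
\end{verbatim}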
Moreover, the errors are generated according to $\boldsymbol{\varepsilon} = \boldsymbol{Z}\boldsymbol{\alpha}_0+\tilde{\boldsymbol{\varepsilon}}$, with $\boldsymbol{\alpha}_0=0.8\sigma_\varepsilon(\sqrt{2},\sqrt{2})^\text{T}\in\mathbb{R}^2$ and $\tilde{\boldsymbol{\varepsilon}}\sim N(0,\tilde{\sigma}_\varepsilon^2)$, where $\tilde{\sigma}_\varepsilon=0.6\sigma_\varepsilon$ and $\sigma_\varepsilon^2=\text{var}(\boldsymbol{X}^\text{T}\boldsymbol{\theta}_0)/c$, with $c$ the signal-to-noise ratio. To study the robustness of our method, we replace a fraction of the observations by outliers. Let $y_{\min}$ and $y_{\max}$ be the minimal and maximal value of the regular responses, respectively. Then, we simulate outlying responses by replacing the original response $y_i$ of the observation by $y_{\text{LMV}}\sim N(\mu_c^y ,1)$, where $\mu_c^y = y_{\max}\cdot I(y_i\leqslant \frac{y_{\min} + y_{\max}}{2}) + y_{\min}\cdot I(y_i >\frac{y_{\min} + y_{\max}}{2})$, with $I(\cdot)$ the indicator function. In this way we generate a set of vertical outliers which lie at the tails of the response distribution. These are extreme vertical outliers, yet they are hard to detect by inspecting the empirical distribution of $\boldsymbol{y}$. Next to vertical outliers we also consider leverage points. Leverage points are generated as either PC outliers or OC outliers. PC outliers are generated as $\boldsymbol{X}_\text{PC} = \boldsymbol{Z}_\text{PC}\boldsymbol{B}^\text{T} + \widetilde{\boldsymbol{X}}$, where $\boldsymbol{Z}_\text{PC} \sim N_{d}(\boldsymbol{\mu}_\text{PC},\boldsymbol{I}_{d})$ and $\boldsymbol{\mu}_\text{PC}=5\cdot\boldsymbol{1}_{d}$. OC outliers are generated as $\boldsymbol{X}_\text{OC}\sim N_p(\boldsymbol{\mu}_\text{OC},\boldsymbol{I}_p)$, where $\boldsymbol{\mu}_\text{OC}=10\cdot(\underbrace{1,\ldots,1}_{0.2p},0,\ldots,0)^\text{T}\in\mathbb{R}^p$. Both good leverage points and bad leverage points for the linear model are considered: for good leverage points, the response is generated according to the true regression model; for bad leverage points, the response is simulated in the same way as vertical outliers. The following five contamination levels are considered: \begin{compactlist} \item \emph{Case 1.} $\epsilon = 0\%$, no contamination; \item \emph{Case 2.} $\epsilon =5\%$ (good/bad) leverage points, no vertical outliers; \item \emph{Case 3.} $\epsilon =5\%$ (good/bad) leverage points + $5\%$ extra vertical outliers; \item \emph{Case 4.} $\epsilon = 20\%$ (good/bad) leverage points, no vertical outliers; \item \emph{Case 5.} $\epsilon = 20\%$ (good/bad) leverage points + $10\%$ extra vertical outliers. \end{compactlist} The simulations are performed for different combinations of $p$ ($1000$ or $10000$), $n$ ($200$ or $400$) and $d$ ($2$ or $5$). We also consider three levels for the signal-to-noise ratio, by setting $c=1$, $3$ or $5$. Screening performance is measured by the minimal model size that is required to cover $m$ ($m=1,\ldots,|\mathcal{M}_\text{T}|$) of the important variables. For each setting, we use 200 simulated datasets and report both the median and the $95\%$ quantile of the minimal model size. Here, we only present the simulation results for $d=2$, $n=200$ or $400$, and $p=10000$. The results for the other settings lead to similar conclusions and can be found in the Supplemental Material.
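Before turning to the results, we note that the vertical outlier scheme above translates into a few lines of code (our transcription; the index set idx of contaminated observations is chosen by the user):
\begin{verbatim}
import numpy as np

def add_vertical_outliers(y, idx, seed=0):
    # Replace y_i (i in idx) by a draw from N(mu_c, 1), where mu_c moves
    # the response to the opposite tail of the regular response range.
    rng = np.random.default_rng(seed)
    y = y.copy()
    y_min, y_max = y.min(), y.max()
    mid = (y_min + y_max) / 2.0
    mu_c = np.where(y[idx] <= mid, y_max, y_min)
    y[idx] = rng.normal(mu_c, 1.0)
    return y
\end{verbatim}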
The results for SIS and FPSIS on regular data and data with $5\%$ leverage points are shown in Figure~\ref{location_classical}. We can see that the SIS curves increase quickly, even for regular data. SIS can only detect the first two important predictors with a reasonable model size. Clearly, SIS fails in all cases due to the correlation in the data. On the other hand, FPSIS, which takes the correlation structure into account, performs well on clean data: it shows nearly optimal performance on regular data with a moderate sample size and signal-to-noise ratio. Decreasing the sample size or the signal-to-noise ratio only affects FPSIS through the model size required to pick up the last few important variables. Interestingly, FPSIS can obtain equally good results for data with good PC leverage points as for regular data. However, when the data contains bad PC leverage points or (good/bad) OC leverage points, FPSIS can at best pick up 3 to 4 important predictors in the beginning of its solution path in case of a large sample size and a high signal-to-noise ratio, but the model size required to include the remaining ones increases dramatically. \begin{figure}[ht!] \centering \begin{minipage}{0.01\textwidth} \hfill \end{minipage} \begin{minipage}{0.32\textwidth} \centering (a) $n=200$, $c=1$ \end{minipage} \begin{minipage}{0.32\textwidth} \centering (b) $n=200$, $c=3$ \end{minipage} \begin{minipage}{0.32\textwidth} \centering (c) $n=200$, $c=5$ \end{minipage}\\ \begin{minipage}{0.01\textwidth} \rotatebox[]{90}{\footnotesize Median} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=3.8 cm]{location_classical_Q50_d2_n200_P10000_c1.eps} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=3.8 cm]{location_classical_Q50_d2_n200_P10000_c3.eps} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=3.8 cm]{location_classical_Q50_d2_n200_P10000_c5.eps} \end{minipage}\\ \begin{minipage}{0.01\textwidth} \rotatebox[]{90}{\footnotesize $95\%$ Quantile} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width= 3.8 cm]{location_classical_Q95_d2_n200_P10000_c1.eps} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width= 3.8 cm]{location_classical_Q95_d2_n200_P10000_c3.eps} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width= 3.8 cm]{location_classical_Q95_d2_n200_P10000_c5.eps} \end{minipage}\\ \begin{minipage}{\textwidth} \hfill \end{minipage}\\ \begin{minipage}{0.01\textwidth} \hfill \end{minipage} \begin{minipage}{0.32\textwidth} \centering (e) $n=400$, $c=1$ \end{minipage} \begin{minipage}{0.32\textwidth} \centering (f) $n=400$, $c=3$ \end{minipage} \begin{minipage}{0.32\textwidth} \centering (g) $n=400$, $c=5$ \end{minipage}\\ \begin{minipage}{0.01\textwidth} \rotatebox[]{90}{\footnotesize Median} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=3.8 cm]{location_classical_Q50_d2_n400_P10000_c1.eps} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=3.8 cm]{location_classical_Q50_d2_n400_P10000_c3.eps} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=3.8 cm]{location_classical_Q50_d2_n400_P10000_c5.eps} \end{minipage}\\ \begin{minipage}{0.01\textwidth} \rotatebox[]{90}{\footnotesize $95\%$ Quantile} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width= 3.8 cm]{location_classical_Q95_d2_n400_P10000_c1.eps}
\end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width= 3.8 cm]{location_classical_Q95_d2_n400_P10000_c3.eps} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width= 3.8 cm]{location_classical_Q95_d2_n400_P10000_c5.eps} \end{minipage}\\ \caption{\small Median and $95\%$ Quantile of the minimal model size needed to capture $m$ important variables by SIS (dotted lines) and FPSIS (solid lines) for data containing regular observations (bullets), PC+LMG (black triangles), PC+LMB (triangles), OC+LMG (black diamonds) and OC+LMB (diamonds) with $p=10000$ and $d=2$.} \label{location_classical} \end{figure} RFPSIS is performed with $h=[(n-d+2)/2]$ for maximal robustness. For RFPSIS we first remark that in our simulation settings the estimate $\hat{d}$ of the factor subspace dimension according to criterion~\eqref{PC_Select} consistently coincided with the true dimension $d$ that was used to generate the data. The results of RFPSIS in the presence of leverage points are shown in Figure~\ref{location_score} for PC outliers, and in Figure~\ref{location_orth} for OC outliers. By comparing the plots in these two figures with those in Figure~\ref{location_classical}, we can see that RFPSIS performs almost as well as FPSIS on regular data. Moreover, unlike FPSIS, RFPSIS succeeds in reducing the model size to a large extent while retaining all the important predictors for all considered contamination levels and outlier types. Since OC outliers become bad leverage points in the marginal regression models, both good and bad OC leverage points are downweighted by RFPSIS, and hence these two types of outliers lead to the same results. However, for PC outliers, there is a significant difference between good and bad leverage points because they are treated differently by RFPSIS. With good PC leverage points, the screening results of RFPSIS are always close to those obtained on regular data. \begin{figure}[ht!]
\centering \begin{minipage}{0.01\textwidth} \hfill \end{minipage} \begin{minipage}{0.32\textwidth} \centering (a) $n=200$, $c=1$ \end{minipage} \begin{minipage}{0.32\textwidth} \centering (b) $n=200$, $c=3$ \end{minipage} \begin{minipage}{0.32\textwidth} \centering (c) $n=200$, $c=5$ \end{minipage}\\ \begin{minipage}{0.01\textwidth} \rotatebox[]{90}{\footnotesize Median} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=3.8 cm]{location_robust_score_Q50_d2_n200_P10000_c1.eps} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=3.8 cm]{location_robust_score_Q50_d2_n200_P10000_c3.eps} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=3.8 cm]{location_robust_score_Q50_d2_n200_P10000_c5.eps} \end{minipage}\\ \begin{minipage}{0.01\textwidth} \rotatebox[]{90}{\footnotesize $95\%$ Quantile} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width= 3.8 cm]{location_robust_score_Q95_d2_n200_P10000_c1.eps} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width= 3.8 cm]{location_robust_score_Q95_d2_n200_P10000_c3.eps} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width= 3.8 cm]{location_robust_score_Q95_d2_n200_P10000_c5.eps} \end{minipage}\\ \begin{minipage}{\textwidth} \hfill \end{minipage}\\ \begin{minipage}{0.01\textwidth} \hfill \end{minipage} \begin{minipage}{0.32\textwidth} \centering (e) $n=400$, $c=1$ \end{minipage} \begin{minipage}{0.32\textwidth} \centering (f) $n=400$, $c=3$ \end{minipage} \begin{minipage}{0.32\textwidth} \centering (g) $n=400$, $c=5$ \end{minipage}\\ \begin{minipage}{0.01\textwidth} \rotatebox[]{90}{\footnotesize Median} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=3.8 cm]{location_robust_score_Q50_d2_n400_P10000_c1.eps} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=3.8 cm]{location_robust_score_Q50_d2_n400_P10000_c3.eps} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=3.8 cm]{location_robust_score_Q50_d2_n400_P10000_c5.eps} \end{minipage}\\ \begin{minipage}{0.01\textwidth} \rotatebox[]{90}{\footnotesize $95\%$ Quantile} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width= 3.8 cm]{location_robust_score_Q95_d2_n400_P10000_c1.eps} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width= 3.8 cm]{location_robust_score_Q95_d2_n400_P10000_c3.eps} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width= 3.8 cm]{location_robust_score_Q95_d2_n400_P10000_c5.eps} \end{minipage}\\ \caption{\small Median and $95\%$ Quantile of the minimal model size needed to capture $m$ important variables by RFPSIS in the case of PC+LMG and PC+LMB for $p=10000$ and $d=2$.} \label{location_score} \end{figure} \begin{figure}[ht!] 
\centering \begin{minipage}{0.01\textwidth} \hfill \end{minipage} \begin{minipage}{0.32\textwidth} \centering (a) $n=200$, $c=1$ \end{minipage} \begin{minipage}{0.32\textwidth} \centering (b) $n=200$, $c=3$ \end{minipage} \begin{minipage}{0.32\textwidth} \centering (c) $n=200$, $c=5$ \end{minipage}\\ \begin{minipage}{0.01\textwidth} \rotatebox[]{90}{\footnotesize Median} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=3.8 cm]{location_robust_orth_Q50_d2_n200_P10000_c1.eps} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=3.8 cm]{location_robust_orth_Q50_d2_n200_P10000_c3.eps} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=3.8 cm]{location_robust_orth_Q50_d2_n200_P10000_c5.eps} \end{minipage}\\ \begin{minipage}{0.01\textwidth} \rotatebox[]{90}{\footnotesize $95\%$ Quantile} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width= 3.8 cm]{location_robust_orth_Q95_d2_n200_P10000_c1.eps} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width= 3.8 cm]{location_robust_orth_Q95_d2_n200_P10000_c3.eps} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width= 3.8 cm]{location_robust_orth_Q95_d2_n200_P10000_c5.eps} \end{minipage}\\ \begin{minipage}{\textwidth} \hfill \end{minipage}\\ \begin{minipage}{0.01\textwidth} \hfill \end{minipage} \begin{minipage}{0.32\textwidth} \centering (e) $n=400$, $c=1$ \end{minipage} \begin{minipage}{0.32\textwidth} \centering (f) $n=400$, $c=3$ \end{minipage} \begin{minipage}{0.32\textwidth} \centering (g) $n=400$, $c=5$ \end{minipage}\\ \begin{minipage}{0.01\textwidth} \rotatebox[]{90}{\footnotesize Median} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=3.8 cm]{location_robust_orth_Q50_d2_n400_P10000_c1.eps} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=3.8 cm]{location_robust_orth_Q50_d2_n400_P10000_c3.eps} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=3.8 cm]{location_robust_orth_Q50_d2_n400_P10000_c5.eps} \end{minipage}\\ \begin{minipage}{0.01\textwidth} \rotatebox[]{90}{\footnotesize $95\%$ Quantile} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width= 3.8 cm]{location_robust_orth_Q95_d2_n400_P10000_c1.eps} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width= 3.8 cm]{location_robust_orth_Q95_d2_n400_P10000_c3.eps} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width= 3.8 cm]{location_robust_orth_Q95_d2_n400_P10000_c5.eps} \end{minipage}\\ \caption{\small Median and $95\%$ Quantile of the minimal model size needed to capture $m$ important variables by RFPSIS in the case of OC+LMG and OC+LMB for $p=10000$ and $d=2$.} \label{location_orth} \end{figure} By comparing the results for the median of the minimal model size to those for the $95\%$ quantile, we can see that in all cases RFPSIS does pick up 6 to 7 of the important predictors (with the strongest signals) in the beginning of its solution path. The contamination mainly affects the required model size to cover the last one or two important predictors (with the smallest signals), leading to a large variation in the model size needed to pick up these variables. Not surprisingly, the performance decreases for datasets with smaller sample size, lower signal-to-noise ratio and/or higher contamination level.
Although RFPSIS overall performs less well for the small sample size case ($n=200$), it is still able to establish a huge dimension reduction when the signal-to-noise ratio is sufficiently high. Including extra vertical outliers in the data also only affects the important variables at the end of the solution path. \section{Final Model Selection} \label{sec: BIC} The RFPSIS procedure above sequences the predictors in order of importance. After sequencing the predictors, the goal is to find a model $\mathcal{M}_{(q)}$ with size $q$ of order $O(n^{\eta})$ ($0<\eta<1$) that ideally covers all the important predictors. A popular criterion to determine the final model size is the general Bayesian Information Criterion (BIC) \begin{equation} \text{BIC}(\mathcal{M}) = \log \text{RSS}(\mathcal{M}) + \mathcal{P}(k,n,p) \end{equation} where $\text{RSS}(\mathcal{M}) = \|\boldsymbol{y}-\widehat{\boldsymbol{y}}\|_E^2$ is the sum of squared residuals corresponding to the fitted model and $\mathcal{P}(k,n,p)$ is a penalty term which depends on the number of predictors $k$ in the model, the sample size $n$ and the dimension $p$. Compared to AIC, BIC includes the sample size dependent factor $\log(n)$ in the penalty term and therefore penalizes model complexity more heavily, which results in more parsimonious models. Since $\text{RSS}(\mathcal{M})$ involves all observations, the general BIC criterion is not robust. Therefore, we consider robust adaptations of this criterion to select the final model. For each of the solutions $\mathcal{M}_{(k)}$ ($k=1,\dots,\tilde{k}_{\max}$) in the solution path $\mathbb{M}$, we robustly regress $\tilde{\boldsymbol{y}}$ on $\tilde{\boldsymbol{X}}_{\mathcal{M}_{(k)}}$, using solely the observations in $\mathcal{I}_2$. Since we have already obtained the marginal slope estimates, we apply a multiple regression M-estimator with these marginal coefficient estimates and the S-scale of the resulting residuals as the initial values, rather than fully calculating the MM-estimator from scratch. In this way, we obtain a huge reduction in computation time because we avoid having to calculate the time-consuming initial S-estimator. To avoid the over-identification problem in the multiple regression M-estimator, we set $\tilde{k}_{\max} \leqslant n/2$. Let us denote the resulting coefficient estimates by $\hat{\theta}^{(k)}_j$ ($j=1,\ldots,k$, $k=1,\ldots,\tilde{k}_{\max}$). For each of these models, we then calculate a weighted sum of squared residuals, given by \begin{equation} \text{WRSS}_{(k)}=\sum_{i \in \mathcal{I}_2} w^{(k)}_i (\tilde{y}_i - \sum_{j=1}^{k}\hat{\theta}^{(k)}_j\tilde{x}_{ij})^2, \end{equation} where $w^{(k)}_i$ is the weight given by the M-estimator for the observations in $\mathcal{I}_2$ and $\tilde{x}_{ij}$ is the value of the $j$th predictor in $\tilde{\boldsymbol{X}}_{\mathcal{M}_{(k)}}$ for observation $i$. Note that observations not in $\mathcal{I}_2$ are thus given weight zero. The final model can then be selected by minimizing one of the following criteria \begin{eqnarray} \text{BIC}(\mathcal{M}_{(k)}) &=& \log \text{WRSS}_{(k)} + |\mathcal{M}_{(k)}|n^{-1}\log(n), \label{BIC}\\ \text{EBIC}(\mathcal{M}_{(k)}) &=& \log \text{WRSS}_{(k)} + |\mathcal{M}_{(k)}|n^{-1}(\log(n) + \log(p)), \label{EBIC}\\ \text{FPBIC}(\mathcal{M}_{(k)}) &=& \log \text{WRSS}_{(k)} + |\mathcal{M}_{(k)}|n^{-1}\log(n)\log(p), \label{FPBIC} \end{eqnarray} where \eqref{BIC} is a robust adaptation of the original BIC and \eqref{EBIC} belongs to the extended BIC family~\citep{Chen2008} which favors sparser models than BIC.
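For illustration, once the M-estimation residuals and weights for the observations in $\mathcal{I}_2$ are available, the three criteria in Eqs.~\eqref{BIC}--\eqref{FPBIC} are cheap to evaluate. The following minimal Python sketch (the residual and weight arrays are assumed to come from the robust fit described above) computes them for a model of size $k$.
\begin{verbatim}
import numpy as np

def robust_bics(resid, weights, k, n, p):
    """BIC, EBIC and FPBIC of a size-k model, computed from the
    M-estimation residuals and weights (weight zero outside I_2)."""
    wrss = np.sum(weights * resid**2)          # WRSS_(k)
    pen = k / n
    bic   = np.log(wrss) + pen * np.log(n)
    ebic  = np.log(wrss) + pen * (np.log(n) + np.log(p))
    fpbic = np.log(wrss) + pen * np.log(n) * np.log(p)
    return bic, ebic, fpbic
\end{verbatim}
The selected model then corresponds to the size $k$ that minimizes the chosen criterion over $k=1,\dots,\tilde{k}_{\max}$.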
FPBIC uses a penalty term which selects even more parsimonious models than both BIC and EBIC~\citep{FPSIS}. Asymptotically, BIC, EBIC and FPBIC are equivalent when $p=\mathcal{O}(\exp(n^\xi))$ ($0<\xi<1$). The multiple regression models fitted by M-estimators generally yield more accurate coefficient estimates than the marginal models. Hence, these coefficient estimates can be used to reorder the predictors in order of importance. For each model $\mathcal{M}_{(k)}$, we thus reorder the coefficient estimates $\hat{\theta}^{(k)}_j$ in order of decreasing absolute value. These reordered coefficients and their corresponding predictors are denoted by $\hat{\theta}^{(k)}_{(j)}$ and $\boldsymbol{X}^{(k)}_{(j)}$ ($j=1,\ldots,k$, $k=1,\ldots,\tilde{k}_{\max}$), respectively. Each of the robust general BIC criteria can also be calculated for these reordered sequences, and will be denoted as R-BIC, R-EBIC and R-FPBIC, respectively. That is, for $l=1,\dots,k$ we calculate the weighted sum of squared residuals as \begin{equation} \text{WRSS}_{(kl)} = \sum_{i \in \mathcal{I}_2} w^{(k)}_i (\tilde{y}_i - \sum_{j=1}^{l}\hat{\theta}^{(k)}_{(j)}\tilde{x}^{(k)}_{i(j)})^2, \end{equation} where $\tilde{x}^{(k)}_{i(j)}$ denotes the value of the reordered predictor $\boldsymbol{X}^{(k)}_{(j)}$ for observation $i$. The final model is determined by minimizing \begin{equation} \text{R-BIC}(\mathcal{M}_{(kl)}) = \log \text{WRSS}_{(kl)} + |\mathcal{M}_{(kl)}|n^{-1}\log(n), \end{equation} or \begin{equation} \text{R-EBIC}(\mathcal{M}_{(kl)}) = \log \text{WRSS}_{(kl)} + |\mathcal{M}_{(kl)}|n^{-1}(\log(n)+\log(p)), \end{equation} or \begin{equation} \text{R-FPBIC}(\mathcal{M}_{(kl)}) = \log \text{WRSS}_{(kl)} + |\mathcal{M}_{(kl)}|n^{-1}\log(n)\log(p). \end{equation} To evaluate these six criteria, we investigate their average performance over 200 datasets generated according to the designs discussed in Subsection~\ref{sec:perf}. For the model selected by each of these criteria we report both the average number of truly important predictors in the model (TP) and the average number of falsely selected predictors (FP). Tables~\ref{BIC_10000_n200_d2} and~\ref{BIC_10000_n400_d2} contain the results for $n=200$ and $n=400$ with $\tilde{k}_{\max}=100$, respectively. From these tables we can see that FPBIC and R-FPBIC select the models with the smallest false positive rate, but these models also miss more important predictors than the other criteria. The penalty term proposed in~\citep{FPSIS} thus tends to select models that are too sparse in practice. The four other criteria are generally able to produce better screening results, with a high number of true positives and a small number of false positives for the regular data. Their performance improves for larger sample size and higher signal-to-noise ratio. Among these criteria, R-BIC selects the most important predictors, but at the cost of selecting more noise predictors. Interestingly, R-EBIC not only attains a number of true positives similar to or larger than that of BIC/EBIC, but at the same time also a smaller number of false positives when the signal-to-noise ratio is sufficiently high ($c=3$ or $c=5$ in our simulations). This shows that reordering the predictors according to the multiple regression coefficient estimates before computing the selection criterion indeed improves the selection performance. When we have a coherent data set with a strong signal and a sparse model is highly preferred, we recommend using R-EBIC. However, if only a noisy data set is available, R-BIC may be preferred to avoid missing too many important predictors. \begin{table}[ht!]
\footnotesize \centering \renewcommand\arraystretch{1.25} \scalebox{0.85}{ \begin{tabular}{c|c|c|c|cc|cc|cc|cc|cc|cc} \hline \multirow{2}{*}{} & \multirow{2}{*}{eps} & \multirow{2}{*}{c} & \multirow{2}{*}{LMV} & \multicolumn{2}{c|}{BIC} & \multicolumn{2}{c|}{EBIC} & \multicolumn{2}{c|}{FPBIC} & \multicolumn{2}{c|}{R-BIC} & \multicolumn{2}{c|}{R-EBIC} & \multicolumn{2}{c}{R-FPBIC}\tabularnewline \cline{5-16} & & & & TP & FP & TP & FP & TP & FP & TP & FP & TP & FP & TP & FP\tabularnewline \hline \multirow{3}{*}{clean} & \multirow{3}{*}{0} & 1 & \multirow{3}{*}{no} & 3.63 & 4.54 & 2.70 & 1.01 & 0.97 & 0.11 & 4.55 & 19.04 & 2.83 & 0.83 & 0.95 & 0.11\tabularnewline & & 3 & & 5.45 & 4.36 & 4.51 & 0.70 & 1.52 & 0.01 & 6.36 & 12.60 & 5.29 & 0.33 & 1.54 & 0.01\tabularnewline & & 5 & & 6.01 & 4.83 & 5.23 & 1.01 & 2.02 & 0.00 & 6.88 & 9.62 & 6.13 & 0.23 & 2.49 & 0.00\tabularnewline \hline \multirow{12}{*}{\shortstack{PC \\ +LMG}} & \multirow{6}{*}{5} & 1 & \multirow{3}{*}{no} & 3.55 & 4.71 & 2.67 & 1.04 & 0.93 & 0.12 & 4.51 & 19.84 & 2.88 & 0.80 & 0.92 & 0.11\tabularnewline & & 3 & & 5.48 & 4.30 & 4.56 & 0.87 & 1.43 & 0.01 & 6.36 & 13.27 & 5.22 & 0.31 & 1.47 & 0.01\tabularnewline & & 5 & & 6.06 & 5.44 & 5.28 & 1.03 & 1.98 & 0.01 & 6.88 & 10.44 & 5.97 & 0.21 & 2.34 & 0.00\tabularnewline \cline{4-16} & & 1 & \multirow{3}{*}{yes} & 3.05 & 4.21 & 2.33 & 1.13 & 1.00 & 0.14 & 3.09 & 4.12 & 2.43 & 0.85 & 0.93 & 0.12\tabularnewline & & 3 & & 4.63 & 3.19 & 4.00 & 0.79 & 1.50 & 0.01 & 4.79 & 2.86 & 4.30 & 0.34 & 1.49 & 0.00\tabularnewline & & 5 & & 5.22 & 3.19 & 4.55 & 0.90 & 1.95 & 0.01 & 5.31 & 2.17 & 4.92 & 0.22 & 1.97 & 0.00\tabularnewline \cline{2-16} & \multirow{6}{*}{20} & 1 & \multirow{3}{*}{no} & 3.47 & 5.27 & 2.38 & 1.03 & 0.92 & 0.14 & 4.35 & 21.71 & 2.61 & 0.78 & 0.90 & 0.14\tabularnewline & & 3 & & 5.17 & 4.22 & 4.23 & 0.82 & 1.37 & 0.01 & 6.15 & 14.27 & 4.79 & 0.34 & 1.36 & 0.01\tabularnewline & & 5 & & 5.74 & 4.60 & 4.83 & 0.89 & 1.71 & 0.00 & 6.65 & 9.92 & 5.66 & 0.20 & 1.99 & 0.00\tabularnewline \cline{3-16} & & 1 & \multirow{3}{*}{yes} & 2.75 & 4.30 & 2.05 & 1.15 & 0.95 & 0.15 & 2.83 & 4.52 & 2.17 & 1.09 & 0.89 & 0.16\tabularnewline & & 3 & & 4.34 & 3.17 & 3.59 & 0.75 & 1.41 & 0.02 & 4.43 & 3.04 & 3.92 & 0.35 & 1.34 & 0.01\tabularnewline & & 5 & & 4.90 & 2.91 & 4.31 & 0.87 & 1.65 & 0.01 & 5.02 & 2.39 & 4.62 & 0.26 & 1.67 & 0.00\tabularnewline \hline \multirow{12}{*}{\shortstack{PC \\ +LMB}} & \multirow{6}{*}{5} & 1 & \multirow{3}{*}{no} & 3.49 & 5.20 & 2.65 & 1.13 & 0.93 & 0.12 & 4.33 & 17.94 & 2.72 & 0.92 & 0.90 & 0.12\tabularnewline & & 3 & & 5.34 & 4.81 & 4.34 & 0.83 & 1.37 & 0.01 & 6.26 & 14.26 & 5.00 & 0.33 & 1.36 & 0.01\tabularnewline & & 5 & & 5.90 & 5.38 & 4.96 & 0.97 & 1.82 & 0.00 & 6.77 & 11.44 & 5.91 & 0.23 & 2.06 & 0.00\tabularnewline \cline{3-16} & & 1 & \multirow{3}{*}{yes} & 2.28 & 4.34 & 1.77 & 1.35 & 0.83 & 0.29 & 2.33 & 4.61 & 1.83 & 1.18 & 0.78 & 0.26\tabularnewline & & 3 & & 3.77 & 2.96 & 3.14 & 0.80 & 1.41 & 0.09 & 3.81 & 2.65 & 3.41 & 0.52 & 1.23 & 0.04\tabularnewline & & 5 & & 4.22 & 2.62 & 3.72 & 0.85 & 1.66 & 0.07 & 4.28 & 2.23 & 4.00 & 0.33 & 1.60 & 0.02\tabularnewline \cline{2-16} & \multirow{6}{*}{20} & 1 & \multirow{3}{*}{no} & 2.64 & 4.94 & 1.82 & 1.14 & 0.79 & 0.22 & 3.41 & 23.61 & 1.91 & 1.03 & 0.76 & 0.25\tabularnewline & & 3 & & 4.20 & 3.87 & 3.11 & 0.74 & 1.07 & 0.04 & 5.44 & 18.80 & 3.56 & 0.46 & 1.07 & 0.04\tabularnewline & & 5 & & 4.71 & 3.87 & 3.74 & 0.69 & 1.16 & 0.02 & 5.98 & 16.58 & 4.34 & 0.30 & 1.17 & 0.02\tabularnewline \cline{4-16} & & 1 & 
\multirow{3}{*}{yes} & 1.32 & 4.74 & 0.93 & 1.62 & 0.49 & 0.54 & 1.31 & 5.47 & 0.95 & 1.47 & 0.51 & 0.49\tabularnewline & & 3 & & 2.28 & 3.20 & 1.82 & 1.15 & 0.88 & 0.27 & 2.35 & 3.48 & 1.96 & 0.92 & 0.89 & 0.20\tabularnewline & & 5 & & 2.68 & 2.84 & 2.23 & 1.06 & 1.06 & 0.21 & 2.72 & 3.12 & 2.41 & 0.70 & 1.01 & 0.12\tabularnewline \hline \multirow{12}{*}{\shortstack{OC \\ +LMG\\/LMB}} & \multirow{6}{*}{5} & 1 & \multirow{3}{*}{no} & 3.48 & 5.06 & 2.46 & 0.96 & 0.92 & 0.14 & 4.34 & 21.02 & 2.68 & 0.81 & 0.90 & 0.14\tabularnewline & & 3 & & 5.26 & 4.68 & 4.24 & 0.81 & 1.42 & 0.01 & 6.28 & 14.31 & 4.90 & 0.29 & 1.40 & 0.01\tabularnewline & & 5 & & 5.94 & 5.77 & 4.99 & 1.15 & 1.78 & 0.00 & 6.85 & 11.53 & 5.94 & 0.22 & 2.23 & 0.00\tabularnewline \cline{4-16} & & 1 & \multirow{3}{*}{yes} & 2.84 & 4.47 & 2.09 & 1.08 & 0.96 & 0.15 & 2.93 & 4.93 & 2.21 & 0.93 & 0.91 & 0.14\tabularnewline & & 3 & & 4.38 & 3.14 & 3.69 & 0.76 & 1.46 & 0.02 & 4.53 & 3.03 & 3.96 & 0.30 & 1.37 & 0.01\tabularnewline & & 5 & & 4.99 & 2.95 & 4.32 & 0.78 & 1.85 & 0.01 & 5.12 & 2.20 & 4.72 & 0.24 & 1.93 & 0.01\tabularnewline \cline{2-16} & \multirow{6}{*}{20} & 1 & \multirow{3}{*}{no} & 2.79 & 4.76 & 1.84 & 1.04 & 0.82 & 0.21 & 3.72 & 25.64 & 1.96 & 0.95 & 0.81 & 0.21\tabularnewline & & 3 & & 4.53 & 4.36 & 3.35 & 0.69 & 1.14 & 0.05 & 5.80 & 17.51 & 3.96 & 0.35 & 1.14 & 0.04\tabularnewline & & 5 & & 5.19 & 5.37 & 4.02 & 0.85 & 1.29 & 0.02 & 6.26 & 14.73 & 5.06 & 0.28 & 1.35 & 0.02\tabularnewline \cline{3-16} & & 1 & \multirow{3}{*}{yes} & 1.36 & 4.31 & 1.01 & 1.48 & 0.53 & 0.52 & 1.34 & 5.71 & 1.05 & 1.32 & 0.55 & 0.45\tabularnewline & & 3 & & 2.44 & 2.99 & 1.90 & 0.99 & 0.92 & 0.25 & 2.49 & 3.90 & 2.09 & 0.79 & 0.88 & 0.19\tabularnewline & & 5 & & 2.85 & 2.67 & 2.33 & 0.97 & 1.11 & 0.17 & 2.89 & 3.03 & 2.50 & 0.58 & 1.07 & 0.08\tabularnewline \hline \end{tabular} } \caption{\small The model selection performance of the robust modified BIC criteria for $p=10000$, $n=200$, $d=2$, and all the contamination schemes.} \label{BIC_10000_n200_d2} \end{table} \begin{table}[ht!] 
\footnotesize \centering \renewcommand\arraystretch{1.25} \scalebox{0.85}{ \begin{tabular}{c|c|c|c|cc|cc|cc|cc|cc|cc} \hline \multirow{2}{*}{} & \multirow{2}{*}{eps} & \multirow{2}{*}{c} & \multirow{2}{*}{LMV} & \multicolumn{2}{c|}{BIC} & \multicolumn{2}{c|}{EBIC} & \multicolumn{2}{c|}{FPBIC} & \multicolumn{2}{c|}{R-BIC} & \multicolumn{2}{c|}{R-EBIC} & \multicolumn{2}{c}{R-FPBIC}\tabularnewline \cline{5-16} & & & & TP & FP & TP & FP & TP & FP & TP & FP & TP & FP & TP & FP\tabularnewline \hline \multirow{3}{*}{clean} & \multirow{3}{*}{0} & 1 & \multirow{3}{*}{no} & 6.13 & 4.51 & 5.47 & 0.71 & 2.29 & 0.02 & 6.24 & 4.16 & 5.70 & 0.42 & 2.35 & 0.02\tabularnewline & & 3 & & 7.25 & 2.71 & 6.91 & 0.90 & 5.26 & 0.02 & 7.34 & 1.65 & 7.20 & 0.13 & 5.56 & 0\tabularnewline & & 5 & & 7.50 & 2.52 & 7.26 & 1.02 & 6.38 & 0.12 & 7.62 & 1.31 & 7.52 & 0.05 & 6.78 & 0\tabularnewline \hline \multirow{12}{*}{PC+LMG} & \multirow{6}{*}{5} & 1 & \multirow{3}{*}{no} & 6.05 & 4.12 & 5.41 & 0.85 & 2.30 & 0.02 & 6.17 & 4.08 & 5.60 & 0.50 & 2.40 & 0.01\tabularnewline & & 3 & & 7.23 & 2.56 & 6.88 & 0.74 & 5.24 & 0.02 & 7.35 & 1.64 & 7.19 & 0.07 & 5.57 & 0.00\tabularnewline & & 5 & & 7.45 & 2.20 & 7.26 & 0.98 & 6.33 & 0.12 & 7.58 & 1.19 & 7.47 & 0.04 & 6.72 & 0.00\tabularnewline \cline{4-16} & & 1 & \multirow{3}{*}{yes} & 5.81 & 4.47 & 5.20 & 0.94 & 2.46 & 0.02 & 5.86 & 4.15 & 5.39 & 0.51 & 2.41 & 0.02\tabularnewline & & 3 & & 6.99 & 2.53 & 6.69 & 0.92 & 5.00 & 0.07 & 7.09 & 1.49 & 6.93 & 0.15 & 5.40 & 0.00\tabularnewline & & 5 & & 7.26 & 2.62 & 7.04 & 1.15 & 6.09 & 0.13 & 7.36 & 1.05 & 7.28 & 0.07 & 6.57 & 0.00\tabularnewline \cline{2-16} & \multirow{6}{*}{20} & 1 & \multirow{3}{*}{no} & 5.88 & 4.06 & 5.30 & 0.78 & 2.10 & 0.02 & 5.99 & 4.06 & 5.50 & 0.48 & 2.11 & 0.02\tabularnewline & & 3 & & 7.07 & 2.27 & 6.70 & 0.75 & 4.99 & 0.04 & 7.21 & 1.60 & 6.99 & 0.10 & 5.20 & 0.00\tabularnewline & & 5 & & 7.35 & 2.36 & 7.10 & 0.78 & 6.11 & 0.08 & 7.50 & 1.22 & 7.35 & 0.06 & 6.50 & 0.00\tabularnewline \cline{3-16} & & 1 & \multirow{3}{*}{yes} & 5.61 & 4.40 & 4.94 & 0.99 & 2.41 & 0.03 & 5.68 & 4.22 & 5.15 & 0.54 & 2.34 & 0.02\tabularnewline & & 3 & & 6.82 & 2.51 & 6.48 & 0.76 & 4.72 & 0.02 & 6.92 & 1.47 & 6.78 & 0.14 & 5.06 & 0.00\tabularnewline & & 5 & & 7.10 & 2.32 & 6.89 & 1.11 & 5.79 & 0.12 & 7.17 & 1.02 & 7.10 & 0.12 & 6.29 & 0.00\tabularnewline \hline \multirow{12}{*}{PC+LMB} & \multirow{6}{*}{5} & 1 & \multirow{3}{*}{no} & 6.07 & 4.66 & 5.35 & 0.83 & 2.28 & 0.01 & 6.15 & 4.44 & 5.61 & 0.45 & 2.30 & 0.01\tabularnewline & & 3 & & 7.09 & 2.27 & 6.80 & 0.86 & 5.20 & 0.04 & 7.31 & 1.85 & 7.11 & 0.16 & 5.56 & 0.00\tabularnewline & & 5 & & 7.43 & 2.57 & 7.18 & 0.82 & 6.20 & 0.12 & 7.58 & 1.27 & 7.47 & 0.06 & 6.67 & 0.00\tabularnewline \cline{3-16} & & 1 & \multirow{3}{*}{yes} & 5.25 & 4.43 & 4.66 & 1.00 & 2.56 & 0.03 & 5.30 & 4.06 & 4.89 & 0.63 & 2.36 & 0.02\tabularnewline & & 3 & & 6.49 & 2.65 & 6.16 & 1.03 & 4.77 & 0.08 & 6.58 & 2.03 & 6.40 & 0.12 & 4.98 & 0.00\tabularnewline & & 5 & & 6.83 & 2.39 & 6.59 & 1.02 & 5.66 & 0.18 & 6.89 & 1.25 & 6.83 & 0.08 & 6.05 & 0.00\tabularnewline \cline{2-16} & \multirow{6}{*}{20} & 1 & \multirow{3}{*}{no} & 5.16 & 4.62 & 4.45 & 1.02 & 1.41 & 0.02 & 5.33 & 5.47 & 4.67 & 0.77 & 1.41 & 0.02\tabularnewline & & 3 & & 6.51 & 2.74 & 6.07 & 0.87 & 3.75 & 0.02 & 6.74 & 2.97 & 6.35 & 0.20 & 3.91 & 0.00\tabularnewline & & 5 & & 6.91 & 2.93 & 6.53 & 0.96 & 4.88 & 0.03 & 7.04 & 1.97 & 6.87 & 0.13 & 5.31 & 0.00\tabularnewline \cline{4-16} & & 1 & \multirow{3}{*}{yes} & 4.00 & 5.08 & 3.44 & 1.25 & 1.60 & 
0.11 & 4.09 & 4.96 & 3.53 & 0.99 & 1.30 & 0.08\tabularnewline & & 3 & & 5.31 & 3.27 & 4.82 & 0.90 & 2.97 & 0.05 & 5.39 & 2.74 & 5.09 & 0.33 & 3.01 & 0.01\tabularnewline & & 5 & & 5.65 & 2.81 & 5.34 & 1.08 & 3.79 & 0.07 & 5.71 & 2.14 & 5.55 & 0.26 & 3.99 & 0.00\tabularnewline \hline \multirow{12}{*}{OC} & \multirow{6}{*}{5} & 1 & \multirow{3}{*}{no} & 5.96 & 4.20 & 5.33 & 0.75 & 2.16 & 0.02 & 6.09 & 4.14 & 5.53 & 0.46 & 2.21 & 0.02\tabularnewline & & 3 & & 7.13 & 2.72 & 6.74 & 0.75 & 5.06 & 0.03 & 7.26 & 1.77 & 7.08 & 0.10 & 5.36 & 0.00\tabularnewline & & 5 & & 7.39 & 2.61 & 7.11 & 0.78 & 6.21 & 0.10 & 7.55 & 1.37 & 7.42 & 0.06 & 6.66 & 0.00\tabularnewline \cline{4-16} & & 1 & \multirow{3}{*}{yes} & 5.62 & 4.56 & 4.92 & 0.86 & 2.39 & 0.03 & 5.70 & 4.22 & 5.14 & 0.52 & 2.37 & 0.02\tabularnewline & & 3 & & 6.86 & 2.52 & 6.53 & 0.89 & 4.82 & 0.05 & 6.97 & 1.47 & 6.82 & 0.14 & 5.07 & 0.00\tabularnewline & & 5 & & 7.19 & 2.60 & 6.98 & 1.22 & 5.90 & 0.12 & 7.28 & 1.06 & 7.20 & 0.08 & 6.42 & 0.00\tabularnewline \cline{2-16} & \multirow{6}{*}{20} & 1 & \multirow{3}{*}{no} & 5.33 & 4.49 & 4.54 & 0.84 & 1.61 & 0.03 & 5.52 & 5.72 & 4.79 & 0.65 & 1.60 & 0.03\tabularnewline & & 3 & & 6.74 & 3.58 & 6.23 & 0.85 & 4.17 & 0.03 & 7.07 & 3.23 & 6.67 & 0.16 & 4.48 & 0.00\tabularnewline & & 5 & & 7.16 & 4.11 & 6.69 & 1.21 & 5.40 & 0.09 & 7.41 & 2.43 & 7.20 & 0.13 & 5.92 & 0.00\tabularnewline \cline{3-16} & & 1 & \multirow{3}{*}{yes} & 4.10 & 4.12 & 3.61 & 1.14 & 1.83 & 0.12 & 4.13 & 3.86 & 3.72 & 0.81 & 1.59 & 0.07\tabularnewline & & 3 & & 5.39 & 2.76 & 5.06 & 0.88 & 3.43 & 0.07 & 5.46 & 2.07 & 5.29 & 0.26 & 3.45 & 0.00\tabularnewline & & 5 & & 5.80 & 2.47 & 5.51 & 1.02 & 4.16 & 0.13 & 5.90 & 1.65 & 5.74 & 0.11 & 4.56 & 0.00\tabularnewline \hline \end{tabular} } \caption{\small The model selection performance of the robust modified BIC criteria for $p=10000$, $n=400$, $d=2$, and all the contamination schemes.} \label{BIC_10000_n400_d2} \end{table} \section{Real Data Analysis} We analyze a dataset which contains gene expression measurements of 31099 genes on eye tissues from 120 12-week-old male F2 rats. The data is available at \url{https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE5680}. The gene coded as TRIM32 is of particular interest for its causal effect on the Bardet-Biedl syndrome. As in~\citep{GSE2006}, the 18976 genes which exhibit at least a two-fold variation in expression level are included for analysis. It is believed that TRIM32 is associated with a small number of other genes. We consider a multiple regression with TRIM32 as response to identify these genes, which results in an ultra-high dimensional regression problem. To identify the most important genes, we apply the RFPSIS method of Section~\ref{sec:RFPSIS} with $h=[(n-d+2)/2]$ for maximal robustness. The variables are first standardized using their median and $Q_n$ scale estimate. Based on criterion \eqref{PC_Select}, the number of factors is estimated to be 4. The robust Yeo-Johnson transformation selects $\lambda=0$, so a logarithmic transformation is applied on the orthogonal distances. The histogram of both the $\text{\small OD}_i$ and $\log(\text{\small OD}_i)$ are shown in Figure~\ref{hist_t}. After applying the logarithmic transformation, the orthogonal distances can clearly be approximated much better by a normal distribution. 
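As an aside, the robust standardization step used here is straightforward to reproduce. The sketch below (in Python; a naive $O(n^2)$ implementation of the $Q_n$ estimator of Rousseeuw and Croux, with the usual consistency constant $2.2219$ for normal data) is one way to carry it out.
\begin{verbatim}
import numpy as np
from itertools import combinations

def qn_scale(x, c=2.2219):
    """Naive Q_n scale: c times the k-th order statistic of the
    pairwise distances, with h = n//2 + 1 and k = h*(h-1)/2."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    h = n // 2 + 1
    k = h * (h - 1) // 2
    dists = np.sort([abs(a - b) for a, b in combinations(x, 2)])
    return c * dists[k - 1]

def robust_standardize(X):
    """Columnwise (x - median)/Q_n standardization."""
    X = np.asarray(X, dtype=float)
    med = np.median(X, axis=0)
    scale = np.array([qn_scale(col) for col in X.T])
    return (X - med) / scale
\end{verbatim}
For large $p$, a faster algorithm for $Q_n$ would be preferred in practice; the quadratic version above only serves to make the definition explicit.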
Based on the corresponding diagnostic plot, shown in Figure \ref{diagnostic_t}, we can see that observations 80 and 95 are identified as OC outliers, while there are also 21 observations identified as PC outliers. To examine these outliers further, we compare the measurements of all genes in the analysis for the clean observations to the PC and OC leverage points in Figure \ref{matplot_95}. From these plots we can see that the OC outliers show more variation than the remaining data. Hence, these plots indeed confirm that the OC outliers identified in the diagnostic plot show a behavior that is different from the majority. \begin{figure}[ht!] \centering \begin{minipage}{0.45\textwidth} \small (a) \end{minipage} \begin{minipage}{0.45\textwidth} \small (b) \end{minipage} \begin{minipage}{0.45\textwidth} \centering \includegraphics[width= 6.5 cm]{GSE_Hist_OD.eps} \end{minipage} \begin{minipage}{0.45\textwidth} \centering \includegraphics[width= 6.5 cm]{GSE_Hist_OD_t.eps} \end{minipage} \caption{\small The histograms of $\text{\small OD}$ and $\protect \log(\text{\small OD})$ for the rat genome data.} \label{hist_t} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width = 6.5 cm]{GSE_Diagnostic_05.eps} \caption{\small The diagnostic plot of the rat genome data showing the clean observations ($\protect \bullet$), the PC outliers ({$\protect \MyDiamond[draw=blue,fill=blue]$}), and the OC outliers ({\color{red} $\protect \blacktriangle$}).} \label{diagnostic_t} \end{figure} \begin{figure}[ht!] \centering \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=4.5 cm]{GSE_matplot_clean.png} \end{minipage} \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=4.5 cm]{GSE_matplot_score.png} \end{minipage} \\ \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=4.5 cm]{GSE_matplot_80.png} \end{minipage} \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=4.5 cm]{GSE_matplot_95.png} \end{minipage} \caption{\small The plot of the original variables for the clean observations, the PC outliers, and the OC outliers (obs. 80 and 95) in the rat genome data.} \label{matplot_95} \end{figure} RFPSIS applied to the full dataset, denoted by {\it rat1}, identified 11 of the PC outliers as bad leverage points, while the other 10 PC outliers are considered to be good leverage points, and thus are included in the variable screening. For comparison, we also consider two reduced datasets. We call {\it rat2} the dataset which contains all the observations except the extreme outlier (obs. 80) identified in Figure~\ref{diagnostic_t}. Finally, {\it rat3} is the reduced dataset obtained by removing the 2 OC outliers as well as the 11 bad leverage PC outliers identified by RFPSIS. We then apply SIS and FPSIS to all three datasets and compare the results with those of RFPSIS on the full dataset ({\it rat1}). We thus obtain 7 solution paths. For convenience, we denote by (FP)SIS({\it rat1}), (FP)SIS({\it rat2}) and (FP)SIS({\it rat3}) the solution path that is obtained when applying (FP)SIS to dataset {\it rat1}, {\it rat2} and {\it rat3}, respectively. To compare how successfully SIS, FPSIS and RFPSIS select the most relevant predictors, we calculate for each solution path the smallest attainable median absolute 10-fold cross-validation prediction error. Note that the 10-fold cross-validation prediction errors, denoted by 10-fold-MAPE, are averages over 100 random splits of the data.
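One plausible implementation of this evaluation measure is sketched below (in Python; \texttt{fit} and \texttt{predict} stand for a user-supplied robust regression method, e.g., an MM-fit, and are assumptions of this sketch rather than part of the procedure itself).
\begin{verbatim}
import numpy as np
from sklearn.model_selection import KFold

def ten_fold_mape(fit, predict, X, y, n_rep=100, seed=0):
    """Median absolute 10-fold CV prediction error,
    averaged over n_rep random splits of the data."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_rep):
        kf = KFold(n_splits=10, shuffle=True,
                   random_state=int(rng.integers(10**6)))
        abs_err = np.empty(len(y))
        for tr, te in kf.split(X):
            model = fit(X[tr], y[tr])
            abs_err[te] = np.abs(y[te] - predict(model, X[te]))
        out.append(np.median(abs_err))
    return float(np.mean(out))
\end{verbatim}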
Hence, for each of the 7 solution paths, we regress the response, TRIM32, on the first $k$ ($k=1,\ldots,50$) variables in the path using MM-estimators. For each solution path, the smallest mean 10-fold-MAPE among the 50 models is reported in Table \ref{gse_cross_mse_05_t}, and Table \ref{gse_cross_k_05_t} contains the corresponding model size $k$, i.e. the number of predictors in the model with smallest mean 10-fold-MAPE. \begin{table}[ht!] \footnotesize \centering \renewcommand\arraystretch{1.2} \scalebox{0.95}{ \begin{tabular}{c|ccccccc} \hline & RFPSIS & SIS ({\scriptsize \it rat1}) & SIS ({\scriptsize \it rat2}) & SIS ({\scriptsize \it rat3}) & FPSIS ({\scriptsize \it rat1}) & FPSIS ({\scriptsize \it rat2}) & FPSIS ({\scriptsize \it rat3})\tabularnewline \hline {\it rat1} & 0.3470 & 0.4710 & 0.4478 & 0.4456 & 0.5775 & 0.5640 & 0.4375 \tabularnewline {\it rat2} & 0.3416 & 0.4651 & 0.4369 & 0.3996 & 0.5736 & 0.5396 & 0.4348 \tabularnewline {\it rat3} & \bf{0.3359} & 0.4064 & 0.4064 & 0.3608 & 0.4597 & 0.4969 & \bf{0.3375} \tabularnewline \hline \end{tabular} } \caption{\small The smallest mean 10-fold-MAPE fitting the first $k$ ($k=1,\protect \ldots,50$) variables in the 7 solution paths and evaluated on the three rat datasets.} \label{gse_cross_mse_05_t} \end{table} \begin{table}[ht!] \footnotesize \centering \renewcommand\arraystretch{1.2} \scalebox{0.95}{ \begin{tabular}{c|ccccccc} \hline & RFPSIS & SIS ({\scriptsize \it rat1}) & SIS ({\scriptsize \it rat2}) & SIS ({\scriptsize \it rat3}) & FPSIS ({\scriptsize \it rat1}) & FPSIS ({\scriptsize \it rat2}) & FPSIS ({\scriptsize \it rat3})\tabularnewline \hline {\it rat1} & 8 & 4 & 14 & 7 & 9 & 25 & 7 \tabularnewline {\it rat2} & 8 & 13 & 14 & 7 & 18 & 25 & 7 \tabularnewline {\it rat3} & {\bf 8} & 4 & 4 & 5 & 10 & 8 & {\bf 12} \tabularnewline \hline \end{tabular} } \caption{\small The model sizes with respect to the smallest mean 10-fold-MAPE fitting the first $k$ ($k=1,\protect \ldots,50$) variables in the 7 solution paths and evaluated on the three rat datasets.} \label{gse_cross_k_05_t} \end{table} Comparing the result of RFPSIS with the results of SIS and FPSIS, we can see from Table \ref{gse_cross_mse_05_t} that RFPSIS and FPSIS({\it rat3}) produce the smallest 10-fold-MAPE's for all three datasets, showing that both methods select the most relevant variables. Since we are particularly interested in predicting well the non-outliers, let us consider the 10-fold-MAPE evaluated on the reduced dataset {\it rat3}. Clearly, RFPSIS gives the best 10-fold-MAPE which is 0.3359 for the regular observations. FPSIS({\it rat3}) gives a very close result which is 0.3375 for the regular observations in {\it rat3}, but the optimal model contains 12 predictors rather than only 8 for the model selected by RFPSIS as can be seen from Table \ref{gse_cross_k_05_t}. \begin{table}[ht!] 
\footnotesize \centering \hspace{6 pt} \renewcommand\arraystretch{1.2} \begin{tabular}{c|cccc} \hline & MM-LASSO-50 & MM-LASSO-full & (R-)BIC & (R-)EBIC/(R-)FPBIC \tabularnewline \hline k & 21.66 (3.35) & 62.94 (20.41) & 4 & 1 \tabularnewline {10-fold-MAPE} & 0.2934 (0.02) & 0.4814 (0.35) & 0.4894 & 0.4568 \tabularnewline \hline \end{tabular} \caption{\small The model size and 10-fold-MAPE evaluated on the clean observations ({\it rat3}) of the models selected by MM-LASSO-50, MM-LASSO-full, and the six BIC criteria from the RFPSIS solution path.} \label{gse_cross_BIC_05_t} \end{table} Comparing (FP)SIS({\it rat3}) with (FP)SIS({\it rat1}) and (FP)SIS({\it rat2}), we can conclude that removing the potential outliers significantly improves the predictions for the regular observations in {\it rat3}. Moreover, the smaller 10-fold-MAPE of FPSIS({\it rat3}) compared to SIS({\it rat3}) indicates that there exists correlation among the predictors which allows FPSIS to perform better. When there are outliers, FPSIS({\it rat1}) and FPSIS({\it rat2}) give much worse results than SIS({\it rat1}) and SIS({\it rat2}) since the outliers in these datasets distort the correlation structure estimated by FPSIS. On the other hand, RFPSIS can correctly estimate the correlation structure of the regular data from the full dataset and thus yields results similar to those of FPSIS applied to the reduced dataset {\it rat3}. We also applied MM-LASSO~\citep{Smucler2015} to the full dataset. First we considered all 18976 variables and then we only considered the first 50 variables from the solution path given by RFPSIS. We denote the two models by MM-LASSO-full and MM-LASSO-50, respectively. Due to the randomness of 5-fold-cross-validation for the selection of the optimal value of the regularization parameter, we ran MM-LASSO 50 times for each setting. Then, we computed the 10-fold-MAPE when fitting MM-regression with the selected predictors on {\it rat3}. The average number of selected predictors and the resulting 10-fold-MAPE's, with their standard errors, are displayed in Table~\ref{gse_cross_BIC_05_t}. It can be seen that the MM-LASSO-50 model yields a smaller 10-fold-MAPE than the model with the first 8 variables from the solution path of RFPSIS obtained previously. To obtain this result, MM-LASSO-50 selects much larger models with around 22 predictors. MM-LASSO-50 yields very stable results as can be seen from the small standard error for the 10-fold-MAPE. On the other hand, MM-LASSO-full selects far more variables, which results in much larger and more unstable 10-fold-MAPE's. Moreover, applying MM-LASSO on the dataset with all 18976 variables is much more time-consuming. For example, it took on average (over the 50 runs) 10.28 minutes to run MM-LASSO-full in \texttt{R}~\citep{Rcore} on an Intel Core i7-4790 X64 at 3.6 GHz, while running MM-LASSO-50 only required 36.58 seconds on average and the initial RFPSIS screening took 42.84 seconds. This illustrates that for ultrahigh-dimensional data, initial screening also yields a big advantage both in terms of performance and computation time when penalized regression methods such as MM-LASSO are used. In Section~\ref{sec: BIC} we noticed that the BIC type criteria tend to be too parsimonious when the signal-to-noise ratio in the data is low. When using $\tilde{k}_\text{max} = 50$, EBIC and FPBIC, and their re-ordered versions, only select the first predictor in the solution path for this dataset.
BIC and R-BIC yield a slightly less parsimonious model consisting of the first four predictors in the solution path. We again focus on the prediction errors for the regular observations in the reduced dataset ({\it rat3}). The model size and 10-fold-MAPE for the models selected by the different BIC criteria are shown in Table \ref{gse_cross_BIC_05_t}. It can be seen that the model with only the first predictor produces a smaller 10-fold-MAPE than the model with the first four predictors selected by BIC and R-BIC. Furthermore, we found that the first predictor in the solution path was consistently selected by MM-LASSO-50 across the 50 runs. Therefore, we can conclude that the model obtained by (R-)EBIC and (R-)FPBIC identified the most important predictor, which can be a good starting point for further analysis. \section{Conclusions} Sure Independence Screening has recently attracted a lot of research interest due to its simplicity and speed. It has been proven that SIS performs well with orthogonal or weakly dependent predictors and a sufficiently large sample size. However, its performance deteriorates greatly when there is substantial correlation among the predictors. To handle this problem, FPSIS removes the correlations by projecting the original variables onto the orthogonal complement of the subspace spanned by the latent factors which capture the correlation structure. However, FPSIS is based on classical estimators which are nonrobust and thus cannot resist the adverse influence of outliers. In this paper we investigated the effect of both vertical outliers and leverage points in the original multiple regression model. Our proposed RFPSIS estimates the latent factors by an LTS procedure. We considered leverage points due to both orthogonal complement outliers and score outliers in the subspace for the factor model, and examined their effect on the marginal regressions with factor profiled variables. It turned out that only good leverage points caused by PC outliers do not affect the variable screening results. Hence, RFPSIS only includes this type of good leverage points in the marginal screening to increase efficiency. Moreover, to reduce the influence of potential outliers, the marginal coefficients are estimated using MM-estimators. Our simulation studies showed that RFPSIS is almost as accurate as FPSIS on regular datasets, and at the same time can resist the adverse influence of all types of outliers, while both SIS and FPSIS fail in the presence of outliers. In Section~\ref{sec: BIC}, we investigated the performance of six BIC criteria to select a final model from the solution path of RFPSIS. Our results indicate that R-EBIC, the EBIC criterion applied to the reordered variable sequence, generally yields the best model. However, for very noisy datasets it may lead to over-sparsified models. Instead of using these information criteria, regularized robust regression methods can be used to select the final model, as shown in the real data analysis. Determining the final model after the initial screening of the most promising predictors is a problem that deserves more attention to further improve selection results. Similarly to FPSIS, RFPSIS is built on the strong assumption that the correlations among the predictors can be fully modeled by a few latent factors. In this case the correlations among the predictors can be removed by factor profiling.
A similar technique has been applied to de-correlate covariates in high-dimensional sparse regression~\citep{Fan2016_FAD} and it was stated that Factor Adjusted Decorrelation (FAD) pays no price in the case of weakly correlated or uncorrelated covariates. When there are weakly correlated predictors, i.e., weak correlations among the predictors that cannot be removed by factor profiling, procedures similar to those used to improve SIS, e.g., Iterative SIS~\citep{SIS} or Conditional SIS~\citep{CSIS}, can be applied to the robustly profiled variables in RFPSIS to improve its performance. This could be an interesting topic for future research. While RFPSIS can effectively handle all types of outlying observations, it does require a majority of regular observations in the dataset. However, for high-dimensional data it is not always realistic to assume that there is a majority of completely clean observations. Therefore, alternative contamination models can be considered, such as the {\it fully independent contamination model} which assumes that each of the variables is independently contaminated by some fraction of outliers~\citep{Propout}. In high-dimensional data, even a small fraction of such cellwise outliers in each variable leads to a majority of observations that is contaminated in at least one of its components. As in~\citep{CoLTSPCA}, a componentwise least trimmed squares objective function can be used to estimate the correlation structure. Such a loss function does not require the existence of a majority of regular observations. In future work, we will extend RFPSIS by combining this estimator of the factor structure with the use of marginal regressions for variable screening to handle data with cellwise outliers. In high-dimensional data analysis, another difficult situation might be that the outliers are hard to detect due to the presence of abundant noisy variables or due to the complex correlation structure of the features. Hence, searching for a lower dimensional projection subspace, called a High Contrast Subspace (HiCS) in~\citep{HiCS}, in which outliers can be distinguished from the regular data, or selecting features which contribute most to the outlyingness of observations, as done by Coupled Unsupervised Feature Selection (CUFS)~\citep{CUFS}, would be crucial to detect outliers. In these cases, combining feature selection for outlier detection and for sparse estimation can be very challenging, and deserves more research attention. \section*{Acknowledgments} This research was supported by grant C16/15/068 of International Funds KU Leuven and COST Action IC1408 CRoNoS. Their support is gratefully acknowledged. \bibliographystyle{plainnat} \clearpage
\section{Introduction} Off-axis mirror systems provide additional degrees of freedom for the design of more compact and accurate imaging systems compared to rotationally symmetric ones. The study of their aberration behaviour is of great interest to the optics community. The mathematical characterization of aberrations has been investigated in \cite{Moore,MooreErr}. Possible approaches to derive explicit aberration expansions are given in \cite{Chang}, where only confocal arrangements are considered, or in \cite{Korsch} where the starting design is rotationally symmetric. Recently, explicit expressions for plane-symmetric reflective optical systems have been determined using a matrix formalism \cite{Caron, Caron2, Caron3}. Here, the matrix method for paraxial ray-tracing is extended to accommodate for higher degree polynomial terms and aberrations are composed by manipulating the respective matrix coefficients. In this work we will describe the Lie algebraic method needed to obtain analytical expressions for the aforementioned aberration terms of arbitrary order. Starting from a chosen ray, which we will define as the optical axis ray (OAR), we will follow its path through the system from object to image plane. At the image plane the aberration terms are given as polynomials in $(\bm{q},\bm{p})$, which are the phase-space variables of our optical system \cite{Dragt82,Wolf2004}. In this Hamiltonian formulation, the propagation and reflection maps are symplectic, i.e., volume preserving in phase-space. Our goal is to approximate these maps while preserving symplecticity. Applying these approximating maps to the initial coordinates will deliver the desired aberration expansion terms. The Lie approach provides the tools to systematically determine the approximating map for one single plane-symmetric mirror. The description of a complete optical system is then reduced to a concatenation of maps. This process is described and handled by the Lie theory. Compared to the matrix formalism in \cite{Caron, Caron2, Caron3}, the mathematical framework of the Lie method reduces the number of coefficients necessary to be stored. Additionally, the phenomenon of low-order aberrations composing into higher order contributions follows directly from the mathematical framework. This is also known as the distinction between intrinsic and extrinsic aberrations \cite{Sasian}, where low-order aberrations of individual surfaces (intrinsic) combine into higher order contributions to the complete system (extrinsic). In Section \ref{sec::analytic} we will describe the explicit maps that govern ray propagation and reflection in a plane-symmetric reflective optical system. A brief summary of the essential Lie algebraic notions is given in Section \ref{sec::LieTools}, even though we refer to \cite{DragtFinn, Wolf2004} for a more in depth description. Section \ref{sec::fundElem} contains the steps needed to construct the approximation maps and the calculations up to third-order aberrations. Three examples to validate the presented method are given in Section \ref{sec::Examples}, where both existing theoretical and computational results are reproduced. \section{Analytic Ray-Tracing}\label{sec::analytic} In this section we discuss the mappings needed to ray-trace light rays through a reflective system composed of plane-symmetric, i.e., symmetric with respect to the $yz$-plane, optical surfaces; see Figure \ref{fig::reflectionTilt}. In order to follow a ray path from object to image plane, we describe three transformations. 
First, the incoming ray is propagated from the object plane to the reflecting surface and the reflected ray from the surface to the image plane. Second, we describe the reflection of the ray at a plane-symmetric mirror. Finally, the rotation of the coordinate system is described, such that the $z$-axis remains aligned with the optical axis ray (OAR) before and after reflection. This implies that the considered $z$-axis will be broken into line segments. Once these three mappings have been described, they are concatenated to describe a single mirror, which we call the \textit{fundamental element}, according to the following five steps: $i)$ propagation from object plane to mirror; $ii)$ rotation of the optical axis and corresponding coordinate system by an angle $\theta$, equal to the incidence angle of the OAR; $iii)$ reflection of the rays; $iv)$ second rotation of the coordinate system by the angle $\theta$; and $v)$ propagation from mirror to image plane. Position coordinates of an arbitrary ray before and after reflection are projected along the ray onto the two planes passing through the point of impact of the OAR and orthogonal to it; see Figure \ref{fig::reflectionTilt}. The incoming (outgoing) plane, which is orthogonal to the incoming (outgoing) OAR, will be called the incoming (outgoing) standard screen and the incoming (outgoing) position and direction coordinates will be evaluated with respect to it. The incoming standard screen is the $xy$-plane and the outgoing one is the $x'y'$-plane, where the $x$- and $x'$-axes coincide; see Figure \ref{fig::reflectionTilt}. In the remaining part of this section the three elementary maps are described independently of each other. Eventually, we concatenate them to describe a complete mirror element as previously described. Each ray is characterized by its position $\bm{q}=(q_x,q_y)$ and its direction $\bm{p}=(p_x,p_y)$ at a standard screen. As such, we use the phase-space coordinates $(\bm{q},\bm{p})$ as our ray coordinates, cf. \cite{Barion:22}. Note that the coordinates of the OAR are at the origin of phase-space both before and after reflection, i.e., the OAR will have coordinates $\bm{q}=\bm{0}=\bm{q}'$ and $\bm{p}=\bm{0}=\bm{p}'$. In the descriptions to follow, phase-space coordinates $(\bm{q},\bm{p})$ are mapped to primed coordinates $(\bm{q}',\bm{p}')$ by the respective mappings. \begin{figure}[!htb] \centering \includegraphics[width=0.5\textwidth]{reflectionTilt.pdf} \caption{Point $A$ of the incoming ray in $xy$-coordinates is mapped to point $B$ of the outgoing ray in the rotated $x'y'$-coordinate system. The axes $x$ and $x'$ are perpendicular to the $yz$-plane (not shown) and the incidence angle of the OAR is equal to $\theta$.} \label{fig::reflectionTilt} \end{figure} \subsection{Propagation} We introduce the Hamiltonian $H(\bm{p})$ governing free propagation of light in a medium of constant refractive index $n$ \cite{DragtFoundations86,Wolf2004,Barion:22}, i.e., \begin{equation} \label{eq::Hamiltonian} H(\bm{p})=-\sigma\sqrt{n^2-\vert\bm{p}\vert^2}=-\sigma p_z, \end{equation} where $\bm{p}=(p_x,p_y)$ and $p_z$ are the direction momenta -- direction cosines times the refractive index $n$ -- along the respective axes. The variable $\sigma=\pm 1$ is positive for forward travelling rays and negative for backwards propagating rays. Since we are only considering reflections, the refractive index of our medium (air/vacuum) is $n=1$.
The distance measured along the optical axis, which coincides with the $z$-axis, serves as the evolution parameter of the Hamiltonian system related to Eq.~\eqref{eq::Hamiltonian}: \begin{equation} \label{eq::HamSys} \dot{\bm{q}}=\frac{\partial H}{\partial\bm{p}}=-\frac{\bm{p}}{H},\qquad\dot{\bm{p}}=-\frac{\partial H}{\partial\bm{q}}=\bm{0}. \end{equation} The solution to the Hamiltonian system Eq.~\eqref{eq::HamSys} with initial conditions $(\bm{q},\bm{p})$, after propagating a distance $d$ along the optical axis ray, reads: \begin{equation} \label{eq::hamSolution} \bm{q}'=\bm{q}-d\frac{\bm{p}}{H(\bm{p})},\quad \bm{p}'=\bm{p}. \end{equation} \subsection{Reflection} Next, we consider the law of reflection in vector form regardless of the coordinate system \cite{Welford1986} \begin{equation} \label{eq::lawOfReflection} \hat{\bm{k}}_\mathrm{r}=\hat{\bm{k}}_\mathrm{i}-2(\hat{\bm{k}}_\mathrm{i}\cdot\hat{\bm{n}})\hat{\bm{n}}, \end{equation} where $\hat{\bm{k}}_\mathrm{r}$ is the unit direction vector of the reflected ray, $\hat{\bm{k}}_\mathrm{i}$ the unit direction vector of the incoming ray and $\hat{\bm{n}}$ the unit outer normal of the reflector at the impact point. Here, $\hat{}$ (hat) indicates that the vector has length one and with the term `outer' we mean opposite to the incoming ray direction, i.e., $\hat{\bm{k}}_\mathrm{i}\cdot\hat{\bm{n}}<0$. Let the reflector be described by $z=\zeta(\bm{q})$; then the outer normal $\hat{\bm{n}}$ of the surface at point $(\bm{q},\zeta(\bm{q}))$ reads \begin{equation} \label{eq::usedNormal} \hat{\bm{n}}=\frac{(\nabla\zeta(\bm{q}),-1)}{\sqrt{1+\vert\nabla\zeta(\bm{q})\vert^2}}. \end{equation} The incoming and outgoing ray directions are $\hat{\bm{k}}_\mathrm{i}=(\bm{p},p_z)/n$ and $\hat{\bm{k}}_\mathrm{r}=(\bm{p}',p_z')/n$, respectively. The vector $\hat{\bm{k}}_\mathrm{r}$ is calculated by inserting Eq.~\eqref{eq::usedNormal} in Eq.~\eqref{eq::lawOfReflection}. This way we get for the reflected momenta $(\bm{p}',p_z')$: \begin{subequations} \label{eq::reflectedMomentum} \begin{equation} \bm{p}'=\bm{p}-2\frac{\nabla\zeta(\bar{\bm{q}})}{1+\vert\nabla\zeta(\bar{\bm{q}})\vert^2}(\bm{p}\cdot\nabla\zeta(\bar{\bm{q}})-p_z), \end{equation} \begin{equation} p_z'=p_z+\frac{2}{1+\vert\nabla\zeta(\bar{\bm{q}})\vert^2}(\bm{p}\cdot\nabla\zeta(\bar{\bm{q}})-p_z), \end{equation} \end{subequations} where $(\bar{\bm{q}},\zeta(\bar{\bm{q}}))$ is the intersection point of the incoming ray and the reflector. The intersection point $\bar{\bm{q}}$ is related to the screen coordinate before reflection $\bm{q}$ and the one after reflection $\bm{q}'$ by \cite{Wolf2004,DragtFoundations86, Barion:22} \begin{equation} \label{eq::qBar} \bar{\bm{q}}=\bm{q}+\zeta(\bar{\bm{q}})\frac{\bm{p}}{p_z},\quad \bm{q}'=\bar{\bm{q}}-\zeta(\bar{\bm{q}})\frac{\bm{p}'}{p_z'}. \end{equation} Eq.~\eqref{eq::qBar} gives an implicit relation for $\bar{\bm{q}}$ which needs to be solved iteratively; see \cite{Barion:22,SaadWolf1986,DragtFoundations86}. The Eqs.~\eqref{eq::qBar} are again solutions of the Hamiltonian system Eq.~\eqref{eq::HamSys}, but now propagating a distance $d=\zeta(\bar{\bm{q}})$. \subsection{Rotation of the Standard Screen} After propagation and reflection, we discuss the necessary steps to rotate our coordinates according to the OAR. We describe an arbitrary rotation by an angle $\theta$ that rotates our standard screen, see Figure~\ref{fig::rotPosition}. For a single reflector two rotations of angle $\theta$ are used, where eventually $\theta$ is the incidence angle of the OAR.
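Before detailing these rotations, we remark that the propagation and reflection maps above admit a direct numerical implementation. The following minimal sketch (in Python/NumPy; the surface $\zeta$ and its gradient $\nabla\zeta$ are user-supplied callables, and the fixed-point iteration for $\bar{\bm{q}}$ in Eq.~\eqref{eq::qBar} is assumed to converge) mirrors Eqs.~\eqref{eq::hamSolution} and \eqref{eq::reflectedMomentum}--\eqref{eq::qBar}.
\begin{verbatim}
import numpy as np

def propagate(q, p, d, sigma=1.0, n=1.0):
    """Free propagation over a distance d along the optical axis:
    q' = q - d p/H with H = -sigma sqrt(n^2 - |p|^2)."""
    H = -sigma * np.sqrt(n**2 - p @ p)
    return q - d * p / H, p

def reflect(q, p, zeta, grad_zeta, sigma=1.0, n=1.0, tol=1e-12):
    """Reflection off z = zeta(q): solve qbar = q + zeta(qbar) p/p_z
    by fixed-point iteration, then apply the reflection formulas."""
    pz = sigma * np.sqrt(n**2 - p @ p)
    qbar = np.array(q, dtype=float)
    for _ in range(100):
        qnew = q + zeta(qbar) * p / pz
        if np.linalg.norm(qnew - qbar) < tol:
            qbar = qnew
            break
        qbar = qnew
    g = grad_zeta(qbar)
    s = 2.0 * (p @ g - pz) / (1.0 + g @ g)
    p_out = p - s * g          # reflected transverse momentum
    pz_out = pz + s            # reflected longitudinal momentum
    q_out = qbar - zeta(qbar) * p_out / pz_out
    return q_out, p_out
\end{verbatim}
On the OAR itself ($\bm{q}=\bm{p}=\bm{0}$, with $\nabla\zeta(\bm{0})=\bm{0}$) both maps leave the screen coordinates $(\bm{q},\bm{p})$ at the origin, as required.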
The first rotation brings the $z$-axis of the incoming coordinate system from being aligned with the OAR to being aligned with the surface normal at the point of intersection of the OAR. The surface equation $z=\zeta(\bm{q})$ is defined in this coordinate system aligned with its normal and therefore has zero gradient at the origin, i.e., $\nabla\zeta(\bm{0})=\bm{0}$, which is the point of impact of the OAR. We then apply the reflection mapping and subsequently rotate to the outgoing coordinate system aligned with the reflected OAR, see Figure~\ref{fig::rotationCoordSys}. Let us define positive rotations when the $y$-axis is rotated towards the $z$-axis (clock-wise). In the starting coordinate system the $z$-axis is aligned with the incoming OAR and as such we can call it the incoming coordinate system. We consider surfaces with plane-symmetry with respect to the $yz$-plane and as such the rotations are around the $x$-axis. The rotation mapping of the screen around the $x$-axis for the phase-space coordinates can be found as a Lie transformation \cite{Wolf2004}. Here we present an equivalent derivation. The momentum coordinates $p_x,p_y,p_z$ are rotated into the coordinates $p_x',p_y',p_z'$ according to the well-known rotation matrix \begin{equation} \label{eq::rotMomentum} \begin{pmatrix} p_x' \\ p_y' \\ p_z' \end{pmatrix} =\begin{pmatrix} 1 & 0 & 0\\ 0 & \cos\theta & \sin\theta\\ 0 & -\sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} p_x \\ p_y \\ p_z \end{pmatrix}. \end{equation} The expressions for the position coordinates are more complex. In fact, recall that we map the intersection points of the light rays with the (rotated) standard screens; see Figure \ref{fig::rotPosition}. \begin{figure}[!htb] \centering \includegraphics[width=0.4\textwidth]{rotPosition.pdf} \caption{Upon rotation of axes we map the position coordinate $q_y$ to the position coordinate $q_y'$ relating to the same ray (red).} \label{fig::rotPosition} \end{figure} As such, let us first fix the parametrization of the ray and the normal equation of the tilted surface. The path of the ray can be parametrized by \begin{equation} \binom{\bm{q}}{0}+\lambda\binom{\bm{p}}{p_z},\quad\lambda\in\mathbb{R}. \label{eq::rayEq} \end{equation} After the first rotation, the equation of the rotated standard screen, with normal $(0,\sin\theta,-\cos\theta)$ and passing through $(0,0,0)$, reads \begin{equation} y\sin\theta-z\cos\theta=0. \label{eq::tiltedScreenEq} \end{equation} By substituting the parametrization \eqref{eq::rayEq} into the rotated standard screen equation \eqref{eq::tiltedScreenEq} we can solve for $\lambda$ to get the point of intersection. We get \begin{equation} \lambda=\frac{q_y\sin\theta}{p_z\cos\theta-p_y\sin\theta}. \label{eq::lambda} \end{equation} Substituting the value in Eq.~\eqref{eq::lambda} in the parametrization \eqref{eq::rayEq} gives us the coordinates of the point of intersection of the considered ray and the tilted screen. The last step is to derive the position coordinates with respect to the rotated coordinate system, which corresponds to dividing the $y$-coordinate by $\cos\theta$. The map for positive rotation around the $x$-axis of the position coordinates $\bm{q}$ by an angle $\theta$ reads: \begin{subequations} \label{eq::rotPosition} \begin{equation} q_x'=\frac{q_x\,p_z\cos\theta-(q_x\,p_y-q_y\,p_x)\sin\theta}{p_z\cos\theta-p_y\sin\theta},\\ \end{equation} \begin{equation} q_y'=\frac{q_y\,p_z}{p_z\cos\theta-p_y\sin\theta}. \end{equation} \end{subequations} In Eq. 
\eqref{eq::rotPosition} the condition $p_z\cos\theta-p_y\sin\theta=0$ implies that the considered ray is parallel to the rotated plane and as such will not intersect the plane. \subsection{The Fundamental Map}\label{sec::fundElemLast} With the transformations described in Eqs.~\eqref{eq::reflectedMomentum}-\eqref{eq::rotMomentum} and \eqref{eq::rotPosition} we can rotate the coordinate system such that the $z$-axis is aligned with the surface normal, reflect the incoming rays and rotate the system again to align the $z$-axis with the outgoing OAR. If propagation before and after the surface is added to this map, we will call it the \textit{fundamental map}. This composition of transformations can be expanded up to the desired order in terms of the phase-space coordinates $(\bm{q},\bm{p})$. After reflection, $\sigma$ in the Hamiltonian described in Eq.~\eqref{eq::Hamiltonian} changes sign. An intuitive way to understand this is to recall Eq.~\eqref{eq::Hamiltonian} with $\sigma=1$ where $H=-p_z$. By the condition $\hat{\bm{k}}_\mathrm{i}\cdot\hat{\bm{n}}<0$ that we imposed at reflection, the reflected OAR travels in the same $z$-direction as the surface normal, which is opposite to that of the incoming OAR. This would lead to negative propagation distances. Since we prefer to consider forward moving rays, we opt for dealing with a left-handed coordinate system and align the $z'$-axis in Figure \ref{fig::rotationCoordSys} with the direction of the reflected OAR after the second rotation. It can be verified that this change does not influence the form of our rotation map, and the reflection map also remains unchanged. The only important caveats are that the reflective surface must always be described in the coordinate system of the incoming OAR and that angles are positive when the $y$-axis rotates towards the $z$-axis. \begin{figure}[!htb] \centering \includegraphics[width=0.5\textwidth]{rotCoordSys.pdf} \caption{Definition of incoming $xyz$ and outgoing $x'y'z'$-coordinate systems. The auxiliary system denoted by the index $s$ is where the surface equation is defined.} \label{fig::rotationCoordSys} \end{figure} We now concatenate the mappings described in Eqs.~\eqref{eq::reflectedMomentum}-\eqref{eq::rotMomentum} and \eqref{eq::rotPosition} into a single reflection plus rotation mapping. We define this composition of transformations by $\mathcal{S}(\theta)$. Let $\mathcal{R}(\theta)$ denote the rotation mapping by an angle of $\theta$ and $\mathcal{T}$ the reflection mapping. Then, we can concisely describe the map $\mathcal{S}(\theta)$ as \begin{equation} \label{eq::reflWithRotMap} \mathcal{S}(\theta)=\mathcal{R}(\theta)\,\mathcal{T}\,\mathcal{R}(\theta). \end{equation} In Figure \ref{fig::reflectionTilt}, we have that $\mathcal{S}(\theta)$ maps $A$ to $B$. This definition of $\mathcal{S}(\theta)$ is necessary to apply the Lie algebraic method. Note that the surface equation is given with respect to the coordinate system denoted by the index $s$ in Figure~\ref{fig::rotationCoordSys}. To conclude, concatenating $\mathcal{S}(\theta)$ with propagation in object and image-space, $\mathcal{P}_{\mathrm{ob}}$ and $\mathcal{P}_{\mathrm{im}}$ respectively, constitutes the fundamental map $\mathcal{M}$ necessary for our description of the optical system \begin{equation} \label{eq::fundMap} \mathcal{M}=\mathcal{P}_{\mathrm{im}}\mathcal{S}(\theta)\mathcal{P}_{\mathrm{ob}}.
\end{equation} \section{Lie Algebraic Tools}\label{sec::LieTools} With the help of the Lie algebraic method it is possible to construct operators that reproduce the actions of propagation, reflection and rotation. These operators enable us to derive closed form expressions for the aberration components of an arbitrary optical system. A more detailed description of the Lie algebraic tools used in this work can be found in \cite{Barion:22,Wolf2004,DragtFoundations86}. Here, we briefly introduce the main concepts. The space of functions on phase-space becomes a Lie algebra when endowed with the Poisson bracket $[\cdot,\cdot]$. The Poisson bracket of two functions $f(\bm{q},\bm{p}) ,g(\bm{q},\bm{p})$ is defined as \begin{equation} \label{eq::poissonBracket} [f,g]=\frac{\partial f}{\partial \bm{q}}\boldsymbol{\cdot}\frac{\partial g}{\partial \bm{p}}-\frac{\partial f}{\partial \bm{p}}\boldsymbol{\cdot}\frac{\partial g}{\partial \bm{q}}. \end{equation} Accordingly, we can associate with each $f$ a Lie operator $[f,\cdot\,]$ that acts on a second function $g$ by taking the Poisson bracket of the two. For example, $[q_1,\cdot\,]=\partial\cdot/\partial p_1$ and for vectors we have $[\bm{q},\cdot\,]=\partial\cdot/\partial \bm{p}$. Using the Poisson bracket, we can associate to each function $f$ on phase-space a mapping $\exp([f,\cdot\,])$, called a Lie transformation, defined as \begin{equation} \label{eq::LieTransformation} \exp([f,\cdot\,])=\sum_{k=0}^\infty \frac{[f,\cdot\,]^k}{k!}, \end{equation} where $[f,\cdot\,]^0=I$ and $[f,\cdot\,]^k=[f,[f,\cdot\,]^{k-1}]$ for $k>1$. Suppose $f$ is only dependent on $\bm{q}$, i.e. $f=f(\bm{q})$, then \begin{equation} \label{eq::LieTransExample} \exp([f(\bm{q}),\cdot\,])\bm{q}=\bm{q}\quad\text{and}\quad\exp([f(\bm{q}),\cdot\,])\bm{p}=\bm{p}+\frac{\partial f}{\partial\bm{q}}. \end{equation} In Eq.~\eqref{eq::LieTransExample} the infinite series is truncated after the first two terms as any subsequent one is equal to zero. Note that Lie transformation are applied component-wise to vectors. A map $(\bm{q},\bm{p})\mapsto (\bm{q}'(\bm{q},\bm{p})$, $\bm{p}'(\bm{q},\bm{p}))$ is said to be a symplectic transformation, if it satisfies \cite{Wolf2004,DragtFinn}: \begin{align} [q'_i,q'_j]&=[q_i,q_j]=0,\nonumber\\ [p'_i,p'_j]&=[p_i,p_j]=0,\label{eq::canonicalTransformation}\\ [q'_i,p'_j]&=[q_i,p_j]=\delta_{ij},\nonumber \end{align} where $\delta_{ij}$ is the Kronecker delta. Symplectic transformations preserve volumes in phase-space. In fact, light propagation, reflection and rotation are all symplectic maps. It can be proven that a mapping defined as in Eq.~\eqref{eq::LieTransformation} is symplectic \cite{DragtFinn}. Conversely, symplectic mappings $\mathcal{M}$ that map the origin to itself, i.e., $\mathcal{M}(\bm{0})=\bm{0}$, can be represented as an infinite concatenation of Lie transformations of the form \begin{equation} \label{eq::thrm2} \mathcal{M}=\exp([g_2,\cdot\,])\exp([g_3,\cdot\,])\cdots, \end{equation} where the generators $g_2,g_3,$ etc.\ are homogeneous polynomials in the variables $(\bm{q},\bm{p})$ of degree $2,3,$ etc.~\cite{DragtFinn}. Here, we omit the concatenation symbol $\circ$, as it is clear from the context that we are concatenating operators. Recall that a homogeneous polynomial $g$ of degree $m$, as in Eq. \eqref{eq::thrm2}, has the following property \begin{equation} g(\lambda\bm{q},\lambda\bm{p})=\lambda^mg(\bm{q},\bm{p})\quad\forall\lambda\in\mathbb{R}. 
\end{equation} The maps for ray propagation and reflection plus rotation are symplectic and map the origin onto itself \cite{Wolf2004,DragtFoundations86,Barion:22}. It is therefore possible to represent them as an infinite, or approximate them by a truncated, concatenation of Lie transformations according to the result in Eq.~\eqref{eq::thrm2} and then rearrange the Lie transformations using additional Lie tools given in Appendix \ref{sec::AddLieTools}, cf. Eq.~\eqref{eq::BCH} and Eq.~\eqref{eq::thrm3}. Our aim is to approximate the fundamental map $\mathcal{M}$ in Eq.~\eqref{eq::fundMap} of a reflector by means of a truncated concatenation of Lie transformations in ascending order, similarly to the structure of Eq.~\eqref{eq::thrm2}. This enables us to clearly distinguish which parts of the map influence which order of aberrations. In fact, generators of order $k$ are directly related to the transverse ray aberrations of order $k-1$ \cite{Barion:22}. Concatenating multiple fundamental maps representing the different mirrors in our system and disregarding terms that lead to higher order aberrations leads to a map describing the complete optical system -- up to the desired order of accuracy in terms of initial phase-space coordinates. \section{The Fundamental Element}\label{sec::fundElem} Free propagation, reflection and rotation are symplectic maps and their combined actions map the origin of phase-space, i.e., the OAR, to itself. As such, it is possible to represent the combined actions of reflection and rotation $\mathcal{S}(\theta)$, see Eq.~\eqref{eq::reflWithRotMap}, in the form of Eq.~\eqref{eq::thrm2}. The polynomials necessary for this representation in terms of Lie transformations are called the \textit{generators} of the map. It is important to consider the complete reflection with rotation map $\mathcal{S}(\theta)$ because this ensures that the origin of phase-space, i.e., our OAR, is mapped onto itself. Hence, we have that $(\mathcal{S}(\theta))(\bm{0})=\bm{0}$ and we can apply the results in Eq.~\eqref{eq::thrm2}. Rotation alone does not map the origin of phase-space onto itself. We subsequently concatenate $\mathcal{S}(\theta)$ with the maps of object and image-space propagation to derive the description of the \textit{fundamental element} of the optical system. The fundamental element represents the physical counterpart of the fundamental map described at the end of Section \ref{sec::analytic}. This fundamental element is the building block of any arbitrary reflecting optical system with plane-symmetry with respect to the $yz$-plane. We restrict our analysis to aberrations of order three and therefore only polynomials up to degree four in Eq.~\eqref{eq::thrm2} are of relevance; see \cite{Barion:22,DragtFoundations86,Wolf2004}. The generators of free propagation for light rays in a medium of refractive index $n=1$ are, up to degree four \cite{Barion:22}, \begin{equation} \label{eq::HamExpansion} h_2(\bm{p})=\frac{1}{2}\vert\bm{p}\vert^2,\quad h_4(\bm{p})=\frac{1}{8}\vert\bm{p}\vert^4. \end{equation} This means that if we want to propagate our physical system with initial condition $(\bm{q},\bm{p})$ over a distance $d$ along the optical axis, then the expression \begin{equation} \label{eq::thirdOrderProp} \binom{\bm{q}'}{\bm{p}'}=\exp(-d[h_2,\cdot\,])\exp(-d[h_4,\cdot\,])\binom{\bm{q}}{\bm{p}}, \end{equation} is equal, up to third-order terms, to the solution given in Eq.~\eqref{eq::hamSolution} \cite{Barion:22,DragtFoundations86,Wolf2004}.
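This equality can be checked symbolically. The following sketch (in Python/sympy, an independent alternative to the computer algebra route used later in this section; the Poisson bracket is the one of Eq.~\eqref{eq::poissonBracket}) applies the truncated Lie transformations of Eq.~\eqref{eq::thirdOrderProp} to $q_x$.
\begin{verbatim}
import sympy as sp

qx, qy, px, py, d = sp.symbols('q_x q_y p_x p_y d')
Q, P = (qx, qy), (px, py)

def pb(f, g):
    """Poisson bracket [f, g] in the phase-space variables (q, p)."""
    return sum(sp.diff(f, u)*sp.diff(g, v) - sp.diff(f, v)*sp.diff(g, u)
               for u, v in zip(Q, P))

def lie_exp(f, g, order=4):
    """Truncated Lie transformation exp([f, .]) g up to 'order' brackets."""
    out, term = g, g
    for k in range(1, order + 1):
        term = pb(f, term) / k
        out = out + term
    return sp.expand(out)

p2 = px**2 + py**2
h2, h4 = p2 / 2, p2**2 / 8     # generators h_2 and h_4 of free propagation

# h2 and h4 depend on p only, so their Lie transformations commute:
qx_prime = lie_exp(-d*h4, lie_exp(-d*h2, qx))
print(qx_prime)   # q_x + d*p_x + d*p_x**3/2 + d*p_x*p_y**2/2
\end{verbatim}
The printed polynomial is precisely the third-order Taylor truncation of $q_x'=q_x-d\,p_x/H(\bm{p})$ from Eq.~\eqref{eq::hamSolution} with $\sigma=n=1$.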
The result of Eq.~\eqref{eq::thirdOrderProp} is therefore sufficiently accurate to investigate third-order aberrations. Note that the polynomials in Eq.~\eqref{eq::HamExpansion} are simply the first two terms in the Taylor expansion of the Hamiltonian defined in Eq.~\eqref{eq::Hamiltonian}.
The mirror equation is given in the coordinate system with its $z$-axis aligned with the surface's normal and is of the form
\begin{equation} \label{eq::surfaceEq} z=\zeta(\bm{q})=\sum_{\substack{2\leq m+n\leq 4 \\ m \text{ even}}}c_{mn}q_x^mq_y^n. \end{equation}
We consider surface terms up to fourth order, since higher order terms do not influence third-order aberrations.
The reflection and rotation mapping $\mathcal{S}(\theta)$ maps $\bm{q},\bm{p}$ to $\bm{q}',\bm{p}'$. First, the rotation by the angle $\theta$ is applied to the incoming ray coordinates, cf. Eqs.~\eqref{eq::rotMomentum},\eqref{eq::rotPosition}. Secondly, reflection acts on these already rotated coordinates, cf. Eqs.~\eqref{eq::reflectedMomentum},\eqref{eq::qBar}. Lastly, a second rotation by $\theta$ maps these coordinates into the final reflected coordinate system. All these transformations -- and their concatenation -- can be expanded in terms of $(\bm{q},\bm{p})$ with the aid of computer algebra software, e.g., Mathematica. The first order expansion of $\mathcal{S}(\theta)$ reads:
\begin{equation} \label{eq::firstOrder} \begin{aligned} q'_x&=q_x,\\ q'_y&=q_y,\\ p'_x&=p_x + 4 \,c_{2 0}\cos(\theta) \,q_x,\\ p'_y&=p_y + 4 \,c_{0 2}\sec(\theta)\,q_y. \end{aligned} \end{equation}
Here, the coefficients $c_{20},c_{02}$, cf. Eq.~\eqref{eq::surfaceEq}, can be related to the radii of curvature of the mirror surface. The polynomial $g_2$ associated with the Lie transformation that generates the linear map in Eq.~\eqref{eq::firstOrder} depends only on $\bm{q}$:
\begin{equation} \label{eq::g2} g_2(\bm{q})=2\,c_{20}\cos(\theta)\,q_x^2+2\,c_{02}\sec(\theta)\,q_y^2. \end{equation}
The Lie transformation generated by Eq.~\eqref{eq::g2} reads
\begin{equation} \label{eq::Lieg2} \binom{\bm{q}'}{\bm{p}'}=\exp([g_2(\bm{q}),\cdot\,])\binom{\bm{q}}{\bm{p}}=\binom{\bm{q}}{\bm{p}+\dfrac{\partial g_2(\bm{q})}{\partial \bm{q}}}. \end{equation}
One can verify that the expression in Eq.~\eqref{eq::Lieg2} is the same as the one in Eq.~\eqref{eq::firstOrder}. Note that, if $g_2$ also depended on $\bm{p}$, it would generate contributions to the $\bm{q}$-coordinates, which is undesired; cf. Eq.~\eqref{eq::firstOrder}.
To initiate a more systematic approach, we define the generators $g_m$ in a more general way:
\begin{equation} \label{eq::polyGeneral} g_m(\bm{q},\bm{p})=\sum_{i+j+k+l=m} a_{ijkl}\,q_x^i\,q_y^j\,p_x^k\,p_y^l,\quad i,j,k,l\in\mathbb{N},\quad i+k\text{ even} \end{equation}
where the condition $i+k$ even stems from the symmetry of the optical system itself. In this notation the functions $g_2,g_3,g_4$ are defined by their coefficients. It is our goal to determine these coefficients such that
\begin{equation} \mathcal{S}(\theta)\overset{(3)}{=}\exp([g_2,\cdot\,])\exp([g_3,\cdot\,])\exp([g_4,\cdot\,]). \label{eq::Sapprox} \end{equation}
The notation $\overset{(3)}{=}$ symbolizes that the truncated concatenation of Lie transformations on the right-hand side (RHS) of Eq.~\eqref{eq::Sapprox} produces the same expressions as the map $\mathcal{S}(\theta)$ up to third-order terms in phase-space coordinates. To derive these coefficients, one has to expand the mapping $\mathcal{S}(\theta)$ up to the order of interest, i.e., 3 in our case.
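As a consistency check at the linear level, the following SymPy lines (an illustration under the same conventions as the previous sketch, now in both transverse dimensions) confirm that the generator $g_2$ of Eq.~\eqref{eq::g2} indeed reproduces the map of Eq.~\eqref{eq::firstOrder}.
\begin{verbatim}
import sympy as sp

qx, qy, px, py = sp.symbols('q_x q_y p_x p_y', real=True)
th, c20, c02 = sp.symbols('theta c_20 c_02', real=True)

def poisson(f, g):
    # two-dimensional Poisson bracket, cf. Eq. (poissonBracket)
    return (sp.diff(f, qx)*sp.diff(g, px) + sp.diff(f, qy)*sp.diff(g, py)
            - sp.diff(f, px)*sp.diff(g, qx) - sp.diff(f, py)*sp.diff(g, qy))

g2 = 2*c20*sp.cos(th)*qx**2 + 2*c02/sp.cos(th)*qy**2  # Eq. (g2), sec = 1/cos

# g2 depends on q only: positions are left unchanged ...
assert poisson(g2, qx) == 0 and poisson(g2, qy) == 0
# ... and momenta are shifted by the gradient of g2, as in Eq. (firstOrder)
assert sp.simplify(poisson(g2, px) - 4*c20*sp.cos(th)*qx) == 0
assert sp.simplify(poisson(g2, py) - 4*c02/sp.cos(th)*qy) == 0
\end{verbatim}
The same bracket routine extends to the higher-degree generators $g_3,g_4$.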
Subsequently, we consider the general form of the generators as given in Eq.~\eqref{eq::polyGeneral} and compute the action of the concatenation of Lie transformations in Eq.~\eqref{eq::Sapprox} on the phase-space variables. Since the coefficients of the generator $g_k$ are fully determined by their contributions to the aberrations of order $k-1$, we can determine the coefficients of the generators in increasing order; see the method described in \cite{Barion:22}. The non-zero coefficients of the generators of the reflection plus rotation mapping $\mathcal{S}(\theta)$ are listed in Table \ref{tbl::reflRotCoeffs} in the form described by Eq.~\eqref{eq::polyGeneral}. \footnotesize \begin{table} \centering \begin{tabular}{l p{11cm}} \toprule Coefficients & Values\\[0.5ex] \midrule $a_{0 2 0 0}$ & $2 \,c_{0 2} \sec(\theta)$ \\ $a_{2 0 0 0}$ & $2 \,c_{2 0} \cos(\theta) $ \\ \midrule $a_{0 2 0 1}$ & $2 \,c_{0 2} \sec(\theta) \tan(\theta)$ \\ $a_{0 3 0 0}$ & $2 \sec(\theta)^2 ( c_{0 3} - 2 \,c_{0 2}^2 \tan(\theta))$\\ $a_{1 1 1 0}$ & $4 \,c_{2 0} \sin(\theta)$ \\ $a_{2 0 0 1}$ & $-2 \,c_{2 0} \sin(\theta)$ \\ $a_{2 1 0 0}$ & $2 ( c_{2 1} - 4 \cos(\theta) \,c_{2 0}^2 \sin(\theta) + 2 \,c_{0 2} \,c_{2 0} \tan(\theta))$ \\ \midrule $a_{0 2 0 2}$ & $-\frac{1}{2} (-1 + 3 \cos(2 \theta)) \,c_{0 2} \sec(\theta)^3$\\ $a_{0 2 2 0}$ & $- c_{0 2} \sec(\theta) + 2 \,c_{2 0} \sin(\theta) \tan(\theta)$ \\ $a_{0 3 0 1}$ & $ 2 \sec(\theta)^4 (- \,c_{0 2}^2 + 3 \cos(2 \theta) \,c_{0 2}^2 + \,c_{0 3} \sin(2 \theta)) $\\ $a_{0 4 0 0}$ & $-\sec(\theta)^5 ( \,c_{0 2}^3 + \cos(2 \theta) (7 \,c_{0 2}^3 - c_{0 4}) - c_{0 4} + 4 \,c_{0 2} \,c_{0 3} \sin(2 \theta)) $ \\ $a_{1 2 1 0}$ & $4 \,c_{0 2} \,c_{2 0} - 2 \,c_{2 0}^2 + 2 \cos(3 \theta) \,c_{2 0}^2 \sec(\theta) + 4 \,c_{2 1} \tan(\theta) $ \\ $a_{2 0 0 2}$ & $-\cos(\theta) \,c_{2 0}$\\ $a_{2 0 2 0}$ & $-\cos(\theta) \,c_{2 0}$ \\ $a_{2 1 0 1}$ & $4 \,c_{0 2} \,c_{2 0}$\\ $a_{2 2 0 0}$ & $2 (-4 \cos(\theta) \,c_{0 2} \,c_{2 0}^2 + 8 \,c_{0 2}^2 \,c_{2 0} \sec(\theta)^3 - 4 \,c_{2 0} \,c_{2 1} \sin(\theta) + \sec(\theta) (-12 \,c_{0 2}^2 \,c_{2 0} + c_{2 2} - 6 \,c_{0 2}^2 \,c_{2 0} \tan(\theta)^2)) $\\ $a_{3 0 1 0}$ & $4 \cos(\theta)^2 \,c_{2 0}^2$ \\ $a_{4 0 0 0}$ & $-8 \cos(\theta)^3 \,c_{2 0}^3 + 2 \cos(\theta) \,c_{4 0} - 2 \,c_{0 2} \,c_{2 0}^2 \sin(\theta) \tan(\theta)$\\ \bottomrule \end{tabular} \caption{Coefficients of $g_2,g_3,g_4$.} \label{tbl::reflRotCoeffs} \end{table} \normalsize We proceed to combine reflection and rotation with propagation before and after the surface. The mapping $\mathcal{M}$ will describe the action of a fundamental element on the rays from object plane coordinates $(\bm{q},\bm{p})$ to image plane coordinates $(\bm{q}',\bm{p}')$. The map $\mathcal{M}$ is, up to fourth degree generators, composed as follows: \begin{multline} \mathcal{M}\overset{(3)}{=}\underbrace{\exp\left(-s_\mathrm{ob}[h_2,\cdot\,]\right)\exp\left(-s_\mathrm{ob}[h_4,\cdot\,]\right)}_{\overset{(3)}{=}\text{propagation from object plane}}\underbrace{\exp([g_2,\cdot\,])\exp([g_3,\cdot\,])\exp([g_4,\cdot\,])}_{\overset{(3)}{=}\mathcal{S}(\theta)}\\ \underbrace{\exp\left(-s_\mathrm{im}[h_2,\cdot\,]\right)\exp\left(-s_\mathrm{im}[h_4,\cdot\,]\right)}_{\overset{(3)}{=}\text{propagation to image plane}}. \label{eq::fundElement} \end{multline} Here, $s_\mathrm{ob},s_\mathrm{im}$ are the object and image distances measured along the OAR in the sagittal plane. 
Although it might appear counter-intuitive, Lie transformations compose left-to-right, i.e., in the same order as the transformations undergone by the ray \cite{Wolf2004}. The object and image distances for the sagittal and tangential planes satisfy the Coddington equations \cite{Braat2019}:
\begin{subequations} \label{eq::coddington} \begin{align} \text{sagittal plane}{:}\quad&\frac{1}{s_\mathrm{ob}}+\frac{1}{s_\mathrm{im}}=-4\,c_{20}\cos(\theta),\\ \text{tangential plane}{:}\quad&\frac{1}{t_\mathrm{ob}}+\frac{1}{t_\mathrm{im}}=-4\,c_{02}\sec(\theta). \end{align} \end{subequations}
We want to reorder and combine the Lie transformations of Eq.~\eqref{eq::fundElement} into three Lie transformations generated by the functions $\tau_2,\tau_3,\tau_4$ such that
\begin{equation} \label{eq::fundElementMod} \mathcal{M}\overset{(3)}{=}\exp([\tau_2,\cdot\,])\exp([\tau_3,\cdot\,])\exp([\tau_4,\cdot\,]). \end{equation}
This allows us to separate the linear part of the mapping, generated by $\tau_2$, from the higher order parts generated by $\tau_3,\tau_4$ that induce aberrations. Again, equality up to third-order expansions is sufficient for our current work since we are investigating aberrations up to this same order. The functions $\tau_2,\tau_3,\tau_4$ describe the action of a fundamental element up to the expansion order $3$.
To derive the functions $\tau_2,\tau_3,\tau_4$ it is necessary to manipulate the mapping in Eq.~\eqref{eq::fundElement} such that the generators are combined and reordered in ascending order. The procedure has been shown in \cite{Barion:22} and a short example can be found in Appendix \ref{sec::AddLieTools}. The main tools necessary for these calculations are the Baker-Campbell-Hausdorff (BCH) formula \eqref{eq::BCH} and the identity given in Eq.~\eqref{eq::thrm3}.
The Lie transformation generated by the second degree polynomial $\tau_2$ is more conveniently represented by its matrix form $M_G$, which is the product of the matrices of its three components, i.e., object-space propagation, reflection with rotation, and image-space propagation. We call this the Gaussian part of the mapping $\mathcal{M}_G=\exp([\tau_2,\cdot\,])$ and the associated $M_G$ reads
\small
\begin{equation} M_G=\begin{pmatrix} 1 & 0 & s_\mathrm{im} & 0\\ 0 & 1 & 0 & s_\mathrm{im}\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 4\,c_{20}\cos(\theta) & 0 & 1 & 0\\ 0 & 4\,c_{02}\sec(\theta) & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & s_\mathrm{ob} & 0\\ 0 & 1 & 0 & s_\mathrm{ob}\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix}. \end{equation}
\normalsize
The coefficients of the polynomial $\tau_3$ are given in Table \ref{tbl::tau3} analogously to Eq.~\eqref{eq::polyGeneral} with coefficients denoted by $b_{ijkl}$.
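Before turning to $\tau_3$, we note that the Gaussian part lends itself to a direct check: the sagittal Coddington equation in \eqref{eq::coddington} is precisely the condition under which the $(q_x',p_x)$ entry of $M_G$ vanishes, i.e., under which all rays from an axial object point meet in the image point. The following SymPy sketch (again only an illustration) verifies this; note that, unlike Lie transformations, the matrices act right-to-left on column vectors, so object-space propagation is the rightmost factor.
\begin{verbatim}
import sympy as sp

s_ob, s_im, th, c20, c02 = sp.symbols('s_ob s_im theta c_20 c_02', real=True)

def prop(s):
    # free propagation over distance s: q' = q + s p, p' = p
    return sp.Matrix([[1, 0, s, 0], [0, 1, 0, s],
                      [0, 0, 1, 0], [0, 0, 0, 1]])

refl = sp.Matrix([[1, 0, 0, 0], [0, 1, 0, 0],
                  [4*c20*sp.cos(th), 0, 1, 0],
                  [0, 4*c02/sp.cos(th), 0, 1]])

M_G = prop(s_im) * refl * prop(s_ob)   # rightmost factor acts first

# the p_x-dependence of q_x' vanishes iff the sagittal Coddington eq. holds
coeff = sp.expand(M_G[0, 2])  # = s_ob + s_im + 4 c20 cos(theta) s_ob s_im
s_im_sol = sp.solve(sp.Eq(1/s_ob + 1/s_im, -4*c20*sp.cos(th)), s_im)[0]
assert sp.simplify(coeff.subs(s_im, s_im_sol)) == 0
\end{verbatim}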
\begin{table} \centering \begin{tabular}{ll} \toprule Coefficients & Values\\[0.5ex] \midrule $b_{0003}$ & $s_\mathrm{im}^2 \left( a_{0201}-s_\mathrm{im} a_{0300}\right) $\\ $b_{0021}$ & $s_\mathrm{im}^2 \left( a_{1110}+a_{2001}-s_\mathrm{im} a_{2100}\right) $\\ $b_{0102}$ & $s_\mathrm{im} \left(3 s_\mathrm{im} a_{0300}-2 a_{0201}\right) $\\ $b_{0120}$ & $s_\mathrm{im} \left(s_\mathrm{im} a_{2100}- a_{1110}\right) $\\ $b_{0201}$ & $a_{0201}-3 s_\mathrm{im} a_{0300} $\\ $b_{0300}$ & $a_{0300} $\\ $b_{1011}$ & $s_\mathrm{im} \left(2 s_\mathrm{im} a_{2100}- a_{1110}+2 a_{2001}\right) $\\ $b_{1110}$ & $a_{1110}-2 s_\mathrm{im} a_{2100} $\\ $b_{2001}$ & $a_{2001}-s_\mathrm{im} a_{2100} $\\ $b_{2100}$ & $a_{2100}$\\ \bottomrule \end{tabular} \caption{Coefficients of $\tau_3$.} \label{tbl::tau3} \end{table}
The expressions for the coefficients of $\tau_4$ are rather lengthy and not useful for the current discussion, but can be found in Appendix \ref{sec::Appendix} for completeness. We thus have a mapping that describes the fundamental element up to third-order.
\subsection{From Optical Element to Optical System}
To treat optical systems it suffices to concatenate multiple fundamental elements, keeping in mind the sign conventions described at the end of Section~\ref{sec::fundElemLast}. Each intermediate image plane corresponds to the intermediate object plane of the subsequent mirror. Thus, if one fundamental element is described by Eq.~\eqref{eq::fundElementMod}, then multiple elements are a concatenation of Lie transformations of this form. For example, consider a two-mirror system where one mirror is described by the generators $\tau_k$ and the other mirror by the generators $\sigma_k$. Then, the map $\mathcal{M}$ of the complete system, up to third-order contributions, reads
\begin{equation} \mathcal{M}\overset{(3)}{=}\exp([\tau_2,\cdot\,])\exp([\tau_3,\cdot\,])\exp([\tau_4,\cdot\,])\exp([\sigma_2,\cdot\,])\exp([\sigma_3,\cdot\,])\exp([\sigma_4,\cdot\,]). \label{eq::twoMirrorEx} \end{equation}
The coefficients of $\tau_k,\sigma_k$ are completely determined by the geometry of the system according to the expressions for the $b_{ijkl}$. Previously, we have stressed the importance of having the Lie transformations in ascending order. This allows us to separate the contributions to the different (ascending) orders of aberrations. The necessary computations to reorder Eq.~\eqref{eq::twoMirrorEx} rely on the procedure for reordering shown in \cite{Barion:22} and make use of the BCH formula \eqref{eq::BCH} and the results of Eq.~\eqref{eq::thrm3}. During these steps, the composition of low-order aberrations into high-order ones follows directly from the application of the BCH formula; see the sketch at the end of this subsection.
In more complex optical systems the intermediate image planes for the sagittal and tangential rays need not be located at the same point along the OAR. As such, the choice of the propagation distances for each fundamental element may seem unclear. However, regardless of this choice, the sum of the intermediate image distance of surface $j$ and the object distance of surface $j+1$ must always equal the total distance between the two surfaces. Since the propagation mappings commute, see \cite{Barion:22,DragtFoundations86,Wolf2004}, it does not matter what distance is chosen for the image propagation of surface $j$ or the object distance for surface $j+1$ as long as their sum remains equal to the distance between the two surfaces.
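The concatenation and truncation step can be mimicked with a few lines of computer algebra. The sketch below (purely illustrative; the two maps are toy third-order maps, not data of an actual mirror system) composes two truncated phase-space maps in left-to-right order and discards all monomials above the working order, exactly as higher-order terms are disregarded around Eq.~\eqref{eq::twoMirrorEx}.
\begin{verbatim}
import sympy as sp

qx, qy, px, py = sp.symbols('q_x q_y p_x p_y')
z = (qx, qy, px, py)

def truncate(expr, order=3):
    # drop all monomials of total degree > order in the phase-space variables
    poly = sp.Poly(sp.expand(expr), *z)
    return sp.Add(*[c*sp.prod([v**e for v, e in zip(z, mono)])
                    for mono, c in poly.terms() if sum(mono) <= order])

def compose(m1, m2, order=3):
    # apply m1 first, then m2 (left-to-right), truncating the result
    subs = dict(zip(z, m1))
    return tuple(truncate(e.subs(subs, simultaneous=True), order)
                 for e in m2)

# two toy third-order maps standing in for two fundamental elements
m_tau   = (qx + px + qx**3, qy + py,       px,         py)
m_sigma = (qx,              qy + qy*px**2, px + qx**2, py)
print(compose(m_tau, m_sigma))
\end{verbatim}
In the printed result one can observe how the lower-order terms of the first map feed into the higher-order terms of the composition.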
\section{Applications}\label{sec::Examples}
We verify the presented methodology using three examples. We recover the surface expansion coefficients of a spherical ellipsoid for a point-to-point imager and the surface expansion coefficients for a focusing mirror as recently presented in \cite{Caron}. Lastly, we use our proposed method to ray-trace a beam of rays reflected by a biconic mirror and compare with the spot diagram generated using OpticStudio.
The first example will be the problem of perfect point-to-point imaging; see Figure \ref{fig::sphericalEllipse}. Suppose we have an object point on the OAR whose rays are reflected off a surface onto an image point. A spherical ellipsoid with these two points at its foci will result in perfect imaging \cite{Gomez}, i.e., no aberrations will be present. Therefore, if we arbitrarily choose an object and an image point and impose zero aberrations up to third-order for all rays with initial position $\bm{q}^\mathrm{ob}=\bm{0}$, then the solution for the surface coefficients should be the surface expansion terms up to fourth order of the corresponding spherical ellipsoid.
\begin{figure}[!htb] \centering \includegraphics[width=0.4\textwidth]{sphericalEllipse.pdf} \caption{A spherical ellipsoid as perfect imager between its foci. The OAR is in red and the red dashed lines are other rays originating from the object. The object and image planes are represented. The black dashed line is the major axis of the ellipsoid.} \label{fig::sphericalEllipse} \end{figure}
We fix the object distance, image distance and the surface coefficients $c_{20},c_{02}$ to have the desired paraxial properties. Subsequently, the corresponding map given in Eq.~\eqref{eq::fundElementMod} is applied to the initial coordinates $(\bm{0},\bm{p}^\mathrm{ob})$. The expression for the final position coordinates at the image plane $\bm{q}^\mathrm{im}$ is of the form:
\begin{equation} \label{eq::finalPosEllipsoid} \bm{q}^\mathrm{im}=\bm{q}^\mathrm{im}(\bm{0},\bm{p}^\mathrm{ob}). \end{equation}
Eq.~\eqref{eq::finalPosEllipsoid} is a polynomial dependent only on the initial direction $\bm{p}^\mathrm{ob}$ where each monomial coefficient will depend on the chosen parameters and the -- yet undetermined -- higher order coefficients $c_{mn}$ of the reflecting surface; see Eq.~\eqref{eq::surfaceEq}. The requirement of zero aberration, i.e., $\bm{q}^\mathrm{im}=\bm{0}$, simply translates into setting all monomial coefficients in Eq.~\eqref{eq::finalPosEllipsoid} equal to zero. The resulting system of equations will determine the value of the surface expansion coefficients.
For example, if we choose a spherical ellipsoid with semi-major axis $a=20$ and semi-minor axis $b=10$, then with the corresponding initial parameters for the system $s_\mathrm{ob}=s_\mathrm{im}=20$ and $c_{20}=-1/20,c_{02}=-1/80$, we get the following system of equations for the unknown coefficients $c_{mn}$ with $m=0,2,4$ and $3\leq m+n\leq 4$:
\begin{equation} \begin{dcases} 80\, p_x^3 (8000 c_{40}+1)=0,\\ -80\, p_x p_y^2 \left(2400 \sqrt{3} c_{03}+200 \sqrt{3} c_{21}-16000 c_{22}-1\right)=0,\\ 32000\, p_x p_y c_{21}=0,\\ 80\, p_x^2 p_y \left(2400 \sqrt{3} c_{03}+200 \sqrt{3} c_{21}+16000 c_{22}+1\right)=0,\\ 16000 \,p_x^2 c_{21}=0,\\ 80\, p_y^3 (128000 c_{04}+1)=0,\\ 192000 \,p_y^2 c_{03}=0. \end{dcases} \label{eq::ellipsoidSystem} \end{equation}
The solution to Eqs.~\eqref{eq::ellipsoidSystem}, computed with exact arithmetic, gives the surface expansion coefficients shown in Table \ref{tbl::ellipsoidCoeffs}.
\begin{table}[htb!]
\centering \begin{tabular}{l c} \toprule $c_{mn}$ & Values \\ [0.5ex] \midrule $c_{2 1}$ & 0 \\ $c_{0 3}$ & 0 \\ $c_{4 0}$ & $-1/8000$\\ $c_{2 2}$ & $-1/16000$\\ $c_{0 4}$ & $-1/128000$\\ \bottomrule \end{tabular} \caption{Surface expansion coefficients for the spherical ellipsoid defined according to Eq.~\eqref{eq::surfaceEq}.} \label{tbl::ellipsoidCoeffs} \end{table}
The coefficients in Table \ref{tbl::ellipsoidCoeffs} are the same as those we would get by directly expanding the ellipsoid's equation
\begin{equation} \label{eq::ellipsoidEq} z=\zeta(\bm{q})=\frac{b}{a} \sqrt{a^2 - q_y^2 - \frac{a^2}{b^2} q_x^2} - b, \end{equation}
in terms of $q_x,q_y$ around the origin. In fact, the surface equation \eqref{eq::ellipsoidEq} represents the ellipsoid at the point of impact of the OAR with respect to the coordinate system aligned with its normal at that point; see Figure \ref{fig::sphericalEllipse}.
The next example reproduces some of the results given in \cite{Caron}. Here, the authors calculate the surface expansion coefficients for a single mirror where again zero third-order aberrations are imposed with the additional condition that the initial momenta are equal to zero, i.e., $\bm{p}^\mathrm{ob}=\bm{0}$; see Figure~\ref{fig::parabolicRefl}.
\begin{figure}[!htb] \centering \includegraphics[width=0.4\textwidth]{parabolicRefl.pdf} \caption{A focusing reflector for an object at infinity as used in \cite{Caron}.} \label{fig::parabolicRefl} \end{figure}
The initial surface parameters are the effective radius of curvature $R=-200$, cf. \cite{Caron}, for both the sagittal and tangential planes, and an incidence angle of $\theta=-0.2$. The expression for the final position coordinates $\bm{q}^\mathrm{im}$ is now a polynomial in $\bm{q}^\mathrm{ob}$, i.e.,
\begin{equation} \label{eq::finalPosCaron} \bm{q}^\mathrm{im}=\bm{q}^\mathrm{im}(\bm{q}^\mathrm{ob},\bm{0}). \end{equation}
Setting all monomial coefficients of Eq.~\eqref{eq::finalPosCaron} to zero results in the following system of equations for the $c_{mn}$ with $m=0,2,4$ and $3\leq m+n\leq 4$:
\begin{equation} \begin{dcases} \frac{q_x^3 \left(-8 R^3 \cos (\theta) c_{40}+\sec ^2(\theta)-1\right)}{2 R^2}=0,\\ \frac{q_x q_y^2 \sec (\theta) \left(-3 \sec (\theta) \left(2 R^2 (\sin (2 \theta) c_{21}-2 \tan (\theta) c_{03})+\cos (2 \theta)-1\right)-8 R^3 c_{22}\right)}{4 R^2}=0,\\ -\frac{q_x q_y \left(2 R^2 c_{21}+\tan (\theta)\right)}{R}=0,\\ -\frac{q_x^2 q_y \sec (\theta) \left(5 \sin (\theta) \left(2 R^2 c_{21}+\tan (\theta)\right)+2 R^2 (3 \tan (\theta) \sec (\theta) c_{03}+2 R c_{22})\right)}{2 R^2}=0,\\ -\frac{q_x^2 \left(2 R^2 c_{21}+\tan (\theta)\right)}{2 R}=0,\\ \frac{q_y^3 \sec ^3(\theta) \left(-32 R^2 (2 \sin (\theta) c_{03}+R c_{04})-3 \cos (\theta)+3 \cos (3 \theta)\right)}{8 R^2}=0,\\ -\frac{3 q_y^2 \left(2 R^2 \sec ^2(\theta) c_{03}+\tan (\theta)\right)}{2 R}=0. \end{dcases} \label{eq::caronSystem} \end{equation}
The Eqs.~\eqref{eq::caronSystem} are solved using exact arithmetic and give results for the surface expansion coefficients shown in Table \ref{tbl::caronCoeffs}, which agree exactly with those given in \cite{Caron}.
\begin{table}[htb!]
\centering \begin{tabular}{l c} \toprule $c_{mn}$ & Values \\ [0.5ex] \midrule $c_{2 1}$ & $2.53388\times 10^{-6}$ \\ $c_{0 3}$ & $2.43386\times 10^{-6}$ \\ $c_{4 0}$ & $-6.55111\times 10^{-10}$\\ $c_{2 2}$ & $-3.77553\times 10^{-9}$\\ $c_{0 4}$ & $-3.02209\times 10^{-9}$\\ \bottomrule \end{tabular} \caption{Surface expansion coefficients for the second example defined according to Eq.~\eqref{eq::surfaceEq}.} \label{tbl::caronCoeffs} \end{table}
The last example we present in this paper is a comparison between spot diagrams of a biconic mirror computed using OpticStudio and using our Lie method; see Figure~\ref{fig::biconicRefl}. The mapping in Eq.~\eqref{eq::fundElementMod} generates a third-degree polynomial in phase-space variables which can be used as a ray-tracer between object and image plane coordinates. In Figure \ref{fig::spotDiagrams} we compare the two ray-tracing methods for an off-axis beam of rays originating from the object point at position $\bm{q}=(-0.5,0.5)$ and direction domain $\bm{p}\in[-0.0075,0.0125]\times[-0.0125,0.0075]$ and for an on-axis beam of rays originating from $\bm{q}=(0,0)$ with direction domain $\bm{p}\in[-0.01,0.01]^2$. For both cases the object and image distances are $s_\mathrm{ob}=200,\,s_\mathrm{im}=100$ and the OAR has an incidence angle equal to $\theta=\pi/6$. The surface equation of the biconic is:
\begin{equation} z=\zeta(\bm{q})=\frac{c_xq_x^2+c_yq_y^2}{1+\sqrt{1-(1+\kappa_x)c_x^2q_x^2-(1+\kappa_y)c_y^2q_y^2}}, \label{eq::biconic} \end{equation}
with $\kappa_x=\kappa_y=0$ and $c_x=-\dfrac{\sqrt{3}}{200}$, $c_y=-\dfrac{3\sqrt{3}}{800}$. Using the surface expansion coefficients of Eq.~\eqref{eq::biconic} we can determine the necessary coefficients $b_{ijkl}$ for the Lie operators and the resulting spot diagram coincides almost perfectly with the OpticStudio ray tracing. The maximum distance between the coordinates given by the two methods in the examples is $\Delta_{\mathrm{max}}=9\times10^{-5}$.
\begin{figure}[!htb] \centering \includegraphics[width=0.4\textwidth]{biconicRefl.pdf} \caption{Sketch of the biconic reflector in Eq.~\eqref{eq::biconic}. The point objects $1$ and $2$ are imaged paraxially onto their primed counterparts.} \label{fig::biconicRefl} \end{figure}
\begin{figure}[htb!] \centering \includegraphics[width=\textwidth]{spotDiagrams.pdf} \caption{Spot diagrams at image plane. Left: Off-axis object point. Right: On-axis object point.} \label{fig::spotDiagrams} \end{figure}
\section{Conclusions}
In this paper we extend the procedure presented for rotationally symmetric systems in \cite{Barion:22,DragtFoundations86,Wolf2004} to mirror systems with only planar symmetry. Starting from a set of analytical ray-tracing equations, we expand them up to third-order. The information about these expansions is then encoded into the associated Lie transformations. We derive the generator polynomials for the Lie transformations up to fourth degree. Thus, the method produces third-order analytical expressions for the transverse ray aberrations for an arbitrary mirror with planar symmetry.
We calculate the coefficients of the generators for a single mirror. These coefficients depend only on the geometrical information of the mirror itself. It is therefore possible to describe an arbitrary optical system as the concatenation of single mirrors since for each mirror the associated generator polynomials are known. Complex phenomena like lower order aberrations combining into higher order ones are captured by the method.
We verified our results with three applications. In the first two, we show how it is possible to use the analytic expressions of the aberrations to determine the freeform coefficients of the mirror surface that eliminate aberrations up to third-order in the case of a point object and an object at infinity. The last example shows that the aberration expressions can also be used for ray tracing (up to the order of accuracy that has been used in the Lie transformations). Here, we see excellent agreement between the Lie-generated spot diagrams and the ones generated by OpticStudio.
We now aim to explore the application of the presented method to the limiting case of grazing incidence and to investigate possible applications for the determination of mirror systems free of, or with reduced, third-order aberrations. The latter can serve as advantageous starting designs for complex mirror systems. Additionally, we intend to work out the relation between the Lie aberration coefficients and the wavefront aberration coefficients described in \cite{Moore,MooreErr}.
\section{Introduction} \label{sec:intro}
A well-known application area of SAT solvers is the analysis of over-constrained systems, i.e.\ systems of constraints that are inconsistent. A number of computational problems can be related to the analysis of over-constrained systems. These include minimal explanations of inconsistency, and minimal relaxations to achieve consistency. Common to these computational problems is the problem of computing a ``maximal autarky'' of a propositional formula, since clauses satisfied by an autarky cannot be included in minimal explanations of inconsistency or minimal relaxations to achieve consistency. In the experimental study \cite{SIMML2014EfficientAutarkies} it was realised that using as few SAT calls as possible, via cardinality-constraints, performs much worse than using a linear number of calls. To use only a sublinear number of calls, without using cardinality constraints, is the goal of this article.
Given a satisfiable clause-set $F$ and a partial assignment $\varphi$, in general $\varphi * F$, the result of the application (instantiation) of $\varphi$ to $F$, might be unsatisfiable. $\varphi$ is an \emph{autarky} for (arbitrary) $F$ iff every clause $C$ of $F$ touched by $\varphi$ (i.e., $\var(C) \cap \var(\varphi) \ne \emptyset$) is satisfied by $\varphi$ (i.e., $\exists\, x \in C : \varphi(x) = 1$). Now if $F$ is satisfiable, then also $\varphi * F$ is satisfiable, since, by the autarky property, $\varphi * F = \set{C \in F : \var(C) \cap \var(\varphi) = \emptyset} \subseteq F$ holds. Thus ``autarky reduction'' $F \leadsto \varphi * F$ can take place (satisfiability-equivalently).
An early use of autarkies is \cite{EIS76}, for the solution of 2-SAT. The notion ``autarky'' was introduced in \cite{MoSp85} for faster $k$-SAT decision, which can be seen as an extension of \cite{EIS76}. For an overview of such uses of autarkies for SAT solving see \cite{HvM09HBSAT}. Besides such incomplete usage (using only autarkies ``at hand''), the complete search for ``all'' autarkies (or the ``strongest'' one) is of interest, either with (clever) exponential-time algorithms, or for special classes of clause-sets where polynomial time is possible, or considering only restricted forms of autarkies to enable polynomial-time handling; see \cite{Kullmann2007HandbuchMU} for an overview. In \cite{Kullmann2007ClausalFormZI,Kullmann2007ClausalFormZII} autarky theory is generalised to non-boolean clause-sets.
Finitely many autarkies can be composed to yield another autarky, which satisfies precisely the clauses satisfied by (at least) one of them; this was first observed in \cite{Ok98}. So complete autarky reduction for a clause-set $F$, i.e., elimination of clauses satisfied by some autarky as long as possible, yields a unique sub-clause-set, called the \emph{lean kernel} $\na(F) \subseteq F$, as introduced in \cite{Ku98e} and further studied in \cite{Ku00f}; we note that $F \in \mc{SAT} \Leftrightarrow \na(F) = \top$, where $\top$ is the empty clause-set. Clause-sets without non-trivial autarkies are called \emph{lean}, and are characterised by $\na(F) = F$; the set of all lean clause-sets is called $\mc{LEAN}$, and was shown to be coNP-complete in \cite{Ku00f}. A \emph{maximal autarky} for $F$ is one which cannot be extended; note that a maximal autarky $\varphi$ always exists, where $\varphi = \epa$, the empty partial assignment, iff $F$ is lean. An autarky $\varphi$ is maximal iff $\var(\varphi) = \var(F) \setminus \var(\na(F))$.
Thus $\var(F) \setminus \var(\na(F))$ is called the \emph{largest autarky var-set}. For a maximal autarky $\varphi$ the result of the autarky reduction is $\na(F)$, while any autarky which yields $\na(F)$ is called \emph{quasi-maximal}.
\paragraph{Algorithmic problems associated with autarkies.}
The basic algorithmic problems related to general ``autarky systems'', which allow one to specialise the notion of autarky, for example in order to enable polynomial-time computations, are discussed in \cite[Section 11.11.6]{Kullmann2007HandbuchMU}. Regarding \emph{decision problems}, only one problem is relevant for this article, namely \texttt{AUTARKY EXISTENCE}, deciding whether a clause-set $F$ has a non-trivial autarky; the negation is \texttt{LEAN}, deciding whether $F \in \mc{LEAN}$. An early oracle-result is \cite[Lemma 8.6]{Ku00f}, which shows, given an oracle for \texttt{LEAN}, how to compute \texttt{LEAN KERNEL} with at most $n(F)$ oracle calls (for all ``normal autarky systems'', using the terminology from \cite[Section 11.11]{Kullmann2007HandbuchMU}). We are concerned in this article with the \emph{functional problems}, where the four relevant problems are as follows, also stating the effort for checking a solution:

\texttt{NON-TRIVIAL AUTARKY}: Find some non-trivial autarky (if it exists; otherwise return the empty autarky). Checking an autarky is in $P$.

\texttt{QUASI-MAXIMAL AUTARKY} or \texttt{MAXIMAL AUTARKY}: Find a (quasi-)maximal autarky; by a trivial computation, from a quasi-maximal autarky we can compute a maximal one. Checking that $\varphi$ is a quasi-maximal autarky for $F$ means checking that $\varphi$ is an autarky (easy), and that $\varphi * F$ is lean, and so checking is in coNP. A quasi-maximal autarky can be computed by repeated calls to \texttt{NON-TRIVIAL AUTARKY} (until no non-trivial autarky exists anymore).

\texttt{NON-TRIVIAL VAR-AUTARKY}: Find the var(iable)-set of some non-trivial autarky (if it exists; otherwise return the empty set). Checking that $V$ is the variable-set of an autarky means checking that $F[V]$, the restriction of $F$ to $V$, is satisfiable, thus checking is in NP.

\texttt{(QUASI-)MAXIMAL VAR-AUTARKY} or \texttt{LEAN KERNEL}: Compute the largest autarky var-set (or a quasi-maximal one), or compute the lean kernel; all three tasks are equivalent by trivial computations. Checking that $V$ is the largest autarky var-set means checking that $F[V]$ is satisfiable and that $\set{C \in F : \var(C) \cap V = \emptyset}$ is lean, so checking is in $D^P$ (\cite{PW88}). The solution to \texttt{MAXIMAL VAR-AUTARKY} or to \texttt{LEAN KERNEL} is unique and always exists. The var-set of a quasi-maximal autarky can be computed by repeated calls to \texttt{NON-TRIVIAL VAR-AUTARKY}.

Just having the var-set of the autarky $\varphi$ enables us to perform the autarky reduction $F \leadsto \varphi * F$, namely $\varphi * F = \set{C \in F : \var(C) \cap \var(\varphi) = \emptyset}$, but from the var-set $\var(\varphi)$ in general we cannot derive the autarky $\varphi$ itself, which is needed to provide a certificate for the autarky-property. For example, $F$ is satisfiable iff $\var(F)$ is the largest autarky var-set, and in general without further hard work it is not possible to obtain the satisfying assignment from (just) the knowledge that $F$ is satisfiable.
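To make these notions concrete, the following small Python sketch (our own illustration, not part of the algorithms of this article; clause-sets are represented as sets of frozensets of non-zero integers, following the DIMACS convention) checks the autarky property, performs autarky reduction, and decides whether a given $V$ is the var-set of an autarky by brute-forcing satisfiability of $F[V]$.
\begin{verbatim}
from itertools import product

def var(lit): return abs(lit)

def is_autarky(phi, F):
    # phi: set of literals; F: set of frozensets of literals
    touched = {var(x) for x in phi}
    return all(C & phi                       # touched clauses are satisfied
               for C in F if {var(x) for x in C} & touched)

def reduce_by(phi, F):
    # autarky reduction: keep exactly the untouched clauses
    touched = {var(x) for x in phi}
    return {C for C in F if not ({var(x) for x in C} & touched)}

def restrict(F, V):
    # F[V]: restrict clauses to literals over V, dropping the empty clause
    G = {frozenset(x for x in C if var(x) in V) for C in F}
    return {C for C in G if C}

def is_autarky_varset(V, F):
    # V is the var-set of an autarky iff F[V] is satisfiable (brute force)
    G = restrict(F, V)
    return any(all(C & set(phi) for C in G)
               for phi in product(*[(v, -v) for v in sorted(V)]))

F = {frozenset({1, 2}), frozenset({-1, 2}), frozenset({3, -4}), frozenset({-3})}
phi = {2}                            # satisfies every clause it touches
assert is_autarky(phi, F)
assert reduce_by(phi, F) == {frozenset({3, -4}), frozenset({-3})}
assert is_autarky_varset({1, 2}, F)
\end{verbatim}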
An interesting case is discussed in \cite[Subsection 4.3]{KullmannZhao2011Bounds} and (in greater depth) in \cite[Section 10]{KullmannZhao2010Extremal}, where we can compute a certain autarky reduction in polynomial-time, but it is not known how to find the autarky (efficiently). So \texttt{NON-TRIVIAL VAR-AUTARKY} is weaker than \texttt{NON-TRIVIAL AUTARKY}, and \texttt{MAXIMAL VAR-AUTARKY} is weaker than \texttt{MAXIMAL AUTARKY}. We tackle in this article the hardest problem, \texttt{MAXIMAL AUTARKY}.
To obtain a complexity calibration, we can consider the computational model where polynomial-time computation and (only) one oracle call is used. Then \texttt{MAXIMAL VAR-AUTARKY} is equivalent to \texttt{PARALLEL SAT}, which has as input a list $F_1, \dots, F_m$ of clause-sets, and as output $m$ bits deciding satisfiability of the inputs: On the one hand, given these $F_1, \dots, F_m$, make them variable-disjoint and input their union to the \texttt{MAXIMAL VAR-AUTARKY} oracle --- $F_i$ is satisfiable iff $\var(F_i)$ is contained in the largest autarky var-set. On the other hand it is an easy exercise to see that, for example via the translation $F \leadsto t(F)$ used in this article (introduced as $\Gamma_2$ in \cite{SIMML2014EfficientAutarkies}), we can compute the largest autarky var-set by inputting $t(F) \cup \set{\set{v_1}}, \dots, t(F) \cup \set{\set{v_n}}$ to \texttt{PARALLEL SAT}, where $\var(F) = \set{v_1, \dots, v_n}$. Similarly it is easy to see that \texttt{MAXIMAL AUTARKY} is equivalent to \texttt{PARALLEL FSAT} (here now also the satisfying assignments are computed).
\paragraph{General approaches for the lean kernel.}
See \cite[Section 11.10]{Kullmann2007HandbuchMU} for an overview. A fundamental method for computing a (quasi-)maximal autarky, strengthened in this article, uses the \emph{autarky-resolution duality} (\cite[Theorem 3.16]{Ku98e}): the variables in the largest autarky var-set are precisely the variables not usable in any resolution refutation. The basic algorithm, reviewed as algorithm $\mc{A}_0$ in Definition \ref{def:A0} in this article (with a refined analysis), was first given in \cite{Ku01a} and somewhat generalised in \cite[Theorem 11.10.1]{Kullmann2007HandbuchMU}; see \cite{KullmannLynceSilva2005Autarkies} for a discussion and some experimental results. A central concept is what we call in this article an \emph{extended SAT oracle} $\mc{O}_{01}$: for a satisfiable input it outputs a satisfying assignment, while on an unsatisfiable input it outputs the variables used by some resolution refutation. In order to also accommodate polynomial-time results, the oracle $\mc{O}_{01}$ may get its inputs from a class $\mathcal{C}$ of clause-sets, which is stable (closed) under removal of variables. However, for the new algorithm of this article (Algorithm $\mc{A}_{01}$ presented in Theorem \ref{thm:main}), we do not consider classes $\mathcal{C}$ as for $\mc{A}_0$, since the input is first transformed, and then also some clauses are added, which would complicate the requirements on $\mathcal{C}$.
The other main method to compute autarkies uses reduction to SAT problems, denoted by $F \leadsto t(F)$ in this article, where the solutions of $t(F)$ correspond to the autarkies of $F$. This was started by \cite{LiffitonSakallah2008Trimming}, and further extended first in \cite[Subsection 11.10.4]{Kullmann2007HandbuchMU}, and then in \cite{SIMML2014EfficientAutarkies}, which contains a thorough discussion of the various reductions.
The basic algorithm here is $\mc{A}_1$ (Definition \ref{def:algoA1}), which iteratively extracts autarkies via the translation until reaching the lean kernel. When combined with cardinality constraints and binary search, indeed $\log_2 n$ oracle calls are sufficient; see Algorithm $\mc{A}_{\mathrm{bs}}$ (Definition \ref{def:algoAbs}). But these cardinality constraints make the tasks much harder for the SAT oracle. The new algorithm $\mc{A}_{01}$ of this article (Definition \ref{def:alg}) indeed combines the two basic approaches $\mc{A}_0, \mc{A}_1$, by applying the autarky-resolution duality to the translation and using a more clever choice of ``steering clauses'' to search for autarkies. To better understand this combination of approaches, all four algorithms $\mc{A}_0$, $\mc{A}_1$, $\mc{A}_{\mathrm{bs}}$ and $\mc{A}_{01}$ are formulated in a unified way, striving for elegance \emph{and} precision. One feature is that the input is updated in-place, which not only improves efficiency, but also simplifies the analysis considerably.
\paragraph{Related literature.}
If for $\mathcal{C}$ (as above) the extended SAT oracle $\mc{O}_{01}$ runs in polynomial time, then by \cite[Theorem 11.10.1]{Kullmann2007HandbuchMU} the algorithm $\mc{A}_0$ computes a quasi-maximal autarky in polynomial time. The basic applications to 2-CNF, HORN, and the case that every variable occurs at most twice, are reviewed in \cite[Section 11.10.9]{Kullmann2007HandbuchMU}. The other known polytime results regarding computation of the lean kernel use the \emph{deficiency} (as introduced in \cite{FrGe98}, and further studied in \cite{Ku98e}). Here the above algorithm $\mc{A}_0$ cannot be employed, since crossing out variables can increase this measure (see \cite[Section 10]{Kullmann2007ClausalFormZI} for a discussion). \cite[Theorem 4.2]{Ku99dKo} shows that the lean kernel is computable in polynomial time for bounded (maximal) deficiency. In \cite{FKS00} the weaker result was shown that SAT is decidable in polynomial time for bounded maximal deficiency; this was later strengthened in \cite{Szei2002FixedParam} to fixed-parameter tractability, which is unknown for the computation of the lean kernel. \cite[Theorem 10.3]{Kullmann2007ClausalFormZI} shows that also a maximal autarky can be computed in polynomial time for bounded maximal deficiency, and this for generalised non-boolean clause-sets, connecting to constraint satisfaction.
The connection to the field of hypergraph $2$-colouring, the problem of deciding whether one can colour the vertices of a hypergraph with two colours, such that monochromatic hyperedges are avoided, has been established in \cite{Kullmann2007Balanciert}; see \cite[Section 11.12.2]{Kullmann2007HandbuchMU} and \cite[Subsection 1.6]{KullmannZhao2010Extremal} for overviews. Exploiting the solution of a long-outstanding open problem by \cite{RobertsonSeymourThomas1999GeradeKreise,McCuaig2004PolyasProblem}, the lean kernel is computable in polynomial time by \cite{Kullmann2007Balanciert} for classes of clause-sets which, by \cite[Subsection 1.6]{KullmannZhao2010Extremal}, via the translation of SAT problems into hypergraph $2$-colourability problems, strongly generalise the polytime results (discussed above) for maximal deficiency of clause-sets (partially proven, partially conjectured).
Autarkies have a hidden older history in the field of \emph{Qualitative Matrix Analysis (QMA)}, which yields potential applications of autarky algorithms in economics and elsewhere.
QMA was initiated by \cite{Samuelson1947Foundations}, based on the insight that in economics often the magnitude of a quantity is irrelevant, but only the \emph{sign} matters. So \emph{qualitative solvability} of systems of equations and/or inequalities is considered, a special property of such systems, namely that changes of the coefficients which leave their signs invariant do not change the signs of the solutions. For a textbook, concentrating on the combinatorial theory, see \cite{BS95}, while a recent overview is \cite{HallLi2007SignPatternMatrices}. The very close connections to autarky theory have been realised in \cite[Section 5]{Ku00f} (motivated by \cite{DD92}), and further expanded in \cite{Kullmann2007Balanciert}; see \cite[Subsection 11.12.1]{Kullmann2007HandbuchMU} for an overview.
While preparing this article we came across \cite{KleeLadner1981WeakSAT}, which introduces ``weak satisfiability'', which is \emph{precisely} the existence of a non-trivial autarky. It is shown in \cite[Theorem 5]{KleeLadner1981WeakSAT} that weak satisfiability is NP-complete; this is the earliest known proof of $\mc{LEAN}$ being coNP-complete. Apparently these connections to SAT have not been pursued further. The central notions in the early history of QMA were ``$S$-matrix'' and ``$L$-matrix'', which by \cite{Ku00f} are essentially the variable-clause matrices of certain sub-classes of $\mc{LEAN}$. Unaware of these connections, \cite[Theorem 1.2]{KLM1984} showed directly that recognition of $L$-matrices is coNP-complete. Lean clause-sets correspond to ``$L^+$-matrices'' introduced in \cite{LS1998}, and the decomposition of a clause-set into the lean kernel and the largest autark sub-clause-set now becomes a triangular matrix decomposition into an $L^+$-matrix and the remainder (\cite[Lemma 3.3]{LS1998}).
\paragraph{Applications.}
See \cite{KullmannLynceSilva2005Autarkies} for a general discussion of various redundancy criteria in clause-sets. Identification of maximal autarkies finds application in the analysis of over-constrained systems, for example autark clauses cannot be included in MUSes (minimally unsatisfiable sub-clause-sets) and so, by minimal hitting set duality, cannot be included in MCSes (minimal correction sets, whose removal leads to a satisfiable clause-set). As discussed above, via the computation of a maximal autarky we can compute basic matrix decompositions of QMA; apparently due to the lack of efficient implementations, at least the related subfield of QMA (which is concerned with NP-hard problems) has so far had few practical applications, and the efficient algorithms for computing maximal autarkies via SAT (and extensions) might be a game changer here.
\paragraph{Overview.}
In Section \ref{sec:prelim} we provide all background. Section \ref{sec:oracles} discusses oracles ($\mc{O}, \mc{O}_1, \mc{O}_0, \mc{O}_{01}$), and reviews the first basic algorithm $\mc{A}_0$ (Definition \ref{def:A0}), analysed in Lemma \ref{lem:corr0}. Section \ref{sec:trans} introduces the basic translation $F \leadsto t(F)$, where $t(F)$ expresses autarky-search for $F$, and proves various properties. The second basic algorithm $\mc{A}_1$ is reviewed in Definition \ref{def:algoA1} and analysed in Lemma \ref{lem:corr1}. Algorithm $\mc{A}_{\mathrm{bs}}$ is given in Definition \ref{def:algoAbs}, using cardinality constraints (translated into CNF).
The use of ``steering clauses'', collected into a set $P$ of positive clauses, is discussed in Subsection \ref{sec:addposcls}, with the main technical result Corollary \ref{cor:readoffaut2}, which shows that variables involved in a resolution refutation of $t(F) \cup P$ cannot be part of the largest autarky var-set of $F$. The novel algorithm $\mc{A}_{01}$ is finally introduced in Section \ref{sec:alg}, first using an unspecified $P$ (Definition \ref{def:alg}), and then instantiating this scheme in Theorem \ref{thm:main} to obtain at most $2 \sqrt{n(F)}$ many calls to $\mc{O}_{01}$. We conclude in Section \ref{sec:concl} by presenting conjectures and open problems.
\section{Preliminaries} \label{sec:prelim}
We use $\mathbb{N} = \set{n \in \mathbb{Z} : n \ge 1}$ and $\NN_0 = \mathbb{N} \cup \set{0}$. The powerset of a set $X$ is denoted by $\pot(X)$, while $\pote(X) := \set{X' \in \pot(X) : X' \text{ finite}}$. Maps are sets of ordered pairs, and so for maps $f, g$ the relation $f \subseteq g$ says that $f(x) = g(x)$ holds for each $x$ in the domain of $f$, which is contained in the domain of $g$.
We have the set $\mc{V\hspace{-0.1em}A}$ of variables, with $\mathbb{N} \subseteq \mc{V\hspace{-0.1em}A}$, and the set $\mc{LIT}$ of literals, with $\mc{V\hspace{-0.1em}A} \subset \mc{LIT}$. The complementation operation is written $x \in \mc{LIT} \mapsto \overline{x} \in \mc{LIT}$, and fulfils $\overline{\overline{x}} = x$. On $\mathbb{N}$ the complementation is arithmetical negation, and thus $\mathbb{Z} \setminus \set{0} \subseteq \mc{LIT}$. Every literal is either a variable or a complemented variable; forgetting the possible complementation is done by the projection $\var: \mc{LIT} \rightarrow \mc{V\hspace{-0.1em}A}$. For $L \subseteq \mc{LIT}$ we use $\overline{L} := \set{\overline{x} : x \in L}$ and $\lit(L) := L \cup \overline{L}$.
A clause is a finite set $C \subset \mc{LIT}$ of literals with $C \cap \overline{C} = \emptyset$, while a clause-set is a finite set of clauses; the set of all clause-sets is denoted by $\mc{CLS}$. The empty clause is denoted by $\bot := \emptyset$, the empty clause-set by $\top := \emptyset \in \mc{CLS}$. Furthermore $\Pcls{p} := \set{F \in \mc{CLS} : \forall\, C \in F : \abs{C} \le p}$ for $p \in \NN_0$. For a clause $C$ we define $\var(C) := \set{\var(x) : x \in C}$, while for a clause-set $F$ we define $\var(F) := \bigcup_{C \in F} \var(C)$. We use the following measures: $n(F) := \abs{\var(F)} \in \NN_0$ is the number of variables, $c(F) := \abs{F} \in \NN_0$ is the number of clauses, $\ell(F) := \sum_{C \in F} \abs{C} \in \NN_0$ is the number of literal occurrences.
A partial assignment is a map $\varphi: V \rightarrow \set{0,1}$ for some finite $V \subset \mc{V\hspace{-0.1em}A}$, where we write $\var(\varphi) := V$, while the set of all partial assignments is denoted by $\mc{P\hspace{-0.32em}ASS}$. A special partial assignment is the empty partial assignment $\epa := \emptyset \in \mc{P\hspace{-0.32em}ASS}$. Furthermore we use $\lit(\varphi) := \lit(\var(\varphi))$, and extend $\varphi$ to $\lit(\varphi)$ via $\varphi(\overline{v}) = 1 - \varphi(v)$ for $v \in \var(\varphi)$. For $\varepsilon \in \set{0,1}$ we define $\varphi^{-1}(\varepsilon) := \set{x \in \lit(\varphi) : \varphi(x) = \varepsilon}$. The application $\varphi * F \in \mc{CLS}$ of $\varphi \in \mc{P\hspace{-0.32em}ASS}$ to $F \in \mc{CLS}$ is defined as $\varphi * F := \set{C \setminus \varphi^{-1}(0) : C \in F \wedge C \cap \varphi^{-1}(1) = \emptyset}$.
Then $\mc{SAT} := \set{F \in \mc{CLS} \mb \exists\, \varphi \in \mc{P\hspace{-0.32em}ASS} : \varphi * F = \top}$, and $\mc{USAT} := \mc{CLS} \setminus \mc{SAT}$. The restriction of $F \in \mc{CLS}$ to $V \subseteq \mc{V\hspace{-0.1em}A}$ is defined as $F[V] := \set{C \cap \lit(V) : C \in F} \setminus \set{\bot} \in \mc{CLS}$, i.e., removal of clauses $C \in F$ with $\var(C) \cap V = \emptyset$, and restriction of the remaining clauses to variables in $V$. Finally we use $\mc{CLS}(V) := \set{F \in \mc{CLS} : \var(F) \subseteq V}$, $\mc{P\hspace{-0.32em}ASS}(V) := \set{\varphi \in \mc{P\hspace{-0.32em}ASS} : \var(\varphi) \subseteq V}$ and $\mc{T\hspace{-0.35em}ASS}(V) := \set{\varphi \in \mc{P\hspace{-0.32em}ASS} : \var(\varphi) = V}$ (``total assignments'') for $V \subseteq \mc{V\hspace{-0.1em}A}$.
Now to autarkies; this article is essentially self-contained, but if more information is desired, see the handbook chapter \cite{Kullmann2007HandbuchMU}. A partial assignment $\varphi \in \mc{P\hspace{-0.32em}ASS}$ is an \emph{autarky for $F \in \mc{CLS}$} iff for all $C \in F$ with $\var(\varphi) \cap \var(C) \ne \emptyset$ holds $\varphi * \set{C} = \top$ iff $\forall\, C \in F: \varphi * \set{C} \in \set{\top,\set{C}}$; the set of all autarkies for $F$ is denoted by $\aut(F) \subseteq \mc{P\hspace{-0.32em}ASS}$. The empty partial assignment $\epa$ is an autarky for every $F \in \mc{CLS}$, and in general we call an autarky $\varphi$ for $F$ \emph{trivial} if $\var(\varphi) \cap \var(F) = \emptyset$. For $\top$ as well as $\set{\bot}$ every partial assignment is a trivial autarky. Note that every satisfying assignment for $F$ is also an autarky for $F$, and it is a trivial autarky iff $F = \top$. Another simple but useful property is that $\varphi$ is an autarky for $\bigcup_{i \in I} F_i$ for a finite family $(F_i)_{i \in I}$ of clause-sets iff $\varphi$ is an autarky for all $F_i$, $i \in I$. We also note that $\varphi$ is an autarky for $F$ iff $\varphi$ is an autarky for $F \cup \set{\bot}$ iff $\varphi$ is an autarky for $F \setminus \set{\bot}$ (for autarkies the empty clause is invisible).
In general it is best to allow that autarkies assign non-occurring variables, but we also need a notation which disallows this; following \cite[Definition 11.9.1]{Kullmann2007HandbuchMU}:
\begin{definition}\label{def:autf} For $F \in \mc{CLS}$ let $\bmm{\autf(F)} := \aut(F) \cap \mc{P\hspace{-0.32em}ASS}(\var(F))$ (``r'' as in ``restricted'' or ``relevant''), while by $\bmm{\var(\autf(F))} := \bigcup_{\varphi \in \autf(F)} \var(\varphi)$ we denote the \textbf{largest autarky-var-set}. \end{definition}
$\mc{LEAN} \subset \mc{USAT} \cup \set{\top}$ is the set of $F \in \mc{CLS}$ such that $\autf(F) = \set{\epa}$, while the \emph{lean kernel} of $F \in \mc{CLS}$, denoted by $\na(F) \subseteq F$, is the largest element of $\mc{LEAN}$ contained in $F$ (it is easy to see that $\mc{LEAN}$ is closed under finite union). We have $\var(\autf(F)) \cup \var(\na(F)) = \var(F)$ and $\var(\autf(F)) \cap \var(\na(F)) = \emptyset$. See \cite[Subsection 11.8.3]{Kullmann2007HandbuchMU} for various characterisations of the lean kernel.
\begin{definition}\label{def:nval} For $F \in \mc{CLS}$ let $\bmm{\nva(F)} := \abs{\var(\autf(F))} \in \NN_0$ be the number of variables in the largest autarky-var-set and $\bmm{\nvl(F)} := \abs{\var(\na(F))} \in \NN_0$ be the number of variables in the lean kernel. \end{definition}
So $n(F) = \nva(F) + \nvl(F)$.
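For very small instances the quantities of Definition \ref{def:nval} can be computed by brute force; the following sketch (again only an illustration, with running time exponential in $n(F)$, using the same representation as the sketch in Section \ref{sec:intro}) enumerates all partial assignments to compute $\var(\autf(F))$ and $\na(F)$.
\begin{verbatim}
from itertools import combinations, product

def var(lit): return abs(lit)

def is_autarky(phi, F):
    tv = {var(x) for x in phi}
    return all(C & phi for C in F if {var(x) for x in C} & tv)

def largest_autarky_varset(F):
    # union of var(phi) over all autarkies phi in autf(F), by brute force
    A, Vs = set(), sorted({var(x) for C in F for x in C})
    for k in range(1, len(Vs) + 1):
        for V in combinations(Vs, k):
            for signs in product(*[(v, -v) for v in V]):
                if is_autarky(set(signs), F):
                    A |= set(V)
    return A

def lean_kernel(F):
    # na(F): the clauses untouched by the largest autarky var-set
    A = largest_autarky_varset(F)
    return {C for C in F if not ({var(x) for x in C} & A)}

F = {frozenset({1, 2}), frozenset({-1, 2}), frozenset({3}), frozenset({-3})}
assert largest_autarky_varset(F) == {1, 2}
assert lean_kernel(F) == {frozenset({3}), frozenset({-3})}
\end{verbatim}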
On the finite set $\autf(F)$ we have a natural partial order given by inclusion. There is always the smallest element $\epa \in \autf(F)$, while the maximal elements of $\autf(F)$ are called \emph{maximal autarkies} for $F$. For maximal autarkies $\varphi, \psi$ holds $\var(\varphi) = \var(\psi) = \var(\autf(F))$; here we use that the composition of autarkies is again an autarky, i.e., for autarkies $\varphi, \psi$ for $F$ there is an autarky $\theta$ for $F$ with $\varphi * (\psi * F) = \psi * (\varphi * F) = \theta * F$.
\begin{definition}\label{def:maxaut} Let $\bmm{\autmax(F)} \subseteq \autf(F)$ be the set of maximal autarkies. \end{definition}
A \emph{quasi-maximal autarky for $F$} is an $\varphi \in \autf(F)$ with $\varphi * F = \na(F)$. By supplying arbitrary values for the missing variables we obtain efficiently a maximal autarky from a quasi-maximal autarky.
\section{Oracles} \label{sec:oracles}
The main computational task considered in this article is the computation of some element of $\autmax(F)$ for inputs $F \in \mc{CLS}$. Our emphasis is on the number of calls to an ``oracle'', which solves NP-hard problems, while otherwise the computations are in polynomial time. The \textbf{NP(-SAT) oracle} $\bmm{\mc{O}}: \mc{CLS} \rightarrow \set{0,1}$ just maps $F \in \mc{CLS}$ to $1$ in case of $F \in \mc{SAT}$, and to $0$ otherwise. As we will see in Example \ref{exp:basicalgfindnta}, for deciding leanness, one call suffices. For a \textbf{(standard) SAT oracle} $\bmm{\mc{O}_1}: \mc{CLS} \rightarrow \set{0} \cup (\set{1} \times \mc{P\hspace{-0.32em}ASS})$, the SAT solver also returns a satisfying assignment, and then also a non-trivial autarky can be returned in case of non-leanness.
As introduced in \cite{Ku01a}, we consider here a strengthened oracle \bmm{\mc{O}_{01}}, which returns useful information also for unsatisfiable inputs. Recall that a \emph{tree resolution refutation} for $F \in \mc{CLS}$ is a binary tree, where the nodes are labelled with clauses, such that the leaves are labelled by (some) clauses of $F$ (the ``axioms''), while the root is labelled with $\bot$, and such that for each inner node, with children labelled by clauses $C, D$, we have $C \cap \overline{D} = \set{x}$ for some $x \in \mc{LIT}$, while the label of that inner node is $(C \setminus \set{x}) \cup (D \setminus \set{\overline{x}})$.
\begin{definition}\label{def:extsatorac} An \textbf{extended SAT oracle} is a map $\bmm{\mc{O}_{01}}: \mc{CLS} \rightarrow \set{0,1} \times (\pote(\mc{V\hspace{-0.1em}A}) \cup \mc{P\hspace{-0.32em}ASS})$, which for input $F \in \mc{USAT}$ returns $(0,\var(F'))$ for some $F' \subseteq F$, such that there is a tree refutation using as axioms \emph{precisely} $F'$, and for $F \in \mc{SAT}$ returns $(1,\varphi)$ for some $\varphi \in \mc{P\hspace{-0.32em}ASS}(\var(F))$ and $\varphi * F = \top$. If we do not need the satisfying assignment, then we use $\bmm{\mc{O}_0}: \mc{CLS} \rightarrow \set{1} \cup (\set{0} \times \pote(\mc{V\hspace{-0.1em}A}))$. \end{definition}
In the following we will indicate the type of oracle by using one of $\mc{O}_0, \mc{O}_1, \mc{O}_{01}$. See \cite[Subsection 11.10.3]{Kullmann2007HandbuchMU} for a short discussion of how to efficiently integrate the computations for $\mc{O}_0, \mc{O}_{01}$ into a SAT solver, both for look-ahead solvers (\cite{HvM09HBSAT}) and for CDCL solvers (\cite{MSLM09HBSAT}). It is important to notice here that we do not need a full resolution refutation, but only the variables involved in it.
The above use of \emph{tree} resolution is only a convenient way of stating the condition that all axioms are actually used in the refutation. Furthermore, there is no need for any sort of minimisation of the refutation, as we see by the following lemma.
\begin{lemma}\label{lem:useoracle} If for $F \in \mc{CLS}$ holds $\mc{O}_0(F) = (0,V)$, then $V \cap \var(\autf(F)) = \emptyset$. \end{lemma}
\begin{prf} As shown in \cite[Lemma 3.13]{Ku98e}, for any autarky $\varphi \in \aut(F)$ and any clause $C$ touched by $\varphi$ there is no tree resolution refutation of $F$ using $C$. \hfill $\square$ \end{prf}
So the more clauses are involved in the resolution refutation (i.e., the larger $V$), the more variables we can exclude from the largest autarky-var-set, and thus minimising the resolution refutation will in general be counter-productive.
One known approach to compute a maximal autarky of $F \in \mc{CLS}$, as reviewed in \cite[Subsection 11.10.3]{Kullmann2007HandbuchMU} (especially Theorem 11.10.1 there), is based on the full \emph{autarky-resolution duality} (\cite[Theorem 3.16]{Ku98e}): the variables involved in some autarky of $F$, which altogether form $\var(\autf(F)) = \var(F) \setminus \var(\na(F))$, are precisely the variables not usable by any tree resolution refutation of $F$. So the algorithm, called $\mc{A}_0(F)$ here, iteratively removes variables not usable in an autarky and clauses consisting solely of such variables, via Lemma \ref{lem:useoracle}, until a satisfying assignment $\varphi$ is found (which must happen eventually), and $\varphi$ is then a quasi-maximal autarky (due to the autarky-resolution duality):
\begin{definition}\label{def:A0} For input $F \in \mc{CLS}$, the algorithm \bmm{\mc{A}_0(F)}, using oracle $\mc{O}_{01}$ and computing a partial assignment $\varphi$, performs the following computation: \begin{enumerate} \item While $\var(F) \ne \emptyset$ do: \begin{enumerate} \item Compute $\mc{O}_{01}(F)$, obtaining $(0,V)$ resp.\ $(1,\varphi)$. \item In case of $(0,V)$, let $F := F[\var(F) \setminus V]$. \item In case of $(1,\varphi)$, let $F := \top$. \end{enumerate} \item Return $\varphi$. \end{enumerate} \end{definition}
\begin{lemma}[\cite{Ku98e}]\label{lem:corr0} For $F \in \mc{CLS}$ the algorithm $\mc{A}_0(F)$ computes a quasi-maximal autarky for $F$, using at most $\min(\nvl(F)+1, n(F))$ calls of oracle $\mc{O}_{01}$. \end{lemma}
The best case for algorithm $\mc{A}_0(F)$ in terms of the number of oracle calls is given for $F \in \mc{SAT}$, where just one call suffices. For the worst-case $F \in \mc{LEAN}$ on the other hand $\mc{A}_0(F)$ might use $n(F)$ oracle calls:
\begin{example}\label{exp:ncalls} Let $F := \setb{\set{1},\set{-1},\set{2},\set{-2},\dots, \set{n},\set{-n}}$ for $n \in \NN_0$. We have $F \in \mc{LEAN}$, and each loop iteration will remove exactly one pair $\set{i}, \set{-i}$, until all clauses are removed. \end{example}
\section{The basic translation} \label{sec:trans}
We now review the translation $t: \mc{CLS}(\mc{V\hspace{-0.1em}A}_0) \rightarrow \mc{CLS}$ from \cite{SIMML2014EfficientAutarkies}, called $\Gamma_2$ there, which represents the search for an autarky $\varphi$ for $F \in \mc{CLS}(\mc{V\hspace{-0.1em}A}_0)$ as a SAT problem $\bmm{t(F)}$; here $\mc{V\hspace{-0.1em}A}_0$ is the set of \emph{primary variables}, while the variables in $\mc{V\hspace{-0.1em}A} \setminus \mc{V\hspace{-0.1em}A}_0$ are used as \emph{auxiliary variables}.
The translation $t(F)$ uses two types of variables, the primary variables $v \in \var(F)$ themselves, where $v \mapsto 1$ \emph{now} means $v \in \var(\varphi)$, and for every $v \in \var(F)$ two auxiliary variables $t(v), t(\overline{v})$, where $t(x) \mapsto 1$ for $x \in \lit(F)$ means $\varphi(x) = 1$. In other words, the three possible states of a variable $v \in \var(F)$ w.r.t.\ the partial assignment $\varphi$, namely ``unassigned'' ($v \notin \var(\varphi)$), ``set true'' ($\varphi(v)=1$), ``set false'' ($\varphi(v)=0$), are represented by three of the four states of assigned variables $t(v), t(\overline{v})$, namely ``unassigned'' is $t(v), t(\overline{v}) \mapsto 0$, ``set true'' is $t(v) \mapsto 1, t(\overline{v}) \mapsto 0$, and ``set false'' is $t(v) \mapsto 0, t(\overline{v}) \mapsto 1$. The variable $v$ \emph{in the translation} $t(F)$ just acts as an indicator variable, showing whether $v$ is involved in the autarky or not.
We then have three types of clauses in $t(F)$: the \emph{autarky clauses} for $C \in F$ and $x \in C$, stating that if $x$ gets false by the autarky, then some other literal of $C$ must get true, plus the \emph{AMO (at-most-one) clauses} for $t(v), t(\overline{v})$ and the \emph{connection} between $v$ and $t(v), t(\overline{v})$. It is useful for argumentation to have the more general form $t_V(F)$, where only $\varphi$ with $\var(\varphi) \subseteq V$ are considered:
\begin{definition}\label{def:trans} We assume a set $\mathbb{N} \subseteq \mc{V\hspace{-0.1em}A}_0 \subset \mc{V\hspace{-0.1em}A}$ of ``primary variables'' together with an injection $t: \lit(\mc{V\hspace{-0.1em}A}_0) \rightarrow \mc{V\hspace{-0.1em}A}$, yielding the ``auxiliary variables'', such that $\mc{V\hspace{-0.1em}A}_0 \cap t(\lit(\mc{V\hspace{-0.1em}A}_0)) = \emptyset$ and $\mc{V\hspace{-0.1em}A}_0 \cup t(\lit(\mc{V\hspace{-0.1em}A}_0)) = \mc{V\hspace{-0.1em}A}$. For $V \subseteq \mc{V\hspace{-0.1em}A}_0$ let $V' := V \cup t(\lit(V))$. In general we define an equivalence relation on $\mc{V\hspace{-0.1em}A}$, where every equivalence class contains (precisely) three elements, namely $v, t(v), t(\overline{v})$ for $v \in \mc{V\hspace{-0.1em}A}_0$. A set $V \subseteq \mc{V\hspace{-0.1em}A}$ is \textbf{saturated}, if for $v \in V$ and every equivalent $v'$ holds $v' \in V$. The \textbf{saturation} $V \subseteq \bmm{V'} \subseteq \mc{V\hspace{-0.1em}A}$ of $V \subseteq \mc{V\hspace{-0.1em}A}$ is the closure of $V$ under this equivalence relation, i.e., obtained by adding all equivalent variables. Now the translation $t_V: \mc{CLS}(\mc{V\hspace{-0.1em}A}_0) \rightarrow \mc{CLS}(V')$ for $V \in \pote(\mc{V\hspace{-0.1em}A}_0)$ has the following clauses for $t_V(F)$: \begin{enumerate} \item[I] for $C \in F$ and $x \in C$ with $\var(x) \in V$ the \textbf{autarky clause} $\set{\overline{t(\overline{x})}} \cup \set{t(y) : y \in C \setminus \set{x}, \var(y) \in V}$ (i.e., $t(\overline{x}) \rightarrow \bigvee_{y \in C \setminus \set{x}, \var(y) \in V} t(y)$); \item[II] for each $v \in V$ the \textbf{AMO-clause} $\set{\overline{t(v)}, \overline{t(\overline{v})}}$; \item[III] for each $v \in V$ the clauses of $v \leftrightarrow (t(v) \vee t(\overline{v}))$, i.e., the three clauses $\set{\overline{v}, t(v), t(\overline{v})}, \set{\overline{t(v)}, v}, \set{\overline{t(\overline{v})}, v}$ (the \textbf{indicator clauses}). \end{enumerate} Especially $\bmm{t(F)} := t_{\var(F)}(F)$ for $F \in \mc{CLS}(\mc{V\hspace{-0.1em}A}_0)$.
For $F \in \mc{CLS}(\mc{V\hspace{-0.1em}A}_0)$ and $V \in \pote(\mc{V\hspace{-0.1em}A}_0)$ we have $\var(t_V(F)) = V' = V \cup t(\lit(V))$ and $V \cap t(\lit(V)) = \emptyset$, while $n(t(F)) = 3 n(F)$ and $c(t(F)) = \ell(F) + 4 n(F)$. Due to the four AMO- and indicator-clauses per variable, every satisfying assignment for $t_V(F)$ must be total, that is, for $\varphi \in \mc{P\hspace{-0.32em}ASS}$ with $\varphi * t_V(F) = \top$ holds $\var(t_V(F)) \subseteq \var(\varphi)$. \begin{example}\label{exp:ncallst} For $F = \setb{\set{1},\set{-1},\dots, \set{n},\set{-n}}$ as in Example \ref{exp:ncalls}, we have $2 n$ autarky clauses, which are $\set{\overline{t(i)}}$ for $i \in \tb{-n}{n} \setminus \set{0}$. \end{example} Partial assignments $\varphi$ on the primary variables are translated to assignments on the primary and auxiliary variables via $t_{0,V}(\varphi)$ (assigning unassigned variables to $0$ in the translation) and $t(\varphi)$ (leaving them unassigned), while the backwards direction goes via $t^{-1}(\varphi)$: \begin{definition}\label{def:transpass} For $V \in \pote(\mc{V\hspace{-0.1em}A}_0)$ we define a translation $\bmm{t_{0,V}}: \mc{P\hspace{-0.32em}ASS}(V) \rightarrow \mc{T\hspace{-0.35em}ASS}(V')$ for $\varphi \in \mc{P\hspace{-0.32em}ASS}(V)$ by $t_{0,V}(\varphi)(v) = 1 \Leftrightarrow v \in \var(\varphi)$ for $v \in V$, while $t_{0,V}(\varphi)(t(x)) = 1 \Leftrightarrow \var(x) \in \var(\varphi) \wedge \varphi(x) = 1$ for $x \in \lit(V)$. The translation $\bmm{t}: \mc{P\hspace{-0.32em}ASS}(\mc{V\hspace{-0.1em}A}_0) \rightarrow \mc{P\hspace{-0.32em}ASS}$ for $\varphi \in \mc{P\hspace{-0.32em}ASS}(\mc{V\hspace{-0.1em}A}_0)$ is the partial assignment, where $\var(t(\varphi))$ is the saturation of $\var(\varphi)$, while $t(\varphi)(v) = 1$ for $v \in \var(\varphi)$, and $t(\varphi)(t(x)) = 1 \Leftrightarrow \varphi(x) = 1$ for $x \in \lit(\varphi)$. In the other direction, any partial assignment $\varphi \in \mc{P\hspace{-0.32em}ASS}$ with $\var(\varphi)$ saturated yields a partial assignment $\bmm{t^{-1}(\varphi)} \in \mc{P\hspace{-0.32em}ASS}(\mc{V\hspace{-0.1em}A}_0)$ with $\var(t^{-1}(\varphi)) := \varphi^{-1}(1) \cap \mc{V\hspace{-0.1em}A}_0$ and $t^{-1}(\varphi)(v) = \varphi(t(v))$ for $v \in \var(t^{-1}(\varphi))$. \end{definition} As already stated, $t_{0,V}(\varphi)$ makes explicit which variables are unassigned by $\varphi$, namely by assigning them $0$, and thus it needs to know $V$, while $t(\varphi)$ just leaves them unassigned. We have $t^{-1}(t_{0,V}(\varphi)) = t^{-1}(t(\varphi)) = \varphi$. \begin{example}\label{exp:tfsat} $t_V(F) \in \mc{SAT}$ for $F \in \mc{CLS}(\mc{V\hspace{-0.1em}A}_0)$ and $V \in \pote(\mc{V\hspace{-0.1em}A}_0)$, since for $t_{0,V}(\epa) = \pab{v \rightarrow 0 : v \in V} \cup \pab{t(x) \rightarrow 0 : x \in \lit(V)}$ we have $t_{0,V}(\epa) * t_V(F) = \top$. \end{example} $t(F)$ does its job, i.e., its solutions represent all the autarkies of $F$: \begin{lemma}[\cite{SIMML2014EfficientAutarkies}]\label{lem:readoffaut1} Consider $F \in \mc{CLS}(\mc{V\hspace{-0.1em}A}_0)$ and $V \in \pote(\mc{V\hspace{-0.1em}A}_0)$. \begin{enumerate} \item\label{lem:readoffaut1a} If $\mc{O}_1(t_V(F)) = (1,\varphi)$, then $t^{-1}(\varphi) \in \autf(F) \cap \mc{P\hspace{-0.32em}ASS}(V)$. \item\label{lem:readoffaut1b} $t_{0,V}(\varphi) * t_V(F) = \top$ for $\varphi \in \autf(F) \cap \mc{P\hspace{-0.32em}ASS}(V)$. \end{enumerate} \end{lemma}
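In the same conventions (and with the same realisation of $t$, which remains our implementation choice), the assignment translations of Definition \ref{def:transpass} become:
\begin{verbatim}
def tlit(x, N):                 # the injection t, as in the sketch above
    return N + 2 * abs(x) - (1 if x > 0 else 0)

def t0V(phi, V, N):
    """t_{0,V}(phi): total assignment on V', unassigned variables -> 0."""
    psi = {v: (v in phi) for v in V}
    for v in V:
        psi[tlit(v, N)] = (phi.get(v) is True)
        psi[tlit(-v, N)] = (phi.get(v) is False)
    return psi

def t_pa(phi, N):
    """t(phi): assigns exactly the saturation of var(phi)."""
    psi = {}
    for v, b in phi.items():
        psi[v], psi[tlit(v, N)], psi[tlit(-v, N)] = True, b, not b
    return psi

def t_inv(psi, N):
    """t^{-1}(psi) for saturated var(psi): read off the encoded autarky."""
    return {v: psi[tlit(v, N)] for v in psi if 0 < v <= N and psi[v]}
\end{verbatim}
Here \texttt{t\_inv} inverts both \texttt{t0V} and \texttt{t\_pa}, reflecting $t^{-1}(t_{0,V}(\varphi)) = t^{-1}(t(\varphi)) = \varphi$.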
Before discussing the usage of $t(F)$, we remark that the variables $\var(F) \subseteq \var(t(F))$ are used purely for a more convenient discussion, while for a practical application they would be dropped, and the translation called $\Gamma_3$ in \cite{SIMML2014EfficientAutarkies} would be used (except possibly for Algorithm $\mc{A}_{\mathrm{bs}}$ defined later, which uses cardinality constraints): the variables of $t(F)$ then would be just $t(\lit(F))$, and the clauses would be the autarky- and AMO-clauses (only). In our applications $v \in \var(F)$ occurs in the translations only positively, and would be replaced by the two positive literals $t(v), t(\overline{v})$ (together). \subsection{Basic usages} \label{sec:transbasicuse} \begin{example}\label{exp:basicalgfindnta} A simple algorithm for finding a non-trivial autarky for $\var(F) \ne \emptyset$ evaluates $\mc{O}_1(t(F) \cup \set{\var(F)})$. By Lemma \ref{lem:readoffaut1} we get that if the solver returns $0$, then $F \in \mc{LEAN}$, while if $(1,\varphi)$ is returned, then $t^{-1}(\varphi)$ is a non-trivial autarky for $F$ (the non-triviality is guaranteed by the additional clause $\var(F)$). \end{example} Algorithm $\mc{A}_1(F)$, computing a maximal autarky, iterates the algorithm from Example \ref{exp:basicalgfindnta}; the details are as follows, where we formulate the algorithm in such a way that it has the same basic structure as $\mc{A}_0$ (recall Definition \ref{def:A0}) and our novel algorithm $\mc{A}_{01}$ (to be given in Definition \ref{def:alg}): \begin{definition}\label{def:algoA1} For input $F \in \mc{CLS}(\mc{V\hspace{-0.1em}A}_0)$ the algorithm \bmm{\mc{A}_1(F)}, using oracle $\mc{O}_1$ and computing a partial assignment $\varphi$, performs the following computation: \begin{enumerate} \item $\varphi := \epa$, $P := \set{\var(F)}$, $F := t(F)$. \item While $\var(P) \ne \emptyset$ do: \begin{enumerate} \item Compute $\mc{O}_1(F \cup P)$, obtaining $0$ resp.\ $(1,\psi)$. \item In case of $0$, let $P := \top$ and $F := \top$. \item In case of $(1,\psi)$, let $\psi' := t^{-1}(\psi)$, and update $P := P[\var(P) \setminus \var(\psi')]$, $F := t(\psi') * F$, and $\varphi := \varphi \cup \psi'$. In words: obtain the autarky $\psi'$ from $\psi$, remove the variables of $\psi'$ from $P$ and $F$, and add $\psi'$ to the result-autarky $\varphi$. \end{enumerate} \item Return $\varphi$. \end{enumerate} \end{definition} \begin{lemma}\label{lem:corr1} For $F \in \mc{CLS}(\mc{V\hspace{-0.1em}A}_0)$ the algorithm $\mc{A}_1(F)$ computes $\varphi \in \autmax(F)$, using at most $\min(\nva(F)+1, n(F))$ calls of oracle $\mc{O}_1$. \end{lemma} \begin{prf} The algorithm always terminates, and moreover for the number $m \ge 0$ of executions of the while-body we have $m \le \min(\nva(F)+1, n(F))$, since in each round $P$ gets reduced by some variables from an autarky (due to the choice of $P$). Let $F_{-1}$ be the input, let $F_0 := t(F_{-1})$, and let $F_i$ for $i = 1,\dots,m$ be the current $F$ after execution of the $i$-th iteration; similarly, let $P_0$ be the original value of $P$, and let $P_i$ be the current $P$ after the $i$-th iteration, and let $\varphi_0 := \epa$, and let $\varphi_i$ be the value of $\varphi$ after the $i$-th iteration. Finally, let $V_i$ for $i = 1,\dots,m$ be $\var(P_i)$ in case of $0$ resp.\ the value of $\var(\psi')$ after round $i$, and let $W_0 := \var(F_{-1})$, and let $W_i := W_{i-1} \setminus V_i$ for $i = 1,\dots,m$.
Inductively we show that $F_i = t_{W_i}(\varphi_i * F_{-1})$ for $i \in \tb 0m$, where $\varphi_i$ is an autarky for $F_{-1}$ by Lemma \ref{lem:readoffaut1}, Part \ref{lem:readoffaut1a}, and $P_i = P_0[W_i]$ for $i \in \tb 1m$, where $W_m = \emptyset$. Variables only vanish as part of some autarky for $F_{-1}$, and thus $\varphi_i \in \autmax(F_{-1}[W_0 \setminus W_i])$ for $i \in \tb 0m$. \hfill $\square$ \end{prf} The best case for algorithm $\mc{A}_1(F)$ in terms of the number of oracle calls is given for $F \in \mc{LEAN}$, where just one call suffices. For the worst case, $F \in \mc{SAT}$, however, $\mc{A}_1(F)$ might use $n(F)$ oracle calls: \begin{example}\label{exp:algoo} Let $F := \set{\set{1}, \dots, \set{n}} \in \mc{SAT}$ for $n \in \NN_0$. In the worst case (depending on the answers of $\mc{O}_1$), each call removes only one unit-clause $\set{i}$. \end{example} The algorithm realising the currently best number of calls to $\mc{O}_1$ uses SAT-encodings of cardinality constraints (see \cite{RM09HBSAT}); differently from the literature, we follow our general scheme and iteratively apply the autarkies found: \begin{definition}\label{def:algoAbs} For input $F \in \mc{CLS}(\mc{V\hspace{-0.1em}A}_0)$ the algorithm \bmm{\mc{A}_{\mathrm{bs}}(F)}, using oracle $\mc{O}_1$ and computing a partial assignment $\varphi$, performs the following computation: \begin{enumerate} \item $\varphi := \epa$, $n := n(F)$, $V := \var(F)$, $F := t(F)$ ($n$ is an upper bound on the size of a maximal autarky, $V$ is the set of variables potentially used by it). \item While $n \ne 0$ do: \begin{enumerate} \item $m := \ceil{\frac n2}$; let $G$ be a CNF-representation of the cardinality constraint ``$\,\sum_{v \in V} v \ge m$''; compute $\mc{O}_1(F \cup G)$, obtaining $0$ resp.\ $(1,\psi)$. \item In case of $0$, let $n := m-1$. \item In case of $(1,\psi)$, let $\psi' := t^{-1}(\psi)$, and update $n := n - n(\psi')$, $V := V \setminus \var(\psi')$, $F := t(\psi') * F$, and $\varphi := \varphi \cup \psi'$. \end{enumerate} \item Return $\varphi$. \end{enumerate} \end{definition} As should be clear by now: \begin{lemma}\label{lem:corrbs} For $F \in \mc{CLS}(\mc{V\hspace{-0.1em}A}_0)$ the algorithm $\mc{A}_{\mathrm{bs}}(F)$ computes $\varphi \in \autmax(F)$, using at most $\ceil{\log_2(n(F))}$ calls of oracle $\mc{O}_1$ (for $n(F) > 0$). \end{lemma} That the upper bound of Lemma \ref{lem:corrbs} is attained can be seen again with Example \ref{exp:algoo} (in the worst case). We remark that if we allow calls to Partial MaxSAT (see \cite{LM09HBSAT} for an overview), then just one call is enough (as used in \cite{LiffitonSakallah2008Trimming}), and that without cardinality constraints, namely using $t(F)$ as the hard clauses and $\set{v}$ for $v \in \var(F)$ as the soft clauses. Indeed, as shown in \cite[Proposition 1]{SIMML2014EfficientAutarkies}, this translation has a unique ``minimal correction set'' (MCS), i.e., a unique minimal subset of the soft clauses, whose removal yields a satisfiable clause-set, and so any MCS-solver can be used (just one call).
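To close this subsection, a minimal sketch of $\mc{A}_{\mathrm{bs}}$ in the above conventions; \texttt{translate}, \texttt{t\_pa} and \texttt{t\_inv} are the earlier sketches, while \texttt{card\_geq} stands for an arbitrary CNF encoding of the cardinality constraint over fresh variables (an assumed helper, not prescribed by the algorithm).
\begin{verbatim}
from math import ceil

def apply_pa(psi, F):
    """psi * F: remove satisfied clauses, delete falsified literals."""
    return {frozenset(x for x in C if abs(x) not in psi)
            for C in F if not any(psi.get(abs(x)) == (x > 0) for x in C)}

def A_bs(F, N, oracle1, translate, t_pa, t_inv, card_geq):
    """Algorithm A_bs (Definition def:algoAbs): binary search on the size
    of a maximal autarky; card_geq(V, m) is assumed to return a CNF
    encoding of sum_{v in V} v >= m; oracle1 realises O_1."""
    phi = {}
    V = {abs(x) for C in F for x in C}
    n = len(V)
    G = translate(F, V, N)
    while n != 0:
        m = ceil(n / 2)
        res = oracle1(G | card_geq(V, m))
        if res == 0:                    # no autarky of size >= m remains
            n = m - 1
        else:
            psi1 = t_inv(res[1], N)     # the autarky found
            n -= len(psi1)
            V -= set(psi1)
            G = apply_pa(t_pa(psi1, N), G)
            phi.update(psi1)
    return phi
\end{verbatim}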
\subsection{Adding positive ``steering'' clauses} \label{sec:addposcls} Generalising the use of $P$ in Algorithm $\mc{A}_1$, we consider some positive clause-set $P$ over $\var(F)$ (i.e., $P \subseteq \pot(\var(F))$), and use $t(F) \cup P \in \mc{CLS}$ to gain larger autarkies. Note that the elements of $P$ require variables to be in the autarky, and so in general $P$ should contain several shorter clauses, while for $\mc{A}_1$ we just used one full clause (containing all variables). If the oracle then yields unsatisfiability, this is no longer the end of the search (unlike for $\mc{A}_1$, where unsatisfiability meant that the lean kernel had been reached), since the clauses of $P$ involved in the refutation might not involve all remaining variables. The extended oracle is now needed to tell us which clauses of $P$ were used. To do so, we first note that autarkies for $F$ yield autarkies for $t(F) \cup P$ (where for a simpler algorithm we allow $P$ to contain variables not in $t(F)$): \begin{lemma}\label{lem:readoffaut2} Consider $F \in \mc{CLS}(\mc{V\hspace{-0.1em}A}_0)$ and $P \in \pote(\pote(\mc{V\hspace{-0.1em}A}_0))$. For $\varphi \in \autf(F)$ we have $t(\varphi) \in \autf(t(F) \cup P)$. \end{lemma} \begin{prf} $t(\varphi)$ is an autarky for $P$, since $t(\varphi)$ does not set variables from $\var(F)$ to $0$. By Lemma \ref{lem:readoffaut1}, Part \ref{lem:readoffaut1b}, we get that $t_{0,\var(F)}(\varphi)$ is a satisfying assignment for $t(F)$; now $t(\varphi)$ just unsets all triples $v, t(v), t(\overline{v})$ with $v \notin \var(\varphi)$, where $t_{0,\var(F)}(\varphi)$ sets these three variables to $0$. Thus obviously $t(\varphi)$ is also an autarky for the AMO-clauses and the indicator clauses. Assume an autarky clause $D$ for $C \in F$ and $x \in C$, touched by $t(\varphi)$ but not satisfied. Then $\var(x) \notin \var(\varphi)$ (otherwise $D$ is easily seen to be satisfied by $t(\varphi)$), and there is $y \in C \setminus \set{x}$ with $\varphi(y) = 0$; since $\varphi$ is an autarky, there is $y' \in C$ with $\varphi(y') = 1$, whence $y' \ne x$ and thus $t(y') \in D$ with $t(\varphi)(t(y')) = 1$, contradicting the assumption. \hfill $\square$ \end{prf} Thus the saturation of the largest autarky-var-set of $F$ is contained in the largest autarky-var-set for $t(F) \cup P$: \begin{corollary}\label{cor:varautt} Consider $F \in \mc{CLS}(\mc{V\hspace{-0.1em}A}_0)$ and $P \in \pote(\pote(\mc{V\hspace{-0.1em}A}_0))$. Then the set $\var(\autf(t(F) \cup P))$ is saturated and contains $\var(\autf(F))$. \end{corollary} \begin{prf} It remains to show that $\var(\autf(t(F) \cup P))$ is saturated, and this follows by just considering the AMO-clauses and the indicator clauses: if $v$ is assigned, then also $t(v), t(\overline{v})$ need to be assigned for an autarky, while if one of $t(v), t(\overline{v})$ is assigned, then also $v$ needs to be assigned. \hfill $\square$ \end{prf} Using Lemma \ref{lem:useoracle}, we obtain the main insight, that if the oracle yields $(0,V)$ for $t(F) \cup P$, then none of the elements of $V$ are in the largest autarky-var-set: \begin{corollary}\label{cor:readoffaut2} If for $F \in \mc{CLS}(\mc{V\hspace{-0.1em}A}_0)$ and $P \in \pote(\pote(\mc{V\hspace{-0.1em}A}_0))$ the oracle yields $\mc{O}_0(t(F) \cup P) = (0,V)$, then $V' \cap \var(\autf(F)) = \emptyset$ (recall Definition \ref{def:trans} for $V'$). \end{corollary} \section{The new algorithm} \label{sec:alg} We now present the novel algorithm scheme $\mc{S}_{01}(F,P)$, combining algorithms $\mc{A}_0$ (Definition \ref{def:A0}) and $\mc{A}_1$ (Definition \ref{def:algoA1}), which takes as input $F \in \mc{CLS}$ and additionally $P \subseteq \pot(\var(F))$, and computes some autarky $\varphi \in \autf(F)$; for our current best generic instantiation we specify $P$ in Theorem \ref{thm:main}, obtaining algorithm $\mc{A}_{01}(F)$.
\begin{definition}\label{def:alg} For inputs $F \in \mc{CLS}(\mc{V\hspace{-0.1em}A}_0)$ and $P \subseteq \pot(\var(F))$, the algorithm \bmm{\mc{S}_{01}(F,P)}, using oracle $\mc{O}_{01}$ and computing a partial assignment $\varphi$, performs the following computation (using the saturation $V'$ as in Definition \ref{def:trans}): \begin{enumerate} \item $\varphi := \epa$, $F := t(F)$. \item While $\var(P) \ne \emptyset$ do: \begin{enumerate} \item Compute $\mc{O}_{01}(F \cup P)$, obtaining $(0,V)$ resp.\ $(1,\psi)$. \item In case of $(0,V)$, let $V := V'$, $P := P[\var(P) \setminus V]$, $F := F[\var(F) \setminus V]$. \item In case of $(1,\psi)$, let $\psi' := t^{-1}(\psi)$, and update $P := P[\var(P) \setminus \var(\psi')]$, $F := t(\psi') * F$, and $\varphi := \varphi \cup \psi'$. \end{enumerate} \item Return $\varphi$. \end{enumerate} \end{definition} While $\bot \in P$ is of no real use, it does not cause a problem for the algorithm, and will be removed from $P$ in the first round by the restriction (whether or not the implicit resolution refutation of $t(F) \cup P$ chooses $\bot$ as the refutation). \begin{lemma}\label{lem:corr01} For $F \in \mc{CLS}(\mc{V\hspace{-0.1em}A}_0)$ and $P \subseteq \pot(\var(F))$ the algorithm $\mc{S}_{01}(F,P)$ computes an autarky $\varphi \in \autf(F)$. If $\var(P) = \var(F)$, then $\varphi \in \autmax(F)$. \end{lemma} \begin{prf} The proof extends the proof of Lemma \ref{lem:corr1}, by extending the handling of the case $\mc{O}_{01}(F \cup P) = (0,V)$. The algorithm always terminates, since in each round $P$ gets reduced. Let $m \ge 0$ be the number of executions of the while-body. Let $F_{-1}$ be the input, let $F_0 := t(F_{-1})$, and let $F_i$ for $i = 1,\dots,m$ be the current $F$ after execution of the $i$-th iteration; similarly, let $P_0$ be the input-value of $P$, and let $P_i$ be the current $P$ after the $i$-th iteration, and let $\varphi_0 := \epa$, and let $\varphi_i$ be the value of $\varphi$ after the $i$-th iteration. Finally, let $V_i$ for $i = 1,\dots,m$ be the value of $V$ resp.\ $\var(\psi')$ after round $i$, and let $W_0 := \var(F_{-1})$, and let $W_i := W_{i-1} \setminus V_i$ for $i = 1,\dots,m$. Inductively we show that $F_i = t_{W_i}(\varphi_i * F_{-1})$ for $i \in \tb 0m$, where $\varphi_i$ is an autarky for $F_{-1}$ by Lemma \ref{lem:readoffaut1}, Part \ref{lem:readoffaut1a}, and $P_i = P_0[W_i]$ for $i \in \tb 1m$. Since variables vanish from $P$ only by restriction, we have $V_1 \cup \dots \cup V_m \supseteq \var(P)$, and thus $W_m \subseteq W_0 \setminus \var(P)$. Variables only vanish if either they are recognised as not being an element of $\var(\autf(F_{-1}))$ (Corollary \ref{cor:readoffaut2}), or as part of some autarky for $F_{-1}$. So $\varphi_i \in \autmax(F_{-1}[W_0 \setminus W_i])$ for $i \in \tb 0m$, and if $\var(P) = \var(F_{-1})$, then $\varphi_m$ is a maximal autarky for $F_{-1}$. \hfill $\square$ \end{prf} If instead of an unrestricted (maximal) autarky $\varphi \in \autf(F)$ we want to compute a (maximal) autarky $\varphi \in \autf(F)$ with $\var(\varphi) \subseteq V$ for some given $V \subseteq \mc{V\hspace{-0.1em}A}$, then we may just replace the input $F$ by $F[V]$ (or we choose $P$ with $\bigcup P = V$, and restrict the result).
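A sketch of the scheme $\mc{S}_{01}$ in the conventions of the earlier sketches, whose helpers (\texttt{translate}, \texttt{t\_pa}, \texttt{t\_inv}, \texttt{restrict}, \texttt{apply\_pa}) are passed in; recovering a primary variable from an auxiliary index simply inverts our realisation of $t$.
\begin{verbatim}
def S01(F, P, N, oracle01, translate, t_pa, t_inv, restrict, apply_pa):
    """Scheme S_01 (Definition def:alg); with tlit(v) = N+2v-1 and
    tlit(-v) = N+2v, an auxiliary index w belongs to the primary
    variable (w - N + 1) // 2."""
    def saturate(W):                   # W': close under v, t(v), t(-v)
        Vp = set()
        for w in W:
            v = w if w <= N else (w - N + 1) // 2
            Vp |= {v, N + 2*v - 1, N + 2*v}
        return Vp
    def variables(G):
        return {abs(x) for C in G for x in C}
    phi = {}
    G = translate(F, variables(F), N)
    while variables(P):
        flag, data = oracle01(G | P)
        if flag == 0:                  # (0,V): saturate and restrict
            Vp = saturate(data)
            P = restrict(P, variables(P) - Vp)
            G = restrict(G, variables(G) - Vp)
        else:                          # (1,psi): apply the autarky found
            psi1 = t_inv(data, N)
            P = restrict(P, variables(P) - set(psi1))
            G = apply_pa(t_pa(psi1, N), G)
            phi.update(psi1)
    return phi
\end{verbatim}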
\begin{example}\label{exp:alg1} The simplest cases for computing maximal autarkies use (I) $P = \set{\var(F)}$ or (II) $P = \set{\set{v} : v \in \var(F)}$. In Case I, we essentially obtain $\mc{A}_1$ (Definition \ref{def:algoA1}), and $\mc{S}_{01}(F,P)$ produces autarkies until the lean kernel is reached, so we only have SAT-answers with one final UNSAT-answer. In Case II, the scheme becomes very similar to $\mc{A}_0$ (Definition \ref{def:A0}), and we remove elements of $P$ until we obtain the variables of $\var(\autf(F))$, and so we only have UNSAT-answers with one final SAT-answer. If $F \in \mc{LEAN}$, then in Case I only one call of the oracle is needed (as in Example \ref{exp:basicalgfindnta}), while in Case II, for $F$ as in Example \ref{exp:ncalls} we need $n(F)$ oracle calls. On the other hand, if $F \in \mc{SAT}$, then in Case I, for $F$ as in Example \ref{exp:algoo} we need $n(F)$ oracle calls, while in Case II only one call of the oracle is needed. \end{example} A more intelligent use of $\mc{S}_{01}$ employs a better $P$, to mix the SAT- and UNSAT-answers of the oracle. \begin{lemma}\label{lem:upperbound} For $F \in \mc{CLS}(\mc{V\hspace{-0.1em}A}_0)$ and $P \subseteq \pot(\var(F))$ with $P \in \Pcls{p}$ ($p \in \NN_0$), algorithm $\mc{S}_{01}(F,P)$ uses at most $\min(p,\nva(F)) + \min(c(P),\nvl(F))$ oracle calls. \end{lemma} \begin{prf} Every oracle call removes at least one clause from $P$ (in the unsat-case: since $t_V(F) \in \mc{SAT}$, the refutation must use some clause of $P$, whose variables are then all contained in the returned var-set), or at least one variable from each clause of $P$ (in the sat-case: the satisfying assignment makes some variable in each clause of $P$ true, and these variables belong to the extracted autarky). \hfill $\square$ \end{prf} So we need to minimise the sum of the number of clauses in $P$ and the maximal clause-length, which is achieved by using disjoint clauses of size about $\sqrt{n(F)}$; by Lemmas \ref{lem:corr01}, \ref{lem:upperbound} we obtain: \begin{theorem}\label{thm:main} Consider $F \in \mc{CLS}(\mc{V\hspace{-0.1em}A}_0)$. Choose $P' \subseteq \pot(\var(F))$ such that $P'$ is a partitioning of $\var(F)$ (the elements are pairwise disjoint and non-empty, the union is $\var(F)$) with $\forall\, V \in P' : \abs{V} \le \ceil{\sqrt{n(F)}}$ and $c(P') \le \ceil{\sqrt{n(F)}}$. Such a partitioning $P'$ can be computed in linear time. Algorithm $\bmm{\mc{A}_{01}(F)} := \mc{S}_{01}(F,P')$ computes a maximal autarky for $F$, using at most $\min(s,\nva(F)) + \min(s,\nvl(F)) \le 2 s$ calls of $\mc{O}_{01}$, where $s := \ceil{\sqrt{n(F)}} \in \NN_0$. \end{theorem} Up to the factor $2$, the upper bound of Theorem \ref{thm:main} is attained: \begin{example}\label{exp:sqrt} For $F$ as in Example \ref{exp:ncallst} as well as $F$ as in Example \ref{exp:algoo} we now need $\ceil{\sqrt{n(F)}}$ oracle calls (in the worst case). \end{example} \section{Conclusion and outlook} \label{sec:concl} We reviewed the algorithms $\mc{A}_0, \mc{A}_1, \mc{A}_{\mathrm{bs}}$ for computing maximal autarkies, using a unified scheme, and presented the new algorithm $\mc{A}_{01}$. We employed four different types of oracles: $\mc{O}$ is the basic oracle, just indicating satisfiability resp.\ unsatisfiability, $\mc{O}_0$ in the unsatisfiable case yields the set of variables used by some resolution refutation, $\mc{O}_1$ in the satisfiable case yields a satisfying assignment, while $\mc{O}_{01}$ combines these capabilities. We investigated in some depth the translation $F \leadsto t(F)$, which encodes the autarky search for $F$. The complexities of the four algorithms are summarised as follows (with slight inaccuracies), stating the number and type of oracle calls and the call-instances: \begin{itemize} \item $\mc{A}_0(F)$: $\nvl(F)$ calls of $\mc{O}_{01}$, subinstances of $F$.
\item $\mc{A}_1(F)$: $\nva(F)$ calls of $\mc{O}_1$, subinstances of $t(F)$ plus one large positive clause. \item $\mc{A}_{01}(F)$: $\sqrt{n(F)}$ calls of $\mc{O}_{01}$, subinstances of $t(F)$ plus positive clauses. \item $\mc{A}_{\mathrm{bs}}(F)$: $\log_2(n(F))$ calls of $\mc{O}_1$, subinstances of $t(F)$ plus one varying cardinality constraint in CNF-representation. \end{itemize} \begin{question}\label{que:moreint} As we can see from Examples \ref{exp:alg1}, \ref{exp:sqrt}, the choice $P'$ from Theorem \ref{thm:main}, instantiating the scheme $\mc{S}_{01}$ and yielding $\mc{A}_{01}$, can be improved at least in special cases. Are more intelligent choices of $P$ possible, heuristically, for special classes, or even in general? The optimal choice (hard to compute) is $P := \set{\var(\na(F))} \cup \set{\set{v} : v \in \var(\autf(F))}$, which needs two oracle calls. \end{question} \begin{question}\label{que:main} We conjecture the number $\Omega(\sqrt{n(F)})$ of oracle calls from Theorem \ref{thm:main} to be optimal in general, but the question here is how to formalise the restrictions on the input of oracle $\mc{O}_{01}$ (so that for example the SAT translations of cardinality constraints are excluded). With these restrictions in place, we also conjecture that, when only using oracle $\mc{O}_1$ (as algorithm $\mc{A}_1$ does; Definition \ref{def:algoA1}), in general $\Omega(n(F))$ many calls are needed. \end{question} \begin{question}\label{que:perfdiff} How do $\mc{A}_0, \mc{A}_1, \mc{A}_{01}, \mc{A}_{\mathrm{bs}}$ compare to each other? Are they pairwise incomparable? Is their oracle usage optimal under suitable constraints? \end{question} \begin{question}\label{que:leancomp} In this article we concentrated on the hardest functional task: what about the complexity of the computation of the lean kernel, when using oracles $\mc{O}, \mc{O}_0, \mc{O}_1, \mc{O}_{01}$? Do we need fewer calls than for computing maximal autarkies? \end{question} Only one precise conjecture on lower bounds for the computation of maximal autarkies seems possible currently: \begin{conjecture}\label{que:storac} The computation of a maximal autarky for input $F \in \mc{CLS}$, when using a SAT oracle $\mc{O}_1$, in general needs $\Omega(\log_2(n(F)))$ many calls; possibly one can even show that for every (deterministic) algorithm there exists an instance needing at least $\log_2(n(F))$ many calls. \end{conjecture} Finally we remark that for the considerations of this article more fine-grained complexity notions for function classes and their oracle usage are needed. Function classes just using NP-oracles (only returning yes/no) have been studied starting with \cite{Krentel1988Optimisation}, while a systematic study of ``function oracles'' has been started in \cite{MarquesSilvaJanota2014QueryComplexity}, using ``witness oracles''; we note that $\mc{O}_0, \mc{O}_{01}$ are not such witness oracles (we cannot easily check the returned var-sets). \bibliographystyle{plainurl} \newcommand{\noopsort}[1]{}
\section{Introduction} \label{sect:intro} Relativistic jets are detected in both Galactic and extragalactic sources. In the latter case, they reach kiloparsec, or even Megaparsec, scales and produce different observed phenomena, radiogalaxies and blazars, according to whether they are viewed sideways or end-on (Urry \& Padovani 1995). Within this unified scheme, blazars are therefore ideal probes of extragalactic jets, because their orientation influences, through relativistic aberration, the jet kinematics and enhances the emission variability at all wavelengths. Ambitious multiwavelength campaigns have been organized in the last 10 years on selected blazar sources, to monitor the variability of the whole spectrum in different emission states and on different time scales, to identify correlated variations at various frequencies, and to constrain the models (Ulrich, Maraschi \& Urry 1997; Pian et al. 1998; Tagliaferri et al. 2003; Krawczynski et al. 2004; Dermer \& Atoyan 2004; B{\l}a\.zejowski et al. 2005; B\"ottcher et al. 2005; Sokolov \& Marscher 2005; Aharonian et al. 2006; Albert et al. 2006; Kato, Kusunose \& Takahara 2006; Massaro et al. 2006; Raiteri et al. 2006). While the mechanism by which the inner engine (a supermassive, possibly rotating, black hole) converts gravitational into kinetic energy and transfers it to the relativistic plasma is still unknown, and the problems related to the exact interplay between the compact central object, its surrounding disk and the jet are still to be solved (Maraschi \& Tavecchio 2003; Vlahakis \& K\"onigl 2004; McKinney 2006), a clear paradigm has emerged for the production of the multiwavelength energy distributions of blazars: it is commonly accepted that synchrotron radiation dominates the spectrum from the radio to the UV (and occasionally X-ray) domain, while inverse Compton scattering prevails at higher energies. The radiating plasma is accelerated within the jet, and propagates relativistically through disturbances and shocks, which are responsible for the variability. The role of components external to the jet, such as the accretion disk/torus and the broad emission line region (BLR), has been recognized to be critical in the spectrum formation, and particularly in providing seed photons for the inverse Compton scattering (external Compton, Dermer \& Schlickeiser 1993; Sikora, Begelman, \& Rees 1994; Ghisellini \& Madau 1996; Ghisellini et al. 1998; B{\l}a\.zejowski et al. 2000; Celotti, Ghisellini \& Fabian 2007). When these components are bright and relevant with respect to the jet emission, the ``external'' contribution to the inverse Compton scattering process becomes significant, or even dominant, with respect to the ``internal'' synchrotron self-Compton process (i.e., inverse Compton scattering of the relativistic particles off the jet synchrotron photons) and generates differences in the broad-band spectra. The differences among the blazar ``flavours'' (Flat Spectrum Radio Quasars, Low-Energy Peaked BL Lacs, High-Energy Peaked BL Lacs) can be explained by differences in the relative importance of the Compton cooling and therefore, ultimately, by the different role of the BLR, powered by the thermal accretion disk (Ghisellini et al. 1998; Ghisellini, Celotti, \& Costamante 2002). Our recent multiwavelength observing campaigns of blazars benefitted from the joint availability of high energy facilities ({\it INTEGRAL}, {\it Swift}) and ground-based flexible small optical/infrared monitors, like the Rapid Eye Mount (REM).
They were triggered and driven by the detection of an outburst and were aimed, through the comparison of low and high emission states, at determining the parameters responsible for variability, and the role played by the BLR photon reservoir during different states. The ``economic'' jet model (\S 2), applied to a scenario of internal shocks in the blazar jet, provides a physically meaningful description of some observations. We report here our tests of the economic model on two sources with strong emission lines, 3C~454.3 and PKS~0537-441, and compare their variability with that of the ``classical'' featureless BL Lac object PKS~2155-304 (\S 3). We discuss our results and the model applicability in \S 4. \section{An ``economic'' jet model} \label{sect:Theo} The jet model we adopt has been presented in Katarzy\'nski and Ghisellini (2007) and is based on the scenario of internal shocks widely applied to Gamma-Ray Bursts (M\'esz\'aros \& Rees 1994; Sari \& Piran 1997), but originally developed for extragalactic kiloparsec jets (Rees 1978). Applications of the internal shock scenario to individual classes of blazar sources have been presented in Spada et al. (2001) and Guetta et al. (2004). Relativistic plasma blobs of different velocity collide within the jet (internal shocks), merge into a single blob and give rise to the observed multiwavelength outbursts. Direct evidence of jet components traveling at different velocities has been provided by VLBI radio measurements (Abraham et al. 1996; Jorstad et al. 2001a), although these map the jets on several parsec scales, much larger than the scales where the outbursts take place, which are at most a few light-days across, as inferred from emission variability. The basic assumptions of the model are 1) that the jet has a fixed efficiency, i.e. each blob receives the same amount of energy from the central engine, so that its maximum Lorentz factor is inversely proportional to its mass, and 2) that the contrast in the Lorentz factors of the colliding shells is always the same, i.e. the amount of energy transmitted to the emitting electrons during a collision is constant. In the internal shock model, the distance from the jet apex at which the collision of two blobs occurs is proportional to the square of the lower Lorentz factor. Therefore, slower blobs collide closer to the nucleus than faster blobs. Since all physical quantities scale with the distance from the nucleus, the site of the collision and dissipation is critical for determining the dominance of one radiation component over the other. Close to the nucleus, the magnetic field experienced by the plasma is stronger, and the influence of the BLR is weaker; this suggests a more significant synchrotron emission with respect to external Compton. The opposite is true when the collision occurs farther from the nucleus: the magnetic field has lower strength and the BLR photon density is larger. Thus, at different sites along the jet, the synchrotron and inverse Compton (primarily external Compton) components may have different ratios, even if the injected total energy is the same.
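As a purely illustrative numerical note (ours, with all other parameters held fixed): since the dissipation distance scales as the square of the lower Lorentz factor, lowering the latter from $\Gamma = 11$ to $\Gamma = 6.25$ (the values adopted for 3C~454.3 in \S 3.1 below) brings the collision site closer to the nucleus by a factor $(11/6.25)^2 \simeq 3$, i.e. into a region of stronger magnetic field and weaker BLR illumination, favouring synchrotron dominance.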
\section{The observing campaigns} \label{sect:data} \subsection{\itbf{INTEGRAL} observations of 3C~454.3} We activated our {\it INTEGRAL} program for observations of blazars in outburst in May 2005, following the dissemination of an optical alert for the Flat Spectrum Radio Quasar 3C~454.3 ($z = 0.859$). The source was in an unusually bright state ($V \sim 12$), and we verified that the RXTE All Sky Monitor was also registering a period of X-ray activity. Many orbiting and ground-based observatories started monitoring the source (Giommi et al. 2006; Fuhrmann et al. 2006; Pian et al. 2006; Villata et al. 2007). The spectral energy distributions of the blazar in Spring 2005, based on {\it INTEGRAL} data, and at previous epochs are reported in Figure 1, along with a sketch of the jet model used to reproduce the two different multiwavelength states. The first two blobs ($\Gamma_1$ and $\Gamma_2$) are fast and collide far from the central engine, but within the BLR, which is known to be a relevant source of photons in 3C~454.3 (Pian et al. 2005). Therefore, the Compton scattering of the jet electrons off the BLR photons is very significant, because the density of the external radiation in the rest frame of the blobs is high. This external Compton component peaks in the MeV-GeV range, and the model indeed matches well the EGRET spectrum of this blazar observed in 1991-1994. In Spring 2005, the blobs collision occurs closer to the center, so that the synchrotron component is enhanced with respect to inverse Compton. The difference in the initial Lorentz factors of the slower blobs at the two epochs is less than a factor of 2: the Lorentz factor corresponding to the ``historical'' state is $\Gamma = 11$, and that of May 2005 is $\Gamma = 6.25$. Figure 1 also shows the synthetic spectra obtained for a range of Lorentz factors between these two values. \begin{figure*} \vspace{2mm} \begin{center} \hspace{3mm}\psfig{figure=Figure1.ps,width=150mm,height=130mm,angle=0.0} \parbox{180mm}{{\vspace{2mm} }} \caption{Multiwavelength energy distributions of the blazar 3C~454.3 (top) and scheme of a relativistic ``jet'' (bottom, not to scale): a blob of Lorentz factor $\Gamma_1 = 11$ (colored in orange) is ejected at a certain time from the central engine. A following faster blob with $\Gamma_2 > \Gamma_1$ collides with it and produces an outburst, the spectrum of which is reported in orange in the two top panels (``historical'' state). At a subsequent epoch, a blob of Lorentz factor $\Gamma_3 = 6.25$ (green) is ejected, and is hit by a following blob of $\Gamma_4 > \Gamma_3$ ejected soon thereafter. This collision produces the outburst spectrum reported in green in the above panels (state of May 2005). The difference between the two multiwavelength spectra is completely accounted for by the difference of the bulk Lorentz factors at the two epochs (see text). A family of model spectra, parameterized by the Lorentz factor (the step is $\Delta\Gamma = 0.25$), is shown in the top left panel: the importance of the external inverse Compton component increases with the Lorentz factor and the dominance of the synchrotron component decreases accordingly. See data references in Pian et al. (2006) and more model details in Katarzy\'nski \& Ghisellini (2007).} \end{center} \end{figure*} \subsection{\itbf{Swift} observations of PKS~0537-441} This blazar ($z = 0.896$) has been observed at various epochs at many wavelengths and is known for its remarkable variability (see Pian et al. 2007, and references therein). Like 3C~454.3, it has a luminous BLR (Pian et al. 2005). In 2005 it was monitored in the optical and infrared by REM (Dolcini et al. 2005) and observed by all instruments of {\it Swift} in January, July and November.
Figure 2 reports the XRT light curve in two energy bands and the optical light curve in the $V$ band, obtained by combining the UVOT and REM observations. The X-ray light curves show a remarkable flare (factor of $\sim$4), but the simultaneous optical variations are astonishing: the source increased by a factor of $\sim$60 over about 1 month between December 2004 and January 2005, and then decreased during 2005. \begin{figure} \vspace{2mm} \begin{center} \hspace{3mm}\psfig{figure=Pian_2007_01_02.eps,width=150mm,height=130mm,angle=0.0} \parbox{180mm}{{\vspace{2mm} }} \caption{{\it Swift}/XRT background-subtracted light curves of PKS~0537--441 in the 1--10\,keV (filled circles) and in the 0.2--1\,keV (open circles) energy bands, and optical light curve (triangles), obtained from the merging of the UVOT V filter and REM V filter observations. The curves are not corrected for Galactic extinction, and are normalized to their respective averages (0.136 cts~s$^{-1}$ in the 1--10 keV band, 0.084 cts~s$^{-1}$ in the 0.2--1 keV band, 6.58 mJy in the optical band). The dotted horizontal lines indicate the average values of the three light curves: for clarity, the 0.2--1 keV band and V-band light curves have been scaled up by additive constants 1 and 2, respectively. Note that this upscaling implies that the flux ratios derived by direct inspection of the soft X-ray (0.2--1 keV) and optical light curves do not correspond to the real ones, the fluxes having been increased by constants 1 and 2, respectively. The maximum amplitudes of variability are a factor of $\sim$4 in X-rays and $\sim$60 in the optical (from Pian et al. 2007).} \end{center} \end{figure} We have constructed the spectral energy distributions of the blazar using the simultaneous {\it Swift} and REM data of our campaign, and have compared them to the historical multiwavelength spectra. The collection of the 2005 energy distributions and the two historical ones are shown in the left and right panels of Figure 3, respectively. We have modeled all multiwavelength spectra with the Katarzy\'nski \& Ghisellini (2007) model, by accounting for the multiwavelength variability only with variations of the bulk Lorentz factor $\Gamma$ and by parameterizing every other physical quantity as a function of $\Gamma$. The model curves reproduce the data very satisfactorily. The variability is due to rather small variations of $\Gamma$: from a minimum of $\Gamma \simeq 10$ in the most luminous state of February 2005 to a maximum of $\Gamma \simeq 15$ in the dimmest states of November 2005 as well as in the low states prior to 2005. The intermediate state of July 2005 is accordingly described by $\Gamma \simeq 12$. Some physical quantities yielded by the model are reported in Figure 4: the total luminosities associated with the protons, electrons and magnetic fields have a very weak or null dependence on the Lorentz factor (see lower panel of Fig. 4), indicating that the bolometric energy input is constant at all epochs. The spectral differences are related to the location of the dissipation site along the jet. Note that the MeV-GeV flux observed by EGRET for this blazar in 1991-1992 and in 1995 is well reproduced by the model curves. Therefore, it would have been crucial to observe PKS~0537-441 at these energies simultaneously with the X-ray and optical observations, because the model predicts here a large variability.
\begin{figure} \plottwo{Pian_2007_01_03a.eps}{Pian_2007_01_03b.eps} \caption{Spectral energy distributions of PKS~0537-441. {\it Left}: The multiwavelength spectra refer to 24-25 February 2005 (small circles), 12 July 2005 (squares) and 24 November 2005 (triangles). The big circles represent the {\it Swift} BAT data. The {\it Swift} XRT data are reported along with the 1 $\sigma$ confidence ranges of their power-law fits. The flux uncertainties are 1 $\sigma$ (in some cases they are smaller than the symbol size). The X-ray, UV, optical and near-IR data are corrected for Galactic extinction (see Pian et al. 2007). Overplotted are the jet models (Katarzy\'nski \& Ghisellini 2007, see text) for the energy distributions of 24-25 February 2005 (solid curve), 12 July 2005 (dotted curve), 24 November 2005 (dashed curve). The thermal component required to account for the observed optical-UV flux is also reported as a dashed curve. {\it Right}: Spectral energy distributions of PKS~0537-441 in 1991-1992 (filled squares) and 1995 (filled circles). The 1 $\sigma$ confidence ranges of the EGRET spectra are reported as light dashed lines. The far-infrared data taken by IRAS and ISO and the X-ray BeppoSAX data are not simultaneous and are represented as open squares, open circles and open triangles, respectively (see Pian et al. 2002, and references therein; Padovani et al. 2006). These spectra have also been modelled according to Katarzy\'nski \& Ghisellini (2007): the model curves for the 1991-1992 and 1995 states are dotted and solid, respectively (from Pian et al. 2007).} \end{figure} \begin{figure} \vspace{2mm} \begin{center} \hspace{3mm}\psfig{figure=Pian_2007_01_04.eps,width=150mm,height=130mm,angle=0.0} \parbox{180mm}{{\vspace{2mm} }} \caption{Jet parameters of PKS~0537-441. {\it Top panel:} The logarithms of 3 quantities (``Q'') are reported as a function of the logarithm of the bulk Lorentz factor: the size of the emitting source $R_{15}$ in units of $10^{15}$ cm, the value of the magnetic field $B$ in Gauss, and the injected power $L^\prime_{43}$ (in the comoving frame) in the form of relativistic particles, in units of $10^{43}$ erg~s$^{-1}$, as used for our modelling. The dashed lines represent the relationships predicted by the Katarzy\'nski \& Ghisellini (2007) model. The labelled dates identify the specific model/state of the source (see Pian et al. 2007 for more details on the model parameters). {\it Bottom panel:} The power carried by the jet in the form of magnetic field ($L_B$), cold protons ($L_{\rm p}$), relativistic electrons ($L_{\rm e}$) resulting from our modelling, as a function of the bulk Lorentz factor (from Pian et al. 2007).} \end{center} \end{figure} \subsection{\itbf{Swift} observations of PKS~2155-304 following a giant TeV outburst} PKS~2155-304 ($z = 0.116$) is one of the extragalactic sources most frequently monitored by the current experiments for the detection of Cerenkov light induced by TeV energy radiation. In July 2006 the blazar was detected by the HESS telescope at a level ten times higher than usual for this source. On 28 July 2006 the TeV flux at energies larger than 200 GeV was 7 times larger than that of the Crab Nebula in the same energy interval (Aharonian et al. 2007). This triggered multiple instruments for follow-up observations at lower energies, including {\it Swift} (Foschini et al. 2007). A bright X-ray flare, which subsequently decreased by a factor of $\sim$5 in one month, was detected by the {\it Swift} XRT about one day after the TeV outburst.
The X-ray spectral changes are not as dramatic. In particular, the frequency of the synchrotron peak remained at values similar to those observed in the past (e.g., 1997, Chiappetti et al. 1999), during low TeV activity. Modeling of the spectral energy distribution (reported in Figure 5) based on the synchrotron self-Compton process in a homogeneous region suggests an increase of the Doppler factor ($33$ in $2006$; $18$ in $1997$) and of the normalization of the relativistic electron distribution, associated with a decrease of the magnetic field ($0.27$ G in $2006$; $1$ G in $1997$; see Foschini et al. 2007). This suggests that in this source the observed variability cannot be solely reproduced with a variation of the bulk Lorentz factor, but other physical quantities must change between the observed states. \begin{figure} \vspace{2mm} \begin{center} \hspace{3mm}\psfig{figure=Figure5.ps,width=150mm,height=130mm,angle=0.0} \parbox{180mm}{{\vspace{2mm} }} \caption{Spectral energy distributions of PKS~2155--304: the red symbols represent the quasi-simultaneous TeV (HESS) and X-ray ({\it Swift} XRT) data; the black symbols refer to the TeV, XRT and REM observations of 2 August 2006 (see references in Foschini et al. 2007). For comparison, historical data are also shown: green symbols refer to 1997 and previous epochs (see references in Chiappetti et al. 1999) and to 2003 (HESS TeV spectrum taken in October-November 2003, Aharonian et al. 2005), while in blue and light blue are reported the \emph{XMM-Newton} data from Foschini et al. (2006). The red and black continuous curves represent the synchrotron self-Compton models (see Ghisellini et al. 2002) used to fit the data of July 2006 and August 2006, respectively. Both models include the absorption at TeV energies due to the extragalactic infrared background calculated according to Stecker \& Scully (2006). The dashed curves indicate the intrinsic (i.e. not absorbed) spectrum (from Foschini et al. 2007).} \end{center} \end{figure} \section{Discussion} \label{sect:discussion} We have presented the multiwavelength distributions of three well known and studied blazars at different epochs. Two of the sources (3C~454.3 and PKS~0537-441) have luminous BLRs and therefore represent a benchmark for the economic jet model of Katarzy\'nski \& Ghisellini (2007) based on the internal shock scenario. In this model, the flares are produced {\it within} the BLR, at different locations along the jet, from the collision of two consecutively emitted plasma blobs. Depending on the distance of the dissipation site from the nucleus, the plasma will move with different bulk Lorentz factors, larger values being attained farther from the nucleus. Since the ratio between the external Compton and the synchrotron power in the blazar spectrum depends on the square of the bulk Lorentz factor (the external radiation field density, in the frame comoving with the blob, depends on $\Gamma^2$), the distance at which the blobs collide determines the relative importance of the two emission components and the shape of the overall spectrum. Synchrotron-dominated multiwavelength blazar spectra are produced by collisions occurring closer to the jet apex, while the external Compton component, mainly responsible for the production of the MeV-GeV spectra, dominates when the flare is generated farther from the nucleus and closer to the BLR. All physical quantities can be parameterized as functions of $\Gamma$, and their variations then depend on the changes of $\Gamma$.
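As an illustrative estimate (ours, holding all $\Gamma$-independent quantities fixed): since the external Compton to synchrotron ratio scales as $\Gamma^2$, the change from $\Gamma \simeq 10$ (February 2005) to $\Gamma \simeq 15$ (November 2005) inferred for PKS~0537-441 corresponds to an increase of the Compton dominance by a factor $(15/10)^2 = 2.25$.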
For blazars with no luminous BLR, the economic jet model -- the concept of which is based on the relative distance of the dissipation site from the nucleus and from the BLR -- cannot be adequately tested. While internal shocks can take place in these objects as well, their observed multiwavelength variability must be explained with intrinsic changes of other physical quantities, besides $\Gamma$. In the case of PKS~2155-304, these are the magnetic field and the electron distribution normalization (\S 3). This implies a change in the total energy budget of the jet. The resulting variability affects the broad-band spectrum in a coherent way, producing a brightening at all frequencies. Parameter changes independent of $\Gamma$ can obviously take place also in objects with a rich BLR, but our purpose here is to demonstrate that this is not necessary: the very different observed multiwavelength states in these sources {\it can} be described by the dissipation of a fixed amount of energy at any given epoch. It must be noted that our approach does not imply that the kinematics in the jets of blazars with and without luminous BLRs is {\it intrinsically} different: as said above, internal shocks can occur in both types of blazars. However, a significant difference is apparent in the VLBI jet structures of EGRET blazars (typically exhibiting also prominent optical and UV emission lines) and BL Lac objects with no detected emission lines (Jorstad et al. 2001a; Piner \& Edwards 2004). A key interesting feature of the internal shock scenario is the relatively low radiative efficiency (of order of a few per cent), which well accounts for the dissipation of kinetic energy in blazars, as higher dissipation rates would be difficult to reconcile with the large amount of power carried up to the large scale lobes. Furthermore, although the radiative dissipation occurs on all jet scales, most of it is localized within tenths of a parsec, on the BLR scale, in agreement with the requirements of fast variability and transparency to $\gamma$-rays (Spada et al. 2001). Many VLBI campaigns have been organized with the aim of correlating with confidence the occurrence of a blazar multiwavelength outburst and the appearance of a new radio component in the jet (e.g., Jorstad et al. 2001b; Savolainen et al. 2002; Lindfors et al. 2006). Measurements of the velocities of the emerging plasma blobs may better clarify their behavior within light-days from the nucleus, at scales that VLBI cannot probe, and help test the economic jet model, although the difficulties of disentangling the kinematical from the viewing angle effects may be insurmountable. We stress that the model has a high predictive power at the MeV-GeV energies (see e.g. Figure 3), so that the monitoring of blazars with {\it AGILE} and {\it GLAST}, coordinated with observations at lower energies, will represent a crucial test. \begin{acknowledgements} We would like to acknowledge the contribution of many colleagues to the success of the blazar observing campaigns described in this paper. EP would like to thank the organizers of the Frascati Workshop 2007 for a very pleasant and stimulating conference. This work has been supported by the Italian MIUR and by the Italian Space Agency through the contract ASI-INAF I/023/05/0. \end{acknowledgements}
\section{Introduction} \label{sec:intro} It has long been conjectured that neutron stars might contain cores of quark matter, and one of the challenges facing nuclear astrophysics is to find signatures by which the presence of such matter could be inferred from observations of the behavior of neutron stars. This requires us to develop a good understanding of the differences between the properties of nuclear matter and quark matter, taking into account the effects of magnetic fields, which are known to be present in neutron stars. In this paper we study quark matter in magnetic fields $B\lesssim 10^{14}$\,Gauss, which are astrophysically plausible and high enough to affect transport (see for example \cite{Huang:2009ue}) but not so large as to modify the phase structure of the material \cite{Ferrer:2006vw,Fukushima:2007fc,Menezes:2008qt}. Nuclear matter at high densities and low temperatures is expected to be a type-II electrical superconductor, with the magnetic field distributed in an Abrikosov lattice of flux tubes~\cite{Baym:1969}. In this paper we investigate the possibility that quark matter in the two-flavor color superconducting phase (``2SC'') \cite{Alford:2007xm} could be a type-II superconductor with respect to the color gauge fields \cite{Iida:2002ev,Giannakis:2003am}, with color flux tubes that scatter electrons, muons, and ungapped quarks via the Aharonov-Bohm effect. These tubes are not topologically stable, and their energetic stability has not yet been determined; in this paper we investigate the role they might play in transport, and their expulsion time, if they turn out to be stable or to have a lifetime that is sufficiently long. As we explain below, the tubes carry flux that is mostly color-magnetic (hence they can reasonably be called ``color-magnetic flux tubes'') with a small admixture of ordinary magnetic flux. We will argue that the density of color-magnetic flux tubes could be high, perhaps only about an order of magnitude less than that of ordinary flux tubes in superconducting nuclear matter. Color-magnetic flux tubes may appear in other color superconducting phases, such as the color-flavor-locked (CFL) phase, but the CFL phase has no gapless charged excitations, so in this paper we focus on the 2SC phase. We calculate the Aharonov-Bohm interaction between the flux tubes and unpaired quarks or electrons/muons. We calculate the associated damping time and the forces on the flux tubes. We defer the calculation of other contributions to relaxation and transport in the 2SC phase, such as scattering of the unpaired quarks and electrons off each other, to future work. The behavior of quark matter phases in magnetic fields is complicated by the intertwined breaking of the strong interaction $SU(3)$ ``color'' gauge symmetry and the electromagnetic $U(1)_Q$ gauge symmetry. In the 2SC phase, a condensate of Cooper pairs of up ($u$) and down ($d$) quarks leads to the gauge symmetry breaking pattern $SU(3)\otimes U(1)_Q \to SU(2)_{rg} \otimes U(1)_{\tilde Q}$ \cite{Alford:1997zt,Alford:1999pb}. The unbroken $SU(2)_{rg}$ symmetry ensures confinement of particles that carry net red or green color, with a confinement scale around 10\,MeV \cite{Rischke:2000cn}. The unbroken $U(1)_{\tilde Q}$ gauge symmetry is a linear combination of the original electromagnetic and color symmetries, called ``rotated electromagnetism''. The associated gauge field, the ``${\tilde Q}$ photon'', is a combination of the original photon and one of the gluons.
It is massless and propagates freely in 2SC quark matter. The orthogonal combination $X$ is a broken gauge generator, and the associated magnetic field has a finite penetration depth. The situation is closely analogous to the Higgs mechanism in the standard model, where one linear combination of the hypercharge and $W_3$ gauge bosons remains massless (the photon), while the orthogonal combination becomes massive (the $Z^0$). The $X$ flux tubes are therefore analogous to ``$Z$-strings'' \cite{Vachaspati:1992fi}, which have been found to be stable only in a small region of the standard model parameter space \cite{James:1992wb}, although the stable region may be enlarged when bound states are taken into account \cite{Vachaspati:1992mk}. There are differences between the 2SC phase of QCD and the Higgs phase of the standard model: the gluon mass is proportional to the quark chemical potential, not the superconducting order parameter \cite{Alford:2007xm}; the non-Abelian gauge group is $SU(3)$ rather than $SU(2)$ and is only partly broken, leaving an unbroken confining $SU(2)$ as well as an unbroken $U(1)$ in the low temperature phase. This means that a separate stability calculation will be needed for the 2SC case. Because electromagnetism is much more weakly coupled than the strong interaction, the massless ${\tilde Q}$ gauge field is almost identical to the photon, with a small admixture of a color gauge boson. Conversely, the broken $X$ gauge field is almost identical to one of the gluons, with a small admixture of the photon \cite{Alford:1999pb}. Thus the $X$ flux tubes can be described as ``color-magnetic flux tubes''. However, because they contain a small admixture of ordinary magnetic flux, they interact with electrons/muons as well as with unpaired (blue) quarks. In summary, the 2SC phase is not a superfluid, but it is a superconductor with respect to the $X$ gauge fields, and a conductor with respect to the ${\tilde Q}$ gauge fields, with current mainly being carried by the gapless electrons and blue quarks (one of which is neutral, the other has charge +1). Strange quarks and muons, if present, will have a lower Fermi momentum because of their higher mass, and hence less phase space near their Fermi surface. Thus their contribution to the processes discussed in this paper will be subleading, and we ignore it. The picture given above is valid below the critical temperature for 2SC pairing and above an unknown critical temperature $T_{1SC}$ at which there will be a transition to a phase in which there is self-pairing of the blue up and down quarks. Such pairing would break the $U(1)_{\tilde Q}$ symmetry, so there could be both ${\tilde Q}$ and $X$ flux tubes. Models of the strong interaction between quarks do not give us much idea of the value of $T_{1SC}$. They agree that, because the strong attraction is much weaker in the single-color channel, $T_{1SC}$ will be many orders of magnitude lower than the critical temperature for 2SC pairing, perhaps as low as 1\,eV ($10^4$\,K) \cite{Alford:1997zt,Schafer:2000tw,Alford:2002rz}. In this paper we will be concerned with temperatures above $T_{1SC}$, where the ${\tilde Q}$ gauge symmetry remains unbroken. Depending on the ratio of the $X$-flux penetration depth to the coherence length of the condensate, the 2SC phase may be type-I or type-II with respect to the $X$ magnetic field \cite{Iida:2002ev}.
In this paper we will be concerned with the possibility of type-II behavior, and the presence of flux tubes containing $X$-flux in the 2SC quark matter core of a compact star. Even if the average magnetic field strength in the core is below the lower critical field, such flux tubes may end up ``frozen in'' if the quark matter had cooled into the 2SC state in the presence of the magnetic field. The magnetic field would then be resolved into a ${\tilde Q}$ part, which would pass freely through the 2SC quark matter, and an $X$ part, which would become trapped in flux tubes (Sec.~\ref{sec:fluxtube}). The paper is structured as follows. In Sec.~\ref{sec:type2} we calculate the Ginzburg-Landau parameter for 2SC quark matter, and conclude that it is a type-II superconductor with respect to the broken $X$ generator as long as the pairing gap $\Delta$ is large enough. We estimate that $\Delta\gtrsim \mu_q/16$ will suffice, which for typical quark chemical potentials $\mu_q\sim 400\,{\rm MeV}$ requires $\Delta \gtrsim 25$ MeV. In Sec.~\ref{sec:fluxtube} we discuss the nucleation scenario by which the flux tubes can occur in the 2SC superconductor, even when the magnetic field intensities are below the lower critical field. We estimate the density of such flux tubes in the hypothetical 2SC quark matter core of a neutron star. In Sec.~\ref{sec:scattering} we calculate the Aharonov-Bohm scattering cross section for electrons or unpaired quarks interacting with color magnetic flux tubes. Sec.~\ref{sec:relax_time} is devoted to the computation of the relaxation time of massless electrons and unpaired blue quarks interacting with flux tubes via the Aharonov-Bohm cross-section. In Sec.~\ref{sec:forces} we estimate the timescale for expulsion of the flux tubes from the 2SC core, taking into account the forces on the color-magnetic flux tubes in the 2SC core and at its boundary, but neglecting any forces on the magnetic flux lines outside the core. We summarize our results in Sec.~\ref{sec:conclusions}. In our calculations we use ``Heaviside-Lorentz'' natural units with $\hbar = c = k_B = \epsilon_0 = 1$, where $k_B$ is the Boltzmann constant and $\epsilon_0$ is the vacuum permittivity; the electric charge $e$ is related to the fine structure constant by $\alpha=e^2/(4\pi)$. \section{Type-II color superconductivity in quark matter} \label{sec:type2} A superconductor is of type II if it obeys the condition \begin{equation} \kappa \equiv \frac{\lambda}{\xi}>\frac{1}{\sqrt{2}}, \label{criterion} \end{equation} where $\kappa$ is the Ginzburg-Landau (GL) parameter, $\lambda$ is the penetration depth, and $\xi$ is the coherence length for the superconductor. In a system of relativistic fermions with chemical potential $\mu$ and pairing gap $\Delta$, we expect $\xi \propto 1/\Delta$, $\lambda\propto (g\mu)^{-1}$, so $\kappa\propto\Delta/(g\mu)$. (In the case of 2SC quark matter the relevant broken gauge symmetry is the ``$X$'', which is mostly color, so the coupling $g$ is approximately the strong coupling constant.) We therefore expect that 2SC quark matter will be a type-II color superconductor if the gap is sufficiently large. To make a more accurate determination we follow the approach of Bailin and Love \cite{Bailin:1983bm} and Iida and Baym \cite{Iida:2002ev}.
We start with the effective free energy density (Ginzburg-Landau theory) for a relativistic BCS superconductor (Ref.~\cite{Bailin:1983bm},~(3.12)) \begin{equation} \label{GL_functional} {\cal F} = {\cal F}_n+\alpha \psi^*\psi+\frac{1}{2}\beta(\psi^*\psi)^2+ \gamma(\bm \nabla\psi^*-2ie\bm A\psi^*)(\bm \nabla\psi+2ie\bm A\psi) +\frac{1}{2\mu_0}(\bm B-\mu_0 \bm H)^2 \ . \end{equation} (We have followed Ref.~\cite{Bailin:1983bm} in writing the magnetic field free energy in SI units; in natural units $\mu_0=1$.) Here $\psi$ is the gap parameter; for negative $\alpha$ the free energy has a minimum at $|\psi|^2=\psi_0^2$, with penetration depth $\lambda$ and coherence length $\xi$ given by \begin{equation} \psi_0^2 = -\frac{\alpha}{\beta}, \qquad \lambda^2 = \frac{1}{2 \gamma q_{\rm pair}^2 \vert\psi_0\vert^2 }, \qquad \xi^2 = -\frac{\gamma}{\alpha} \ , \label{GL-kappa} \end{equation} where $q_{\rm pair}$ is the charge of the Cooper pair. The GL parameter $\kappa$ is then given by \begin{equation} \kappa^2 = \frac{\lambda^2}{\xi^2} = \frac{1}{2 q_{\rm pair}^2}\frac{\beta}{\gamma^2}. \label{kappasq} \end{equation} The coefficients in the Ginzburg-Landau functional are \cite{Bailin:1983bm} \begin{equation} \begin{array}{rcl} \alpha &=& \displaystyle \nu \frac{\tau_{GL}}{2}, \\[2ex] \beta &=& \displaystyle \nu \frac{7\zeta(3)}{16(\pi T_c)^2}, \\[2ex] \gamma &=& \displaystyle \frac{\beta}{6}\frac{p_F^2}{\mu^2} , \end{array} \label{GLcoeffs-general} \end{equation} where $\tau_{GL}\equiv (T-T_c)/T_c$. The fermions have Fermi momentum $p_F$, so the density of states near the Fermi surface is $\nu= N p_F\mu/\pi^2\simeq N \mu^2/\pi^2$. The parameter $N$ is a degeneracy factor that is $1$ for a single-species system, and $2$ for the 2SC phase (see Ref.~\cite{Bailin:1983bm}, Eq.~(4.63)). The Ginzburg-Landau theory is most reliable for temperatures close to $T_c$; however, we will use it at $T\ll T_c$. The low-temperature gap parameter $\Delta$ is related to the critical temperature by $T_c = (e^{\gamma_E}/\pi)\Delta$, where $\gamma_E\approx 0.577$ is the Euler-Mascheroni constant (not to be confused with the GL coefficient $\gamma$); note that $\Delta$ then differs from $\psi_0$ by a factor of about 1.7. Expressing the coefficients in terms of $\Delta$, we obtain \begin{equation} \kappa \approx \frac{32.74}{q_{\rm pair}\sqrt{N}} \frac{\Delta}{\mu} \ . \label{kappa} \end{equation} We can check this result by noting that for a relativistic electronic superconductor, $N=1$ and $q_{\rm pair}=2e$, with $\alpha=e^2/(4\pi)\approx 1/137$. Substituting these values into \eqn{kappa} we find $\kappa=54.043 \Delta/\mu = 95.325 T_c/\mu$, in agreement with Ref.~\cite{Bailin:1983bm},~(3.24). In 2SC quark matter, the degeneracy factor is $N=2$ and the charge of the Cooper pair is the $X$ charge of the 2SC condensate. From Eqs.~\eqn{couplings} and \eqn{qc} of Sec.~\ref{sec:scattering} we find \begin{equation} q_{\rm pair}= q_c {e^{(\X)}} = \frac{g}{\sqrt{3}\cos{\varphi}} \approx \frac{g}{\sqrt{3}}, \label{qpair} \end{equation} where the mixing angle ${\varphi}$ is defined in Eq.~(\ref{couplings}). We estimate the strong coupling constant $g$ by assuming that $\alpha_s=g^2/(4\pi)\approx 1$, so $g\approx 3.5$. Substituting these values into \eqn{kappa} we find \begin{equation} \kappa_{\rm 2SC} \approx 11\frac{\Delta}{\mu_q} \ . \label{kappa-2SC} \end{equation} We conclude, using \eqn{criterion}, that 2SC quark matter will be of type II if the pairing gap is sufficiently large, $\Delta \gtrsim \mu_q/16$.
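As a cross-check of the numerical factors above, the following short Python sketch (illustrative only, not part of the derivation; it assumes Eq.~\eqn{kappa}, the couplings quoted in the text, and $\alpha_s\approx 1$) reproduces both the electronic value $\kappa\approx 54\,\Delta/\mu$ and the 2SC value $\kappa_{\rm 2SC}\approx 11\,\Delta/\mu_q$:
\begin{verbatim}
import math

alpha, alpha_s = 1/137.036, 1.0        # fine structure constants (alpha_s ~ 1 assumed)
e = math.sqrt(4*math.pi*alpha)         # electromagnetic coupling
g = math.sqrt(4*math.pi*alpha_s)       # strong coupling

def kappa(q_pair, N):
    """GL parameter in units of Delta/mu, Eq. (kappa)."""
    return 32.74/(q_pair*math.sqrt(N))

# relativistic electronic superconductor: N = 1, q_pair = 2e
print(kappa(2*e, 1))                   # ~54.0, cf. Bailin & Love (3.24)

# 2SC quark matter: N = 2, q_pair ~ g/sqrt(3), Eq. (qpair)
k = kappa(g/math.sqrt(3), 2)
print(k)                               # ~11.3, Eq. (kappa-2SC)
print(1/(math.sqrt(2)*k))              # type-II threshold Delta/mu_q ~ 1/16
\end{verbatim}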
In quark matter we expect $\mu_q\sim 400~{\rm MeV}$, so this only requires the 2SC pairing gap to be greater than about 25~{\rm MeV}, which is well within typical estimates~\cite{Brown:1999aq,Brown:1999yd}. Our general conclusion agrees with that of Refs.~\cite{Iida:2002ev,Giannakis:2003am}, who also noted that a sufficiently large 2SC pairing gap yields a type-II superconductor. Our specific result \eqn{kappa-2SC} differs from Eq.~(112) of Ref.~\cite{Iida:2002ev} by a factor of $\sqrt{2}$, but given the uncertainty in the strong coupling constant $g$ this numerical discrepancy does not affect our conclusion. \section{Color-magnetic flux tubes in the 2SC phase} \label{sec:fluxtube} \subsection{The nucleation and density of flux tubes} When the quark matter core of the star cools below a critical temperature, a 2SC condensate forms. We expect that this happens before the nuclear mantle becomes superconducting because the gap parameter for quark matter is expected to be an order of magnitude larger than that for proton pairing \cite{Alford:2007xm,Dean:2002zx,Muther:2005cj,Sedrakian:2006xm}. The electromagnetic field is then resolved into a ${\tilde Q}$ component and an $X$ component. The 2SC core is not a superconductor with respect to ${\tilde Q}$, so the ${\tilde Q}$ component is undisturbed \cite{Alford:1999pb} (on this we disagree with Ref.~\cite{hep-ph/0012383}, which we believe imposes an incorrect boundary condition on the gluon field). However, the core is a superconductor with respect to the $X$ component, and we have argued above that it may well be a type-II superconductor. The lower critical field for the $X$-superconductivity is very high, $H_{c1} \sim 10^{17}$\,Gauss \cite{Iida:2002ev}, and typical neutron star magnetic fields are expected to be lower than this, but, as we now argue (see also \cite{Alford:1999pb} and footnote [8] of Ref.~\cite{Iida:2002ev}), it is still quite possible for the $X$-flux to form flux tubes threading the quark matter core. The only way the $X$-flux could be expelled from the core is if the transition from hot quark matter to 2SC happens smoothly from the center of the star outwards. However, it seems more likely that the transition to 2SC matter will proceed by nucleation of 2SC regions (``bubbles'') in the quark matter, which then grow and coalesce. The $X$-flux will be expelled from the 2SC bubbles, but will then be trapped in the non-superconducting regions between the bubbles. As the bubbles grow, these regions become smaller, concentrating the flux there until the local field strength rises above $H_{c1}$, at which point the bubbles stop growing. At this stage, the core consists of 2SC quark matter with channels of non-superconducting quark matter running through it, carrying the $X$-flux. If the 2SC phase is a type-II superconductor then these channels are unstable and will fragment into flux tubes, each carrying a single quantum of $X$-flux, with a short-range repulsion between the flux tubes. The fact that the average field strength was below the lower critical field for a sphere of 2SC matter in a uniform magnetic field will now manifest itself as an outwardly-directed boundary force on the flux tubes at the point where they meet the edge of the 2SC core. We will study this in Sec.~\ref{sec:forces}. Because the 2SC phase is a conductor with respect to ${\tilde Q}$ charge, it supports eddy currents which make it very difficult for the ${\tilde Q}$ magnetic field in the 2SC core to change.
The timescale for expulsion of the ${\tilde Q}$ magnetic field is estimated to be longer than the age of the universe \cite{Alford:1999pb}. Thus we are justified in treating the ${\tilde Q}$ magnetic field as a fixed background. If we assume that all the $X$-flux is trapped in the manner described above, then the density of flux tubes is just $B_X$, the density of magnetic $X$-flux, divided by $\Phi_X$, the $X$-flux of a single flux tube. $B_X$ is obtained by projecting out the $X$-component of the original electromagnetic flux $B$ (see \eqn{mixing}), so $B_X=B\sin{\varphi}$. The flux quantum is \begin{equation} \Phi_X = \frac{2\pi}{q_{\rm pair}} \approx \sqrt{\frac{3\pi}{\alpha_s}} \ , \label{X-quantum} \end{equation} where $q_{\rm pair}$ is the $X$-charge of the 2SC condensate (see \eqn{qpair} and \eqn{couplings}). We can relate it to the flux quantum $\Phi_0=\pi/e\approx 10.37$ for an ordinary superconductor where the charge of the condensate is $2e$, \begin{equation} \Phi_X = \frac{2e}{q_{\rm pair}} \Phi_0 = 6\sin({\varphi}) \Phi_0 \ . \label{XvsPhi0} \end{equation} We conclude that \begin{equation}\label{eq:flux_number} n_{v} = \frac{B_X}{\Phi_X} = \frac{1}{6} \frac{B}{\Phi_0} \ . \end{equation} This is the upper limit on the flux tube density in 2SC matter. Interestingly, as anticipated in Ref.~\cite{Blaschke:2000gm}, it only differs by a factor of $1/6$ from the density of electromagnetic flux tubes that would result if the core were an electromagnetic superconductor due to electron or proton pairing. Projection onto the $X$ component reduces the magnetic flux by a factor $\sin{\varphi}$, but because the $X$ fields are strongly coupled their flux quantum is smaller by a similar factor, so the flux tube density ends up being independent of the mixing angle. The actual density will depend on details of how the transition to 2SC matter was completed. For an internal field $B=10^{14}$\,Gauss ($2\,{\rm MeV}^2$), the maximum flux tube density is $n_v=8.1\times 10^{19}\,{\rm cm}^{-2}$. \subsection{Properties of the flux tube} The thickness of the flux tubes is given by the penetration depth for magnetic $X$-flux in the 2SC phase. This follows from equations \eqn{kappasq} to \eqn{qpair}. Assuming $p_F\simeq \mu_q$ for relativistic quarks, \begin{equation} \lambda = \frac{3\pi}{g\mu_q\vert\tau_{GL}\vert^{1/2}} = (1.3\,{\rm fm}) \left(\frac{400{\rm~MeV}}{\mu_q}\right) \left(1-\frac{T}{T_c}\right)^{-1/2} \ . \label{lambda-X} \end{equation} The energy per unit length (tension) of the flux tube is given by $\half {\cal E} \ln\ka_{\!X}$ where ${\cal E}$ is the energy per unit length of the magnetic flux if it were uniformly spread over a circle of radius $\lambda$ (Ref.~\cite{Tinkham}, Sec.~(5.1.2)), and $\ln\kappa_X$ is a factor of order 1. In Heaviside-Lorentz natural units ${\cal E} = (B^2/2)\,\pi\lambda^2$, where $B=\Phi_X/(\pi\lambda^2)$, so \begin{equation} \varepsilon_X = \frac{\Phi_X^2}{4\pi\lambda^2}\ln \ka_{\!X} \end{equation} (compare Ref.~\cite{Iida:2002ev}, Eq.~(107); see also Ref.~\cite{Blaschke:2000gm}). To estimate the tension we work to lowest order in $\alpha$ and use \eqn{lambda-X}, \eqn{X-quantum}, and \eqn{sinmix}. In the low temperature limit we find \begin{equation} \varepsilon_X = \frac{\mu_q^2}{3\pi}\,\ln\ka_{\!X} \ . \label{X-tension} \end{equation} Assuming that in 2SC quark matter $\mu_q$ is in the 350 to 500 MeV range, and that the logarithmic factor is of order 1, we conclude that the tension will be of order $60$ to $130$\,MeV/fm.
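The numerical estimates of this section are easily reproduced; the Python sketch below is illustrative only, and assumes $\alpha_s\approx 1$, $\mu_q=400$\,MeV, $\ln\ka_{\!X}\approx 1$, and the conversions $10^{14}\,{\rm G}\approx 2\,{\rm MeV}^2$ and $\hbar c\approx 197.3$\,MeV\,fm quoted in the text:
\begin{verbatim}
import math

alpha, alpha_s = 1/137.036, 1.0
e, g = math.sqrt(4*math.pi*alpha), math.sqrt(4*math.pi*alpha_s)
hbar_c = 197.327                    # MeV fm
mu_q = 400.0                        # MeV

Phi0 = math.pi/e                    # ordinary flux quantum, ~10.37
B = 2.0                             # MeV^2, i.e. ~10^14 G (see text)
n_v = B/(6*Phi0)                    # Eq. (eq:flux_number), in MeV^2
MeV_to_invcm = 1e13/hbar_c          # 1 MeV ~ 5.07e10 cm^-1
print(n_v*MeV_to_invcm**2)          # ~8e19 flux tubes per cm^2

lam = 3*math.pi/(g*mu_q)*hbar_c     # Eq. (lambda-X) at T << T_c, in fm
print(lam)                          # ~1.3 fm

eps_X = mu_q**2/(3*math.pi)/hbar_c  # Eq. (X-tension), ln(kappa_X) ~ 1
print(eps_X)                        # ~86 MeV/fm, within the 60-130 range
\end{verbatim}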
\section{Aharonov-Bohm scattering by flux tubes} \label{sec:scattering} The Aharonov-Bohm effect provides a remarkably strong interaction between a charged particle and a flux tube containing magnetic flux. For the simple case of a single $U(1)$ gauge group (electromagnetism), the differential cross-section per unit length is (see, for example, Ref.~\cite{Alford:1988sj}) \begin{equation} \frac{d\sigma}{d\vartheta} = \frac{\sin^2(\pi\tilde\beta)}{ 2\pi k\sin^2(\vartheta/2)}, \label{AB-scattering} \end{equation} where \begin{equation} \tilde\beta= \frac{q_p}{q_c} \ , \label{AB-parameter} \end{equation} Here $q_p$ is the charge of the scattering particle. For a flux tube that arises as a topological soliton in an Abelian Higgs model, $q_c$ is the charge of the condensate field whose winding by a phase of $2\pi$ characterizes the flux tube; $k$ is the momentum in the plane perpendicular to the string, and $\vartheta$ is the scattering angle. Aharonov-Bohm scattering has several important features: \begin{tightlist}{$\bullet$} \item The cross-section vanishes if $\tilde\beta$ is an integer, but is otherwise non-zero. \item The cross section is {\em independent of the thickness of the flux tube}: the scattering is not suppressed in the limit where the symmetry breaking energy scale goes to infinity, and the flux tube thickness goes to zero. \item The cross section diverges both at low energy and for forward scattering. \end{tightlist} It is therefore of great interest to determine the values of $\tilde\beta$ for scattering of the fermions that are ungapped in the 2SC phase off a flux tube containing magnetic flux associated with the broken gauge symmetry. \subsection{The gauge groups and charges} \subsubsection{The light fermions} In the 2SC phase we will focus on the $U(1)\times U(1)$ gauge group consisting of electromagnetism and the part of the color gauge symmetry that mixes with electromagnetism. The relevant particles are the quarks and the electron: \begin{equation} \psi = (ru,gd,rd,gu,bu,bd,e^-), \label{basis} \end{equation} where ``$ru$'' means the red up quark, etc., and ``$e^-$'' is the electron. Muons would have the same interaction as the electron, so we do not include them separately. In this basis, the generators of the two $U(1)$ gauge groups are just the diagonal matrices of their electric and color charges, \begin{equation} \begin{array}{rcl} Q^\psi &=& {\rm diag}(+{\txt \frac{2}{3}},-{\txt \frac{1}{3}},-{\txt \frac{1}{3}},+{\txt \frac{2}{3}}, +{\txt \frac{2}{3}},-{\txt \frac{1}{3}},-1), \\[1ex] T^\psi &=& \frac{1}{2\sqrt{3}}{\rm diag}(1,1,1,1,-2,-2,0). \end{array} \label{generators} \end{equation} The normalization of $Q^\psi$ is fixed by the conventional electric charges of the particles. For $T$ we have used the conventional normalization for generators of the $SU(3)$ color gauge group~\cite{Donoghue_Golowich_Holstein}. The kinetic term in the lagrangian of the fermions is $\bar\psi\gamma^\mu D_\mu \psi$, where the covariant derivative of the fermion fields is \begin{equation} D_\mu \psi = \partial_\mu\psi - i e A^Q_\mu Q^\psi\psi -i g A^T_\mu T^\psi\psi \ . \label{Dpsi-orig} \end{equation} The electromagnetic gauge coupling is $e$, and the QCD gauge coupling is $g$. The photon gauge field is $A^Q$, and the gluon gauge field is $A^T$. With the normalization of \eqn{generators}, $\alpha=e^2/4\pi=1/137$, and $\alpha_s=g^2/4\pi\sim 1$.
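As a numerical cross-check of the charge assignments used in the rest of this section, the Python sketch below builds these matrices explicitly and verifies the unbroken combination, the condensate charge, and the Aharonov-Bohm factors derived in the following subsections (the matrix \texttt{phi} anticipates Eq.~\eqn{phi2SC}, and the printed values anticipate Eqs.~\eqn{eta1}, \eqn{qc}, and \eqn{ABfactor-bu}); it is an illustrative sketch, not part of the derivation:
\begin{verbatim}
import numpy as np

# basis (ru, gd, rd, gu, bu, bd, e-), Eq. (basis)
Q = np.diag([2/3, -1/3, -1/3, 2/3, 2/3, -1/3, -1])
T = np.diag([1, 1, 1, 1, -2, -2, 0])/(2*np.sqrt(3))

phi = np.zeros((7, 7))              # 2SC condensate, Eq. (phi2SC)
phi[0, 1] = phi[1, 0] = 1.0
phi[2, 3] = phi[3, 2] = -1.0

act = lambda G, M: G @ M + M @ G    # each diquark index transforms

# the unbroken combination Qt = Q - T/sqrt(3) annihilates phi:
assert np.allclose(act(Q - T/np.sqrt(3), phi), 0)   # Eq. (eta1)

alpha, alpha_s = 1/137.036, 1.0
e, g = np.sqrt(4*np.pi*alpha), np.sqrt(4*np.pi*alpha_s)
X = (e**2/(np.sqrt(3)*g**2))*Q + T                  # broken generator
q_c = act(X, phi)[0, 1]             # X-charge of the condensate
print(q_c*np.sqrt(3))               # ~1 + e^2/(3 g^2), Eq. (qc)
beta = np.diag(X)/q_c               # AB factors, Eq. (AB-parameter)
print(np.sin(np.pi*beta[4]))        # bu quark: ~ -pi*alpha/alpha_s
\end{verbatim}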
\subsubsection{The 2SC condensate} The 2SC condensate is a diquark condensate, \begin{equation} \phi_{ij} = \<\psi_i C\gamma_5 \psi_j\> \ , \end{equation} where the indices $i$ and $j$ live in the color-flavor space of \eqn{basis}. The condensate only involves the red and green up and down quarks, so its color-flavor structure is \begin{equation} \phi \propto \left( \begin{array}{rrrrrrr} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right)\ . \label{phi2SC} \end{equation} From \eqn{generators} we can see how $\phi_{ij}$, considered as a $7\times7$ matrix in the color-flavor space of \eqn{basis}, transforms under an infinitesimal electromagnetic or color rotation. Each of the quarks in the diquark feels its own color-flavor phase, so each index $i$ and $j$ is separately transformed: \begin{equation} \begin{array}{rcl} Q^\phi \phi &=& Q^\psi\cdot \phi + \phi\cdot Q^\psi, \\[1ex] T^\phi \phi &=& T^\psi \cdot\phi + \phi\cdot T^\psi, \end{array} \end{equation} where the ``$\cdot$'' on the right hand side signifies ordinary matrix multiplication of the two $7\times 7$ matrices, and we have used the fact that $Q^\psi$ and $T^\psi$ are both diagonal, and hence symmetric. The lagrangian of the 2SC condensate ({\it i.e.}\,~the G-L theory) contains the kinetic term $(D_\mu\phi)^*D^\mu\phi$, where the covariant derivative is \begin{equation} D_\mu\phi = \partial_\mu\phi -ieA^Q_\mu Q^\phi\phi -igA^T_\mu T^\phi\phi \ . \end{equation} This determines the coupling of the 2SC condensate to the gauge fields. \subsection{The broken/unbroken basis} When the 2SC condensate $\phi$ forms, one linear combination of $Q$ and $T$ is spontaneously broken: we will call it ``$X$''. The other remains unbroken: we will call it ``${\tilde Q}$'', \begin{equation} \begin{array}{rcl} {\tilde Q} &=& Q + \eta_1 T \ , \\[1ex] X &=& -\eta_2 Q + T \ . \end{array} \label{gens-new} \end{equation} We determine $\eta_1$ by requiring that the 2SC condensate be invariant under ${\tilde Q}$ gauge transformations, \begin{equation} {\tilde Q}^\phi\phi = 0 \ , \end{equation} which implies that \begin{equation} \eta_1 = -\frac{1}{\sqrt{3}} \ . \label{eta1} \end{equation} It is natural to work in the $({\tilde Q},X)$ basis rather than the $(Q,T)$ basis, so we define new ``rotated'' gauge fields \begin{equation} \begin{array}{rcl} A^{\tilde Q} &=& \cos{\varphi} A^Q - \sin{\varphi} A^T, \\ A^X &=& \sin{\varphi} A^Q + \cos{\varphi} A^T, \end{array} \label{mixing} \end{equation} where the mixing angle ${\varphi}$ is analogous to the Weinberg angle in the standard model which parametrizes the mixing of the hypercharge and $W^3$ gauge bosons to yield the photon (analogous to $A^{\tilde Q}$ here) and the $Z$ (analogous to $A^X$ here). It is important that the mixing of the gauge fields is expressed in terms of an angle: this preserves their normalization, so that the gauge field kinetic terms for $A^{\tilde Q}$ and $A^X$ remain conventionally normalized. In the case of the generators, which we defined in \eqn{gens-new}, the overall normalization is not important, since it is absorbed into the new gauge couplings. In the new basis, the covariant derivative of the fermions is \begin{equation} D_\mu\psi = \partial_\mu\psi -i{e^{(\Qt)}} A^{\tilde Q}_\mu {\tilde Q}^\psi\psi - i{e^{(\X)}} A^X_\mu X^\psi\psi \ .
\label{Dpsi-new} \end{equation} We will determine the new gauge couplings ${e^{(\Qt)}}$ and ${e^{(\X)}}$, and the mixing parameters $\eta_2$ and ${\varphi}$, by requiring that \eqn{Dpsi-new} be equivalent to \eqn{Dpsi-orig} for all gauge field configurations. \subsection{$X$-charges of the particles and condensate} Flux tubes will contain magnetic $X$-flux, so to determine the Aharonov-Bohm scattering parameter $\tilde\beta$ for each particle, we need to find the $X$-charge of each particle, corresponding to $q_p$ in \eqn{AB-parameter}. This follows straightforwardly from \eqn{Dpsi-new}. We will also need to know the $X$-charge of the 2SC condensate, corresponding to $q_c$ in \eqn{AB-parameter}. Requiring that \eqn{Dpsi-new} be equivalent to \eqn{Dpsi-orig} for all gauge field configurations, and using \eqn{eta1}, we find \begin{equation} \begin{array}{rcl} \cos{\varphi} &=& \displaystyle \frac{\sqrt{3}g}{\sqrt{e^2 + 3g^2}} \\[3ex] \eta_2 &=&\displaystyle -\frac{e^2}{\sqrt{3}g^2} = -\sqrt{3}\tan^2{\varphi} \\[3ex] {e^{(\Qt)}} &=& \displaystyle\frac{\sqrt{3}eg}{\sqrt{e^2 + 3g^2}} = e\cos{\varphi} \\[3ex] {e^{(\X)}} &=&\displaystyle \frac{\sqrt{3}g^2}{\sqrt{e^2 + 3g^2}} = g\cos{\varphi}\ . \end{array} \label{couplings} \end{equation} There is a new ``rotated'' electromagnetism, with coupling ${e^{(\Qt)}}$, which is slightly smaller than the usual electromagnetic gauge coupling. The charges of the fermions under this gauge group are \begin{equation} {\tilde Q} = {\rm diag}( \half,-\half,-\half,\half,1,0,-1) \ . \label{eq:Qcharges} \end{equation} This agrees with the well-known results for the 2SC phase \cite{Alford:2007xm}. The action of the $X$-charge matrix on the 2SC condensate determines the $X$-charge $q_c$ of the condensate, in units of ${e^{(\X)}}$; from \eqn{phi2SC}, \eqn{gens-new}, and \eqn{couplings}, \begin{equation} \begin{array}{rcl} X \phi + \phi X &=& q_c \phi , \\[3ex] \hbox{where}\quad q_c &=&\displaystyle \frac{1}{\sqrt{3}}\Bigl( 1 + \frac{e^2}{3g^2} \Bigr) = \frac{1}{\sqrt{3}\cos^2{\varphi}}. \end{array} \label{qc} \end{equation} The $X$-charge matrix of the fermions is \begin{equation} \begin{array}{rcl@{}l@{\;}l} X &=& \displaystyle\frac{1}{\sqrt{3}} {\rm diag}(&\displaystyle \half+2\tan^2{\varphi}, &\displaystyle \half-\tan^2{\varphi}, \\[1ex] && &\displaystyle \half-\tan^2{\varphi}, &\displaystyle\half+2\tan^2{\varphi}, \\[1ex] && &\displaystyle -1+2\tan^2{\varphi}, &\displaystyle-1-\tan^2{\varphi}, \\[1ex] && &\displaystyle -3\tan^2{\varphi} ). \end{array} \end{equation} Dividing by $q_c$ \eqn{qc} we find the Aharonov-Bohm $\tilde\beta$-factors of the fermions, in the basis defined by \eqn{basis}, \begin{equation} \begin{array}{r@{}rrr} \tilde\beta^\psi = {\rm diag}\Bigl( &\displaystyle \frac{1}{2}+\frac{3}{2}\sin^2{\varphi}, &\displaystyle \frac{1}{2}-\frac{3}{2}\sin^2{\varphi}, \\[2ex] &\displaystyle \frac{1}{2}-\frac{3}{2}\sin^2{\varphi}, &\displaystyle \frac{1}{2}+\frac{3}{2}\sin^2{\varphi}, \\[2ex] &\displaystyle -1+3\sin^2{\varphi}, &\displaystyle -1, &\displaystyle -3\sin^2{\varphi} \Bigr) \ . 
\end{array} \end{equation} Expanding in powers of $e^2$ (since $e\ll g$), we find \begin{equation} \sin^2({\varphi}) \approx \frac{\alpha}{3\alpha_s} \label{sinmix} \end{equation} so to lowest order in $\alpha$, \begin{equation} \begin{array}{r@{}rrr} \tilde\beta^\psi = {\rm diag}\Bigl( &\displaystyle \frac{1}{2}+\frac{\alpha}{2\alpha_s}, &\displaystyle \frac{1}{2}-\frac{\alpha}{2\alpha_s}, \\[2ex] &\displaystyle \frac{1}{2}-\frac{\alpha}{2\alpha_s}, &\displaystyle \frac{1}{2}+\frac{\alpha}{2\alpha_s}, \\[2ex] &\displaystyle -1+\frac{\alpha}{\alpha_s}, &\displaystyle -1, &\displaystyle -\frac{\alpha}{\alpha_s} \Bigr) \ . \end{array} \label{ABfactor-approx} \end{equation} We conclude that the gapped quarks have $\tilde\beta$ close to $\half$, which means that they have near-maximal Aharonov-Bohm interactions with an $X$-flux tube. Among the lighter (and hence more phenomenologically relevant) fermions, the ${\tilde Q}$-neutral $bd$ has zero Aharonov-Bohm interaction with the flux tube, while the $bu$ and electron have the same Aharonov-Bohm factor \begin{equation} \sin(\pi\tilde\beta^{bu})=\sin(\pi\tilde\beta^{e}) \approx -\pi \frac{\alpha}{\alpha_s} \ . \label{ABfactor-bu} \end{equation} \section{Relaxation via scattering off flux tubes} \label{sec:relax_time} \subsection{Relaxation time calculation} In this section we compute the characteristic timescale for a perturbation from equilibrium to relax away due to scattering of the fermions off the color magnetic flux tubes. This relaxation time is a measure of the mean free time between collisions of the fermions with the flux tubes, so we will also refer to it as a collision time. Our calculation applies equally to electrons and to the unpaired blue quarks in the 2SC phase, the key difference being the $\tilde\beta$ factors in the cross-section. The Boltzmann kinetic equation for the blue-quark/electron distribution function $f(\bm p, t)$ is \begin{eqnarray}\label{eq:Boltzmann} \frac{\partial f(\bm p, t)}{\partial t} &=&\frac{2\pi N_v}{V} \int\!\!\frac{d^3p'}{(2\pi)^3} \Biggl\{ W(\bm p;\bm p')f(\bm p',t)\left[1-f(\bm p,t)\right] \nonumber\\ &-&W(\bm p';\bm p)f(\bm p,t)\left[1-f(\bm p',t)\right] \Biggr\}\delta(\varepsilon(\bm p)-\varepsilon(\bm p') ) , \end{eqnarray} where $N_v$ is the number of flux tubes, $V$ is the volume, and $W(\bm p';\bm p)$ is the transition probability between the states described by momenta $\bm p$ and $\bm p'$. Time-reversal symmetry implies $W(\bm p;\bm p')=W(\bm p';\bm p)$. In equilibrium the fermion distribution function is given by the Fermi-Dirac distribution function \begin{equation}\label{eq:Fermi-Dirac} f_0(\bm p)= \frac{1}{1+\exp[(p-\mu_i)/T]} \end{equation} where $T$ is the temperature and $\mu_i$ is the chemical potential of blue quarks ($i=b$) and electrons $(i=e)$. To solve the Boltzmann equation we shall apply the variational method, where the perturbations from equilibrium are described by variational trial functions whose functional form is dictated by the form of the applied perturbation~\cite{Flowers:1976,Flowers:1979}. The resulting transport coefficients are lower bounds on their exact values. The number of adjustable trial functions, which are used to maximize the entropy production via scattering, could be large. In the following we shall use one linear function $\phi$, in which case there is no need for variation, since the variational parameter cancels out. It should be kept in mind that the resulting transport coefficients are still lower bounds on their exact values.
For small perturbations from equilibrium the Boltzmann equation can be linearized by writing $f(\bm p,t) = f_0(\bm p)+\delta f(\bm p,t),$ where the (small) perturbation from the Fermi-Dirac form (\ref{eq:Fermi-Dirac}) is \begin{equation}\label{eq:perturb} \delta f(\bm p,t) = -\frac{df_0(\bm p)}{d\varepsilon(\bm p)}\phi(\bm p,t), \end{equation} where $\phi(\bm p,t)$ is the trial function. The linearized Boltzmann equation then reads \begin{eqnarray}\label{eq:Boltzmann_linear} - \frac{\partial\phi(\bm p,t)}{\partial t}f_0(\bm p)\left[1-f_0(\bm p)\right] &=&\frac{2\pi N_v}{V} \int\!\!\frac{d^3p'}{(2\pi)^3}\left[\phi(\bm p,t)-\phi(\bm p',t)\right] \nonumber\\ &\times&W(\bm p;\bm p')f_0(\bm p')\left[1-f_0(\bm p)\right] \delta(\varepsilon(\bm p)-\varepsilon(\bm p')). \end{eqnarray} To obtain this form of the kinetic equation we used the detailed balance conditions $ f_0(\bm p')\left[1-f_0(\bm p)\right] - f_0(\bm p)\left[1-f_0(\bm p')\right] = 0, $ and $ {df_0(\bm p)}/{d\varepsilon(\bm p)} = {df_0(\bm p')}/{d\varepsilon(\bm p')} . $ It is convenient to work with the Laplace transformed trial function \begin{equation} \phi(\bm p,t) = \int ds e^{-st}\phi(\bm p,s). \end{equation} Upon Laplace transforming Eq.~(\ref{eq:Boltzmann_linear}) we find \begin{eqnarray}\label{eq:motion} s \phi(\bm p,s)f_0(\bm p)\left[1-f_0(\bm p)\right] &=&\frac{2\pi N_v}{V} \int\!\!\frac{d^3p'}{(2\pi)^3} \left[\phi(\bm p,s)-\phi(\bm p',s)\right]\nonumber\\ &\times&W(\bm p;\bm p') f_0(\bm p')\left[1-f_0(\bm p)\right] \delta(\varepsilon(\bm p)-\varepsilon(\bm p')). \end{eqnarray} To define a characteristic relaxation rate we assume that the trial function can be written as \begin{equation} \phi(\bm p,s) = \phi(\bm p)\delta(s-s_0), \end{equation} in which case Eq.~(\ref{eq:motion}) becomes \begin{eqnarray}\label{eq:motion2} s_0 \phi(\bm p)f_0(\bm p)\left[1-f_0(\bm p)\right] &=&\frac{2\pi N_v}{V} \int\!\!\frac{d^3p'}{(2\pi)^3} \left[\phi(\bm p)-\phi(\bm p')\right]\nonumber\\ &\times&W(\bm p;\bm p') f_0(\bm p')\left[1-f_0(\bm p)\right] \delta(\varepsilon(\bm p)-\varepsilon(\bm p')), \end{eqnarray} where the perturbation functions are now independent of $s$. We can identify $s_0$ with the relaxation rate ({\it i.e.}\, the inverse of the relaxation time) by comparing the computed kinetic coefficients with standard expressions for transport coefficients, {\it e.g.}\,, the electrical conductivity with the Drude formula. To formulate the variational principle~\cite{Ziman} we write Eq.~(\ref{eq:motion2}) in the compact form \begin{equation}\label{eq:X} X(\bm p) = \int \left[\phi(\bm p)-\phi(\bm p')\right] P(\bm p,\bm p')d^3p', \end{equation} where $X(\bm p)$ stands for the left-hand side of Eq.~(\ref{eq:motion2}); the scattering operator $P(\bm p,\bm p')$ is easily read off from the kernel on the right-hand side of Eq.~(\ref{eq:motion2}). Since the factor $f_0(\bm p) \left[1-f_0(\bm p)\right]$ and the transition probability are positive definite, the operator $P(\bm p,\bm p')$ is positive definite. Furthermore, it is linear and self-adjoint (symmetric). Following Ref.~\cite{Ziman} we define an inner product \begin{equation} \langle\phi , \psi \rangle \equiv \int \phi(\bm p)\psi(\bm p) d\bm p, \end{equation} in terms of which \begin{equation}\label{eq:12} \langle\phi , P\psi \rangle \equiv \frac{1}{2}\int d\bm p\int d\bm p' [\phi(\bm p) -\phi(\bm p')]P(\bm p,\bm p')[\psi(\bm p)-\psi(\bm p')].
\end{equation} The variational principle states that the expression \begin{equation} \label{var_principle} \langle\phi , X\rangle = \langle\phi , P\phi\rangle \end{equation} attains its maximum for the {\em exact} value $\phi_{\rm ex}$, which satisfies Eq.~(\ref{eq:X}); for any other trial function $\phi$ that satisfies Eq.~(\ref{var_principle}), $\langle\phi , P\phi\rangle\le \langle\phi_{\rm ex} , P\phi_{\rm ex}\rangle$. Explicitly, Eq.~(\ref{var_principle}) reads \begin{eqnarray}\label{eq:s1} s_0 \int\frac{d^3p}{(2\pi)^3} \phi(\bm p,s)^2f_0(\bm p)\left[1-f_0(\bm p)\right] &=&\frac{2\pi N_v}{ V} \int\!\!\frac{d^3p}{(2\pi)^3} \int\!\!\frac{d^3p'}{(2\pi)^3} \frac{1}{2} \left[\phi(\bm p,s)-\phi(\bm p',s)\right]^2\nonumber\\ &&W(\bm p;\bm p') f_0(\bm p')\left[1-f_0(\bm p)\right] \delta(\varepsilon(\bm p)-\varepsilon(\bm p')). \end{eqnarray} It is also straightforward to check that the variation of Eq.~(\ref{eq:s1}) leads us back to the ``equation of motion'' (\ref{eq:motion2}). From Eq.~(\ref{eq:s1}) we obtain the variational relaxation rate \begin{eqnarray}\label{eq:s1bis} s_0 &= &\frac{2\pi N_v}{ V{\cal D}} \int\!\!\frac{d^3p}{(2\pi)^3} \int\!\!\frac{d^3p'}{(2\pi)^3} \frac{1}{2} \left[\phi(\bm p)-\phi(\bm p')\right]^2W(\bm p;\bm p') f_0(\bm p')\left[1-f_0(\bm p)\right] \delta(\varepsilon(\bm p)-\varepsilon(\bm p')), \nonumber\\ \end{eqnarray} where \begin{equation} \label{eq:calD} {\cal D} = \int\frac{d^3p}{(2\pi)^3} \phi(\bm p)^2f_0(\bm p)\left[1-f_0(\bm p)\right]. \end{equation} The exact relaxation rate satisfies $s \ge s_0$. We specify the form of the trial function appropriate to the problem at hand, which is the relaxation of a uniform blue-quark/electron velocity $\bm v$ on a flux tube, as \begin{equation} \label{eq:trial2} \phi(\bm p) = \bm p \cdot \bm v ~C(p^2), \end{equation} where $C(p^2)$ is the scalar part of the trial function. In the following we will adopt the simple choice $C(p^2)=1$. The differential transition probability can be obtained from the Aharonov-Bohm scattering cross-section, Eq.~(\ref{AB-scattering}), and is given by (for details see Appendix~\ref{app:corss_section}) \begin{equation}\label{eq:diff_probability} dW = 2\pi\delta(\varepsilon'-\varepsilon) 2\pi\delta(p_z-p_z') \frac{4 L\sin^2(\pi\tilde\beta)}{\sin^2(\phi/2)} \frac{1}{2\varepsilon V} \frac{d^3p'}{(2\pi)^32\varepsilon'}, \end{equation} where the initial and final state momenta and energies, $p$ and $\varepsilon$, are unprimed and primed respectively (in this section we use $\phi$ for the scattering angle, as opposed to $\vartheta$ in Sec.~\ref{sec:scattering}; it should not be confused with the trial function $\phi(\bm p)$). Here we have used cylindrical coordinates coaxial with the flux tube to write $d^3p = p_{\perp}dp_{\perp} d \phi d p_z$. Combining Eqs.~(\ref{eq:s1bis}) and (\ref{eq:diff_probability}) and carrying out the phase space integrals (the details are given in Appendix~\ref{app:phase_space}) we obtain, to lowest order in the low-temperature expansion, \begin{eqnarray}\label{eq:s5} s_0&=& \frac{p_{Fi}^3 v^2n_v T}{6\pi^2 {\cal D}} ~\sin^2(\pi\tilde\beta), \end{eqnarray} where $p_{Fi}$ is the blue-quark/electron Fermi momentum and $n_v$ is the density of flux tubes. Eq.~(\ref{eq:calD}) with the trial function (\ref{eq:trial2}) can be computed in the low-temperature limit by approximating ${df_0(\bm p)}/{d\varepsilon(\bm p)} \simeq -\delta(\varepsilon(\bm p)-\mu_i)$ to obtain ${\cal D} = {p_{Fi}^4 v^2T}/{6 \pi^2}$, where $v$ is the fermion fluid velocity \eqn{eq:trial2}.
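As a sanity check on the low-temperature limit, the normalization integral \eqn{eq:calD} can also be evaluated numerically; the sketch below is illustrative, with $v=1$ and representative values $\mu_i=400$\,MeV, $T=0.01$\,MeV, and reproduces ${\cal D} = p_{Fi}^4 v^2 T/(6\pi^2)$ to the expected ${\cal O}(T^2/\mu_i^2)$ accuracy:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

T, mu, v = 0.01, 400.0, 1.0        # MeV; degenerate limit T << mu
f0 = lambda p: 1.0/(1.0 + np.exp(np.clip((p - mu)/T, -60, 60)))

# D = int d^3p/(2pi)^3 (p.v)^2 f0(1-f0), with trial function phi = p.v
integrand = lambda p: p**4*f0(p)*(1.0 - f0(p))/(6*np.pi**2)
D_num, _ = quad(integrand, mu - 50*T, mu + 50*T)
print(D_num*v**2)                  # numerical value
print(mu**4*v**2*T/(6*np.pi**2))   # analytic low-T result quoted above
\end{verbatim}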
The relaxation rate for particles of species $i$ scattering off flux tubes of area density $n_v$ is then given by \begin{equation} \tau^{-1}_{if} \equiv s_0= \frac{n_v}{p_{Fi}} \sin^2(\pi\tilde\beta_i) \ . \label{tauinv-flux} \end{equation} It is easy to understand the final result (\ref{tauinv-flux}). It is of the standard form for classical gases $\tau^{-1}=c n \sigma$, where $c=1$ is the speed of the particles, $n=n_v$ is the density of scattering centers, and $\sigma\propto \sin^2(\pi\tilde\beta)/p_F$ is the cross section for Aharonov-Bohm scattering. Eq.~\eqn{tauinv-flux} is relevant for thermal relaxation of the gapless fermion species in the 2SC phase. One of these, the blue down quark, has no A-B interaction with the flux tubes ($\tilde\beta=-1$, an integer, so by \eqn{AB-scattering} the cross-section vanishes). The other two, the electron and blue up quark, have identical A-B factors \eqn{ABfactor-bu} although their Fermi momenta are different. \subsection{Comparison with Coulomb scattering} \label{sec:Coulomb} To find out whether scattering off flux tubes is likely to be an important source of relaxation, and hence a significant contributor to transport properties, it is useful to compare Eq.~\eqn{tauinv-flux} with the collision time for screened Coulomb scattering via exchange of ${\tilde Q}$ photons. The 2SC phase is a ${\tilde Q}$-conductor, with two species of gapless charged fermions: the $bu$ quarks (with ${\tilde Q}$-charge +1 and chemical potential $\approx\mu$) and the electrons (with ${\tilde Q}$-charge $-1$ and chemical potential $\approx\mu_e$). There may also be muons, but their Fermi momentum will be much smaller. As mentioned in the introduction, the red and green quarks will be confined to bound states whose mass is expected to be of order 10\,MeV \cite{Rischke:2000cn}, so they play no role in transport at neutron star temperatures. Since $\mu>\mu_e$, the $bu$ quarks are more numerous than the electrons and have a larger phase space near their Fermi surface, so they will make the largest contribution to the collision time. The Coulomb collision time depends on the in-medium photon spectrum, which will be affected by Debye screening and Landau damping arising from the presence of gapless charged excitations, dominantly the $bu$ quarks because of their larger phase space. A simple estimate can be obtained by assuming that the dispersion relation is dominated by a plasmon pole. The plasma frequency $\omega_p$ is given by \begin{equation} \omega_p^2 = \frac{{\tilde\alpha} n_q}{\mu_q} = \frac{4}{3\pi^2}{\tilde\alpha}\mu_q^2 \end{equation} where ${\tilde\alpha}={e^{(\Qt)}}^2/(4\pi)$ is the fine structure constant for the ``rotated'' ${\tilde Q}$ electromagnetism \eqn{couplings}. The collision frequency is given by (see Eqs.~(10), (12), and (18) of Ref.~\cite{Shternin:2006uq}), \begin{equation} \tau^{-1}_{qq} = \frac{8 \zeta(3)\mu_{q}^2}{\pi^3\omega_p^2} {\tilde\alpha}^2\, T = \frac{6\zeta(3)}{\pi^2}{\tilde\alpha} \, T \label{eq:qq_relax} \end{equation} where $\zeta(3) = 1.202$. This result is valid for $T\ll\omega_p$, which is the relevant regime for neutron stars since $\mu_q$ is in the $400\,{\rm MeV}$ range. Eq.~\eqn{eq:qq_relax} is analogous to Ref.~\cite{Heiselberg:1993cr}'s Eq.~(62) for the thermal conduction timescale, with electromagnetic interactions (so their $\alpha_s$ is replaced by ${\tilde\alpha}$) and a different number of quark species. The quark-quark Coulomb collision frequency \eqn{eq:qq_relax} is proportional to temperature $T$ whereas the particle-flux-tube collision frequency is independent of temperature.
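It is convenient to evaluate the two rates numerically to see where they cross; the Python sketch below (an illustration only, assuming $\alpha_s\approx 1$, ${\tilde\alpha}\approx\alpha$, $p_F=\mu_q=400$\,MeV, and the maximum flux-tube density \eqn{eq:flux_number} for $B=10^{14}$\,G) anticipates the crossover temperature derived next:
\begin{verbatim}
import math

alpha, alpha_s, zeta3 = 1/137.036, 1.0, 1.20206
e = math.sqrt(4*math.pi*alpha)
mu_q = 400.0                        # MeV (also p_F of the bu quarks)
K_per_MeV = 1.16045e10

n_v = 2.0/(6*(math.pi/e))           # Eq. (eq:flux_number), B ~ 10^14 G, MeV^2
sin2 = (math.pi*alpha/alpha_s)**2   # sin^2(pi beta_bu), Eq. (ABfactor-bu)
rate_flux = n_v*sin2/mu_q           # Eq. (tauinv-flux), T-independent

rate_qq = 6*zeta3/math.pi**2*alpha  # coefficient of T in Eq. (eq:qq_relax)
T_cross = rate_flux/rate_qq         # temperature where the two rates match
print(T_cross*K_per_MeV)            # ~9e4 K, cf. Eq. (T-flux_domination)
\end{verbatim}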
We can therefore define a temperature $T_f$ below which flux tubes dominate the relaxation of deviations from thermal equilibrium. From \eqn{eq:qq_relax} and \eqn{tauinv-flux} we find \begin{equation} T_f = \frac{\pi^2}{6\zeta(3)} \frac{\sin^2(\pi\tilde\beta_{bu})}{{\tilde\alpha}} \frac{n_v}{\mu_q} \ . \end{equation} To make a numerical estimate we assume that the 2SC core contains the maximum flux tube density given by Eq.~(\ref{eq:flux_number}), and that $\alpha_s\approx 1$. Using \eqn{ABfactor-bu} and the fact that ${\tilde\alpha}\approx\alpha$, we find \begin{equation} T_f \approx (9\times 10^4\,K) \Bigl(\frac{B}{10^{14}\,{\rm G}}\Bigr) \Bigl( \frac{400\,{\rm MeV}}{\mu_q} \Bigr) \ . \label{T-flux_domination} \end{equation} We conclude that for reasonable values of the magnetic field, only at very low temperatures is Aharonov-Bohm scattering off flux tubes likely to be an important source of {\em thermal} relaxation. However, it is important to note that the thermal relaxation timescale is not the only one that is relevant to transport. There is also the viscous relaxation rate (Ref.~\cite{Heiselberg:1993cr}, Eq.~(51)) and the momentum relaxation rate (Ref.~\cite{Heiselberg:1993cr}, Eq.~(32)) both of which have a much stronger ($\propto T^{5/3}$) suppression at low temperatures. We defer a full discussion of transport in the 2SC phase to later work. \section{Forces on the flux tubes} \label{sec:forces} We argued in Sec.~\ref{sec:fluxtube} that even if the magnetic field in the core of the star is below the lower critical field, color magnetic flux tubes will still be produced in the transition to the 2SC phase. In this section we study the forces on those flux tubes, and estimate the timescale for their expulsion from the 2SC core. For this initial estimate we take into account only the forces on the flux tubes within the 2SC core, or at its boundary. Depending on the nature of the material surrounding the core there may be additional forces, and these may modify the expulsion time in a way that would have to be calculated on a case-by-case basis. The velocity of the flux tube is $\bm v_L$, the velocity of the normal fluid is $\bm v_N$, and the velocity of the 2SC condensate is $\bm v_S$. The forces we consider are mutual friction (``mf''), the non-dissipative (lifting) Magnus-Lorentz force (``ML''), the Iordanskii force (``Iord''), forces arising from zero modes (``zm''), and boundary forces (``bf'') at the quark-hadronic boundary. We assume that local magneto-hydrostatic-gravitational equilibrium is established quickly after the transition to the 2SC phase, so there is no additional buoyancy force \cite{1985SvAL...11...80M,2009MNRAS.397.1027J}. We note that there may be additional forces due to density dependence of the 2SC pairing gap \cite{Hsu:1999rf}, but we do not include these since there is as yet no reliable estimate of the density dependence. The equation of motion of a flux tube then has the form \begin{equation}\label{eq:dynamics} m_V \frac{d\bm v_L}{dt} = \bm f_{\rm mf} + \bm f_{\rm ML} + \bm f_{\rm Iord} + \bm f_{\rm zm} + \bm f_{\rm bf} \ , \end{equation} where $m_V$ is the effective mass of a flux tube per unit length and each $\bm f$ is a force per unit length. The boundary forces tend to pull the flux tube in a radial direction, expelling it from the 2SC core. This is resisted by the combination of the other forces. In our calculations we will assume that the flux tubes are straight. A bent flux tube will feel an additional restoring force determined by its tension.
\subsection{The background ${\tilde Q}$ magnetic field} \label{sec:Qt-field} In our calculations we will neglect the effect of the ${\tilde Q}$ magnetic field $B_{{\tilde Q}}$ that penetrates the 2SC core. Because of this field, ${\tilde Q}$-charged particles, including the $bu$ quarks and electrons, will feel a Lorentz force. This will have a significant effect on the behavior of the normal fluid of quarks and electrons when the cyclotron frequency $\omega_c$ becomes larger than the inverse of the characteristic time for equilibration, which, as we argued in Sec.~\ref{sec:Coulomb}, is the quark-quark collision time $\tau_{qq}$ \eqn{eq:qq_relax}. The dominant component of the fluid is the $bu$ quarks, with ${\tilde Q}$-charge $e^{\tilde Q}\approx e$, and $B_{{\tilde Q}}\approx B$, so \begin{equation} \omega_c = \frac{eB}{p_F} \ , \end{equation} and we can neglect the effects of the magnetic field on transport when $\omega_c\tau_{qq} \ll 1$, where \begin{equation} \omega_c\tau_{qq} = \frac{2 \pi^3}{3 \zeta(3)} \frac{1}{\sqrt{4\pi\alpha}} \frac{B}{\mu_q T} = 0.32 \Bigl(\frac{B}{10^{12}\,{\rm G}}\Bigr) \Bigl(\frac{10^8\,{\rm K}}{T}\Bigr) \Bigl(\frac{400\,{\rm MeV}}{\mu_q}\Bigr) \ . \label{omegac*tau} \end{equation} We conclude that only for high magnetic fields (above $10^{12}$\,G) or low temperatures (below $10^8$\,K) might the magnetic field affect thermal relaxation. We defer a discussion of this regime to future work. \subsection{Mutual friction} Mutual friction is a frictional force on a flux tube arising from its Aharonov-Bohm interaction with the normal fluid of gapless particles through which it is moving. Consider a vortex moving relative to the normal fluid with velocity $\bm u=\bm v_L-\bm v_N$. In the relaxation time approximation \begin{equation} \bm f_{\rm mf} = \frac{\tau^{-1}_{if}}{n_v} \int\frac{d^3p}{(2\pi)^3} \, \bm p\, f_0(p,\bm u), \end{equation} where $\tau^{-1}_{if}$ is the collision rate between fermions of species $i$ and flux tubes \eqn{tauinv-flux}. We will assume that the blue up quarks dominate the friction. This is reasonable because the blue down quarks have no $X$ charge and hence no Aharonov-Bohm interaction with the flux tube, and the electron Fermi momentum is smaller than that of the blue quarks. The equilibrium Fermi-Dirac thermal distribution of the quarks is \begin{equation} f_0(p,\bm u) = \{ \exp[(\varepsilon-\mu_i + \bm p\cdot \bm u)/T]+1 \}^{-1} \ , \end{equation} where the $\bm p\cdot \bm u$ term is a correction due to the motion of the vortex relative to the thermal bath with velocity $\bm u$. We compute the force to linear order in $\bm u$; the leading contribution arises at first order in the velocity, \begin{equation} \bm f_{\rm mf} = \frac{\tau^{-1}_{if}}{n_v} \int\frac{d^3p}{(2\pi)^3} \bm p (\bm p\cdot \bm u) \frac{\partial f_0(\varepsilon)}{\partial \varepsilon} = \eta\bm u. \label{eta-defn} \end{equation} Carrying out the integral and using \eqn{tauinv-flux} we obtain the mutual friction drag coefficient \begin{equation} \eta = \frac{p_{Fi}n_i \tau^{-1}_{if}}{n_v} = n_i \sin^2(\pi\tilde\beta_i) \label{eq:eta} \end{equation} where $n_i$ is the fermion density and $\tilde\beta_i$ is their Aharonov-Bohm factor \eqn{ABfactor-approx}. As one would expect, the friction coefficient is independent of the magnetic field (i.e.~the density of flux tubes). It is proportional to the fermion density, so, as noted above, the $bu$ quark contribution will dominate the electron contribution.
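The two numerical estimates of this subsection and of Sec.~\ref{sec:Qt-field} are easily reproduced; in the sketch below the $bu$ number density $n_{bu}=\mu_q^3/(3\pi^2)$ (two spin states of a single color-flavor combination) is our own assumption, not a value quoted in the text:
\begin{verbatim}
import math

alpha, alpha_s, zeta3 = 1/137.036, 1.0, 1.20206
e = math.sqrt(4*math.pi*alpha)
mu_q = 400.0                        # MeV
T = 1e8/1.16045e10                  # 10^8 K in MeV
B = 0.02                            # MeV^2, i.e. ~10^12 G

# Eq. (omegac*tau): is the background Qt field negligible?
print(2*math.pi**3/(3*zeta3)*B/(e*mu_q*T))   # ~0.33, cf. 0.32 in the text

# Eq. (eq:eta): mutual-friction drag coefficient per unit tube length
n_bu = mu_q**3/(3*math.pi**2)       # assumed bu density
sinb = math.pi*alpha/alpha_s        # |sin(pi beta_bu)|, Eq. (ABfactor-bu)
print(n_bu*sinb**2)                 # eta ~ alpha^2 mu_q^3 ~ 1.1e3 MeV^3
\end{verbatim}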
\subsection{Boundary forces} \begin{figure} \includegraphics[scale=0.6]{boundary_force.eps} \caption{ A straight flux tube of length $l$ passing through a 2SC neutron star core of radius $R$, at distance $r$ from the center of the star. There is a boundary force where it reaches the edge of the 2SC core. } \label{fig:boundary} \end{figure} Next we wish to calculate the force exerted on the flux tube at the point where it reaches the interface between the 2SC quark matter core and the nuclear mantle of a neutron star (Fig.~\ref{fig:boundary}). When the $X$-magnetic flux tube reaches the edge of the 2SC core it combines with the ${\tilde Q}$ magnetic flux in the core to re-constitute the ordinary magnetic field from which it was originally formed. The form in which the flux continues through the nuclear mantle, and hence the boundary energy, may therefore be influenced by the state of the nuclear matter. In this analysis we will include only the forces arising from the contribution due to the 2SC core itself. We briefly discuss other contributions, but a proper analysis including them would have to be done in the context of a specific model of the whole neutron star and the properties of all regions within it. The outward force per unit length on the flux tube is (see Fig.~\ref{fig:boundary}) \begin{equation} f_b = \frac{1}{l}\frac{dE}{dr} = \frac{r}{R^2-r^2}\varepsilon_X \ . \label{fb-full} \end{equation} Here $E$ should be the total energy of magnetic flux inside and outside the core, but we neglect the outside contribution; $\varepsilon_X$ is the energy per unit length of the $X$ flux tube. Then from \eqn{X-tension}, \begin{equation} f_b \approx \frac{r}{R^2-r^2} \frac{\mu_q^2}{3\pi}\ln\ka_{\!X} \ . \label{fbmax} \end{equation} Taking into account the energy of the magnetic field outside the core will reduce the right hand side of \eqn{fb-full}, and weaken the outward force on the flux tube. We now discuss the magnitude of such terms in various cases. If the nuclear mantle is a type-II superconductor, the magnetic field penetrates the nuclear mantle in the form of Abrikosov flux tubes (dashed line in Fig.~\ref{fig:boundary}). From \eqn{eq:flux_number} we know that each $X$ flux tube will spawn 6 Abrikosov flux tubes in the nuclear matter, each of which has energy per unit length \begin{equation} \varepsilon_{\rm nuc} = \frac{\Phi_0^2}{4\pi\lambda_{\rm nuc}^2} \ln \kappa_{\rm nuc} \ . \end{equation} $\Phi_0=\pi/e\approx 10.37$, so if we assume that the logarithmic factor is of order 1 then for $\lambda_{\rm nuc}$ in the 50 to 100~fm range, $\varepsilon_{\rm nuc}$ is in the 0.2 to 0.7 MeV/fm range. This means that even when multiplied by a factor of 6, $\varepsilon_{\rm nuc}$ is small in comparison with the tension of the $X$ flux tube, which is greater than 10\,MeV/fm \eqn{X-tension}, so \eqn{fbmax} is still a good estimate of the boundary force. Of course, in a type-II nuclear mantle there may be other forces; for example, if it is also a superfluid, there may be entanglement of Abrikosov flux tubes with superfluid vortices, but we neglect those here because they depend on details of the nuclear mantle. If there is no Cooper pairing of the protons then the nuclear matter is a conductor. In this case the energy gained from shortening the $X$ flux tube is counteracted by the field energy of the magnetic field it connects to in the nuclear matter mantle. The criterion for the tension of the flux tube to dominate is the same as the criterion for the magnetic field to be below its lower critical value.
Since neutron star magnetic fields are well below the lower critical field for the 2SC phase, we can assume that the 2SC flux tube tension will dominate and we can use \eqn{fbmax} again. The only complication is that conducting nuclear matter supports eddy currents which will resist any change in the magnetic field in the nuclear mantle. This may make it much harder to move the $X$-flux tubes in the 2SC core. Again, we do not attempt to include such forces that depend on details of the constitution of the nuclear mantle. If the nuclear mantle were a type-I proton superconductor \cite{Sedrakian:2004yq,Alford:2005ku,Alford:2007np, Charbonneau:2007db} then $X$ flux tubes in the 2SC core would connect to non-superconducting domains in the nuclear mantle \cite{Sedrakian:2004yq,Charbonneau:2007db}. In this case we cannot compute the boundary force because the domain structure of the type-I proton superconductor is not known; the possible (layered, cylindrical, etc.) structures in type-I superconductors essentially depend on the history of the nucleation of the superconducting phase. \subsection{Magnus-Lorentz force} \label{sec:ML} The Magnus-Lorentz force is a non-dissipative force, directed orthogonally to the flux tube velocity, that arises from the superposition of the winding ``flow'' of the 2SC order parameter around the flux tube and the background flow of the charged superfluid of fermions \cite{PhysRev.140.A1197,1991ApJ...380..530M,PhysRevB.55.485}. (There is controversy about this in the literature; for example, Jones \cite{1991MNRAS.253..279J,2009MNRAS.397.1027J} has suggested that this is cancelled by another contribution from ungapped fermions. Pending a definitive resolution of this disagreement, we will use the standard form of the Magnus-Lorentz force.) The Magnus-Lorentz force per unit length on a flux tube is \begin{equation} \label{eq:MLforce} \bm f_{\rm ML} = -(\bm j_X \times \hat n \Phi_X ) \ , \end{equation} where $\Phi_X$ is the $X$-flux through the flux tube \eqn{X-quantum}, $\hat n$ is a unit vector pointing along the flux tube, and $\bm j_X$ is the current of $X$ charge seen by the flux tube, arising from the $X$ charge density $\rho_X$ of the 2SC condensate, moving relative to the flux tube \begin{equation} \bm j_X = \rho_X (\bm v_S-\bm v_L) \ . \end{equation} We can write $\rho_X= q_{\rm pair} n_s/2$ where $n_s$ is the density of quarks in the condensate. Since there are 4 quark species in the condensate, and at low temperature all fermions are part of the condensate, \begin{equation} \begin{array}{rcl} \bm f_{\rm ML} &=&\displaystyle -\rho (\bm v_S-\bm v_L) \times \hat n \ , \\[2ex] \rho &\equiv&\displaystyle \rho_X \Phi_X = \pi n_s = \frac{4\mu^3}{3\pi} \ . \end{array} \label{ML-final} \end{equation} Note that the charge of the Cooper pairs cancels in this expression. \subsection{Iordanskii force} \label{sec:Iordanskii} The mutual friction force described above is the force on the flux tube in the longitudinal direction (i.e.~parallel to its velocity relative to the normal fluid of unpaired quarks), due to Aharonov-Bohm scattering of the unpaired quarks. The Iordanskii force is the transverse component of that same force \cite{PhysRevB.55.485}, \begin{equation} \bm f_{\rm Iord} = D'\, (\bm v_L-\bm v_N) \times \hat n \ .
\label{Iordanskii} \end{equation} The transverse Aharonov-Bohm scattering cross-section for $bu$ quarks off the flux tube is $\sigma_\perp = -k^{-1}\sin(2\pi \tilde\beta^{bu})$ (Ref.~\cite{PhysRevB.55.485}, Eq.~(64)) and, as in the case of the longitudinal Aharonov-Bohm force, one expects the force per unit length to be proportional to the fermion density, so we expect $D'\approx \sin(2\pi \tilde\beta^{bu}) \mu_q^3 \approx \alpha\mu_q^3$ (see Ref.~\cite{PhysRevB.55.485}, after Eq.~(69)). This rough estimate is sufficient to argue that the Iordanskii force can be neglected. Basically, the Aharonov-Bohm forces are suppressed by powers of $\alpha$ arising from the Aharonov-Bohm factor of the $bu$ quarks \eqn{ABfactor-bu}. In the case of the Iordanskii (transverse) component, we will see that this makes it subleading relative to the Magnus-Lorentz force, which also acts perpendicular to the flux tube's velocity. In the case of the longitudinal component, there is no larger force parallel to the velocity, so the Aharonov-Bohm force is the dominant contribution to mutual friction. \subsection{Zero-mode force} \label{sec:zero-mode} The frictional force on a flux tube due to scattering of zero modes localized inside the flux tube off gapless fermions in the bulk \cite{PhysRevLett.77.4687,Kopnin:2002} has been calculated for proton flux tubes in nuclear matter \cite{2009MNRAS.397.1027J}. At low temperatures, we expect the frictional force on a 2SC flux tube to be \begin{equation} \bm f_{\parallel} = - \frac{C}{\omega_0\tau_c} (\bm v_L-\bm v_N) \ , \label{zm-force} \end{equation} where, generalizing from nonrelativistic protons to relativistic quarks, \begin{eqnarray} C &=& \pi n_q \tanh(\Delta/2T) \sim \mu_q^3 \ , \label{Cvalue} \\ \omega_0 &\sim& \Delta^2/\mu_q \label{omega0} \ , \\ \tau_c &\sim& \mu_q^{2/3}T^{-5/3} \label{tau-c} \ . \end{eqnarray} Eq.~\eqn{Cvalue} follows from Ref.~\cite{2009MNRAS.397.1027J} Eq.~(7), and the fact that the 2SC pairing gap $\Delta$ is expected to be much bigger than typical neutron star temperatures. Eq.~\eqn{omega0} follows from Ref.~\cite{2009MNRAS.397.1027J} Eq.~(1), assuming, following Ref.~\cite{2009MNRAS.397.1027J}, that the typical transverse momentum of the population of zero modes is of the same order as the Fermi momentum of the quarks. Eq.~\eqn{tau-c} is obtained by assuming, as in Ref.~\cite{2009MNRAS.397.1027J}, that scattering involving the zero modes has the same relaxation time as quark-quark scattering in a non-superconducting medium (i.e.~as if the flux tube core were infinitely large). We can then use the continuum quark-quark momentum relaxation time $\tau_s$ from gluon exchange in a cold quark-gluon plasma (Ref.~\cite{Heiselberg:1993cr}, Eq.~(28)) as a crude estimate of the relaxation time $\tau_c$ for momentum transfer between bulk gapless quarks and zero modes inside the flux tube. Comparing \eqn{zm-force} with \eqn{eta-defn} and \eqn{ABfactor-bu} we see that the ratio of the zero mode force to the mutual friction force is $f_{\rm zm}/f_{\rm mf} \sim (\omega_0\tau_c \pi^2\alpha^2)^{-1}$, assuming $\alpha_s\sim 1$. Using the estimates given above, \begin{equation} \frac{f_{\rm zm}}{f_{\rm mf}} \sim 0.003 \, \Bigl(\frac{\mu_q}{400~{\rm MeV}}\Bigr)^{\!1/3} \Bigl(\frac{50~{\rm MeV}}{\Delta}\Bigr)^{\!2} \Bigl(\frac{T}{0.01~{\rm MeV}}\Bigr)^{\!5/3} \ . \label{zm-ratio} \end{equation} We conclude that the zero mode force is likely to be negligible relative to mutual friction.
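The order-of-magnitude estimate \eqn{zm-ratio} follows directly from Eqs.~\eqn{omega0} and \eqn{tau-c}; a minimal sketch, using the reference values quoted above and $\alpha_s\approx 1$:
\begin{verbatim}
import math

alpha = 1/137.036                   # alpha_s ~ 1 assumed
mu_q, Delta, T = 400.0, 50.0, 0.01  # MeV
omega0 = Delta**2/mu_q              # Eq. (omega0)
tau_c = mu_q**(2/3)*T**(-5/3)       # Eq. (tau-c), in MeV^-1
ratio = 1.0/(omega0*tau_c*(math.pi*alpha)**2)
print(ratio)                        # ~0.003, cf. Eq. (zm-ratio)
\end{verbatim}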
\subsection{Timescale for expulsion of flux} We can now estimate the time scale for the expulsion of the $X$ magnetic field flux tubes from the 2SC core. As we noted above, there will be an outward force on the flux tubes at the point where they reach the nuclear mantle. The maximum force per unit length is given by \eqn{fbmax}, in which the energy costs of the magnetic field in the nuclear mantle have been neglected. The rate of outward movement of the flux tubes is given by balancing that force against frictional or pinning forces. There may be such forces arising from the nuclear matter, but we ignore them and only include the Aharonov-Bohm (mutual friction and Iordanskii) and Magnus-Lorentz forces in the quark matter. Using \eqn{eq:dynamics}, \eqn{eta-defn}, \eqn{Iordanskii}, and \eqn{ML-final}, we can see that the steady-state value of the vortex velocity $\bm v_L$ is given by the force balance equation, \begin{equation} \rho_{\rm ML}(\bm v_S-\bm v_L)\times \hat n +D'(\bm v_L-\bm v_N)\times \hat n +\eta (\bm v_L-\bm v_N) +\bm f_{\rm bf}(r)=0 \ , \end{equation} where $\bm f_{\rm bf}(r)$ is given by \eqn{fbmax}. We work in the reference frame that is uniformly rotating with the normal component (blue quarks and electrons) and we neglect possible small differential rotation between the superfluid and the normal fluid, so $\bm v_N=\bm v_S=0$ in this frame. The Iordanskii and Magnus-Lorentz forces then add to give a single transverse force \begin{equation} -\rho\bm v_L\times \hat n +\eta \bm v_L + \bm f_{\rm bf}(r)=0 \ , \label{balance} \end{equation} where $\rho=\rho_{\rm ML}-D'$. From \eqn{ML-final} and Sec.~\ref{sec:Iordanskii} we see that $\rho_{\rm ML}\sim\mu_q^3$ and $D'\sim \alpha\mu_q^3$, so we can neglect the Iordanskii force and assume $\rho\approx \rho_{\rm ML}$. We take the flux tube to lie in the $z$ direction, and we calculate its position in the $x,y$ plane using polar co-ordinates $(r,\th)$. We want to find $\dot r$, the rate at which the flux tube moves outward. Solving \eqn{balance} for the steady-state velocities $\dot r$ and $\dot\th$, we find \begin{equation} \begin{array}{rcl} \dot r &=&\displaystyle \frac{\eta}{\eta^2+\rho^2} f_r(r) \ , \\[2ex] r \dot\th &=&\displaystyle \frac{\rho}{\eta^2+\rho^2} f_r(r) \ , \end{array} \label{fluxtube-EoM} \end{equation} where $f_r$ is the radial component of the boundary force. We note in passing that $\dot r$ shows a {\em non-monotonic} dependence on the friction coefficient $\eta$. As $\eta$ tends to zero one might expect the expulsion time to also tend to zero, and in the absence of the Magnus-Lorentz force ($\rho=0$) this would indeed be the case. However, in the presence of a non-zero Magnus-Lorentz force, the flux tube moves in an orbit around the center of the star, with the radially outward boundary force balanced by the resultant radially inward Magnus-Lorentz force. If the flux tube starts at radius $r_0$ at time $t=0$ and leaves the core ($r$ reaches $R$) at time $t=t_1$, then by solving \eqn{fluxtube-EoM} we find \begin{equation} \begin{array}{rcl} t_1 &=&\displaystyle \tau \biggl[ 2\ln\Bigl(\frac{R}{r_0}\Bigr) + 1-\frac{r_0^2}{R^2} \biggr] \ , \\[3ex] \tau &=&\displaystyle \frac{R^2}{2\varepsilon_X}\frac{\eta^2 + \rho^2}{\eta} \ . \end{array} \label{t1} \end{equation} The factor in square brackets is of order 1 for initial radii $r_0$ not too close to 0 or $R$, so the flux expulsion time for a typical flux tube is of order $\tau$. From \eqn{eq:eta} and \eqn{ML-final}, $\eta\sim\alpha^2\mu_q^3$ and $\rho\sim \mu_q^3$.
So $\rho\gg\eta$, and using \eqn{X-tension}, \eqn{fbmax}, \eqn{ABfactor-bu} we find \begin{equation} \tau \approx \frac{8 \alpha_s^2\mu_q R^2}{\pi \alpha^2 \ln\ka_{\!X}} \ . \label{tau} \end{equation} Taking $\alpha_s\approx 1$, \begin{equation} \tau \approx (10^{10}\,{\rm yr}) \Bigl(\frac{\mu_q}{400\,{\rm MeV}}\Bigr) \Bigl(\frac{R}{1\,{\rm km}}\Bigr)^{\!2} \frac{1}{\ln\ka_{\!X}} \ . \label{tau-approx} \end{equation} The timescale for $X$ flux tubes to be expelled from the 2SC core is therefore of order $10^{10}$ years. \section{Conclusions} \label{sec:conclusions} Quark matter in the 2SC (or CFL) color-superconducting phase is a superconductor with respect to a broken ``$X$'' generator that is mostly color with a small admixture of electromagnetism. We have confirmed previous calculations \cite{Iida:2002ev} showing that quark matter in the 2SC phase will be a type-II $X$-superconductor if the quark pairing gap is above a critical value which is well within the expected range \eqn{kappa-2SC}. Although the ambient magnetic field in the core of a neutron star is below the lower critical field for the formation of Abrikosov flux tubes containing $X$-magnetic flux, we argue that, when the quark matter cools into the 2SC phase, the process of domain formation and amalgamation is likely to leave some of the $X$ flux trapped in the form of flux tubes. The exact configuration and density of such tubes depend on details of the dynamics of the phase transition, but the density could be within an order of magnitude of the density of conventional flux tubes in proton-superconducting nuclear matter \eqn{eq:flux_number}. Our calculations apply to 2SC quark matter in the temperature range $T_{1SC}<T \ll T_{2SC}$ where $T_{2SC}$ is the critical temperature for the formation of the 2SC condensate, expected to be of order $10\,{\rm MeV}$ ($10^{11}\,{\rm K}$), and $T_{1SC}$ is the critical temperature for self pairing of the blue quarks, which could be as low as $1\,{\rm eV}$ ($10^4\,{\rm K}$). The 2SC phase contains three species of gapless fermions: two quarks (``blue up'' and ``blue down'') and the electron. These are expected to dominate its transport properties. We do not discuss strange quarks, but our analysis is also applicable to phases with strange quarks present, as long as their pairing pattern does not break the ${\tilde Q}$ gauge symmetry. Muons may also be present, but, like strange quarks, their higher mass gives them a lower Fermi momentum so they make a subleading contribution to the phenomena discussed here. We have calculated the Aharonov-Bohm scattering cross-section of gapless fermions with the $X$ flux tubes \eqn{AB-scattering}, \eqn{ABfactor-bu}, and the associated collision (or relaxation) rate \eqn{tauinv-flux}. A comparison with the collision time for Coulomb quark-quark scattering indicates that only at very low temperatures ($T\lesssim 10^5\,{\rm K}$ or $10\,{\rm eV}$) will the flux tubes dominate over thermal relaxation via Coulomb scattering. However, we defer a detailed calculation of the transport properties, including Coulomb and $X$-boson-mediated interactions, to future work. Because the ambient magnetic field in a neutron star is below the lower critical field required to force $X$-flux tubes into 2SC quark matter, the trapped flux tubes will feel a boundary force pulling them outwards. We calculated this force for the case where the energy of the magnetic field outside the core can be neglected relative to the energy of the flux tube.
This force will be balanced by the drag force (``mutual friction'') on the moving flux tube due to its Aharonov-Bohm interaction with the thermal population of gapless quarks and electrons \eqn{eq:eta}, and also by the Magnus-Lorentz force \eqn{ML-final}. On this basis, we estimate that the timescale for the expulsion of $X$ flux tubes from a 2SC core, \eqn{tau-approx}, is of order $10^{10}$ years. The work described here offers many directions for future development.\\ (1) To get a full picture of the transport properties of 2SC quark matter one must calculate the relaxation rates associated with processes that do not include flux tubes, such as ${\tilde Q}$-Coulomb and $X$-boson-mediated interactions between gapless fermions.\\ (2) We studied the regime where the cyclotron frequency is smaller than the inverse thermal relaxation time of the unpaired quarks (see Sec.~\ref{sec:Qt-field}). It would be valuable to extend our analysis to higher magnetic fields and/or lower temperatures where the cyclotron frequency cannot be neglected.\\ (3) It is important to resolve the disagreement in the literature over whether the Magnus-Lorentz force on flux tubes is cancelled by forces arising from the neutralizing background (see Sec.~\ref{sec:ML}). This is necessary for understanding the expulsion of flux from superconducting nuclear matter as well as of the more exotic flux tubes described here.\\ (4) We assumed that the $X$-flux tubes are stable, or at least have a lifetime that is long enough for them to play a role in transport. However, there is no topological guarantee of their stability, and it is necessary to perform a calculation of their energetics, analogous to that of \cite{James:1992wb}, and to investigate bound states on the string, which, if present, can enhance their stability \cite{Vachaspati:1992mk}. \\ (5) We focussed on the 2SC phase, but other phases may support flux tubes. The CFL phase, which is the ground state of 3-flavor quark matter at asymptotically high densities, also has a gauge symmetry breaking pattern which resolves an external magnetic field into an unbroken ${\tilde Q}$ part, and a broken $X$ part which could be carried in flux tubes \cite{Iida:2004if}. In this case, too, there is no topological guarantee of stability, and an analysis of the energetic stability is required. The CFL phase also features semi-superfluid vortices with non-zero magnetization~\cite{Iida:2002ev,Balachandran:2005ev,Eto:2009bh,Eto:2009wu,Sedrakian:2008ay}. Since the CFL phase has no gapless charged excitations, the associated phenomenology is likely to be quite different. In the CFL-K0 phase there are charged kaon modes that can have an energy gap well below the pairing gap, so, if they have non-zero Aharonov-Bohm $\tilde\beta$ factors, their scattering off flux tubes might be important. \\ (6) We treated the thickness of the flux tubes as negligible, so scattering off them is dominated by the Aharonov-Bohm effect. In fact, the thickness of the flux tube is comparable to the inverse Fermi momentum of the quarks (see \eqn{lambda-X}), and there will be finite-size corrections to our results. Calculating them would require explicit construction of the radial profile of the flux tube.\\ (7) Some quark matter phases break the ${\tilde Q}$ gauge symmetry. These include the 2SC phase at $T<T_{1SC}$, and many other phases such as the color-spin-locked phase \cite{Schafer:2000tw,Schmitt:2003xq}.
It is interesting to ask what happens to magnetic flux in such cases: is the ${\tilde Q}$-superconductivity always type-I? (One suspects it may be, because the gaps are usually small.) Will the dynamics of the phase transition lead to trapped normal regions, and what is the timescale for their expulsion from the star? Could these phases retain $X$-flux tubes even after ${\tilde Q}$ flux has been expelled? If $X$-flux tubes existed in a CFL core, for example, they might experience the same sort of entanglement with superfluid vortices as is predicted in nuclear matter.\\ (8) Neutron stars probably have layers of different phases. For a proper treatment of the dynamics of magnetic flux one would have to analyse how magnetic flux is connected between layers and pinned within layers, and the consequent additional forces on the color magnetic flux tube in a 2SC core. For instance, in a conducting nuclear mantle there would be eddy-current pinning of the magnetic flux; in a type-II superconducting {\em and} superfluid mantle there would be entanglement of nuclear Abrikosov flux tubes with superfluid vortices; and so on. There is also the possibility of different quark matter phases, such as an inner CFL core, {\em inside} the 2SC region. If it turned out that additional forces arising from these other regions of the star acted so as to allow expulsion of the flux tubes on a shorter timescale, then this would have interesting astrophysical ramifications, such as a change of the magnetic moment of the star over this period of time. If the core contained a phase where $X$-flux tubes were entangled with superfluid vortices (as mentioned for the CFL phase above), then the rotational dynamics could also be affected. Observationally, this could provide a new mechanism for glitches in neutron stars, since the vortex-interface pinning force, derived above, may prevent a continuous flow of rotational vortices in the superfluid phases, in a manner analogous to vortex pinning in the crust \cite{Anderson:1975zze} and at the hadronic core-solid crust interface~\cite{Sedrakian:1998ki}. Other dynamical manifestations, such as the recently studied shear modes~\cite{Noronha:2007qf,Shahabasyan:2009zz} in the superfluid core and the post-jump relaxations (see Ref.~\cite{Sedrakian:2006xm} and references therein), will be affected as well. \section*{Acknowledgements} We thank Xu-Guang Huang, Kazunori Itakura, Naoki Itoh, Peter Jones, Muneto Nitta, Dirk Rischke, and Karen Shahabasyan for their comments. This research was supported in part by the Offices of Nuclear Physics and High Energy Physics of the U.S.~Department of Energy under contracts \#DE-FG02-91ER40628, \#DE-FG02-05ER41375, and the Deutsche Forschungsgemeinschaft (Grant SE 1836/1-1).
\section{INTRODUCTION}\label{section:introduction} \subsection{Background}\label{section:background} Active asteroids are small solar system bodies that exhibit comet-like mass loss yet occupy dynamically asteroidal orbits, typically defined as having Tisserand parameters of $T_J$$\,>\,$3.00 and semimajor axes less than that of Jupiter \citep[cf.][]{jewitt2015_actvasts_ast4}. They include main-belt comets (MBCs), whose activity is thought to be driven by the sublimation of volatile ices \citep[cf.][]{hsieh2006_mbcs}, and disrupted asteroids, whose mass loss is due to disruptive processes such as impacts or rotational destabilization \citep[cf.][]{hsieh2012_scheila}. Active asteroids have attracted considerable attention since their discovery for various reasons. MBCs may be useful for probing the ice content of the main asteroid belt \citep[e.g.,][]{hsieh2014_mbcsiausproc}, given that dust modeling, confirmation of recurrent activity, or both show that their activity is likely to be driven by the sublimation of volatile ices \citep[cf.][]{hsieh2012_scheila}, while dynamical analyses indicate that many appear to have formed in situ where we see them today \citep[e.g.,][]{haghighipour2009_mbcorigins,hsieh2012_288p,hsieh2012_324p,hsieh2013_p2012t1,hsieh2016_tisserand} \citep[or if they are originally from the outer solar system, at least must have been implanted at their current locations at very early times; e.g.,][]{levison2009_tnocontamination,vokrouhlicky2016_tnocapturemainbelt}. No spectroscopic confirmation of sublimation products has been obtained for any MBC studied to date \citep[cf.][]{jewitt2015_actvasts_ast4}, but this lack of direct detections of gas only indicates that gas production rates were below the detection limits of the observations in question at the time, not that gas was definitively absent \citep[cf.][]{hsieh2016_mbcsiausproc}. MBCs and the evidence they provide of likely present-day ice in the asteroid belt are especially interesting for solar system formation models and astrobiology given dynamical studies that suggest that a large portion of the Earth's current water inventory could have been supplied by the accretion of icy objects either from the outer asteroid belt or from more distant regions of the solar system that were scattered onto Earth-impacting orbits \citep[e.g.,][]{morbidelli2000_earthwater,raymond2004_earthwater,obrien2006_earthwater,raymond2017_waterorigin}. Meanwhile, disrupted asteroids provide opportunities to study disruption processes for which significant theoretical and laboratory work has been done \citep[e.g.,][]{ballouz2015_impactsimulations,durda2015_disruptionfragments,housen2018_impactsporousasteroids}, but for which real-world and real-time observations are relatively lacking. Disruption events represent opportunities to probe the structure and composition of asteroid interiors that are difficult to study otherwise \citep[e.g.,][]{bodewits2014_scheila,hirabayashi2014_p2013r3}. Approximately 20 active asteroids have been discovered to date, although the exact number reported by different sources within the community can vary due to slight differences in dynamical definitions (e.g., the use of different $T_J$ values as the ``asteroidal'' cut-off, such as $T_J$$\,=\,$3.05 or $T_J$$\,=\,$3.08, or the inclusion of objects not confined to the main asteroid belt).
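For reference, the Tisserand parameter with respect to Jupiter is $T_J = a_J/a + 2\cos i\,\sqrt{(a/a_J)(1-e^2)}$, where $a_J$ is Jupiter's semimajor axis and $a$, $e$, and $i$ are an object's heliocentric semimajor axis, eccentricity, and inclination. A minimal sketch of this quantity and of the dynamical cut adopted in the next paragraph is given below; the helper functions and sample elements are illustrative, and are not taken from the catalog used in this work.
\begin{verbatim}
# Sketch: Tisserand parameter with respect to Jupiter, plus the
# main-belt dynamical cut adopted below (planet-crossing checks
# are omitted for brevity).  Sample elements are placeholders.
import numpy as np

A_JUP = 5.2044  # Jupiter's semimajor axis (AU)

def tisserand_jupiter(a, e, i_deg):
    """T_J = a_J/a + 2 cos(i) sqrt((a/a_J)(1 - e^2))."""
    return (A_JUP / a
            + 2.0 * np.cos(np.radians(i_deg))
            * np.sqrt((a / A_JUP) * (1.0 - e**2)))

def main_belt_active_asteroid_cut(a, e, i_deg):
    """T_J > 3 and a between the 4J:1A and 2J:1A resonances."""
    return tisserand_jupiter(a, e, i_deg) > 3.0 and 2.065 < a < 3.278

print(tisserand_jupiter(3.16, 0.16, 1.4))              # ~3.18
print(main_belt_active_asteroid_cut(3.16, 0.16, 1.4))  # True
\end{verbatim}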
In this paper, we only consider active asteroids whose orbits do not cross those of Mars and Jupiter, have semimajor axes between the 4J:1A and 2J:1A mean-motion resonances (MMRs) at 2.065~AU and 3.278~AU, respectively (the canonical boundaries of the main asteroid belt), and have Tisserand parameters with respect to Jupiter of $T_J$$\,>\,$3. \setlength{\tabcolsep}{4.0pt} \setlength{\extrarowheight}{0em} \begin{table*}[htb!] \caption{Physical and Dynamical Properties of Known Active Asteroids} \smallskip \footnotesize \begin{tabular}{lcrccccrl} \hline\hline \multicolumn{1}{c}{Object} & \multicolumn{1}{c}{Type$^a$} & \multicolumn{1}{c}{$r_N$$^b$} & \multicolumn{1}{c}{$T_J$$^c$} & \multicolumn{1}{c}{$a_p$$^d$} & \multicolumn{1}{c}{$e_p$$^e$} & \multicolumn{1}{c}{$\sin(i_p)$$^f$} & \multicolumn{1}{c}{$t_{ly}$$^g$} & \multicolumn{1}{c}{Ref.$^h$} \\ \hline \multicolumn{4}{l}{\it \underline{Sublimation-driven activity}} \\ ~~~(1) Ceres & S & 467.6 & 3.310 & 2.767085 & 0.114993 & 0.167721 & 350.9 & [1,2] \\ ~~~133P/Elst-Pizarro (P/1996 N2) & S/R & 1.9 & 3.184 & 3.163972 & 0.153470 & 0.024165 & 934.6 & [3,4] \\ ~~~176P/LINEAR ((118401) 1999 RE$_{70}$) & S? & 2.0 & 3.166 & 3.217864 & 0.145566 & 0.024465 & 92.5 & [4,5] \\ ~~~238P/Read (P/2005 U1) & S & 0.4 & 3.153 & 3.179053 & 0.209260 & 0.017349 & 16.7 & [6,7] \\ ~~~259P/Garradd (P/2008 R1) & S & 0.3 & 3.217 & 2.729305 & 0.280882 & 0.288213 & 33.8 & [8,9] \\ ~~~288P/(300163) 2006 VW$_{139}$ & S & 1.3 & 3.204 & 3.053612 & 0.160159 & 0.037982 & 1265.8 & [10,11] \\ ~~~313P/Gibbs (P/2014 S4) & S & 0.5 & 3.132 & 3.152211 & 0.205637 & 0.178835 & 12.0 & [12,13] \\ ~~~324P/La Sagra (P/2010 R2) & S & 0.6 & 3.100 & 3.099853 & 0.114883 & 0.382057 & 1612.9 & [14,15] \\ ~~~358P/PANSTARRS (P/2012 T1) & S? & $<$1.3 & 3.135 & 3.160515 & 0.196038 & 0.175636 & 8.5 & [16] \\ ~~~P/2013 R3-A (Catalina-PANSTARRS) & S/R & $\sim$0.2 & 3.184 & 3.030727 & 0.259023 & 0.033973 & 4.2 & [17] \\ ~~~P/2013 R3-B (Catalina-PANSTARRS) & S/R & $\sim$0.2 & 3.184 & 3.029233 & 0.236175 & 0.032950 & 3.6 & [17] \\ ~~~P/2015 X6 (PANSTARRS) & S/R & $<$1.4 & 3.318 & 2.754716 & 0.163811 & 0.059354 & 91.8 & [18] \\ ~~~P/2016 J1-A (PANSTARRS) & S/R & $<$0.9 & 3.113 & 3.165357 & 0.259628 & 0.249058 & 54.0 & [19,20] \\ ~~~P/2016 J1-B (PANSTARRS) & S/R & $<$0.4 & 3.116 & 3.160171 & 0.259843 & 0.247799 & 8.2 & [19,20] \\ \hline \multicolumn{4}{l}{\it \underline{Disruption-driven activity}} \\ ~~~(493) Griseldis & I? & 20.8 & 3.140 & 3.120841 & 0.144563 & 0.267158 & 529.1 & [21,22] \\ ~~~(596) Scheila & I & 79.9 & 3.208 & 2.929386 & 0.197608 & 0.226490 & 13.4 & [23,24] \\ ~~~(62412) 2000 SY$_{178}$ & I/R & 5.2 & 3.197 & 3.147701 & 0.111265 & 0.096409 & 121.6 & [25,26] \\ ~~~311P/PANSTARRS (P/2013 P5) & R? & $<$0.2 & 3.661 & 2.189019 & 0.141820 & 0.094563 & 31.6 & [27,28] \\ ~~~331P/Gibbs (P/2012 F5) & I/R & 0.9 & 3.229 & 3.003859 & 0.022816 & 0.179959 & 6666.7 & [29,30] \\ ~~~354P/LINEAR (P/2010 A2) & I/R & 0.06 & 3.583 & 2.290197 & 0.151754 & 0.097421 & 116.8 & [31,32] \\ ~~~P/2016 G1 (PANSTARRS) & I & $<$0.05 & 3.367 & 2.583930 & 0.169074 & 0.205145 & 1818.2 & [33] \\ \hline \multicolumn{4}{l}{\it \underline{Unknown activity mechanism}} \\ ~~~233P/La Sagra & ? & --- & 3.081 & 2.985806 & 0.479060 & 0.164666 & 0.1 & [34] \\ ~~~348P/PANSTARRS & ? & --- & 3.062 & 3.146828 & 0.311352 & 0.312174 & 3.0 & [35] \\ \hline \hline \end{tabular} \\ $^a$ Type of active asteroid in terms of likely activity driver --- S: sublimation; I: impact; R: rotation; ?: unknown/uncertain.
\\ $^b$ Effective nucleus radius, in km. \\ $^c$ Tisserand parameter based on current osculating orbital elements (as of UT 2017 July 1). \\ $^d$ Proper semimajor axis, in AU. \\ $^e$ Proper eccentricity. \\ $^f$ Sine of proper inclination. \\ $^g$ Lyapunov time, in kyr. \\ $^h$ References for object-specific activity mechanism determinations and nucleus size measurements: [1] \citet{carry2008_ceres}; [2] \citet{kuppers2014_ceres}; [3] \citet{hsieh2004_133p}; [4] \citet{hsieh2009_albedos}; [5] \citet{hsieh2011_176p}; [6] \citet{hsieh2009_238p}; [7] \citet{hsieh2011_238p}; [8] \citet{maclennan2012_259p}; [9] \citet{hsieh2017_259p}; [10] \citet{hsieh2012_288p}; [11] \citet{agarwal2016_288p}; [12] \citet{jewitt2015_313p1}; [13] \citet{hsieh2015_313p}; [14] \citet{hsieh2014_324p}; [15] \citet{hsieh2015_324p}; [16] \citet{hsieh2013_p2012t1}; [17] \citet{jewitt2014_p2013r3}; [18] \citet{moreno2016_p2015x6}; [19] \citet{moreno2017_p2016j1}; [20] \citet{hui2017_p2016j1}; [21] \citet{masiero2014_neowisealbedos}; [22] \citet{tholen2015_griseldis}; [23] \citet{ishiguro2011_scheila2}; [24] \citet{masiero2012_neowisealbedos}; [25] \citet{masiero2011_neowisealbedos}; [26] \citet{sheppard2015_sy178}; [27] \citet{jewitt2013_311p}; [28] \citet{jewitt2015_311p}; [29] \citet{stevenson2012_331p}; [30] \citet{drahus2015_331p}; [31] \citet{jewitt2010_p2010a2}; [32] \citet{agarwal2013_p2010a2}; [33] \citet{moreno2016_p2016g1}; [34] \citet{mainzer2010_233p}; [35] \citet{wainscoat2017_p2017a2}. \\ \label{table:aaproperties} \end{table*} \subsection{Asteroid family associations}\label{section:familyassociations} Over the years, many active asteroids have been found to be associated with asteroid families, which are groups of asteroids with similar orbital elements that have been inferred to have formed from the catastrophic fragmentation of single parent bodies at some point in the past \citep{hirayama1918_astfam}. The first known active asteroid, 133P/Elst-Pizarro, was recognized to be a member of the $\sim$2.5~Gyr-old Themis family \citep{nesvorny2003_dustbands} soon after its discovery \citep[cf.][]{boehnhardt1998_133p}. Since then, two more active asteroids, 176P/LINEAR and 288P/(300163) 2006 VW$_{139}$, have also been associated with the Themis family \citep{hsieh2009_htp,hsieh2012_288p}. A fourth active asteroid, 238P/Read, is considered to be a possible former Themis family member whose orbit has dynamically evolved to the point at which it is no longer formally dynamically linked to the family \citep{haghighipour2009_mbcorigins}. All four of these objects are considered to be MBCs based on dust modeling results, confirmation of recurrent activity, or both \citep[e.g.,][]{boehnhardt1998_133p,hsieh2004_133p,hsieh2010_133p,hsieh2011_176p,hsieh2011_238p,hsieh2012_288p,licandro2013_288p,jewitt2014_133p}. Other active asteroids have also been associated with other asteroid families, including 311P/PANSTARRS, 313P/Gibbs, 354P/LINEAR, 358P/PANSTARRS, and (62412) 2000 SY$_{178}$ \citep[][]{hainaut2012_p2010a2,hsieh2013_p2012t1,hsieh2015_313p,jewitt2013_311p,sheppard2015_sy178}. However, only some of these associations were formally established using standard family-linking techniques. Others were simply based on the qualitative similarity of each object's osculating orbital elements to those of a nearby family.
In order to clarify the significance of asteroid family membership to active asteroids, we have conducted a search for family associations for all of the known active asteroids to date, and report the results here. We also describe the properties of these associated families and discuss the implications of our results. \section{Family Search Methodology}\label{section:methodology} Members of asteroid families can be identified from their clustering in proper orbital element space (i.e., proper semimajor axis, $a_p$, proper eccentricity, $e_p$, and proper inclination, $i_p$). Proper orbital elements are quasi-integrals of motion, where the transient oscillations of osculating orbital elements have been largely removed, making them nearly constant over time. They are therefore well-suited for identifying stable groupings of objects in dynamical parameter space. We begin our search for asteroid families associated with known active asteroids by computing synthetic proper orbital elements for each object, using the methodology described by \citet{knezevic2000_synthelements} and \citet{knezevic2003_synthelements}. Synthetic proper elements are about a factor of 3 more accurate than analytically computed proper elements for objects with low to moderate inclinations and eccentricities \citep[cf.][]{knezevic2017_propelements}, and are also significantly more reliable and useful for identifying asteroid families than analytically computed proper elements for objects at higher inclinations \citep[cf.][]{novakovic2011_highifamilies}. The results of these computations for the known active asteroids, along with computations of Lyapunov times ($t_{ly}$) to characterize their stability (where objects with $t_{ly}$$\,<\,$10~kyr are typically considered dynamically unstable), are listed in Table~\ref{table:aaproperties}. To identify clustering of family members in proper element space, we employ the Hierarchical Clustering Method \citep[HCM;][]{zappala1990_hcm,zappala1994_hcm}. The HCM identifies groupings of objects in which each cluster member lies within a threshold distance, the so-called cut-off ``distance'' ($\delta_c$), of at least one other cluster member, where $\delta_c$ typically has units of velocity (i.e., m~s$^{-1}$). The traditional application of this method \citep[cf.][]{zappala1990_hcm,zappala1994_hcm,novakovic2011_highifamilies} involves the computation of mutual distances among all asteroids within a selected region of orbital element space, and determination of a so-called quasi-random level (QRL), which is used to determine the statistical significance and the optimum $\delta_c$ value for a given family. Because we are interested in searching for families that include specific objects (i.e., the active asteroids), however, we use a slightly different HCM-like approach that starts from a selected central asteroid. In this case, the volume of the region of interest in proper element space is not defined a priori, but instead grows around the selected central object as the $\delta_c$ value being considered increases. Given that this method does not include the determination of a QRL, we need to take a different approach for selecting an appropriate $\delta_c$ value for a family associated with a particular central object.
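To make the procedure concrete, the sketch below implements this central-object, single-linkage aggregation using the standard distance metric of \citet{zappala1990_hcm}, $d = na_p\,[\,(5/4)(\delta a_p/a_p)^2 + 2(\delta e_p)^2 + 2(\delta \sin i_p)^2\,]^{1/2}$, where $na_p$ is the heliocentric orbital velocity. It is a simplified illustration of the method described above, not the code used to produce the results in this paper.
\begin{verbatim}
# Sketch of the central-object HCM variant described above: grow a
# cluster outward from a chosen starting asteroid, linking any
# object lying within d_cut of a current member (single linkage).
# The proper-element arrays stand in for a real catalog.
import numpy as np

def d_metric(el, els):
    """Zappala et al. (1990) distance (m/s) between one object and
    an array of objects; el = (a_p [AU], e_p, sin i_p)."""
    a, e, si = el
    aa, ee, ssi = els[:, 0], els[:, 1], els[:, 2]
    na = 29784.0 / np.sqrt(a)   # heliocentric orbital velocity, m/s
    return na * np.sqrt(1.25 * ((aa - a) / a)**2
                        + 2.0 * (ee - e)**2 + 2.0 * (ssi - si)**2)

def hcm_from_center(els, i_center, d_cut):
    """Indices of objects linked to els[i_center] at cut-off d_cut."""
    member = np.zeros(len(els), dtype=bool)
    member[i_center] = True
    frontier = [i_center]
    while frontier:
        i = frontier.pop()
        new = (d_metric(els[i], els) < d_cut) & ~member
        member |= new
        frontier.extend(np.flatnonzero(new))
    return np.flatnonzero(member)
\end{verbatim}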
A plot of the number of asteroids associated with a given central body as a function of $\delta_c$ for an asteroid family is typically characterized by an increase in the number of associated asteroids at small $\delta_c$ values (as members of the family are identified by their close proximity in orbital element space), a ``plateau'' (an interval of $\delta_c$ over which family membership remains nearly constant; mainly seen for families that are very cleanly separated from the background asteroid population in orbital element space), and finally, resumed growth in the number of associated asteroids as increasing $\delta_c$ values begin to incorporate a large fraction of the background population. The most appropriate $\delta_c$ value for a family is typically chosen to include the majority of the asteroids associated with the central body within the plateau region, while excluding the asteroids associated with the central body beyond the plateau region, as those objects are assumed to belong to the background population. A typical example of such a plot is shown in Figure~\ref{figure:family_progression_aeolia}. For active asteroids that have been previously linked to known families, we perform the HCM-based analysis described above using the previously identified nominal central objects of those families, and test whether each active asteroid becomes linked to its respective family at a reasonable $\delta_c$ value \citep[i.e., less than or comparable to the optimum $\delta_c$ values previously found for those families; e.g., from][]{nesvorny2015_astfam_ast4}. For those active asteroids that have not been previously linked to known families, we perform the same initial analyses using each active asteroid as the starting central object. In these cases, if we find that an active asteroid becomes linked with the central object of a known family at a $\delta_c$ value less than or comparable to that family's nominal optimum $\delta_c$ value, we then use that family's previously identified central body as the central body in a follow-up HCM-based analysis to verify that the active asteroid becomes linked with its respective family at a reasonable $\delta_c$ value. If no link between an active asteroid and a known family is found, but a previously unknown family-like cluster of asteroids is tentatively identified, we perform a similar follow-up HCM-based analysis using the largest body of that candidate family as the central object, and again attempt to verify that the active asteroid becomes linked with the candidate family at a reasonable $\delta_c$ value. We note that not all family growth plots have features that are as cleanly defined as seen in Figure~\ref{figure:family_progression_aeolia}. For families in high-density regions of the asteroid belt in orbital element space, the plateau in the family growth plot can be poorly defined, and the most appropriate $\delta_c$ for the family can be difficult to identify, if a family can be determined to exist at all. As such, selection of the best $\delta_c$ value to define a family often necessarily includes some subjective judgment, and in cases of families found in dense regions of the asteroid belt with overlapping populations of objects, analysis of the physical properties of individual asteroids may be employed to further clarify family membership \citep[e.g.,][]{masiero2013_astfams_neowise}. 
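Continuing the sketch above, the family growth plot described here amounts to scanning $\delta_c$ and recording the cluster size at each value. The mock catalog below is synthetic, built only to show the qualitative increase--plateau--background-growth behavior:
\begin{verbatim}
# Illustrative growth scan using hcm_from_center from the sketch
# above: a synthetic tight "family" embedded in a loose background.
import numpy as np

rng = np.random.default_rng(0)
fam = np.column_stack([2.740 + 0.0020 * rng.standard_normal(300),
                       0.115 + 0.0015 * rng.standard_normal(300),
                       0.100 + 0.0015 * rng.standard_normal(300)])
bkg = np.column_stack([rng.uniform(2.6, 2.9, 3000),
                       rng.uniform(0.05, 0.20, 3000),
                       rng.uniform(0.05, 0.20, 3000)])
els = np.vstack([fam, bkg])   # row 0 plays the role of the center

for d_cut in range(10, 160, 10):                       # m/s
    print(d_cut, len(hcm_from_center(els, 0, float(d_cut))))
# Typical output: rapid growth, a plateau near the family size,
# then renewed growth as the background begins to be linked.
\end{verbatim}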
In this work, we do not perform detailed analyses of each family in question, many of which have already been analyzed in detail in other works, and others which require dedicated individual investigations beyond the scope of this overview paper. Rather, we investigate whether active asteroids can be linked with families at reasonably low $\delta_c$ values, using previously determined optimum $\delta_c$ values as benchmarks when available \citep[e.g., from][]{nesvorny2015_pdsastfam}. Here we note that most active asteroids are km-scale in size or smaller, and may therefore be subject to significant Yarkovsky drift \citep[e.g.,][]{bottke2006_yarkovsky}, \changed{as well as to non-gravitational recoil forces due to asymmetric outgassing \citep[cf.][]{hui2017_activeastsnongrav}}. As such, it is reasonable to expect that some of these bodies might be found in the outer ``halos'' of their respective families, and therefore in some cases, we may consider active asteroids linked at somewhat larger $\delta_c$ values than are typically used to characterize particular families to still be potential members of those families. We perform HCM-based analyses for all known active asteroids (as of 2017 June 15) as described above using a synthetic proper element catalog for 524\,216 numbered and multi-opposition asteroids retrieved from the {\it AstDyS} website\footnote{\tt http://hamilton.dm.unipi.it/astdys} on 2017 June 15, where we compute the proper elements of the 17 active asteroids and active asteroid fragments under consideration in this work and also add these to the catalog. \section{RESULTS}\label{section:results} \subsection{Overview}\label{section:resultsoverview} We summarize the results of our search for family associations of the active asteroids in the main asteroid belt known to date in Table~\ref{table:family_associations}. We find that nearly all of the active asteroids that we investigate here have asteroid family associations, a finding whose significance we discuss further in Section~\ref{section:discussion}. In the remainder of this section, we consider the individual families for which we have found associated active asteroids, divided into those families associated with MBCs and those associated with disrupted asteroids, and discuss the physical and dynamical properties of those families in the context of the likely physical natures of their associated active asteroids. For the purposes of this work, classifications of active asteroids as MBCs or disrupted asteroids are based on observational confirmation of recurrent activity or dust modeling indicating prolonged dust emission events \citep[cf.][]{hsieh2012_scheila}, given that no direct confirmation of sublimation has been obtained for any of the MBCs studied to date (cf.\ Section~\ref{section:background}). The regions of the main asteroid belt in which each associated family is located are listed in Table~\ref{table:family_associations}, where asteroids with $a_p$ between the 4J:1A and 3J:1A MMRs (at 2.064~AU and 2.501~AU, respectively) comprise the inner main belt (IMB), asteroids with $a_p$ between the 3J:1A and 5J:2A MMRs (at 2.501~AU and 2.824~AU, respectively) comprise the middle main belt (MMB), and asteroids with $a_p$ between the 5J:2A and 2J:1A MMRs (at 2.824~AU and 3.277~AU, respectively) comprise the outer main belt (OMB). 
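The region boundaries just quoted reduce to a simple classification helper; the sketch below uses the boundary values given above:
\begin{verbatim}
# Sketch: classify a proper semimajor axis into the main-belt
# regions bounded by the 4J:1A, 3J:1A, 5J:2A, and 2J:1A MMRs.
def belt_region(a_p):
    """Return 'IMB', 'MMB', or 'OMB' for a proper semimajor axis
    in AU, or None outside the canonical main-belt boundaries."""
    if 2.064 < a_p <= 2.501:
        return "IMB"
    if 2.501 < a_p <= 2.824:
        return "MMB"
    if 2.824 < a_p < 3.277:
        return "OMB"
    return None

print(belt_region(3.163972))  # e.g., 133P -> 'OMB'
\end{verbatim}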
In cases where new families or clusters are identified, we emphasize that the reliability of these findings remains to be confirmed, requiring detailed individual analyses that are beyond the scope of this work. \changed{Similarly, especially in the cases of active asteroids linked with their respective families at relatively large $\delta_c$ values or located near major MMRs, follow-up analyses, such as backward dynamical integrations, may be required to more definitively confirm or rule out the family associations reported here.} Nonetheless, we report these preliminary findings here in order to highlight potential family associations to investigate in more detail in future work. \setlength{\tabcolsep}{2.5pt} \setlength{\extrarowheight}{0em} \begin{table*}[htb!] \caption{Family Associations of Known Active Asteroids} \smallskip \footnotesize \begin{tabular}{lcrcccclc} \hline\hline \multicolumn{1}{c}{Object} & \multicolumn{1}{c}{Family} & \multicolumn{1}{c}{$n_{fam}$$^a$} & \multicolumn{1}{c}{$\delta_{\rm AA}$$^b$} & \multicolumn{1}{c}{$\delta_c$$^c$} & \multicolumn{1}{c}{Age$^{d}$} & \multicolumn{1}{c}{Region$^{e}$} & \multicolumn{1}{c}{$\overline{p_{V}}$$^f$} & \multicolumn{1}{c}{Sp.\ Type$^g$} \\ \hline \multicolumn{4}{l}{\it \underline{Sublimation-driven activity}} \\ ~~~(1) Ceres & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & MMB & 0.090$\pm$0.003$^h$ (1) & C \\ ~~~133P/Elst-Pizarro (P/1996 N2) & Themis & 4782 & 33 & 60 & 2.5$\pm$1.0~Gyr & OMB & 0.068$\pm$0.017 (2218) & B/C \\ ~~~... & Beagle & 148 & 19 & 25 & $<$$\,$10~Myr & OMB & 0.080$\pm$0.014 (30) & B/C \\ ~~~176P/LINEAR ((118401) 1999 RE$_{70}$) & Themis & 4782 & 34 & 60 & 2.5$\pm$1.0~Gyr & OMB & 0.068$\pm$0.017 (2218) & B/C \\ ~~~238P/Read (P/2005 U1) & Gorchakov$^i$ & 16 & 45 & 75 & ? & OMB & 0.053$\pm$0.012 (7) & C \\ ~~~259P/Garradd (P/2008 R1) & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & MMB & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} \\ ~~~288P/(300163) 2006 VW$_{139}$ & 288P$^j$ & 11 & n/a & 70 & 7.5$\pm$0.3~Myr & OMB & 0.090$\pm$0.020 (2) & C \\ ~~~313P/Gibbs (P/2014 S4) & Lixiaohua & 756 & 21 & 45 & $\sim$155~Myr & OMB & 0.044$\pm$0.009 (367) & C/D/X \\ ~~~324P/La Sagra (P/2010 R2) & Alauda & 1294 & 108 & 120 & 640$\pm$50~Myr & OMB & 0.066$\pm$0.015 (687) & B/C/X \\ ~~~358P/PANSTARRS (P/2012 T1) & Lixiaohua & 756 & 13 & 45 & $\sim$155~Myr & OMB & 0.044$\pm$0.009 (367) & C/D/X \\ ~~~P/2013 R3-A (Catalina-PANSTARRS) & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & OMB & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} \\ ~~~P/2013 R3-B (Catalina-PANSTARRS) & Mandragora$^k$ & 30 & 59 & 75 & 290$\pm$20 kyr & OMB & 0.056$\pm$0.019 (9) & ? \\ ~~~P/2015 X6 (PANSTARRS) & Aeolia & 296 & 36 & 50 & $\sim$100~Myr & MMB & 0.107$\pm$0.022 (43) & C/Xe \\ ~~~P/2016 J1-A (PANSTARRS) & Theobalda & 376 & 23 & 60 & 6.9$\pm$2.3~Myr & OMB & 0.062$\pm$0.016 (107) & C/F/X \\ ~~~P/2016 J1-B (PANSTARRS) & ... & ... & 30 & ... & ... & OMB & ... & ... 
\\ \hline \multicolumn{4}{l}{\it \underline{Disruption-driven activity}} \\ ~~~(493) Griseldis & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & OMB & 0.081$\pm$0.009 (1) & X \\ ~~~(596) Scheila & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & OMB & 0.040$\pm$0.001 (1) & T \\ ~~~(62412) 2000 SY$_{178}$ & Hygiea & 4854 & 37 & 60 & 3.2$\pm$0.4~Gyr & OMB & 0.070$\pm$0.018 (1951) & B/C/D/X \\ ~~~311P/PANSTARRS (P/2013 P5) & Behrens$^i$ & 20 & 46 & 45 & ? & IMB & 0.248$\pm$0.026 (4) & Q/S/V \\ ~~~331P/Gibbs (P/2012 F5) & 331P$^l$ & 9 & n/a & 10 & 1.5$\pm$0.1~Myr & OMB & \multicolumn{1}{c}{?} & Q \\ ~~~354P/LINEAR (P/2010 A2) & Baptistina & 2500 & 43 & 48 & $\sim$100--320~Myr & IMB & 0.179$\pm$0.056 (581) & S/X \\ ~~~P/2016 G1 (PANSTARRS) & Adeona & 2236 & 44 & 50 & 620$\pm$190~Myr & MMB & 0.060$\pm$0.011 (874) & Ch \\ \hline \multicolumn{4}{l}{\it \underline{Unknown activity mechanism}} \\ ~~~233P/La Sagra & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & OMB & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} \\ ~~~348P/PANSTARRS & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & OMB & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} \\ \hline \hline \end{tabular} \\ $^a$ Number of family members, as computed by \citet{nesvorny2015_pdsastfam}, unless otherwise specified. \\ $^b$ Cut-off distance, in m~s$^{-1}$, at which the specified active asteroid becomes linked with the specified family. \\ $^c$ Cut-off distance, in m~s$^{-1}$, for family in HCM analysis, as determined by \citet{nesvorny2015_pdsastfam}, unless otherwise specified. \\ $^d$ Estimated age of family (?: unknown), from references in text. \\ $^e$ Region of the main asteroid belt in which the specified family is found (IMB: Inner Main Belt; MMB: Middle Main Belt; OMB: Outer Main Belt). \\ $^f$ Average \changed{reported} $V$-band geometric albedos of objects for which values are available; from \citet{mainzer2016_neowise} (?: no albedos available for any known family members). \\ $^g$ Spectral types of family members for which taxonomic classifications are available; from \citet{neese2010_taxonomy} and \citet{hasselmann2011_taxonomy} (?: no classifications available for any known family members). \\ $^h$ \changed{Reported} $V$-band geometric albedo for Ceres determined by \citet{li2006_ceres}. \\ $^i$ Candidate family identified and parameters determined by this work. \\ $^j$ Family identified and parameters determined by \citet{novakovic2012_288p}. \\ $^k$ Family identified and parameters determined by \citet{pravec2017_astclusters}. \\ $^l$ Family identified and parameters determined by \citet{novakovic2014_331p}. \\ \label{table:family_associations} \end{table*} \subsection{Main-Belt Comet Family Associations}\label{section:mbcfamilies} \subsubsection{The Aeolia Family}\label{section:aeolia} We find that active asteroid P/2015 X6 (PANSTARRS) is linked to the Aeolia family, which is believed to have formed in a cratering event $\sim\,$100~Myr ago \citep{spoto2015_astfamages}. 
P/2015 X6 becomes linked with the Aeolia family at $\delta_c$$\,=\,$36~m~s$^{-1}$ (Figure~\ref{figure:family_progression_aeolia}), which is actually outside the optimum cut-off distance ($\delta_{c}$$\,=\,$20~m~s$^{-1}$) determined for the family by \citet{nesvorny2015_astfam_ast4}. As can be seen in Figure~\ref{figure:family_progression_aeolia}, though, P/2015 X6 still lies well within the ``plateau'' region of the family growth plot (cf.\ Section~\ref{section:methodology}) for the Aeolia family, and so we still regard it as a likely family member. \begin{figure}[htb!] \centerline{\includegraphics[width=2.6in]{fig_familyprogression_aeolia.pdf}} \caption{\small Plot of number of asteroids associated with (396) Aeolia as a function of $\delta_c$, where the point at which P/2015 X6 becomes linked with the family ($\delta_c$$\,=\,$36~m~s$^{-1}$) is marked with a vertical arrow. } \label{figure:family_progression_aeolia} \end{figure} \begin{figure}[htb!] \centerline{\includegraphics[width=2.1in]{fig_aei_aeolia.pdf}} \caption{\small Plots of $a_p$ versus $e_p$ (top panel) and $\sin(i_p)$ (bottom panel) for Aeolia family members (small blue dots) identified by \citet{nesvorny2015_pdsastfam}. The proper elements for (396) Aeolia are marked with red triangles, while the proper elements for P/2015 X6 are marked with yellow stars. Vertical dashed lines mark the semimajor axis positions of the 3J$-$1S$-$1A (left) and 13J:5A (right) MMRs at 2.7518~AU and 2.7523~AU, respectively. } \label{figure:aei_aeolia} \end{figure} The family lies just inside the 13J:5A and 3J$-$1S$-$1A MMRs (Figure~\ref{figure:aei_aeolia}). P/2015 X6 lies close to those two resonances, indicating that it may be unstable over long timescales. This potential instability is reflected by its small $t_{ly}$ value (Table~\ref{table:aaproperties}). Given that P/2015 X6 is also relatively distant in proper element space from the core of the family \citep[and in fact is outside the $\delta_c$ cut-off established by][]{nesvorny2015_pdsastfam}, its membership in the Aeolia family may be considered somewhat uncertain. The largest member of the family, (396) Aeolia, has been spectroscopically classified as a Xe-type asteroid \citep{neese2010_taxonomy}, and \changed{has been reported to have} a geometric albedo of $p_V$$\,=\,$0.126\changed{$\pm$0.019} and effective radius of $r_e$$\,=\,$19.6\changed{$\pm$0.2}~km \citep{mainzer2016_neowise}. All other family members that have been taxonomically classified have been classified as C-type asteroids. The average \changed{reported} albedo of Aeolia family members is $\overline{p_{V}}$$\,=\,$0.107$\pm$0.022 (cf.\ Table~\ref{table:family_associations}), although albedos of individual family members \changed{have been reported to} range widely from ${p_V}$$\,\sim\,$0.05 to ${p_V}$$\,\sim\,$0.15, suggesting that the family could have a mix of primitive and non-primitive members \changed{(or alternatively, that individual reported albedo values have large uncertainties)}. \changed{One important caveat that applies here, as well as to discussions of the physical properties of other families that follow below, is that albedos reported by \citet{mainzer2016_neowise} (and many others) are generally calculated using absolute $V$-band magnitudes ($H_V$) computed using photometric data compiled by the Minor Planet Center from a wide range of observers and surveys.
However, \citet{pravec2012_wiseabsmagnitudes} found that while catalogued absolute magnitudes for larger asteroids ($H_V$$\,\lesssim\,$10) were generally consistent with results from an independent targeted observing campaign to verify $H_V$ values for several hundred main-belt and near-Earth asteroids, catalogued $H_V$ values for smaller asteroids ($H_V$$\,\gg\,$10) exhibited systematically negative offsets up to $\Delta H_V$$\,\sim\,$$-$0.5 relative to independently measured values. In many cases, the resulting offsets between catalogued albedo values and recalculated albedo values using revised $H_V$ values were within the originally reported uncertainties of the catalogued albedo values, but nonetheless, we note that albedo values discussed here, particularly for the smaller asteroids that dominate the families we discuss in this paper, should be regarded with some caution.}
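The size of this effect follows directly from the standard absolute-magnitude--diameter--albedo relation, $D = (1329~{\rm km})\,p_V^{-1/2}\,10^{-H_V/5}$: for a diameter fixed by thermal-infrared measurements, an $H_V$ value that is 0.5~mag too bright inflates the inferred albedo by a factor of $10^{0.2}$$\,\approx\,$1.6. A brief numerical illustration (the diameter and magnitude below are arbitrary, not data from this paper):
\begin{verbatim}
# Propagating an absolute-magnitude offset into a diameter-based
# albedo via p_V = (1329 km / D)^2 * 10^(-0.4 H_V); the inputs
# are arbitrary illustrative values.
def albedo(D_km, H_V):
    return (1329.0 / D_km)**2 * 10.0**(-0.4 * H_V)

p_cat = albedo(2.0, 17.4)        # from a catalogued H_V (~0.048)
p_rev = albedo(2.0, 17.4 + 0.5)  # H_V revised fainter by 0.5 mag
print(p_cat / p_rev)             # 10^0.2 ~ 1.58
\end{verbatim}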
A dust modeling analysis of the activity of P/2015 X6 indicates that the object underwent sustained dust loss over a period of at least two months, suggesting that the observed activity was sublimation-driven \citep{moreno2016_p2015x6}, making the object a likely MBC. \subsubsection{The Alauda Family}\label{section:alauda} We find that active asteroid 324P/La Sagra (formerly designated P/2010 R2) is linked to the Alauda family, which has been determined to be 640$\pm$50~Myr old \citep{carruba2016_oldestfamilies}. 324P becomes linked with the Alauda family at $\delta_c$$\,=\,$108~m~s$^{-1}$ (Figure~\ref{figure:family_progression_alauda}), just within the optimum cut-off distance ($\delta_{c}$$\,=\,$120~m~s$^{-1}$) determined for the family by \citet{nesvorny2015_astfam_ast4}. \begin{figure}[htb!] \centerline{\includegraphics[width=2.6in]{fig_familyprogression_alauda.pdf}} \caption{\small Plot of number of asteroids associated with (702) Alauda as a function of $\delta_c$, where the point at which 324P becomes linked with the family ($\delta_c$$\,=\,$108~m~s$^{-1}$) is marked with a vertical arrow.} \label{figure:family_progression_alauda} \end{figure} \begin{figure}[htb!] \centerline{\includegraphics[width=2.1in]{fig_aei_alauda.pdf}} \caption{\small Plots of $a_p$ versus $e_p$ (top panel) and $\sin(i_p)$ (bottom panel) for Alauda family members (small blue dots) identified by \citet{nesvorny2015_pdsastfam}. The proper elements for (702) Alauda are marked with red triangles, while the proper elements for 324P are marked with yellow stars. Vertical dashed lines mark the semimajor axis positions, from left to right, of the 9J:4A, 13J:6A, and 2J:1A MMRs at 3.0307~AU, 3.1080~AU, and 3.2783~AU, respectively. } \label{figure:aei_alauda} \end{figure} The family is found between the 9J:4A and 2J:1A MMRs, and is crossed by the 13J:6A MMR as well as other two- and three-body MMRs (Figure~\ref{figure:aei_alauda}). It is bounded above in proper inclination space by the Euphrosyne family and below by the Luthera family, and is also adjacent to the Danae and Erminia families in proper semimajor axis space, separated by the 9J:4A MMR. Some exchange of objects may be possible between the Alauda family and surrounding families via the $\nu_6$ secular resonance (which connects it to the Danae region), and various three-body MMRs (which connect it to the Euphrosyne and Luthera families) \citep{machuca2012_euphrosyne}. Several sub-families and clumps within this region have been identified \citep{machuca2012_euphrosyne}, but 324P is not linked to any of them at $\delta_c$ values smaller than the $\delta_c$ value at which it is linked to the main Alauda family. The largest member of the family, (702) Alauda, has been spectroscopically classified as a B-type asteroid \citep{neese2010_taxonomy}, possesses a small satellite, and \changed{has been reported to have} $p_V$$\,=\,$0.061\changed{$\pm$0.011}, $r_e$$\,=\,$95.5\changed{$\pm$1.0}~km, and a bulk density of $\rho$$\,=\,$1570$\pm$500~kg~m$^{-3}$ \citep{bus2004_alaudaspectrum,rojo2011_alauda,mainzer2016_neowise}. Other family members have been taxonomically classified as B-, C-, and X-type asteroids, and \changed{have been reported to have} a low average albedo of $\overline{p_V}$$\,=\,$0.066$\pm$0.015 (cf.\ Table~\ref{table:family_associations}), indicating that they are likely to have primitive compositions. Photometric and morphological analysis of the activity of 324P/La Sagra in 2010 suggested that it was likely to be sublimation-driven \citep{hsieh2012_324p}, a conclusion that was strengthened by the detection of recurrent activity in 2015 \citep{hsieh2015_324p}, making the object a likely MBC. The object's nucleus has been measured to have $r_e$$\,=\,$0.55$\pm$0.05~km \citep[assuming a $R$-band albedo of $p_R$$\,=\,$0.05;][]{hsieh2014_324p}. \subsubsection{The Gorchakov Family}\label{section:gorchakov} We find that active asteroid 238P/Read (formerly designated P/2005 U1) is linked to a candidate asteroid family that we designate here as the Gorchakov family. 238P becomes linked with the candidate Gorchakov family at $\delta_c$$\,=\,$45~m~s$^{-1}$ (Figure~\ref{figure:family_progression_gorchakov}). The largest member of the family, (5014) Gorchakov, \changed{has been reported to have} $p_V$$\,=\,$0.057\changed{$\pm$0.008} and $r_e$$\,=\,$9.7\changed{$\pm$0.1}~km \citep{mainzer2016_neowise}, where its low albedo suggests that it may have a primitive composition. Other family members have been classified as C-type asteroids, and \changed{have been reported to have} a low average \changed{reported} albedo of $\overline{p_{V}}$$\,=\,$0.053$\pm$0.012 (Table~\ref{table:family_associations}), indicating that they are likely to have primitive compositions. \begin{figure}[htb!] \centerline{\includegraphics[width=2.6in]{fig_familyprogression_gorchakov.pdf}} \caption{\small Plot of number of asteroids associated with (5014) Gorchakov as a function of $\delta_c$, where the point at which 238P becomes linked with the family ($\delta_c$$\,=\,$45~m~s$^{-1}$) is marked with a vertical arrow. } \label{figure:family_progression_gorchakov} \end{figure} \begin{figure}[htb!] \centerline{\includegraphics[width=2.1in]{fig_aei_gorchakov.pdf}} \caption{\small Plots of $a_p$ versus $e_p$ (top panel) and $\sin(i_p)$ (bottom panel) for Gorchakov family members (small blue dots) identified by HCM analysis performed as part of this work using $\delta_c$$\,=\,$75~m~s$^{-1}$. The proper elements for (5014) Gorchakov are marked with red triangles, while the proper elements for 238P are marked with yellow stars. } \label{figure:aei_gorchakov} \end{figure} The activity of 238P is strongly believed to be sublimation-driven based on numerical dust modeling of its activity in 2005 \citep{hsieh2009_238p} and observations of recurrent activity on two additional occasions in 2010 and 2016 \citep{hsieh2011_238p,hsieh2016_238p}, making the object a likely MBC.
The object's nucleus has been estimated to have $r_e$$\,\sim\,$0.4~km \citep[assuming $p_R$$\,=\,$0.05;][]{hsieh2009_238p}. While we find that 238P is currently associated with the candidate Gorchakov family, \citet{haghighipour2009_mbcorigins} has suggested that it may have been a former member of the Themis family that has since migrated in eccentricity away from the family. This hypothesis is supported by 238P's small $t_{ly}$ value (Table~\ref{table:aaproperties}), indicating that it is dynamically unstable, although the existence of a plausible dynamical pathway from the Themis family to 238P's current location has not yet been definitively demonstrated. \subsubsection{The Lixiaohua Family}\label{section:lixiaohua} Active asteroids 313P/Gibbs (formerly designated P/2014 S4) and 358P/PANSTARRS (formerly designated P/2012 T1) have been previously linked to the Lixiaohua family \citep{hsieh2013_p2012t1,hsieh2015_313p}, which has been determined to be 155$\pm$36~Myr old \citep{novakovic2010_chaotictransport}. The Lixiaohua family has a size-frequency distribution consistent with being the result of a catastrophic disruption event \citep{novakovic2010_lixiaohua,benavidez2012_sfds}. 313P becomes linked with the Lixiaohua family at $\delta_c$$\,=\,$21~m~s$^{-1}$, and 358P becomes linked with the family at $\delta_c$$\,=\,$13~m~s$^{-1}$ (Figure~\ref{figure:family_progression_lixiaohua}), both well within the optimum cut-off distance ($\delta_{c}$$\,=\,$45~m~s$^{-1}$) determined for the family by \citet{nesvorny2015_astfam_ast4}. \begin{figure}[htb!] \centerline{\includegraphics[width=2.6in]{fig_familyprogression_lixiaohua.pdf}} \caption{\small Plot of number of asteroids associated with (3556) Lixiaohua as a function of $\delta_c$, where the points at which 313P and 358P become linked with the family ($\delta_c$$\,=\,$21~m~s$^{-1}$ and $\delta_c$$\,=\,$13~m~s$^{-1}$, respectively) are marked with vertical arrows. } \label{figure:family_progression_lixiaohua} \end{figure} The family resides in a dynamically complex region of orbital element space in the asteroid belt (Figure~\ref{figure:aei_lixiaohua}), and is affected by several weak two- and three-body MMRs (cf.\ Figure~\ref{figure:aei_lixiaohua}) and potential close encounters with large asteroids, particularly Ceres, resulting in chaotic diffusion in all three proper elements ($a_p$, $e_p$, and $i_p$) \citep{novakovic2010_lixiaohua}. Both 313P and 358P have small $t_{ly}$ values (Table~\ref{table:aaproperties}), indicating that they are relatively unstable over long timescales. About 20\% of Lixiaohua family members also have similar or smaller $t_{ly}$ values, however, and so the small $t_{ly}$ values of 313P and 358P do not necessarily indicate that they are likely to be recently implanted interlopers, but may instead simply reflect the complex dynamical environment of the family resulting in general instability for a large number of its members. \begin{figure}[htb!] \centerline{\includegraphics[width=2.1in]{fig_aei_lixiaohua.pdf}} \caption{\small Plots of $a_p$ versus $e_p$ (top panel) and $\sin(i_p)$ (bottom panel) for Lixiaohua family members (small blue dots) identified by \citet{nesvorny2015_pdsastfam}. The proper elements for (3556) Lixiaohua are marked with red triangles, while the proper elements for 313P and 358P are marked with yellow stars. 
} \label{figure:aei_lixiaohua} \end{figure} The largest member of the family, (3556) Lixiaohua, has been spectroscopically classified as a C- or X-type asteroid \citep[cf.][]{nesvorny2005_spaceweathering}, and \changed{has been reported to have} $p_V$$\,=\,$0.035\changed{$\pm$0.004} and $r_e$$\,=\,$10.04\changed{$\pm$0.02}~km \citep{mainzer2016_neowise}. Other family members have been classified as C-, D-, and X-type asteroids and \changed{have been reported to have} $\overline{p_V}$$\,=\,$0.044$\pm$0.009, indicating that they are likely to have primitive compositions. The activity of 313P is strongly believed to be sublimation-driven based on both numerical dust modeling and observations showing that it has been active on at least two occasions in 2003 and 2014 \citep{hsieh2015_313p,jewitt2015_313p1,jewitt2015_313p2,hui2015_313p}, making the object a likely MBC. Photometric monitoring of 358P while it was active in 2012 suggests that its activity is likely to be due to sublimation, making the object a likely MBC as well. \subsubsection{The Mandragora Family}\label{section:mandragora} We find that active asteroid component P/2013 R3-B (PANSTARRS) is linked to the recently identified Mandragora family, which has been determined to be 290$\pm$20~kyr old \citep{pravec2017_astclusters}. P/2013 R3-B becomes linked with the Mandragora family at $\delta_c$$\,=\,$59~m~s$^{-1}$ (Figure~\ref{figure:family_progression_mandragora}), within the optimum cut-off distance ($\delta_{c}$$\,=\,$65~m~s$^{-1}$) determined for the family by \citet{pravec2017_astclusters}. P/2013 R3-A is not formally linked to the family at the present time. \begin{figure}[htb!] \centerline{\includegraphics[width=2.6in]{fig_familyprogression_mandragora.pdf}} \caption{\small Plot of number of asteroids associated with (22280) Mandragora as a function of $\delta_c$, where the point at which P/2013 R3-B becomes linked with the family ($\delta_c$$\,=\,$59~m~s$^{-1}$) is marked with a vertical arrow. } \label{figure:family_progression_mandragora} \end{figure} \begin{figure}[htb!] \centerline{\includegraphics[width=2.1in]{fig_aei_mandragora.pdf}} \caption{\small Plots of $a_p$ versus $e_p$ (top panel) and $\sin(i_p)$ (bottom panel) for candidate Mandragora family members (small blue dots) identified by \citet{pravec2017_astclusters}. The proper elements for (22280) Mandragora are marked with red triangles, while the proper elements for P/2013 R3-A and P/2013 R3-B are marked with yellow stars. Vertical dashed lines mark the semimajor axis position of the 9J:4A MMR at 3.0307~AU. } \label{figure:aei_mandragora} \end{figure} The 9J:4A MMR falls near the cluster, very nearly coinciding with the proper semimajor axes of P/2013 R3-A and P/2013 R3-B (Figure~\ref{figure:aei_mandragora}). This means that those objects are likely to be dynamically unstable on long timescales, a conclusion supported by the objects' small $t_{ly}$ values (Table~\ref{table:aaproperties}), and may not in fact share a common origin with other Mandragora family members. If the parent body of P/2013 R3-A and P/2013 R3-B was originally a member of the candidate Mandragora family though, destabilization by the 9J:4A MMR or non-gravitational outgassing forces could explain why P/2013 R3-A has diffused away from the family in proper eccentricity. A third possibility is that P/2013 R3-A could simply not be linked to the Mandragora family due to poor proper element determination resulting from large uncertainties in the osculating orbital elements of both fragments. 
A more detailed backward integration analysis like that performed by \citet{pravec2017_astclusters} for other members of the family would help to clarify the membership status of both fragments, and should be performed in the future. The largest member of the Mandragora family, (22280) Mandragora, \changed{has been reported to have} $p_V$$\,=\,$0.046\changed{$\pm$0.006} and $r_e$$\,=\,$4.9\changed{$\pm$0.1}~km \citep{mainzer2016_neowise,pravec2017_astclusters}. No family members have been taxonomically classified, but those with measured albedos \changed{have been reported to have} $\overline{p_V}$$\,=\,$0.056\changed{$\pm$0.019}, indicating that they are likely to have primitive compositions. This conclusion is also supported by the C-type-like $V-R$ colors measured for the two largest members of the family reported by \citet{pravec2017_astclusters}. When P/2013 R3 was discovered, the object had already split into multiple fragments, which then disintegrated further over the following several months. Analysis of follow-up observations suggested that the comet likely broke apart due to stresses from rapid rotation \citep{jewitt2014_p2013r3,jewitt2017_p2013r3}. \citet{jewitt2014_p2013r3} concluded that gas pressure alone was insufficient for causing the catastrophic disruption of the comet, although individual fragments were observed to exhibit secondary dust emission behavior indicative of being driven by sublimation, perhaps of newly exposed interior ices, making the object a likely MBC. \subsubsection{The Themis and Beagle Families}\label{section:themis_beagle} Active asteroids 133P (also designated (7968) Elst-Pizarro) and 176P (also designated (118401) LINEAR) have been previously linked to the Themis family \citep[e.g.,][]{boehnhardt1998_133p,hsieh2009_htp}. While MBC 288P was previously found to be associated with the Themis family, we find that it becomes linked with the family at $\delta_c$$\,=\,$77~m~s$^{-1}$, outside the nominally established $\delta_c$ for the family. 133P becomes linked with the family at $\delta_c$$\,=\,$33~m~s$^{-1}$ and 176P becomes linked with the family at $\delta_c$$\,=\,$34~m~s$^{-1}$ (Figure~\ref{figure:family_progression_themis}), both well within the optimum cut-off distance ($\delta_{c}$$\,=\,$60~m~s$^{-1}$) determined for the family by \citet{nesvorny2015_astfam_ast4}. \citet{nesvorny2003_dustbands} estimated the age of the Themis family to be 2.5$\pm$1.0~Gyr, but due to large uncertainties caused by its old age and dynamical environment, other estimates for the age of the Themis family range from as little as 500~Myr to nearly the age of the solar system ($\sim$4.5~Gyr) \citep{spoto2015_astfamages,carruba2016_oldestfamilies}. Meanwhile, 133P has been previously determined to also be linked to the Beagle family, which is a sub-family of the Themis family and has been estimated to be $<\,$10~Myr old \citep{nesvorny2008_beagle}. 133P becomes linked with the Beagle family at $\delta_c$$\,=\,$19~m~s$^{-1}$ (Figure~\ref{figure:family_progression_beagle}), within the optimum cut-off distance ($\delta_{c}$$\,=\,$25~m~s$^{-1}$) determined for the family by \citet{nesvorny2015_astfam_ast4}. \begin{figure}[htb!] 
\centerline{\includegraphics[width=2.6in]{fig_familyprogression_themis.pdf}} \caption{\small Plot of number of asteroids associated with (24) Themis as a function of $\delta_c$, where the points at which 133P and 176P become linked with the family ($\delta_c$$\,=\,$33~m~s$^{-1}$ and $\delta_c$$\,=\,$34~m~s$^{-1}$, respectively) are marked with vertical arrows. } \label{figure:family_progression_themis} \end{figure} \begin{figure}[htb!] \centerline{\includegraphics[width=2.6in]{fig_familyprogression_beagle.pdf}} \caption{\small Plot of number of asteroids associated with (656) Beagle as a function of $\delta_c$, where the point at which 133P becomes linked with the family ($\delta_c$$\,=\,$19~m~s$^{-1}$) is marked with a vertical arrow. } \label{figure:family_progression_beagle} \end{figure} \begin{figure}[htb!] \centerline{\includegraphics[width=2.1in]{fig_aei_themis.pdf}} \caption{\small Plots of $a_p$ versus $e_p$ (top panel) and $\sin(i_p)$ (bottom panel) for Themis family members (small blue dots; using $\delta_c$$\,=\,$60~m~s$^{-1}$) and Beagle family members (small pale red dots) identified by \citet{nesvorny2015_pdsastfam}. The proper elements for (24) Themis and (656) Beagle are marked with red triangles, while the proper elements for 133P and 176P are marked with yellow stars. Vertical dashed lines mark the semimajor axis positions of the 9J:4A (left) and 2J:1A (right) MMRs at 3.0307~AU and 3.2783~AU, respectively. } \label{figure:aei_themis} \end{figure} \begin{figure}[htb!] \centerline{\includegraphics[width=2.1in]{fig_aei_beagle.pdf}} \caption{\small Plots of $a_p$ versus $e_p$ (top panel) and $\sin(i_p)$ (bottom panel) for the Beagle family (small blue dots; using $\delta_c$$\,=\,$25~m~s$^{-1}$). The proper elements for (656) Beagle are marked with red triangles, while the proper elements for 133P are marked with yellow stars. } \label{figure:aei_beagle} \end{figure} The Themis family was one of the original asteroid families identified by \citet{hirayama1918_astfam}. Because the Themis family is adjacent to the 2J:1A MMR (cf.\ Figure~\ref{figure:aei_themis}), it is believed that a significant number of family members may have been captured and scattered by the resonance since the family's formation \citep{morbidelli1995_familyresonances}. The largest member of the family, (24) Themis, has been spectroscopically classified as a B- or C-type asteroid \citep{neese2010_taxonomy}, and \changed{has been reported to have} $p_V$$\,=\,$0.069\changed{$\pm$0.010}, $r_e$$\,=\,$97.8\changed{$\pm$2.2}~km, and an estimated density of $\rho$$\,=\,$1.81$\pm$0.67~g~cm$^{-3}$ \citep{mainzer2016_neowise,carry2012_astdensities}. Notably, a near-infrared absorption feature attributed to water ice frost was detected in spectra of both Themis \citep{rivkin2010_themis,campins2010_themis} and another large member of the family, (90) Antiope \citep{hargrove2015_antiope}. No spectroscopic evidence of outgassing has yet been found, though \citep{jewitt2012_themiscybele,mckay2017_themisceres}. The Themis family in general is dominated by C-complex asteroids, many of which exhibit spectra indicative of aqueously altered mineralogy and are similar to carbonaceous chondrite meteorites \citep{florczak1999_themisspectra,ziffer2011_themisveritas}. The Beagle family is entirely contained within the Themis family (cf.\ Figure~\ref{figure:aei_themis}), suggesting that it formed from the fragmentation of a parent body that was itself a Themis family member.
Despite the now commonly used name of the family, \citet{nesvorny2008_beagle} noted that it is possible that (656) Beagle may not actually be a real member of the family that bears its name, as its slight eccentricity offset from the other family members (Figure~\ref{figure:aei_beagle}) would require the invocation of an unusual ejection velocity field to explain. No formal taxonomic classification has been reported for the largest member of the family, (656) Beagle, but it has a spectrum consistent with C-complex asteroids \citep{kaluna2016_spaceweathering,fornasier2016_themisbeagle}, and \changed{has been reported to have} $p_V$$\,=\,$0.045\changed{$\pm$0.005} and $r_e$$\,=\,$31.3\changed{$\pm$0.3}~km \citep{mainzer2016_neowise}. Other family members have been taxonomically classified as B- and C-type asteroids, and \changed{have been reported to have} $\overline{p_V}$$\,=\,$0.080$\pm$0.014 (cf.\ Table~\ref{table:family_associations}), indicating that they are likely to have primitive compositions. Based on the simultaneous detection of some objects with phyllosilicate absorption features and the presence of the apparently ice-bearing 133P in the family, \citet{kaluna2016_spaceweathering} concluded that the Beagle parent body was most likely composed of a heterogeneous mixture of ice and aqueously altered material. Meanwhile, spectroscopic observations of samples of both Beagle and Themis family asteroids showed that Beagle family asteroids are spectrally bluer, have higher albedos, and exhibit smaller spectral slope variability than background Themis family asteroids, suggesting that the Beagle parent body could have been a particularly blue and bright interior fragment of the original Themis parent body \citep{fornasier2016_themisbeagle}. 133P has been observed to be active during four perihelion passages (in 1996, 2002, 2007, and 2013) with intervening periods of inactivity, where dust modeling results indicate that dust emission took place over periods of months during its 1996, 2002, and 2013 active epochs \citep{boehnhardt1998_133p,hsieh2004_133p,jewitt2014_133p}. As such, 133P's activity is strongly believed to be the result of sublimation of volatile ices, although it is possible that the object's rapid rotation ($P_{\rm rot}$$\,=\,$3.471$\pm$0.001~hr) may also play a role in helping to eject dust particles \citep{hsieh2004_133p}. 133P's nucleus has been taxonomically classified as a B- or F-type asteroid \citep{bagnulo2010_133p,licandro2011_133p176p} and \changed{has been reported to have} $p_R$$\,=\,$0.05$\pm$0.02 and $r_e$$\,=\,$1.9$\pm$0.3~km \citep{hsieh2009_albedos}. While the presence of water ice has not yet been definitively spectroscopically confirmed on 133P, \citet{rousselot2011_133p} reported that its spectrum could be consistent with a mixture of water ice, black carbon, tholins, and silicates, but acknowledged that such a compositional interpretation was not unique. 176P's nucleus has been taxonomically classified as a B-type asteroid \citep{licandro2011_133p176p} and \changed{has been reported to have} $p_R$$\,=\,$0.06$\pm$0.02 and $r_e$$\,=\,$2.0$\pm$0.2~km \citep{hsieh2009_albedos}. Numerical dust modeling of its activity in 2005 indicated that it was likely to be due to a prolonged dust emission event, pointing to sublimation as the most likely driver of the activity \citep{hsieh2011_176p}, making the object a likely MBC.
However, numerous observations during the object's next perihelion passage in 2011 revealed no evidence of recurrent activity, which could suggest either that the object did not actually exhibit sublimation-driven activity when observed in 2005, or that the object's activity had simply become attenuated to an undetectable level during the following orbit passage \citep{hsieh2014_176p}. \subsubsection{The Theobalda Family}\label{section:theobalda} We find that active asteroid P/2016 J1-A/B (PANSTARRS) is linked to the Theobalda family. P/2016 J1-A becomes linked with the Theobalda family at $\delta_c$$\,=\,$23~m~s$^{-1}$, and P/2016 J1-B becomes linked with the family at $\delta_c$$\,=\,$30~m~s$^{-1}$ (Figure~\ref{figure:family_progression_theobalda}), both well within the optimum cut-off distance ($\delta_{c}$$\,=\,$85~m~s$^{-1}$) determined for the family by \citet{novakovic2010_theobalda}. \citet{novakovic2010_theobalda} further determined via two independent methods (chaotic chronology and backward integration) that the family was likely produced 6.9$\pm$2.3~Myr ago by a cratering impact on a $d$$\,=\,$78$\pm$9~km parent body. A detailed dynamical analysis of this family was performed by \citet{novakovic2010_theobalda}, who found that it is crossed by several three-body MMRs (Figure~\ref{figure:aei_theobalda}), making the region significantly chaotic. \begin{figure}[htb!] \centerline{\includegraphics[width=2.6in]{fig_familyprogression_theobalda.pdf}} \caption{\small Plot of number of asteroids associated with (778) Theobalda as a function of $\delta_c$, where the points at which P/2016 J1-A and P/2016 J1-B become linked with the family ($\delta_c$$\,=\,$23~m~s$^{-1}$ and $\delta_c$$\,=\,$30~m~s$^{-1}$, respectively) are marked with vertical arrows. } \label{figure:family_progression_theobalda} \end{figure} \begin{figure}[htb!] \centerline{\includegraphics[width=2.1in]{fig_aei_theobalda.pdf}} \caption{\small Plots of $a_p$ versus $e_p$ (top panel) and $\sin(i_p)$ (bottom panel) for Theobalda family members (small blue dots) identified by \citet{nesvorny2015_astfam_ast4}. The proper elements for (778) Theobalda are marked with red triangles, while the proper elements for P/2016 J1-A and P/2016 J1-B are marked with yellow stars. } \label{figure:aei_theobalda} \end{figure} The largest member of the family, (778) Theobalda, has been spectroscopically classified as an F-type asteroid \citep{neese2010_taxonomy}, and \changed{has been reported to have} $p_V$$\,=\,$0.079\changed{$\pm$0.010} and $r_e$$\,=\,$27.7\changed{$\pm$0.4}~km \citep{mainzer2016_neowise}. Other family members have been taxonomically classified as C-, F-, and X-type asteroids and \changed{have been reported to have} $\overline{p_V}$$\,=\,$0.062$\pm$0.016 (cf.\ Table~\ref{table:family_associations}), indicating that they are likely to have primitive compositions. P/2016 J1 was characterized by \citet{hui2017_p2016j1} and \citet{moreno2017_p2016j1}, who found mass loss rates of $\lesssim\,$1~kg~s$^{-1}$ for both components of the object (P/2016 J1-A and P/2016 J1-B). Both sets of authors also found that both components were continuously active over a period of three to nine months, strongly suggesting that the activity was sublimation-driven, making the object a likely MBC. \citet{hui2017_p2016j1} also estimated that the two largest fragments, J1-A and J1-B, have radii of 140$\,$m$\,<\,$$r_{e}$$\,<\,$900$\,$m and 40$\,$m$\,<\,$$r_{e}$$\,<\,$400$\,$m, respectively, and broadband colors similar to C- or G-type asteroids.
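For reference, the linkage distances $\delta$ and cut-off distances $\delta_c$ quoted throughout this section are mutual distances between objects in proper orbital element space, conventionally computed in HCM analyses with the standard Zappal\`a et al.-style metric. The following minimal sketch illustrates that metric using the conventional coefficient choices; the coefficients and the example orbits shown here are illustrative assumptions and do not necessarily reproduce the exact implementation used for the analyses described in this work.
\begin{verbatim}
import math

# Standard-coefficient proper-element distance metric conventionally
# used in HCM analyses; the coefficients (5/4, 2, 2) are the usual
# choices, not necessarily those of this work's exact implementation.
K_A, K_E, K_I = 5.0 / 4.0, 2.0, 2.0
GM_SUN = 2.959122082855911e-4    # AU^3 day^-2
AU_M, DAY_S = 1.495978707e11, 86400.0

def delta_v(a1, e1, sini1, a2, e2, sini2):
    """Mutual distance (m/s) between two orbits in proper-element space."""
    a = 0.5 * (a1 + a2)
    na = math.sqrt(GM_SUN / a) * AU_M / DAY_S   # orbital speed, m/s
    d2 = (K_A * ((a1 - a2) / a) ** 2
          + K_E * (e1 - e2) ** 2
          + K_I * (sini1 - sini2) ** 2)
    return na * math.sqrt(d2)

# Hypothetical example: two outer-belt orbits differing only in e_p
print(delta_v(3.13, 0.155, 0.024, 3.13, 0.156, 0.024))   # ~24 m/s
\end{verbatim}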
\subsubsection{The 288P Family}\label{section:288P} Active asteroid 288P/(300163) 2006 VW$_{139}$ has been previously linked to a 7.5$\pm$0.3~Myr-old asteroid family designated as the 288P family \citep{novakovic2012_288p}. The 11-member 288P family was analyzed in detail by \citet{novakovic2012_288p}, who found that it was likely formed in a disruptive event characterized as being intermediate between a catastrophic disruption and a cratering event. It is located in close proximity to the Themis family, with which it merges at $\delta_c$$\,\sim\,$75~m~s$^{-1}$, separated by a number of weak two- and three-body MMRs. These MMRs may be responsible for a number of dynamically unstable interlopers, which were excluded from the family by \citet{novakovic2012_288p} based on a backward integration method (BIM) analysis \citep[cf.][]{nesvorny2002_karin}. The 288P family is roughly bound by the 9J:4A MMR at 3.0307~AU on one side and is crossed by the 20J:9A MMR at 3.0559~AU (Figure~\ref{figure:aei_288p}). \begin{figure}[htb!] \centerline{\includegraphics[width=2.6in]{fig_familyprogression_288p.pdf}} \caption{\small Plot of number of asteroids associated with 288P as a function of $\delta_c$. } \label{figure:family_progression_288p} \end{figure} \begin{figure}[htb!] \centerline{\includegraphics[width=2.1in]{fig_aei_288p.pdf}} \caption{\small Plots of $a_p$ versus $e_p$ (top panel) and $\sin(i_p)$ (bottom panel) for 288P family members (small blue dots) identified by \citet{novakovic2012_288p}. The proper elements for 288P are marked with yellow stars. Vertical dashed lines mark the semimajor axis positions of the 9J:4A (left) and 20J:9A (right) MMRs at 3.0307~AU and 3.0559~AU, respectively. } \label{figure:aei_288p} \end{figure} The two members of the family that have had their albedos measured \changed{have been reported to have} ${p_V}$$\,=\,$0.077$\pm$0.037 and ${p_V}$$\,=\,$0.103$\pm$0.077, giving $\overline{p_V}$$\,=\,$0.090$\pm$0.021 (cf.\ Table~\ref{table:family_associations}), indicating that they may have relatively primitive compositions. The activity of 288P seen in 2011 is believed to be sublimation-driven based on numerical dust modeling \citep{hsieh2012_288p,agarwal2016_288p}, making the object a likely MBC, where this conclusion was further strengthened by the recent confirmation in 2016 that the object is recurrently active \citep{agarwal2016_288p}. The nucleus of 288P has been classified as a C-type asteroid, has an effective absolute magnitude of $H_V$$\,=\,$17.0$\pm$0.1 (equivalent to $r_e$$\,\sim\,$1.3~km, assuming $p_V$$\,=\,$0.04), and has recently been confirmed to be a binary system with approximately equally sized components \citep{licandro2013_288p,agarwal2016_288p,agarwal2017_288p}. \subsection{Disrupted Asteroid Family Associations}\label{section:dafamilies} \subsubsection{The Adeona Family}\label{section:adeona} We find that active asteroid P/2016 G1 (PANSTARRS) is linked to the Adeona family, which is estimated to have formed in a cratering event $\sim$700~Myr ago \citep{benavidez2012_sfds,carruba2016_ejectionfields,milani2017_astfamages}. P/2016 G1 becomes linked with the family at $\delta_c$$\,=\,$44~m~s$^{-1}$ (Figure~\ref{figure:family_progression_adeona}), within the optimum cut-off distance ($\delta_{c}$$\,=\,$50~m~s$^{-1}$) determined for the family by \citet{nesvorny2015_astfam_ast4}. \begin{figure}[htb!]
\centerline{\includegraphics[width=2.6in]{fig_familyprogression_adeona.pdf}} \caption{\small Plot of number of asteroids associated with (145) Adeona as a function of $\delta_c$, where the point at which P/2016 G1 becomes linked with the family ($\delta_c$$\,=\,$44~m~s$^{-1}$) is marked with a vertical arrow. } \label{figure:family_progression_adeona} \end{figure} The Adeona family's orbital evolution was investigated in detail by \citet{carruba2003_gefionadeona}, who found that perturbations from large asteroids like Ceres could affect the inferred ejection velocities of family members, but should have minimal effect on the spread of the family's semimajor axis distribution. The sharp cut-off of the family at the 8J:3A MMR at 2.706~AU (Figure~\ref{figure:aei_adeona}) is attributed to family members drifting into the resonance under the influence of the Yarkovsky effect and becoming scattered in eccentricity, thus becoming unrecognizable as family members. The family is also crossed by several other two-body, three-body, and secular resonances \citep[][]{carruba2003_gefionadeona}. \begin{figure}[htb!] \centerline{\includegraphics[width=2.1in]{fig_aei_adeona.pdf}} \caption{\small Plots of $a_p$ versus $e_p$ (top panel) and $\sin(i_p)$ (bottom panel) for Adeona family members (small blue dots) identified by \citet{nesvorny2015_pdsastfam}. The proper elements for (145) Adeona are marked with red triangles, while the proper elements for P/2016 G1 are marked with yellow stars. Vertical dashed lines mark the semimajor axis position of the 8J:3A MMR at 2.7062~AU. } \label{figure:aei_adeona} \end{figure} The Adeona family is notable in that the members of the family that have been spectroscopically classified have C- and Ch-type classifications, while the nearby background population is dominated by S-type asteroids. The largest member of the family, (145) Adeona, has been spectroscopically classified as a C- or Ch-type asteroid \citep{neese2010_taxonomy}, and \changed{has been reported to have} $p_V$$\,=\,$0.061\changed{$\pm$0.010}, $r_e$$\,=\,$63.9\changed{$\pm$0.2}~km, and $\rho$$\,=\,$1.18$\pm$0.34~g~cm$^{-3}$ \citep{mainzer2016_neowise,carry2012_astdensities}. Adeona has been spectroscopically characterized by \citet{busarev2015_asteroidspectra}, who found features indicative of hydrated silicates and hydrated oxides. An unexplained sharp increase in reflectivity between 0.4~$\mu$m and 0.7~$\mu$m was also noted, and interpreted as possibly being indicative of a cloud of sublimed or frozen ice particles, but this interpretation has yet to be confirmed. Other family members \changed{have been reported to have} $\overline{p_V}$$\,=\,$0.060$\pm$0.011 (cf.\ Table~\ref{table:family_associations}), indicating that they are likely to have primitive compositions. The family's C-type members also exhibit evidence of aqueous alteration, and have been judged to be consistent with the breakup of a CM chondrite-like body \citep{mothediniz2005_familyspectroscopy}. \citet{moreno2016_p2016g1} found that P/2016 G1's active behavior is best interpreted as the result of a short duration event about one year prior to perihelion, consistent with an impact which then led to the observed disintegration of the object, making the object a likely disrupted asteroid. Those authors also found an upper limit of $r_e$$\,\sim\,$50~m for any post-disruption fragments.
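The effective radii and size limits quoted throughout this paper are generally related to measured absolute magnitudes and geometric albedos through the standard conversion $D({\rm km})=(1329/\sqrt{p_V})\,10^{-H_V/5}$. A minimal sketch follows; the function name is ours, and the inputs simply reproduce the 288P values quoted in Section~\ref{section:288P}.
\begin{verbatim}
import math

def effective_radius_km(H, p):
    """Effective radius (km) of an asteroid with absolute magnitude H
    and geometric albedo p: D(km) = (1329 / sqrt(p)) * 10**(-H / 5)."""
    return 0.5 * 1329.0 / math.sqrt(p) * 10.0 ** (-0.2 * H)

# e.g., H_V = 17.0 and an assumed p_V = 0.04, as quoted for 288P above
print(effective_radius_km(17.0, 0.04))   # ~1.3 km
\end{verbatim}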
\subsubsection{The Baptistina Family}\label{section:baptistina} We find that disrupted asteroid 354P/LINEAR (formerly designated P/2010 A2) is linked to the Baptistina family \citep[despite being initially suspected of being a member of the Flora family; e.g.,][]{snodgrass2010_p2010a2}. \citet{masiero2012_baptistina} determined the family's age to be between 140 and 320~Myr, depending on the physical properties assumed for its family members, while \citet{carruba2016_ejectionfields} estimated the family's age to be 110$\pm$10~Myr. 354P becomes linked with the family at $\delta_c$$\,=\,$43~m~s$^{-1}$ (Figure~\ref{figure:family_progression_baptistina}), within the optimum cut-off distance ($\delta_{c}$$\,=\,$48~m~s$^{-1}$) determined for the family by \citet{nesvorny2015_astfam_ast4}. \begin{figure}[htb!] \centerline{\includegraphics[width=2.6in]{fig_familyprogression_baptistina.pdf}} \caption{\small Plot of number of asteroids associated with (298) Baptistina as a function of $\delta_c$, where the point at which 354P becomes linked with the family ($\delta_c$$\,=\,$43~m~s$^{-1}$) is marked with a vertical arrow. } \label{figure:family_progression_baptistina} \end{figure} \begin{figure}[htb!] \centerline{\includegraphics[width=2.1in]{fig_aei_baptistina.pdf}} \caption{\small Plots of $a_p$ versus $e_p$ (top panel) and $\sin(i_p)$ (bottom panel) for Baptistina family members (small blue dots) identified by \citet{nesvorny2015_pdsastfam}. The proper elements for (298) Baptistina are marked with red triangles, while the proper elements for 354P are marked with yellow stars. Vertical dashed lines mark the semimajor axis positions of the 11J:3A (left) and 10J:3A (right) MMRs at 2.1885~AU and 2.3321~AU, respectively. } \label{figure:aei_baptistina} \end{figure} The Baptistina family is located in a crowded region of the inner main belt near several other families including the Flora, Vesta, Massalia, and Nysa-Polana families \citep{dykhuis2014_flora}. Notably, the Chicxulub impactor responsible for the Cretaceous/Tertiary (K/T) mass extinction on Earth was linked to the Baptistina family by \citet{bottke2007_baptistina}, although more recent studies revising the age and composition of the family have cast doubt on this claim \citep[e.g.,][]{reddy2009_baptistina,reddy2011_baptistina,carvano2010_baptistina,masiero2012_baptistina}. The family is roughly bound by the 11J:3A MMR on one side and the 10J:3A MMR on the other side (Figure~\ref{figure:aei_baptistina}). The largest member of the family, (298) Baptistina, has been spectroscopically classified as an X- or Xc-type asteroid \citep{lazzaro2004_s3os2}, and \changed{has been reported to have} $p_V$$\,=\,$\changed{0.131$\,\pm\,$0.017} and $r_e$$\,=\,$\changed{10.6$\pm$0.2}~km \citep{mainzer2016_neowise}. Other family members that have been physically characterized have been taxonomically classified as S- and X-type asteroids, and \changed{have been reported to have} $\overline{p_V}$$\,=\,$0.179$\pm$0.056 (cf.\ Table~\ref{table:family_associations}), indicating that they are not likely to have primitive compositions. Initial analysis of 354P in 2010 indicated that the apparent cometary activity was likely to be due to a physical disruption of the asteroid by either an impact or rotational spin-up \citep{jewitt2010_p2010a2,snodgrass2010_p2010a2}, making this object a likely disrupted asteroid.
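For context on the rotational disruption scenarios invoked here and elsewhere in this section, a commonly used benchmark is the critical rotation period of a strengthless, spherical body, $P_{\rm c}=\sqrt{3\pi/(G\rho)}$, below which equatorial material is shed. The sketch below, in which the bulk density values are purely illustrative assumptions rather than measurements of any particular object, shows why spin periods of a few hours are generally considered near-critical:
\begin{verbatim}
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def critical_period_hr(rho):
    """Critical spin period (hr) of a strengthless sphere with bulk
    density rho (kg m^-3): P_c = sqrt(3 * pi / (G * rho))."""
    return math.sqrt(3.0 * math.pi / (G * rho)) / 3600.0

# Illustrative bulk densities spanning primitive to rocky compositions
for rho in (1000.0, 2200.0, 3300.0):
    print(f"rho = {rho:6.0f} kg/m3 -> P_c = {critical_period_hr(rho):.1f} hr")
# Output: ~3.3 hr, ~2.2 hr, and ~1.8 hr, respectively
\end{verbatim}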
A detailed analysis of the 2010 {\it HST} images led \citet{agarwal2013_p2010a2} to conclude that the disruption of 354P was most likely due to rotational destabilization, although measurements showing that the largest remaining fragment has a spin rate ($P_{\rm rot}$$\,=\,$11.36$\pm$0.02~hr) well below the critical spin rate for rotational disruption, together with revised dust modeling results, appear to indicate that an impact disruption was, in fact, the most likely cause of the object's observed activity in 2010 \citep{kim2017_p2010a2_1,kim2017_p2010a2_2}. 354P was the first active asteroid determined to exhibit activity that was not due to sublimation, making it the first recognized disrupted asteroid. \subsubsection{The Behrens Family}\label{section:behrens} We find that active asteroid 311P/PANSTARRS (formerly designated P/2013 P5) is linked to a candidate asteroid family that we designate here as the Behrens family. 311P becomes linked with the family at $\delta_c$$\,=\,$46~m~s$^{-1}$ (Figure~\ref{figure:family_progression_behrens}). While we have not performed a detailed assessment of the likelihood that the Behrens family is real in this work, one line of evidence that the Behrens family may be real comes from the long rotational period of the asteroid (1651) Behrens, estimated to be $P_{\rm rot}$$\,\sim\,$34~hr. One possible explanation for such a long period is angular momentum ``splash'' due to a disruptive collision \citep{cellino1990_angmomentumsplash,takeda2009_collisionalspindown}. If real, the Behrens cluster is likely to be relatively young due to the small size of its largest body. Unfortunately, this hypothesis will be difficult to confirm using the backward integration method \citep[e.g.,][]{novakovic2012_288p,novakovic2012_lorre} because the orbits of most of the family members are unstable over long timescales. \begin{figure}[htb!] \centerline{\includegraphics[width=2.6in]{fig_familyprogression_behrens.pdf}} \caption{\small Plot of number of asteroids associated with (1651) Behrens as a function of $\delta_c$, where the point at which 311P becomes linked with the family ($\delta_c$$\,=\,$46~m~s$^{-1}$) is marked with a vertical arrow. } \label{figure:family_progression_behrens} \end{figure} \begin{figure}[htb!] \centerline{\includegraphics[width=2.1in]{fig_aei_behrens.pdf}} \caption{\small Plots of $a_p$ versus $e_p$ (top panel) and $\sin(i_p)$ (bottom panel) for candidate Behrens family members (small blue dots) identified by HCM analysis performed as part of this work using $\delta_c$$\,=\,$50~m~s$^{-1}$. The proper elements for (1651) Behrens are marked with red triangles, while the proper elements for 311P are marked with yellow stars. Vertical dashed lines mark the semimajor axis position of the 11J:3A MMR at 2.1885~AU. } \label{figure:aei_behrens} \end{figure} The Behrens family is intersected by the 11J:3A MMR with Jupiter. This MMR also passes close to 311P itself (Figure~\ref{figure:aei_behrens}), suggesting that 311P may be unstable over long timescales, consistent with its relatively small $t_{ly}$ value (Table~\ref{table:aaproperties}). As such, its membership in the Behrens family may be considered somewhat uncertain.
The largest member of this candidate family, (1651) Behrens, \changed{has been reported to have} $p_V$$\,=\,$0.318\changed{$\pm$0.052} and $r_e$$\,=\,$4.5\changed{$\pm$0.1}~km \citep{mainzer2016_neowise}, the former of which suggests that it likely does not have a primitive composition, although no formal taxonomic classification is currently available for the object. Other family members that have been physically characterized have been taxonomically classified as Q-, S-, and V-type asteroids, and \changed{have been reported to have} $\overline{p_V}$$\,=\,$0.248$\pm$0.026 (cf.\ Table~\ref{table:family_associations}), indicating that they are not likely to have primitive compositions. At the time of its discovery, 311P exhibited at least six dust tails believed to have been produced by multiple impulsive mass shedding events caused by rapid rotation of the nucleus near its critical limit \citep{jewitt2013_311p,jewitt2015_311p}, making the object a probable disrupted asteroid. Attempts to confirm that the nucleus of 311P has a rapid rotation rate have thus far been unsuccessful, however, with some observations even suggesting that it could be rotating unusually slowly \citep[e.g.,][]{hainaut2014_311p}. The nucleus has been estimated to have $r_e$$\,=\,$0.20$\pm$0.02~km \citep{jewitt2015_311p}, and has been found to have broadband colors consistent with being an S-type asteroid \citep{hainaut2014_311p}. \subsubsection{The Gibbs Cluster}\label{section:331p} Active asteroid 331P/Gibbs (formerly designated P/2012 F5) has been previously determined to be associated with a family that was designated the Gibbs cluster and estimated to be just 1.5$\pm$0.1~Myr old \citep{novakovic2014_331p}. No significant two- or three-body MMRs intersect the Gibbs cluster region (Figure~\ref{figure:aei_gibbs}), making both 331P and the overall cluster relatively dynamically stable. The parent body of the cluster has been estimated to be $\sim$10~km in diameter, where \citet{novakovic2014_331p} concluded that the estimated mass ratio between the largest fragment and the parent body indicates that the disruption that created the cluster was likely intermediate between a catastrophic disruption and a cratering event, \changed{assuming that the cluster was formed by an impact event. A more recent analysis, however, including observations to determine the rotational periods and sizes of cluster members, suggests that the cluster may instead have been formed by rotational fission \citep{pravec2017_astclusters}.} \begin{figure}[htb!] \centerline{\includegraphics[width=2.6in]{fig_familyprogression_gibbs.pdf}} \caption{\small Plot of number of asteroids associated with 331P/Gibbs as a function of $\delta_c$. } \label{figure:family_progression_gibbs} \end{figure} \begin{figure}[htb!] \centerline{\includegraphics[width=2.1in]{fig_aei_gibbs.pdf}} \caption{\small Plots of $a_p$ versus $e_p$ (top panel) and $\sin(i_p)$ (bottom panel) for 331P/Gibbs family members (small blue dots) identified by \citet{novakovic2014_331p}. The proper elements for 331P are marked with yellow stars. } \label{figure:aei_gibbs} \end{figure} The physical properties of this cluster were studied by \citet{novakovic2014_331p}, who noted that two members for which SDSS observations were available appear to be Q-type objects and also conducted a small-scale search for other active cluster members (none were found). 331P's nucleus has been estimated to have $r_e$$\,=\,$0.88$\pm$0.01~km \citep{drahus2015_331p} but has not yet been taxonomically classified.
Dust modeling has suggested that the long, thin dust trail observed for 331P was most likely produced by an impulsive emission event, such as an impact \citep{stevenson2012_331p,moreno2012_331p}, or possibly by mass ejection due to rotational destabilization of the nucleus, given that its rotation period was found to be $P_{\rm rot}$$\,=\,$3.24$\pm$0.01~hr \citep{drahus2015_331p}, making the object a likely disrupted asteroid. \subsubsection{The Hygiea Family}\label{section:hygiea} We confirm the finding of \citet{sheppard2015_sy178} that active asteroid (62412) 2000 SY$_{178}$ is linked to the Hygiea family. (62412) becomes linked with the Hygiea family at $\delta_c$$\,=\,$37~m~s$^{-1}$ (Figure~\ref{figure:family_progression_hygiea}), well within the optimum cut-off distance ($\delta_{c}$$\,=\,$60~m~s$^{-1}$) determined for the family by \citet{nesvorny2015_astfam_ast4}. The Hygiea family has a size-frequency distribution consistent with being the result of the catastrophic disruption of a monolithic parent body \citep{durda2007_impactsfds,benavidez2012_sfds}, and has been determined to be 3.2$\pm$0.4~Gyr old \citep{carruba2014_hygiea}. \begin{figure}[htb!] \centerline{\includegraphics[width=2.6in]{fig_familyprogression_hygiea.pdf}} \caption{\small Plot of number of asteroids associated with (10) Hygiea as a function of $\delta_c$, where the point at which (62412) becomes linked with the family ($\delta_c$$\,=\,$37~m~s$^{-1}$) is marked with a vertical arrow. } \label{figure:family_progression_hygiea} \end{figure} \begin{figure}[htb!] \centerline{\includegraphics[width=2.1in]{fig_aei_hygiea.pdf}} \caption{\small Plots of $a_p$ versus $e_p$ (top panel) and $\sin(i_p)$ (bottom panel) for Hygiea family members (small blue dots) identified by \citet{nesvorny2015_pdsastfam}. The proper elements for (10) Hygiea are marked with red triangles, while the proper elements for (62412) are marked with yellow stars. Vertical dashed lines mark the semimajor axis positions of the 9J:4A (left) and 2J:1A (right) MMRs at 3.0307~AU and 3.2783~AU, respectively. } \label{figure:aei_hygiea} \end{figure} The Hygiea family is roughly bound by the 9J:4A MMR at 3.0307~AU on one side and the 2J:1A MMR at 3.2783~AU on the other side (Figure~\ref{figure:aei_hygiea}). \citet{carruba2013_hygiea} and \citet{carruba2014_hygiea} found that the region in which it is located likely contains a significant component of interlopers from the nearby Themis and Veritas families, which likely contribute low-albedo asteroids, and the Eos family, the likely origin of the few high-albedo asteroids found in the region. It also crosses two other smaller families (associated with (5340) Burton and (15755) 1992 ET$_{5}$) in proper element space. Besides the numerous two- and three-body resonances intersecting the region, Hygiea family members are also perturbed by numerous secular resonances, the Yarkovsky effect, and, interestingly, other massive asteroids possibly including Hygiea itself \citep{carruba2014_hygiea}. The largest member of the family, (10) Hygiea, has been spectroscopically classified as a C-type asteroid \citep{mothediniz2001_hygiea,neese2010_taxonomy}, and \changed{has been reported to have} $p_V$$\,=\,$0.072\changed{$\pm$0.002}, $r_e$$\,=\,$203.6\changed{$\pm$3.4}~km, and $\rho$$\,=\,$2.19$\pm$0.42~g~cm$^{-3}$ \citep{mainzer2016_neowise,carry2012_astdensities}. The asteroid's spectrum includes an absorption feature centered at 3.05$\pm$0.01~$\mu$m that has been classified as ``Ceres-like'' by \citet{takir2012_3micron}.
The corresponding feature on Ceres may be due to irradiated organic material and crystalline water ice, or perhaps iron-rich clays \citep{vernazza2005_ceresvesta,rivkin2006_ceres}. Rotationally resolved spectroscopy of Hygiea has also revealed surface heterogeneity suspected of being due to heating by significant impact events \citep{busarev2011_spectralheterogeneity}. Other family members have been taxonomically classified as B-, C-, D-, S-, V-, and X-type asteroids (some of which may be interlopers) \citep{mothediniz2001_hygiea,carruba2013_hygiea,carruba2014_hygiea}, and \changed{have been reported to have} $\overline{p_V}$$\,=\,$0.070$\pm$0.018 (cf.\ Table~\ref{table:family_associations}), indicating that most members are likely to have primitive compositions. Dust emission observed from asteroid (62412) 2000 SY$_{178}$ in 2014 is considered likely to have been driven by rotational disruption, given the determination of a relatively rapid rotational period for the object of $P_{\rm rot}$$\,\sim\,$3.33~hr \citep{sheppard2015_sy178}, making it a likely disrupted asteroid. The object's nucleus has been estimated to have an effective radius of $r_e$$\,=\,$3.9$\pm$0.3~km with a minimum axis ratio of $a/b$$\,\geqslant\,$1.51, where measured colors and low albedo suggest that it is a C-type asteroid \citep{sheppard2015_sy178}. \subsection{Active Asteroids Without Associated Families}\label{section:nofamilies} We do not find any families associated with active asteroids 233P/La Sagra, 259P/Garradd, 348P/PANSTARRS, (1) Ceres, (596) Scheila, and (493) Griseldis. Of these objects, 259P and Ceres have exhibited likely sublimation-driven activity, Scheila's activity was caused by an impact disruption, and the sources of activity exhibited by 233P, 348P, and Griseldis have yet to be determined. 259P/Garradd was first observed to be active in 2008 \citep{jewitt2009_259p}, and recently confirmed to exhibit recurrent activity \citep{hsieh2017_259p}, strongly suggesting that its activity is sublimation-driven. The object's nucleus has $r_e$$\,=\,$0.30$\pm$0.02~km \citep[assuming $p_R$$\,=\,$0.05;][]{maclennan2012_259p}. Dynamically, 259P has been determined to be unstable on a timescale of $\sim$20$-$30~Myr, indicating that it is unlikely to be native to its current orbit, and may have instead originated elsewhere in the main belt or possibly as a JFC \citep{jewitt2009_259p,hsieh2016_tisserand}. 233P was discovered to be active by the WISE spacecraft \citep{mainzer2010_233p}. Aside from a small number of ground-based observations confirming the presence of activity for the discovery announcement, no follow-up observations have been published to date. As such, little is known about the object's physical properties while either active or inactive, and no assessment about the likely cause of its activity is currently available. Its relatively high eccentricity ($e$$\,=\,$0.409), perihelion distance ($q$$\,=\,$1.795~AU) close to the aphelion of Mars ($Q_{\rm Mars}$$\,=\,$1.666~AU), and $T_J$ value ($T_J$$\,=\,$3.08) in the indistinct dynamical boundary region between asteroids and comets \citep[cf.][]{hsieh2016_tisserand} suggest, however, that it may simply be a Jupiter-family comet (JFC) that has briefly taken on main-belt-like orbital elements.
This conclusion is supported by the object's small $t_{ly}$ (Table~\ref{table:aaproperties}) as well as the fact that we find that several synthetic JFCs studied by \citet{brasser2013_oortsdformation} take on 233P-like orbital elements at some point during their evolution. We plot the orbital evolution of an example of such an object in Figure~\ref{figure:233p_evolution}. \begin{figure*}[tbp] \centerline{\includegraphics[width=5in]{fig_evolution_162.pdf}} \caption{\small Plots of semimajor axis in AU (top panel), eccentricity (middle panel), and inclination in degrees (bottom panel) as a function of time (small grey dots) for a synthetic Jupiter-family comet from \citet{brasser2013_oortsdformation}. Regions shaded in light grey indicate where orbital elements are similar to those of 233P, specifically where $a$$\,=\,$$a_{\rm 233P}\pm0.1$~AU, $e$$\,=\,$$e_{\rm 233P}\pm0.05$, and $i$$\,=\,$$i_{\rm 233P}\pm5^{\circ}$, where $a_{\rm 233P}$, $e_{\rm 233P}$, and $i_{\rm 233P}$ are the semimajor axis in AU, eccentricity, and inclination in degrees of 233P, respectively. Red dots indicate where the orbital elements of the synthetic comet simultaneously meet all of these criteria for being similar to 233P's orbital elements. } \label{figure:233p_evolution} \end{figure*} 348P/PANSTARRS was discovered in 2017 \citep{wainscoat2017_p2017a2}. It has a small $t_{ly}$ value (Table~\ref{table:aaproperties}), indicating that it is dynamically unstable, and its semimajor axis ($a$$\,=\,$3.166~AU) is also close to the 19J:9A MMR at 3.1623~AU. Like 233P, it has a relatively high eccentricity ($e$$\,=\,$0.301) for a main-belt asteroid and also has $T_J$$\,=\,$3.062, placing it within the dynamical boundary region between asteroids and comets \citep[cf.][]{hsieh2016_tisserand}. We also find that two synthetic JFCs studied by \citet{brasser2013_oortsdformation} briefly take on 348P-like orbital elements during their evolution. As such, we suspect that it may also be a JFC that has temporarily taken on main-belt-like orbital elements. For completeness, we include dwarf planet (1) Ceres as an active asteroid given that water vapor has been detected from the body by the {\it Herschel Space Observatory} \citep{kuppers2014_ceres}. Of course, due to its much larger size \citep[$r_e$$\,\sim\,$470~km;][]{carry2008_ceres,park2016_ceres} relative to the other objects we are considering here, the physical regime occupied by the object is certainly very different from those occupied by other active asteroids. No family has been identified for Ceres to date \citep[e.g.,][]{milani2014_astfamilies,rivkin2014_ceresfamily}, although \citet{carruba2016_ceresfamily} proposed that Ceres family members might simply be highly dispersed and therefore undetectable by standard family identification techniques. Scheila was observed to be active in 2010, exhibiting an unusual three-tailed morphology \citep{jewitt2011_scheila,bodewits2011_scheila}. Scheila's activity was most likely due to an oblique impact which generated an impact cone and down-range plume of impact ejecta \citep{ishiguro2011_scheila2}. The asteroid has been classified as a T-type asteroid \citep{neese2010_taxonomy}, and \changed{has been reported to have} $p_V$$\,=\,$0.040\changed{$\pm$0.001} and $r_e$$\,=\,$79.9\changed{$\pm$0.6}~km \citep{mainzer2016_neowise}. 
Dust emission observed from (493) Griseldis in 2015 was likewise suspected of being impact-generated, due to the short duration of the observed activity and the morphology of the detected extended dust feature \citep{tholen2015_griseldis}, although a detailed analysis of its activity has yet to be published. Griseldis has been classified as a P-type asteroid \citep{neese2010_taxonomy} and \changed{has been reported to have} \changed{$p_V$$\,=\,$0.081$\pm$0.009 and} $r_e$$\,=\,$20.8\changed{$\pm$0.1}~km \citep{mainzer2016_neowise}. We do not find any families associated with Scheila or Griseldis. \subsection{Other Asteroid Families and Clusters}\label{section:family_other} There are some young asteroid families with which no known active asteroids are currently associated but that have properties suggesting that they could be found in the future to contain active asteroids. The Veritas family has been determined to be 8.3$\pm$0.5~Myr old \citep{nesvorny2003_dustbands} and is dominated by C-type asteroids \citep{mothediniz2005_familyspectroscopy}. No MBCs have been associated with this family to date, although its young age and primitive composition strongly suggest that it has the potential to harbor them \citep[cf.][]{hsieh2009_htp}. Another interesting group of asteroids is the Lorre cluster, named for (5438) Lorre, which was determined to be 1.9$\pm$0.3~Myr old by \citet{novakovic2012_lorre}. Lorre has been classified as a C-type asteroid, and is the only object in the 19-member cluster to have had its spectral class determined. However, the average \changed{reported} albedo of the ten cluster members for which albedos have been measured is $\overline{p_V}$$\,=\,$\changed{0.044$\pm$0.013}, consistent with these other members also being C-type objects. No MBCs have yet been identified among the members of the cluster, although due to the cluster's young age and spectral type of its largest member, \citet{novakovic2012_lorre} hypothesized that it could be a potential MBC reservoir. For young primitive asteroid families in which no MBCs have yet been found, the lack of currently known MBCs could be due to the fact that not all members of these families have been observed deeply enough or at the right times to reveal faint, transient cometary activity \citep[cf.][]{hsieh2009_htp}. Impact-triggered activation of MBC activity also depends on the local collision rate, and so the rate of activations may simply be lower in certain families, particularly those at higher inclinations \citep[e.g.,][]{farinella1992_astcollisionrates}. In addition to the young asteroid families discussed in this section and earlier in this paper, a number of other young asteroid families or clusters, including the Datura, Brugmansia, Emilkowalski, Hobson, Iochroma, Irvine, Kap'bos, Lucascavin, Nicandra, Rampo, and Schulhof families, all of which have ages of $<$$\,$2~Myr, have been identified \citep{nesvorny2015_astfam_ast4,pravec2017_astclusters}. The \changed{reported} albedos of most of the central bodies of these families for which albedos have been measured are large \citep[$p$$\,>\,$0.1;][]{mainzer2016_neowise}, however, suggesting that these families do not have particularly primitive compositions, and so are unlikely to contain MBCs. One exception is (66583) Nicandra, which \changed{has been reported to have} $p_V$$\,=\,$0.049\changed{$\pm$0.007} \citep{mainzer2016_neowise}, suggesting that the members of its associated family could be primitive and therefore potentially icy.
\section{Discussion}\label{section:discussion} \subsection{Overall Results}\label{section:discussionoverview} As can be seen from Table~\ref{table:family_associations}, nearly all of the known active asteroids or active asteroid fragments we considered appear to be associated with at least one asteroid family. Of those objects found not to have family associations, 233P, 259P, and 348P have been dynamically determined to be potential interlopers at their present locations (Section~\ref{section:nofamilies}); P/2013 R3-A has a corresponding fragment (P/2013 R3-B) that is associated with a family, where both fragments are closely associated with a significant MMR (9J:4A) (Section~\ref{section:mandragora}); Ceres has exhibited volatile outgassing, but is a very large object and so occupies a very different physical regime from other, much smaller suspected MBCs; and Griseldis and Scheila are also relatively large objects that are suspected of undergoing impact disruptions (Section~\ref{section:nofamilies}). Of the 384\,337 asteroids used for the analysis performed by \citet{nesvorny2015_astfam_ast4} (including Hungaria, Hilda, and Jovian Trojan asteroids), $\sim$143\,000 were linked to 122 families, corresponding to an average family association rate of $\sim$37\%. This number notably omits members of 19 groupings designated by those authors as candidate families (of which, for example, the cluster associated with 288P is one) due to the groupings' uncertain statistical significance at the time, and other families may have yet to be identified, \changed{and so represents a lower limit to the true combined family or candidate family association rate for inner solar system asteroids.} Nonetheless, even including outliers in physical size like Ceres, Griseldis, and Scheila, and possible interlopers like 259P, 233P, and 348P, we find a family or candidate family association rate for active asteroids and active asteroid fragments of 16 out of 23, or $\sim$70\%, significantly higher than the currently known ``background'' family association rate. Treating the fragments of P/2013 R3 and P/2016 J1 as non-independent objects, we find a family or candidate family association rate for MBCs of 10 out of 12. Using 37\% as the average likelihood of a random asteroid being associated with a family, there is a 0.1\% probability of this family association rate occurring by pure chance. Meanwhile, for disrupted asteroids, there is a $\sim$6\% probability of seeing the observed family association rate (5 out of 7) by pure chance. Of course, if the aforementioned physical and dynamical outliers are removed from consideration, we would then find an even higher MBC family or candidate family association rate, making it even less likely that the observed family association rate occurred by chance. While in most cases physical information is available for only a small fraction of the members of each family associated with an active asteroid, we find that all asteroid families associated with MBCs contain at least some primitive objects, i.e., objects taxonomically classified as C-complex or even D-type asteroids (Table~\ref{table:family_associations}), where the few MBCs that have been directly taxonomically classified are also all found to have C-complex spectra.
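For reference, the by-chance probabilities quoted above (and the alternative baseline-rate cases considered below) are consistent with simple binomial point probabilities; the following minimal sketch shows one way to reproduce them, though the exact computation originally used may differ:
\begin{verbatim}
from math import comb

def binom_point(n, k, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1.0 - p)**(n - k)

# MBCs: 10 of 12 objects with family associations, 37% baseline rate
print(binom_point(12, 10, 0.37))   # ~0.001, i.e., ~0.1%
# Disrupted asteroids: 5 of 7 objects with family associations
print(binom_point(7, 5, 0.37))     # ~0.06, i.e., ~6%
# Sensitivity to the assumed baseline rate (cf. 50% and 90% cases below)
print(binom_point(12, 10, 0.50))   # ~0.016
print(binom_point(12, 10, 0.90))   # ~0.23
\end{verbatim}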
This taxonomic result is likely related to the higher prevalence of primitive-type asteroids in the outer main belt \citep[cf.][]{demeo2015_astbeltcomposition_ast4} where most MBCs are found \citep[cf.][]{hsieh2015_ps1mbcs}, but may also be significant on its own, given that a MBC in the middle main belt (P/2015 X6) is also associated with a family containing C-type asteroids (the Aeolia family). Meanwhile, the taxonomic types of members of families associated with disrupted asteroids are more diverse, including both primitive C-complex and D-type asteroids as well as less primitive Q-, S-, and V-type asteroids (Table~\ref{table:family_associations}). With just 23 active asteroids and active asteroid fragments considered, the statistical significance of the analysis presented here is certainly limited by our small sample size. \changed{The average family association rate for asteroids in the inner solar system is likely also dependent on the specific region being considered and possibly also the taxonomic types of the objects being considered. For example, it might be more appropriate for us to interpret our MBC family association rate using the average family association rate for primitive-type asteroids in the outer main belt, which is likely to be different from the family association rate for the entire population of main belt, Hungaria, Hilda, and Jovian Trojan asteroids considered by \citet{nesvorny2015_astfam_ast4}. Unfortunately, we lack the orbital element distribution of the specific catalogue of asteroids used by \citet{nesvorny2015_astfam_ast4} as well as the taxonomic classifications for the vast majority of small asteroids that are necessary to derive average family association rates taking those properties into account. We also note that if every main-belt asteroid were subjected to the same dynamical scrutiny as each active asteroid, more might be found to be associated with their own candidate families. Addressing these various uncertainties in the average asteroid family association rate used to evaluate the significance of our observed MBC family association rate is well beyond the scope of this work. For reference though, we note that using 50\% as the average asteroid family association rate, there is a 1.6\% probability of our observed MBC family association rate occurring by chance, and using 90\% as the average asteroid family association rate, there is a 23\% probability of our observed MBC family association rate occurring by chance. Meanwhile, there are 16\% and 12\% probabilities of seeing our observed disrupted asteroid family association rate given average asteroid family association likelihoods of 50\% and 90\%, respectively.} \subsection{Implications for MBCs}\label{section:implications_mbcs} \subsubsection{Finding and Characterizing New MBCs}\label{section:mbc_dicovery} From the perspective of finding new MBCs, because asteroid family members are thought to have similar compositions \citep[e.g.,][]{ivezic2002_astfamilies,vernazza2006_karin}, it is logical to expect that asteroid families that contain a known, presumably ice-bearing MBC could contain other icy objects. It is this hypothesis that led \citet{hsieh2009_htp} to search members of the Themis family (already known to contain 133P) for new MBCs, ultimately leading to the discovery of activity for asteroid (118401) LINEAR, now also designated as 176P.
Meanwhile, dynamical studies have also shown that intra-family collision rates may be elevated over local background rates, particularly for young families \citep[e.g.,][]{farinella1992_astcollisionrates,delloro2002_newfamilycollisionrates}, suggesting that more potentially activity-triggering impacts on icy objects might occur in asteroid families, further increasing the chances of producing active MBCs. In terms of characterizing both new and known MBCs, many MBC nuclei have been found to be quite small, with effective radii of $r_e$$\,\lesssim\,$1~km (cf.\ Table~\ref{table:aaproperties}). Such objects are difficult to physically characterize as they are extremely faint \citep[e.g., $m_R$$\,\sim\,$24-26~mag;][]{maclennan2012_259p,hsieh2014_324p} when inactive far from the Sun, making reliable photometry, colors, or spectroscopy difficult to obtain at those times. Meanwhile, their surface properties also cannot be measured when they are closer to the Sun and therefore brighter, as this is where they become active and thus become obscured by coma dust. In these cases, identification of an associated asteroid family can allow for reasonable guesses of a MBC's taxonomic type and albedo by proxy using corresponding measurements of other asteroids in the same family. The Themis family provides an illustrative example of this application of establishing links between MBCs and specific asteroid families, as it contains two MBCs (133P and 176P) that have been individually characterized as B- or F-type asteroids and \changed{have been reported to have} albedos of $p_R$$\,=\,$0.05\changed{$\pm$0.02} and $p_R$$\,=\,$0.06\changed{$\pm$0.02} \citep{hsieh2009_albedos}, where the family as a whole has been found to contain mostly C-complex asteroids (which include B- and F-type asteroids) and \changed{has been reported to have} an average albedo of $\overline{p_V}$$\,=\,$0.068$\pm$0.017 (Section~\ref{section:themis_beagle}; Table~\ref{table:family_associations}). This technique for inferring a MBC's taxonomic type is obviously subject to uncertainties due to possible differentiation of the family parent body, or the possibility that the MBC is an interloper or a taxonomically dissimilar member of the background population or an overlapping family. In the absence of other direct surface property measurements of MBC nuclei, fellow family members may nonetheless be able to provide useful information about the likely properties of those MBC nuclei from their dynamical associations alone. It is therefore interesting to note that, although only three MBC nuclei have been individually taxonomically classified (as B-, C-, or F-type asteroids) and two \changed{have been reported to have} albedos ($p_R$$\,\sim\,$0.05), all MBCs with family associations belong to families containing primitive-type asteroids (cf.\ Section~\ref{section:discussionoverview}) that also have relatively low average \changed{reported} albedos ($\overline{p_V}$$\,\lesssim\,$0.10). Meanwhile, members of families associated with disrupted asteroids span a wider range of taxonomic types and \changed{reported} albedos (0.06$\,<\,$$\overline{p_V}$$\,<\,$0.25).
These results are consistent with MBC activity being correlated with composition (i.e., whether an object contains primitive and therefore potentially icy material) and processes that produce activity in disrupted asteroids being less sensitive to composition (although they may still have some dependence, e.g., to the extent that material density can affect an object's susceptibility to rotational disruption; Section~\ref{section:implications_other}). \subsubsection{MBC Formation}\label{section:mbc_origins} The link between MBCs and very young asteroid families (e.g., 133P and the Beagle family; Section~\ref{section:themis_beagle}) is particularly interesting considering thermal models and impact rate calculations \citep[e.g.,][]{schorghofer2008_mbaice,prialnik2009_mbaice,hsieh2009_htp} showing that ice may become depleted from the surface of a main-belt asteroid over Gyr timescales to the point at which small ($\sim$m-scale) impactors are unable to penetrate deeply enough to trigger sublimation-driven activity \citep[cf.][]{hsieh2004_133p,capria2012_mbcactivity,haghighipour2016_mbcimpacts}. However, if most MBC nuclei were produced in more recent fragmentation events (e.g., $\lesssim\,$10~Myr ago), they may have much younger effective ages than their dynamical stability timescales would otherwise suggest, and could possess more ice at shallower depths than expected. In Figure~\ref{figure:family-formation-mbcs}, we illustrate a sequence of physical processes \citep[some of which have been previously noted in the context of MBCs; e.g.,][]{nesvorny2008_beagle,hsieh2009_htp,capria2012_mbcactivity,haghighipour2016_mbcimpacts} that could lead to active MBCs in young asteroid families. We begin with the premise that large icy asteroids can preserve ice over Gyr timescales in the main asteroid belt \citep[cf.][]{schorghofer2008_mbaice,prialnik2009_mbaice}, except that, by now, that ice has likely receded too deep below the surface to plausibly produce sublimation-driven activity (Figure~\ref{figure:family-formation-mbcs}a). However, if one of these objects is catastrophically disrupted \changed{\citep[either by an impact event, as is commonly assumed, or a rotational fission event, as is suspected for some families by][]{pravec2017_astclusters}} (Figure~\ref{figure:family-formation-mbcs}b), after the sublimation and depletion of ice directly exposed by the initial disruption, remaining subsurface ice on the resulting fragments (i.e., family members) might then be found at much shallower depths than before (Figure~\ref{figure:family-formation-mbcs}c). At these depths, that ice would then be more easily excavated by relatively small (and therefore relatively abundant) impactors. For asteroids formed in recent ($\lesssim$10~Myr) fragmentation events where ice has not yet had time to recede again to significant depths, such small-scale disruptions should be able to trigger the sublimation-driven activity observed today on MBCs. Interestingly, with some exceptions, most MBC nuclei are small (km-scale or smaller; Table~\ref{table:aaproperties}). Smaller objects are collisionally disrupted on statistically shorter timescales than larger objects \citep{cheng2004_collisionalevolution,bottke2005_collisionalevolution}, meaning that currently existing smaller objects are more likely to have been recently formed than larger objects. Thus, even those MBCs for which young family associations have not yet been identified may also have formed in recent disruptions of larger parent bodies.
The current lack of identified young family associations for these MBCs could simply be due to other family members being too faint to have been discovered yet by current asteroid surveys. As surveys improve and find ever fainter asteroids, more young families will likely be discovered, and some of these will likely be associated with MBCs that currently lack identified young family associations. We emphasize, however, that we do not expect young family associations to eventually be found for all MBCs. Some MBCs may have formed in recent disruptions from which they are the only remaining fragments of appreciable size, or where family members have rapidly dispersed and blended beyond recognition into the background asteroid population due to chaotic dynamical conditions. Some MBCs themselves may have been destabilized by chaotic dynamical conditions near their points of origin and are now interlopers at their present-day locations, far from their original families. Finally, for certain orbital obliquities and latitudes of subsurface ice reservoirs, thermal modeling indicates that shallow subsurface ice could remain preserved on an outer main belt asteroid over even Gyr timescales \citep{schorghofer2008_mbaice,schorghofer2016_asteroidice}. Therefore, in these cases, a recent catastrophic disruption would not be required at all for ice to be accessible to excavation by small impactors. \begin{figure}[tbp] \includegraphics[width=2.5in]{fig_asteroid-family-mbc-formation2.pdf} \caption{Illustration of processes that could produce MBCs with shallow subsurface ice from the catastrophic disruption of parent bodies with initially more deeply buried ice: (a) a long-lived icy main belt object has preserved ice in its interior but has had its outer layer largely devolatilized from solar heating and impact gardening over $\sim$Gyr timescales; (b) a large impactor \changed{(or rotational instability)} catastrophically disrupts this body, exposing its icy interior and leading to sublimation-driven outbursts from ice directly exposed by the disruption; (c) mantling on new family members quenches activity triggered directly by the disruption of the family's parent body, although ice still remains relatively close to the surface; and (d) a small (and therefore abundant and relatively frequently encountered) impactor, which would otherwise be unable to penetrate the inert surface layer of an older ice-bearing asteroid, causes a small-scale disruption of the young family member's relatively fresh surface, excavating shallow subsurface ice and producing a localized active site from which sublimation-driven dust emission can occur. } \label{figure:family-formation-mbcs} \end{figure} As ongoing surveys continue to discover more asteroids, and particularly as future surveys discover smaller and fainter asteroids than are detectable now, continued searches for tightly clustered young families should be performed \citep[e.g.,][]{milani2014_astfamilies}, \changed{perhaps using osculating or mean elements rather than proper elements for detecting very young clusters \citep[e.g.,][]{nesvorny2006_datura,nesvorny2006_youngfamilies,pravec2009_asteroidpairs,pravec2017_astclusters,rosaev2017_hobson}}.
Furthermore, as new young families are identified --- especially ones associated with known MBCs, believed to contain primitive asteroids, or found in the outer main belt (i.e., $a$ between the 5J:2A MMR at 2.8252~AU and the 2J:1A MMR at 3.2783~AU) --- targeted observations or at least targeted close examination of survey data of members of these young families should be conducted to search for new MBCs. This schematic model also suggests that thermal modeling work on volatile preservation in asteroids should take into account the fact that many icy asteroids found in the main belt today could actually have originated from the fragmentation of larger icy parent bodies some time in the relatively recent past. In these cases, the timescale over which volatile depletion from solar heating is expected to take place is not the age of the solar system, but rather the age of an object's associated asteroid family. As such, we suggest that thermal models computing ice retreat depths over Gyr timescales \citep[e.g.,][]{schorghofer2008_mbaice,prialnik2009_mbaice} may overestimate depths to ice at the present day, and that ice in primitive asteroids might be found at much shallower depths (and therefore be more accessible to activity-triggering impacts) than these models might indicate. \subsection{Implications for Disrupted Asteroids}\label{section:implications_other} Given that impact events depend more on an object's environment than on its composition, we might not expect that the presence of an impact-disrupted active asteroid in a family would necessarily indicate that the family could contain more. However, impact disruptions could be more common in families in general. Intra-family collisions (which have lower velocities and so are more likely to cause non-catastrophic disruptions) may occur more frequently than collisions with the local background population, particularly for families with low inclinations, since their members share similar orbits \citep[cf.][]{farinella1992_astcollisionrates}. In young families whose members still have very similar orbits, non-catastrophic impact disruptions could be even more frequent. For rotationally disrupted asteroids, the relationship between incidence rate and membership in families is more uncertain. Given the dependence of rotational destabilization on the composition and internal structure (particularly as they relate to density) \citep[e.g.,][]{hirabayashi2014_p2013r3,hirabayashi2014_biaxialellipsoids}, a positive correlation between membership in a particular family and the prevalence of rotational disruptions could arise if family members share similar compositions or internal structures that make them more susceptible to rotational destabilization than other non-family asteroids, particularly if that family has already been found to contain at least one other rotationally disrupted asteroid. What is unclear is whether the variation in composition or internal structure between members of a particular family and non-family asteroids, particularly if they are of similar taxonomic type, is large enough to produce significant differences in disruption rates. It is also unclear whether the degree of commonality in internal compositions or structures among family members plays a more significant role than other factors that also affect the likelihood of rotational disruption.
For example, the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect, which may be responsible for spinning up asteroids beyond their critical limits \citep{jewitt2014_p2013r3}, is known to depend on the thermal properties of an asteroid, particularly thermal inertia, which could be similar for asteroids belonging to the same family, but it also strongly depends on asteroid size and shape, which are not particularly related to family membership \citep[cf.][]{rubincam2000_yorp,scheeres2007_yorp,golubov2016_yarkovsky}. We conclude for now that the relationships between family membership and the probability of an asteroid experiencing an impact or rotational disruption are probably weaker than the relationship between family membership and the rate of occurrence of MBCs, especially for young asteroid families largely consisting of primitive asteroids. Nonetheless, those relationships may not be entirely negligible and are likely worth further investigation in the future using both theoretical approaches (e.g., calculation or modeling of expected impact and rotational disruption rates within and outside families in different regions of the asteroid belt) and observational approaches (e.g., continuing to note whether newly discovered disrupted asteroids are associated with young families and tallying relative occurrence rates of disruptive events within and outside those families). \subsection{Other Considerations and Challenges}\label{section:futurework} Two active asteroids (311P and 354P), both considered disrupted asteroids, were initially identified as members of the Flora family \citep{jewitt2010_p2010a2,jewitt2013_311p,snodgrass2010_p2010a2}, although both of these objects have since been determined to belong to other nearby families (Baptistina and Behrens, respectively; Sections~\ref{section:baptistina} and \ref{section:behrens}). This confusion points to an issue that will assuredly continue to arise in the future as more active asteroids are found in dense regions of the main asteroid belt in orbital element space. In the case of the Flora family, its large size and diffuse dynamical structure necessitate care in determining its true extent and membership, given the presence of several nearby neighboring families in orbital element space (i.e., the Vesta, Baptistina, Massalia, and Nysa-Polana families). Overlapping dynamical families, which can arise when parent bodies happen to share similar orbital elements, particularly when positioned near chaos-inducing MMRs, can complicate attempts to infer the composition of other family members from sublimation-driven activity observed from apparently linked active asteroids. In studies of particularly dense regions of the asteroid belt, color and albedo information can be used to help to separate true members of a family from background objects \citep[e.g.,][]{reddy2011_baptistina,dykhuis2014_flora}, though the possibility of taxonomic diversity within a family means that this technique needs to be used with care \citep[e.g.,][]{oszkiewicz2015_flora}. This approach will also be unhelpful for distinguishing family members in regions where background objects and family members are compositionally similar, or for which compositional information is largely unavailable \citep[cf.][]{novakovic2012_288p}.
In these cases, more careful dynamical analyses, such as selective backward integration \citep[SBIM;][]{novakovic2012_288p} to identify clusterings of secular angles in the past for subsets of family member candidates, will be useful for identifying true family members. Lastly, as seen throughout Section~\ref{section:results}, many families that are found to contain potentially ice-bearing objects are located near or are intersected by MMRs. In cases where a family either contains a known MBC or at least primitive-type asteroids and is significantly affected by one or more MMRs (e.g., the Adeona family and the 8J:3A MMR, and the Themis family and the 2J:1A MMR; Sections~\ref{section:adeona} and \ref{section:themis_beagle}), the nearby or intersecting MMRs could provide a means for dispersing ice-bearing family asteroids throughout the asteroid belt and beyond. Such a process was proposed for the origin of 238P, which \citet{haghighipour2009_mbcorigins} suggested was originally a member of the Themis family that had its eccentricity driven up by that family's close proximity to the 2J:1A MMR. Similarly, \citet{jewitt2009_259p} suggested, based on its asteroid-like $T_J$ value, that 259P could have originated elsewhere in the asteroid belt. In this light, future dynamical studies to ascertain whether MBCs without currently recognized links to young families may have originated elsewhere in the asteroid belt will be useful for ensuring proper interpretation of the spatial distribution of the population of icy bodies in the asteroid belt and ascertaining the degree to which they can be used to trace the distribution of ice in the primordial solar system \citep[cf.][]{hsieh2014_mbcsiausproc}. \section{Summary}\label{section:summary} In this work, we present the following key findings: \begin{enumerate} \item{We report newly identified family associations between active asteroids 238P/Read and the Gorchakov family, 311P/PANSTARRS and the Behrens family, 324P/La Sagra and the Alauda family, 354P/LINEAR and the Baptistina family, P/2013 R3-B (Catalina-PANSTARRS) and the Mandragora family, P/2015 X6 (PANSTARRS) and the Aeolia family, P/2016 G1 (PANSTARRS) and the Adeona family, and P/2016 J1-A/B (PANSTARRS) and the Theobalda family. The Gorchakov and Behrens families are candidate families identified by this work and will require further investigation to confirm that they are real families. } \item{We find that 10 out of 12 MBCs and 5 out of 7 disrupted asteroids are linked with known or candidate families, rates that have $\sim$0.1\% and $\sim$6\% probabilities, respectively, of occurring by chance, given an overall average family association rate of 37\% for asteroids in the inner solar system. } \item{All MBCs with family associations are found to belong to families that contain and are sometimes dominated by primitive-type asteroids, and have relatively low average \changed{reported} albedos ($\overline{p_V}$$\,\lesssim\,$0.10). Meanwhile, disrupted asteroids are found to belong to families that span wider ranges of taxonomic types (including Q-, S-, and V-types) and a wider range of average \changed{reported} albedos (0.06$\,<\,$$\overline{p_V}$$\,<\,$0.25). These findings are consistent with hypotheses that MBC activity is closely tied to an object's composition (namely whether it is likely to contain preserved ice) while processes that produce disrupted asteroid activity are less sensitive to composition. 
} \item{We describe a sequence of processes that could produce MBCs, involving the preservation of ice over Gyr timescales within large parent bodies that are subsequently catastrophically disrupted in family-forming collisions in the recent past, where the resulting young family members may possess subsurface ice at relatively shallow depths and are thus more susceptible to activation than older icy asteroids. We suggest that as ongoing surveys discover more asteroids and future surveys discover smaller and fainter asteroids, associations with young families may eventually be found for some MBCs that currently lack such associations, though we do not expect all MBCs to eventually be found to have such associations. } \item{Though we also find a suggestively high rate of disrupted asteroids with associated asteroid families, the connection between asteroid families and disrupted asteroids is less clear than for MBCs. Further theoretical and observational work is needed to clarify the significance of the rate of family associations that we find for disrupted asteroids. } \end{enumerate} \acknowledgments HHH acknowledges support from the NASA Solar System Observations program (Grant NNX16AD68G). BN acknowledges support by the Ministry of Education, Science and Technological Development of the Republic of Serbia, Project 176011. RB is grateful for financial support from JSPS KAKENHI (JP16K17662). We also thank an anonymous reviewer for useful suggestions that helped to improve this paper. \bibliographystyle{aasjournal}
\section{Introduction}\label{intro} Multiplanetary systems appear to be suitable distant laboratories to explore the diversity of small planets, and their formation and evolution pathways. This is the case of Kepler-36 \citep{2012Sci...337..556C}, whose two planets, b and c, have periods of 14 and 16 days and densities of 7.5 and 0.9 \gcm3, respectively. This suggests that these planets may have formed in different environments within the same protoplanetary disk before migrating inwards. Furthermore, a decreasing density gradient with distance from the host star in multiplanetary systems with 6 to 7 planets, such as TRAPPIST-1 \citep{2021arXiv210108172A,agol21} and TOI-178 \citep{Leleu21}, suggests that there might be a transition between the rocky, inner super-Earths and the outer, volatile-rich sub-Neptunes. This transition is most probably due to the presence of the snowline in the protoplanetary disk \citep{Ruden99}. Nevertheless, there are presently several limitations to determining the variation of the volatile mass fraction of planets within their systems, including the precision reached on the fundamental parameters of both the planets and the star, and the different assumptions adopted by different interior structure models. These assumptions include whether the volatile layer of the planet is fully constituted of H/He \citep{lopez_fortney14}, an ice layer \citep{Zeng19}, an ice layer with a H/He atmosphere on top \citep{dorn15}, or a steam and/or supercritical water layer \citep{mousis20,Turbet20}. To overcome the differences in volatile mass fraction estimates of multiplanetary systems that are due to the different compositions of the volatile layer adopted by different interior structure models, we perform a homogeneous analysis of the interior structure and composition of several multiplanetary systems. In our interior structure model, we consider that the volatile layer is water-dominated, following the approach of \cite{mousis20} and \cite{2021arXiv210108172A}. This analysis allows us to uncover volatile and core mass fraction trends, and their connection with planet formation and evolution. We use already published masses, radii and stellar composition data for four systems, and perform our own spectroscopic analysis to improve the parameters of one system, K2-138, whose detection was reported in \citet{christiansen18}. K2-138 harbours six small planets in a chain of near 3:2 mean-motion resonances and benefited from ground-based radial velocity follow-up with HARPS on the 3.6m telescope at La Silla Observatory, leading to the confirmation and mass measurements of the four inner planets \citep{2019A&A...631A..90L}, with relatively good precision by today's standards. In order to bring stronger constraints on the stellar parameters and abundances, and further reduce the degeneracies in the planetary structure modelling, we carried out an in-depth analysis of K2-138. Section \ref{sect_spectroscopic_analysis} presents the new detailed analysis of the stellar host in the K2-138 system, which allowed us to derive stellar fundamental parameters and the elemental abundances, using the Sun and $\alpha$ Cen B as benchmarks. Section \ref{sect:pastis} describes a new Bayesian analysis of the HARPS radial velocities and K2 photometry, using the new stellar parameters. We describe our interior-atmosphere modelling in Sect. \ref{sect:methodology}, including our calculation of atmospheric mass-loss rates to infer the current presence or absence of volatiles.
We present the volatile and core mass fraction trends for each multiplanetary system as a result of our homogeneous analysis in Sect. \ref{sect:results}. Finally, we discuss the planet formation and evolution mechanisms that could have shaped these compositional trends in Sect. \ref{sect:discussion}. We present our concluding remarks in Sect. \ref{sect:conclusion}. \section{Spectroscopic analysis}\label{sect_spectroscopic_analysis} \object{K2-138} stellar parameters and abundances were derived based on a differential, line-by-line analysis relative to the Sun. The solar abundances are determined as part of such an analysis \citep[e.g.][]{Melendez12} and a set of reference values is not assumed. We used the HARPS spectra retrieved under programme ID 198.C-0.168. These were corrected for the systemic velocity and planetary reflex motion, removing the spectra with an S/N lower than ten in order 47 (550 nm) and the ones contaminated by moonlight (S/N above 1.0 in fibre B). We then co-added the spectra into a single 1D spectrum and normalised it to the continuum. For the Sun, we used the HARPS spectra extracted from the ESO instrument archives\footnote{\url{http://archive.eso.org}}, acquired under programme ID 088.C-0323. The reduction of the solar spectrum, obtained as the spectrum of the light reflected by Vesta, is detailed in \citet{2016MNRAS.457.3637H} and the co-addition was performed as for \object{K2-138}. The stellar parameters and abundances of 24 metal species were self-consistently determined from the spectra, plane-parallel MARCS model atmospheres \citep{gustafsson08}, and the 2017 version of the line-analysis software MOOG originally developed by \citet{sneden73}. The equivalent widths (EWs) were measured manually using IRAF\footnote{{\tt IRAF} is distributed by the National Optical Astronomy Observatories, operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} tasks assuming Gaussian profiles. Strong lines with RW = $\log$ (EW/$\lambda$) $>$ --4.80 were discarded. This constraint on the line strength was relaxed for Mg because it would otherwise result in no \ion{Mg}{i} lines being left. \subsection{Stellar parameters}\label{sect_stellar_parameters} The stellar parameters of \object{K2-138} and \object{$\alpha$ Cen B} appear to be similar (see below). Therefore, we also analysed the latter for benchmarking because it has accurate and nearly model-independent $T_{\rm eff}$ and $\log g$ estimates from long-baseline interferometry and asteroseismology, respectively. \object{K2-138} and \object{$\alpha$ Cen B} were observed with exactly the same instrumental set-up, which ensures the highest consistency \citep{bedell14}. The \object{$\alpha$ Cen B} spectra were selected from the ESO archive, keeping those corrected for the blaze and with S/N higher than 350 in order 47. For \object{$\alpha$ Cen B}, we adopt in the following $T_{\rm eff}$ = 5231$\pm$21 K derived by \citet{kervella17a} from their VLTI/PIONIER measurements and the bolometric flux of \citet{boyajian13}. We also assumed $\log g$ = 4.53$\pm$0.02 dex \citep{heiter15} based on scaling relations making use of the frequency of maximum oscillation power, $\nu_{\rm max}$, determined from radial-velocity time series by \citet{kjeldsen08}. The model parameters ($T_{\rm eff}$, $\log g$, $\xi$, and [Fe/H]) were iteratively modified until the excitation and ionisation balance of iron is fulfilled and the \ion{Fe}{i} abundances exhibit no trend with RW (this iterative scheme is sketched below).
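In schematic form, the convergence test applied at each iteration can be written as follows. This is only an illustrative Python sketch: the per-line quantities (excitation potential $\chi$, reduced width RW, and the line abundance returned by the EW analysis) are assumed to be supplied by the MOOG-based workflow, and the data structures are hypothetical.
\begin{verbatim}
import numpy as np

def slope(x, y):
    """Least-squares slope of y against x."""
    return np.polyfit(x, y, 1)[0]

def balanced(fei, feii, feh_model, tol=0.01):
    """Illustrative convergence test for a trial (Teff, logg, xi, [Fe/H]).

    fei and feii are assumed to be dicts of numpy arrays holding, for each
    Fe I / Fe II line, the excitation potential 'chi', the reduced width
    'rw', and the line abundance 'ab' from the EW analysis (hypothetical
    structure; the abundances themselves come from the line-analysis code).
    """
    return (abs(slope(fei["chi"], fei["ab"])) < tol           # excitation balance -> Teff
            and abs(slope(fei["rw"], fei["ab"])) < tol        # no trend with RW   -> xi
            and abs(fei["ab"].mean() - feii["ab"].mean()) < tol  # ionisation balance -> logg
            and abs(fei["ab"].mean() - feh_model) < tol)      # consistency with model [Fe/H]
\end{verbatim}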
The abundances of iron and the $\alpha$ elements were also required to be consistent with the values adopted for the model atmosphere. For the solar analysis, $T_{\rm eff}$ and $\log g$ were held fixed at 5777 K and 4.44 dex, respectively, whereas the microturbulence, $\xi$, was left as a free parameter. The uncertainties in the stellar parameters were computed as in \citet{morel18}. We first carried out the analysis of \object{$\alpha$ Cen B} and \object{K2-138} using various iron line lists \citep{biazzo12,doyle17,feltzing01,jofre14,melendez14,morel14,reddy03,tsantaki19}. For \citet{jofre14}, we adopted their FGDa line list. The goal is to identify the line list that provides the most accurate parameters based on a comparison with the interferometric and asteroseismic constraints at hand for \object{$\alpha$ Cen B}. To ensure the highest consistency, the spectral features the analysis is based on for a given line list were exactly the same for the three stars. The parameters obtained are given in Table~\ref{tab_spectroscopic_parameters} and shown in Fig.~\ref{fig_spectroscopic_parameters}. The surface gravity of \object{$\alpha$ Cen B} appears to be underestimated in most cases. We also experimented with the LW13 Ti line list of \citet{tsantaki19} to constrain this quantity through Ti ionisation balance. As discussed by these authors, this leads to a larger value, amounting here to $\sim$0.11 dex. However, it still falls short of matching the seismic value. As can be seen in Fig.~\ref{fig_spectroscopic_parameters}, the only notable difference between the parameters of \object{$\alpha$ Cen B} and \object{K2-138} is that the latter is slightly poorer in metals. Indeed, a differential analysis of \object{K2-138} with respect to \object{$\alpha$ Cen B} adopting the line list of \citet{biazzo12} gives the following results: $\Delta$$T_{\rm eff}$ = --10$\pm$45 K, $\Delta$$\log g$ = +0.02$\pm$0.09 dex, $\Delta$$\xi$ = +0.03$\pm$0.09 km s$^{-1}$ and $\Delta$[Fe/H] = --0.11$\pm$0.04. For the abundance analysis of \object{K2-138}, we adopt in the following the parameters provided by the line list of \citet{biazzo12}: $T_{\rm eff}$ = 5275$\pm$50 K, $\log g$ = 4.50$\pm$0.11, $\xi$ = 0.95$\pm$0.10 km s$^{-1}$ and [Fe/H] = +0.08$\pm$0.05. This choice is motivated by the fact that it leads to parameters that reproduce the reference ones of \object{$\alpha$ Cen B} within the errors. In addition, the metallicity is within the range of accepted values for the binary system \citep[][and references therein]{morel18}. However, from the comparison to the interferometric $T_{\rm eff}$ in Fig.~\ref{fig_spectroscopic_parameters}, we cannot rule out that the effective temperature of \object{K2-138} is slightly overestimated at the $\sim$50 K level. The analysis was also repeated using Kurucz atmosphere models \citep{castelli03}. The following modest deviations with respect to the default values (Kurucz -- MARCS) were found: $\Delta T_{\rm eff}$ $\sim$ +10 K, $\Delta \log g$ $\sim$ +0.02 dex, and $\Delta$[Fe/H] $\sim$ +0.02 dex. We will examine the robustness of our abundance results against such putative systematic errors in Sect.~\ref{sect_stellar_abundances}. In any case, we find that \object{K2-138} is cooler and less metal rich than concluded by \citet{christiansen18}. \begin{table*}[h!] \caption{Stellar parameters of \object{$\alpha$ Cen B} and \object{K2-138}, as obtained from the various iron line lists.
For iron, 42 Fe I and 4 Fe II lines were used.} \label{tab_spectroscopic_parameters} \small \centering \begin{tabular}{l|cccc|cccc} \hline\hline & \multicolumn{4}{c}{\object{$\alpha$ Cen B}} & \multicolumn{4}{c}{\object{K2-138}} \\ & $T_{\rm eff}$ & $\log g$ & $\xi$ & [Fe/H] & $T_{\rm eff}$ & $\log g$ & $\xi$ & [Fe/H] \\ Iron line list & [K] & & [km s$^{-1}$] & & [K] & & [km s$^{-1}$] & \\ \hline
\citet{biazzo12} & 5285$\pm$60 & 4.49$\pm$0.14 & 0.909$\pm$0.121 & 0.200$\pm$0.051 & 5275$\pm$50 & 4.50$\pm$0.11 & 0.945$\pm$0.099 & 0.084$\pm$0.043\\
\citet{doyle17} & 5245$\pm$32 & 4.35$\pm$0.08 & 0.490$\pm$0.146 & 0.185$\pm$0.043 & 5235$\pm$30 & 4.43$\pm$0.07 & 0.450$\pm$0.146 & 0.083$\pm$0.034\\
\citet{feltzing01} & 5330$\pm$41 & 4.48$\pm$0.11 & 0.890$\pm$0.100 & 0.220$\pm$0.040 & 5280$\pm$38 & 4.46$\pm$0.10 & 0.915$\pm$0.084 & 0.100$\pm$0.035\\
\citet{jofre14} & 5210$\pm$77 & 4.31$\pm$0.11 & 0.500$\pm$0.221 & 0.181$\pm$0.063 & 5210$\pm$66 & 4.37$\pm$0.11 & 0.555$\pm$0.190 & 0.069$\pm$0.054\\
\citet{melendez14} & 5270$\pm$35 & 4.37$\pm$0.08 & 0.755$\pm$0.133 & 0.174$\pm$0.044 & 5255$\pm$24 & 4.44$\pm$0.06 & 0.767$\pm$0.105 & 0.070$\pm$0.031\\
\citet{morel14} & 5265$\pm$31 & 4.35$\pm$0.09 & 0.795$\pm$0.102 & 0.197$\pm$0.031 & 5275$\pm$31 & 4.45$\pm$0.08 & 0.870$\pm$0.089 & 0.089$\pm$0.032\\
\citet{reddy03} & 5320$\pm$38 & 4.51$\pm$0.11 & 0.900$\pm$0.062 & 0.218$\pm$0.036 & 5295$\pm$29 & 4.52$\pm$0.09 & 0.958$\pm$0.046 & 0.092$\pm$0.027\\
\citet{tsantaki19} & 5190$\pm$64 & 4.26$\pm$0.09 & 0.590$\pm$0.149 & 0.163$\pm$0.048 & 5140$\pm$81 & 4.35$\pm$0.08 & 0.485$\pm$0.198 & 0.050$\pm$0.049\\
\hline\hline \end{tabular} \end{table*} \begin{figure*}[h!] \centering \includegraphics[trim=0 180 0 90,clip,width=0.8\hsize]{Figures/results_strips.pdf} \caption{Results of the analysis of \object{$\alpha$ Cen B} (left panels) and \object{K2-138} (right panels) using the various iron line lists. The colour coding for each line list is indicated in the upper left panel. The parameters of \object{K2-138} determined by \citet{christiansen18} are shown in the right panels. The grey shaded areas for \object{$\alpha$ Cen B} delimit the interferometric $T_{\rm eff}$ and seismic $\log g$ values ($\pm$1 $\sigma$; see Sect.~\ref{sect_stellar_parameters} for details).} \label{fig_spectroscopic_parameters} \end{figure*} \subsection{Stellar abundances}\label{sect_stellar_abundances} For the abundance analysis, we proceed with the extensive line list of \citet{melendez14}, because the lines of some important elements (e.g., Mg) in \citet{biazzo12} are not covered by our observations. Hyperfine structure was taken into account for Sc, V, Mn, Co and Cu using atomic data from the Kurucz database\footnote{Available at \url{http://kurucz.harvard.edu/linelists.html}}, while the Eu data were taken from \citet{ivans06}. A classical curve-of-growth analysis making use of the EWs was performed for most species. However, the determination of some abundances relied on spectral synthesis. The oxygen abundance was based on \ion{[O}{i]} $\lambda$630.0, while the C abundance was also estimated from the C$_2$ lines at 508.6 and 513.5 nm. See \citet{morel14} for further details on the modelling of the \ion{[O}{i]} and C$_2$ features. Finally, the Eu abundance was based on a synthesis of a number of \ion{Eu}{ii} lines \citep[for details, see][]{2020A&A...644A..19W}. For \object{K2-138}, $v \sin i$ = 2.5 km s$^{-1}$ and a macroturbulence of 1.9 km s$^{-1}$ were assumed based on the analysis reported in \citet{2019A&A...631A..90L}.
An attempt was made to model \ion{Li}{i} $\lambda$670.8. The line is not detected in \object{K2-138}, but the non-detection indicates that the Li abundance is much lower than solar. The abundances are provided in Table \ref{tab_spectroscopic_abundances}. The random uncertainties were estimated following \citet{morel18}. For the spectral synthesis, additional sources of errors (e.g., continuum placement) were taken into account \citep[see][]{morel14}. The O abundance is based on a single line that is weak (EW $<$ 10 m\AA) and blended with a Ni line. It is therefore uncertain. The same is true for the Mg abundance, which is based on three strong lines exhibiting quite a large line-to-line scatter ($\sim$0.05 dex). The impact of lowering $T_{\rm eff}$ by 50 K (see Sect.~\ref{sect_stellar_parameters}) is also given in Table \ref{tab_spectroscopic_abundances}. The Sc, Ti and Cr abundances were derived from both neutral and singly ionised species. Ionisation balance is fulfilled within the uncertainties in all cases assuming the default parameters. However, it can be noted that the agreement systematically degrades for the cooler $T_{\rm eff}$ scale. \begin{table}[h!] \caption{Abundance results for \object{K2-138}. The last column shows the impact of lowering $T_{\rm eff}$ by 50 K (see Sect.~\ref{sect_stellar_parameters}), while keeping $\log g$ and $\xi$ unchanged.} \label{tab_spectroscopic_abundances} \centering \begin{tabular}{l|cc} \hline\hline Abundance ratio & Default $T_{\rm eff}$ scale & Cooler $T_{\rm eff}$ scale \\ \hline
$[$Fe/H$]$ & +0.08$\pm$0.05 (42+4) & +0.01\\ \hline
$[$\ion{C}{i}/Fe$]$ & --0.04$\pm$0.08 (3) & +0.03\\
$[$C$_2$/Fe$]$ & --0.07$\pm$0.09 (2) & --0.01 \\
$[$\ion{O}{i}/Fe$]$ & +0.03$\pm$0.10 (1) & --0.01\\
$[$\ion{Na}{i}/Fe$]$ & +0.02$\pm$0.06 (3) & --0.04\\
$[$\ion{Mg}{i}/Fe$]$ & --0.06$\pm$0.08 (3) & --0.05\\
$[$\ion{Al}{i}/Fe$]$ & +0.01$\pm$0.05 (2) & --0.04\\
$[$\ion{Si}{i}/Fe$]$ & +0.01$\pm$0.04 (10) & +0.00\\
$[$\ion{Ca}{i}/Fe$]$ & +0.04$\pm$0.06 (3) & --0.05\\
$[$\ion{Sc}{i}/Fe$]$ & --0.03$\pm$0.10 (4) & --0.06\\
$[$\ion{Sc}{ii}/Fe$]$ & --0.01$\pm$0.05 (5) & --0.01\\
$[$\ion{Ti}{i}/Fe$]$ & +0.01$\pm$0.08 (14) & --0.07\\
$[$\ion{Ti}{ii}/Fe$]$ & +0.01$\pm$0.06 (10) & +0.00\\
$[$\ion{V}{i}/Fe$]$ & +0.03$\pm$0.08 (5) & --0.07\\
$[$\ion{Cr}{i}/Fe$]$ & +0.03$\pm$0.05 (7) & --0.04\\
$[$\ion{Cr}{ii}/Fe$]$ & +0.08$\pm$0.04 (4) & +0.01\\
$[$\ion{Mn}{i}/Fe$]$ & +0.04$\pm$0.07 (5) & --0.05\\
$[$\ion{Co}{i}/Fe$]$ & +0.00$\pm$0.06 (7) & --0.03\\
$[$\ion{Ni}{i}/Fe$]$ & +0.00$\pm$0.04 (14) & --0.02\\
$[$\ion{Cu}{i}/Fe$]$ & --0.02$\pm$0.03 (2) & --0.02\\
$[$\ion{Zn}{i}/Fe$]$ & --0.01$\pm$0.03 (3) & +0.00\\
$[$\ion{Sr}{i}/Fe$]$ & +0.01$\pm$0.09 (1) & --0.07\\
$[$\ion{Y}{ii}/Fe$]$ & +0.02$\pm$0.07 (4) & --0.01\\
$[$\ion{Zr}{ii}/Fe$]$ & +0.06$\pm$0.06 (2) & --0.02\\
$[$\ion{Ba}{ii}/Fe$]$ & +0.02$\pm$0.07 (1) & --0.02\\
$[$\ion{Ce}{ii}/Fe$]$ & +0.01$\pm$0.08 (5) & --0.02\\
$[$\ion{Nd}{ii}/Fe$]$ & +0.07$\pm$0.05 (3) & --0.02\\
$[$\ion{Eu}{ii}/Fe$]$ & +0.04$\pm$0.08 (3) & --0.02\\ \hline
$[$\ion{C}{i}/\ion{O}{i}$]$ & --0.07$\pm$0.13 & +0.04\\
$[$C$_2$/\ion{O}{i}$]$ & --0.10$\pm$0.12 & +0.00\\
$[$\ion{Mg}{i}/\ion{Si}{i}$]$ & --0.07$\pm$0.08 & --0.05\\
\hline\hline \end{tabular} \tablefoot{The number in brackets gives the number of lines the abundance is based on.
For iron, the number of \ion{Fe}{i} and \ion{Fe}{ii} lines is given.} \end{table} \section{\texttt{PASTIS} analysis} \label{sect:pastis} The joint analysis of the HARPS radial velocities, \textit{K2} light curve and spectral energy distribution (SED) was performed using the Bayesian software \texttt{PASTIS} \citep{2014MNRAS.441..983D}. The improvements with respect to our previous analysis in \citet{2019A&A...631A..90L} are that (1) the radial velocities were nightly binned to average out the correlated high-frequency noise resulting from granulation and instrumental calibrations, and (2) the new stellar parameters, as derived in Sect. \ref{sect_stellar_parameters}, were used as priors. We ran two sets of analyses, one with the adopted $T_{\rm eff}$ and one with $T_{\rm eff}$ lowered by 50 K, as the latter cannot be ruled out (see Sect. \ref{sect_stellar_parameters}). The magnitudes used to construct the SED were taken from the American Association of Variable Star Observers Photometric All-Sky Survey \citep{2015AAS...22533616H} archive in the optical, and from the Two-Micron All-Sky Survey \citep{2014AJ....148...81M} and the Wide-field Infrared Survey Explorer \citep{2014yCat.2328....0C} archives in the near-infrared. The SED was modelled with the BT-Settl stellar atmospheric models \citep{2012RSPTA.370.2765A}. The radial velocities were modelled with Keplerian orbit models for the planetary contribution and with a Gaussian process regression for the correlated noise induced by stellar activity. For the latter, the following quasi-periodic kernel was used: \begin{equation} \begin{split} k(t_i, t_j) = A^2 \exp \left[ - \frac{1}{2} \left( \frac{t_i - t_j}{\lambda_1} \right)^2 - \frac{2}{\lambda_2^2} \sin^2 \left( \frac{\pi \left| t_i - t_j \right|}{P_{\rm rot}} \right) \right] \\ + \delta_{ij} \sqrt{\sigma_i^2 + \sigma_J^2} \end{split} \end{equation} where $A$ can be identified with the radial velocity modulation amplitude, P$_{\rm rot}$ with the stellar rotation period, $\lambda_1$ with the correlation decay timescale of the active regions, $\lambda_2$ with the relative contribution between the periodic and the decaying components, and $\sigma_J$ with the radial velocity jitter (an illustrative implementation of this kernel is given at the end of this section). To model the photometry, we used the JKT Eclipsing Binary Orbit Program \citep{2008MNRAS.386.1644S} with an oversampling factor of 30 to account for the long integration time of \textit{Kepler} \citep{2010MNRAS.408.1758K}. The star was modelled with the PARSEC evolution tracks \citep{2012MNRAS.427..127B}, taking into account the asterodensity profiling \citep{2014MNRAS.440.2164K}, and with the limb darkening coefficients taken from \citet{2011A&A...529A..75C}. We ran 80 Markov chain Monte Carlo (MCMC) chains with $10^6$ iterations each for the two different effective temperatures to explore the posterior distributions of the parameters. The convergence was assessed with a Kolmogorov-Smirnov test \citep{10.2307/1391067}. The burn-in phase was then removed \citep{2014MNRAS.441..983D} and the remaining iterations of the chains that had converged were merged. Both analyses, with $T_{\rm eff}$ and with $T_{\rm eff}$ lowered by 50 K, converged towards the same distributions, and in particular the same median effective temperature. Therefore, we only report the posteriors for the analysis based on $T_{\rm eff} = 5275$ K, along with the priors used. These are shown in Table \ref{MCMCprior}. The parameters obtained are fully compatible with those of \citet{2019A&A...631A..90L}.
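To make the noise model concrete, the quasi-periodic kernel above translates directly into code. The following is a minimal numpy sketch of the covariance matrix evaluation, implementing the equation exactly as written (illustrative only, not the \texttt{PASTIS} implementation):
\begin{verbatim}
import numpy as np

def quasi_periodic_cov(t, sigma, A, P_rot, lam1, lam2, sigma_J):
    """Quasi-periodic covariance matrix for the activity signal.

    t       : array of observation epochs
    sigma   : per-point RV uncertainties (same length as t)
    A       : RV modulation amplitude
    P_rot   : stellar rotation period
    lam1    : correlation decay timescale of the active regions
    lam2    : relative weight of the periodic vs decaying components
    sigma_J : RV jitter term
    """
    dt = t[:, None] - t[None, :]
    cov = A**2 * np.exp(-0.5 * (dt / lam1)**2
                        - (2.0 / lam2**2)
                        * np.sin(np.pi * np.abs(dt) / P_rot)**2)
    # delta_ij * sqrt(sigma_i^2 + sigma_J^2), following the equation as written
    cov += np.diag(np.sqrt(sigma**2 + sigma_J**2))
    return cov
\end{verbatim}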
In particular, we find masses of $2.80^{+0.94}_{-0.96} ~\hbox{$\mathrm{M}_{\oplus}$}$, $5.95^{+1.17}_{-1.12} ~\hbox{$\mathrm{M}_{\oplus}$}$, $7.20\pm1.40 ~\hbox{$\mathrm{M}_{\oplus}$}$, and $11.28^{+2.78}_{-2.72} ~\hbox{$\mathrm{M}_{\oplus}$}$, respectively, for planets b, c, d, and e, giving a precision of $34\%$, $20\%$, $19\%$, and $25\%$. For planets f and g, the median values of the masses are respectively $2.43^{+3.05}_{-1.75} ~\hbox{$\mathrm{M}_{\oplus}$}$ and $2.45^{+2.92}_{-1.74} ~\hbox{$\mathrm{M}_{\oplus}$}$, giving a significance of $1.4 ~\sigma$ for both planets. For planet g, the non-detection is not surprising given the relatively long orbital period and a radius compatible with a low-density planet. Conversely, for planet f, we cannot exclude absorption of the signal by the Gaussian process, given that its orbital period is half the stellar rotation period. Further discussion on the constraints and upper limits of the planetary masses can be found in \citet{2019A&A...631A..90L}. The parameters of the planets were then used as input for the planet modelling described in the following section. \section{Composition analysis} \label{sect:methodology} \subsection{Interior-atmosphere model} \label{sect:interior} We used the internal structure model initially developed by \citet{2017ApJ...850...93B} and \citet{mousis20}, and recently updated by \cite{2021arXiv210108172A}, to study the planets' internal composition. The model can accommodate a surface water layer. To consider the effect of the stellar irradiation on this layer, we include a water-rich atmosphere on top of the high-pressure water layer or the mantle by coupling the interior to an atmosphere model. The atmospheric model computes the temperature at the bottom of the atmosphere, which is the boundary condition for the interior model. As a result, our current atmosphere-interior model allows us to assess in detail how well a close-in planet, such as the ones we analyse in Sect. \ref{sect:results}, can support a water-rich layer in liquid, vapour or supercritical state, depending on the surface temperature. Our atmosphere-interior model takes into account the irradiation received by the planet and calculates the surface temperature assuming a water-rich atmosphere on top of a high-pressure water layer or a mantle. Therefore, in Sect. \ref{sect:results}, we use the terms volatile mass fraction and water mass fraction interchangeably. The planets in the multiplanetary systems we analyse are highly irradiated, with irradiation temperatures ranging from approximately 1300 K to 500 K (see Table \ref{output_mcmc}). Depending on the corresponding surface conditions, if water is present, it can be in vapour or supercritical state. The input variables of the interior structure model are the total planetary mass, the core mass fraction (CMF) and the water mass fraction (WMF), while the model outputs the total planetary radius and the Fe/Si mole ratio. In order to explore the parameter space, we performed a complete Bayesian analysis to obtain the probability density distributions of the parameters. This Bayesian analysis was carried out via the implementation of an MCMC algorithm, by adapting the method proposed by \citet{dorn15} to our interior and atmosphere model as described in \citet{2021arXiv210108172A}. Initial values of the three input parameters were randomly drawn from their prior distributions, which correspond to a Gaussian distribution for the mass, and uniform distributions for the CMF and the WMF.
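As an illustration of this step, the initial draws might be generated as follows. This is a minimal sketch rather than the actual implementation: the 0.8 cap on the WMF anticipates the prior bound motivated in the next paragraph, and draws with CMF + WMF > 1 are rejected because the mantle mass fraction, MMF = 1 - (CMF + WMF), must remain non-negative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=1)

def draw_priors(m_obs, m_err, wmf_max=0.8, n=10_000):
    """Draw (mass, CMF, WMF) starting points from the prior.

    Gaussian prior on the total mass (observed value and 1-sigma error),
    uniform priors on the core and water mass fractions. Draws with
    CMF + WMF > 1 are rejected: MMF = 1 - (CMF + WMF) must be >= 0.
    """
    mass = rng.normal(m_obs, m_err, n)
    cmf = rng.uniform(0.0, 1.0, n)
    wmf = rng.uniform(0.0, wmf_max, n)
    keep = cmf + wmf <= 1.0
    return mass[keep], cmf[keep], wmf[keep]

# Example: priors for K2-138 c, with M = 5.95 M_Earth and its error bars
# symmetrised to ~1.15 M_Earth for simplicity
mass, cmf, wmf = draw_priors(5.95, 1.15)
\end{verbatim}
The interior model is then evaluated for each accepted draw, and the resulting radius and Fe/Si mole ratio are compared to the observed values in the MCMC acceptance step (not shown here).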
We establish a maximum WMF in the uniform prior of 80\%, based on the maximum water content found in Solar System bodies \citep{mckay19}. For the atmosphere, we have considered a composition of 99\% water and 1\% carbon dioxide. The atmosphere and the interior are coupled at a pressure of 300 bar. We consider the stellar spectral distribution of a Sun-like star for the calculation of the Bond albedo. The atmospheric mass, thickness, Bond albedo, and temperature at the bottom of the atmosphere are provided by a grid generated with the atmospheric model described in \citet{marcq17} and \cite{pluriel19}. \subsection{Atmospheric escape} \label{sect:escape} Atmospheric mass loss in super-Earths and sub-Neptunes can be produced by thermal or non-thermal escape, through Jeans escape \citep{jeans25}, XUV photoevaporation \citep{owen12}, or core-powered mass loss \citep{ginzburg16}. These processes might shape the trend of the volatile mass fraction (water, H/He or a combination of both) in the inner region of multiplanetary systems. An estimate of the mass loss rates of different species can help discriminate between possible interior compositions. In our solar system, Jeans escape efficiently removed lighter gases such as H$_2$ and He from the telluric planets, leaving heavier molecules behind. For the planets in the K2-138 system, we estimate Jeans mass loss rates \citep{aguichine21} by using as input the masses, radii and equilibrium temperatures we obtained as a result of our analysis of the system (Sects. \ref{sect_spectroscopic_analysis} and \ref{sect:pastis}). For the rest of the multiplanetary systems we analyse, we use the parameters provided by the references we mention in Sect. \ref{sect:multipl_data}. The hydrodynamic escape of H-He is driven by the incident XUV flux from the host star. A star's XUV luminosity $L_{\mathrm{XUV}}$ is usually constant at early stages, in the so-called saturation regime (lasting a few tens of Myr), and then evolves as a power-law function of time, $L_{\mathrm{XUV}}\propto t^\alpha$, with $\alpha\simeq -1.5$ \citep{sanzforcada11}. The mass loss rate is computed following \citet{owen12}: \begin{equation} \dot{m} = \eta \frac{L_{\mathrm{XUV}} R_b^3}{GM_b (2a_b)^2}, \label{eq:dotm-xuv} \end{equation} where $G$ is the gravitational constant and $\eta=0.1$ is an efficiency factor \citep{owen12}. Following the approach in \cite{aguichine21}, we integrate Equation (\ref{eq:dotm-xuv}) over time assuming that only $L_{\mathrm{XUV}}$ varies, implying that the mass and radius do not change significantly, to calculate the total mass lost. \subsection{Multiplanetary systems parameters} \label{sect:multipl_data} In addition to K2-138, we select a sample of multiplanetary systems that host only low-mass planets ($M$ < 20 $M_{\oplus}$), with five or more planets that have masses and radii available. These systems are TOI-178, Kepler-11, Kepler-102 and Kepler-80. For K2-138, we take the planetary masses and radii derived in Sect. \ref{sect:pastis}, and the corrected Fe/Si molar ratio. The latter was estimated as Fe/Si = 0.77$\pm$0.07, using the metallicity and the Mg, Al, Si, Ca and Ni abundances presented in Sect. \ref{sect_stellar_abundances}, following \cite{sotin07} and \cite{2017ApJ...850...93B}. For the other systems, we performed the same modelling, taking masses, radii and stellar abundances from \cite{Leleu21} for TOI-178; \cite{Lissauer11} and \cite{Brewer16} for Kepler-11; \cite{Marcy14} and \cite{Brewer18} for Kepler-102, and \cite{Macdonald16} and \cite{Macdonald21} for Kepler-80.
The Fe/Si mole ratios of these systems are computed from their respective host star abundances, in the same way as for K2-138. \begin{table*}[] \centering \begin{tabular}{cccccc} \hline \hline System & Planet & M [$M_{\oplus}$] & R [$R_{\oplus}$] & $a_{d}$ [AU] & $T_{irr}$ [K] \\ \hline
\multirow{6}{*}{TOI-178} & b & 1.5$^{+0.39}_{-0.44}$ & 1.152$^{+0.073}_{-0.070}$ & 0.026 & 1040 \\ & c & 4.77$^{+0.55}_{-0.68}$ & 1.669 $^{+0.114}_{-0.099}$ & 0.037 & 873 \\ & d & 3.01$^{+0.80}_{-1.03}$ & 2.572$^{+0.075}_{-0.078}$ & 0.059 & 691 \\ & e & 3.86$^{+1.25}_{-0.94}$ & 2.207$^{+0.088}_{-0.090}$ & 0.078 & 600 \\ & f & 7.72$^{+1.67}_{-1.52}$ & 2.287$^{+0.108}_{-0.110}$ & 0.104 & 521 \\ & g & 3.94$^{+1.31}_{-1.62}$ & 2.87$^{+0.14}_{-0.13}$ & 0.128 & 471 \\ \hline
\multirow{5}{*}{Kepler-11} & b & 4.3$^{+2.2}_{-2.0}$ & 1.97$\pm$0.19 & 0.091 & 953 \\ & c & 13.5$^{+4.8}_{-6.1}$ & 3.15$\pm$0.30 & 0.106 & 883 \\ & d & 6.1$^{+3.1}_{-1.7}$ & 3.43$\pm$0.32 & 0.159 & 721 \\ & e & 8.4$^{+2.5}_{-1.9}$ & 4.52$\pm$0.43 & 0.194 & 653 \\ & f & 2.3$^{+2.2}_{-1.2}$ & 2.61$\pm$0.25 & 0.250 & 575 \\ \hline
\multirow{5}{*}{Kepler-102} & b & 0.41$\pm$1.6 & 0.47$\pm$0.02 & 0.055 & 868 \\ & c & -1.58$\pm$2.0 & 0.58$\pm$0.02 & 0.067 & 786 \\ & d & 3.80$\pm$1.8 & 1.18$\pm$0.04 & 0.086 & 597 \\ & e & 8.93$\pm$2.0 & 2.22$\pm$0.07 & 0.117 & 694 \\ & f & 0.62$\pm$3.3 & 0.88$\pm$0.03 & 0.165 & 501 \\ \hline
\multirow{5}{*}{Kepler-80} & d & 5.95$^{+0.65}_{-0.60}$ & 1.309$^{+0.036}_{-0.032}$ & 0.033 & 990 \\ & e & 2.97$^{+0.76}_{-0.65}$ & 1.330$^{+0.039}_{-0.038}$ & 0.044 & 863 \\ & b & 3.50$^{+0.63}_{-0.57}$ & 2.367$^{+0.055}_{-0.052}$ & 0.058 & 750 \\ & c & 3.49$^{+0.63}_{-0.57}$ & 2.507$^{+0.061}_{-0.058}$ & 0.071 & 679 \\ & g & 0.065$^{+0.044}_{-0.038}$ & 1.05$^{+0.22}_{-0.24}$ & 0.094 & 588 \\ \hline \end{tabular} \caption{Masses, radii, semi-major axes and irradiation temperatures for the multiplanetary systems TOI-178, Kepler-11, Kepler-102 and Kepler-80. References can be found in Sect. \ref{sect:multipl_data}. } \label{tab:my-table} \end{table*} \section{Compositional trends in multiplanetary systems} \label{sect:results} Table \ref{tab:multiplanets} shows the retrieved CMF and WMF and their one-dimensional 1$\sigma$ uncertainties as a result of our Bayesian analysis, as well as their atmospheric mass loss estimates. To assess how compatible a water-rich composition is with the data, we also show the difference between the observational mean and the retrieved mean, which is calculated as $d_{obs-ret}$ = max$\left\lbrace | R_{data}-R | , | M_{data}-M | \right\rbrace $. If $d_{obs-ret}$ is below 1$\sigma$, the retrieved mass and radius agree within the 1$\sigma$ confidence intervals with the observed mass and radius, meaning that the density of a planet is compatible with a volatile layer dominated by water. A high $d_{obs-ret}$ (> 1 $\sigma$) combined with a high WMF in our model indicates that a water-dominated atmosphere is not inflated enough to account for the low density of the planet, pointing to an atmosphere with lighter, more volatile gases, which are probably H and He. Table \ref{output_mcmc} shows the irradiation temperatures and the retrieved atmospheric parameters of the planets whose density is compatible with the presence of a volatile layer dominated by water.
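In code form, this acceptance criterion can be expressed compactly. A short sketch follows, in which we assume that each difference is normalised by its observational uncertainty, as implied by the 1$\sigma$ comparison above:
\begin{verbatim}
def d_obs_ret(m_data, m_err, m_ret, r_data, r_err, r_ret):
    """Observed-vs-retrieved distance in units of sigma.

    Implements d_obs-ret = max{ |R_data - R|, |M_data - M| }, with each
    difference normalised by its observational uncertainty (our reading
    of the 1-sigma comparison in the text).
    """
    return max(abs(m_data - m_ret) / m_err,
               abs(r_data - r_ret) / r_err)

# A planet is compatible with a water-dominated volatile layer when
# d_obs_ret(...) < 1; e.g., the retrieved radius of K2-138 b lies
# 1.5 sigma above the measured one (see the K2-138 subsection below),
# giving d_obs-ret = 1.5.
\end{verbatim}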
\subsection{K2-138} \begin{table*}[h] \centering \begin{tabular}{cccccccc} \hline \hline System & Planet & CMF & WMF & $d_{obs-ret}$ & $\Delta M_{H2}$ [$M_{\oplus}$] & $\Delta M_{H2O}$ [$M_{\oplus}$] & $\Delta M_{XUV}$ [$M_{\oplus}$] \\ \hline \multirow{6}{*}{K2-138} & b & 0.27$\pm$0.02 & 0.000$_{-0.000}^{+0.007}$ & 1.5 $\sigma$ & 0.132 & < 0.01 & 0.40 \\ & c & 0.23$\pm$0.02 & 0.13$\pm$0.04 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & < 0.01 \\ & d & 0.22$\pm$0.03 & 0.17$\pm$0.05 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & < 0.01 \\ & e & 0.11$\pm$0.02 & 0.57$\pm$0.08 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & < 0.01 \\ & f & 0.11$\pm$0.02 & 0.60$\pm$0.07 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & < 0.01 \\ & g & 0.12$\pm$0.05 & 0.55$\pm$0.18 & 1.3 $\sigma$ & < 0.01 & < 0.01 & < 0.01 \\ \hline \multirow{6}{*}{TOI-178} & b & 0.21$\pm$0.30 & 0 & \textless 1 $\sigma$ & 0.83 & < 0.01 & 0.45 \\ & c & 0.30$\pm$0.02 & 0.02$^{+0.04}_{-0.02}$ & \textless 1 $\sigma$ & < 0.01 & < 0.01 & 0.21 \\ & d & 0.10$\pm$0.01 & 0.69$\pm$0.05 & 1.3 $\sigma$ & 0.16 & < 0.01 & 0.48 \\ & e & 0.18$\pm$0.02 & 0.40$\pm$0.06 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & 0.13 \\ & f & 0.22$\pm$0.03 & 0.28$\pm$0.10 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & 0.04 \\ & g & 0.10$\pm$0.01 & 0.58$\pm$0.16 & 3.0 $\sigma$ & < 0.01 & < 0.01 & 0.11 \\ \hline \multirow{5}{*}{Kepler-11} & b & 0.20$\pm$0.04 & 0.27$\pm$0.10 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & 0.10 \\ & c & 0.18$\pm$0.01 & 0.33$\pm$0.04 & 1.7 $\sigma$ & < 0.01 & < 0.01 & 0.10 \\ & d & 0.10$\pm$0.02 & 0.65$\pm$0.05 & 2.4 $\sigma$ & < 0.01 & < 0.01 & 0.13 \\ & e & 0.12$\pm$0.01 & 0.55$\pm$0.04 & 4.4 $\sigma$ & < 0.01 & < 0.01 & 0.14 \\ & f & 0.14$\pm$0.06 & 0.47$\pm$0.10 & 1.9 $\sigma$ & 0.56 & < 0.01 & 0.06 \\ \hline \multirow{5}{*}{Kepler-102} & b & 0.91$^{+0.09}_{-0.16}$ & 0 & \textless 1 $\sigma$ & 0.13 & < 0.01 & 0.03 \\ & c & 0.95$^{+0.05}_{-0.30}$ & 0 & \textless 1 $\sigma$ & 0.10 & < 0.01 & 0.03 \\ & d & 0.80$\pm$0.14 & 0 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & 0.03 \\ & e & 0.22$\pm$0.02 & 0.17$\pm$0.07 & \textless 1 $\sigma$ & 0.01 & < 0.01 & 0.03 \\ & f & 0.27$\pm$0.09 & 0.04$\pm$0.04 & \textless 1 $\sigma$ & 0.02 & < 0.01 & 0.01 \\ \hline \multirow{5}{*}{Kepler-80} & d & 0.97 $^{+0.03}_{-0.05}$ & 0 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & 0.35 \\ & e & 0.43$\pm$0.18 & 0 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & 0.29 \\ & b & 0.13$\pm$0.02 & 0.58$\pm$0.07 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & 0.11 \\ & c & 0.09$\pm$0.01 & 0.70$\pm$0.04 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & 0.13 \\ & g & 0.31$\pm$0.02 & < 1.5 $\times \ 10^{-3}$ & \textless 1 $\sigma$ & 140 & 3.23 & 0.60 \\ \hline \end{tabular} \caption{Retrieved core mass fraction (CMF) and water mass fraction (WMF) of planets in the multiplanetary systems K2-138, TOI-178, Kepler-11, Kepler-102 and Kepler-80, with our interior-atmosphere model. A low $d_{obs-ret}$ indicates that the assumption of a water-dominated atmosphere is adequate for a particular planet (see text). 
$\Delta M_{H2}$, $\Delta M_{H2O}$ and $\Delta M_{XUV}$ correspond to the maximum estimate of atmospheric escape mass loss due to H$_{2}$, H$_{2}$O Jeans escape and XUV photoevaporation, respectively.} \label{tab:multiplanets} \end{table*} \begin{table*}[h] \centering \begin{tabular}{ccccc} \hline \hline Planet & $T_{irr}$ [K] & $T_{300}$ [K] & $z_{atm}$ [km] & $A_{B}$ \\ \hline
K2-138 b & 1291 & 4110$\pm$44 & 932$\pm$151 & 0.213$\pm$0.001 \\
K2-138 c & 1125 & 3900$\pm$23 & 711$\pm$103 & 0.214$\pm$0.002 \\
K2-138 d & 978 & 3614$\pm$56 & 635$\pm$84 & 0.218$\pm$0.002 \\
K2-138 e & 850 & 3383$\pm$39 & 673$\pm$90 & 0.231$\pm$0.001 \\
K2-138 f & 735 & 3396$\pm$116 & 1483$\pm$546 & 0.260$\pm$0.004 \\
TOI-178 c & 873 & 3344$\pm$33 & 500$\pm$60 & 0.226$\pm$0.001 \\
TOI-178 d & 691 & 3254$\pm$45 & 1181$\pm$224 & 0.264$\pm$0.004 \\
TOI-178 e & 600 & 2930$\pm$31 & 691$\pm$133 & 0.225$\pm$0.018 \\
TOI-178 f & 521 & 2610$\pm$23 & 368$\pm$60 & 0.298$\pm$0.007 \\
Kepler-11 b & 953 & 3697$\pm$133 & 840$\pm$313 & 0.221$\pm$0.005 \\
Kepler-102 e & 694 & 2947$\pm$29 & 360$\pm$55 & 0.243$\pm$0.004 \\
Kepler-102 f & 501 & 2784$\pm$102 & 837$\pm$290 & 0.347$\pm$0.013 \\
Kepler-80 b & 750 & 3344$\pm$33 & 1133$\pm$148 & 0.253$\pm$0.002 \\
Kepler-80 c & 679 & 3219$\pm$29 & 1128$\pm$114 & 0.266$\pm$0.003 \\
\hline \end{tabular} \caption{Atmospheric parameters retrieved for the planets whose composition can accommodate a water-dominated atmosphere (see text). These parameters are the equilibrium temperature assuming a null albedo ($T_{irr}$), the atmospheric temperature at 300 bar ($T_{300}$), the thickness of the atmosphere from 300 bar to 20 mbar ($z_{atm}$), and the planetary Bond albedo ($A_{B}$).} \label{output_mcmc} \end{table*} \begin{figure} \centering \includegraphics[width=\hsize]{Figures/conf_interval_ternary_K138.pdf} \caption{1-$\sigma$ confidence regions derived from the 2D posterior distributions of the CMF and WMF obtained with the planetary interior Bayesian analysis. Axes indicate the core mass fraction (CMF), water mass fraction (WMF) and the mantle mass fraction (MMF). The latter is defined as MMF = 1 - (CMF+WMF).} \label{ternary} \end{figure} \begin{figure} \centering \includegraphics[width=\hsize]{Figures/grid_fg_final.pdf} \caption{Total mass and radius of K2-138 f (upper panel) and K2-138 g (lower panel) from the different realisations of the MCMC (black crosses). The solid blue lines show the mass and radius measurements from \texttt{PASTIS}, and the dashed lines give the related uncertainties. The red line indicates the limit below which the planet cannot maintain an atmosphere.} \label{fig:K2-138f} \end{figure} Figure~\ref{ternary} displays the 1$\sigma$ confidence intervals derived from the 2D distributions of the WMF and CMF of the K2-138 planets in a ternary diagram. We can see that the confidence regions are aligned along a line almost parallel to the lines where the CMF is constant. This alignment is due to the constraint on the Fe/Si mole ratio we have considered within the whole planetary system: the confidence regions are spread over the Fe/Si isolines, whose constant values range from Fe/Si = 0.70 to 0.84 \citep[see][their Figure 4]{2017ApJ...850...93B}. For K2-138 b, the results set an upper limit of 0.7\% on the WMF, which means that this planet is unlikely to have a significant amount of volatiles, including water. The retrieved planetary radius is 1.538 $R_{\oplus}$, which is 1.5$\sigma$ larger than the measured radius from the analysis in Sect. \ref{sect:pastis}.
This is due to the extended atmosphere necessary to produce the temperature and pressure conditions to hold supercritical water at the surface ($P_{surf} > 300 $ bar). If we assume a mass of 2.80 $M_{\oplus}$ and a CMF of 0.27, a vapour atmosphere with a maximum surface pressure of 300 bar would yield a WMF of 0.01\% (the WMF of Earth is 0.05\%) and a radius of 1.461 $R_{\oplus}$, which is well within the 1$\sigma$ confidence interval of the observed value. Therefore, we can conclude that K2-138 b is a volatile-poor planet that might present a secondary atmosphere with a low surface pressure ($P_{surf} \leq 300 $ bar) or no atmosphere (WMF = 0). In addition, it is the planet with the highest CMF in the system, showing that planets in this system are likely to have less massive cores than Earth (CMF = 0.325) and the other terrestrial planets in the Solar System. The atmospheric model also establishes a minimum surface gravity of 2 m s$^{-2}$ to retain an atmosphere. For planets b, c, d and e, the 1$\sigma$ intervals on the masses exclude such a low surface gravity, but this is not the case for planets f and g. For planet f, a lower limit on the surface gravity of the planet can be translated into a lower limit on the mass: if the mass is below this limit, the gravity at the surface is not enough to retain an atmosphere. For planet f, with a total radius of 2.762 $R_{\oplus}$ and a CMF of 0.11, this limit would be approximately 2 $M_{\oplus}$. This minimum mass to retain its atmosphere is above the lower limit of the total mass set by its 1$\sigma$ uncertainties, as can be seen in Figure \ref{fig:K2-138f}, upper panel. Furthermore, planet f is the most water-rich in the K2-138 planetary system, with an upper limit of 66\% on the WMF, which is close to the 77\% maximum limit on the water content derived from measurements of cometary compositions. Similarly, planet g also presents a lower limit on the mass of the bulk of the planet of $\sim$ 2 $M_{\oplus}$ (see Figure \ref{fig:K2-138f}, lower panel). Its retrieved planetary radius is significantly lower than the observational value, with a difference of 1.3 $\sigma$. Therefore, the atmosphere of K2-138 g is significantly more extended than an atmosphere dominated by water vapour under the same irradiance conditions. This increase in atmospheric thickness is probably due to an atmosphere rich in H and He. K2-138 g could have a volatile mass fraction of up to 5\% assuming a H/He atmosphere \citep[see Fig. 1 in][]{lopez_fortney14}. A rough estimate of Jeans mass loss rates for K2-138 b yields $6\times 10^{-7}$ $\hbox{$\mathrm{M}_{\oplus}$}.\mathrm{Gyr}^{-1}$ for Jeans escape of H$_2$, and $5\times 10^{-84}$ $\hbox{$\mathrm{M}_{\oplus}$}.\mathrm{Gyr}^{-1}$ for Jeans escape of H$_2$O. For comparison, in the case of Earth the absence of H$_2$ is due to an exobase (the altitude at which particles escape) temperature much higher than the equilibrium temperature \citep{hedin83}. An exobase temperature 2 times higher than the equilibrium temperature gives a mass-loss rate of $4\times 10^{-2}$ $\hbox{$\mathrm{M}_{\oplus}$}.\mathrm{Gyr}^{-1}$. In that case, an envelope of 1--10\% of H-He mixture could be efficiently removed, leaving only heavier species such as H$_2$O. In the case of hydrodynamic escape, we obtain a mass loss rate of 2 $\hbox{$\mathrm{M}_{\oplus}$}$.$\mathrm{Gyr}^{-1}$ during the saturation regime and $1\times 10^{-2}$ $\hbox{$\mathrm{M}_{\oplus}$}$.$\mathrm{Gyr}^{-1}$ at $t=3$ Gyr.
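Integrating this rate over the star's lifetime reproduces the total loss quoted next. The sketch below assumes a saturation time of 0.08 Gyr, an assumed value that is consistent with the two rates just quoted but is not stated explicitly in the text, together with the $t^{-1.5}$ decay of $L_{\mathrm{XUV}}$:
\begin{verbatim}
import numpy as np

def integrated_xuv_loss(mdot_sat, t_sat, t_end, alpha=-1.5, n=200_000):
    """Total mass lost to an XUV-driven wind, in M_Earth.

    The rate is mdot_sat (M_Earth/Gyr) during the saturation regime
    (t < t_sat) and scales as (t / t_sat)**alpha afterwards, mirroring
    L_XUV(t); times are in Gyr. Simple rectangle-rule integration.
    """
    t = np.linspace(1e-4, t_end, n)
    mdot = np.where(t < t_sat, mdot_sat, mdot_sat * (t / t_sat) ** alpha)
    return float(np.sum(mdot) * (t[1] - t[0]))

# K2-138 b: 2 M_Earth/Gyr in saturation; with t_sat = 0.08 Gyr the rate
# falls to ~1e-2 M_Earth/Gyr at t = 3 Gyr, and the time integral gives
# ~0.4 M_Earth, consistent with the value quoted in the text.
print(integrated_xuv_loss(2.0, 0.08, 3.0))  # ~0.43
\end{verbatim}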
This yields an integrated mass loss of $0.4\hbox{$\mathrm{M}_{\oplus}$}$, or 14\% of planet b's total mass. Comparing this value to the WMF derived for planets c and d from the MCMC in Table \ref{tab:multiplanets}, we conclude that K2-138 b could have formed with a thick envelope of H$_2$O that has since been blown away by XUV photoevaporation. \subsection{TOI-178} In the TOI-178 system, planets b and c have an increasing WMF with increasing distance from the star, while planets d to g have WMFs equal to or greater than 30\%. For planets d and g, the volatile layer is likely to contain H/He, which would explain why, in our analysis, their WMFs are in the 60--70\% range and their $d_{obs-ret}$ values are greater than 1$\sigma$. TOI-178 b could have lost up to 0.83 $M_{\oplus}$ of its current mass in H$_{2}$ due to Jeans escape, and up to 0.45 $M_{\oplus}$ due to photoevaporation, while TOI-178 c could have lost 0.21 $M_{\oplus}$. In such a scenario, the original volatile mass fractions of TOI-178 b and c would be up to 0.36 and 0.10, respectively, compared to their current values. \subsection{Kepler-11} For Kepler-11, the WMF of the innermost planet is 0.27$\pm$0.10, which is compatible with a water-dominated envelope. For Kepler-11 c to e, the radius data are 1.7$\sigma$, 2.4$\sigma$ and 4.4$\sigma$ higher than the radii we retrieve with our model, ruling out the water-rich envelope hypothesis. The increasing significance level indicates that these planets have an increasing content of H/He with distance from the star. In the case of the outermost planet, Kepler-11 f, the retrieved radius is 1.9$\sigma$ lower than the data, suggesting that this planet presents less H/He than planets c to e. Nonetheless, this could be because Kepler-11 f was not able to retain a primordial atmosphere due to its low mass (2.3 $^{+2.2}_{-1.2}$ $M_{\oplus}$), compared to the higher masses of the rest of the planets in the system (> 6 $M_{\oplus}$). Furthermore, Kepler-11 f could have lost up to 0.56 $M_{\oplus}$ in H$_{2}$, according to our atmospheric Jeans escape calculation, whereas the other four planets in the system have atmospheric mass losses below 2$\times 10^{-3} \ M_{\oplus}$. \subsection{Kepler-102} The densities of the three innermost planets of Kepler-102 suggest that these are dry planets with high CMFs. Their core-to-mantle ratios could be even higher than the CMF we would expect from the Fe and Si stellar abundances of their host star. Therefore, we set the WMF equal to zero in our MCMC Bayesian analysis and leave the CMF as the only free parameter. We only take into account the mass and radius as observables. Our modelling shows that Kepler-102 b, c and d are dry Mercury-like planets, with CMF = 0.91$^{+0.09}_{-0.16}$, 0.95$^{+0.05}_{-0.30}$ and 0.80$\pm$0.14, respectively. Their high CMFs could be due to mantle evaporation \citep{Cameron85}, impacts \citep{Benz88,Benz07,Asphaug14} or planet formation in the vicinity of the rocklines \citep{Aguichine20,Scora20}. Kepler-102 e presents a WMF of 0.17$\pm$0.07, suggesting that this planet has a more volatile-rich composition than the planets that precede it. The large uncertainties in the mass of Kepler-102 f prevent us from determining whether this is a bare rocky planet with no atmosphere, or whether it presents a thin atmosphere with a maximum WMF = 0.08. In addition, Jeans H$_{2}$ atmospheric escape could have removed up to 0.02 M$_{\oplus}$ from Kepler-102 f, yielding an original volatile mass fraction between 0.07 and 0.10.
\subsection{Kepler-80} Kepler-80 d presents a high CMF, corresponding to a Fe-rich planet, similarly to Kepler-102 b and c. Kepler-80 e is consistent with a dry planet with an Earth-like CMF, whereas Kepler-80 b and c are volatile-dominated planets. Kepler-80 g shows a WMF of up to 0.15\%. Given its low mass, M = 0.065$^{+0.044}_{-0.038} \ M_{\oplus}$ \citep{Macdonald21}, planet g could not have retained a H/He atmosphere, making a secondary atmosphere with water and/or CO$_{2}$ the most likely atmospheric composition for this planet. Based on our MCMC interior-atmosphere analysis, this atmosphere could have a surface pressure of less than 300 bar. This scenario is also supported by our estimated Jeans water escape, which is between 3.26 $\times \ 10^{-3} \ M_{\oplus}$ and 3.24 $M_{\oplus}$. Both Jeans escape and XUV photoevaporation could have efficiently removed a H/He envelope. The total atmospheric mass loss and the current mass add up to a planetary mass that is similar to that of Kepler-80 e, b and c. Finally, the radius of Kepler-80 g is 2.7 $\sigma$ higher than the radius of a rocky planet with no atmosphere, which suggests that Kepler-80 g has probably retained a gaseous envelope. \section{Discussion} \label{sect:discussion} Figure~\ref{distance} shows the volatile content of the five multiplanetary systems we analysed in this work as a function of the incident flux normalised by the incident flux received by the innermost planet. In addition, we include in Figure~\ref{distance} the WMF of TRAPPIST-1 derived with our interior-atmosphere model by \cite{2021arXiv210108172A} for a homogeneous comparison. Of all systems, K2-138 presents a very clear volatile mass fraction trend: an increasing gradient in water content with distance from the host star for planets b to d, followed by a constant volatile mass fraction for the outer planets (planets e to g). A similar trend is observed in the TRAPPIST-1 system, if one neglects the fact that TRAPPIST-1 d presents a higher volatile mass fraction than its two surrounding inner and outer planets in Fig.~\ref{distance}. In \cite{2021arXiv210108172A}, the WMF is obtained by assuming a condensed water layer. However, water could be in vapour phase and mixed with CO$_{2}$ in a CO$_{2}$-dominated atmosphere, lowering the overall volatile mass fraction of TRAPPIST-1 d. In that case, the TRAPPIST-1 system could potentially show the increase-plus-plateau volatile trend observed in K2-138. Transmission spectroscopy of TRAPPIST-1 d is needed to probe the composition of its atmosphere. The multiplanetary systems TOI-178 and Kepler-11 do not show smooth increases of the water mass fraction with orbital distance, although their inner planets present significantly fewer volatiles than the outer planets. Finally, Kepler-80 and Kepler-102 could follow this trend were it not for their outermost planets, which present a lower volatile mass fraction than the planet that immediately precedes them. In addition, the estimated original volatile mass fraction of Kepler-102 f is well within the uncertainties of the WMF of Kepler-102 e, meaning that planets e and f could potentially form a plateau in the outer part of the Kepler-102 system with a water mass fraction of 10\%, similarly to TRAPPIST-1. In the case of TOI-178 and Kepler-11, it would be necessary to adopt a self-consistent modelling approach that includes the possibility of a H/He-dominated volatile layer to determine whether their volatile mass fraction trend is as clear as that of K2-138 and TRAPPIST-1.
For the other multiplanetary systems, which do not present high $d_{obs-ret}$ combined with high water mass fractions in our analysis, the volatile mass fraction would decrease for each individual planet under the assumption of a H/He envelope. Including H/He as part of the envelope would change the value of the volatile mass fraction of each individual planet, but it would not change our conclusion about the global volatile mass fraction trends in each system (i.e. the gradient-plus-plateau trend in TRAPPIST-1 and K2-138). Furthermore, the water-H/He degeneracy to which volatile-rich planets are subject can only be broken with atmospheric characterization data, such as transmission spectroscopy and phase curves. In many cases, the volatile envelope of sub-Neptunes might not be dominated by either water or H/He, but could be a mixture of both. This is supported by transmission spectroscopy of the sub-Neptune K2-18 b \citep{Tsiaras19,Benneke19,Madhusudhan20}, where water is detected, although its current trace species could be compatible with a H$_{2}$-rich atmosphere \citep{Yu21}. Additionally, meteorite outgassing experiments show that a significant fraction of H/He could be sustained in a water-dominated secondary atmosphere \citep{Thompson21}. The significant difference in volatile mass fraction between the inner planets and the outer planets of these multiplanetary systems indicates that these planets might have undergone similar formation and evolution histories. The gradient-plus-plateau trend could potentially result from the combination of planetary formation in ice-rich regions of the protoplanetary disk, atmospheric loss, and inward migration. The outer volatile-rich planets could have formed beyond the ice line prior to migration, where ice-rich solids are expected to form \citep{Mousis21}, producing planets with high volatile contents. In the systems whose planets present water mass fractions lower than 10\%, volatiles could have been simply delivered by building blocks made of chondritic minerals bearing this amount of water \citep{Daswani21}. Under those conditions, the radial drift of icy planetesimals from beyond the snowline is not required. In the case of K2-138, the three-body Laplace resonances are a sign of inward planetary migration \citep{2007ApJ...654.1110T, 2017MNRAS.470.1750I, 2017A&A...602A.101R}. For three systems, we found that their outermost planets (Kepler-11 f, Kepler-102 f and Kepler-80 g) have lower volatile mass fractions than the planets before them in the system. This could be due to their lower masses compared to the other planets in their systems, since they are not massive enough to have a surface gravity that would help them retain their atmospheres. In addition, these three low-mass, low-WMF planets could have formed further away from the water ice line than the water-rich planets in their systems, having less water-rich material available during accretion than those planets that formed in the vicinity of the water ice line. Contrasting with K2-138, the water mass fractions of the outer planets of the TRAPPIST-1 and Kepler-102 systems are compatible with 10\% \citep{agol21,2021arXiv210108172A}, a value in agreement with the water content of many asteroids of the Main Belt \citep{Vernazza15}.
This similarity suggests that the building blocks of the outer planets of these systems could have agglomerated from a mixture of ice grains coming from the snowline and anhydrous silicates formed at closer distances from the host star, following the classical formation scenarios invoked for the Main Belt \citep{2002aste.book..235R}. In that case, this implies that the migration distances of the planets in TRAPPIST-1 and Kepler-102 would have been more restricted than those of the water-rich planets in the K2-138, TOI-178 and Kepler-11 systems. \begin{figure*} \centering \includegraphics[width=0.6\textwidth]{Figures/WMFplot_rev_v3.pdf} \caption{Volatile mass fraction trends of the six multiplanetary systems analysed with our interior-atmosphere model. We show the water mass fraction estimates (see text) as a function of the stellar incident flux or irradiation, $F$, in Earth irradiation units ($S_{\oplus}$ = 1361 $W/m^{2}$) in the upper panel. In the lower panel, the incident flux is normalised with respect to the inner, most irradiated planet in each system, $F_{innermost}$. Planets whose atmospheric composition is likely to be H/He-dominated instead of water-dominated ($d_{obs-ret}$ > 1 $\sigma$) are indicated in grey color. } \label{distance} \end{figure*} We have considered the Fe/Si mole ratio as an observable of our MCMC Bayesian analysis in addition to the planetary masses and radii. Even though the Fe/Si derived from stellar abundances and that obtained from rocky planet densities could depart from a 1:1 relationship \citep{Plotnykov20,Adibekyan21}, considering the Fe/Si mole ratio contributes to reducing the degeneracy between the rock+mantle layers and the volatile layer \citep{dorn15,Dorn17,2017ApJ...850...93B}. Particularly, assuming that the planetary Fe/Si mole ratio is similar to the Fe/Si ratio of the host star improves the determination of the CMF, but does not necessarily contribute to the determination of the volatile mass fraction in volatile-rich planets \citep{Otegi20}. This is the case of the TRAPPIST-1 system, where the inclusion of the Fe/Si mole ratio as an observable in the MCMC Bayesian analysis refines the determination of the surface pressure for the inner planets of the system, but slightly reduces the uncertainties of the WMF estimates for the outer planets \citep[see Tables 3 and 4 in][]{2021arXiv210108172A}. Therefore, considering the Fe/Si mole ratio does not affect the volatile general trend of the planets within a multiplanetary system. \section{Conclusions} \label{sect:conclusion} We carried out a homogeneous interior modelling and composition analysis of five multiplanetary systems that have 5 or more low-mass planets ($M < 20 \ M_{\oplus}$), rather than compiled the volatile content estimates of previous works, to eliminate the differences between interior models as a possible bias when comparing the compositional trends between planetary systems. In the case of the TOI-178, Kepler-11, Kepler-102 and Kepler-80 systems, we used previously published mass, radius and stellar abundances data. In the case of the K2-138 system, we completed the previous analysis with an in-depth stellar spectroscopic analysis. We performed a line-by-line differential analysis of K2-138 spectra with respect to $\alpha$ Cen B and the Sun, to derive the most accurate stellar parameters and abundances given the data at hand. These were used for a new complete Bayesian analysis of the radial velocities and photometry acquired on the system. 
We explored the robustness of the planetary parameters and stellar chemical abundances in our spectroscopic analysis. We concluded that the parameters we derived are fully consistent with the ones obtained by \citet{2019A&A...631A..90L}. With our interior-atmosphere model in a MCMC framework, we obtained the posterior distribution of the compositional parameters (CMF and WMF) and the atmospheric parameters assuming a water-dominated volatile layer of each of the planets in these multiplanetary systems. We found that K2-138 and TRAPPIST-1 present a very clear volatile trend with distance from the host star. Kepler-102 could potentially present this trend. For the TOI-178 and Kepler-11 systems, our modelling ruled out the presence of large hydrosphere as responsible for their low density. For such systems, it would be necessary to include H/He as part of the volatile layer in a self-consistent interior-atmosphere model. Nonetheless, all multiplanetary systems showed that the volatile mass fraction is significantly lower for the inner planets than for the outer planets. This is consistent with a formation history that involves formation of the outer planets in the vicinity of the ice line, inwards migration and atmospheric loss of the inner planets. We discussed the possible formation and evolution pathways that might yield these volatile content trends case-by-case. Similarly, we also commented on the possible causes of the high core mass fractions of the inner planets of Kepler-102 and Kepler-80, which might involve formation in the vicinity of the rocklines. In addition, the atmospheric thickness that we obtained as a result of our Bayesian analysis (see Table~\ref{output_mcmc}) can be used to estimate the scale height of the extended atmospheres of the planets analysed in this work, which is necessary to assess the observing time and number of transits to characterise the composition of these atmospheres with transmission spectroscopy. This would confirm the exact composition of their atmospheres. To better assess possible evolutionary effects on the current composition of the planet, future work should involve the inclusion of atmospheric mass loss processes in the coupled atmosphere-interior model. In this work, we assumed that the planets do not evolve with time. The variation of water mass fraction could also have been shaped by post-formation processes such as hydrodynamic escape \citep{Bonfanti20}. Each of the discussed processes has been studied individually with interior models to constrain whether the atmospheres of low-mass planets are primordial or secondary \citep{dorn_heng18,gupta21}, but none has modelled the effects of all these combined processes on the volatile reservoir of low-mass planets. \begin{acknowledgements} We would like to thank Maria Bergemann and Matthew Raymond Gent for a preliminary analysis of the stellar spectrum. This research has made use of the services of the ESO Science Archive Facility. This research was made possible through the use of the AAVSO Photometric All-Sky Survey (APASS), funded by the Robert Martin Ayers Sciences Fund. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. 
\section{Introduction}\label{intro}

Multiplanetary systems are suitable distant laboratories to explore the diversity of small planets and their formation and evolution pathways. This is the case for Kepler-36 \citep{2012Sci...337..556C}, whose two planets, b and c, present periods of 14 and 16 days with densities of 7.5 and 0.9 \gcm3, respectively, suggesting that they may have formed in different environments within the same protoplanetary disk before migrating inwards. Furthermore, a decreasing density gradient with distance from the host star in multiplanetary systems with six to seven planets, such as TRAPPIST-1 \citep{2021arXiv210108172A,agol21} and TOI-178 \citep{Leleu21}, suggests that there might be a transition between the rocky, inner super-Earths and the outer, volatile-rich sub-Neptunes. This transition is most probably due to the presence of the snowline in the protoplanetary disk \citep{Ruden99}. Nevertheless, several limitations presently hamper the determination of the variation of the volatile mass fraction of planets within their systems, notably the precision reached on the fundamental parameters of both the planets and the star, and the different assumptions made by different interior structure models. These assumptions include whether the volatile layer of the planet is fully constituted of H/He \citep{lopez_fortney14}, an ice layer \citep{Zeng19}, an ice layer with a H/He atmosphere on top \citep{dorn15}, or a steam and/or supercritical water layer \citep{mousis20,Turbet20}. To overcome the differences in the volatile mass fraction estimates of multiplanetary systems caused by these different treatments of the volatile layer, we perform a homogeneous analysis of the interior structure and composition of several multiplanetary systems.
In our interior structure model, we consider that the volatile layer is water-dominated, following the approach of \cite{mousis20} and \cite{2021arXiv210108172A}. This analysis allows us to uncover volatile and core mass fraction trends, and their connection with planet formation and evolution. We use already published masses, radii and stellar composition data for four systems, and perform our own spectroscopic analysis to improve the parameters of one system, K2-138, whose detection was reported in \citet{christiansen18}. K2-138 harbours six small planets in a chain of near 3:2 mean-motion resonances and benefited from a ground-based radial velocity follow-up with HARPS on the 3.6 m telescope at La Silla Observatory, leading to the confirmation and mass measurements of the four inner planets \citep{2019A&A...631A..90L} with relatively good precision by today's standards. In order to bring stronger constraints on the stellar parameters and abundances, and to further reduce the degeneracies in the planetary structure modelling, we carried out an in-depth analysis of K2-138.

Section \ref{sect_spectroscopic_analysis} presents the new detailed analysis of the stellar host of the K2-138 system, which allowed us to derive the stellar fundamental parameters and elemental abundances, using the Sun and $\alpha$ Cen B as benchmarks. Section \ref{sect:pastis} describes a new Bayesian analysis of the HARPS radial velocities and K2 photometry, using the new stellar parameters. We describe our interior-atmosphere modelling in Sect. \ref{sect:methodology}, including our calculation of atmospheric mass-loss rates to infer the current presence or absence of volatiles. We present the volatile and core mass fraction trends of each multiplanetary system resulting from our homogeneous analysis in Sect. \ref{sect:results}. Finally, we discuss the planet formation and evolution mechanisms that could have shaped these compositional trends in Sect. \ref{sect:discussion}, and present our concluding remarks in Sect. \ref{sect:conclusion}.

\section{Spectroscopic analysis}\label{sect_spectroscopic_analysis}

The \object{K2-138} stellar parameters and abundances were derived from a differential, line-by-line analysis relative to the Sun. The solar abundances are determined as part of such an analysis \citep[e.g.][]{Melendez12}, and no set of reference values is assumed. We used the HARPS spectra retrieved under programme ID 198.C-0.168. These were corrected for the systemic velocity and the planetary reflex motion, removing the spectra with an S/N lower than 10 in order 47 (550 nm) and those contaminated by moonlight (S/N above 1.0 in fibre B). We then co-added the spectra into a single 1D spectrum and normalised it to the continuum. For the Sun, we used the HARPS spectra extracted from the ESO instrument archives\footnote{\url{http://archive.eso.org}}, acquired under programme ID 088.C-0323. The reduction of the solar spectrum, obtained as the spectrum of the light reflected by Vesta, is detailed in \citet{2016MNRAS.457.3637H}, and the co-addition was performed as for \object{K2-138}.
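For illustration, the exposure selection and co-addition steps described above can be sketched in a few lines of Python (a minimal sketch using numpy; the function name, the assumption that all spectra share a common wavelength grid, and the crude percentile-based normalisation are ours and do not reproduce the actual manual processing):

\begin{verbatim}
import numpy as np

def coadd_spectra(fluxes, snrs, snr_min=10.0):
    """Co-add 1D spectra that pass an S/N cut.

    fluxes : (n_spectra, n_pixels) array of extracted spectra, assumed
             here to be on a common wavelength grid and already corrected
             for the systemic velocity and planetary reflex motion.
    snrs   : (n_spectra,) S/N measured in the reference order.
    """
    keep = snrs >= snr_min              # discard low-S/N exposures
    coadded = fluxes[keep].sum(axis=0)  # simple unweighted co-addition
    # crude continuum normalisation (placeholder for the manual step)
    return coadded / np.percentile(coadded, 95)
\end{verbatim}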
The stellar parameters and abundances of 24 metal species were self-consistently determined from the spectra, plane-parallel MARCS model atmospheres \citep{gustafsson08}, and the 2017 version of the line-analysis software MOOG originally developed by \citet{sneden73}. The equivalent widths (EWs) were measured manually using IRAF\footnote{{\tt IRAF} is distributed by the National Optical Astronomy Observatories, operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} tasks assuming Gaussian profiles. Strong lines with RW = $\log$ (EW/$\lambda$) $>$ --4.80 were discarded. This constraint on the line strength was relaxed for Mg because otherwise no \ion{Mg}{i} lines would have been left.

\subsection{Stellar parameters}\label{sect_stellar_parameters}

The stellar parameters of \object{K2-138} and \object{$\alpha$ Cen B} appear to be similar (see below). Therefore, we also analysed the latter for benchmarking, because it has accurate and nearly model-independent $T_{\rm eff}$ and $\log g$ estimates from long-baseline interferometry and asteroseismology, respectively. \object{K2-138} and \object{$\alpha$ Cen B} were observed with exactly the same instrumental set-up, which ensures the highest consistency \citep{bedell14}. The \object{$\alpha$ Cen B} spectra were selected from the ESO archive, keeping those corrected for the blaze and with an S/N higher than 350 in order 47. For \object{$\alpha$ Cen B}, we adopt in the following $T_{\rm eff}$ = 5231$\pm$21 K, derived by \citet{kervella17a} from their VLTI/PIONIER measurements and the bolometric flux of \citet{boyajian13}. We also assumed $\log g$ = 4.53$\pm$0.02 dex \citep{heiter15}, based on scaling relations making use of the frequency of maximum oscillation power, $\nu_{\rm max}$, determined from radial-velocity time series by \citet{kjeldsen08}. The model parameters ($T_{\rm eff}$, $\log g$, $\xi$, and [Fe/H]) were iteratively modified until the excitation and ionisation balance of iron was fulfilled and the \ion{Fe}{i} abundances exhibited no trend with RW. The abundances of iron and the $\alpha$ elements were also required to be consistent with the values adopted for the model atmosphere. For the solar analysis, $T_{\rm eff}$ and $\log g$ were held fixed to 5777 K and 4.44 dex, respectively, whereas the microturbulence, $\xi$, was left as a free parameter. The uncertainties in the stellar parameters were computed as in \citet{morel18}.
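The iterative scheme described above can be summarised by its stopping criteria. The following Python sketch makes them explicit (the tolerances, function name, and input arrays are illustrative assumptions; the actual analysis relies on MOOG):

\begin{verbatim}
import numpy as np

def balanced(chi_exc, rw, abund_fe1, abund_fe2,
             tol_slope=1e-3, tol_ion=0.02):
    """Convergence test for an EW-based analysis of Fe lines.

    chi_exc   : excitation potentials of the Fe I lines [eV]
    rw        : reduced widths log(EW/lambda) of the Fe I lines
    abund_fe1 : line-by-line Fe I abundances for the trial parameters
    abund_fe2 : line-by-line Fe II abundances
    """
    slope_exc = np.polyfit(chi_exc, abund_fe1, 1)[0]  # Teff diagnostic
    slope_rw  = np.polyfit(rw, abund_fe1, 1)[0]       # microturbulence
    d_ion = np.mean(abund_fe1) - np.mean(abund_fe2)   # log g diagnostic
    return (abs(slope_exc) < tol_slope and abs(slope_rw) < tol_slope
            and abs(d_ion) < tol_ion)
\end{verbatim}

In practice, $T_{\rm eff}$, $\xi$, and $\log g$ are adjusted until all three diagnostics vanish simultaneously, with [Fe/H] updated to match the model-atmosphere metallicity at each step.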
We first carried out the analysis of \object{$\alpha$ Cen B} and \object{K2-138} using various iron line lists \citep{biazzo12,doyle17,feltzing01,jofre14,melendez14,morel14,reddy03,tsantaki19}. For \citet{jofre14}, we adopted their FGDa line list. The goal is to identify the line list that provides the most accurate parameters, based on a comparison with the interferometric and asteroseismic constraints at hand for \object{$\alpha$ Cen B}. To ensure the highest consistency, the spectral features on which the analysis is based were, for a given line list, exactly the same for the three stars. The parameters obtained are given in Table~\ref{tab_spectroscopic_parameters} and shown in Fig.~\ref{fig_spectroscopic_parameters}. The surface gravity of \object{$\alpha$ Cen B} appears to be underestimated in most cases. We also experimented with the LW13 Ti line list of \citet{tsantaki19} to constrain this quantity through Ti ionisation balance. As discussed by these authors, this leads to a larger value, amounting here to $\sim$0.11 dex; however, it still falls short of the seismic value. As can be seen in Fig.~\ref{fig_spectroscopic_parameters}, the only notable difference between the parameters of \object{$\alpha$ Cen B} and \object{K2-138} is that the latter is slightly poorer in metals. Indeed, a differential analysis of \object{K2-138} with respect to \object{$\alpha$ Cen B} adopting the line list of \citet{biazzo12} gives the following results: $\Delta$$T_{\rm eff}$ = --10$\pm$45 K, $\Delta$$\log g$ = +0.02$\pm$0.09 dex, $\Delta$$\xi$ = +0.03$\pm$0.09 km s$^{-1}$ and $\Delta$[Fe/H] = --0.11$\pm$0.04.

For the abundance analysis of \object{K2-138}, we adopt in the following the parameters provided by the line list of \citet{biazzo12}: $T_{\rm eff}$ = 5275$\pm$50 K, $\log g$ = 4.50$\pm$0.11, $\xi$ = 0.95$\pm$0.10 km s$^{-1}$ and [Fe/H] = +0.08$\pm$0.05. This choice is motivated by the fact that it leads to parameters that reproduce the reference ones of \object{$\alpha$ Cen B} within the errors. In addition, the metallicity is within the range of accepted values for the binary system \citep[][and references therein]{morel18}. However, from the comparison to the interferometric-based $T_{\rm eff}$ in Fig.~\ref{fig_spectroscopic_parameters}, we cannot rule out that the effective temperature of \object{K2-138} is slightly overestimated at the $\sim$50 K level. The analysis was also repeated using Kurucz atmosphere models \citep{castelli03}. The following modest deviations with respect to the default values (Kurucz -- MARCS) were found: $\Delta T_{\rm eff}$ $\sim$ +10 K, $\Delta \log g$ $\sim$ +0.02 dex, and $\Delta$[Fe/H] $\sim$ +0.02 dex. We will examine the robustness of our abundance results against such putative systematic errors in Sect.~\ref{sect_stellar_abundances}. In any case, we find that \object{K2-138} is cooler and less metal-rich than concluded by \citet{christiansen18}.

\begin{table*}[h!]
\caption{Stellar parameters of \object{$\alpha$ Cen B} and \object{K2-138}, as obtained from the various iron line lists. For iron, 42 \ion{Fe}{i} and 4 \ion{Fe}{ii} lines were used.}
\label{tab_spectroscopic_parameters}
\small
\centering
\begin{tabular}{l|cccc|cccc}
\hline\hline
 & \multicolumn{4}{c}{\object{$\alpha$ Cen B}} & \multicolumn{4}{c}{\object{K2-138}} \\
 & $T_{\rm eff}$ & $\log g$ & $\xi$ & [Fe/H] & $T_{\rm eff}$ & $\log g$ & $\xi$ & [Fe/H] \\
Iron line list & [K] & & [km s$^{-1}$] & & [K] & & [km s$^{-1}$] & \\
\hline
\citet{biazzo12} & 5285$\pm$60 & 4.49$\pm$0.14 & 0.909$\pm$0.121 & 0.200$\pm$0.051 & 5275$\pm$50 & 4.50$\pm$0.11 & 0.945$\pm$0.099 & 0.084$\pm$0.043\\
\citet{doyle17} & 5245$\pm$32 & 4.35$\pm$0.08 & 0.490$\pm$0.146 & 0.185$\pm$0.043 & 5235$\pm$30 & 4.43$\pm$0.07 & 0.450$\pm$0.146 & 0.083$\pm$0.034\\
\citet{feltzing01} & 5330$\pm$41 & 4.48$\pm$0.11 & 0.890$\pm$0.100 & 0.220$\pm$0.040 & 5280$\pm$38 & 4.46$\pm$0.10 & 0.915$\pm$0.084 & 0.100$\pm$0.035\\
\citet{jofre14} & 5210$\pm$77 & 4.31$\pm$0.11 & 0.500$\pm$0.221 & 0.181$\pm$0.063 & 5210$\pm$66 & 4.37$\pm$0.11 & 0.555$\pm$0.190 & 0.069$\pm$0.054\\
\citet{melendez14} & 5270$\pm$35 & 4.37$\pm$0.08 & 0.755$\pm$0.133 & 0.174$\pm$0.044 & 5255$\pm$24 & 4.44$\pm$0.06 & 0.767$\pm$0.105 & 0.070$\pm$0.031\\
\citet{morel14} & 5265$\pm$31 & 4.35$\pm$0.09 & 0.795$\pm$0.102 & 0.197$\pm$0.031 & 5275$\pm$31 & 4.45$\pm$0.08 & 0.870$\pm$0.089 & 0.089$\pm$0.032\\
\citet{reddy03} & 5320$\pm$38 & 4.51$\pm$0.11 & 0.900$\pm$0.062 & 0.218$\pm$0.036 & 5295$\pm$29 & 4.52$\pm$0.09 & 0.958$\pm$0.046 & 0.092$\pm$0.027\\
\citet{tsantaki19} & 5190$\pm$64 & 4.26$\pm$0.09 & 0.590$\pm$0.149 & 0.163$\pm$0.048 & 5140$\pm$81 & 4.35$\pm$0.08 & 0.485$\pm$0.198 & 0.050$\pm$0.049\\
\hline\hline
\end{tabular}
\end{table*}

\begin{figure*}[h!]
\centering
\includegraphics[trim=0 180 0 90,clip,width=0.8\hsize]{Figures/results_strips.pdf}
\caption{Results of the analysis of \object{$\alpha$ Cen B} (left panels) and \object{K2-138} (right panels) using the various iron line lists. The colour coding for each line list is indicated in the upper left panel. The parameters of \object{K2-138} determined by \citet{christiansen18} are shown in the right panels. The grey shaded areas for \object{$\alpha$ Cen B} delimit the interferometric $T_{\rm eff}$ and seismic $\log g$ values ($\pm$1 $\sigma$; see Sect.~\ref{sect_stellar_parameters} for details).}
\label{fig_spectroscopic_parameters}
\end{figure*}

\subsection{Stellar abundances}\label{sect_stellar_abundances}

For the abundance analysis, we proceed with the extensive line list of \citet{melendez14}, because the lines of some important elements (e.g. Mg) in \citet{biazzo12} are not covered by our observations. Hyperfine structure was taken into account for Sc, V, Mn, Co and Cu using atomic data from the Kurucz database\footnote{Available at \url{http://kurucz.harvard.edu/linelists.html}}, while the Eu data were taken from \citet{ivans06}. A classical curve-of-growth analysis making use of the EWs was performed for most species. However, the determination of some abundances relied on spectral synthesis. The oxygen abundance was based on \ion{[O}{i]} $\lambda$630.0, while the C abundance was also estimated from the C$_2$ lines at 508.6 and 513.5 nm. See \citet{morel14} for further details on the modelling of the \ion{[O}{i]} and C$_2$ features. Finally, the Eu abundance was based on a synthesis of a number of \ion{Eu}{ii} lines \citep[for details, see][]{2020A&A...644A..19W}. For \object{K2-138}, $v \sin i$ = 2.5 km s$^{-1}$ and a macroturbulence of 1.9 km s$^{-1}$ were assumed, based on the analysis reported in \citet{2019A&A...631A..90L}. An attempt was made to model \ion{Li}{i} $\lambda$670.8. The line is not detected in \object{K2-138}, indicating that the Li abundance is much lower than solar.

The abundances are provided in Table \ref{tab_spectroscopic_abundances}. The random uncertainties were estimated following \citet{morel18}. For the spectral synthesis, additional sources of error (e.g., continuum placement) were taken into account \citep[see][]{morel14}. The O abundance is based on a single line that is weak (EW $<$ 10 m\AA) and blended with a Ni line; it is therefore uncertain. The same is true for the Mg abundance, which is based on three strong lines exhibiting quite a large line-to-line scatter ($\sim$0.05 dex). The impact of lowering $T_{\rm eff}$ by 50 K (see Sect.~\ref{sect_stellar_parameters}) is also given in Table \ref{tab_spectroscopic_abundances}. The Sc, Ti and Cr abundances were derived from both neutral and singly ionised species. Ionisation balance is fulfilled within the uncertainties in all cases assuming the default parameters. However, it can be noted that the agreement systematically degrades for the cooler $T_{\rm eff}$ scale.

\begin{table}[h!]
\caption{Abundance results for \object{K2-138}.
The last column shows the impact of lowering $T_{\rm eff}$ by 50 K (see Sect.~\ref{sect_stellar_parameters}), while keeping $\log g$ and $\xi$ unchanged.}
\label{tab_spectroscopic_abundances}
\centering
\begin{tabular}{l|cc}
\hline\hline
Abundance ratio & Default $T_{\rm eff}$ scale & Cooler $T_{\rm eff}$ scale \\
\hline
$[$Fe/H$]$ & +0.08$\pm$0.05 (42+4) & +0.01\\
\hline
$[$\ion{C}{i}/Fe$]$ & --0.04$\pm$0.08 (3) & +0.03\\
$[$C$_2$/Fe$]$ & --0.07$\pm$0.09 (2) & --0.01 \\
$[$\ion{O}{i}/Fe$]$ & +0.03$\pm$0.10 (1) & --0.01\\
$[$\ion{Na}{i}/Fe$]$ & +0.02$\pm$0.06 (3) & --0.04\\
$[$\ion{Mg}{i}/Fe$]$ & --0.06$\pm$0.08 (3) & --0.05\\
$[$\ion{Al}{i}/Fe$]$ & +0.01$\pm$0.05 (2) & --0.04\\
$[$\ion{Si}{i}/Fe$]$ & +0.01$\pm$0.04 (10) & +0.00\\
$[$\ion{Ca}{i}/Fe$]$ & +0.04$\pm$0.06 (3) & --0.05\\
$[$\ion{Sc}{i}/Fe$]$ & --0.03$\pm$0.10 (4) & --0.06\\
$[$\ion{Sc}{ii}/Fe$]$ & --0.01$\pm$0.05 (5) & --0.01\\
$[$\ion{Ti}{i}/Fe$]$ & +0.01$\pm$0.08 (14) & --0.07\\
$[$\ion{Ti}{ii}/Fe$]$ & +0.01$\pm$0.06 (10) & +0.00\\
$[$\ion{V}{i}/Fe$]$ & +0.03$\pm$0.08 (5) & --0.07\\
$[$\ion{Cr}{i}/Fe$]$ & +0.03$\pm$0.05 (7) & --0.04\\
$[$\ion{Cr}{ii}/Fe$]$ & +0.08$\pm$0.04 (4) & +0.01\\
$[$\ion{Mn}{i}/Fe$]$ & +0.04$\pm$0.07 (5) & --0.05\\
$[$\ion{Co}{i}/Fe$]$ & +0.00$\pm$0.06 (7) & --0.03\\
$[$\ion{Ni}{i}/Fe$]$ & +0.00$\pm$0.04 (14) & --0.02\\
$[$\ion{Cu}{i}/Fe$]$ & --0.02$\pm$0.03 (2) & --0.02\\
$[$\ion{Zn}{i}/Fe$]$ & --0.01$\pm$0.03 (3) & +0.00\\
$[$\ion{Sr}{i}/Fe$]$ & +0.01$\pm$0.09 (1) & --0.07\\
$[$\ion{Y}{ii}/Fe$]$ & +0.02$\pm$0.07 (4) & --0.01\\
$[$\ion{Zr}{ii}/Fe$]$ & +0.06$\pm$0.06 (2) & --0.02\\
$[$\ion{Ba}{ii}/Fe$]$ & +0.02$\pm$0.07 (1) & --0.02\\
$[$\ion{Ce}{ii}/Fe$]$ & +0.01$\pm$0.08 (5) & --0.02\\
$[$\ion{Nd}{ii}/Fe$]$ & +0.07$\pm$0.05 (3) & --0.02\\
$[$\ion{Eu}{ii}/Fe$]$ & +0.04$\pm$0.08 (3) & --0.02\\
\hline
$[$\ion{C}{i}/\ion{O}{i}$]$ & --0.07$\pm$0.13 & +0.04\\
$[$C$_2$/\ion{O}{i}$]$ & --0.10$\pm$0.12 & +0.00\\
$[$\ion{Mg}{i}/\ion{Si}{i}$]$ & --0.07$\pm$0.08 & --0.05\\
\hline\hline
\end{tabular}
\tablefoot{The number in brackets gives the number of lines the abundance is based on. For iron, the number of \ion{Fe}{i} and \ion{Fe}{ii} lines is given.}
\end{table}

\section{\texttt{PASTIS} analysis} \label{sect:pastis}

The joint analysis of the HARPS radial velocities, \textit{K2} light curve and spectral energy distribution (SED) was made using the Bayesian software \texttt{PASTIS} \citep{2014MNRAS.441..983D}. The improvements with respect to our previous analysis in \citet{2019A&A...631A..90L} are that (1) the radial velocities were nightly binned to average out the correlated high-frequency noise resulting from granulation and instrumental calibrations, and (2) the new stellar parameters, as derived in Sect.~\ref{sect_stellar_parameters}, were used as priors. We ran two sets of analyses, one with the adopted $T_{\rm eff}$ and one with $T_{\rm eff}$ lowered by 50 K, as the latter cannot be ruled out (see Sect.~\ref{sect_stellar_parameters}). The magnitudes used to construct the SED were taken from the American Association of Variable Star Observers Photometric All-Sky Survey \citep{2015AAS...22533616H} archive in the optical, and from the Two-Micron All-Sky Survey \citep{2014AJ....148...81M} and the Wide-field Infrared Survey Explorer \citep{2014yCat.2328....0C} archives in the near-infrared. The SED was modelled with the BTSettl stellar atmospheric models \citep{2012RSPTA.370.2765A}.
The radial velocities were modelled with Keplerian orbit models for the planetary contribution and with Gaussian process regression for the correlated noise induced by stellar activity. For the latter, the following quasi-periodic kernel was used:
\begin{equation}
\begin{split}
k(t_i, t_j) = A^2 \exp \left[ - \frac{1}{2} \left( \frac{t_i - t_j}{\lambda_1} \right)^2 - \frac{2}{\lambda_2^2} \sin^2 \left( \frac{\pi \left| t_i - t_j \right|}{P_{\rm rot}} \right) \right] \\
+ \delta_{ij} \sqrt{\sigma_i^2 + \sigma_J^2}
\end{split}
\end{equation}
where $A$ can be identified with the radial velocity modulation amplitude, $P_{\rm rot}$ with the stellar rotation period, $\lambda_1$ with the correlation decay timescale of the active regions, $\lambda_2$ with the relative contribution between the periodic and the decaying components, and $\sigma_J$ with the radial velocity jitter.
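For illustration, this kernel can be evaluated numerically as in the minimal Python sketch below (the hyperparameter values are placeholders, not the posterior values of our fit; the diagonal term follows the equation exactly as written above):

\begin{verbatim}
import numpy as np

def quasi_periodic_kernel(t, sigma, A=5.0, P_rot=25.0,
                          lam1=50.0, lam2=0.5, sigma_jit=1.0):
    """Covariance matrix of the quasi-periodic RV noise model.

    t     : array of observation times [d]
    sigma : array of RV uncertainties (same length as t)
    """
    dt = t[:, None] - t[None, :]
    decay = -0.5 * (dt / lam1) ** 2
    periodic = -(2.0 / lam2**2) * np.sin(np.pi * np.abs(dt) / P_rot) ** 2
    k = A**2 * np.exp(decay + periodic)
    # white-noise term on the diagonal, as written in the equation
    k[np.diag_indices_from(k)] += np.sqrt(sigma**2 + sigma_jit**2)
    return k
\end{verbatim}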
To model the photometry, we used the JKT Eclipsing Binary Orbit Program \citep{2008MNRAS.386.1644S} with an oversampling factor of 30 to account for the long integration time of \textit{Kepler} \citep{2010MNRAS.408.1758K}. The star was modelled with the PARSEC evolution tracks \citep{2012MNRAS.427..127B}, taking into account the asterodensity profiling \citep{2014MNRAS.440.2164K}, and with the limb-darkening coefficients taken from \citet{2011A&A...529A..75C}. We ran 80 Markov chain Monte Carlo (MCMC) chains with $10^6$ iterations each, for the two different effective temperatures, to explore the posterior distributions of the parameters. The convergence was assessed with a Kolmogorov-Smirnov test \citep{10.2307/1391067}. The burn-in phase was then removed \citep{2014MNRAS.441..983D} and the remaining iterations of the converged chains were merged. Both analyses, with the adopted $T_{\rm eff}$ and with $T_{\rm eff}$ lowered by 50 K, converged towards the same distributions, and in particular the same median effective temperature. We therefore only report the posteriors of the analysis based on $T_{\rm eff} = 5275$ K, along with the priors used; these are shown in Table \ref{MCMCprior}.

The parameters obtained are fully compatible with those of \citet{2019A&A...631A..90L}. In particular, we find masses of $2.80^{+0.94}_{-0.96} ~\hbox{$\mathrm{M}_{\oplus}$}$, $5.95^{+1.17}_{-1.12} ~\hbox{$\mathrm{M}_{\oplus}$}$, $7.20\pm1.40 ~\hbox{$\mathrm{M}_{\oplus}$}$, and $11.28^{+2.78}_{-2.72} ~\hbox{$\mathrm{M}_{\oplus}$}$ for planets b, c, d, and e, respectively, corresponding to precisions of $34\%$, $20\%$, $19\%$, and $25\%$. For planets f and g, the median values of the masses are $2.43^{+3.05}_{-1.75} ~\hbox{$\mathrm{M}_{\oplus}$}$ and $2.45^{+2.92}_{-1.74} ~\hbox{$\mathrm{M}_{\oplus}$}$, respectively, corresponding to a significance of $1.4 ~\sigma$ for both planets. For planet g, the non-detection is not surprising, given its relatively long orbital period and a radius compatible with a low-density planet. Conversely, for planet f, we cannot exclude an absorption of the signal by the Gaussian process, given that its orbital period is half the stellar rotation period. Further discussion on the constraints and upper limits on the planetary masses can be found in \citet{2019A&A...631A..90L}. The parameters of the planets were then used as inputs for the planet modelling described in the following section.

\section{Composition analysis} \label{sect:methodology}

\subsection{Interior-atmosphere model} \label{sect:interior}

We used the internal structure model initially developed by \citet{2017ApJ...850...93B} and \citet{mousis20}, and recently updated by \citet{2021arXiv210108172A} for the study of the internal composition of the TRAPPIST-1 planets. The model can accommodate a surface water layer. To consider the effect of the stellar irradiation on this layer, we include a water-rich atmosphere on top of the high-pressure water layer or the mantle by coupling the interior to an atmosphere model. The atmospheric model computes the temperature at the bottom of the atmosphere, which is the boundary condition for the interior model. As a result, our current atmosphere-interior model allows us to assess in detail how well a close-in planet, such as the ones we analyse in Sect. \ref{sect:results}, can support a water-rich layer in liquid, vapour, or supercritical state, depending on the surface temperature. Since the volatile layer is assumed to be water-dominated, in Sect. \ref{sect:results} we use the terms volatile mass fraction and water mass fraction interchangeably. The planets in the multiplanetary systems we analyse are highly irradiated, with irradiation temperatures ranging from approximately 1300 K to 500 K (see Table \ref{output_mcmc}). Depending on the corresponding surface conditions, if water is present, it can be in vapour or supercritical state.

The input variables of the interior structure model are the total planetary mass, the core mass fraction (CMF) and the water mass fraction (WMF), while the model outputs the total planetary radius and the Fe/Si mole ratio. In order to explore the parameter space, we performed a complete Bayesian analysis to obtain the probability density distributions of the parameters. This Bayesian analysis was carried out via the implementation of an MCMC algorithm, by adapting the method proposed by \citet{dorn15} to our interior and atmosphere model, as described in \citet{2021arXiv210108172A}. Initial values of the three input parameters were randomly drawn from their prior distributions, which correspond to a Gaussian distribution for the mass, and uniform distributions for the CMF and the WMF. We establish a maximum WMF of 80\% in the uniform prior, based on the maximum water content found in Solar System bodies \citep{mckay19}. For the atmosphere, we have considered a composition of 99\% water and 1\% carbon dioxide. The atmosphere and the interior are coupled at a pressure of 300 bar. We consider the stellar spectral distribution of a Sun-like star for the calculation of the Bond albedo. The atmospheric mass, thickness, Bond albedo, and temperature at the bottom of the atmosphere are provided by a grid generated with the atmospheric model described in \citet{marcq17} and \cite{pluriel19}.
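A minimal sketch of the prior sampling step is given below (Python; the example mass and the rejection of unphysical draws with CMF + WMF > 1 are our assumptions for illustration, while the Gaussian mass prior, the uniform CMF and WMF priors, and the 80\% WMF cap are as stated above):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

def draw_priors(m_obs, m_err, wmf_max=0.80):
    """Draw one (mass, CMF, WMF) triple from the MCMC priors."""
    while True:
        mass = rng.normal(m_obs, m_err)       # Gaussian prior on the mass
        cmf = rng.uniform(0.0, 1.0)           # uniform prior on the CMF
        wmf = rng.uniform(0.0, wmf_max)       # uniform prior, capped WMF
        if mass > 0.0 and cmf + wmf <= 1.0:   # mantle mass fraction >= 0
            return mass, cmf, wmf

# example: one prior draw for a 5.95 +/- 1.15 Earth-mass planet
print(draw_priors(5.95, 1.15))
\end{verbatim}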
\subsection{Atmospheric escape} \label{sect:escape}

Atmospheric mass loss in super-Earths and sub-Neptunes can be produced by thermal or non-thermal escape processes, such as Jeans escape \citep{jeans25}, XUV photoevaporation \citep{owen12}, or core-powered mass loss \citep{ginzburg16}. These processes might shape the trend of the volatile mass fraction (water, H/He, or a combination of both) in the inner region of multiplanetary systems. An estimate of the mass loss rates of different species can help discriminate between possible interior compositions. In our Solar System, Jeans escape efficiently removed lighter gases, such as H$_2$ and He, from the telluric planets, leaving heavier molecules behind. For the planets of the K2-138 system, we estimate Jeans mass loss rates \citep{aguichine21} by using as input the masses, radii and equilibrium temperatures obtained from our analyses (Sects. \ref{sect_spectroscopic_analysis} and \ref{sect:pastis}). For the other multiplanetary systems we analyse, we use the parameters provided by the references mentioned in Sect. \ref{sect:multipl_data}.

The hydrodynamic escape of H-He is driven by the incident XUV flux from the host star. A star's XUV luminosity $L_{\mathrm{XUV}}$ is usually constant at early stages, in the so-called saturation regime (a few tens of Myr), and then evolves as a power-law function of time, $L_{\mathrm{XUV}}\propto t^\alpha$, with $\alpha\simeq -1.5$ \citep{sanzforcada11}. The mass loss rate can be computed following \citet{owen12}:
\begin{equation}
\dot{m} = \eta \frac{L_{\mathrm{XUV}} R_b^3}{GM_b (2a_b)^2},
\label{eq:dotm-xuv}
\end{equation}
where $G$ is the gravitational constant and $\eta=0.1$ is an efficiency factor \citep{owen12}. Following the approach in \cite{aguichine21}, we integrate Eq. (\ref{eq:dotm-xuv}) over time to calculate the total mass lost, assuming that only $L_{\mathrm{XUV}}$ varies, i.e. that the planetary mass and radius do not change significantly.
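For concreteness, this time integration can be sketched as follows (Python with scipy; the saturated XUV luminosity, saturation time, and planet parameters below are illustrative placeholders, not the values adopted for any specific planet):

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

G = 6.674e-11                               # gravitational constant [SI]
M_EARTH, R_EARTH, AU = 5.972e24, 6.371e6, 1.496e11
GYR = 3.156e16                              # seconds in 1 Gyr

def lost_mass(L_sat, t_sat, t_age, R_p, M_p, a, eta=0.1, alpha=-1.5):
    """Integrated XUV-driven mass loss [kg]: L_XUV is constant up to
    t_sat and decays as (t/t_sat)**alpha afterwards; M_p, R_p fixed."""
    def mdot(t):
        L = L_sat if t < t_sat else L_sat * (t / t_sat) ** alpha
        return eta * L * R_p**3 / (G * M_p * (2.0 * a) ** 2)
    return quad(mdot, 0.0, t_age, points=[t_sat])[0]

# illustrative numbers only: 5e22 W saturated XUV luminosity for 0.1 Gyr,
# a 2.8 Earth-mass, 1.5 Earth-radius planet at 0.034 au, up to 3 Gyr
dm = lost_mass(5e22, 0.1 * GYR, 3.0 * GYR,
               1.5 * R_EARTH, 2.8 * M_EARTH, 0.034 * AU)
print(dm / M_EARTH, "Earth masses lost")
\end{verbatim}

Because the integrand decays as $t^{-1.5}$ after the saturation regime, most of the mass is lost during the first few hundred Myr.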
\subsection{Multiplanetary systems parameters} \label{sect:multipl_data}

In addition to K2-138, we select a sample of multiplanetary systems that host only low-mass planets ($M$ < 20 $M_{\oplus}$), with five or more planets that have masses and radii available. These systems are TOI-178, Kepler-11, Kepler-102 and Kepler-80. For K2-138, we take the planetary masses and radii derived in Sect. \ref{sect:pastis}, and the corrected Fe/Si mole ratio. The latter was estimated as Fe/Si = 0.77$\pm$0.07, using the metallicity and the Mg, Al, Si, Ca and Ni abundances presented in Sect. \ref{sect_stellar_abundances}, following \cite{sotin07} and \cite{2017ApJ...850...93B}. For the other systems, we performed the same modelling, taking masses, radii and stellar abundances from \cite{Leleu21} for TOI-178; \cite{Lissauer11} and \cite{Brewer16} for Kepler-11; \cite{Marcy14} and \cite{Brewer18} for Kepler-102; and \cite{Macdonald16} and \cite{Macdonald21} for Kepler-80. The Fe/Si mole ratios of these systems are computed from their respective host stellar abundances, in the same way as for K2-138.

\begin{table*}[]
\centering
\begin{tabular}{cccccc}
\hline \hline
System & Planet & M [$M_{\oplus}$] & R [$R_{\oplus}$] & $a$ [AU] & $T_{irr}$ [K] \\ \hline
\multirow{6}{*}{TOI-178} & b & 1.5$^{+0.39}_{-0.44}$ & 1.152$^{+0.073}_{-0.070}$ & 0.026 & 1040 \\
 & c & 4.77$^{+0.55}_{-0.68}$ & 1.669$^{+0.114}_{-0.099}$ & 0.037 & 873 \\
 & d & 3.01$^{+0.80}_{-1.03}$ & 2.572$^{+0.075}_{-0.078}$ & 0.059 & 691 \\
 & e & 3.86$^{+1.25}_{-0.94}$ & 2.207$^{+0.088}_{-0.090}$ & 0.078 & 600 \\
 & f & 7.72$^{+1.67}_{-1.52}$ & 2.287$^{+0.108}_{-0.110}$ & 0.104 & 521 \\
 & g & 3.94$^{+1.31}_{-1.62}$ & 2.87$^{+0.14}_{-0.13}$ & 0.128 & 471 \\ \hline
\multirow{5}{*}{Kepler-11} & b & 4.3$^{+2.2}_{-2.0}$ & 1.97$\pm$0.19 & 0.091 & 953 \\
 & c & 13.5$^{+4.8}_{-6.1}$ & 3.15$\pm$0.30 & 0.106 & 883 \\
 & d & 6.1$^{+3.1}_{-1.7}$ & 3.43$\pm$0.32 & 0.159 & 721 \\
 & e & 8.4$^{+2.5}_{-1.9}$ & 4.52$\pm$0.43 & 0.194 & 653 \\
 & f & 2.3$^{+2.2}_{-1.2}$ & 2.61$\pm$0.25 & 0.250 & 575 \\ \hline
\multirow{5}{*}{Kepler-102} & b & 0.41$\pm$1.6 & 0.47$\pm$0.02 & 0.055 & 868 \\
 & c & -1.58$\pm$2.0 & 0.58$\pm$0.02 & 0.067 & 786 \\
 & d & 3.80$\pm$1.8 & 1.18$\pm$0.04 & 0.086 & 597 \\
 & e & 8.93$\pm$2.0 & 2.22$\pm$0.07 & 0.117 & 694 \\
 & f & 0.62$\pm$3.3 & 0.88$\pm$0.03 & 0.165 & 501 \\ \hline
\multirow{5}{*}{Kepler-80} & d & 5.95$^{+0.65}_{-0.60}$ & 1.309$^{+0.036}_{-0.032}$ & 0.033 & 990 \\
 & e & 2.97$^{+0.76}_{-0.65}$ & 1.330$^{+0.039}_{-0.038}$ & 0.044 & 863 \\
 & b & 3.50$^{+0.63}_{-0.57}$ & 2.367$^{+0.055}_{-0.052}$ & 0.058 & 750 \\
 & c & 3.49$^{+0.63}_{-0.57}$ & 2.507$^{+0.061}_{-0.058}$ & 0.071 & 679 \\
 & g & 0.065$^{+0.044}_{-0.038}$ & 1.05$^{+0.22}_{-0.24}$ & 0.094 & 588 \\ \hline
\end{tabular}
\caption{Masses, radii, semi-major axes and irradiation temperatures of the planets in the multiplanetary systems TOI-178, Kepler-11, Kepler-102 and Kepler-80. References can be found in Sect. \ref{sect:multipl_data}.}
\label{tab:my-table}
\end{table*}

\section{Compositional trends in multiplanetary systems} \label{sect:results}

Table \ref{tab:multiplanets} shows the retrieved CMF and WMF with their one-dimensional 1$\sigma$ uncertainties from our Bayesian analysis, as well as the atmospheric mass loss estimates. To assess how compatible a water-rich composition is with the data, we also show the difference between the observational mean and the retrieved mean, calculated as $d_{obs-ret}$ = max$\left\lbrace | R_{data}-R | , | M_{data}-M | \right\rbrace$ and expressed in units of the corresponding observational 1$\sigma$ uncertainties. If $d_{obs-ret}$ is below 1$\sigma$, the retrieved mass and radius agree within the 1$\sigma$ confidence intervals with the observed mass and radius, meaning that the density of the planet is compatible with a volatile layer dominated by water. A high $d_{obs-ret}$ (> 1$\sigma$) combined with a high WMF in our model indicates that a water-dominated atmosphere is not inflated enough to account for the low density of the planet, pointing to an atmosphere containing lighter gases, probably H and He. Table \ref{output_mcmc} shows the irradiation temperatures and the retrieved atmospheric parameters of the planets whose density is compatible with the presence of a volatile layer dominated by water.
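This compatibility metric can be written compactly as in the short Python sketch below (the normalisation by the observational 1$\sigma$ uncertainties reflects our reading of the comparison to 1$\sigma$ above, and the example numbers are placeholders):

\begin{verbatim}
def d_obs_ret(r_data, r_err, r_model, m_data, m_err, m_model):
    """Observed-minus-retrieved distance in units of the observational
    1-sigma uncertainties; the larger of the radius and mass offsets."""
    return max(abs(r_data - r_model) / r_err,
               abs(m_data - m_model) / m_err)

# placeholder values for illustration only
print(d_obs_ret(r_data=2.0, r_err=0.10, r_model=2.15,
                m_data=5.0, m_err=1.0, m_model=4.6))   # -> 1.5
\end{verbatim}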
\subsection{K2-138} \begin{table*}[h] \centering \begin{tabular}{cccccccc} \hline \hline System & Planet & CMF & WMF & $d_{obs-ret}$ & $\Delta M_{H2}$ [$M_{\oplus}$] & $\Delta M_{H2O}$ [$M_{\oplus}$] & $\Delta M_{XUV}$ [$M_{\oplus}$] \\ \hline \multirow{6}{*}{K2-138} & b & 0.27$\pm$0.02 & 0.000$_{-0.000}^{+0.007}$ & 1.5 $\sigma$ & 0.132 & < 0.01 & 0.40 \\ & c & 0.23$\pm$0.02 & 0.13$\pm$0.04 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & < 0.01 \\ & d & 0.22$\pm$0.03 & 0.17$\pm$0.05 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & < 0.01 \\ & e & 0.11$\pm$0.02 & 0.57$\pm$0.08 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & < 0.01 \\ & f & 0.11$\pm$0.02 & 0.60$\pm$0.07 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & < 0.01 \\ & g & 0.12$\pm$0.05 & 0.55$\pm$0.18 & 1.3 $\sigma$ & < 0.01 & < 0.01 & < 0.01 \\ \hline \multirow{6}{*}{TOI-178} & b & 0.21$\pm$0.30 & 0 & \textless 1 $\sigma$ & 0.83 & < 0.01 & 0.45 \\ & c & 0.30$\pm$0.02 & 0.02$^{+0.04}_{-0.02}$ & \textless 1 $\sigma$ & < 0.01 & < 0.01 & 0.21 \\ & d & 0.10$\pm$0.01 & 0.69$\pm$0.05 & 1.3 $\sigma$ & 0.16 & < 0.01 & 0.48 \\ & e & 0.18$\pm$0.02 & 0.40$\pm$0.06 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & 0.13 \\ & f & 0.22$\pm$0.03 & 0.28$\pm$0.10 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & 0.04 \\ & g & 0.10$\pm$0.01 & 0.58$\pm$0.16 & 3.0 $\sigma$ & < 0.01 & < 0.01 & 0.11 \\ \hline \multirow{5}{*}{Kepler-11} & b & 0.20$\pm$0.04 & 0.27$\pm$0.10 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & 0.10 \\ & c & 0.18$\pm$0.01 & 0.33$\pm$0.04 & 1.7 $\sigma$ & < 0.01 & < 0.01 & 0.10 \\ & d & 0.10$\pm$0.02 & 0.65$\pm$0.05 & 2.4 $\sigma$ & < 0.01 & < 0.01 & 0.13 \\ & e & 0.12$\pm$0.01 & 0.55$\pm$0.04 & 4.4 $\sigma$ & < 0.01 & < 0.01 & 0.14 \\ & f & 0.14$\pm$0.06 & 0.47$\pm$0.10 & 1.9 $\sigma$ & 0.56 & < 0.01 & 0.06 \\ \hline \multirow{5}{*}{Kepler-102} & b & 0.91$^{+0.09}_{-0.16}$ & 0 & \textless 1 $\sigma$ & 0.13 & < 0.01 & 0.03 \\ & c & 0.95$^{+0.05}_{-0.30}$ & 0 & \textless 1 $\sigma$ & 0.10 & < 0.01 & 0.03 \\ & d & 0.80$\pm$0.14 & 0 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & 0.03 \\ & e & 0.22$\pm$0.02 & 0.17$\pm$0.07 & \textless 1 $\sigma$ & 0.01 & < 0.01 & 0.03 \\ & f & 0.27$\pm$0.09 & 0.04$\pm$0.04 & \textless 1 $\sigma$ & 0.02 & < 0.01 & 0.01 \\ \hline \multirow{5}{*}{Kepler-80} & d & 0.97 $^{+0.03}_{-0.05}$ & 0 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & 0.35 \\ & e & 0.43$\pm$0.18 & 0 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & 0.29 \\ & b & 0.13$\pm$0.02 & 0.58$\pm$0.07 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & 0.11 \\ & c & 0.09$\pm$0.01 & 0.70$\pm$0.04 & \textless 1 $\sigma$ & < 0.01 & < 0.01 & 0.13 \\ & g & 0.31$\pm$0.02 & < 1.5 $\times \ 10^{-3}$ & \textless 1 $\sigma$ & 140 & 3.23 & 0.60 \\ \hline \end{tabular} \caption{Retrieved core mass fraction (CMF) and water mass fraction (WMF) of planets in the multiplanetary systems K2-138, TOI-178, Kepler-11, Kepler-102 and Kepler-80, with our interior-atmosphere model. A low $d_{obs-ret}$ indicates that the assumption of a water-dominated atmosphere is adequate for a particular planet (see text). 
$\Delta M_{H2}$, $\Delta M_{H2O}$ and $\Delta M_{XUV}$ correspond to the maximum estimates of the atmospheric mass lost to H$_{2}$ Jeans escape, H$_{2}$O Jeans escape, and XUV photoevaporation, respectively.}
\label{tab:multiplanets}
\end{table*}

\begin{table*}[h]
\centering
\begin{tabular}{ccccc}
\hline \hline
Planet & $T_{irr}$ [K] & $T_{300}$ [K] & $z_{atm}$ [km] & $A_{B}$ \\ \hline
K2-138 b & 1291 & 4110$\pm$44 & 932$\pm$151 & 0.213$\pm$0.001 \\
K2-138 c & 1125 & 3900$\pm$23 & 711$\pm$103 & 0.214$\pm$0.002 \\
K2-138 d & 978 & 3614$\pm$56 & 635$\pm$84 & 0.218$\pm$0.002 \\
K2-138 e & 850 & 3383$\pm$39 & 673$\pm$90 & 0.231$\pm$0.001 \\
K2-138 f & 735 & 3396$\pm$116 & 1483$\pm$546 & 0.260$\pm$0.004 \\
TOI-178 c & 873 & 3344$\pm$33 & 500$\pm$60 & 0.226$\pm$0.001 \\
TOI-178 d & 691 & 3254$\pm$45 & 1181$\pm$224 & 0.264$\pm$0.004 \\
TOI-178 e & 600 & 2930$\pm$31 & 690.7$\pm$133 & 0.225$\pm$0.018 \\
TOI-178 f & 521 & 2610$\pm$23 & 368$\pm$60 & 0.298$\pm$0.007 \\
Kepler-11 b & 953 & 3697$\pm$133 & 840$\pm$313 & 0.221$\pm$0.005 \\
Kepler-102 e & 694 & 2947$\pm$29 & 360$\pm$55 & 0.243$\pm$0.004 \\
Kepler-102 f & 501 & 2784$\pm$102 & 837$\pm$290 & 0.347$\pm$0.013 \\
Kepler-80 b & 750 & 3344$\pm$33 & 1133$\pm$148 & 0.253$\pm$0.002 \\
Kepler-80 c & 679 & 3219$\pm$29 & 1128$\pm$114 & 0.266$\pm$0.003 \\ \hline
\end{tabular}
\caption{Atmospheric parameters retrieved for the planets whose composition can accommodate a water-dominated atmosphere (see text). These parameters are the equilibrium temperature assuming a null albedo ($T_{irr}$), the atmospheric temperature at 300 bar ($T_{300}$), the thickness of the atmosphere from 300 bar to 20 mbar ($z_{atm}$), and the planetary Bond albedo ($A_{B}$).}
\label{output_mcmc}
\end{table*}

\begin{figure}
\centering
\includegraphics[width=\hsize]{Figures/conf_interval_ternary_K138.pdf}
\caption{1$\sigma$ confidence regions derived from the 2D posterior distributions of the CMF and WMF obtained with the planetary interior Bayesian analysis. Axes indicate the core mass fraction (CMF), the water mass fraction (WMF) and the mantle mass fraction (MMF). The latter is defined as MMF = 1 - (CMF+WMF).}
\label{ternary}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=\hsize]{Figures/grid_fg_final.pdf}
\caption{Total mass and radius of K2-138 f (upper panel) and K2-138 g (lower panel) from the different realisations of the MCMC (black crosses). The solid blue lines show the mass and radius measurements from \texttt{PASTIS}, and the dashed lines give the related uncertainties. The red line indicates the limit below which the planet cannot maintain an atmosphere.}
\label{fig:K2-138f}
\end{figure}

Figure~\ref{ternary} displays the 1$\sigma$ confidence intervals derived from the 2D distributions of the WMF and CMF of the K2-138 planets in a ternary diagram. The confidence regions are aligned along a line almost parallel to the isolines of constant CMF. This alignment is due to the constraint on the Fe/Si mole ratio we have imposed on the whole planetary system: the confidence regions are spread over the Fe/Si isolines, whose constant values range from Fe/Si = 0.70 to 0.84 \citep[see][their Figure 4]{2017ApJ...850...93B}. For K2-138 b, the results set an upper limit of 0.7\% on the WMF, which means that this planet is unlikely to have a significant amount of volatiles, including water. The retrieved planetary radius is 1.538 $R_{\oplus}$, which is 1.5$\sigma$ larger than the radius measured in the analysis of Sect.~\ref{sect:pastis}.
This is due to the extended atmosphere needed to produce the temperature and pressure conditions required to hold supercritical water at the surface ($P_{surf} > 300$ bar). If we assume a mass of 2.80 $M_{\oplus}$ and a CMF of 0.27, a vapour atmosphere with a maximum surface pressure of 300 bar would yield a WMF of 0.01\% (the WMF of Earth is 0.05\%) and a radius of 1.461 $R_{\oplus}$, which is well within the 1$\sigma$ confidence interval of the observed value. Therefore, we can conclude that K2-138 b is a volatile-poor planet that might present a secondary atmosphere with a low surface pressure ($P_{surf} \leq 300$ bar) or no atmosphere at all (WMF = 0). In addition, it is the planet with the highest CMF in the system, showing that the planets in this system are likely to have less massive cores than Earth (CMF = 0.325) and the other terrestrial planets of the Solar System.

The atmospheric model also establishes a minimum surface gravity of 2 m s$^{-2}$ to retain an atmosphere. For planets b, c, d and e, the 1$\sigma$ intervals on the masses exclude such a low surface gravity, but this is not the case for planets f and g. For planet f, this minimum surface gravity can be translated into a lower limit on the mass: below it, the gravity at the surface would not be enough to retain an atmosphere. With a total radius of 2.762 $R_{\oplus}$ and a CMF of 0.11, this limit is approximately 2 $M_{\oplus}$, which is above the lower bound of the total mass set by the 1$\sigma$ uncertainties, as can be seen in the upper panel of Fig.~\ref{fig:K2-138f}. Furthermore, planet f is the most water-rich planet of the K2-138 system, with an upper limit of 66\% on the WMF, close to the 77\% maximum limit on the water content derived from measurements of cometary compositions. Similarly, planet g also presents a lower limit of $\sim$2 $M_{\oplus}$ on the mass of the bulk of the planet (see Fig.~\ref{fig:K2-138f}, lower panel). Its retrieved planetary radius is lower than the observed value, with a difference of 1.3$\sigma$. The atmosphere of K2-138 g is therefore more extended than an atmosphere dominated by water vapour would be under the same irradiation conditions. This increased atmospheric thickness is probably due to an atmosphere rich in H and He: K2-138 g could have a volatile mass fraction of up to 5\% assuming a H/He atmosphere \citep[see Fig. 1 in][]{lopez_fortney14}.

A rough estimate of the Jeans mass loss rates for K2-138 b yields $6\times 10^{-7}$ $\hbox{$\mathrm{M}_{\oplus}$}.\mathrm{Gyr}^{-1}$ for Jeans escape of H$_2$, and $5\times 10^{-84}$ $\hbox{$\mathrm{M}_{\oplus}$}.\mathrm{Gyr}^{-1}$ for Jeans escape of H$_2$O. For comparison, in the case of the Earth, the absence of H$_2$ is due to an exobase (the altitude at which particles escape) temperature much higher than the equilibrium temperature \citep{hedin83}. An exobase temperature twice the equilibrium temperature gives a mass-loss rate of $4\times 10^{-2}$ $\hbox{$\mathrm{M}_{\oplus}$}.\mathrm{Gyr}^{-1}$. In that case, an envelope of 1--10\% of a H-He mixture could be efficiently removed, leaving only heavier species such as H$_2$O.
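The enormous contrast between the H$_2$ and H$_2$O Jeans escape rates quoted above follows from the exponential sensitivity of the Jeans escape flux, which scales as $(1+\lambda)e^{-\lambda}$, to the escape parameter $\lambda = G M_p m/(k_B T R_p)$, itself proportional to the molecular mass $m$. The sketch below illustrates this (Python; placing the exobase at the planetary radius and using the irradiation temperature are simplifications of ours, unlike the more detailed calculation of \cite{aguichine21}):

\begin{verbatim}
import numpy as np

G, K_B, M_H = 6.674e-11, 1.381e-23, 1.674e-27   # SI constants
M_EARTH, R_EARTH = 5.972e24, 6.371e6

def jeans_parameter(m_p, r_p, t_exo, mu):
    """Jeans escape parameter lambda for a planet of m_p Earth masses,
    r_p Earth radii, exobase temperature t_exo [K], molecular mass mu."""
    return (G * m_p * M_EARTH * mu * M_H
            / (K_B * t_exo * r_p * R_EARTH))

# K2-138 b-like planet: 2.8 Earth masses, 1.5 Earth radii, T ~ 1300 K
for name, mu in [("H2", 2.0), ("H2O", 18.0)]:
    lam = jeans_parameter(2.8, 1.5, 1300.0, mu)
    print(name, lam, np.exp(-lam))   # escape flux ~ (1 + lam) e^-lam
\end{verbatim}

The factor of nine in molecular mass between H$_2$O and H$_2$ thus translates into dozens of orders of magnitude in the escape rate, consistent with the numbers quoted above.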
In the case of hydrodynamic escape, we obtain a mass loss rate of 2 $\hbox{$\mathrm{M}_{\oplus}$}$.$\mathrm{Gyr}^{-1}$ during the saturation regime and $1\times 10^{-2}$ $\hbox{$\mathrm{M}_{\oplus}$}$.$\mathrm{Gyr}^{-1}$ at $t=3$ Gyr. This yields an integrated mass loss of $0.4\hbox{$\mathrm{M}_{\oplus}$}$, or 14\% of planet b's total mass. Comparing this value to the WMF derived for planets c and d from the MCMC in Table \ref{tab:multiplanets}, we conclude that K2-138 b could have formed with a thick envelope of H$_2$O that has since been blown away by XUV photoevaporation.

\subsection{TOI-178}

In the TOI-178 system, planets b and c have an increasing WMF with increasing distance from the star, while planets d to g have WMFs equal to or greater than 30\%. For planets d and g, the volatile layer is likely to contain H/He, which would explain why their WMFs are in the 60-70\% range while their $d_{obs-ret}$ is greater than 1$\sigma$ in our analysis. TOI-178 b could have lost up to 0.83 $M_{\oplus}$ of its current mass in H$_{2}$ due to Jeans escape, and up to 0.45 $M_{\oplus}$ due to photoevaporation, while TOI-178 c could have lost 0.21 $M_{\oplus}$. In such a scenario, the original volatile mass fractions of TOI-178 b and c would have been up to 0.36 and 0.10, respectively, compared to their current values.

\subsection{Kepler-11}

For Kepler-11, the WMF of the innermost planet is 0.27$\pm$0.10, which is compatible with a water-dominated envelope. For Kepler-11 c to e, the radius data are 1.7$\sigma$, 2.4$\sigma$ and 4.4$\sigma$ higher than the radii we retrieve with our model, ruling out the water-rich envelope hypothesis. The increasing significance level indicates that these planets have an increasing content of H/He with distance from the star. In the case of the outermost planet, Kepler-11 f, the retrieved radius is 1.9$\sigma$ lower than the data, suggesting that this planet presents less H/He than planets c to e. Nonetheless, this could be because Kepler-11 f was not able to retain a primordial atmosphere due to its low mass (2.3$^{+2.2}_{-1.2}$ $M_{\oplus}$), compared to the higher masses of the rest of the planets in the system (> 6 $M_{\oplus}$). Furthermore, Kepler-11 f could have lost up to 0.56 $M_{\oplus}$ in H$_{2}$, according to our atmospheric Jeans escape calculation, whereas the other four planets in the system have atmospheric mass losses below 2$\times 10^{-3} \ M_{\oplus}$.

\subsection{Kepler-102}

The densities of the three innermost planets of Kepler-102 suggest that these are dry planets with high CMFs. Their core-to-mantle ratios could be even higher than the CMF we would expect from the Fe and Si abundances of their host star. Therefore, we set the WMF equal to zero in our MCMC Bayesian analysis and leave the CMF as the only free parameter, taking only the mass and radius as observables. Our modelling shows that Kepler-102 b, c and d are dry Mercury-like planets, with CMF = 0.91$^{+0.09}_{-0.16}$, 0.95$^{+0.05}_{-0.30}$ and 0.80$\pm$0.14, respectively. Their high CMFs could be due to mantle evaporation \citep{Cameron85}, impacts \citep{Benz88,Benz07,Asphaug14} or planet formation in the vicinity of the rocklines \citep{Aguichine20,Scora20}. Kepler-102 e presents a WMF of 0.17$\pm$0.07, suggesting that this planet has a more volatile-rich composition than the planets that precede it. The large uncertainties in the mass of Kepler-102 f prevent us from determining whether it is a bare rocky planet with no atmosphere, or whether it presents a thin atmosphere with a maximum WMF of 0.08. In addition, Jeans H$_{2}$ atmospheric escape could have removed up to 0.02 M$_{\oplus}$ from Kepler-102 f, yielding an original volatile mass fraction between 0.07 and 0.10.
\subsection{Kepler-80}

Kepler-80 d presents a high CMF, corresponding to a Fe-rich planet, similar to Kepler-102 b and c. Kepler-80 e is consistent with a dry planet with an Earth-like CMF, whereas Kepler-80 b and c are volatile-dominated planets. Kepler-80 g shows a WMF of up to 0.15\%. Given its low mass, M = 0.065$^{+0.044}_{-0.038} \ M_{\oplus}$ \citep{Macdonald21}, planet g could not have retained a H/He atmosphere, making a secondary atmosphere with water and/or CO$_{2}$ the most likely atmospheric composition for this planet. Based on our MCMC interior-atmosphere analysis, this atmosphere would have a surface pressure of less than 300 bar. This scenario is also supported by our estimated Jeans water escape, whose integrated mass loss lies between 3.26 $\times \ 10^{-3} \ M_{\oplus}$ and 3.24 $M_{\oplus}$. Both Jeans escape and XUV photoevaporation could have efficiently removed a H/He envelope. The total atmospheric mass loss and the current mass add up to a planetary mass similar to those of Kepler-80 e, b and c. Finally, the radius of Kepler-80 g is 2.7$\sigma$ higher than the radius of a rocky planet with no atmosphere, which suggests that Kepler-80 g has probably retained a gaseous envelope.

\section{Discussion} \label{sect:discussion}

Figure~\ref{distance} shows the volatile content of the five multiplanetary systems analysed in this work as a function of the incident flux, normalised by the incident flux received by the innermost planet. In addition, we include in Figure~\ref{distance} the WMF of TRAPPIST-1 derived with our interior-atmosphere model by \cite{2021arXiv210108172A}, for a homogeneous comparison. Of all systems, K2-138 presents the clearest volatile mass fraction trend: an increasing gradient in water content with distance from the host star for planets b to d, followed by a constant volatile mass fraction for the outer planets (planets e to g). A similar trend is observed in the TRAPPIST-1 system, if one neglects that TRAPPIST-1 d presents a higher volatile mass fraction than its neighbouring inner and outer planets in Fig.~\ref{distance}. In \cite{2021arXiv210108172A}, the WMF is obtained by assuming a condensed water layer. However, water could be in vapour phase and mixed with CO$_{2}$ in a CO$_{2}$-dominated atmosphere, lowering the overall volatile mass fraction of TRAPPIST-1 d. In that case, the TRAPPIST-1 system could potentially show the gradient-plus-plateau volatile trend observed in K2-138. Transmission spectroscopy of TRAPPIST-1 d is needed to probe the composition of its atmosphere.

The multiplanetary systems TOI-178 and Kepler-11 do not show smooth increases of the water mass fraction with orbital distance, although their inner planets present significantly fewer volatiles than their outer planets. Finally, Kepler-80 and Kepler-102 could follow this trend were it not for their outermost planets, each of which presents a lower volatile mass fraction than the planet immediately preceding it. In addition, the estimated original volatile mass fraction of Kepler-102 f is well within the uncertainties of the WMF of Kepler-102 e, meaning that planets e and f could potentially form a plateau in the outer part of the Kepler-102 system with a water mass fraction of 10\%, similarly to TRAPPIST-1. In the case of TOI-178 and Kepler-11, it would be necessary to adopt a self-consistent modelling approach that includes the possibility of a H/He-dominated volatile layer to determine whether their volatile mass fraction trend is as clear as that of K2-138 and TRAPPIST-1.
For the other multiplanetary systems, which do not present a high $d_{obs-ret}$ combined with a high water mass fraction in our analysis, the volatile mass fraction of each individual planet would decrease under the assumption of a H/He envelope. Including H/He as part of the envelope would change the volatile mass fraction of each individual planet, but it would not change our conclusion about the global volatile mass fraction trends in each system (i.e. the gradient and plateau trend in TRAPPIST-1 and K2-138). Furthermore, the water-H/He degeneracy to which volatile-rich planets are subject can only be broken with atmospheric characterisation data, such as transmission spectroscopy and phase curves. In many cases, the volatile envelope of sub-Neptunes might not be dominated by either water or H/He, but could be a mixture of both. This is supported by transmission spectroscopy of the sub-Neptune K2-18 b \citep{Tsiaras19,Benneke19,Madhusudhan20}, where water is detected, although its current trace species could be compatible with a H$_{2}$-rich atmosphere \citep{Yu21}. Additionally, meteorite outgassing experiments show that a significant fraction of H/He could be sustained in a water-dominated secondary atmosphere \citep{Thompson21}.

The significant difference in volatile mass fraction between the inner and the outer planets of these multiplanetary systems indicates that these planets might have undergone similar formation and evolution histories. The gradient-plus-plateau trend could potentially result from the combination of planetary formation in ice-rich regions of the protoplanetary disk, atmospheric loss, and inward migration. The outer volatile-rich planets could have formed beyond the ice line prior to migration, where ice-rich solids are expected to form \citep{Mousis21}, producing planets with high volatile contents. In the systems whose planets present water mass fractions lower than 10\%, volatiles could simply have been delivered by building blocks made of chondritic minerals bearing this amount of water \citep{Daswani21}. Under those conditions, the radial drift of icy planetesimals from beyond the snowline is not required. In the case of K2-138, the three-body Laplace resonances are a sign of inward planetary migration \citep{2007ApJ...654.1110T, 2017MNRAS.470.1750I, 2017A&A...602A.101R}.

For three systems, we found that the outermost planets (Kepler-11 f, Kepler-102 f and Kepler-80 g) have lower volatile mass fractions than the planets before them in their system. This could be due to their lower masses compared to the other planets in their systems: they are not massive enough to have a surface gravity that would help them retain their atmospheres. In addition, these three low-mass, low-WMF planets could have formed further away from the water ice line than the water-rich planets in their systems, having less water-rich material available during accretion than the planets that formed in the vicinity of the water ice line. In contrast with K2-138, the water mass fractions of the outer planets of the TRAPPIST-1 and Kepler-102 systems are compatible with 10\% \citep{agol21,2021arXiv210108172A}, a value in agreement with the water content of many asteroids of the Main Belt \citep{Vernazza15}.
This similarity suggests that the building blocks of the outer planets of these systems could have agglomerated from a mixture of ice grains coming from the snowline and anhydrous silicates formed at closer distances from the host star, following the classical formation scenarios invoked for the Main Belt \citep{2002aste.book..235R}. In that case, the migration distances of the planets in TRAPPIST-1 and Kepler-102 would have been more restricted than those of the water-rich planets in the K2-138, TOI-178, and Kepler-11 systems. \begin{figure*} \centering \includegraphics[width=0.6\textwidth]{Figures/WMFplot_rev_v3.pdf} \caption{Volatile mass fraction trends of the six multiplanetary systems analysed with our interior-atmosphere model. We show the water mass fraction estimates (see text) as a function of the stellar incident flux or irradiation, $F$, in Earth irradiation units ($S_{\oplus} = 1361~\mathrm{W\,m^{-2}}$) in the upper panel. In the lower panel, the incident flux is normalised with respect to the inner, most irradiated planet in each system, $F_{innermost}$. Planets whose atmospheric composition is likely to be H/He-dominated instead of water-dominated ($d_{obs-ret}$ > 1 $\sigma$) are indicated in grey.} \label{distance} \end{figure*} We have considered the Fe/Si mole ratio as an observable in our MCMC Bayesian analysis, in addition to the planetary masses and radii. Even though the Fe/Si ratio derived from stellar abundances and that obtained from rocky planet densities could depart from a 1:1 relationship \citep{Plotnykov20,Adibekyan21}, considering the Fe/Si mole ratio contributes to reducing the degeneracy between the rock+mantle layers and the volatile layer \citep{dorn15,Dorn17,2017ApJ...850...93B}. In particular, assuming that the planetary Fe/Si mole ratio is similar to the Fe/Si ratio of the host star improves the determination of the CMF, but does not necessarily contribute to the determination of the volatile mass fraction in volatile-rich planets \citep{Otegi20}. This is the case of the TRAPPIST-1 system, where the inclusion of the Fe/Si mole ratio as an observable in the MCMC Bayesian analysis refines the determination of the surface pressure for the inner planets of the system, but only slightly reduces the uncertainties of the WMF estimates for the outer planets \citep[see Tables 3 and 4 in][]{2021arXiv210108172A}. Therefore, considering the Fe/Si mole ratio does not affect the general volatile trend of the planets within a multiplanetary system. \section{Conclusions} \label{sect:conclusion} We carried out a homogeneous interior modelling and composition analysis of five multiplanetary systems that host five or more low-mass planets ($M < 20 \ M_{\oplus}$), rather than compiling the volatile content estimates of previous works, to eliminate the differences between interior models as a possible bias when comparing the compositional trends between planetary systems. In the case of the TOI-178, Kepler-11, Kepler-102, and Kepler-80 systems, we used previously published mass, radius, and stellar abundance data. In the case of the K2-138 system, we completed the previous analysis with an in-depth stellar spectroscopic analysis. We performed a line-by-line differential analysis of K2-138 spectra with respect to $\alpha$ Cen B and the Sun to derive the most accurate stellar parameters and abundances given the data at hand. These were used for a new complete Bayesian analysis of the radial velocities and photometry acquired on the system.
We explored the robustness of the planetary parameters and stellar chemical abundances in our spectroscopic analysis. We concluded that the parameters we derived are fully consistent with the ones obtained by \citet{2019A&A...631A..90L}. With our interior-atmosphere model in a MCMC framework, we obtained the posterior distributions of the compositional parameters (CMF and WMF) and of the atmospheric parameters of each of the planets in these multiplanetary systems, assuming a water-dominated volatile layer. We found that K2-138 and TRAPPIST-1 present a very clear volatile trend with distance from the host star. Kepler-102 could potentially present this trend. For the TOI-178 and Kepler-11 systems, our modelling ruled out the presence of a large hydrosphere as responsible for their low densities. For such systems, it would be necessary to include H/He as part of the volatile layer in a self-consistent interior-atmosphere model. Nonetheless, all multiplanetary systems showed that the volatile mass fraction is significantly lower for the inner planets than for the outer planets. This is consistent with a formation history that involves formation of the outer planets in the vicinity of the ice line, inward migration, and atmospheric loss of the inner planets. We discussed the possible formation and evolution pathways that might yield these volatile content trends case by case. We also commented on the possible causes of the high core mass fractions of the inner planets of Kepler-102 and Kepler-80, which might involve formation in the vicinity of the rocklines. In addition, the atmospheric thickness that we obtained as a result of our Bayesian analysis (see Table~\ref{output_mcmc}) can be used to estimate the scale height of the extended atmospheres of the planets analysed in this work, which is necessary to assess the observing time and number of transits needed to characterise the composition of these atmospheres with transmission spectroscopy; such observations would confirm the exact composition of their atmospheres. To better assess possible evolutionary effects on the current composition of the planets, future work should involve the inclusion of atmospheric mass loss processes in the coupled atmosphere-interior model. In this work, we assumed that the planets do not evolve with time. The variation of the water mass fraction could also have been shaped by post-formation processes such as hydrodynamic escape \citep{Bonfanti20}. Each of the discussed processes has been studied individually with interior models to constrain whether the atmospheres of low-mass planets are primordial or secondary \citep{dorn_heng18,gupta21}, but no study has modelled the combined effects of all these processes on the volatile reservoirs of low-mass planets. \begin{acknowledgements} We would like to thank Maria Bergemann and Matthew Raymond Gent for a preliminary analysis of the stellar spectrum. This research has made use of the services of the ESO Science Archive Facility. This research was made possible through the use of the AAVSO Photometric All-Sky Survey (APASS), funded by the Robert Martin Ayers Sciences Fund. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This paper includes data collected by the K2 mission. Funding for the K2 mission is provided by the NASA Science Mission Directorate. This research has made use of the Exoplanet Follow-up Observation Program website, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. This research has made use of NASA's Astrophysics Data System Bibliographic Services. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France. The original description of the VizieR service was published in A\&AS, 143, 23. TM acknowledges financial support from Belspo for contract PRODEX PLATO mission development. We acknowledge the anonymous referee whose comments helped improve and clarify this manuscript. \end{acknowledgements} \bibliographystyle{aa}
\section*{Introduction} A transit of Venus (ToV) in front of the Sun is a rare event and a unique opportunity to study sunlight refraction in the atmosphere of the planet during ingress and egress, from which the mesospheric and upper haze structure can be constrained. The event of June 5-6, 2012 \revis{was the first transit in history to occur} while a spacecraft was in orbit around the planet\revis{, and was observable from a large portion of the Earth, stretching from Central-East Europe to the American continent, across the Pacific}. Accounts of past historic transits provided detailed descriptions of the \revis{planet morphology through telescopic observations} of Venus during ingress and egress phases that were relevant for contact timings \citep[for a review, see][]{Link-1969}. In the past, timings were the only scientific data collected, in an attempt to use the events for the determination of the solar parallax. \revis{While today this interest no longer exists, another category of phenomena, involving the atmosphere of Venus, appears to be more relevant and can be linked to a larger domain of investigations, including exoplanet transits \citep{Ehrenreich2012, Widemann-et-al-2012}.} The portion of the planetary disk that is outside the solar photosphere has been repeatedly perceived as outlined by a thin bright arc called the aureole. On June 8, 2004, fast photometry based on electronic imaging devices allowed the first quantitative analysis of the phenomenon \citep{Tanga-etal-2012}. \revis{The accuracy of the observations in 2004 was limited because the campaigns were not specifically organized to photometrically observe the aureole}, which was only confirmed at that time. Measurements in 2004 were essentially obtained using NASA's then operating Transition Region and Coronal Explorer solar observatory (TRACE), the Tenerife Themis solar telescope, the Pic-du-Midi 50 cm refractor, and the Dutch Open Telescope (DOT) in La Palma (Spain) \citep{Pasachoff-etal-2011, Tanga-etal-2012}. Owing to the difficulty of reaching an acceptable signal-to-noise ratio (S/N) next to the solar photosphere, a region that is typically contaminated by a strong background gradient, only the brightest portions of the aureole were sampled. This left a strong uncertainty on the faint end of the aureole evolution, when Venus is located farther away from the solar limb. In these conditions, it was not possible to probe the deepest refracting layers, which are close to the tangential optical thickness $\tau = 1$ level of the Venus atmosphere. An isothermal model was fitted to the usable data, which yielded a single value of the physical scale height for each latitude and an estimate of the vertical extension of the refracting layers that contribute to the aureole. On June 5-6, 2012, several observers used a variety of acquisition systems to image the event; these systems ranged from amateur-sized to professional telescopes and cameras. In this way, a large amount of quantitative information on this atmospheric phenomenon was collected for the first time. For the 2012 campaign, initial results and observations have been presented \citep{Wilson-et-al-2012, Widemann-et-al-2012, Jaeggli-et-al-2013}. \revis{Direct multiwavelength measurements of the apparent size of the Venus atmosphere were obtained by \citet{Reale-et-al-2015}.
In addition, the Doppler shift of sub-millimeter $^{12}$CO and $^{13}$CO absorption lines was mapped by \citet{Clancy-et-al-2015} using the James Clerk Maxwell Telescope (JCMT) to measure the Venus mesospheric winds at the time of the transit.} In this work, the first devoted to aureole photometry obtained during the June 2012 event, we use simultaneous data from the Earth-orbiting NASA Solar Dynamics Observatory (SDO) and the Venus-orbiting ESA Venus Express spacecraft. Optical data retrieved from an image sequence of the Helioseismic and Magnetic Imager \citep[HMI, ][]{Schou-et-al-2012} \revis{onboard the SDO mission} are compared to atmospheric refraction models based on a vertical atmospheric density profile obtained by the VEx Solar Occultation in the Infrared \citep[SOIR, ][]{Vandaele-et-al-2008} instrument during orbit 2238. The Venus Express operations occurred while Venus was transiting the Sun, as seen from Earth. In Fig.~\ref{F:transit_scheme} \revis{the positions of the contacts (I to IV) are indicated, as well as the orbit of the European Space Agency's Venus Express orbiter around the planet, projected to scale. Venus is shown at its location during orbit 2238, when SOIR data were collected at the time of apparent solar ingress at latitude +49.33\textdegree\ on the evening terminator, at 6.075 PM local solar time (LST). At the scale used in Fig.~\ref{F:transit_scheme}, the parallax effect from a site on the Earth surface, or from the position of SDO, is negligible. For data reduction, accurate positions of Venus relative to the solar limb are derived directly from SDO/HMI images.} The advantage of SOIR is clearly related to the high vertical resolution obtained from its vantage observation point. From the Earth's ground-based and orbit-based telescopes such as SDO/HMI, the aureole vertical extension (corresponding to a few atmospheric scale heights) is unresolved. The aureole brightness is the result of the sum of refracted light at different altitudes. Although the evolution of the Earth-Venus-Sun geometry during the transit allows separating the contributions of different layers, \revis{both photometric and calibration accuracy limit the vertical resolution}. On the other hand, an advantage specific to transits is the possibility of simultaneously probing the entire limb of Venus, which allows deriving the atmospheric properties at all the latitudes where the aureole is observed. \begin{figure} \includegraphics[width=\linewidth]{./Fig/VEXAG_Widemann-p12-ppt.pdf} \caption{Solar disk with the trajectory of Venus during the transit on June 5-6, 2012, as seen from the Earth geocenter. See the text for more details. } \label{F:transit_scheme} \end{figure} In this paper, we test the applicability of an isothermal approach to SDO/HMI time-resolved photometry and the possible improvements provided by a multilayer description of the Venus atmosphere based on Venus Express data, in particular the SOIR vertical density profile obtained during the transit at a latitude of +49$^{\circ}$, to reproduce the aureole photometry. We also derive constraints on the upper haze altitude \revis{and on the} tangential opacity at the same latitude. The paper is organized as follows. First, we describe the conditions of the 2012 ToV and the method for extracting and analyzing the data (Sect. 1). We then present three numerical models that we developed to study the aureole (Sect.
2), and we apply the different models to SOIR solar occultation data obtained at orbit \revis{2238 (Fig.~\ref{F:transit_scheme}) to test their} consistency with aureole data (Sect. 3). \section{Observations by the Solar Dynamics Observatory} The aureole photometry was derived from data acquired by the \textit{Helioseismic and Magnetic Imager} (HMI) instrument onboard the \textit{Solar Dynamics Observatory} (SDO, NASA), which has operated from an inclined geosynchronous orbit since 2010. A total of 776 images were obtained during the ingress of Venus and 862 during the egress, at a resolution of $\sim$0.504 arcsec per pixel, corresponding to 105 km at the distance of Venus at the transit epoch. HMI was designed to measure the Doppler shift and the magnetic field vector at the solar photosphere by exploiting the 617.3 nm Fe I absorption line. \revis{We exploited here the continuum images corresponding to Level-1.5 data products, implying that they have been normalized by flat-fielding but not rescaled or modified further. During the transit, the time sampling interval is 45 seconds}. An $854\times480$ pixel subframe of an HMI image is shown in Fig.~\ref{Figure SDO}. \begin{table} \centering \begin{tabular}{l l l} \hline Image sequence start (UT) & end (UT) & n. images \\ \hline 20:00:02.66 & 22:50:02.66 & 776 \\ \hline \multicolumn{3}{l}{Center wavelength: 617.3 nm } \\ \hline \multicolumn{3}{l}{Image scale: 0.504 arcsec/pixel} \\ \hline \multicolumn{3}{l}{Apparent Venus diameter (from geocenter): 57.80 arcsec } \\ \hline \end{tabular} \caption{Summary of the main properties of the transit observations by SDO (ingress only). The Venus diameter is derived from computing the planet ephemerides.} \label{T:SDOprop} \end{table} \begin{figure} \includegraphics[width=\linewidth]{./Fig/aligned_0393.png} \includegraphics[width=\linewidth]{./Fig/Aureole_enhanced.pdf} \caption{Upper panel: Subframe centered on Venus \revis{at the epoch of} the second contact, during the transit ingress, \revis{from a single SDO/HMI image obtained at 617.3 nm (Fe I absorption line)}. Lower panel: SDO image with extreme contrast stretch, to show the aureole. The radial direction of the flux measurement is shown, with the area at +49\textdegree\ that is considered for comparison to SOIR. \revis{The value of $f$ in the bottom left corner is the linear fraction of the Venus diameter projected outside the solar limb.}} \label{Figure SDO} \end{figure} \subsection{Aureole brightness determination} The photometry of the aureole consists of measuring the flux along a circular annulus containing the limb of Venus. The sector containing the aureole corresponds to the annulus portion projected against the background sky, outside the solar photosphere. \revis{To correctly determine the exact position of the aureole, we proceeded by fitting a circle to the limb of Venus on two reference images where the planet is at least partially silhouetted against the Sun (with $f<0.5$). During the short duration of the ingress and egress, the motion of the planet relative to the Sun is essentially linear. Starting from the two reference positions, this allowed us to determine by extrapolation the position of Venus on all other images, which were previously aligned on the Sun.} The measurement was repeated for each image to study the variation of the aureole brightness over time. Our procedure started by extracting the transverse brightness profile of the aureole in the planetocentric radial direction.
This was obtained by estimating the contribution of individual pixels in analog-to-digital units (ADU), using subpixel increments. At each step, a bilinear interpolation was applied to obtain the flux value at the corresponding position. The procedure, in the absence of very steep brightness gradients, was sufficient to obtain a rather smooth profile. However, to further reduce the noise introduced on the curves by pixel-to-pixel variations, we averaged ten radial profiles spaced by 0.1\textdegree\ in latitude to obtain a final radial curve associated with a 1\textdegree\ interval. The typical signal was well approximated by a Gaussian (an example is shown in Fig. \ref{Figure 3}), representing the transverse cut of the line spread function of the imaging system. \revis{At this stage, the position of the peak on the profile was verified to ensure that the planet position was correctly computed. The atmospheric scale height of about 5~km is therefore unresolved by a factor $\approx$80.} As our measurements are performed very close to the Sun, a background signal fading away from the limb, mainly due to scattering in the telescope optics, is always present, and it can be modeled as a linear slope added to the Gaussian. We thus modeled the radial profile by a function $F_t$ as \begin{equation}\label{eq:1} \left\{ \begin{array}{ll} F_t(X) &= F(X)+F_b(X), \\ F(X) &= g\ e^{-\frac{(X-X_0)^2}{\sigma^2}},\\ F_b(X) &= a\,X+b \end{array} \right. ,\end{equation} where $X$ is the radial position. The parameters \textit{a}, \textit{b}, \textit{g}, $X_0$, and $\sigma$ were determined by a non-linear least-squares fit on each of the profiles. The integral of the Gaussian component over the width of a ring surrounding the aureole, $\int F(X)\,dX$, represents the background-subtracted aureole flux (Eq.~\ref{eq:1}). This approach is different from the aperture photometry adopted by \citet{Tanga-etal-2012} and allowed us to better evaluate both the background and the aureole signal. The aureole flux in ADU/pixel was converted into ADU per arcsec (i.e., the brightness of an aureole arc of 1~arcsec length) and then normalized to the brightness of 1~arcsec$^2$ of photosphere, measured at 1~Venus diameter from the solar limb. With this method, the flux was measured at steps of 1\textdegree\ in latitude. \begin{figure} \includegraphics[width=\linewidth]{./Fig/Flux_49NW2.png} \caption{\revis{Radial intensity profile of the aureole (blue crosses) as a function of the number of HMI pixels along the radial direction from the center of Venus. The profile has been measured from one SDO/HMI frame, collected on June 5th, 2012 at 22:21:55, for the latitude +49\textdegree. The blue dots represent the bilinear interpolation performed on the radial profile at steps of 1/10 of one pixel.} The vertical axis is the signal intensity in ADU. The green curve is the result of a Gaussian fit with a linear slope. The width at half-height is 1.875 arcsec and corresponds to 390 km at Venus.} \label{Figure 3} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{./Fig/Flux_49NW_total.png} \caption{\revis{Aureole flux at a latitude of +49\textdegree\ plotted as a function of the fraction of the Venus diameter seen outside of the photosphere, $f$.} The flux is normalized to a reference element of the solar photosphere, the brightness of a 1$\times$1~arcsec$^2$ at one apparent Venus diameter from the solar limb.
} \label{F:lightcurve} \end{figure} Following \citet{Link-1969}, we define the ``phase'' of the Venus egress/ingress (\textit{f}) as the linear fraction of the planet diameter projected outside the solar limb (Eq.~\ref{eq:fraction}), as seen from a given observer. For instance, the value \textit{f}=0 corresponds to the planet disk entirely projected on the Sun and internally tangent to its limb; at \textit{f}=0.5 the center of the planet falls exactly on the limb of the Sun. Of course, $f$ is a function of time, but it can also be derived directly from the images by measuring the planet position relative to the solar limb. In practice, we extracted from the images the epoch of the first contact (when Venus is externally tangent to the limb, $t_{1st}$) and the second contact (when Venus is internally tangent to the limb, $t_{2nd}$). As the epoch of any image ($t_i$) is known, assuming a linear motion of the planet (appropriate over the $\sim$20~min duration of ingress or egress), the corresponding value of $f$ can easily be derived: \begin{equation} f = - \frac{t_{i} - t_{2nd}}{t_{2nd} - t_{1st}} \label{eq:fraction} .\end{equation} By considering the whole image sequence obtained by SDO, we can then parametrize the observed phenomena by $f$, a proxy of time that is directly related to the evolving geometric configuration of Venus with respect to the Sun and the observer. The position angle along the disk of Venus can easily be converted into a latitude by considering the known orientation of the SDO images (solar north up) and by computing the physical ephemerides of the Sun and Venus during the transit. At that epoch, the planetocentric latitude of the sub-Earth point was just 1\textdegree, implying that the discrepancy between position angle and latitude was very small and can be neglected in practice (1\textdegree\ at most at the poles). For a given latitude, that is, for a given point along the planet limb, our capability of observing the aureole is related to the interval of $f$ values for which that point is projected on the sky background, outside the solar photosphere. For this reason, measurements at different latitudes span different ranges of $f$. At the limit of large $f$ (planet largely outside the solar limb), the signal disappears into noise for any latitude that is considered. This occurs at levels $\sim$10$^{-4}$ of normalized flux. At small $f$, the geometric limit is represented by the position at which a given point on the planet limb touches the solar limb. However, since we integrated the flux of the aureole radially over ten pixels to estimate the background, a practical limit exists and is reached sooner than the geometric limit. It corresponds to the contamination by the photosphere margin, which directly enters the measured annulus. For this reason, we conservatively removed the extreme points of the curve at the smallest $f$ ($\approx$0.02), where a discontinuity in the flux indicates that the photosphere contaminates our measurements. Given the high sampling rate, we additionally averaged the aureole flux over bins of ten single measurements. We then computed error bars from the standard deviations within each bin.
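As a minimal numerical illustration of Eq.~\ref{eq:fraction}, the sketch below converts image timestamps into the phase $f$, assuming linear motion between the contacts; the contact times used here are placeholders rather than our measured values.
\begin{verbatim}
# Minimal sketch: image epochs -> phase f (equation above), assuming
# linear motion between contacts. The contact times below are
# placeholders, not the measured values used in this work.
from datetime import datetime

t_1st = datetime(2012, 6, 5, 22, 9, 38)    # placeholder first contact
t_2nd = datetime(2012, 6, 5, 22, 27, 34)   # placeholder second contact

def phase(t_i):
    """f = 0 at second contact, f = 1 at first contact."""
    return -(t_i - t_2nd).total_seconds() / (t_2nd - t_1st).total_seconds()

print(phase(datetime(2012, 6, 5, 22, 21, 55)))   # ~0.32 for these inputs
\end{verbatim}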
Figure~\ref{F:lightcurve} shows the light curve extracted at latitude +49\textdegree\ during the ingress at the morning terminator. Error bars include the contribution of photon noise from the background, the source, and the photospheric comparison. As expected, the curve presents an exponential decrease in brightness with increasing $f$, that is, at larger distances between the disk of Venus and the solar limb. By considering the light-curve section where a trend is clearly visible above the noise level, we are able to trace the aureole brightness over two orders of magnitude. The maximum brightness of the aureole (around 10$^{-1}$, normalized units) occurs at very low $f$ values. At this geometry, the luminosity is dominated by sunlight crossing the atmosphere at the highest altitude probed by the aureole. The corresponding sunlight beams are affected by a very small (subarcsec) total deviation that is due to refraction. As the aureole image formed by refraction preserves the surface brightness of the source (in the case of a perfectly transparent atmosphere), our normalization implies that the thickness of the atmospheric layer contributing to the aureole is $\sim$10$^{-1}$~arcsec $= 20$~km. If additional opacity due to light scattering by aerosol particles above the cloud tops is included, the real altitude range can be higher. To derive physical parameters of the mesosphere where the refraction occurs, we adopt as a first approximation a model of a transparent, isothermal atmosphere along the lines of the initial model developed by \citet{Tanga-etal-2012}. As shown below, the light curve that we observe led us to refine the isothermal assumption and adopt a multilayered approach for the refraction model. \section{Sunlight refraction models} \subsection{Isothermal model (model 1)} To first order, the aureole of Venus can be reproduced by a model taking into account the refraction of a finite array of elementary light sources originating from the solar photosphere. Our first approach is the isothermal model used in \citet{Tanga-etal-2012} for the interpretation of the transit data collected in 2004 by ground-based telescopes. The core of this model, called model 1 in the following, is based on the hypothesis of a transparent atmosphere as presented by \citet{Baum-Code-1953}. We recall its main properties below. The refraction angle $\omega$ of a light ray that crosses the atmosphere and reaches the observer is given by \begin{equation} \omega = -\nu(r)\sqrt{\frac{2 \pi r}{H}} \label{E:phi_iso} ,\end{equation} where $\nu$ is the refractivity, which decreases exponentially with $r$, and $r$ is the minimum distance \revis{of} the considered ray path from the center of Venus. This quantity is related to the gas number density $n$ by $\nu = K n$, where $K$ is the specific refractivity. $H$ is the scale height of the atmosphere. The factor by which the image of an element of the photosphere is shrunk by refraction is given by \begin{equation} \phi = \frac{1} {1+\frac{D}{d}\left( \frac{\partial \omega}{\partial r} \right)} \label{E:singlelayer} ,\end{equation} in which $d = 1 + \frac{D}{D'}$, with $D'$ and $D$ representing the distances of Venus from the Sun and from the Earth, respectively. At the transit epoch, $D = 0.288703$~AU and $D' = 0.726023$~AU. $\phi$ is also the ratio between the flux received by the observer from that element and its flux before refraction, if the atmosphere is completely transparent.
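To make the roles of Eqs.~\ref{E:phi_iso} and \ref{E:singlelayer} concrete, the following sketch evaluates $\omega(r)$ and $\phi(r)$ for an exponentially decreasing refractivity. This is an illustration only: the reference refractivity $\nu_0$, the reference radius $r_0$, and the scale height are assumed values, not fitted parameters from this work.
\begin{verbatim}
# Illustrative sketch of the isothermal model (the two equations
# above); nu0, r0 and H below are assumed values, not fits.
import numpy as np

H   = 4.8e3                    # scale height (m), assumed
r0  = 6.0518e6 + 94.0e3        # Venus radius + assumed 94 km cloud top (m)
nu0 = 1.7e-8                   # assumed refractivity K*n at r0
AU  = 1.496e11
D, Dp = 0.288703 * AU, 0.726023 * AU   # Venus-Earth, Venus-Sun (m)
d   = 1.0 + D / Dp

def omega(r):
    """Refraction angle (rad) for a ray with closest approach r."""
    return -nu0 * np.exp(-(r - r0) / H) * np.sqrt(2.0 * np.pi * r / H)

def phi(r):
    """Shrink factor; for exponential nu, d(omega)/dr ~ -omega/H."""
    return 1.0 / (1.0 - (D / d) * omega(r) / H)

r = r0 + np.linspace(0.0, 50e3, 6)    # 0-50 km above the cloud top
print(np.round(phi(r), 3))            # phi -> 1 at high altitudes
\end{verbatim}
With these assumed numbers, $\phi \approx 0.1$ near the cloud top and tends to unity a few scale heights above it, which is the qualitative behavior exploited by the fits below.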
By defining the conventional distance $r_{1/2}$ as the half occultation radius (measured from the planet center) at which $\phi = 0.5$, the following equation is derived: \begin{equation} \frac{1}{d} \left(\frac{1}{\phi(r)}-1 \right) + \log\left(\frac{1}{\phi(r)}-1 \right) = \frac{ r_{1/2} - r}{H} \label{E:Baumcode} .\end{equation} We note that Eq.~\ref{E:Baumcode} is valid in the range of $r$ (distance from the center of Venus) spanning from an altitude where the atmosphere is opaque (optical thickness $\tau \gg 1$) to the limit at which the refracted light comes from the solar limb (smaller deviations do not reach the observer). While we assume that the lower limit is constant, the upper one depends on the geometry and, for a given location at the planet limb, changes for different $f$ values. The total flux of the aureole will be the integral of the refracted light passing within that distance range from the center of Venus, that is, \begin{equation} F = \int_{r_{min}}^{r_{max}} S_\odot(r)\phi(r)\tau(r)\ l\ dr \label{eq:BCflux} ,\end{equation} where $S_\odot(r)$ is the flux emitted by an element of solar photosphere of size $l$, passing at a minimum distance $r$ from the center of Venus. The function $\tau(r)$ represents an absorption factor that can be included in the integration to reflect the vertical structure of aerosols in the upper haze \citep{Wilquet-et-al-2009, Wilquet-et-al-2012}, as detailed in Sect.~\ref{S:boundaryc}. \begin{figure} \includegraphics[width=\linewidth]{./Fig/absorption_factor.png} \caption{Transmission function (Eq.~\ref{eq:absorption}) through the atmosphere of Venus adopted by our model. The scale height of the aerosols for $\tau = 1$ at $r = 87.4$~km, corresponding to $k_{\tau} = 0.6$~km$^{-1}$ and $H_{\tau} = 4.8$~km, is based on \citet{Wilquet-et-al-2009}.} \label{absorption_factor} \end{figure} To model the aureole, we describe the brightness of the solar disk by a simple limb-darkening function, which yields $S_\odot(r)$ \citep{Hestroffer-Magnan-1998}. In this model, the free parameters are $H$ and $\Delta r = r_{1/2}-r_{\tau}$, where $r_{\tau}$ is the distance from the planet center at which $\tau(r_{\tau}) = 1$. \subsection{Multilayer model (model 2)} The isothermal approach, which provides only averaged quantities on the altitudes that generate the aureole, does not appear to be well suited for reproducing \revis{the} observed temporal brightness variations in the aureole flux at $f > 0.5$ (Sect. 3). We therefore implemented a ray-tracing approach considering a multilayered atmosphere, in which each layer is described by its refractive properties, called model 2 in this paper. In our case, the vertical distribution of the refractive index $N(r)$, sampled by a number of $n$ layers, is the only physical quantity determining the trajectory of a light beam through the atmosphere. This model is entirely equivalent to those used for stellar occultations \citep{Ververka-Wasserman-1973, Elliot-1992, Elliot-2003} and is based on the computation of the total refraction angle resulting from discretizing the path integral of the smoothly varying direction of propagation: \begin{equation} \theta(r) = \int_{-\infty}^{+\infty} \frac{r}{r'} \frac{d}{dr'} \ln N(r')\ dx \label{eq:angle_deviation} ,\end{equation} in which $\theta(r)$ is the deviation angle of a light beam passing at a minimal distance $r$ from the planet center.
$r'>r$ represents the atmospheric altitudes crossed by the light ray above $r$, $N(r')$ is the vertical refractive index profile, and $dx$ is the integration path along the ray propagation. It has been shown \citep{Elliot-1992} that the corresponding geometric attenuation of the light beam that is due to refraction is equivalent to the integral over several isothermal layers, each one contributing as in Eq.~\ref{E:singlelayer}, that is, \begin{equation} \phi = \int_{-\infty}^{+\infty} \frac{1}{1+ \frac{D}{d}\frac{d\theta(r)}{dr}} dx. \label{eq:compute_flux} \end{equation} \subsection{Upper haze boundary condition} \label{S:boundaryc} In all the numerical models adopted and compared to SDO/HMI data in Sects. 3.1-3.3, we took into account the geometry of the transit, which evolves with time, to compute sunlight refraction from the source (the solar photosphere) to the observer (placed on Earth or on a space satellite, as in the case of SDO). The integrals needed to compute the contribution of each atmospheric layer to the aureole (either Eq. \ref{eq:BCflux} or \ref{eq:compute_flux}) were computed over an appropriate altitude range, from layers for which the optical thickness is $\ll 1$ up to $\sim$140~km, that is, above the region where the aureole is produced. To introduce a more realistic transition between the transparent atmosphere and the opaque cloud layers, we introduced a simple optical thickness variation with altitude, represented by the function \begin{equation} \tau(r) = 0.5+0.5\ \tanh\left[k_{\tau}\ (r-r_{cloud})\right] \label{eq:absorption} ,\end{equation} in which $r_{cloud}$ is the radius of the $\tau$=1 level (Fig.~\ref{absorption_factor}). The parameter $k_{\tau}=0.6$~km$^{-1}$ is chosen in such a way that the shape of the variation fits the profiles for aerosol absorption obtained by \citet{Wilquet-et-al-2009}. The scale height of the aerosols is $H_{\tau}=\frac{2.88}{k_{\tau}}=4.8$~km. \section{Results of aureole photometry vs modeling} \subsection{Isothermal mesosphere (model 1a)} In the isothermal approach of model 1, the aureole brightness for a given latitude is uniquely determined by the value of the physical scale height ($H$) and by the layer thickness: $\Delta r = r_{1/2} - r_{cloud}$ \citep{Tanga-etal-2012}. The best fit to the isothermal model is obtained by a mixed \textit{Genetic} \citep{Holland-1975, Goldberg-1989, Davis-1991, Beasley-1993a, Beasley-1993b, Michalewicz-1994} and Markov chain Monte Carlo (MCMC) approach \citep{Metropolis-et-al-1953, Hastings-1970, Numerical-Recipes-2007}. The \textit{Genetic} algorithm computes the best solution between two vectors of possible $H$ and $\Delta r$ values, evaluated on the basis of the least-squares residuals between the computed flux and the observations. The first generation spans a wide range in the parameter space, from 0 to 50 km for $H$ and from 0 to 40 km for $\Delta r$. Each additional generation selects the best solutions and narrows the search to a more restricted set of parameters. The third generation of the \textit{Genetic} algorithm was used to initiate the MCMC code, which is iterated a number of times sufficient to reach a non-linear least-squares minimization condition. \revis{The aureole flux values that we need to model span more than 2 orders of magnitude, but in terms of physical interest, all brightness levels are equally relevant, including the fainter aureole associated with refraction by deeper atmospheric levels.
For this reason, all our fits were computed on the logarithm of the flux, not the flux itself.} The results obtained from our photometry at a latitude of +49\textdegree\ are illustrated in Table~\ref{T:res_BC} and Fig.~\ref{F:res_BC}. \revis{Error bars for the parameters are estimated by searching for the largest parameter variation that fits the standard deviation of the measurements.} \begin{figure} \includegraphics[width=\linewidth]{./Fig/Fit_49_1layer.png} \caption{Best fit of the aureole light curve \revis{from SDO/HMI} at +49\textdegree\ (morning terminator) obtained with the single-layer, isothermal model. The flux is the same as in Fig.~\ref{F:lightcurve}, binned over ten consecutive points. Parameters are $H=16.3$ km, $r_{cloud} = 94$ km, and $r_{1/2}=96$ km. We obtain a significant flux excess in the model at $f > 0.5$, when the Venus limb is more distant from the solar limb, i.e., when deeper refracting layers are probed. See Table~\ref{T:res_BC} for the model parameters.} \label{F:res_BC} \end{figure} \begin{table} \centering \begin{tabular}{c c c c} \hline \multicolumn{4}{c}{One-layer model} \\ \hline \hline Latitude & $H$ (km) & $r_{1/2}$(km) & Altitude (km) \\ \hline $+49$\textdegree \ & $16.3\pm0.7$ & $96\pm1$ & $190\pm1$ \\ \hline \end{tabular} \caption{Result of the model fit for the entire aureole flux variation.} \label{T:res_BC} \end{table} While model 1 appears to reproduce the light curve for $f<0.4$ very closely, the faint aureole appears to be systematically overestimated up to $f\sim0.7$, where an abrupt cutoff occurs, resulting in a reduced chi-square $\tilde{\chi}^2$ of 20. As the signal falls to the noise level at the cutoff, the agreement of the model could be considered qualitatively acceptable, but the physical parameters thus determined do not appear to be realistic. In particular, the high value of $H=16.3$~km disagrees strongly with other determinations. We can compare it to the typical scale height measured on the SOIR density profile (Fig. \ref{F:dens_SOIR}, top), which is 3-4 times smaller. This finding indicates that the isothermal approach is not entirely appropriate. The behavior of the model at $f>0.4$, corresponding to the sunlight passing at lower altitudes, also suggests a change in the trend of the refractive properties with altitude. As refraction is related to density, the layered scale height distribution obtained by SOIR could play a significant role in the formation of the aureole. For this reason, we decided to adopt this three-layer structure as an intermediate step toward a more complex modeling. \subsection{Three isothermal layers (model 1b)} A variant of model 1 consists of a sliced analysis of the light curve. In fact, the geometry of the refraction is such that for increasing $f$, only light rays passing deeper in the atmosphere can reach the observer. By considering the faint end of each light curve portion, only the atmosphere closer to the opaque cloud top, where the deviation by refraction is maximum, contributes to the aureole. We adopted the SOIR measurement as input to model 1b. \revis{As shown by the piecewise linear fit in the top panel of Fig.~\ref{F:dens_SOIR}, we considered that} a first layer exists at low altitudes and corresponds to $H=4.8$~km, mostly contributing at the faint end of the light curve ($0.4<f<0.6$).
The highest level (with the same scale height) should contribute only to the brightest peak ($f<0.1$), while the intermediate level \revis{($H=3$~km) should be relevant in the intermediate portion of the light curve}. Each of the three layers should then replicate the behavior of the isothermal model, within the corresponding altitude range given by the vertical profile of SOIR. As the altitude ranges and the scale heights are provided by SOIR, the only free parameters in this model are the three values of $\Delta r$. The results of the fit, computed by the same method as introduced above, are presented in the bottom panel of Fig.~\ref{F:dens_SOIR} and Table~\ref{T:results}\revis{, obtained by applying the direct flux modeling (Eq.~\ref{eq:BCflux})}. In the isothermal inverse model and the triple-layer model, the scale height of the aerosols was $H_{\tau}=5.8$~km, the constant was $k_{\tau}=0.5$~km$^{-1}$, and the altitude of the $\tau =1$ level was $r_{\tau}=80.0$~km. This altitude was chosen following \citet{Wilquet-et-al-2012}. It is interesting to note that, as expected, all three layers contribute to the aureole for $f<0.3$, while the deepest layer dominates for $f>0.4$. However, the final result is not yet fully satisfactory, as the flux is in general underestimated (by a factor of up to $\sim$2) except for $f>0.5$, \revis{yielding $\tilde{\chi}^2\approx100$}. These results seem to suggest that a more complex model that is capable of reproducing more details of the vertical density profile might fit the observations better. \begin{figure} \includegraphics[width=\linewidth]{./Fig/Soir_Profile_poster.png} \includegraphics[width=\linewidth]{./Fig/Fit_49N_3_layers.png} \caption{Top: molecular density measured by SOIR. \revis{We consider three main layers above the cloud top} ($r_{cloud} \sim 94$ km). The slope of a linear fit over three segments provides three scale heights, $H_1 = 4.8$ km for altitudes below 116 km, $H_2 = 3.0$ km (116 to 135 km), and $H_3 = 4.8$ km (135 to 160 km). Bottom: results of the three-layer modeling (best fit). See Table \ref{T:results} for the model parameters. \revis{The purple (upper) curve represents the summation of the fluxes from each of the three layers, here represented in cyan, green, and red. The blue data points are the measurements, as reported in Fig.~\ref{F:res_BC}.} } \label{F:dens_SOIR} \end{figure} \begin{table*} \centering \begin{tabular}{c | c c c | c c c} \hline \multicolumn{7}{c}{Parameters and results of the three-layer model} \\ \hline \hline Layer (km) & $\mu_{SOIR}(r)$ & $T_{SOIR} (K)$ & $g(r) (m\ s^{-2})$ & $H$ (km) & $\Delta r$ (km) & $T (K)$ \\ \hline 94-116 & 43.22 & 243$\pm$12 & 8.56 & 4.8$\pm$0.5 & 38.0$\pm$2.5 & 214$\pm$10 \\ 116-135 & 42.12 & 141$\pm$12 & 8.51 & 3.0$\pm$0.5 & 47.1$\pm$2.5 & 129$\pm$10 \\ 135-160 & 35.55 & 209$\pm$24 & 8.45 & 4.8$\pm$0.5 & 73.0$\pm$2.5 & 173$\pm$10 \\ \hline \end{tabular} \caption{Results of the three-layer model at the latitude +49\textdegree\ (morning terminator). The molecular weight and the $T_{SOIR}$ temperature at the average altitude of each layer, obtained from the data processing of orbit 2238, are reported together with the value of the gravitational acceleration $g(r)$.} \label{T:results} \end{table*} We can also compare the temperatures derived from the fully resolved vertical profile to our three-layer model.
We use the equation \begin{equation} T = \frac{\mu(z)g(z)H(z)}{R} \label{eq:temperature} ,\end{equation} where $R$ is the ideal gas constant and $\mu$ the mean molecular weight measured by SOIR. The gravity $g(z)$ depends on altitude. For all three layers we computed the corresponding quantities at their average altitude. The temperature values in the three-layer model do not exactly correspond to those obtained when the full profile of variations is taken into account, which further underlines that local fluctuations in the atmospheric scale height can be relevant. \subsection{Multilayer model (model 2)} By representing the atmosphere over several layers, whose vertical extent is much smaller than the typical scale height, we wish to test whether a better modeling of the photometry, relative to the three-layer approach, can be obtained. In turn, we will be able to investigate the sensitivity of the aureole photometry to small details of the vertical temperature profile. To integrate Eq.~\ref{eq:compute_flux} we discretized it on a set of $m$ atmospheric layers of equal thickness. In our case $m=400$ and the thickness $\delta r=400$~m. Each layer was associated with a different refractivity $\nu(r)$. By considering a pure CO$_2$ atmosphere, we computed the refractivity as $\nu(r) = K\ n(r)$, where the specific refractivity of CO$_2$ is $K=$1.67$\times$10$^{-29}$~m$^3$~molecule$^{-1}$ \citep{CO2}. $n(r)$ is the number density provided by SOIR following the approach in \citet{Mahieux-et-al-2015a}. The core of the computation is the application of Eq.~\ref{eq:compute_flux} at all layers; this provides the total refraction angle and the associated attenuation. From the refraction angle, a light ray is traced back from the observer toward the source on a plane containing the observer, the center of Venus, and the point of the terminator where refraction must be analyzed (at +49\textdegree\ in our case). When the ray falls on the solar photosphere, the corresponding flux contribution (weighted by $\phi$) is considered. The aerosol absorption factor (Eq.~\ref{eq:absorption}) is also used to model the transition between the transparent and the opaque atmosphere. By adding the contribution of each layer, we obtained the theoretical aureole brightness. The computation was then repeated for each $f$ to reconstruct the full light curve, to be compared with SDO/HMI observations. The result of this procedure is shown in Fig.~\ref{F:multi-fit}, where the green curve shows the predicted flux obtained by the direct model, and the blue curve represents the SDO/HMI data. The fit agrees remarkably well for $f>0.08$, where the general slope is perfectly reproduced. \begin{figure} \includegraphics[width=\linewidth]{./Fig/Flux_49NW_soir.png} \includegraphics[width=\linewidth]{./Fig/Aureole_altitude_soir_49NW2.png} \caption{\revis{Top panel:} best fit obtained with the vertical density profile of SOIR for the latitude +49$^{\circ}$ and an aerosol scale height $H = 4.8$ km using the multilayer refractive model. \revis{Bottom panel:} altitude of the highest layer probed by the aureole as a function of $f$ (blue solid line). The maximum altitude corresponds to the tangent limb geometry at which the source of the light ray reaching the observer is the solar limb. Above this altitude, the deviation by refraction is too small to deflect sunlight toward the observer at +49$^{\circ}$. The gray gradient represents the scale height of the aerosols (H = 4.8 km) \revis{used} in model 2.
The dark line represents $\tau = 1$.} \label{F:multi-fit} \end{figure} The cloud altitude and the scale height of the aerosols are obtained iteratively by a least-squares minimization. The resulting values are $r_{\tau} = 89.0$~km and $H_{\tau}=4.8$~km, respectively. \revis{We obtain here the best fit among all the models, with $\tilde{\chi}^2\approx9$, a value affected by the largely overestimated aureole brightness for $f<0.08$}. Although the measured flux at this portion of the light curve might be marginally contaminated by the proximity of the solar limb, which makes the measurement rather delicate, our impression is that the difference is real and can probably be ascribed to the assumption of a pure CO$_2$ atmosphere. Figure~\ref{F:multi-fit} shows the altitude of the highest layer contributing to the aureole for each $f$ value. For very low $f$, the quasi-alignment of the solar limb with the refraction point on the Venus terminator, and with the observer, corresponds to very low refraction angles, that is, to high atmospheric levels. As the aureole reaches an altitude $z\sim100$~km from the planet surface, we can assume that fractionation starts to play a role, and other species different from CO$_2$ \citep{Bertaux-2007} need to be taken into account in the computation. These species, like H$_2$O and HDO \citep{Federova-2008}, SO \citep{Bertaux-2007}, SO$_2$ \citep{Bertaux-2007, Belyaev-2012, Mahieux-et-al-2015a}, CO, O, He, N, and N$_2$ \citep{Vandaele-2016} or HCl/HF \citep{Mahieux-et-al-2015b}, are present in the atmosphere and have a lower refractivity than CO$_2$. These could decrease the contribution of the highest atmospheric levels to the aureole brightness. \section{Conclusion and perspectives} A new procedure for the photometry of the aureole was implemented that provides accurate measurements of the elusive brightness of the aureole all along the Venus terminator. The time resolution of the SDO images analyzed in this paper is much higher than the one available in 2004, and the photometry is of much better quality, mainly because these observations were obtained from space. For the first time we were able to compare the vertical density profile obtained by SOIR to remote observations of the aureole, showing that the SOIR profile is capable of reproducing the general features. In the process we showed that the measured aureole flux is sensitive to details in the vertical profile. We compared three different approaches that can be used to model the aureole brightness. The first is based on a transparent isothermal atmosphere as described by \citet{Baum-Code-1953}. This approach was adopted to analyze the much less accurate data of the transit in 2004 \citep{Tanga-etal-2012}. The second approach consists of an extension of this model to three isothermal layers by using the information provided by the SOIR experiment at +49\textdegree, from observations secured by Venus Express during the solar transit event itself. The SOIR vertical density profile clearly exhibits three ranges in which, to first order, the temperature can be considered constant. The final attempt adopted a multilayer model with a layer thickness much smaller than any physical scale height. The full resolution of the vertical density profile by SOIR was adopted in this model. A comparison of the three methods showed that only the last model reproduces the trend of the aureole light curve with reasonable accuracy.
This finding further indicates the sensitivity of the aureole to subtle details in the vertical density profile. As the model is based on the direct application of the SOIR-derived profile, our result is also an independent confirmation, from remote observations, of the results obtained by SOIR. The only free parameter of the multilayer approach, the altitude of the $\tau=1$ level, has a value compatible with other determinations of the upper cloud deck limit altitude. Our model adopts a simplified vertical distribution of aerosols that can be further improved or tested against more recent SOIR data. However, we find no clear discrepancy that can be attributed to a lack of detail in the aerosol distribution. After assessing the reliability of our photometry and modeling against the SOIR data, we will explore in forthcoming publications the vertical density and temperature profiles at other latitudes. Additional developments, such as the implementation of an inverse model of Eq.~\ref{eq:compute_flux}, will be illustrated in a forthcoming publication. \begin{acknowledgements} This research is supported by the European Commission Framework Program FP7 under Grant Agreement 606798 (Project EuroVenus). We credit the National Aeronautics and Space Administration (NASA) and the HMI science team for providing the data. TW acknowledges the University of Versailles-St-Quentin, the CNES VEx-SI program, and France's Programme National de Plan\'etologie. \end{acknowledgements} \bibliographystyle{aa}
\section*{Introduction} Stellar feedback plays a significant role in the regulation of the structure of the interstellar medium (ISM) and of galaxy evolution as a whole. Stellar winds and supernova explosions create complexes of shells and supershells of ionized and neutral gas; e.g. \citet{bagetakos11} analysed the HI distribution in 20 nearby galaxies and revealed about 1000 cavities in their discs with sizes from 80 pc to 2.6~kpc, expansion velocities from 4 to 64~km/s (mostly 10-20 km/s), and ages from 2 to 150~Myr. The origin of the largest kpc-sized HI holes and shell-like structures (so-called supergiant shells, SGS) has been the subject of debate for more than two decades. In the standard approach based on the \citet{weaver77} model, HI shells result from the cumulative action of multiple stellar winds and supernova explosions. However, it was recognized long ago \citep[see e.g.][]{tenor88, rhode99, kim99, silich06} that this scenario cannot explain the origin of SGS, since the detected stellar cluster remnants are inconsistent with the mechanical energy input required by the standard multiple winds and supernovae model. Recent studies based on Hubble Space Telescope (HST) observations have found that multiple star formation events over the age of the hole do provide enough energy to drive HI hole formation \citep[see e.g.][]{weisz09a,weisz09b,cannon11a,cannon11b}. A number of SGS exhibit signs of expansion-triggered star formation at their periphery. Of special interest is a detailed analysis of the interaction of stars and gas in new sites of star formation in the walls of SGS, which can elucidate the process of their evolution. Dwarf irregular galaxies provide the best environments to study the creation mechanisms of giant HI structures with star-formation episodes in their rims. Because of the slow solid-body rotation and the lack of strong spiral density waves which can destroy the giant shells, these structures grow to larger sizes and live longer than in spiral galaxies. The overall gravitational potential of dwarfs is much shallower than that of spiral galaxies, their HI disc scale height is larger, and the gas volume density is lower than in spirals. Therefore the same amount of mechanical energy fed to the ISM of dwarf Irr galaxies creates very large long-lived holes with star formation in the walls triggered by their expansion. Over several years we have performed an analysis of the ionized and neutral gas structure and kinematics in nearby irregular galaxies: VII~Zw~403 \citep{zw}, IC~10 \citep{ic10}, IC~1613 \citep{lozinsk03}, IC~2574 \citep{ic2574}, Holmberg~II (Egorov et al. 2016, in preparation). In most of them we dealt with star formation ongoing in the rims of the HI supershells. In this work we briefly review the results of our study in the galaxies where the different stages of the SGS evolution are clearly seen: IC~1613, IC~2574, and Holmberg~II. \section*{Observational data} The observations in the H$\alpha$ emission line were made at the prime focus of the 6-m telescope of the Special Astrophysical Observatory of the Russian Academy of Sciences with the SCORPIO \citep{scorpio} or SCORPIO-2 \citep{scorpio2} multi-mode focal reducers. The data obtained in the scanning Fabry--Perot interferometer (FPI) mode were used for the study of the ionized gas kinematics, while the narrow-band H$\alpha$ direct images were used for the analysis of the ionized gas morphology.
The neutral gas distribution and kinematics were studied using archival VLA HI 21 cm observations from the THINGS \citep{things} and LITTLE THINGS \citep{littlethings} surveys. Detailed descriptions of the observational setups are given in the corresponding papers for each galaxy discussed there. \section*{Triggered star formation in SGSs} We considered three galaxies where the differences in the evolutionary stage of the SGS are most evident. The aim of our analysis was to address two questions: what triggers star formation on large scales, and how does the ongoing star formation influence the evolution of the ``parent'' SGS in the neutral gas? We describe below the most interesting results obtained during the study of each of the selected galaxies and discuss the questions above. \subsection*{IC~2574} The well-known ``supergiant shell'' (\#~35 in the list of \citealt{walter99}) in the dwarf Irr galaxy IC~2574 is located at the north-east outskirts of its disc. This SGS, surrounding a $1000\times500$ pc hole in the most prominent region of current and recent star-formation activity in the galaxy, represents an impressive example of a giant HI shell with triggered SF along its rim (see Fig.~\ref{fig:ic2574}). The analysis of resolved stars based on the HST data \citep{weisz09b} shows that the last most significant episodes of star formation in the SGS began $\simeq$ 100~Myr ago and the recent bursts of star formation along the walls of the shell are as young as 10~Myr. The ages of the younger star-formation events are consistent with those derived from broadband photometry and are younger than the estimated kinematic age of the HI SGS, $14 \pm 3$~Myr \citep{walter98, walter99, ic2574}. The analysis of the ionized and neutral gas kinematics inside the rim of the SGS is presented in \citet{ic2574}. We showed that for almost all HII complexes studied, the energy input from the young clusters located inside them is sufficient to drive the formation of the observed ionized gas structures. The only exception we found is the shell-like region in the north-west part of the SGS that shows a high expansion velocity (65 km/s); this complex is the youngest in the region, with an age of 1 Myr. \begin{figure} \includegraphics[width=0.5\linewidth]{IC2574_color-crop.png}~\includegraphics[width=0.5\linewidth]{IC2574_HI-crop.png} \caption{IC~2574. Left: False-color composite image of the distribution of H$\alpha$ emission (red channel) and stellar continuum (green channel) obtained with the 6-m telescope of SAO RAS, and HI 21 cm emission (THINGS, blue channel). Right: HI distribution according to the THINGS data. The green rectangle denotes the region shown in the left panel.}\label{fig:ic2574} \end{figure} In the entire region of the SGS in IC~2574 we detected, for the first time, faint diffuse emission in both the H$\alpha$ and [SII] lines. It seems that the source of the observed emission is the shell-like structure that fills the internal part of the SGS. We conclude that the observed SGS in IC~2574 is at a relatively young stage, where the star formation in its walls started only recently. It is one of the most dynamically active giant HI structures among Irr galaxies. Almost the whole rim of the SGS shows a significant increase of the star formation rate during the last 10 Myr. The expansion of the brightest HII region in the northern part of the SGS rapidly disperses the local HI gas. One can see here emission features like the ``horns'' located beyond the northern boundary of the SGS.
We may suppose that the destruction of the northern wall of the HI supergiant shell due to star formation will result in the growth of the SGS and in its merging with the neighbouring smaller HI supershells. After several billion years there will be a system of giant adjoining and/or interacting shell-like HI structures, similar to those observed in another galaxy we want to draw attention to below -- Holmberg~II. \subsection*{IC~1613} The neutral gas distribution in the IC~1613 galaxy shows a highly inhomogeneous structure with a number of HI holes, shells and arc-like structures of different sizes and expansion velocities (see Fig.~\ref{fig:ic1613}). The only known complex of recent star formation is located on the north-eastern rim of the largest (1 to 1.5 kpc sized) HI ``main supershell'' in the galaxy. This supershell is older than the SGS in IC~2574 -- its kinematic age is no less than 30 Myr \citep{silich06}. Three extended (300\mbox{--}350~pc) neutral shells, with which the brightest ionized shells in the star-formation complex are associated, are observed in the direction of the area of highest HI density in the galaxy. Two of the H~I shells were found to expand at a velocity of 15\mbox{--}18~km~s$^{-1}$ \citep{lozinsk03}. The structure and kinematics of this complex were studied previously in a large number of papers \citep[see, e.g.,][and references therein]{lozinsk03, silich06}. \begin{figure} \includegraphics[width=0.5\linewidth]{IC1613_color.png}~\includegraphics[width=0.5\linewidth]{IC1613_HI-crop.png} \caption{IC~1613. Left: False-color composite image of the distribution of H$\alpha$ (red color) and stellar continuum (yellow color) obtained with the SAO RAS 1-m telescope according to \citet{lozinsketal02}, and HI 21 cm emission (LITTLE THINGS, blue color). Right: HI distribution obtained from LITTLE THINGS data. The green rectangle denotes the region shown in the left panel.}\label{fig:ic1613} \end{figure} Fig.~\ref{fig:ic1613} clearly demonstrates that the H$\alpha$ emission in IC~1613 coincides well with the walls of the three local HI shells mentioned above. The ages of the observed H$\alpha$ bubbles (0.6--2.2 Myr) are much lower than the ages of the HI shells (5.3--5.6 Myr), which may be indirect evidence that star formation there was triggered by the collision of these neutral gas shells. \citet{lozinsk02} proposed that the whole star-formation complex on the rim of the ``main supershell'' was created by the collision of this largest shell with the giant HI supershell to the north. \subsection*{Holmberg II} Holmberg~II is another example of an irregular galaxy with a non-uniform gas structure containing a large number of shells and holes in the neutral gas distribution (see Fig.~\ref{fig:hoii}). \citet{puche92} found 51 giant holes and supershells in the Holmberg~II galaxy; \citet{bagetakos11} revealed 39 neutral gas cavities using stricter criteria. Their sizes range from 0.26 to 2.11 kpc and their expansion velocities from 7 to 20 km/s; these values correspond to kinematic ages from 10 to 150 Myr. An energy input of up to $10^{53}$ erg is required for the formation of several of the SGSs. The object of particular interest is SGS \#17 \citep[according to the list of][]{bagetakos11} -- the largest and oldest HI supershell in the galaxy. Optical images in the H$\alpha$ line reveal the brightest complexes of star formation located on the rim of this SGS. This supershell is much older than the SGS in IC~2574 described above (150 Myr against 14 Myr).
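The kinematic ages quoted here and throughout the paper follow directly from the measured radii and expansion velocities. As a rough guide (a simple estimate, assuming the pressure-driven bubble scaling $R \propto t^{3/5}$ of the \citet{weaver77} model, so that $V_{\rm exp} = \frac{3}{5}\,R/t$, where $R$ is the shell radius and $V_{\rm exp}$ its expansion velocity),
\[
t_{\rm kin} \simeq 0.6\,\frac{R}{V_{\rm exp}} \approx 29~{\rm Myr}\,\left(\frac{R}{1~{\rm kpc}}\right)\left(\frac{V_{\rm exp}}{20~{\rm km\,s^{-1}}}\right)^{-1},
\]
which, for the range of sizes and expansion velocities listed above, indeed reproduces ages of tens to about a hundred Myr.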
Despite this difference in age, in both cases we observe triggered star formation in the walls of the SGS. \citet{weisz09a} analysed the stellar population using HST observations and showed that star formation occurred on the northern rim of the SGS in Holmberg~II about 40 Myr ago, and that during the last 15 Myr it has been observed in the north-east and north-west parts of the shell. \begin{figure} \includegraphics[width=0.5\linewidth]{HoII_color-crop.png}~\includegraphics[width=0.5\linewidth]{HoII_HI-crop.png} \caption{Similar to Figure~\ref{fig:ic2574} for the Holmberg~II galaxy. }\label{fig:hoii} \end{figure} In contrast with the case of the SGS in IC~2574, in Holmberg~II we observe that intensive triggered star formation occurred not along the full length of the HI supershell rim, but only along one half of it. Why is the H$\alpha$ emission more intense in the northern part of the HI supershell than in its southern part? Probably, as in the case of IC~1613, the interaction and possible collision with two younger and smaller HI supershells to the north of the SGS considered is the reason for the triggering of star formation in this region. Indeed, the age of the last starburst episode that occurred in the region agrees with the age estimates of these two supershells (\#16 and \#22 in the list of \citealt{bagetakos11}) -- 50 and 30 Myr. It is not surprising to detect H$\alpha$ emission inside small HI shells in galaxies \citep[see, e.g., the LITTLE THINGS survey;][]{littlethings}, but can giant ionized shells be observed inside HI SGSs? To date, only one kpc-sized H$\alpha$ supershell was known -- LMC~4 in the Large Magellanic Cloud \citep{lmc}; several supershells of smaller size have also been found in that galaxy. We have already noted the detection of a diffuse ionized kpc-scale supershell located inside the HI SGS in IC~2574, but there its shell-like morphology was proposed only from the kinematic analysis. Using FPI data and narrow-band imaging observations in the H$\alpha$ line, we found a unique structure in the Holmberg~II galaxy: we clearly detected a giant ionized supershell with a resolved shell-like structure, about 2 kpc in diameter, that coincides with the internal wall of the examined neutral HI SGS in the galaxy. We should note that this structure has a low surface brightness (about $10^{-17}$ erg/s/cm$^2$/arcsec$^2$). \citet{bastian11} found 126 OB stars in Holmberg~II, seven of which are located inside the H$\alpha$ supershell. These seven OB stars provide enough high-energy photons to ionize the internal wall of the HI supershell and to create the observed H$\alpha$ supershell. A detailed study of this structure, as well as of the gas kinematics in the star-formation regions of Holmberg~II, will be presented in our forthcoming paper (Egorov et al. 2016, in prep.). The observed parameters of the local diffuse ionized gas in the SGS region are similar to those of the extra-planar diffuse ionized medium (DIG) in spiral and irregular galaxies. The observed [SII]/H$\alpha$ and [NII]/H$\alpha$ line ratios in the DIG are elevated compared to classical HII regions. The ionization of the DIG in galaxies is traditionally explained by the leakage of ionizing photons from HII regions as the main source \citep[see, e.g.,][and references in these papers]{seon09, hidgam06}. Because of this similarity, we may propose that the faint ionized gas emission observed in the SGSs of the IC~2574 and Holmberg~II galaxies results from leakage from the bright HII regions in the SGS walls.
Indeed, all the HII complexes in the walls of the studied galaxies have a nonuniform, clumpy, or filamentary structure, which allows radiation to leak outside through low-density regions. \section*{Conclusions} \begin{figure} \includegraphics[width=0.5\linewidth]{HoI_color-crop.png}~\includegraphics[width=0.5\linewidth]{HoI_HI-crop.png} \caption{Holmberg I. Left: False-color composite image of the distribution of H$\alpha$ (red color) and stellar continuum (yellow color) obtained with the SAO RAS 6-m telescope, and HI 21 cm emission (LITTLE THINGS, blue color). Right: HI distribution obtained from LITTLE THINGS data. The green rectangle denotes the region shown in the left panel. }\label{fig:hoi} \end{figure} In this paper we have discussed the nature of the triggered star-formation complexes located on the rims of giant (1 kpc and larger) HI supershells. These structures are common in nearby irregular galaxies. Here we considered three galaxies, IC~1613, IC~2574 and Holmberg~II, which may be the best examples of star formation triggered by supergiant shells at different evolutionary stages. Analysis of the gas kinematics gave us evidence of the influence of ongoing star formation on the HI supershells. We observe faint filaments of ionized gas outside the HII regions that might be a consequence of ionizing-photon leakage from the embedded star-formation regions. This process might perturb the HI shells and lead to their destruction at a later stage. As a final consequence of this process, the ISM of an irregular galaxy might come to consist of a single neutral supershell with a diameter comparable to the disc size and ongoing star formation in its rim. A picture resembling this is observed in the Holmberg~I galaxy (see Fig.~\ref{fig:hoi}). A large number of HI supershells in the disc is a common picture for nearby irregular galaxies, but star formation is not distributed uniformly among them. It seems that the collision of HI supershells might be one of the main drivers of star-formation triggering on such large scales. We found, for the first time, diffuse ionized gas inside the SGS in IC~2574 that shows signs of expansion and has a size similar to the diameter of the SGS. A similar and more confidently detected structure was discovered in the Holmberg~II galaxy. The H$\alpha$ supershell in that galaxy also coincides with the inner wall of the largest HI supershell, with active ongoing star formation inside. We considered two mechanisms for the formation of such structures: ionization of the inner wall of the neutral supershell by the stellar population inside, and/or leakage of ionizing photons from the bright HII complexes. \section*{Acknowledgements} This work was supported by the Russian Foundation for Basic Research (project 14-02-00027) and by a grant from the President of the Russian Federation (MD3623.2015.2). A. Moiseev is grateful for the financial support of the Dynasty Foundation. The observations at the 6-meter BTA telescope were carried out with the financial support of the Ministry of Education and Science of the Russian Federation (agreement No. 14.619.21.0004, project ID RFMEFI61914X0004).
\section{Introduction} Synthetic biology refers to the systematic design and engineering of biological systems, and is a growing domain which promises to revolutionize areas such as medicine, environmental treatment, and manufacturing \cite{Benner2005}. However, current technologies for synthetic biology are mostly manual and require a significant amount of domain experience. Artificial Intelligence (AI) can transform the process of designing biological molecules by helping scientists leverage large existing genomic and proteomic datasets; by uncovering patterns in these datasets, AI can help scientists design optimal biological molecules. In addition, generative models, such as Generative Adversarial Networks (GANs), can automate the process of designing DNA sequences, proteins, and additional macromolecules for use in medicine and manufacturing. Solutions for using GANs in synthetic biology require a framework not only for the GAN to generate novel sequences, but also to optimize the generated sequences for desired properties, such as the binding affinity of the sequence for a particular ligand or the secondary structure of the generated macromolecule. The synthetic molecules must possess such properties to be useful in real-world use cases. Here, we present a novel feedback-loop mechanism for generating DNA sequences using a GAN and then optimizing these sequences for desired properties using a separate predictor, which we call a function analyzer. The proposed feedback-loop mechanism is applied to train a GAN to generate protein-coding sequences (genes), and then enrich the produced genes for those that produce 1) antimicrobial peptides, and 2) alpha-helical peptides. Antimicrobial peptides (AMPs) are typically lower molecular weight peptides with broad antimicrobial activity against bacteria, viruses, and fungi \cite{AMP}. They are an attractive area in which to apply GANs since they are commonly short, less than 50 amino acids, and have promising applications to fighting drug-resistant bacteria \cite{AMP_length}. Similarly, optimizing for the resulting secondary structure of the genes is possible since common secondary structures, such as helices and beta sheets, arise even in short peptides. Secondary structure is also important when designing proteins for particular functions. Optimizing for these two properties provides a proof of concept that the proposed feedback-loop architecture, FBGAN, can be used to effectively optimize a diverse set of properties, regardless of whether a differentiable analyzer is available for that property. \paragraph{Related works} Besides GANs, Recurrent Neural Networks (RNNs) have also shown promise in producing sequences for synthetic biology applications. RNNs have been shown to be successful in generating SMILES sequences for \textit{de novo} drug discovery \cite{segler}, and recent work also showed that the RNN's outputs could be optimized for specific properties through transfer learning and fine-tuning on sequences with desired properties \cite{Gupta2017}. A similar methodology has been applied to generate antimicrobial peptides \cite{muller_amp}. RNNs have also been combined with reinforcement learning to produce sequences optimized for certain properties in synthetic biology \cite{olivecrona}. However, GANs have the attractive property over RNNs that they allow for latent space interpolation with the input codes provided to the generator \cite{DBLP:journals/corr/SalimansGZCRC16}.
Indeed, GANs are increasingly being used to generate realistic biological data. Recently, GANs have been used to morphologically profile cell images \cite{Goldsborough227645}, to generate time-series ICU data \cite{Esteban2017RealvaluedT}, and to generate single cell RNA-seq data from multiple cell types \cite{Ghahramani262501}. GANs have also been used to generate images of cells imaged by fluorescent microscopy, uniquely using a separable generator where one channel of the image was used as input to generate another channel \cite{osokin_gan}. In independent and concurrent work, Killoran \textit{et al.} use GANs to generate generic DNA sequences \cite{duvenaud_dna}. This work used a popular variant of the GAN known as the Wasserstein GAN, which optimizes the earth mover distance between the generated and real samples \cite{wgan}. In this approach, the generator was first pretrained to produce DNA sequences, and then the discriminator was replaced with a differentiable analyzer. The analyzer in this approach was a deep neural network that predicted, for instance, whether the input DNA sequence bound to a particular protein. By backpropagating through the analyzer, the authors modified the noise input to the generator into specific codes that yield desirable DNA sequences. This approach does not extend to non-differentiable analyzers, and does not change the generator itself, but rather its input. Here, we propose a novel feedback-loop architecture, FBGAN, to enrich a GAN's outputs for user-desired properties; the architecture employs an external predictor for the desired property which, as an added benefit, need not be differentiable. We present a proof-of-concept of the feedback-loop architecture by first generating realistic genes, or protein-coding DNA sequences, up to 50 amino acids in length (156 nucleotides); feedback is then used to enrich the generator for genes coding for AMPs, and genes coding for alpha-helical peptides. \section{Methods} \subsection{GAN Model Architecture} The basic formulation of a GAN as proposed by Goodfellow \textit{et al.} consists of two component networks, a Generator $G$ and a Discriminator $D$, where the generator $G$ creates new data points from a vector of input noise $z$, and the discriminator $D$ classifies those data points as real or fake \cite{GoodfellowGan}. The end goal of $G$ is to produce data points so realistic that $D$ is unable to classify them as fake. Each pass through the network includes a backpropagation step, where the parameters of $G$ are improved so the generated data points appear more realistic. $G$ and $D$ are playing a minimax game with the following loss function \cite{GoodfellowGan}: \begin{equation} \underset{G}{\text{min}}\ \underset{D}{\text{max}}\ V(D,G) = \mathbf{E}_{x \sim P_{data}(x)} [\log D(x)] + \mathbf{E}_{z \sim P(z)} [\log(1-D(G(z)))] \end{equation} Concretely, the discriminator seeks to maximize the probability $D(x)$ that $x$ is real when $x$ comes from a distribution of real data, and to minimize the probability that the data point is real, $D(G(z))$, when $G(z)$ is the generated data. The Wasserstein GAN (WGAN) is a variant of the GAN which instead minimizes the Earth Mover (Wasserstein) distance between the distribution of real data and the distribution of generated data \cite{wgan}. A gradient penalty is imposed for gradients above one in order to maintain a Lipschitz constraint \cite{wgan_gp}. WGANs have been shown empirically to be more stable during training than the vanilla GAN formulation; a sketch of the resulting critic objective is given below.
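For concreteness, the following is a minimal PyTorch-style sketch of the WGAN critic loss with gradient penalty; the tensor shapes (batches of one-hot encoded sequences) and the function names are illustrative rather than the exact implementation used here.
\begin{verbatim}
import torch

def critic_loss(D, real, fake, lambda_gp=10.0):
    """WGAN-GP critic loss: Wasserstein estimate plus gradient penalty.
    real, fake: (batch, seq_len, vocab) one-hot / softmax sequences."""
    loss_w = D(fake).mean() - D(real).mean()

    # Gradient penalty on random interpolates between real and fake
    eps = torch.rand(real.size(0), 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(D(interp).sum(), interp, create_graph=True)
    grad_norm = grad.view(grad.size(0), -1).norm(2, dim=1)
    penalty = ((grad_norm - 1) ** 2).mean()

    return loss_w + lambda_gp * penalty
\end{verbatim}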
Moreover, the Wasserstein distance corresponds well to the quality of the generated data points \cite{wgan}. Our GAN model for producing gene sequences follows the WGAN architecture with gradient penalty proposed by Gulrajani \textit{et al.} \cite{wgan_gp}. The model has five residual layers with two 1-D convolutions of size $5 \times 1$ each. However, we replace the softmax in the final layer with a Gumbel Softmax operation with temperature $t=0.75$. When sampling from the generator, the argmax of the probability distribution is taken to output a single nucleotide at each position. The model was coded in Pytorch and initially trained for $70$ epochs with a batch size $B=64$. \subsubsection{GAN Dataset} A diverse training set of genes was assembled in order to train the GAN to produce protein-coding sequences. More than 3655 proteins were collected from the Uniprot database, where each protein was less than 50 amino acids in length \cite{uniprot}. These proteins were selected from the set of all reviewed proteins in Uniprot with lengths from 5 to 50 residues, and the protein sequences were then clustered by sequence similarity $\geq 0.5$. One representative sequence was selected from each cluster to form a diverse dataset of short peptides. The dataset was limited to proteins up to 50 amino acids in length, since this length allows for observation of protein properties such as secondary structure and binding activity, while limiting the long-term dependencies the GAN would have to learn in order to generate sequences coding for these proteins. The Uniprot peptides were then converted into cDNA sequences by translating each amino acid to a codon (where a random codon was selected when multiple codons mapped to one amino acid); the canonical start codon and a random stop codon were also added to each sequence. All sequences were padded to length 156, which was the maximum possible length. \subsection{Feedback-Loop Training Mechanism} As shown in Figure \ref{flowchart}, the feedback-loop mechanism consists of two components. The first component is the GAN, which generates novel gene sequences that have not been enriched for any properties. The second component is the analyzer; in our first use case, the analyzer is a differentiable neural network which takes in a gene sequence and predicts the probability that the sequence will code for an antimicrobial peptide (AMP). However, the analyzer can be any black box which takes in a gene sequence and scores the desirability of that gene sequence. For instance, in our second use case the analyzer is a web server which returns the number of alpha-helical residues a gene will code for. The analyzer could even be a scientist who experimentally validates the produced gene sequences, which would be an example of \textit{active learning}. The GAN and analyzer are linked by the feedback mechanism after an initial number of pretraining epochs, so that the generator is already producing valid sequences. Once the feedback mechanism starts, at every epoch a set number of sequences are sampled from the generator and input into the analyzer. The analyzer predicts how favorable each gene sequence is, and the $n$ most favorable sequences are input back into the discriminator as "real" data that the generator must now mimic in order to minimize its loss. The generated sequences replace the oldest $n$ genes that were present in the discriminator's training dataset. The GAN is then trained as usual for one epoch (one pass through this training set); a sketch of this loop is given below.
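The following pseudocode summarizes the feedback loop (a simplification; the helper names sample_genes, train_gan_one_epoch and analyzer.score are illustrative, not the actual implementation):
\begin{verbatim}
dataset = list(real_genes)        # discriminator's "real" training set

for epoch in range(num_epochs):
    train_gan_one_epoch(G, D, dataset)            # usual WGAN-GP updates

    if epoch >= pretrain_epochs:                  # feedback starts here
        samples = sample_genes(G, num_samples)    # decode generator output
        scores = [analyzer.score(s) for s in samples]
        best = [s for s, p in zip(samples, scores) if p > threshold]

        # The n best sequences replace the n oldest entries, so the
        # discriminator's "real" data is gradually overwritten.
        n = len(best)
        if n > 0:
            dataset = dataset[n:] + best
\end{verbatim}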
As the feedback process continues, the entire training set of the discriminator is repeatedly replaced by generated sequences that have received high scores from the analyzer. \begin{figure}[] \begin{minipage}{\textwidth} a)\\ \begin{center} \includegraphics[width=0.8\textwidth]{gan_feedback_flowchart_1a.pdf} \end{center} \end{minipage} \begin{minipage}{\textwidth} b)\\ \begin{center} \includegraphics[width=0.5\textwidth]{gan_feedback_flowchart_1b.pdf} \end{center} \end{minipage} \begin{minipage}{\textwidth} c)\\ \begin{center} \includegraphics[width=0.8\textwidth]{gan_feedback_flowchart_2.pdf} \end{center} \end{minipage} \caption{a) The general training mechanism of the GAN model used to produce gene sequences. The training set used was 3600 genes coding for Uniprot proteins under 50 amino acids in length. b) The general form of the function analyzer, which takes in a sequence and produces a score. The analyzer may be any model which fits this framework, from a deep neural network to a lab. c) The novel feedback-loop training mechanism in FBGAN. At every epoch, several sequences are sampled from the generator and input into the analyzer. The analyzer gives a score to each sequence as demonstrated in b), and the highest scoring sequences are selected. These high scoring sequences are input back into the discriminator as "real" data. The $n$ selected sequences from the analyzer replace the $n$ oldest sequences in the "real" training dataset of the discriminator. In this way, the discriminator's set of "real" data is gradually replaced by synthetic data receiving high scores from the analyzer.} \label{flowchart} \end{figure} \subsection{Analyzer for Antimicrobial-Peptide (AMP) Coding Genes} The analyzer was a classifier whose input was a gene sequence and whose output was the probability that the gene coded for an AMP. \subsubsection{Dataset} The AMP classifier was trained on 2600 experimentally verified antimicrobial peptides from the APD3 database \cite{apd3}, and a negative set of 2600 randomly extracted peptides from UniProt of 10 to 50 amino acids (filtered for unnatural amino acids). The dataset was loaded using the Modlamp package \cite{modlamp}. As above, the proteins were translated to cDNA by translating each amino acid to a codon (a random codon in the case of redundancy), and by adding a start codon and a random stop codon. The AMP dataset was split into 60\% training, 20\% validation, and 20\% test sequences. \subsubsection{Classifier Architecture} Using Pytorch, we built and trained a Recurrent Neural Network (RNN) classifier to predict whether a gene sequence would produce an antimicrobial peptide (AMP). The architecture of the RNN consisted of two GRU (Gated Recurrent Unit) layers with hidden state h = $128 \times 1$. The second GRU layer's output at the final time step was fed to a dense output layer with a single neuron (the number of output classes minus one). This dense layer had a sigmoid activation function, such that the output corresponded to the probability of the gene sequence being in the positive class. In order to reduce overfitting and improve generalization, we added dropout with $p=0.3$ in both layers. Using the Adam optimizer with learning rate lr = 0.001, we optimized the binary cross entropy loss of this network. The network was trained for 30 epochs using minibatch gradient descent with batch size B = 64, with the 60/20/20 train/validation/test split described above.
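A compact PyTorch sketch of a classifier of this type is given below; the hyperparameters follow the text, while the five-symbol vocabulary (four nucleotides plus padding) and the omitted data pipeline are assumptions made for illustration.
\begin{verbatim}
import torch
import torch.nn as nn

class AMPClassifier(nn.Module):
    """Two-layer GRU over one-hot encoded nucleotides -> P(AMP)."""
    def __init__(self, vocab_size=5, hidden=128, dropout=0.3):
        super().__init__()
        self.gru = nn.GRU(vocab_size, hidden, num_layers=2,
                          batch_first=True, dropout=dropout)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, seq_len, vocab_size)
        h, _ = self.gru(x)
        logit = self.out(h[:, -1])   # final time step of the last layer
        return torch.sigmoid(logit)  # probability of the positive class

model = AMPClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()
\end{verbatim}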
\subsection{Secondary-Structure Black Box Analyzer} In order to optimize the synthetic genes for secondary structure, a wrapper was written around the PSIPRED predictor of secondary structure \cite{psipred}. The PSIPRED predictor takes in an amino-acid sequence and tags each amino acid in the sequence with a known secondary structure, such as alpha-helix or beta-sheet. The wrapper takes in a gene sequence (sampled from the generator), converts it into a protein sequence, and predicts the secondary structure of the amino acids in that protein sequence. The wrapper then outputs the total number of alpha-helix tagged residues in the sequence. If the gene cannot be converted into a valid protein sequence, the wrapper outputs zero. The analyzer selects all sequences with a helix length above some cutoff to move to the discriminator's training set; in this case, the cutoff was arbitrarily set to five residues. \section{Results and Discussion} \subsection{WGAN Architecture to Generate Protein-Coding Sequences} \begin{figure}[] \centering \includegraphics[width=0.8\textwidth]{pca_wgan_uniprot_less_dense.pdf} \caption{A set of 500 valid genes was sampled from the trained WGAN, and 10 physiochemical features were calculated for the proteins encoded by the synthetic genes. The same 10 features were also calculated for the cDNA sequences from Uniprot proteins. PCA was performed on the features of the natural cDNA sequences, and the synthetic genes were transformed accordingly. The first two principal components (PC1, PC2) are shown here; we see that the synthetic sequences lie in the same chemical space as the natural sequences.} \label{pca} \end{figure} Synthetic genes up to 156 nucleotides (50 amino acids) in length were produced with the WGAN architecture with gradient penalty; after training, three batches of sequences (192 sequences) were sampled from the generator. Three batches were also sampled before one epoch of training. The correct gene structure was defined as a string starting with the canonical start codon, continuing with an integer number of codons of length 3, and ending with one of the three canonical stop codons. Before training, only $3.125\%$ of sequences followed the correct gene structure. After training, $77.08\%$ of sampled sequences had the correct gene structure, demonstrating a large improvement from training. In order to further examine whether the synthetic genes were similar to natural cDNA sequences extracted from Uniprot, several physiochemical features of the resulting proteins were calculated, such as length, molecular weight, charge, charge density, and hydrophobicity. These features were calculated for the synthetic genes and the natural cDNA sequences extracted from Uniprot, and a principal component analysis (PCA) was conducted on these physiochemical features. The PCA was fit on the features of the natural cDNA sequences, and the synthetic sequences were transformed accordingly. Figure \ref{pca} shows the Uniprot gene sequences and generated genes plotted with respect to these principal components, and we see that both the natural and synthetic sequences lie in the same space. In addition, as shown in Figure \ref{aa_dist}, the relative amino acid frequencies of the synthetic sequences mirror the relative frequencies of the natural cDNA sequences from Uniprot. \subsection{Deep RNN Analyzer for Antimicrobial Properties} The AMP analyzer used a Recurrent Neural Network (RNN) to score each gene sequence with its probability of producing an AMP.
The architecture of the RNN consisted of two GRU (Gated Recurrent Unit) layers with hidden state h = $128 \times 1$ and dropout $p=0.3$ in both layers. In order to quantitatively measure the performance of the classifier, we measured the analyzer's accuracy, AUROC, precision, and recall. The model achieved a training accuracy of $0.9447$ and a validation accuracy of $0.8613$. The test accuracy was $0.842$, and the AUROC on the test set was $0.908$. The precision and recall on the test set were $0.826$ and $0.8608$, respectively, and the area under the precision-recall curve was $0.88$, as shown in Figure \ref{AMP_prec_recall}. \subsection{Feedback-Loop to Optimize Antimicrobial Properties} After both the GAN and the function analyzer were trained, the two were linked with the described feedback loop; at each epoch of training, sequences were sampled from the generator and fed into the analyzer. The analyzer then assigned each sequence a probability of being antimicrobial, and the top ranking sequences (here with $P(\mathrm{Antimicrobial}) > 0.8$) were fed into the discriminator and labelled as "real" sequences. The $n$ top ranking sequences took the place of the $n$ oldest sequences in the discriminator's data set. In order to measure the effectiveness of this feedback-loop mechanism in FBGAN, two criteria were examined. The first criterion was whether the analyzer predicted more of the outputs from the generator to be antimicrobial over time (without sacrificing the gene structure); the second criterion was whether the generated genes were similar to known antimicrobial genes, in both their sequences and in the properties of the resulting proteins. In order to answer the first question, we examined the analyzer's predictions on the generator's sequences as the training progressed with feedback. As shown in Figure \ref{amp_analyzer_curves}, after only ten epochs of closed-loop training, the analyzer predicts the majority of sequences to be antimicrobial. After sixty epochs, nearly all the sequences are predicted to be antimicrobial with high probability (greater than $0.99$). Even though the threshold for feedback was at $0.8$, the generator continues to improve beyond the threshold, suggesting that the closed-loop training is robust to changes in the threshold value. Moreover, 93.3\% of the generated sequences after closed-loop training have the correct gene structure, showing that the reading-frame structure was not sacrificed but rather reinforced. \begin{figure}[] \begin{minipage}{\textwidth} a)\\ \begin{center} \includegraphics[width=0.7\textwidth]{moving_hist_amp.pdf} \end{center} \end{minipage} \begin{minipage}{\textwidth} b)\\ \begin{center} \includegraphics[width=0.7\textwidth]{threshold_plot_amp.pdf} \end{center} \end{minipage} \caption{a) Histograms showing the predicted probability that generated genes are antimicrobial, as the closed-loop training progresses. While most sequences are initially assigned $0.1$ probability of being antimicrobial, as training progresses, nearly all sequences are eventually predicted to be antimicrobial with probability $>0.99$. b) Percentage of sequences predicted to be antimicrobial with probability above three thresholds: $[0.5, 0.8, 0.99]$.
While $0.8$ was used as the cutoff for feedback, the percentage of sequences above $0.99$ also continues to rise during training with feedback.} \label{amp_analyzer_curves} \end{figure} Next, the generated sequences were examined for similarity with the experimental antimicrobial genes, according to both the sequences themselves and the physiochemical properties of the proteins coded for. Figure \ref{edit_distance}a shows a histogram of the mean edit distance between the known AMPs and proteins from synthetic genes before feedback, and the distance between the AMPs and proteins from synthetic genes produced after feedback. Figure \ref{edit_distance}b shows the intrinsic edit distance within the AMP proteins, and within the proteins coded for by the synthetic gene sequences after feedback. All edit distances were normalized by the length of the sequences, in order not to penalize longer sequences unfairly. The distribution of edit distances shifts after feedback to have a larger proportion of sequences with a lower edit distance from the AMP sequences. In addition, the sequences after feedback have a higher edit distance within themselves than the antimicrobial sequences do with each other; this demonstrates that the model has not overfit to replicate a single data point. \begin{figure}[h] \begin{minipage}{0.5\textwidth} a)\\ \includegraphics[width=\textwidth]{prot_edit_dist_hist_normalized.pdf} \end{minipage} \begin{minipage}{0.5\textwidth} b)\\ \includegraphics[width=\textwidth]{prot_edit_dist_hist_intra_norm.pdf} \end{minipage} \caption{a) Between-group edit distance (Levenshtein distance) between known antimicrobial sequences (AMPs) and 1) proteins coded for by synthetic genes produced without feedback, and 2) proteins coded for by synthetic genes produced after feedback. In order to calculate the between-group edit distance, the distance between each synthetic protein and each AMP was calculated and the means were then plotted. b) Within-group edit distance for AMPs and for proteins produced after feedback, to evaluate the variability of GAN-generated genes after the feedback loop. The within-group edit distance was computed by selecting 500 sequences from the group and computing the distance between each sequence and every other sequence in the group; the mean of these distances was then taken and plotted.} \label{edit_distance} \end{figure} Next, the physiochemical properties of the resulting proteins were measured; they are shown in Table 1. As can be seen in the table, the proteins encoded by the closed-loop sequences shift to be closer to the positive antimicrobial peptides in five out of ten physiochemical properties, such as Length, Hydrophobicity Ratio, and Aromaticity, and remain as close as the sequences without feedback for properties such as Charge and Aliphatic Index. This is true even though the analyzer operated directly on the gene sequence rather than on these physiochemical properties, so the feedback mechanism did not directly optimize the physiochemical properties that show a shift.
\begin{table}[h] \begin{tabular}{l|l|l|l} & Positive AMP & Before Feedback & After Feedback \\ \hline \textbf{Length} & 32.37 $\pm$ 17.983 & 21.419 $\pm$ 13.190 & 36.992$\pm$ 16.978 \\ \textbf{Molar Weight} & 3514.0068 $\pm$ 1980.59 & 2419.032 $\pm$ 1479.013 & 4023.584 $\pm$ 1848.048 \\ Charge & 3.8575 $\pm$ 2.979 & 2.356 $\pm$ 2.447 & 2.708 $\pm$ 2.249 \\ Charge Density & 0.00123 $\pm$ 0.00084 & 0.00127 $\pm$ 0.00138 & 0.00091 $\pm$ 0.00096 \\ pI & 10.2697 $\pm$ 2.046 & 10.143 $\pm$ 2.444 & 9.474 $\pm$ 1.844 \\ Instability Index & 27.174 $\pm$ 26.717 & 37.791 $\pm$ 35.697 & 53.145 $\pm$ 29.495 \\ \textbf{Aromaticity} & 0.0822 $\pm$ 0.0602 & 0.0642 $\pm$ 0.0695 & 0.0775 $\pm$ 0.066 \\ Aliphatic Index & 91.859 $\pm$ 47.236 & 84.397 $\pm$ 45.681 & 84.889 $\pm$ 34.837 \\ \textbf{Boman Index} & 0.770 $\pm$ 1.500 & 1.801 $\pm$ 1.721 & 0.888 $\pm$ 1.155 \\ \textbf{Hydrophobicity Ratio} & 0.435 $\pm$ 0.128 & 0.390 $\pm$ 0.144 & 0.441 $\pm$ 0.109 \end{tabular} \begin{center} \includegraphics[width=\textwidth]{Boxplot_pos_feedback_mainProperties.pdf} \label{properties_amp} \end{center} \caption{Mean $\pm$ standard deviation of physiochemical features, before and after feedback. Bolded properties are those for which the mean after feedback is closer to the mean of the positive sequences than the mean before feedback. Violin plots of some example properties demonstrating the shift after feedback are shown below the table.} \end{table} \subsection{Optimizing Secondary Structure with Black-Box PSIPRED Analyzer} The generator was then optimized to produce synthetic genes with a particular secondary structure in their products, in this case alpha-helical peptides. Besides being extremely important for protein function, secondary structure is attractive to optimize for since it arises even in short peptides of fewer than 50 residues, such as the sequences generated here. The analyzer used to optimize for helical peptides was a black-box secondary structure predictor from the PSIPRED server, which tags each amino acid of a protein sequence with its predicted secondary structure \cite{psipred}. All gene sequences with more than 5 alpha-helical residues were input back into the discriminator as real data. After 43 epochs of feedback, the helix length in the generated sequences was significantly higher than the helix length without feedback and the helix length of the original Uniprot proteins, as illustrated by Figure \ref{helix_len}. Folded examples of peptides we generated are shown in Figure \ref{peptide_models}; these 3D peptide structures were produced by \textit{ab initio} folding of our generated gene sequences, using knowledge-based force field template-free folding from the QUARK server \cite{quark}. The edit distance within the DNA sequences generated after PSIPRED feedback was in the same range as the edit distance within the Uniprot natural cDNA sequences, and higher than the edit distance within the synthetic sequences generated before feedback (Fig.~\ref{supp_edit_dist}). \begin{figure}[h] \includegraphics[width=0.5\textwidth]{model1_cropped.png} \includegraphics[width=0.5\textwidth]{model2_22Helix_cropped.png} \caption{Example peptides from the synthetic genes output by our WGAN model with feedback from the PSIPRED analyzer. Both proteins show a clear helix structure. The peptide on the left was predicted to have 10 residues arranged in helices, while the peptide on the right was predicted to have 22 residues in helices; accordingly, the peptide on the right appears to have more residues arranged in helices.
} \label{peptide_models} \end{figure} \begin{figure}[h] \begin{minipage}{\textwidth} a)\\ \begin{center} \includegraphics[width=0.8\textwidth]{helix_len_baseFile.pdf} \end{center} \end{minipage} \begin{minipage}{\textwidth} b)\\ \begin{center} \includegraphics[width=0.8\textwidth]{moving_hist_helixLen.pdf} \end{center} \end{minipage} \caption{a) Distribution of alpha-helix lengths for natural proteins under 50 amino acids scraped from Uniprot. b) Distribution of alpha-helix lengths from synthetic gene sequences after 1 and 40 epochs of training. The predicted helix length of the generated sequences quickly shifts to be higher than the helix length of the natural proteins.} \label{helix_len} \end{figure} \section{Conclusion and Future Work} In this work, we have developed a GAN model, FBGAN, to produce novel protein-coding sequences for peptides under 50 amino acids in length, and demonstrated a novel feedback-loop mechanism to optimize those sequences for desired properties. We use a function analyzer to evaluate sequences sampled from the generator at every epoch, and input the best scoring sequences back into the discriminator as "real" data points. In this way, the outputs of the generator gradually shift over time toward outputs that the function analyzer predicts to be positive with high probability. This feedback-loop mechanism, to our knowledge, has not been proposed before for use in GANs; we have shown that this training mechanism is robust to the type of analyzer used, as the analyzer need not be a deep neural network for the feedback mechanism to be successful. We have demonstrated the usefulness of the feedback-loop mechanism in two use cases: 1) optimizing for genes that code for antimicrobial peptides (AMPs), and 2) optimizing for genes that code for alpha-helical peptides. For the first use case, we built our own deep RNN analyzer; for the second, we employed the existing PSIPRED analyzer in a black-box manner. In both cases, we were able to significantly shift the generator to produce genes likely to have the desired properties. The ability to optimize synthetic data for desired properties without a differentiable analyzer is useful for two reasons. First, it allows the analyzer to be any model that takes in a data point and assigns it a score; the analyzer may even be a machine carrying out experiments in a lab. Second, many existing models in bioinformatics are based on non-differentiable operations, such as BLAST searches or homology detection algorithms. This feedback-loop mechanism thus allows long-standing staples of synthetic biology research to integrate smoothly with the enormous capabilities of GANs. The feedback-loop technique is also desirable precisely because it is robust, simple, and easy to implement. While we were able to extend the GAN architecture to produce genes up to 156 base pairs in length while maintaining the correct start codon/stop codon structure, it was noticeably more difficult to maintain the structure at 156 base pairs than at 30 or 50. In order to allow the generator to learn patterns in the sequence over longer lengths, we might investigate using a recurrent architecture or even dilated convolutions in the generator, which have been shown to be effective in identifying long-term genomic dependencies \cite{Gupta_dilatedCNN}.
It is still challenging to use GAN architectures to produce long, complex sequences, which currently limits the usefulness of GANs in designing whole proteins, which can be thousands of amino acids long. Here, in order to make the training process for the GAN easier, we focused on producing gene sequences, which have a clear start/stop codon structure and only four nucleotides in the vocabulary. In the future, however, we might focus on producing protein sequences directly (with a vocabulary of 26 amino acids). While we have shown that the proteins from the synthetic genes shifted after training to be more physiochemically similar to known antimicrobial peptides, we would like to conduct additional experimental validation of the generated peptides. The same holds true for the predicted alpha-helical peptides. In future work, we would also like to apply and further validate the proposed method in additional application areas in genomics and personalized medicine, such as noncoding DNA and RNA. In addition, FBGAN's feedback-loop mechanism for training GANs is not limited to sequences or to synthetic biology applications; our future work therefore also includes applying this methodology to image-generation use cases of GANs.
\section{Introduction} \label{sec:intro} Due to the complex relationship between intensity distributions, multi-modal registration \cite{heinrich2012mind} remains a challenging topic. Optimization-based registration, which optimizes similarity across phases or modalities by aligning voxel pairs, has been the dominant solution for a long time. However, besides the high computational complexity of 3D image optimization, it is very hard to define a descriptor robust enough to cope with the considerable differences between the image pairs. Nowadays, many methods leveraging deep learning have been proposed to solve the problems mentioned above. These approaches usually require ground-truth registration fields or landmarks annotated by experts. Some methods \cite{balakrishnan2018unsupervised}\cite{dalca2018unsupervised} explored unsupervised strategies built on the spatial transformer network. There are two main challenges in unsupervised-learning based registration. The first is to define a loss which can efficiently measure similarity across modalities or sequences. For example, mutual information (MI) has been widely and successfully used in registration tasks, but it requires binning or quantizing, which can cause a vanishing gradient problem \cite{lau2019unsupervised}. The second challenge is the lack of ground truth. An intuitive way to address the multi-modal problem is image-to-image translation \cite{hu2018adversarial}\cite{fan2018adversarial}. But without pixel-wise aligned data pairs, it is difficult to train a GAN to generate synthesized images in which all texture maps exactly to the source. For example, Cycle-GAN can generate images from MR that look like CT, but their accuracy in the details cannot meet the requirements of registration. In this paper, we propose a novel unsupervised method which can easily achieve deformable registration between different sequences or modalities. The local gradient loss, an efficient and robust metric, is used here for the first time in a deep-learning-based registration method. We combine an adversarial learning approach with spatial transformation to simplify multi-modal similarity to mono-modal similarity. Experimental results show that our approach is competitive with state-of-the-art image registration solutions in terms of accuracy and speed. \section{Method} \label{sec:format} \begin{figure}[htb] \begin{minipage}{1.0\linewidth} \centering \centerline{\includegraphics[width=8.5cm]{network.png}} \end{minipage} \caption{Overview architecture of the proposed model and training steps. Our model mainly consists of three components: a transformation network $T$, a generator $G$ and a discriminator $D$. While training, we take one gradient descent step on $T$, one step on $G$ and one step on $D$ by turns.} \label{} \end{figure} Our model mainly consists of three parts: an image transformation network $T$ which outputs the registration warp field for spatial transformation, an image generator $G$ which performs multi-modal translation, and a discriminator $D$ which distinguishes real images from synthesized images. The architecture of our model and the training details are illustrated in Fig.1. \subsection{Architecture} \label{ssec:subhead} The transformation network $T$ takes the reference image $R$ and the floating image $F$ as input, and then outputs the registration warp field $\phi$. The mapping can be written as $T:(R,F)\Rightarrow\phi$.
The floating image $F$ is warped into $F(\phi)$ using a spatial transformation function. Then $F(\phi)$ is sent to the generator $G$, which synthesizes images $F'(\phi)$ in the domain of the reference image. That, in turn, provides an easier mono-modal registration task between $R$ and $F'(\phi)$. In the proposed network, the registration problem is divided into two parts: multi-modal registration $(R,F(\phi))$ and mono-modal registration $(R,F'(\phi))$, which share the same deformation warp field $\phi$. So every voxel $F'(\phi(p))$ in the synthesized images should be mapped to $F(\phi(p))$ precisely. However, in the early learning period, $T$ is poor and the registration result is not accurate. If we use an architecture like Pix2Pix \cite{isola2017image} and send the unpaired $F(\phi)$ and $R$ to the discriminator, the generator will be confused and generate a misaligned $F'(\phi)$. To solve this problem, we present a gradient-constrained GAN method for unpaired data. This method is different in that the learned loss penalizes not only structure that differs between output and target, but also structure that differs between output and source. The generator's task consists of three parts: fooling the discriminator, minimizing the $L1$ distance between output and target, and keeping the output texture similar to the source. The discriminator's job remains unchanged: only to discriminate real from fake. Both $T$ and $G$ are U-Net-like \cite{ronnebergerconvolutional} networks. For details, our code and model parameters are available online at \url{https://github.com/Lauraxy/Multi_Modal_Registration}. These three networks are trained by turns: one step of optimizing $D$, one step of optimizing $G$ and one step of optimizing $T$. Note that when training one network, the weights of the other two networks are fixed. Please refer to Fig.1 for details. As $G$ is updated gradually, $F'(\phi)$ becomes more and more realistic, which helps to update $T$ and $\phi$. Then $F(\phi)$ can be better aligned to $R$, which in turn contributes to training $G$. As a result, $T$ and $G$ become mutually beneficial. \subsection{Loss} \label{ssec:subhead} We tried several loss functions for evaluating similarity between multi-sequence images, such as MI and NGF. However, each of them has its own weaknesses and cannot achieve satisfying registration results. For example, we tried using a Parzen window estimation of MI to solve the vanishing gradient problem, but its huge memory consumption makes it difficult to train a model in practice. NGF, in our experiments, could not drive the warp field to convergence. Here we present a local gradient loss which can capture local structure information across modalities. It is similar to NGF, but more robust against noise and faster to converge. Suppose that $p$ is a voxel position of volume $I$; we obtain the local gradient by: \begin{equation} \nabla\hat{I}(p)= \Big(\sum_{q\in{n^3(p)}} x'(q), \sum_{q\in{n^3(p)}} y'(q), \sum_{q\in{n^3(p)}} z'(q)\Big) \end{equation} where $x'$, $y'$, $z'$ are the gradient fields along the three axes and $q$ iterates over the $n^3$ volume $n^3(p)$ around $p$. Then the gradient can be normalized by: \begin{equation} n(I,p)=\frac{\nabla\hat{I}(p)}{\lVert\nabla\hat{I}(p)\rVert+\varepsilon} \end{equation} where $\lVert\cdot\rVert$ denotes the L2 norm. The local gradient loss between $R$ and $F$ can then be defined as: \begin{equation} L_{LG}(R,F)=\sum_{p\in{\Omega}}|n(R,p) \cdot n(F,p)| \end{equation} where $\Omega$ is the volume domain of $R$ and $F$.
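As an illustration, a possible PyTorch implementation of Eqs. (1)--(3) is sketched below (not the code released with this paper; it takes the mean rather than the sum over $\Omega$, which only rescales the loss weight):
\begin{verbatim}
import torch
import torch.nn.functional as tf

def local_gradient(vol, n=7, eps=1e-8):
    """Eqs. (1)-(2): box-summed spatial gradient, L2-normalized.
    vol: (B, 1, D, H, W) volume."""
    # one-sided finite differences, zero-padded back to the input size
    gx = tf.pad(vol[:, :, 1:] - vol[:, :, :-1], (0, 0, 0, 0, 0, 1))
    gy = tf.pad(vol[:, :, :, 1:] - vol[:, :, :, :-1], (0, 0, 0, 1))
    gz = tf.pad(vol[:, :, :, :, 1:] - vol[:, :, :, :, :-1], (0, 1))
    g = torch.cat([gx, gy, gz], dim=1)                 # (B, 3, D, H, W)
    # avg_pool * n^3 = box sum over the n^3 neighbourhood
    g = tf.avg_pool3d(g, n, stride=1, padding=n // 2) * n ** 3
    return g / (g.norm(dim=1, keepdim=True) + eps)

def local_gradient_loss(R, F_warped, n=7):
    """Eq. (3): mean |cosine| between local gradients; a similarity,
    entering the total loss with a negative sign as in Eq. (5)."""
    dot = (local_gradient(R, n) * local_gradient(F_warped, n)).sum(dim=1)
    return dot.abs().mean()
\end{verbatim}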
In the local gradient experiments, if $n$ in Eq.~1 is too small, the network has difficulty converging; if $n$ is too large, the edges of $R$ and $F$ cannot be aligned accurately. We finally set $n=7$, which gives the best results. Next we describe the loss of $T$, which can be expressed as: \begin{equation} {L}_{T}(R,F,\phi)={L}_{sim}(R,F(\phi))+\alpha{L}_{smooth}(\phi) \end{equation} We set $L_{sim}$ as the sum of two parts: the negative local cross-correlation of $R$ and $F'(\phi)$, and the negative local gradient similarity between $R$ and $F(\phi)$: \begin{equation} {L}_{sim}(R,F(\phi))=-L_{LCC}(R,F'(\phi))-\beta L_{LG}(R,F(\phi)) \end{equation} The smoothness loss, which enforces a spatially smooth deformation, can be set as follows [2]: \begin{equation} {L}_{smooth}(\phi)=\sum_{p\in{\Omega}}{\lVert \nabla\phi(p) \rVert}^2 \end{equation} We now turn to the generator $G$ and the discriminator $D$. First, let us review Pix2Pix, a promising approach for many image-to-image translation tasks. The loss of Pix2Pix can be expressed as: \begin{equation} {L}_{G^*}=arg \min \limits_{G} \max \limits_{D} {L}_{c^{GAN}} (G,D)+\lambda L_{L1}(G) \end{equation} where ${L}_{c^{GAN}}$ is the objective of a conditional GAN \cite{isola2017image}, and $L_{L1}$ is the L1 distance between the source and the ground-truth target. Different from Pix2Pix, in the multi-modal registration task the source and the target are not pixel-wise mapped data. That means that directly pushing the output toward the target in an L1 sense may lead to false translation, which is harmful for registration. Here we introduce the local gradient loss to constrain the gradient distance between the synthesized images $F'(\phi)$ and the source images $F(\phi)$ and keep the output texture similar to the source. We mix the GAN objective with the local gradient loss into a complete loss: \begin{equation} \begin{aligned} L_{G'}=& arg \min \limits_{G} \max \limits_{D} {L}_{c^{GAN}} (G,D) - \mu L_{LG}(F'(\phi),F(\phi))\\ & + \lambda {L}_{L1} (F'(\phi),R) \end{aligned} \end{equation} \section{Experiments and Results} \label{sec:pagestyle} \subsection{Dataset} \label{ssec:subhead} We evaluated our method on the Brain Tumor Segmentation (BraTS) 2018 dataset [12], which provides a large number of multi-sequence MRI scans, including T1, T2, FLAIR, and T1Gd. The different sequences in the dataset are already well aligned. We evaluated the registration on T1 and T2 data pairs. We randomly chose 235 scans for training and the remaining 50 for testing. We cropped and downsized the images to the input size of $112\times128\times96$. We added random shifts, rotations, scaling and elastic deformations to the scans and generated data pairs for registration, so that the synthetic deformation fields can be regarded as registration ground truth. The deformations can be as large as $-40$ to $+40$ voxels, which makes the registration challenging. \subsection{Baseline Methods} \label{ssec:subhead} We compare our method with two well-established image registration approaches: a conventional MI-based approach [?] and the VoxelMorph method [4]. In the former, MI is implemented as driving forces within a fluid registration framework. The latter introduces novel diffeomorphic integration layers combined with a transform layer to enable unsupervised end-to-end learning.
However, the original VoxelMorph uses local cross-correlation as its similarity metric, which only functions well in mono-modal registration. As described in Section 2.2, we also tried several similarity metrics within the VoxelMorph framework, such as MI, NGF and LG, but only LG proved capable of this registration task. We therefore use VoxelMorph with LG for comparison. \subsection{Evaluation} \label{ssec:subhead} We set the loss weights as $\alpha=1$ in Eq.~4, $\beta=2$ in Eq.~5, and $\mu=5$, $\lambda=100$ in Eq.~8. For CC and the local gradient, the window size is set to $7\times7\times7$. We use the ADAM optimizer with learning rate $1e-4$. An NVIDIA GeForce 1080 Ti GPU with 11 GB of memory was used for training and testing. To evaluate the effect of the gradient-constrained loss (Eq.~8) in the generator $G$, we trained the network with and without the gradient constraint, named Deform-GAN-2 and Deform-GAN-1, respectively. \begin{figure}[htb] \begin{minipage}{1.0\linewidth} \centering \centerline{\includegraphics[width=8.5cm]{results.png}} \end{minipage} \caption{Registration results for different methods.} \label{} \end{figure} The registration results are illustrated in Fig.2. MI is intrinsically a global measure, so its local estimation is difficult. VoxelMorph with local CC is based on grey values and cannot function well across sequences. The results of VoxelMorph with the gradient loss are much better and can handle large deformations between $R$ and $F$. This demonstrates the effectiveness of the local gradient loss. Our methods, both Deform-GAN-1 and Deform-GAN-2, achieve higher registration accuracy. Even for blurred and noisy images (see the second row), Deform-GAN obtains satisfying results, and Deform-GAN-2 is clearly better. For further evaluation of the two settings of Deform-GAN, the warped floating images $F(\phi)$ and synthesized images $F'(\phi)$ from the two GANs at different stages of training are shown in Fig.3. We can see that the gradient constraint brings faster convergence during training: even at the first epoch, white matter can be seen clearly in $F'(\phi)$. Moreover, Deform-GAN-2 is more stable in the training process (as the yellow arrows point out, there is less noise in $F'(\phi)$ of Deform-GAN-2 than in that of Deform-GAN-1). Note that since $F'(\phi)$ is important for calculating $CC(R,F'(\phi))$, it should be strictly aligned to $F(\phi)$. The red arrows point out that the alignment between $F(\phi)$ and $F'(\phi)$ of Deform-GAN-2 is much better. \begin{figure}[htb] \begin{minipage}{1.0\linewidth} \centering \centerline{\includegraphics[width=8.5cm]{compare.png}} \end{minipage} \caption{Warped floating images $F(\phi)$ and synthesized images $F'(\phi)$ from the two Deform-GANs at different stages of training. The yellow arrows point out that Deform-GAN-2 is more stable than Deform-GAN-1. The red arrows indicate the misaligned area between $F(\phi)$ and $F'(\phi)$ in Deform-GAN-1. We can also see that Deform-GAN-2 learns more quickly in the early stages of training.} \label{} \end{figure} In order to quantify the registration results of our method and the compared methods, we performed additional evaluation experiments. For the BraTS dataset, we can warp the floating image by the synthetic deformation fields used as ground truth; hence, the root-mean-square error (RMSE) of pixel-wise intensity can be calculated for the evaluation. Also, because the tumor masks are provided by the BraTS challenge, we can calculate the Dice score to evaluate the registration around the tumor area. Table 1 shows the quantitative results.
It can be seen that our method outperforms the others in terms of tumor Dice and RMSE. In terms of registration speed, deep-learning based methods are significantly faster than the traditional one. In particular, our method only needs to run the transformation network in the inference process, so the runtime is still very fast, though a bit slower than VoxelMorph. \begin{table} \begin{minipage}{1.0\linewidth} \centering \caption{Evaluation of registration on the BraTS dataset in terms of RMSE, average Dice of the whole tumor, and average runtimes on GPU/CPU.}\label{tab1} \setlength{\tabcolsep}{2.3mm}{ \begin{tabular}{|l|l|l|l|} \hline Method & RMSE(\%) & Tumor Dice & Runtime(s)\\ \hline MI & 1.39$\pm$0.40 & 0.55$\pm$0.18 & -/6.1 \\ \hline VoxelMorph-LG & 1.42$\pm$0.36 & 0.61$\pm$0.12 & \textbf{0.09/3.9} \\ \hline Deform-GAN-1 & 1.33$\pm$0.31 & 0.67$\pm$0.13 & 0.11/4.4 \\ \hline Deform-GAN-2 & \textbf{1.18}$\pm$\textbf{0.23} & \textbf{0.69}$\pm$\textbf{0.10} & 0.11/4.4 \\ \hline \end{tabular}} \end{minipage} \end{table} \section{Conclusion} \label{sec:typestyle} A fast multi-modal deformable registration method based on unsupervised learning is proposed. An adversarial learning approach combined with spatial transformation helps to reduce the similarity calculation between modalities to one within a single modality. We are able to improve the registration results by a weighted sum of the local gradient and local $CC$ losses, in which the gradient-based loss handles coarse global alignment, while the local $CC$ loss ensures registration accuracy. Compared to recent learning-based methods, our approach can effectively cope with multi-modal registration problems with large deformations, non-functional intensity relations, noise and blur, achieving state-of-the-art accuracy and fast runtimes. We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted. \bibliographystyle{IEEEbib}
\section{INTRODUCTION AND MOTIVATIONS} It is well known that defining an Hermitian (or unitary, by means of an exponentiation trick) phase operator for the Fock space of the isotropic harmonic oscillator and, more generally, for an infinite-dimensional Hilbert space is not an easy problem.$^{\cite{SusskindGlogower}}$ Pegg and Barnett$^{\cite{pegg-barnett}}$ solved it by replacing the oscillator algebra by a truncated oscillator algebra and thus were able to give a description of the phase properties of quantum states for the single modes of the electromagnetic field. In this spirit, Vourdas$^{\cite{Vourdas1990}}$ introduced phase operators and phase states (i.e., eigenvectors of a phase operator) for $su_2$ and $su_{1,1}$; for the $su_{1,1}$ Lie algebra, he noticed that the infinite-dimensional character of the representation space prevents one from defining a unitary phase operator. Phase operators and phase states for other symmetries were also studied. In particular, Klimov {\em et al.}$^{\cite{klimov1}}$ obtained phase states for some specific representations of $su_{3}$. Recently, a generalized oscillator algebra ${\cal A}_{\kappa}$, depending on a real parameter $\kappa$, was introduced to cover the cases of the Lie algebras $su_2$ (for $\kappa < 0$) and $su_{1,1}$ (for $\kappa > 0$) as well as the Weyl-Heisenberg algebra $h_4$ (for $\kappa = 0$).$^{\cite{AKW,daoud-kibler}}$ Temporally stable phase states were defined as eigenstates of phase operators for finite-dimensional ($\kappa < 0$) and infinite-dimensional representations ($\kappa \geq 0$) of the ${\cal A}_{\kappa}$ algebra.$^{\cite{daoud-kibler}}$ In the finite-dimensional case, corresponding either to $\kappa < 0$ or to $\kappa \geq 0$ with truncation, temporally stable phase states proved to be useful for deriving mutually unbiased bases.$^{\cite{AKW,daoud-kibler}}$ Such bases play an important role in quantum information and quantum cryptography. In this paper, we introduce an algebra, denoted ${\cal A}_{\kappa}(2)$, which generalizes the ${\cal A}_{\kappa}$ algebra. For $\kappa < 0$, this new algebra is similar to that considered in the seminal work of Palev$^{\cite{palev1, palev2}}$ in the context of $A_n$-statistics. The ${\cal A}_{\kappa}(2)$ algebra allows us to give a unified treatment of the algebras $su_3$ (for $\kappa < 0$), $su_{2,1}$ (for $\kappa > 0$) and $h_4 \otimes h_4$ (for $\kappa = 0$). When we started this work, our aim was to study in a unified way: (i) phase operators for $su_3$, $su_{2,1}$ and $h_4 \otimes h_4$ and (ii) the corresponding phase states. We discovered, for $\kappa < 0$, that phase states can be defined only for partitions of the relevant Hilbert spaces and that a global definition of {\em phase states} requires the introduction of {\em vector phase states}, a concept that is closely related to that of vector coherent states. The notion of vector coherent states was investigated in depth by Hecht$^{\cite{Hecht}}$ and Zhang {\em et al.}$^{\cite{Gilmore}}$ at the end of the eighties. This notion was subsequently developed in Refs.~\cite{twareque2, twareque1, gazeau, twareque3} with applications to quantum dynamical systems presenting degeneracies. In particular, the authors of Ref.~\cite{twareque1} defined a vectorial generalization of the Gazeau-Klauder coherent states$^{\cite{gazeau}}$ leading to vector coherent states. More recently, the notion of vector coherent states has been extensively investigated (see the works in Refs.~\cite{twareque2,twareque3}). This paper is organized as follows.
The ${\cal A}_{\kappa}(2)$ generalized algebra is introduced in Section 2. We then define a quantum system associated with this algebra and generalizing the two-dimensional harmonic oscillator. In Section 3, phase operators and temporally stable vector phase states for the ${\cal A}_{\kappa}(2)$ algebra with $\kappa < 0$ are constructed. The phase operators and the corresponding temporally stable phase states for ${\cal A}_{\kappa}(2)$ with $\kappa \geq 0$ are presented in Section 4. Section 5 deals with a truncation of the ${\cal A}_{\kappa}(2)$ algebra, with $\kappa \geq 0$, necessary in order to get unitary phase operators. In Section 6, we show how a quantization of the temporality parameter occurring in the phase states for ${\cal A}_{\kappa}(2)$ with $\kappa < 0$ can lead to mutually unbiased bases. \section{GENERALIZED OSCILLATOR ALGEBRA ${\cal A}_{\kappa}(2)$} \subsection{The algebra} We first define the ${\cal A}_{\kappa}(2)$ algebra. This algebra is generated by six linear operators $a_i^-$, $a_i^+$ and $N_i$ with $i=1,2$ satisfying the commutation relations \begin{eqnarray} [a_i^- , a^+_i] = I + \kappa (N_1 + N_2 + N_i), \quad [N_i , a_j^{\pm}] = {\pm} \delta_{i,j} a_i^{\pm}, \quad i,j = 1,2 \label{commutation1} \end{eqnarray} and \begin{eqnarray} [a_i^{\pm} , a_j^{\pm}] = 0, \quad i \neq j, \label{commutation2} \end{eqnarray} complemented by the triple relations \begin{eqnarray} [a_i^{\pm} , [a_i^{\pm} , a_j^{\mp}]] = 0, \quad i \neq j. \label{commutation3} \end{eqnarray} In Eq.~(\ref{commutation1}), $I$ denotes the identity operator and $\kappa$ is a deformation parameter assumed to be real. Note that the ${\cal A}_{\kappa}$ algebra introduced in Ref.~\cite{daoud-kibler} formally follows from ${\cal A}_{\kappa}(2)$ by omitting the relation $[a_2^- , a^+_2] = I + \kappa (N_1 + 2 N_2)$ and by taking \begin{eqnarray} a_2^- = a_2^+ = N_2 = 0, \quad a_1^- = a^-, \quad a_1^+ = a^+, \quad N_1 = N \nonumber \end{eqnarray} in the remaining definitions of ${\cal A}_{\kappa}(2)$. Therefore, the generalized oscillator algebra ${\cal A}_{\kappa}$ of Ref.~\cite{daoud-kibler} should logically be denoted ${\cal A}_{\kappa}(1)$. For $\kappa = 0$, the ${\cal A}_{0}(2)$ algebra is nothing but the algebra for a two-dimensional isotropic harmonic oscillator and thus corresponds to two commuting copies of the Weyl-Heisenberg algebra $h_4$. For $\kappa \not= 0$, the ${\cal A}_{\kappa}(2)$ algebra resembles the algebra associated with the so-called $A_n$--statistics (for $n=2$) which was introduced by Palev$^{\cite{palev1}}$ and further studied from the microscopic point of view by Palev and Van der Jeugt.$^{\cite{palev2}}$ In this respect, let us recall that $A_n$--statistics is described by the $sl_{n+1}$ Lie algebra generated by $n$ pairs of creation and annihilation operators (of the type of the $a_i^+$ and $a_i^-$ operators above) satisfying usual commutation relations and triple commutation relations. Such a presentation of $sl_{n+1}$ is along the lines of the Jacobson approach according to which the $A_n$ Lie algebra can be defined by means of $2n$, rather than $n(n+2)$, generators satisfying commutation relations and triple commutation relations.$^{\cite{jacobson}}$ These $2n$ Jacobson generators correspond to $n$ pairs of creation and annihilation operators. In our case, the ${\cal A}_{\kappa}(2)$ algebra for $\kappa \not= 0$, with two pairs of Jacobson generators ($(a_i^+, a_i^-)$ for $i = 1,2$), can be identified with the Lie algebras $su_3$ for $\kappa < 0$ and $su_{2,1}$ for $\kappa > 0$.
This can be seen as follows. Let us define a new pair $(a_3^+ , a_3^-)$ of operators in terms of the two pairs $(a_1^+ , a_1^-)$ and $(a_2^+ , a_2^-)$ of creation and annihilation operators through \begin{eqnarray} a_3^+ = [a_2^+ , a_1^-], \quad a_3^- = [a_1^+ , a_2^-]. \label{lesdeuxa3} \end{eqnarray} Following the trick used in Ref.~\cite{AKW} for the ${\cal A}_{\kappa}(1)$ algebra, let us introduce the operators \begin{eqnarray} & & E_{+\alpha} = \frac{1}{\sqrt{|\kappa|}} a_{\alpha}^+, \quad E_{-\alpha} = \frac{1}{\sqrt{|\kappa|}} a_{\alpha}^-, \quad \alpha = 1,2,3 \nonumber \\ & & H_1 = \frac{1}{2 \kappa} [I + \kappa (2 N_1+ N_2)], \quad H_2 = \frac{1}{2 \kappa} [I + \kappa (2 N_2+ N_1)] \nonumber \end{eqnarray} with $\kappa \not= 0$. It can be shown that the set $\{ E_{{\pm}\alpha} ; H_i : \alpha = 1,2,3 ; i = 1,2 \}$ spans $su_3$ for $\kappa < 0$ and $su_{2,1}$ for $\kappa > 0$. \subsection{Representation of ${\cal A}_{\kappa}(2)$} We now look for a Hilbertian representation of the ${\cal A}_{\kappa}(2)$ algebra on a Hilbert-Fock space ${\cal F}_{\kappa}$ of dimension $d$ with $d$ finite or infinite. Let \begin{eqnarray} \{ \vert n_1 , n_2 \rangle : n_1, n_2 = 0, 1, 2, \ldots \} \nonumber \end{eqnarray} be an orthonormal basis of ${\cal F}_{\kappa}$ with \begin{eqnarray} \langle n_1 , n_2 \vert n_1' , n_2' \rangle = \delta_{n_1,n_1'} \delta_{n_2,n_2'}. \nonumber \end{eqnarray} Number operators $N_1$ and $N_2$ are supposed to be diagonal in this basis, i.e., \begin{eqnarray} N_i \vert n_1, n_2 \rangle = n_i \vert n_1, n_2 \rangle, \quad i=1,2 \label{actionN} \end{eqnarray} while the action of the creation and annihilation operators $a_1^{\pm}$ and $a_2^{\pm}$ is defined by \begin{eqnarray} a_1^+ \vert n_1, n_2 \rangle = \sqrt{F_1(n_1+1,n_2)} e^{{-i[H(n_1+1,n_2)- H(n_1, n_2)] \varphi }} \vert n_1+1, n_2 \rangle, \label{action1+} \end{eqnarray} \begin{eqnarray} a_1^- \vert n_1, n_2\rangle = \sqrt{F_1(n_1,n_2)} e^{{+i[H(n_1,n_2)- H(n_1-1, n_2)] \varphi }}\vert n_1-1,n_2\rangle, \quad a_1^- \vert 0 , n_2 \rangle = 0 \label{action1-} \end{eqnarray} and \begin{eqnarray} a_2^+ \vert n_1, n_2 \rangle = \sqrt{F_2(n_1,n_2+1)} e^{{-i[H(n_1,n_2+1)- H(n_1, n_2)] \varphi }} \vert n_1,n_2+1 \rangle, \label{action2+} \end{eqnarray} \begin{eqnarray} a_2^- \vert n_1, n_2\rangle = \sqrt{F_2(n_1,n_2)} e^{{+i[H(n_1,n_2)- H(n_1, n_2-1)] \varphi }} \vert n_1,n_2-1\rangle, \quad a_2^- \vert n_1 , 0 \rangle = 0. \label{action2-} \end{eqnarray} In Eqs.~(\ref{action1+})-(\ref{action2-}), $\varphi$ is an arbitrary real parameter and the positive valued functions $F_1 : \mathbb{N}^2 \to \mathbb{R}_+$, $F_2 : \mathbb{N}^2 \to \mathbb{R}_+$ and $H : \mathbb{N}^2 \to \mathbb{R}_+$ are such that \begin{eqnarray} H = F_1 + F_2. \nonumber \end{eqnarray} It is a simple matter of calculation to check that (\ref{actionN})-(\ref{action2-}) generate a representation of the ${\cal A}_{\kappa}(2)$ algebra defined by (\ref{commutation1})-(\ref{commutation3}) provided that $F_1(n_1 , n_2)$ and $F_2(n_1 , n_2)$ satisfy the recurrence relations \begin{eqnarray} && F_1(n_1+1, n_2) - F_1(n_1, n_2) = 1 + \kappa ( 2n_1 + n_2), \quad F_1(0,n_2) = 0 \label{recu1} \\ && F_2(n_1, n_2+1) - F_2(n_1, n_2) = 1 + \kappa ( 2n_2 + n_1), \quad F_2(n_1,0) = 0. \label{recu2} \end{eqnarray} The solutions of Eqs.~(\ref{recu1}) and (\ref{recu2}) are \begin{eqnarray} F_i(n_1, n_2) = n_i [ 1 + \kappa (n_1+n_2-1)], \quad i=1,2. 
\label{Fi de n1,n2} \end{eqnarray} To ensure that the structure functions $F_1$ and $F_2$ are positive, we must have \begin{eqnarray} 1 + \kappa (n_1+n_2-1) > 0, \quad n_1 + n_2 > 0, \label{condition} \end{eqnarray} a condition to be discussed according to the sign of $\kappa$. In the representation of ${\cal A}_{\kappa}(2)$ defined by Eqs.~(\ref{actionN})-(\ref{condition}), creation (annihilation) operators $a^+_i$ ($a^-_i$) and number operators $N_i$ satisfy the Hermitian conjugation relations \begin{eqnarray} \left( a_i^- \right)^{\dagger} = a_i^+, \quad \left( N_i \right)^{\dagger} = N_i, \quad i = 1, 2 \nonumber \end{eqnarray} as for the two-dimensional oscillator. The dimension $d$ of the representation space ${\cal F}_{\kappa}$ can be deduced from condition (\ref{condition}). Two cases need to be considered according to whether $\kappa \geq 0$ or $\kappa < 0$. \begin{itemize} \item For $\kappa \geq 0$, Eq.~(\ref{condition}) is trivially satisfied so that the dimension $d$ of ${\cal F}_{\kappa}$ is infinite. This is well known in the case $\kappa = 0$ which corresponds to a two-dimensional isotropic harmonic oscillator. For $\kappa > 0$, the representation corresponds to the symmetric discrete (infinite-dimensional) irreducible representation of the $SU_{2,1}$ group. \item For $\kappa < 0$, there exists a finite number of states satisfying condition (\ref{condition}). Indeed, we have \begin{eqnarray} n_1 + n_2 = 0, 1, \ldots, E(-\frac{1}{\kappa}), \nonumber \end{eqnarray} where $E(x)$ denotes the integer part of $x$. In the following, we shall take $-1/\kappa$ to be an integer when $\kappa < 0$. Consequently, for $\kappa < 0$ the dimension $d$ of the finite-dimensional space ${\cal F}_{\kappa}$ is \begin{eqnarray} d = \frac{1}{2}(k+1)(k+2), \quad k = -\frac{1}{\kappa} \in \mathbb{N}^*. \label{dimensiond} \end{eqnarray} We know that the dimension $d(\lambda, \mu)$ of the irreducible representation $(\lambda, \mu)$ of $SU_3$ is given by \begin{eqnarray} d(\lambda, \mu) = \frac{1}{2}(\lambda + 1)(\mu + 1)(\lambda + \mu + 2 ), \quad \lambda \in \mathbb{N}, \quad \mu \in \mathbb{N}. \nonumber \end{eqnarray} Therefore, the finite-dimensional representation of ${\cal A}_{\kappa}(2)$ defined by (\ref{actionN})-(\ref{condition}) with $-1/\kappa = k \in \mathbb{N}^*$ corresponds to the irreducible representation $(0, k)$ or its adjoint $(k, 0)$ of $SU_3$. \end{itemize} \subsection{Generalized oscillator Hamiltonian} Since the ${\cal A}_{\kappa}(2)$ algebra can be viewed as an extension of the two-dimensional oscillator algebra, it is natural to consider the $a_1^+ a_1^- + a_2^+ a_2^-$ operator as a Hamiltonian associated with ${\cal A}_{\kappa}(2)$. The action of this operator on the space ${\cal F}_{\kappa}$ is given by \begin{eqnarray} (a_1^+ a_1^- + a_2^+ a_2^-) \vert n_1,n_2 \rangle &=& [F_1(n_1, n_2) + F_2(n_1, n_2)] \vert n_1,n_2 \rangle \nonumber \\ &=& (n_1 + n_2) [1 + \kappa (n_1 + n_2 - 1)] \vert n_1,n_2 \rangle \nonumber \end{eqnarray} or \begin{eqnarray} (a_1^+ a_1^- + a_2^+ a_2^-) \vert n_1,n_2 \rangle = H(n_1, n_2) \vert n_1,n_2 \rangle. \nonumber \end{eqnarray} Thus, the $a_1^+ a_1^- + a_2^+ a_2^-$ Hamiltonian can be written \begin{eqnarray} a_1^+ a_1^- + a_2^+ a_2^- = H, \nonumber \end{eqnarray} with \begin{eqnarray} H \equiv H(N_1, N_2) = (N_1 + N_2) [1 + \kappa (N_1 + N_2 - 1)] \nonumber \end{eqnarray} modulo its action on ${\cal F}_{\kappa}$. The $H$ Hamiltonian is clearly a nonlinear extension of the Hamiltonian for the two-dimensional isotropic harmonic oscillator.
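As a numerical sanity check (not part of the derivation above), the finite-dimensional representation corresponding to $\kappa = -1/k$ can be realized by explicit matrices, here with $\varphi = 0$ so that all entries are real. The following Python/NumPy sketch verifies the defining relations (\ref{commutation1})-(\ref{commutation3}) and the expression of the $H$ Hamiltonian in terms of $N_1 + N_2$.
\begin{verbatim}
import numpy as np

k = 3; kappa = -1.0 / k                  # kappa < 0 with -1/kappa integer
basis = [(n1, n2) for n1 in range(k + 1) for n2 in range(k + 1 - n1)]
idx = {b: j for j, b in enumerate(basis)}
d = len(basis)                           # d = (k+1)(k+2)/2

def F(i, n1, n2):                        # structure functions F_i(n1,n2)
    n = n1 if i == 1 else n2
    return n * (1 + kappa * (n1 + n2 - 1))

ap1 = np.zeros((d, d)); ap2 = np.zeros((d, d))
N1 = np.zeros((d, d)); N2 = np.zeros((d, d))
for (n1, n2), j in idx.items():
    N1[j, j], N2[j, j] = n1, n2
    if (n1 + 1, n2) in idx:              # action of a_1^+ (phi = 0)
        ap1[idx[(n1 + 1, n2)], j] = np.sqrt(F(1, n1 + 1, n2))
    if (n1, n2 + 1) in idx:              # action of a_2^+ (phi = 0)
        ap2[idx[(n1, n2 + 1)], j] = np.sqrt(F(2, n1, n2 + 1))
am1, am2 = ap1.T, ap2.T                  # a_i^- = (a_i^+)^dagger (real matrices)
I = np.eye(d)

def comm(x, y):
    return x @ y - y @ x

assert np.allclose(comm(am1, ap1), I + kappa * (2 * N1 + N2))
assert np.allclose(comm(am2, ap2), I + kappa * (2 * N2 + N1))
assert np.allclose(comm(ap1, ap2), 0)              # [a_1^+, a_2^+] = 0
assert np.allclose(comm(ap1, comm(ap1, am2)), 0)   # triple relation
H = ap1 @ am1 + ap2 @ am2
Ntot = N1 + N2                           # H = (N1+N2)[I + kappa(N1+N2-I)]
assert np.allclose(H, Ntot @ (I + kappa * (Ntot - I)))
\end{verbatim}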
The eigenvalues \begin{eqnarray} \lambda_n = n [1 + \kappa (n - 1)], \quad n = n_1 + n_2, \quad n_1 \in \mathbb{N}, \quad n_2 \in \mathbb{N} \nonumber \end{eqnarray} of $H$ can be reduced for $\kappa = 0$ to the energies $n$ of the two-dimensional oscillator (up to additive and multiplicative constants). For $\kappa \not= 0$, the degeneracy of the $\lambda_n$ level is $n+1$ and coincides with the degeneracy of the $n$ level corresponding to $\kappa = 0$. \section{PHASE OPERATORS AND PHASE STATES FOR ${\cal A}_{\kappa}(2)$ WITH $\kappa < 0$} \subsection{Phase operators in finite dimension} \subsubsection{The $E_{1d}$ and $E_{2d}$ phase operators} For $\kappa < 0$ the finite-dimensional space ${\cal F}_{\kappa}$ is spanned by the basis \begin{eqnarray} \{ \vert n_1 , n_2 \rangle : n_1, n_2 \in \mathbb{N}, \ n_1 + n_2 \leq k \}. \nonumber \end{eqnarray} This space can be partitioned as \begin{eqnarray} {\cal F}_{\kappa} = \bigoplus_{l = 0}^{k}{\cal A}_{\kappa , l}, \nonumber \end{eqnarray} where ${\cal A}_{\kappa , l}$ is spanned by \begin{eqnarray} \{ \vert n , l \rangle : n = 0, 1, \ldots, k-l \}. \nonumber \end{eqnarray} We have \begin{eqnarray} {\rm dim \,} {\cal A}_{\kappa , l} = k-l+1 \nonumber \end{eqnarray} so that (\ref{action1+})-(\ref{action1-}) must be completed by \begin{eqnarray} a_1^+ \vert k-l , l \rangle = 0, \nonumber \end{eqnarray} which can be deduced from the calculation of $\langle k-l , l \vert a_1^-a_1^+ \vert k-l , l \rangle$. The operators $a_1^+$ and $a_1^-$ leave each subspace ${\cal A}_{\kappa , l}$ invariant. Then, it is convenient to write \begin{eqnarray} a_1^{\pm} = \sum_{l = 0}^{k} a_1^{\pm}(l), \nonumber \end{eqnarray} with the actions \begin{eqnarray} && a_1^+(l) \vert n, l' \rangle = \delta_{l,l'} \sqrt{F_1(n+1,l)} e^{{-i[H(n+1,l)- H(n, l)] \varphi }} \vert n+1,l \rangle, \nonumber \\ && a_1^+(l) \vert k-l',l' \rangle = 0, \nonumber \\ && a_1^-(l) \vert n, l' \rangle = \delta_{l,l'} \sqrt{F_1(n,l)} e^{{+i[H(n,l)- H(n-1, l)] \varphi }} \vert n-1,l \rangle, \nonumber \\ && a_1^-(l) \vert 0,l' \rangle = 0, \nonumber \end{eqnarray} which show that $a_1^+(l)$ and $a_1^-(l)$ leave ${\cal A}_{\kappa , l}$ invariant. Let us now define the $E_{1d}$ operator by \begin{eqnarray} E_{1d} \vert n_1 , n_2 \rangle = e^{i [H(n_1,n_2) - H(n_1-1,n_2)] \varphi} \vert n_1-1 , n_2 \rangle, \quad 0 \leq n_1 + n_2 \leq k, \quad n_1 \not= 0 \nonumber \end{eqnarray} and \begin{eqnarray} E_{1d} \vert 0 , n_2 \rangle = e^{i [H(0,n_2) - H(k-n_2,n_2)] \varphi} \vert k - n_2 , n_2 \rangle, \quad 0 \leq n_2 \leq k, \quad n_1 = 0. \nonumber \end{eqnarray} Thus, it is possible to write \begin{eqnarray} a_1^- = E_{1d} \sqrt{F_1(N_1,N_2)} \Leftrightarrow a_1^+ = \sqrt{F_1(N_1,N_2)} (E_{1d})^{\dagger}. \label{decomp pol 59} \end{eqnarray} The $E_{1d}$ operator can be developed as \begin{eqnarray} E_{1d} = \sum_{l = 0}^{k} E_{1d}(l), \nonumber \end{eqnarray} with \begin{eqnarray} E_{1d}(l) \vert n , l' \rangle &=& \delta_{l,l'} e^{i[H(n,l) - H(n-1,l)] \varphi} \vert n - 1, l \rangle, \quad n \neq 0, \label{actionE1} \\ E_{1d}(l) \vert 0 , l' \rangle &=& \delta_{l,l'} e^{i[H(0,l) - H(k-l,l)] \varphi} \vert k - l, l \rangle, \quad n = 0. \label{actionE1suite} \end{eqnarray} Operator $E_{1d}(l)$ leaves ${\cal A}_{\kappa , l}$ invariant and satisfies \begin{eqnarray} E_{1d}(l) (E_{1d}(l'))^{\dagger} = (E_{1d}(l'))^{\dagger} E_{1d}(l) = \delta_{l,l'} \sum_{n = 0}^{k-l} \vert n , l \rangle \langle n , l \vert.
\nonumber \end{eqnarray} Consequently, we obtain \begin{eqnarray} E_{1d} (E_{1d})^{\dagger} = (E_{1d})^{\dagger} E_{1d} = \sum_{l = 0}^{k} (E_{1d}(l))^{\dagger} E_{1d}(l) = \sum_{l = 0}^{k} \sum_{n = 0}^{k-l} \vert n , l \rangle \langle n , l \vert = I, \nonumber \end{eqnarray} which shows that $E_{1d}$ is unitary. Therefore, Eq.~(\ref{decomp pol 59}) constitutes a polar decomposition of $a_1^-$ and $a_1^+$. Similar developments can be obtained for $a_2^-$ and $a_2^+$. We limit ourselves to the main results concerning the decomposition \begin{eqnarray} a_2^- = E_{2d} \sqrt{F_2(N_1,N_2)} \Leftrightarrow a_2^+ = \sqrt{F_2(N_1,N_2)} (E_{2d})^{\dagger}. \nonumber \end{eqnarray} In connection with this decomposition, we use the partition \begin{eqnarray} {\cal F}_{\kappa} = \bigoplus_{l = 0}^{k} {\cal B}_{\kappa , l}, \nonumber \end{eqnarray} where the ${\cal B}_{\kappa , l}$ subspace, of dimension $k-l+1$, is spanned by the basis \begin{eqnarray} \{ \vert l , n \rangle : n = 0, 1, \ldots, k-l \}. \nonumber \end{eqnarray} We can write \begin{eqnarray} E_{2d} = \sum_{l = 0}^{k} E_{2d}(l), \nonumber \end{eqnarray} where the $E_{2d}(l)$ operator satisfies \begin{eqnarray} E_{2d}(l) \vert l' , n \rangle &=& \delta_{l,l'} e^{i[H(l , n) - H(l, n-1)] \varphi} \vert l , n - 1 \rangle, \quad n \not = 0, \label{16prime} \\ E_{2d}(l) \vert l' , 0 \rangle &=& \delta_{l,l'} e^{i[H(l , 0) - H(l, k-l)] \varphi} \vert l , k - l \rangle, \quad n = 0, \label{17prime} \end{eqnarray} and \begin{eqnarray} E_{2d}(l) (E_{2d}(l'))^{\dagger} = (E_{2d}(l'))^{\dagger} E_{2d}(l) = \delta_{l,l'} \sum_{n = 0}^{k-l} \vert l , n \rangle \langle l , n \vert. \nonumber \end{eqnarray} This yields \begin{eqnarray} E_{2d} (E_{2d})^{\dagger} = (E_{2d})^{\dagger} E_{2d} = \sum_{l = 0}^{k} (E_{2d}(l))^{\dagger} E_{2d}(l) = I \nonumber \end{eqnarray} and the operator $E_{2d}$, like $E_{1d}$, is unitary. \subsubsection{The $E_{3d}$ phase operator} Let us go back to the pair $(a_3^+, a_3^-)$ of operators defined by (\ref{lesdeuxa3}) in terms of the pairs $(a_1^+, a_1^-)$ and $(a_2^+, a_2^-)$. The action of $a_3^+$ and $a_3^-$ on ${\cal F}_{\kappa}$ follows from (\ref{action1+})-(\ref{action2-}). We get \begin{eqnarray} a_3^+ \vert n_1, n_2 \rangle &=& - \kappa \sqrt{n_1(n_2 + 1)} \vert n_1-1 , n_2+1 \rangle, \nonumber \\ a_3^- \vert n_1, n_2 \rangle &=& - \kappa \sqrt{(n_1 + 1)n_2} \vert n_1+1 , n_2-1 \rangle. \nonumber \end{eqnarray} From Eqs.~(\ref{commutation3}) and (\ref{lesdeuxa3}), it is clear that the two pairs ($a_1^+,a_1^-$) and ($a_2^+,a_2^-$) commute when $\kappa = 0$. We thus recover that the ${\cal A}_0(2)$ algebra corresponds to a two-dimensional harmonic oscillator. Here, it is appropriate to use the partition \begin{eqnarray} {\cal F}_{\kappa} = \bigoplus_{l=0}^{k} {\cal C}_{\kappa , l}, \label{partitionC} \end{eqnarray} where the subspace ${\cal C}_{\kappa , l}$, of dimension $l+1$ (but not $k-l+1$ as for ${\cal A}_{\kappa , l}$ and ${\cal B}_{\kappa , l}$), spanned by the basis \begin{eqnarray} \{ \vert l-n , n \rangle : n = 0, 1, \ldots, l \} \nonumber \end{eqnarray} is left invariant by $a_3^+$ and $a_3^-$. Following the same line of reasoning as for $E_{1d}$ and $E_{2d}$, we can associate an operator $E_{3d}$ with the ladder operators $a_3^+$ and $a_3^-$. 
We take the operator $E_{3d}$ associated with the partition (\ref{partitionC}) such that \begin{eqnarray} a_3^- = E_{3d} \sqrt{F_3(N_1,N_2)} \Leftrightarrow a_3^+ = \sqrt{F_3(N_1,N_2)} (E_{3d})^{\dagger}, \nonumber \end{eqnarray} where \begin{eqnarray} \sqrt{F_3(N_1,N_2)} = - \kappa \sqrt{(N_1 + 1)N_2}. \nonumber \end{eqnarray} The $E_{3d}$ operator reads \begin{eqnarray} E_{3d} = \sum_{l=0}^{k} E_{3d}(l), \nonumber \end{eqnarray} where $E_{3d}(l)$ can be taken to satisfy \begin{eqnarray} E_{3d}(l) \vert l' - n , n \rangle = \delta_{l,l'} \vert l - n + 1 , n-1 \rangle, \quad n \neq 0, \nonumber \end{eqnarray} \begin{eqnarray} E_{3d}(l) \vert l' , 0 \rangle = \delta_{l,l'} \vert 0 , l \rangle, \quad n = 0. \nonumber \end{eqnarray} Finally, we have \begin{eqnarray} E_{3d}(l) (E_{3d}(l'))^{\dagger} = (E_{3d}(l'))^{\dagger} E_{3d}(l) = \delta_{l,l'} \sum_{n = 0}^{l} \vert l-n , n \rangle \langle l-n , n \vert. \nonumber \end{eqnarray} As a consequence, we obtain \begin{eqnarray} E_{3d} (E_{3d})^{\dagger} = (E_{3d})^{\dagger} E_{3d} = \sum_{l = 0}^{k} (E_{3d}(l))^{\dagger} E_{3d}(l) = I, \nonumber \end{eqnarray} a result that reflects the unitarity property of $E_{3d}$. \subsubsection{The $E_{d}$ phase operator} Operators $E_{1d}(l)$, $E_{2d}(l)$ and $E_{3d}(l)$, defined for $\kappa <0$ as components of the operators $E_{1d}$, $E_{2d}$ and $E_{3d}$, leave invariant the sets ${\cal A}_{\kappa,l}$, ${\cal B}_{\kappa,l}$ and ${\cal C}_{\kappa,l}$, respectively. Therefore, operators $E_{1d}$, $E_{2d}$ and $E_{3d}$ do not connect all elements of ${\cal F}_{\kappa}$, i.e., a given element of ${\cal F}_{\kappa}$ cannot be obtained from repeated applications of $E_{1d}$, $E_{2d}$ and $E_{3d}$ on an arbitrary element of ${\cal F}_{\kappa}$. We now define a new operator $E_{d}$ which can connect (by means of repeated applications) any pair of elements in the $d$-dimensional space ${\cal F}_{\kappa}$ corresponding to $\kappa <0$. Let this global operator be defined via the action \begin{eqnarray} E_{d} \vert n , l \rangle = e^{i [H(n,l)- H(n-1, l)] \varphi} \vert n-1 , l \rangle, \quad n = 1, 2, \ldots, k-l, \quad l = 0, 1, \ldots, k \label{Ed1} \end{eqnarray} and the boundary actions \begin{eqnarray} && E_{d} \vert 0 , l \rangle = e^{i [H(0,l)- H(k-l+1, l-1)] \varphi} \vert k-l+1, l-1 \rangle, \quad l = 1, 2, \ldots, k \label{Ed2} \\ && E_{d} \vert 0 , 0 \rangle = e^{i [H(0,0)- H(0, k)] \varphi} \vert 0, k \rangle. \label{Ed3} \end{eqnarray} The $E_{d}$ operator is obviously unitary. By making the identification \begin{eqnarray} \Phi_{\frac{1}{2}l(2k - l + 3) + n} \equiv \vert n , l \rangle, \quad n = 0, 1, \ldots, k-l, \quad l = 0, 1, \ldots, k, \nonumber \end{eqnarray} the set \begin{eqnarray} \{ \Phi_{j} : j = 0, 1, \ldots, d-1 \} \nonumber \end{eqnarray} constitutes a basis for the $d$-dimensional Fock space ${\cal F}_{\kappa}$. Then, the various sets ${\cal A}_{\kappa,l}$ can be rewritten as \begin{eqnarray} {\cal A}_{\kappa,0} &:& \{ \Phi_{0}, \Phi_{1}, \ldots, \Phi_{k-1}, \Phi_{k} \} \nonumber \\ {\cal A}_{\kappa,1} &:& \{ \Phi_{k+1}, \Phi_{k+2}, \ldots, \Phi_{2k} \} \nonumber \\ & \vdots & \nonumber \\ {\cal A}_{\kappa,k} &:& \{ \Phi_{d-1} \}.
\nonumber \end{eqnarray} Repeated applications of $E_{d}$ on the vectors $\Phi_{j}$ with $j = 0, 1, \ldots, d-1$ can be summarized by the following cyclic sequence \begin{eqnarray} E_{d} : \Phi_{d-1} \mapsto \Phi_{d-2} \mapsto \ldots \mapsto \Phi_{1} \mapsto \Phi_{0} \mapsto \Phi_{d-1} \mapsto {\rm etc.} \nonumber \end{eqnarray} The $E_{d}$ operator thus makes it possible to move inside each ${\cal A}_{\kappa,l}$ set and to connect the various sets according to the sequence \begin{eqnarray} E_{d} : {\cal A}_{\kappa,k} \to {\cal A}_{\kappa,k-1} \to \ldots \to {\cal A}_{\kappa,0} \to {\cal A}_{\kappa,k} \to {\rm etc.} \nonumber \end{eqnarray} Similar results hold for the partitions of ${\cal F}_{\kappa}$ in ${\cal B}_{\kappa,l}$ or ${\cal C}_{\kappa,l}$ subsets. \subsection{Phase states in finite dimension} \subsubsection{Phase states for $E_{1d}(l)$ and $E_{2d}(l)$} We first derive the eigenstates of $E_{1d}(l)$. For this purpose, let us consider the eigenvalue equation \begin{eqnarray} E_{1d}(l) \vert z_l \rangle = z_l \vert z_l \rangle, \quad \vert z_l \rangle = \sum_{n = 0}^{k-l} a_{n} z_l^n \vert n , l \rangle, \quad z_l \in \mathbb{C}. \nonumber \end{eqnarray} Using definition (\ref{actionE1})-(\ref{actionE1suite}), we obtain the following recurrence relation for the coefficients $a_{n}$ \begin{eqnarray} a_{n} = e^{-i [H(n , l) - H(n-1 , l)] \varphi} a_{n-1}, \quad n = 1, 2, \ldots, k-l \nonumber \end{eqnarray} with \begin{eqnarray} a_{0} = e^{-i [H(0,l) - H(k-l,l)] \varphi} a_{k-l} \nonumber \end{eqnarray} and the condition \begin{eqnarray} (z_l)^{k-l+1} = 1. \nonumber \end{eqnarray} Therefore, we get \begin{eqnarray} a_{n} = e^{-i [H(n,l) - H(0,l)] \varphi} a_{0}, \quad n = 0, 1, \ldots, k-l \nonumber \end{eqnarray} and the complex variable $z_l$ is a root of unity given by \begin{eqnarray} z_l = q_l^{m}, \quad m = 0, 1, \ldots, k-l, \nonumber \end{eqnarray} where \begin{eqnarray} q_l = \exp \left( \frac{2 \pi i} {k-l+1} \right) \label{definition of q_l} \end{eqnarray} is reminiscent of the deformation parameter used in the theory of quantum groups. The $a_{0}$ constant can be obtained, up to a phase factor, from the normalization condition $\langle z_l \vert z_l \rangle = 1$. We take \begin{eqnarray} a_{0} = \frac{1}{\sqrt {k-l+1}} e^{-i H(0,l) \varphi}, \label{choixdephase} \end{eqnarray} where the phase factor is chosen in order to ensure temporal stability of the $\vert z_l \rangle$ state. Finally, we arrive at the following normalized eigenstates of $E_{1d}(l)$ \begin{eqnarray} \vert z_l \rangle \equiv \vert l , m , \varphi \rangle = \frac{1}{\sqrt {k-l+1}} \sum_{n=0}^{k-l} e^{-i H(n,l) \varphi} q_l^{mn} \vert n , l \rangle. \label{coherentstatemvarphi} \end{eqnarray} The $\vert l , m , \varphi \rangle$ states are labeled by the parameters $l \in \{ 0, 1, \ldots, k \}$, $m \in \mathbb{Z}/(k-l+1)\mathbb{Z}$ and $\varphi \in \mathbb{R}$. They satisfy \begin{eqnarray} E_{1d}(l) \vert l , m , \varphi \rangle = e^{i\theta_{m}} \vert l , m , \varphi \rangle, \quad \theta_{m} = m \frac{2 \pi}{k-l+1}, \quad m = 0, 1, \ldots, k-l, \label{ancienne92} \end{eqnarray} which shows that $E_{1d}(l)$ is a phase operator. The phase states $\vert l , m, \varphi \rangle$ have remarkable properties: \begin{itemize} \item They are temporally stable with respect to the evolution operator associated with the $H$ Hamiltonian. 
In other words, they satisfy \begin{eqnarray} e^{-i H t} \vert l , m , \varphi \rangle = \vert l , m , \varphi + t \rangle \nonumber \end{eqnarray} for any value of the real parameter $t$. \item For fixed $\varphi$ and $l$, they satisfy the equiprobability relation \begin{eqnarray} | \langle n , l \vert l , m , \varphi \rangle | = \frac{1}{\sqrt{k-l+1}} \nonumber \end{eqnarray} and the property \begin{eqnarray} \sum_{m = 0}^{k-l} \vert l , m , \varphi \rangle \langle l , m , \varphi \vert = \sum_{n=0}^{k-l} \vert n , l \rangle \langle n , l \vert. \nonumber \end{eqnarray} \item The overlap between two phase states $\vert l', m' , \varphi' \rangle$ and $\vert l , m , \varphi \rangle$ reads \begin{eqnarray} \langle l , m , \varphi \vert l' , m' , \varphi' \rangle = \delta_{l,l'} \frac{1}{k-l+1} \sum_{n=0}^{k-l} q_l^{\rho(m-m', \varphi - \varphi', n)}, \nonumber \end{eqnarray} where \begin{eqnarray} \rho(m-m', \varphi - \varphi', n) = - (m - m')n + \frac{k-l+1}{2\pi} (\varphi - \varphi') H(n,l) \nonumber \end{eqnarray} with $q_l$ defined in (\ref{definition of q_l}). As a particular case, for fixed $\varphi$ we have the orthonormality relation \begin{eqnarray} \langle l , m , \varphi \vert l', m' , \varphi \rangle = \delta_{l,l'} \delta_{m,m'}. \nonumber \end{eqnarray} However, not all temporally stable phase states are orthogonal. \end{itemize} Similar results can be derived for the $E_{2d}(l)$ operator by exchanging the roles played by $n$ and $l$. It is enough to mention that the $\vert z_l \rangle$ eigenstates of $E_{2d}(l)$ can be taken in the form \begin{eqnarray} \vert z_l \rangle \equiv \vert l , m , \varphi \rangle = \frac{1}{\sqrt {k-l+1}} \sum_{n=0}^{k-l} e^{-i H(l,n) \varphi} q_l^{mn} \vert l , n \rangle \nonumber \end{eqnarray} and present properties identical to those of the states in (\ref{coherentstatemvarphi}). \subsubsection{Phase states for $E_{3d}(l)$} The eigenstates of the $E_{3d}(l)$ operator are given by \begin{eqnarray} E_{3d}(l) \vert w_l \rangle = w_l \vert w_l \rangle, \quad \vert w_l \rangle = \sum_{n = 0}^{l} c_{n} w_l^n \vert l-n , n \rangle, \quad w_l \in \mathbb{C}. \nonumber \end{eqnarray} The use of the actions of $E_{3d}(l)$ given in Section 3.1 leads to the recurrence relation \begin{eqnarray} c_{n+1} = c_{n}, \quad n = 0, 1, \ldots, l-1 \nonumber \end{eqnarray} with the condition \begin{eqnarray} c_{0} = c_{l} (w_l)^{l+1}. \nonumber \end{eqnarray} It follows that \begin{eqnarray} c_{n} = c_{0}, \quad n = 0, 1, \ldots, l \nonumber \end{eqnarray} and the $w_l$ eigenvalues satisfy \begin{eqnarray} (w_l)^{l+1} = 1. \nonumber \end{eqnarray} Therefore, the admissible values for $w_l$ are \begin{eqnarray} w_l = \omega_l^{m}, \quad m = 0, 1, \ldots, l, \nonumber \end{eqnarray} with \begin{eqnarray} \omega_l = \exp \left( \frac{2 \pi i} {l+1} \right). \nonumber \end{eqnarray} As a result, the normalized eigenstates of $E_{3d}(l)$ can be taken in the form \begin{eqnarray} \vert w_l \rangle \equiv \Vert l , m , \varphi \rangle \rangle = \frac{1}{\sqrt{l+1}} e^{-i H(0,l) \varphi} \sum_{n=0}^{l} \omega_l^{mn} \vert l-n , n \rangle. \label{coherentstatemvarphiE3d} \end{eqnarray} The $\Vert l , m , \varphi \rangle \rangle$ states depend on the parameters $l \in \{ 0, 1, \ldots, k \}$, $m \in \mathbb{Z}/(l+1)\mathbb{Z}$ and $\varphi \in \mathbb{R}$.
They satisfy \begin{eqnarray} E_{3d}(l) \Vert l , m , \varphi \rangle \rangle = e^{i\theta_{m}} \Vert l , m , \varphi \rangle \rangle, \quad \theta_{m} = m \frac{2 \pi}{l+1}, \label{ancienne92E3d} \end{eqnarray} so that $E_{3d}(l)$ is a phase operator. For fixed $l$, the set $\{ \Vert l , m , 0 \rangle \rangle : m = 0, 1, \ldots, l \}$, corresponding to $\varphi=0$, follows from the set $\{ \vert l-n , n \rangle : n = 0, 1, \ldots, l \}$ by making use of a (quantum) discrete Fourier transform.$^{\cite{Vourdas04}}$ Note that for $\varphi=0$, the $\Vert l , m , 0 \rangle \rangle$ phase states have the same form as the phase states for $SU_2$ derived by Vourdas.$^{\cite{Vourdas1990}}$ In the case where $\varphi \not= 0$, the $\Vert l , m , \varphi \rangle \rangle$ phase states for $E_{3d}(l)$ satisfy properties similar to those of the $\vert l , m , \varphi \rangle$ phase states for $E_{1d}(l)$ and for $E_{2d}(l)$ modulo the substitutions $(n , l) \to (l-n , n)$, $q_l \to \omega_l$ and $k-l \to l$. \subsubsection{Phase states for $E_{d}$} We are now in a position to derive the eigenstates of the $E_{d}$ operator. They are given by the following eigenvalue equation \begin{eqnarray} E_{d} \vert \psi \rangle = \lambda \vert \psi \rangle, \label{Eqlambda} \end{eqnarray} where \begin{eqnarray} \vert \psi \rangle = \sum_{l=0}^{k} \sum_{n=0}^{k-l} C_{n,l} \vert n,l \rangle. \label{vectlambda} \end{eqnarray} Introducing (\ref{vectlambda}) into (\ref{Eqlambda}) and using the definition in (\ref{Ed1})-(\ref{Ed3}) of the $E_{d}$ operator, a straightforward but long calculation leads to the following recurrence relations \begin{eqnarray} && C_{n+1,l} e^{i [H(n+1,l)- H(n, l)] \varphi } = \lambda C_{n,l} \label{rel-rec-1} \\ && C_{0,l+1} e^{i [H(0,l+1)- H(k-l, l)] \varphi } = \lambda C_{k-l,l} \label{rel-rec-2} \end{eqnarray} for $l = 0, 1, \ldots, k-1$. For $l=k$, we have \begin{eqnarray} C_{0,0} ~ e^{i [H(0,0)- H(0, k)] \varphi } = \lambda C_{0,k}. \label{rel-rec-3} \end{eqnarray} (Note that (\ref{rel-rec-2}) with $l=k$ yields (\ref{rel-rec-3}) if $C_{0,k+1}$ is identified with $C_{0,0}$.) From the recurrence relation (\ref{rel-rec-1}), it is easy to get \begin{eqnarray} C_{n,l} = \lambda^n e^{{-i [H(n,l)- H(0, l)] \varphi }} C_{0,l}, \label{rel-3} \end{eqnarray} which, for $n = k-l$, gives \begin{eqnarray} C_{k-l,l} = \lambda^{k-l} e^{{-i [H(k-l,l)- H(0, l)] \varphi }} C_{0,l} \label{soixante10} \end{eqnarray} in terms of $C_{0,l}$. By introducing (\ref{soixante10}) into (\ref{rel-rec-2}), we obtain the recurrence relation \begin{eqnarray} C_{0,l+1} e^{{i [H(0,l+1)- H(0, l)] \varphi }} = \lambda^{k-l+1} C_{0,l} \label{rel-4} \end{eqnarray} that completely determines the $C_{0,l}$ coefficients and subsequently the $C_{n,l}$ coefficients owing to (\ref{rel-3}). Indeed, the iteration of Eq.~(\ref{rel-4}) gives \begin{eqnarray} C_{0,l} = \lambda^{\frac{1}{2}l(2k - l + 3)} e^{-{i [H(0,l)- H(0, 0)] \varphi}} C_{0,0}. \label{soixante12} \end{eqnarray} By combining (\ref{rel-3}) with (\ref{soixante12}), we finally obtain \begin{eqnarray} C_{n,l}= \lambda^{\frac{1}{2}l(2k - l + 3) + n} e^{{-i H(n,l) \varphi }} C_{0,0}. \label{avantavant73} \end{eqnarray} Note that for $l = k$ ($\Rightarrow n = 0$), Eq.~(\ref{avantavant73}) becomes \begin{eqnarray} C_{0,k}= \lambda^{\frac{1}{2}k(k + 3)} e^{{-i H(0,k) \varphi }} C_{0,0}. \label{avant73} \end{eqnarray} The introduction of (\ref{avant73}) in (\ref{rel-rec-3}) produces the condition \begin{eqnarray} \lambda^d = 1.
\nonumber \end{eqnarray} Consequently, the $\lambda$ eigenvalues are \begin{eqnarray} \lambda = \exp \left( \frac{2\pi i}{d} m \right), \quad m = 0, 1, \ldots, d-1. \nonumber \end{eqnarray} Finally, the normalized eigenvectors of the $E_{d}$ operator read \begin{eqnarray} \vert \psi \rangle \equiv \vert m , \varphi \rangle = \frac{1}{\sqrt{d}}\sum_{l=0}^{k} q^{\frac{1}{2}ml(2k-l+3)}\sum_{n=0}^{k-l} q^{mn} e^{-iH(n,l)\varphi} \vert n,l \rangle, \label{m,phi} \end{eqnarray} where \begin{eqnarray} q = \exp \left( \frac{2\pi i}{d} \right). \label{definition of q} \end{eqnarray} The $\vert m , \varphi \rangle$ states are labeled by the parameters $m \in \mathbb{Z}/d\mathbb{Z}$ and $\varphi \in \mathbb{R}$. They satisfy \begin{eqnarray} E_{d} \vert m , \varphi \rangle = e^{i\theta_m} \vert m , \varphi \rangle, \quad \theta_m = m \frac{2 \pi}{d}, \quad m = 0, 1, \ldots, d-1. \nonumber \end{eqnarray} In conclusion, $E_{d}$ is a unitary phase operator. The $\vert m , \varphi \rangle$ phase states satisfy interesting properties: \begin{itemize} \item They are temporally stable under time evolution, i.e., \begin{eqnarray} e^{-i H t} \vert m , \varphi \rangle = \vert m , \varphi + t \rangle \nonumber \end{eqnarray} for any value of the real parameter $t$. \item For fixed $\varphi$, they satisfy the relation \begin{eqnarray} | \langle n , l \vert m , \varphi \rangle | = \frac{1}{\sqrt{d}} \nonumber \end{eqnarray} and the closure property \begin{eqnarray} \sum_{m = 0}^{d-1} \vert m , \varphi \rangle \langle m , \varphi \vert = \sum_{l=0}^{k}\sum_{n=0}^{k-l} \vert n , l \rangle \langle n,l \vert = I. \nonumber \end{eqnarray} \item The overlap between two phase states $\vert m' , \varphi' \rangle$ and $\vert m , \varphi \rangle$ reads \begin{eqnarray} \langle m , \varphi \vert m' , \varphi' \rangle = \frac{1}{d} \sum_{l=0}^{k}\sum_{n=0}^{k-l} q^{\tau(m'-m, \varphi - \varphi', n , l)}, \nonumber \end{eqnarray} where \begin{eqnarray} \tau(m'-m, \varphi - \varphi', n, l) = (m' - m) \bigg[ \frac{1}{2}l(2k - l + 3) + n \bigg] + \frac{d}{2\pi} (\varphi - \varphi') H(n,l) \nonumber \end{eqnarray} with $q$ defined in (\ref{definition of q}). As a particular case, we have the orthonormality relation \begin{eqnarray} \langle m , \varphi \vert m' , \varphi \rangle = \delta_{m,m'}. \nonumber \end{eqnarray} However, the temporally stable phase states are not all orthogonal. \end{itemize} \subsubsection{The $k=1$ particular case} To close Section 3.2, we now establish contact with the results of Klimov {\em et al.}$^{\cite{klimov1}}$ which correspond to $k=1$ (i.e., $\kappa = -1$). In this particular case, the ${\cal F}_{\kappa}$ Fock space is three-dimensional ($d=3$). It corresponds to the representation space of $SU_3$ relevant for ordinary quarks and antiquarks in particle physics and for qutrits in quantum information. For the purpose of comparison, we put \begin{eqnarray} \vert \phi_1 \rangle \equiv \vert 0,0 \rangle, \quad \vert \phi_2 \rangle \equiv \vert 1,0 \rangle, \quad \vert \phi_3 \rangle \equiv \vert 0,1 \rangle.
\nonumber \end{eqnarray} Then, the operators $E_{13}$, $E_{23}$, $E_{33}$ and $E_{3}$ assume the form \begin{eqnarray} && E_{13} = e^{ i \varphi} \vert \phi_1 \rangle \langle \phi_2 \vert + e^{-i \varphi} \vert \phi_2 \rangle \langle \phi_1 \vert + \vert \phi_3 \rangle \langle \phi_3 \vert \nonumber \\ && E_{23} = e^{ i \varphi} \vert \phi_1 \rangle \langle \phi_3 \vert + e^{-i \varphi} \vert \phi_3 \rangle \langle \phi_1 \vert + \vert \phi_2 \rangle \langle \phi_2 \vert \nonumber \\ && E_{33} = \vert \phi_2 \rangle \langle \phi_3 \vert + \vert \phi_3 \rangle \langle \phi_2 \vert + \vert \phi_1 \rangle \langle \phi_1 \vert \nonumber \\ && E_3 = e^{ i \varphi} \vert \phi_1 \rangle \langle \phi_2 \vert + \vert \phi_2 \rangle \langle \phi_3 \vert + e^{-i \varphi} \vert \phi_3 \rangle \langle \phi_1 \vert. \nonumber \end{eqnarray} Operators $E_{13}$, $E_{23}$ and $E_{33}$ have a form similar to that of the phase operators \begin{eqnarray} && {\hat E}_{12} = \vert \phi_1 \rangle \langle \phi_2 \vert - \vert \phi_2 \rangle \langle \phi_1 \vert + \vert \phi_3 \rangle \langle \phi_3 \vert \nonumber \\ && {\hat E}_{13} = \vert \phi_1 \rangle \langle \phi_3 \vert - \vert \phi_3 \rangle \langle \phi_1 \vert + \vert \phi_2 \rangle \langle \phi_2 \vert \nonumber \\ && {\hat E}_{23} = \vert \phi_2 \rangle \langle \phi_3 \vert - \vert \phi_3 \rangle \langle \phi_2 \vert + \vert \phi_1 \rangle \langle \phi_1 \vert \nonumber \end{eqnarray} introduced in Ref.~\cite{klimov1} in connection with qutrits. Although the $E_{13}$, $E_{23}$ and $E_{33}$ operators derived in the present work cannot be deduced from the ${\hat E}_{12}$, ${\hat E}_{13}$ and ${\hat E}_{23}$ operators of Ref.~\cite{klimov1} by means of similarity transformations, the two sets of operators are equivalent in the sense that their actions on the $\vert \phi_1 \rangle$, $\vert \phi_2 \rangle$ and $\vert \phi_3 \rangle$ vectors are identical up to phase factors. In addition, in the case where we do not take into account the spectator state ($\vert \phi_3 \rangle$, $\vert \phi_2 \rangle$ or $\vert \phi_1 \rangle$ for $E_{13}$, $E_{23}$ or $E_{33}$, respectively), our $SU_3$ phase operators reduce to $SU_2$ phase operators which present the same periodicity condition (i.e., their square is the identity operator) as the $SU_2$ phase operators of Ref.~\cite{Vourdas1990}. In the $\varphi = 0$ case, our $SU_2$ phase states turn out to be identical to the phase states derived by Vourdas.$^{\cite{Vourdas1990}}$ Finally, note that the $E_{3}$ (and, more generally, $E_{d}$) operator is new; it has no equivalent in Ref.~\cite{klimov1}. \subsection{Vector phase states in finite dimension} We now have the necessary tools for introducing vector phase states associated with the unitary phase operators $E_{1d}$, $E_{2d}$ and $E_{3d}$. We give below a construction similar to the one discussed in Ref.~\cite{twareque1}. \subsubsection{Vector phase states for $E_{1d}$ and $E_{2d}$} To define vector phase states, we introduce the $(k+1) \times (k+1)$-matrix \begin{eqnarray} {\bf Z} = {\rm diag}(z_0 , z_1 , \ldots, z_k), \quad z_l = q_l^m \nonumber \end{eqnarray} and the $(k+1) \times 1$-vector \begin{eqnarray} [ n , l ] = \left( \begin{array}{c} 0 \\ \vdots\\ \vert n , l \rangle\\ \vdots\\ 0\\ \end{array} \right), \nonumber \end{eqnarray} where the $\vert n , l \rangle$ entry appears on the $l$-th line (with $l = 0, 1, \ldots, k$).
Then, let us define \begin{eqnarray} [ l, m , \varphi ] = \frac{1}{\sqrt {k-l+1}} \sum_{n=0}^{k-l} e^{-i H(n,l) \varphi} {\bf Z}^{n} [n , l]. \label{vectorphasestates11} \end{eqnarray} From Eq.~(\ref{coherentstatemvarphi}), we have \begin{eqnarray} [ l, m , \varphi ] = \left( \begin{array}{c} 0 \\ \vdots\\ \vert l , m , \varphi \rangle\\ \vdots\\ 0\\ \end{array} \right), \label{vectorphasestates22} \end{eqnarray} where $\vert l , m , \varphi \rangle$ occurs on the $l$-th line. We shall refer to the states (\ref{vectorphasestates22}) as vector phase states. In this matrix presentation, it is useful to associate the matrix \begin{eqnarray} {\bf E_{1d}} = {\rm diag} \left( E_{1d}(0), E_{1d}(1), \ldots, E_{1d}(k) \right) \nonumber \end{eqnarray} with the unitary phase operator $E_{1d}$. It is easy to check that ${\bf E_{1d}}$ satisfies the matrix eigenvalue equation \begin{eqnarray} {\bf E_{1d}} [ l , m , \varphi ] = e^{i \theta_m} [ l , m , \varphi ], \quad \theta_m = m \frac{2 \pi}{k-l+1} \nonumber \end{eqnarray} (cf.~Eq.~(\ref{ancienne92})). Other properties of vector phase states $[ l , m , \varphi ]$ can be deduced from those of phase states $\vert l , m , \varphi \rangle$. For instance, we obtain \begin{itemize} \item The temporal stability condition \begin{eqnarray} e^{-i H t} [ l , m , \varphi ] = [ l , m , \varphi + t] \nonumber \end{eqnarray} for $t$ real. \item The closure relation \begin{eqnarray} \bigoplus_{l=0}^{k} \sum_{m=0}^{k-l} [ l , m , \varphi ] [ l , m , \varphi ]^{\dagger} = {\bf I_d}, \nonumber \end{eqnarray} where ${\bf I_d}$ is the unit matrix of dimension $d \times d$ with $d$ given by (\ref{dimensiond}). \end{itemize} Similar vector phase states can be obtained for $E_{2d}$ by permuting the $n$ and $l$ quantum numbers occurring in the derivation of the vector phase states for $E_{1d}$. \subsubsection{Vector phase states for $E_{3d}$} Let us define the diagonal matrix of dimension $(k+1) \times (k+1)$ \begin{eqnarray} {\bf W} = {\rm diag}(w_0 , w_1 , \ldots, w_k), \quad w_l = \omega_l^m \nonumber \end{eqnarray} and the column vector of dimension $(k+1) \times 1$ \begin{eqnarray} [[ l-n , n ]] = \left( \begin{array}{c} 0 \\ \vdots\\ \vert l-n , n \rangle\\ \vdots\\ 0\\ \end{array} \right), \nonumber \end{eqnarray} where the $\vert l-n , n \rangle$ state occurs on the $l$-th line (with $l = 0, 1, \ldots, k$). By defining \begin{eqnarray} [[ l, m , \varphi ]] = \frac{1}{\sqrt {l+1}} e^{-i H(l,0) \varphi} \sum_{n=0}^{l} {\bf W}^{n} [[l-n , n]], \nonumber \end{eqnarray} we obtain \begin{eqnarray} [[ l, m , \varphi ]] = \left( \begin{array}{c} 0 \\ \vdots\\ \Vert l , m , \varphi \rangle \rangle\\ \vdots\\ 0\\ \end{array} \right), \label{vectorphasestates22E3d} \end{eqnarray} where the $\Vert l , m , \varphi \rangle \rangle$ phase state appears on the $l$-th line. Equation (\ref{vectorphasestates22E3d}) defines vector phase states associated with the $E_{3d}$ phase operator. These states satisfy the eigenvalue equation \begin{eqnarray} {\bf E_{3d}} [[ l , m , \varphi ]] = e^{i \theta_m} [[ l , m , \varphi ]], \quad \theta_m = m \frac{2 \pi}{l+1}, \nonumber \end{eqnarray} where \begin{eqnarray} {\bf E_{3d}} = {\rm diag} \left( E_{3d}(0), E_{3d}(1), \ldots, E_{3d}(k) \right). \nonumber \end{eqnarray} The $[[ l , m , \varphi ]]$ vector phase states satisfy properties which can be deduced from those of the $[ l , m , \varphi ]$ vector phase states owing to simple correspondence rules.
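Before turning to the case $\kappa \geq 0$, we illustrate the preceding constructions numerically. The following Python/NumPy sketch (an illustration only, with hypothetical values of $k$, $\varphi$ and $t$) builds the global operator $E_d$ of Eqs.~(\ref{Ed1})-(\ref{Ed3}) in the $\Phi_j$ ordering and checks its unitarity as well as the eigenvalue equation and temporal stability of the phase states (\ref{m,phi}).
\begin{verbatim}
import numpy as np

k = 3; kappa = -1.0 / k; phi = 0.7; t = 0.3   # hypothetical sample values
basis = [(n, l) for l in range(k + 1) for n in range(k - l + 1)]
idx = {b: j for j, b in enumerate(basis)}     # Phi_j ordering
d = len(basis)                                # d = (k+1)(k+2)/2

def Hf(n, l):                                 # H(n1,n2) depends on n1+n2 only
    s = n + l
    return s * (1 + kappa * (s - 1))

Ed = np.zeros((d, d), dtype=complex)          # global phase operator E_d
for (n, l), j in idx.items():
    if n > 0:
        tgt = (n - 1, l)
    elif l > 0:
        tgt = (k - l + 1, l - 1)              # boundary action on |0,l>
    else:
        tgt = (0, k)                          # boundary action on |0,0>
    Ed[idx[tgt], j] = np.exp(1j * (Hf(n, l) - Hf(*tgt)) * phi)

assert np.allclose(Ed @ Ed.conj().T, np.eye(d))    # E_d is unitary

q = np.exp(2j * np.pi / d)
def phase_state(m, phi):                      # the states |m,phi>
    v = np.zeros(d, dtype=complex)
    for (n, l), j in idx.items():
        v[j] = q ** (m * (l * (2 * k - l + 3) // 2 + n)) \
               * np.exp(-1j * Hf(n, l) * phi)
    return v / np.sqrt(d)

U = np.diag([np.exp(-1j * Hf(n, l) * t) for (n, l) in basis])  # exp(-iHt)
for m in range(d):
    v = phase_state(m, phi)
    assert np.allclose(Ed @ v, q ** m * v)               # eigenvalue equation
    assert np.allclose(U @ v, phase_state(m, phi + t))   # temporal stability
\end{verbatim}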
\section{PHASE OPERATORS AND PHASE STATES FOR ${\cal A}_{\kappa}(2)$ WITH $\kappa \geq 0$} \subsection{Phase operators in infinite dimension} In the case $\kappa \geq 0$, we can decompose the Jacobson operators $a_i^-$ and $a^+_i$ as \begin{eqnarray} a_i^- = E_{i\infty} \sqrt{F_i(N_1 , N_2)}, \quad a^+_i = \sqrt{F_i(N_1 , N_2)} \left( E_{i\infty} \right)^{\dagger}, \quad i = 1,2, \label{decompo cas infini} \end{eqnarray} where \begin{eqnarray} && E_{1\infty} = \sum_{n_1=0}^{\infty}\sum_{n_2=0}^{\infty} e^{i [H(n_1 + 1 , n_2)- H(n_1 , n_2)] \varphi } \vert n_1 , n_2 \rangle \langle n_1+1 , n_2 \vert \label{E1infini} \\ && E_{2\infty} = \sum_{n_1=0}^{\infty}\sum_{n_2=0}^{\infty} e^{i [H(n_1 , n_2 +1)- H(n_1 , n_2)] \varphi } \vert n_1 , n_2 \rangle \langle n_1, n_2 + 1\vert. \label{E2infini} \end{eqnarray} The operators $E_{i\infty}$, $i = 1,2$, satisfy \begin{eqnarray} && E_{1\infty}\left( E_{1\infty} \right)^{\dagger} = I, \quad \left( E_{1\infty} \right)^{\dagger} E_{1\infty} = I - \sum_{n_2=0}^{\infty} \vert 0 , n_2 \rangle\langle 0 , n_2\vert \label{pasunitaire1} \\ && E_{2\infty}\left( E_{2\infty} \right)^{\dagger} = I, \quad \left( E_{2\infty} \right)^{\dagger} E_{2\infty} = I - \sum_{n_1=0}^{\infty} \vert n_1 , 0 \rangle\langle n_1 , 0\vert. \label{pasunitaire2} \end{eqnarray} Equations (\ref{pasunitaire1}) and (\ref{pasunitaire2}) show that $E_{i\infty}$, $i = 1,2$, are not unitary operators. In a similar way, operators $a_3^+$ and $a_3^-$ can be rewritten \begin{eqnarray} a_3^- = - \kappa E_{3\infty} \sqrt{(N_1 + 1) N_2}, \quad a_3^+ = - \kappa \sqrt{(N_1 + 1) N_2} \left( E_{3\infty} \right)^{\dagger}, \nonumber \end{eqnarray} where \begin{eqnarray} E_{3\infty} = \sum_{n_1=0}^{\infty} \sum_{n_2=0}^{\infty} \vert n_1 + 1 , n_2 \rangle \langle n_1 , n_2 + 1 \vert. \nonumber \end{eqnarray} The $E_{3\infty}$ operator is not unitary since \begin{eqnarray} E_{3\infty} \left( E_{3\infty} \right)^{\dagger} = I - \sum_{n_2=0}^{\infty} \vert 0 , n_2 \rangle \langle 0 , n_2 \vert, \quad \left( E_{3\infty} \right)^{\dagger} E_{3\infty} = I - \sum_{n_1=0}^{\infty} \vert n_1 , 0 \rangle \langle n_1 , 0 \vert, \nonumber \end{eqnarray} to be compared with (\ref{pasunitaire1}) and (\ref{pasunitaire2}). The $E_{3\infty}$ operator is not independent of $E_{1\infty}$ and $E_{2\infty}$. Indeed, it can be expressed as \begin{eqnarray} E_{3\infty} = \left( E_{1\infty} \right)^{\dagger} E_{2\infty}, \label{pasunitaire3} \end{eqnarray} a relation of central importance for deriving its eigenvalues (see Section 4.2). \subsection{Phase states in infinite dimension} It is easy to show that operators $E_{1\infty}$ and $E_{2\infty}$ commute. Hence, they can be simultaneously diagonalized. In this regard, let us consider the eigenvalue equations \begin{eqnarray} E_{1\infty} \vert z_1 , z_2 ) = z_1 \vert z_1 , z_2 ), \quad E_{2\infty} \vert z_1 , z_2 ) = z_2 \vert z_1 , z_2 ), \quad (z_1 , z_2) \in \mathbb{C}^2, \label{eqE1infini} \end{eqnarray} where \begin{eqnarray} \vert z_1 , z_2 ) = \sum_{n_1=0}^{\infty} \sum_{n_2=0}^{\infty} D_{n_1 , n_2} \vert n_1, n_2 \rangle.
\nonumber \end{eqnarray} By using the definitions of the nonunitary phase operators (\ref{E1infini}) and (\ref{E2infini}), it is easy to check from the eigenvalue equations (\ref{eqE1infini}) that the complex coefficients $D_{n_1 , n_2}$ satisfy the following recurrence relations \begin{eqnarray} && D_{n_1 + 1 , n_2} e^{i H(n_1+1 , n_2) \varphi} = z_1 D_{n_1 , n_2} e^{iH(n_1 , n_2)\varphi} \label{recurencekappa+1} \\ && D_{n_1 , n_2 + 1} e^{i H(n_1 , n_2+1) \varphi} = z_2 D_{n_1 , n_2} e^{iH(n_1 , n_2)\varphi}, \label{recurencekappa+2} \end{eqnarray} which lead to \begin{eqnarray} D_{n_1 , n_2}= e^{-i H(n_1 , n_2) \varphi} z_1^{n_1} z_2^{n_2}D_{0, 0}. \nonumber \end{eqnarray} It follows that the normalized common eigenstates of the operators $E_{1\infty}$ and $E_{2\infty}$ are given by \begin{eqnarray} \vert z_1 , z_2 ) = \sqrt{(1 - |z_1|^2)(1 - |z_2|^2)} \sum_{n_1=0}^{\infty}\sum_{n_2=0}^{\infty} z_1^{n_1} z_2^{n_2}e^{- i H(n_1 , n_2)\varphi} \vert n_1 , n_2 \rangle \nonumber \end{eqnarray} on the domain $\{ (z_1 , z_2) \in {\mathbb{C}^2} : |z_1| < 1, |z_2| < 1 \}$. Following the method developed in Refs.~\cite{vourdasLimit, voudasBM} for the Lie algebra $su_{1,1}$ and in Ref.~\cite{daoud-kibler} for the algebra ${\cal A}_{\kappa}(1)$, we define the states \begin{eqnarray} \vert \theta_1, \theta_2,\varphi ) = \lim_{z_1 \rightarrow e^{i\theta_1}} \lim_{z_2 \rightarrow e^{i\theta_2}} \frac{1}{\sqrt{(1 - |z_1|^2)(1 - |z_2|^2)}} \vert z_1 , z_2 ), \nonumber \end{eqnarray} where $\theta_1 , \theta_2 \in [-\pi , +\pi]$. We thus get \begin{eqnarray} \vert \theta_1, \theta_2,\varphi ) = \sum_{n_1=0}^{\infty}\sum_{n_2=0}^{\infty} e^{i n_1 \theta_1}e^{i n_2 \theta_2} e^{- i H(n_1 , n_2) \varphi} \vert n_1 , n_2 \rangle. \nonumber \end{eqnarray} These states, defined on $S^1 \times S^1$, turn out to be phase states since we have \begin{eqnarray} E_{1\infty} \vert \theta_1, \theta_2,\varphi ) = e^{i \theta_1} \vert \theta_1, \theta_2,\varphi ), \quad E_{2\infty} \vert \theta_1, \theta_2,\varphi ) = e^{i \theta_2} \vert \theta_1, \theta_2,\varphi ). \nonumber \end{eqnarray} Hence, the operators $E_{i\infty}$, $i=1,2$, are (nonunitary) phase operators. The main properties of the $\vert \theta_1 , \theta_2 , \varphi )$ states are the following. \begin{itemize} \item They are temporally stable in the sense that \begin{eqnarray} e^{-i H t} \vert \theta_1 , \theta_2 , \varphi ) = \vert \theta_1 , \theta_2 , \varphi + t ), \nonumber \end{eqnarray} with $t$ real. \item They are not normalized and not orthogonal. However, for fixed $\varphi$, they satisfy the closure relation \begin{eqnarray} \frac{1}{(2\pi)^2} \int_{-\pi}^{+\pi} d\theta_1 \int_{-\pi}^{+\pi} d\theta_2 \vert \theta_1 , \theta_2 , \varphi ) ( \theta_1 , \theta_2 , \varphi \vert = I. \nonumber \end{eqnarray} \end{itemize} In view of Eq.~(\ref{pasunitaire3}), we have \begin{eqnarray} E_{3\infty} | \theta_1 , \theta_2, \varphi ) = e^{i(\theta_2 - \theta_1)} | \theta_1 , \theta_2, \varphi ), \nonumber \end{eqnarray} so that the $| \theta_1 , \theta_2, \varphi )$ states are common eigenstates of $E_{1\infty}$, $E_{2\infty}$ and $E_{3\infty}$. To close, a comparison is in order. For $\varphi = 0$, the $\vert \theta_1 , \theta_2 , 0 )$ states have the same form as the phase states derived in Ref.~\cite{bertola-Deguise} which present the closure property but are not temporally stable.
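Although ${\cal F}_{\kappa}$ is now infinite-dimensional, the relations above can still be probed numerically on a finite cutoff $n_1, n_2 \leq n_{\max}$, provided the states adjacent to the artificial boundary are projected out. The following sketch (hypothetical parameter values, not part of the derivation) checks the one-sided unitarity relations (\ref{pasunitaire1}) and the factorization (\ref{pasunitaire3}).
\begin{verbatim}
import numpy as np

kappa, phi, nmax = 0.5, 0.4, 6           # kappa >= 0; finite numerical cutoff
basis = [(n1, n2) for n1 in range(nmax + 1) for n2 in range(nmax + 1)]
idx = {b: j for j, b in enumerate(basis)}
D = len(basis)

def Hf(n1, n2):
    s = n1 + n2
    return s * (1 + kappa * (s - 1))

E1 = np.zeros((D, D), dtype=complex)     # truncated E_{1,infinity}
E2 = np.zeros((D, D), dtype=complex)     # truncated E_{2,infinity}
for (n1, n2), j in idx.items():
    if n1 > 0:
        E1[idx[(n1 - 1, n2)], j] = np.exp(1j * (Hf(n1, n2) - Hf(n1 - 1, n2)) * phi)
    if n2 > 0:
        E2[idx[(n1, n2 - 1)], j] = np.exp(1j * (Hf(n1, n2) - Hf(n1, n2 - 1)) * phi)

I = np.eye(D)
P = np.diag([1.0 if max(n1, n2) < nmax else 0.0 for (n1, n2) in basis])
ker1 = np.diag([1.0 if n1 == 0 else 0.0 for (n1, n2) in basis])

# E1 E1^+ = I and E1^+ E1 = I - sum_n2 |0,n2><0,n2|, away from the cutoff
assert np.allclose(P @ (E1 @ E1.conj().T - I) @ P, 0)
assert np.allclose(P @ (E1.conj().T @ E1 - (I - ker1)) @ P, 0)

E3 = E1.conj().T @ E2                    # E_{3,infinity} = E_1^dagger E_2
for (n1, n2), j in idx.items():          # matches sum |n1+1,n2><n1,n2+1|
    if n2 > 0 and n1 < nmax:
        assert np.isclose(E3[idx[(n1 + 1, n2 - 1)], j], 1.0)
\end{verbatim}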
\section{TRUNCATED GENERALIZED OSCILLATOR ALGEBRA} For $\kappa \geq 0$ the ${\cal F}_{\kappa}$ Hilbert space associated with ${\cal A}_{\kappa}(2)$ is infinite-dimensional and it is thus impossible to define a unitary phase operator. On the other hand, for $\kappa < 0$ the ${\cal F}_{\kappa}$ space is finite-dimensional and there is no problem in defining unitary phase operators. Therefore, for $\kappa \geq 0$ it is appropriate to truncate the ${\cal F}_{\kappa}$ space in order to get a subspace ${\cal F}_{\kappa , \sigma}$ of dimension $(\sigma + 1)(\sigma + 2)/2$ with $\sigma$ playing the role of $k$. Then, it will be possible to define unitary phase operators and vector phase states for the ${\cal F}_{\kappa , \sigma}$ truncated space with $\kappa \geq 0$. To achieve this goal, we shall adapt the truncation procedure discussed in Ref.~\cite{pegg-barnett} for the $h_4$ Weyl-Heisenberg algebra and in Refs.~\cite{AKW,daoud-kibler} for the ${\cal A}_{\kappa}(1)$ algebra with $\kappa \geq 0$. The restriction of the infinite-dimensional space ${\cal F}_{\kappa}$ ($\kappa \geq 0$) to the finite-dimensional space ${\cal F}_{\kappa,\sigma}$ with basis \begin{eqnarray} \{ |n_1 , n_2 \rangle : n_1, n_2 \in \mathbb{N}, \ n_1+n_2 \leq \sigma \} \nonumber \end{eqnarray} can be done by means of the projection operator \begin{eqnarray} \Pi_{\sigma} = \sum_{n_1 = 0}^{\sigma} \sum_{n_2 = 0}^{\sigma - n_1} \vert n_1 , n_2 \rangle \langle n_1 , n_2 \vert = \sum_{n_2 = 0}^{\sigma} \sum_{n_1 = 0}^{\sigma - n_2} \vert n_1 , n_2 \rangle \langle n_1 , n_2 \vert. \nonumber \end{eqnarray} Let us then define the four new ladder operators \begin{eqnarray} b_i^{\pm} = \Pi_{\sigma} a_i^{\pm} \Pi_{\sigma}, \quad i = 1, 2. \nonumber \end{eqnarray} They can be rewritten as \begin{eqnarray} && b_1^{+} = (b_1^{-})^{\dagger} = \sum_{n_2 = 0}^{\sigma-1} \sum_{n_1 = 0}^{\sigma - n_2-1} \sqrt{F_1(n_1+1,n_2)} e^{-i [H(n_1+1,n_2) - H(n_1, n_2)] \varphi} \vert n_1+1 , n_2 \rangle \langle n_1 , n_2 \vert \nonumber \\ && b_2^{+} = (b_2^{-})^{\dagger} = \sum_{n_1 = 0}^{\sigma-1} \sum_{n_2 = 0}^{\sigma - n_1-1} \sqrt{F_2(n_1,n_2+1)} e^{-i [H(n_1,n_2+1)- H(n_1, n_2)] \varphi} \vert n_1 , n_2+1 \rangle \langle n_1 , n_2 \vert. \nonumber \end{eqnarray} A straightforward calculation shows that the action of $b_1^{\pm}$ on ${\cal F}_{\kappa}$ is given by \begin{eqnarray} && b_1^+ \vert n_1, n_2 \rangle = \sqrt{F_1(n_1+1,n_2)} e^{-i[H(n_1+1,n_2)- H(n_1, n_2)] \varphi} \vert n_1+1, n_2 \rangle \nonumber \\ && \qquad \qquad \qquad \qquad \qquad \qquad \qquad {\rm for} \quad n_1+n_2 = 0, 1, \ldots, \sigma-1 \nonumber \\ && b_1^+ \vert \sigma - n_2, n_2 \rangle = 0 \quad {\rm for} \quad n_2 = 0, 1, \ldots, \sigma \nonumber \\ && b_1^+ \vert n_1, n_2 \rangle = 0 \quad {\rm for} \quad n_1+n_2 = \sigma, \sigma+1, \sigma+2, \ldots \nonumber \end{eqnarray} and \begin{eqnarray} && b_1^- \vert n_1, n_2 \rangle = \sqrt{F_1(n_1,n_2)} e^{+i[H(n_1,n_2)- H(n_1 - 1, n_2)] \varphi} \vert n_1 - 1, n_2 \rangle \nonumber \\ && \qquad \qquad \qquad \qquad \qquad {\rm for} \quad n_1 \not= 0 \quad {\rm and} \quad n_2 = 0, 1, \ldots, \sigma - 1 \nonumber \\ && b_1^- \vert 0 , n_2 \rangle = 0 \quad {\rm for} \quad n_2 = 0, 1, \ldots, \sigma \nonumber \\ && b_1^- \vert n_1, n_2 \rangle = 0 \quad {\rm for} \quad n_1+n_2 = \sigma+1, \sigma+2, \sigma+3, \ldots.
\nonumber \end{eqnarray} Similarly, we have \begin{eqnarray} && b_2^+ \vert n_1, n_2 \rangle = \sqrt{F_2(n_1,n_2+1)} e^{-i[H(n_1,n_2+1)- H(n_1, n_2)] \varphi} \vert n_1, n_2+1 \rangle \nonumber \\ && \qquad \qquad \qquad \qquad \qquad \qquad \qquad {\rm for} \quad n_1+n_2 = 0, 1, \ldots, \sigma-1 \nonumber \\ && b_2^+ \vert n_1 , \sigma - n_1 \rangle = 0 \quad {\rm for} \quad n_1 = 0, 1, \ldots, \sigma \nonumber \\ && b_2^+ \vert n_1, n_2 \rangle = 0 \quad {\rm for} \quad n_1+n_2 = \sigma, \sigma+1, \sigma+2, \ldots \nonumber \end{eqnarray} and \begin{eqnarray} && b_2^- \vert n_1, n_2 \rangle = \sqrt{F_2(n_1,n_2)} e^{+i[H(n_1,n_2)- H(n_1, n_2 - 1)] \varphi} \vert n_1 , n_2 - 1 \rangle \nonumber \\ && \qquad \qquad \qquad \qquad \qquad {\rm for} \quad n_2 \not= 0 \quad {\rm and} \quad n_1 = 0, 1, \ldots, \sigma - 1 \nonumber \\ && b_2^- \vert n_1 , 0 \rangle = 0 \quad {\rm for} \quad n_1 = 0, 1, \ldots, \sigma \nonumber \\ && b_2^- \vert n_1, n_2 \rangle = 0 \quad {\rm for} \quad n_1+n_2 = \sigma+1, \sigma+2, \sigma+3, \ldots. \nonumber \end{eqnarray} Therefore, the action of operators $b_i^{\pm}$ ($i=1,2$) on ${\cal F}_{\kappa,\sigma}$ with $\kappa \geq 0$ is similar to that of $a_i^{\pm}$ ($i=1,2$) on ${\cal F}_{\kappa}$ with $\kappa < 0$. One may ask which algebra is generated by the operators $b_i^{\pm}$ and $N_i$ ($i=1,2$). Indeed, these operators satisfy the following algebraic relations when acting on the ${\cal F}_{\kappa,\sigma}$ space \begin{eqnarray} & & [b_1^- , b_1^+] = I + \kappa (2 N_1 + N_2) - \sum_{l=0}^{\sigma} F_1(\sigma - l +1, l) \vert \sigma-l , l \rangle \langle \sigma-l , l \vert \nonumber \\ & & [b_2^- , b_2^+] = I + \kappa (2 N_2 + N_1) - \sum_{l=0}^{\sigma} F_2(l, \sigma - l +1) \vert l , \sigma-l \rangle \langle l , \sigma-l \vert \nonumber \\ & & [N_i , b_j^{\pm}] = {\pm} \delta_{i,j} b_i^{\pm}, \quad i,j = 1,2 \nonumber \\ & & [b_i^{\pm} , b_j^{\pm}] = 0, \quad [b_i^{\pm} , [b_i^{\pm} , b_j^{\mp}]] = 0, \quad i \neq j. \nonumber \end{eqnarray} Operators $b_i^{\pm}$ and $N_i$ ($i=1,2$) acting on ${\cal F}_{\kappa,\sigma}$ generate an algebra, denoted ${\cal A}_{\kappa, \sigma}(2)$. The ${\cal A}_{\kappa, \sigma}(2)$ algebra generalizes ${\cal A}_{\kappa, s}(1)$ which results from the truncation of the ${\cal A}_{\kappa}(1)$ algebra.$^{\cite{daoud-kibler}}$ By using the trick for passing from ${\cal A}_{\kappa}(2)$ to ${\cal A}_{\kappa}(1)$ (see Section 2.1), we get ${\cal A}_{\kappa, s-1}(2) \to {\cal A}_{\kappa, s}(1)$. The ${\cal A}_{\kappa, s}(1)$ truncated algebra gives in turn the Pegg-Barnett truncated algebra$^{\cite{pegg-barnett}}$ when $\kappa \to 0$. In conclusion, the action of $b_i^{\pm}$ ($i=1,2$) on the complement of ${\cal F}_{\kappa, \sigma}$ with respect to ${\cal F}_{\kappa}$ leads to the null vector while the action of these operators on the ${\cal F}_{\kappa, \sigma}$ space with $\kappa \geq 0$ is the same as the action of $a_i^{\pm}$ ($i=1,2$) on the ${\cal F}_{\kappa}$ space with $\kappa < 0$ modulo some evident changes of notation. It is thus possible to apply the procedure developed for the ${\cal F}_{\kappa}$ space with $\kappa < 0$ in order to obtain unitary phase operators on ${\cal F}_{\kappa, \sigma}$ with $\kappa \geq 0$ and the corresponding vector phase states. The derivation of the vector phase states for the ${\cal A}_{\kappa, \sigma}(2)$ truncated algebra can be done simply by replacing $k$ by $\sigma$.
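As an illustrative numerical check (a sketch only, with $\varphi = 0$ and hypothetical parameter values, not part of the original derivation), the modified commutator $[b_1^- , b_1^+]$ above, including its boundary term on the states $\vert \sigma-l , l \rangle$, can be verified with explicit matrices, $\sigma$ playing the role of $k$.
\begin{verbatim}
import numpy as np

kappa, sigma = 0.5, 4                    # kappa >= 0; truncation index sigma
basis = [(n1, n2) for n1 in range(sigma + 1) for n2 in range(sigma + 1 - n1)]
idx = {b: j for j, b in enumerate(basis)}
d = len(basis)                           # d = (sigma+1)(sigma+2)/2

def F1(n1, n2):
    return n1 * (1 + kappa * (n1 + n2 - 1))

b1p = np.zeros((d, d))                   # b_1^+ with phi = 0
for (n1, n2), j in idx.items():
    if n1 + n2 < sigma:
        b1p[idx[(n1 + 1, n2)], j] = np.sqrt(F1(n1 + 1, n2))
b1m = b1p.T                              # b_1^- = (b_1^+)^dagger (real matrices)
N1 = np.diag([float(n1) for (n1, n2) in basis])
N2 = np.diag([float(n2) for (n1, n2) in basis])
I = np.eye(d)

edge = np.zeros((d, d))                  # boundary term on the states |sigma-l,l>
for l in range(sigma + 1):
    j = idx[(sigma - l, l)]
    edge[j, j] = F1(sigma - l + 1, l)

assert np.allclose(b1m @ b1p - b1p @ b1m,
                   I + kappa * (2 * N1 + N2) - edge)
\end{verbatim}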
In this respect, the $\sigma$ truncation index can be compared to the $k$ quenching index (or Chen index) used for characterizing the finite-dimensional representation $(0,k)$ or $(k,0)$ of $SU_3$.$^{\cite{Chenindex}}$ \section{APPLICATION TO MUTUALLY UNBIASED BASES} We now examine the possibility of producing specific bases, known as mutually unbiased bases (MUBs) in quantum information, for finite-dimensional Hilbert spaces from the phase states of $E_{1d}(l)$, $E_{2d}(l)$ and $E_{3d}(l)$. Let us recall that two distinct orthonormal bases \begin{eqnarray} \{ | a \alpha \rangle : \alpha = 0, 1, \ldots, N-1 \} \nonumber \end{eqnarray} and \begin{eqnarray} \{ | b \beta \rangle : \beta = 0, 1, \ldots, N-1 \} \nonumber \end{eqnarray} of the $N$-dimensional Hilbert space $\mathbb{C}^{N}$ are said to be unbiased if and only if \begin{eqnarray} \forall \alpha = 0, 1, \ldots, N-1, \ \ \forall \beta = 0, 1, \ldots, N-1 \ : \ \vert \langle a \alpha | b \beta \rangle \vert = \frac{1}{\sqrt{N}} \nonumber \end{eqnarray} (cf.~Refs.~\cite{ivanovic,Klimov05,Klimov06,woottersFields}). We begin with the $\vert l , m , \varphi \rangle$ phase states associated with the $E_{1d}(l)$ phase operator (see (\ref{coherentstatemvarphi}) and (\ref{ancienne92})). In Eq.~(\ref{coherentstatemvarphi}), $l$ can take the values $0, 1, \ldots, k$. Let us put $l=0$ and switch to the notation \begin{eqnarray} k \equiv N-1, \quad m \equiv \alpha, \quad | n,0 \rangle \equiv | N-1-n \rangle \nonumber \end{eqnarray} (with $\alpha, n = 0, 1, \ldots, N-1$) for easy comparison with some previous works. Then, Eq.~(\ref{coherentstatemvarphi}) becomes \begin{eqnarray} \vert 0 , \alpha , \varphi \rangle = \frac{1}{\sqrt {N}} \sum_{n=0}^{N-1} \exp \left[ -\frac{i}{N-1} n(N-n) \varphi + \frac{2 \pi i}{N} n \alpha \right] \vert N-1-n \rangle. \label{zeroalphaphi} \end{eqnarray} For $\varphi=0$, Eq.~(\ref{zeroalphaphi}) describes a (quantum) discrete Fourier transform$^{\cite{Vourdas04}}$ that allows one to pass from the set $\{ |N - 1 - n \rangle : n = 0, 1, \ldots, N-1 \}$ of cardinality $N$ to the set $\{ \vert 0 , \alpha , 0 \rangle : \alpha = 0, 1, \ldots, N-1 \}$ of cardinality $N$ too. In the special case where $\varphi$ is quantized as \begin{eqnarray} \varphi = - \pi \frac{N-1}{N} a, \quad a = 0, 1, \ldots, N-1, \label{phidiscrete} \end{eqnarray} equation (\ref{zeroalphaphi}) leads to \begin{eqnarray} \vert 0 , \alpha , \varphi \rangle \equiv | a \alpha \rangle = \frac{1}{\sqrt {N}} \sum_{n=0}^{N-1} q_0^{n(N-n) a/2 + n \alpha} \vert N-1-n \rangle, \label{zeroalphaphiquantized} \end{eqnarray} where \begin{eqnarray} q_0 = \exp \left( \frac{2 \pi i} {N} \right). \nonumber \end{eqnarray} Equation (\ref{zeroalphaphiquantized}) with $a \not= 0$ corresponds to a (quantum) quadratic discrete Fourier trans\-form.$^{\cite{kibler3,kibler1,kibler2}}$ In this regard, note that the $| a \alpha \rangle$ state in (\ref{zeroalphaphiquantized}) can be identified with the $| a \alpha ; r \rangle$ state with $r=0$ discussed recently in the framework of the quadratic discrete Fourier transform.$^{\cite{InTech}}$ Following Ref.~\cite{InTech}, we consider the set \begin{eqnarray} B_N = \{ | N-1-n \rangle : n = 0, 1, 2, \ldots, N-1 \} = \{ | n \rangle : n = 0, 1, 2, \ldots, N-1 \} \nonumber \end{eqnarray} as an orthonormal basis for the $N$-dimensional Hilbert space. This basis is called the computational basis in quantum information.
Then, the sets \begin{eqnarray} B_{0a} = \{ | a \alpha \rangle : \alpha = 0, 1, 2, \ldots, N-1 \}, \quad a = 0, 1, 2, \ldots, N-1 \nonumber \end{eqnarray} constitute $N$ new orthonormal bases of the space. The $B_{0a}$ basis is a special case, corresponding to $r=0$, of the $B_{ra}$ bases derived in Ref.~\cite{InTech} from a polar decomposition of the $su_2$ Lie algebra. The overlap between two bases $B_{0a}$ and $B_{0b}$ is given by \begin{eqnarray} \langle a \alpha | b \beta \rangle = \frac{1}{N} \sum_{n=0}^{N-1} q_0^{n(N-n) (b-a)/2 + n (\beta - \alpha)}, \nonumber \end{eqnarray} a relation which can be expressed in terms of the generalized Gauss sum$^{\cite{les2Berndt}}$ \begin{eqnarray} S(u, v, w) = \sum_{k=0}^{|w|-1} e^{i \pi (uk^2 + vk) / w}. \nonumber \end{eqnarray} In fact, we obtain \begin{eqnarray} \langle a \alpha | b \beta \rangle = \frac{1}{N} S(u, v, w), \label{overlap en S} \end{eqnarray} with \begin{eqnarray} u = a-b, \quad v = -(a-b)N - 2 (\alpha-\beta), \quad w = N. \nonumber \end{eqnarray} In the case where $N$ is a prime integer, the calculation of $S(u, v, w)$ in (\ref{overlap en S}) yields \begin{eqnarray} \vert \langle a \alpha | b \beta \rangle \vert = \frac{1}{\sqrt{N}}, \quad a \not= b, \quad \alpha, \beta = 0, 1, \ldots, N-1, \quad N \ {\rm prime}. \label{MUB1} \end{eqnarray} On the other hand, it is evident that \begin{eqnarray} \vert \langle n | a \alpha \rangle \vert = \frac{1}{\sqrt{N}}, \quad n, \alpha = 0, 1, \ldots, N-1 \label{MUB2} \end{eqnarray} holds for any strictly positive value of $N$. As a result, Eqs.~(\ref{MUB1}) and (\ref{MUB2}) show that the bases $B_N$ and $B_{0a}$ with $a = 0, 1, \ldots, N-1$ provide a complete set of $N+1$ MUBs when $N$ is a prime integer. A similar result can be derived by quantizing, according to (\ref{phidiscrete}), the $\varphi$ parameter occurring in the eigenstates of $E_{2d}(0)$. Since the form of the $E_{3d}(l)$ phase operator differs from those of $E_{1d}(l)$ and $E_{2d}(l)$, we proceed in a different way to obtain MUBs from the $\Vert l , m , \varphi \rangle \rangle$ eigenstates of $E_{3d}(l)$ (see (\ref{coherentstatemvarphiE3d}) and (\ref{ancienne92E3d})). We put $\varphi=0$ in (\ref{coherentstatemvarphiE3d}) and apply the $e^{- i F_3(N_1,N_2) \varphi}$ operator to the resultant state. This gives \begin{eqnarray} e^{- i F_3(N_1,N_2) \varphi} \Vert l , m , 0 \rangle \rangle = \frac{1}{\sqrt{l+1}} \sum_{n=0}^{l} \exp \left[ -i \frac{1}{k^2} n(l+1-n) \varphi \right] \omega_l^{mn} \vert l-n , n \rangle. \nonumber \end{eqnarray} For the sake of comparison, we introduce \begin{eqnarray} l \equiv N-1, \quad m \equiv \alpha, \quad \omega_{N-1} \equiv \exp \left( \frac{2 \pi i}{N} \right), \quad | l-n,n \rangle \equiv | N-1-n \rangle \nonumber \end{eqnarray} and we quantize $\varphi$ via \begin{eqnarray} \varphi = - \pi \frac{k^2}{N} a, \quad a = 0, 1, \ldots, N-1. \nonumber \end{eqnarray} Hence, the vector \begin{eqnarray} e^{- i F_3(N_1,N_2) \varphi} \Vert l , m , 0 \rangle \rangle \equiv | a \alpha \rangle \nonumber \end{eqnarray} reads \begin{eqnarray} \vert a \alpha \rangle = \frac{1}{\sqrt {N}} \sum_{n=0}^{N-1} \omega_{N-1}^{n(N-n) a/2 + n \alpha} \vert N-1-n \rangle, \label{aalphaE3d(l)} \end{eqnarray} which bears the same form as (\ref{zeroalphaphiquantized}). Consequently, for $N$ a prime integer, Eq.~(\ref{aalphaE3d(l)}) generates $N$ MUBs $B_{0a}$ with $a = 0, 1, \ldots, N-1$ which, together with the computational basis $B_N$, form a complete set of $N+1$ MUBs.
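The unbiasedness relations (\ref{MUB1}) and (\ref{MUB2}) are easy to verify numerically from Eq.~(\ref{zeroalphaphiquantized}); the short Python sketch below (ours, for illustration only) does so for the prime $N = 5$.
\begin{verbatim}
import numpy as np

N = 5                                    # an odd prime, for illustration
q0 = np.exp(2j * np.pi / N)

def mub_state(a, alpha):
    # |a alpha> of Eq. (zeroalphaphiquantized): component n goes to |N-1-n>
    v = np.zeros(N, dtype=complex)
    for n in range(N):
        v[N - 1 - n] = q0 ** (n * (N - n) * a / 2 + n * alpha)
    return v / np.sqrt(N)

bases = [np.column_stack([mub_state(a, alpha) for alpha in range(N)])
         for a in range(N)]

for a in range(N):
    # each B_0a is orthonormal ...
    assert np.allclose(bases[a].conj().T @ bases[a], np.eye(N))
    # ... unbiased with respect to the computational basis B_N (Eq. MUB2) ...
    assert np.allclose(np.abs(bases[a]), 1 / np.sqrt(N))
    # ... and unbiased with respect to every other B_0b (Eq. MUB1)
    for b in range(a + 1, N):
        assert np.allclose(np.abs(bases[a].conj().T @ bases[b]),
                           1 / np.sqrt(N))
\end{verbatim}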
\section{CONCLUDING REMARKS} The main results of this work are the following. The $su_3$, $su_{2,1}$ and $h_4 \otimes h_4$ algebras can be described in a unified way via the introduction of the ${\cal A}_{\kappa}(2)$ algebra. A quantum system with a quadratic spectrum (for $\kappa \not= 0$) is associated with ${\cal A}_{\kappa}(2)$; for $\kappa = 0$, this system coincides with the two-dimensional isotropic harmonic oscillator. In the case $\kappa < 0$, the unitary phase operators ($E_{1d}$, $E_{2d}$ and $E_{3d}$) defined in this paper generalize those constructed in Ref.~\cite{klimov1} for an $su_3$ three-level system (corresponding to $d=3$); they give rise to new phase states, namely, vector phase states, which are eigenstates obtained along lines similar to those developed in Refs.~\cite{twareque1,twareque3} for obtaining a vectorial generalization of the coherent states introduced in Ref.~\cite{gazeau}. Still for $\kappa < 0$, a new type of unitary phase operator ($E_d$) can be defined; its specificity is that it spans all vectors of the $d$-dimensional representation space of ${\cal A}_{\kappa}(2)$ from any vector of the space. In the case $\kappa \geq 0$, it is possible to define nonunitary phase operators. They can be turned into unitary phase operators by truncating (to some finite but arbitrarily large order) the representation space of ${\cal A}_{\kappa}(2)$. This leads to a truncated generalized oscillator algebra (${\cal A}_{\kappa, \sigma}(2)$) that can be reduced to the Pegg-Barnett truncated oscillator algebra$^{\cite{pegg-barnett}}$ through an appropriate limiting process where $\kappa \to 0$. Among the various properties of the phase states and vector phase states derived for $\kappa < 0$ and $\kappa \geq 0$, the property of temporal stability is essential. It has no equivalent in Ref.~\cite{Vourdas1990}. In the last analysis, this property results from the introduction of a phase factor ($\varphi$) in the action of the annihilation and creation operators of ${\cal A}_{\kappa}(2)$. As an unexpected result, the quantization of this phase factor allows one to derive mutually unbiased bases from temporally stable phase states for $\kappa < 0$. This is further evidence that ``phases do matter after all''$^{\cite{Klimov06}}$ and are important in quantum mechanics. \section*{ACKNOWLEDGMENTS} MD would like to thank the {\em Service de physique th\'eorique de l'Institut de Physique Nucl\'eaire de Lyon}, where this work was done, for its hospitality and kindness. \newpage
\section{Introduction}\label{intro.sec} There is significant evidence that some form of dark matter dominates the gravitating mass in the universe and its abundance is known to great precision \citep{komatsu11}. The most popular candidate for dark matter is the class of weakly interacting massive particles (WIMPs), of which supersymmetric neutralinos are examples \citep{steigman1985,griest1988,jungman1996}. WIMPs are stable, with negligible self-interactions, and are non-relativistic at decoupling (``cold''). It is important to recognize that of these characteristics, it is primarily their coldness that is well tested via its association with significant small-scale power. Indeed, WIMPs are the canonical Cold Dark Matter (CDM) candidate. Cosmological models based on CDM reproduce the spatial clustering of galaxies on large scales quite well \citep{reidetal10} and even the clustering of galaxies on $ \sim 1$ Mpc scales appears to match that expected for CDM {\em subhalos} \citep{kravtsov2004,conroy2006,trujilloGomezetal11,reddick2012}. Beyond the fact that the universe appears to behave as expected for CDM on large scales, we have few constraints on the microphysical parameters of the dark matter, especially those that would manifest themselves at the high densities associated with cores of galaxy halos. It is worth asking what (if anything) about vanilla CDM can change without violating observational bounds. In this paper we use cosmological simulations to explore the observational consequences of a CDM particle that is strongly self-interacting, focusing specifically on the limiting case of velocity-independent, elastic scattering. Dark matter particles with appreciable self-interactions have been discussed in the literature for more than two decades \citep{carlson92,machacek93,delaix95,spergelandsteinhardt00,firmani00}, and are now recognized as generic consequences of hidden-sector extensions to the Standard Model \citep{pospelov08, arkanihamed09, ackerman09, feng09, feng10a, loeb11}. Importantly, even if dark sector particles have no couplings to Standard Model particles, they might experience strong interactions with themselves, mediated by dark gauge bosons (see \citealt{feng10} and \citealt{peter12} for reviews). {\em The implication is that astrophysical constraints associated with the small-scale clustering of dark matter may be the only way to test these scenarios}. Phenomenologically, self-interacting dark matter (SIDM) is attractive because it offers a means to lower the central densities of galaxies without destroying the successes of CDM on large scales. Cosmological simulations that contain only CDM indicate that dark-matter halos should be cuspy, with (high) concentrations that correlate with the collapse time of the halo \citep{nfw97, bullock2001, wechsler2002}. This is inconsistent with observations of galaxy rotation curves, which show that galaxies are less concentrated and less cuspy than predicted in CDM simulations \citep[e.g.][]{floresandprimack94,simonetal05, kuzioetal08, bloketal10, dutton2011, kuzioetal11,ohetal11a, walkerandpenarrubia11, saluccietal12, castignanietal12}. Even for clusters of galaxies, the density profiles of the host dark-matter halos appear in a number of cases to be shallower than predicted by CDM-only structure simulations, with the total (dark matter + baryons) density profile in a closer match to the CDM prediction for the dark matter alone \citep[e.g.][]{sandetal04, sand2008, newmanetal09, newmanetal11,coe2012,umetsu2012}.
One possible answer is feedback. In principle, the expulsion of gas from galaxies can result in lower dark matter densities compared to dissipationless simulations, and thus bring CDM models in line with observations \citep{governato2010,ohetal11b,pontzen2011,brooketal12,governato2012}. However, a new level of concern exists for dwarf spheroidal galaxies \citep{mbketal11a, ferreroetal11,mbketal11b}. Systems with $M_* \sim 10^6 \,\mathrm{M}_{\odot}$ appear to be missing $\sim 5 \times 10^7 \,\mathrm{M}_{\odot}$ of dark matter compared to standard CDM expectations \citep{mbketal11b}. It is difficult to understand how feedback from such a tiny amount of star formation could have possibly blown out enough gas to reduce the densities of dwarf spheroidal galaxies to the level required to match observations (\citealt{mbketal11b,teyssieretal2012,zolotov2012,penarrubia2012}; Garrison-Kimmel et al., in preparation). \cite{spergelandsteinhardt00} were the first to discuss SIDM in the context of the central density problem (see also \citealt{firmani00}). The centers of SIDM halos are expected to have constant-density isothermal cores that arise as kinetic energy is transmitted from the hot outer halo inward \citep{balberg2002,colinetal02,ahn2005,koda2011}. This can happen if the cross section over mass of the dark matter particle, $\sigma/m$, is large enough for there to be a relatively high probability of scattering over a time $t_{\rm age}$ comparable to the age of the halo: $\Gamma \, t_{\rm age} \sim 1$, where $\Gamma$ is the scattering rate per particle. The rate will vary with local dark matter density $\rho(r)$ as a function of radius $r$ in a dark halo as \begin{equation}\label{eq:gamma} \Gamma(r) \simeq \rho(r) (\sigma/m) v_{\mathrm{rms}}(r) \, , \end{equation} up to order unity factors, where $v_{\mathrm{rms}}$ is the rms speed of dark-matter particles. Based on rough analytic arguments, \citet{spergelandsteinhardt00} suggested $\sigma/m \sim 0.1-100 \ \, {{\rm cm}^2/{\rm g}}$ would produce observable consequences in the cores of halos. Numerical simulations have confirmed the expected phenomenology of core formation \citep{bukert00} though \citet{kochanek&white00} emphasized the possibility that SIDM halos could eventually become {\em more} dense than their CDM counterparts as a result of eventual heat flux from the inside out (much like core-collapse globular clusters). However, this process is suppressed when merging from hierarchical formation is included \citep[for a discussion see][]{ahn2005}. We do not see clear signatures of core collapse in the halos we analyzed for $\sigma/m=1 \ \, {{\rm cm}^2/{\rm g}}$. The first cosmological simulations aimed at understanding dwarf densities were performed by \citet{daveetal01} who used a small volume ($4 h^{-1} \, {\mathrm{Mpc}}$ on a side) in order to focus computational power on dwarf halos. They concluded that $\sigma/m = 0.1-10 \, {{\rm cm}^2/{\rm g}}$ came close to reproducing core densities of small galaxies, favoring the upper end of that range but unable to rule out the lower end due to resolution. Almost concurrently, \citet{yoshida00} ran cosmological simulations focusing on the cluster-mass regime. Based on the estimated core size of cluster CL 0024+1654, they concluded that cross sections no larger than $\sim 0.1 \ \, {{\rm cm}^2/{\rm g}}$ were allowed, raising doubts that constant-cross-section SIDM models could be consistent with observations of both dwarf galaxies and clusters.
These concerns were echoed by \citet{miralda2002} who suggested that SIDM halos would be significantly more spherical than observed for galaxy clusters. Similarly, \citet{gnedinandostriker01} argued that SIDM would lead to excessive subhalo evaporation in galaxy clusters. More recently, the merging cluster system known as the Bullet Cluster has been used to derive the limits (68\% C.L.) $\sigma/m < 0.7 \, {{\rm cm}^2/{\rm g}}$ \citep{randalletal08} based on evaporation of dark matter from the subcluster and $\sigma/m <1.25 \, {{\rm cm}^2/{\rm g}}$ \citep{randalletal08} based on the observed lack of offset between the bullet subcluster mass peak and the galaxy light centroid. In order to relax this apparent tension between what was required to match dwarf densities and the observed properties of galaxy clusters, velocity-dependent cross sections that diminish the effects of self-interaction in cluster environments have been considered \citep{firmani00,colinetal02,feng09,loeb11,vogelsberger12}. There are a few new developments that motivate us to revisit constant SIDM cross sections on the order of $\sigma/m \sim 1 \, {{\rm cm}^2/{\rm g}}$. For example, the cluster (CL 0024+1654) used by \citet{yoshida00} to place one of the tightest limits, at $\sigma/m = 0.1 \, {{\rm cm}^2/{\rm g}}$, is now recognized as an ongoing merger along the line of sight \citep{czoske2001,czoske2002,zhang2005,jee2007,jee2010,umetsu2010}. This calls into question its usefulness as a comparison case for non-merging cluster simulations. In a companion paper (Peter, Rocha, Bullock and Kaplinghat, 2012) we use the same simulations described here to show that published constraints on SIDM based on halo shape comparisons are significantly weaker than previously believed. Further, the results presented below clearly demonstrate that the tendency for subhalos to evaporate in SIDM models \citep{gnedinandostriker01} is not significant for $\sigma/m \sim 1 \, {{\rm cm}^2/{\rm g}}$. Finally (and related to the previous point), the best numerical analysis of the Bullet Cluster \citep{randalletal08} used initial cluster density profiles that were cosmologically unmotivated, with central densities about a factor of two too high for the SIDM cross sections considered (producing a scattering rate that is inconsistently high). Based on this observation, the Bullet Cluster constraint based on evaporation of dark matter from the subcluster should be relaxed since the amount of subcluster mass that becomes unbound is directly proportional to the density of dark matter encountered in its orbit. Moreover, their model galaxies were placed in the cluster halo potentials without subhalos surrounding them, an assumption (based on analytic estimates for SIDM subhalo evaporation) that is not supported by our simulations. This could affect the constraints based on the (lack of) offset between dynamical mass and light. Thus we believe that the Bullet Cluster constraints as discussed above are likely only relevant for models with $\sigma/m > 1 \, {{\rm cm}^2/{\rm g}}$. However, the constraints could be made significantly stronger by comparing SIDM predictions to the densities inferred from the convergence maps since the central halo densities for $\sigma/m \simeq 1 \, {{\rm cm}^2/{\rm g}}$ are significantly lower than the CDM predictions, as we show later. Given these motivations, we perform a set of cosmological simulations with both CDM and SIDM.
For SIDM we ran $\sigma/m = 1$ and $0.1 \, {{\rm cm}^2/{\rm g}}$ models (hereafter SIDM$_1$ and SIDM$_{0.1}$), {\em i.e.}, models that we have argued pass the Bullet Cluster tests. Our simulations provide us with a sample of halos that span a mass range much larger than either \citet{daveetal01} or \citet{yoshida00} both with and without self-interactions. One of the key findings from our simulations is that the core sizes are expected to scale approximately as a fixed fraction of the NFW scale radius the halo would have in the absence of scatterings. We can see where this scaling arises from a quick look at Equation~\ref{eq:gamma}. This equation allows us to argue that the radius ($r_1$) below which we expect dark matter particles (on average) to have scattered once or more is set by: \begin{equation}\label{eq:onescatter} \rho_{\mathrm{s}} f(r_1/r_{\mathrm{s}}) v_\mathrm{rms} \propto \frac{V_{\mathrm{max}}^2}{r_{\mathrm{max}}^2} f(r_1/r_{\mathrm{s}}) V_{\mathrm{max}} = \rm constant \enspace, \end{equation} where $f(x)$ is the functional form of the NFW (or a related) density profile. In writing the above equation we have assumed that the density profile for SIDM is not significantly different from CDM at $r_1$, something that we verify through our simulations. Now, since CDM enforces a $V_{\mathrm{max}}-r_{\mathrm{max}}$ relation such that $V_{\mathrm{max}}\propto r_{\mathrm{max}}^{1.4-1.5}$, we see that the solution for $r_1/r_{\mathrm{s}}$ is only mildly dependent on the halo properties. We develop an analytic model based on this insight later, but this is the underlying reason why we find core sizes to be a fixed fraction of the NFW scale radius of the same halo in the absence of scatterings. The major conclusion we reach based on the simulations and the analytic model presented here is that a self-interacting dark matter model with a cross-section over dark matter particle mass $\sim 0.1 \, {{\rm cm}^2/{\rm g}}$ would be capable of reproducing the core sizes and central densities observed in dark matter halos {\em at all scales}, from clusters to dwarf spheroidals, without the need for velocity-dependence in the cross-section. In the next section, we discuss our new algorithm to compute the self-interaction probability for N-body particles, derived self-consistently from the Boltzmann equation. We discuss this new algorithm in detail in Appendix \ref{appendixA}. In \S \ref{implementation.sec}, we show how this algorithm is implemented in the publicly available code GADGET-2 \citep{springel05}. We run tests that show that our algorithm gets the correct interaction rate and post-scattering kinematics. The results of these tests are in \S \ref{test.sec}. The cosmological simulations with this new algorithm are described in detail in \S \ref{sims.sec}. In \S \ref{prelim.sec} we provide some preliminary illustrations of our simulation snapshots and in \S \ref{lss:sec} we demonstrate that the large-scale statistical properties of SIDM are identical to CDM. In \S \ref{halos.sec} we present the properties of individual SIDM$_1$ and SIDM$_{0.1}$ halos and compare them to their CDM counterparts. In \S \ref{subhalos.sec} we discuss the subhalo mass functions in our SIDM and CDM simulations and show that SIDM$_1$ subhalo mass functions are very close to that of CDM in the range of halo masses we can resolve.
We provide scaling relations for the SIDM$_1$ halo properties in \S \ref{scaling.sec} and in \S \ref{analytic.sec} we present an analytic model that reproduces these scaling relations as well as the absolute densities and core radii of SIDM$_1$ halos. We use these scaling relations and the analytic model to make a broad-brush comparison to observed data in \S \ref{discussion.sec}. We present a summary together with our final conclusions in \S \ref{sumandconc.sec}. \section{Simulating Dark Matter Self Interactions} \label{implementation.sec} Our simulations rely on a new algorithm for modeling self-interacting dark matter with N-body simulations. Here we introduce our approach and provide a brief summary. In Appendix \ref{appendixA} we derive the algorithm explicitly starting with the Boltzmann equation and give details for general implementation. In N-body simulations, the simulated (macro)particles represent an ensemble of many dark-matter particles. Each simulation particle of mass $m_\mathrm{p}$ can be thought of as a patch of dark-matter phase-space density. In our treatment of dark matter self-scattering, the phase space patch of each particle is represented by a delta function in velocity and a spatially extended kernel $W(r,h_\mathrm{si})$, smoothing out the phase space in configuration space on a self-interaction smoothing length $h_\mathrm{si}$. The value of $h_\mathrm{si}$ needs to be set by considering the physical conditions of the problem (see \S \ref{test.sec}) as it specifies the range over which N-body particles can affect each other via self-interactions. In principle, $h_\mathrm{si}$ could be different for each particle and vary depending on the local density, but in the simulations presented here we fix $h_\mathrm{si}$ to be the same for all particles in a given simulation, setting the size of $h_\mathrm{si}$ according to the lowest densities at which self-interactions are effective for a given cross section. When two phase-space patches overlap, we need to calculate the pairwise interaction rate between them. We do so by considering the ``scattering out'' part of the Boltzmann collision term in Equation (\ref{eq:boltzmann}) and Eqs. (\ref{eq:pair1})-(\ref{eq:pair6}). The implied rate of scattering of an N-body particle $j$ off of a target particle $i$ of mass $m_\mathrm{p}$ is \begin{equation} \label{gameq.eq} \Gamma(i|j) = (\sigma/m) m_\mathrm{p} | \mathbf{v}_i - \mathbf{v}_j | g_{ji} \, , \end{equation} where $g_{ji}$ is a number density factor that accounts for the overlap of the two particles' smoothing kernels:~\footnote{This equation applies only if $h_\mathrm{si}$ is the same for both particles. See Appendix A for the general form.} \begin{equation} g_{ji} = \int_0^{h_\mathrm{si}} d^3 \mathbf{x}^\prime W(|\mathbf{x}^\prime|,h_\mathrm{si}) W(| \delta \mathbf{x}_{ji} + \mathbf{x}^\prime|,h_\mathrm{si}) \, . \end{equation} The probability that such an interaction occurs in a time step $\delta t$ is \begin{equation} \label{probeq.eq} P(i|j) = \Gamma(i|j) \, \delta t \, , \end{equation} and the total probability of interaction between N-body particles $i$ and $j$ is \begin{equation} \label{totalProb.eq} P_{ij} = \frac{P(i|j) + P(j|i)}{2}. \end{equation} Specifically, $P_{ij}$ is the probability for a macroparticle representing a patch of phase space around $(\mathbf{x}_j,\mathbf{v}_j)$ to interact with a target particle representing a patch of phase space around $(\mathbf{x}_i,\mathbf{v}_i)$ in a time $\delta t$.
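For orientation, the following Python fragment (a schematic transcription of ours, not the production implementation described below) evaluates Eqs.~(\ref{gameq.eq})-(\ref{totalProb.eq}) for a single pair of equal-mass particles sharing a common $h_\mathrm{si}$, estimating the kernel-overlap factor $g_{ji}$ by Monte Carlo integration with the spline kernel defined below in Equation (\ref{eqkernel}).
\begin{verbatim}
import numpy as np

def W(r, h):
    # cubic-spline kernel of Eq. (eqkernel); expects an array argument
    x = np.asarray(r, dtype=float) / h
    out = np.zeros_like(x)
    m1 = x <= 0.5
    m2 = (x > 0.5) & (x <= 1.0)
    out[m1] = 1 - 6 * x[m1]**2 + 6 * x[m1]**3
    out[m2] = 2 * (1 - x[m2])**3
    return 8.0 / (np.pi * h**3) * out

def g_overlap(dx, h, n=200_000, rng=np.random.default_rng(0)):
    # Monte Carlo estimate of g_ji = int d^3x' W(|x'|,h) W(|dx + x'|,h),
    # sampling x' uniformly inside the sphere |x'| < h
    u = rng.normal(size=(n, 3))
    u /= np.linalg.norm(u, axis=1)[:, None]
    xp = u * (h * rng.random(n) ** (1.0 / 3.0))[:, None]
    vals = W(np.linalg.norm(xp, axis=1), h) \
         * W(np.linalg.norm(xp + dx, axis=1), h)
    return (4.0 / 3.0) * np.pi * h**3 * vals.mean()

def pair_probability(xi, vi, xj, vj, m_p, sigma_over_m, h_si, dt):
    # Gamma(i|j) dt of Eqs. (gameq.eq) and (probeq.eq)
    g_ji = g_overlap(np.asarray(xj) - np.asarray(xi), h_si)
    gamma = sigma_over_m * m_p * np.linalg.norm(np.asarray(vi)
                                                - np.asarray(vj)) * g_ji
    return gamma * dt
\end{verbatim}
For equal masses and a common $h_\mathrm{si}$ the overlap integral is symmetric, so $P(i|j) = P(j|i)$ and a single evaluation already gives $P_{ij}$ of Equation (\ref{totalProb.eq}).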
We determine if particles interact by drawing a random number for each pair of particles that are close enough for the probability of interaction to be greater than zero. If a pair does scatter, we do a Monte Carlo for the new velocity directions, populating these parts of the phase-space and deleting the two particles at their initial phase-space locations. Note that by virtue of populating the new phase space regions, we are taking care of the ``scattering in'' term of the collision integral in Equation (\ref{eq:boltzmann}). We avoid double counting by only accounting for $P_{ij} = P_{ji}$ once during a given time-step $\delta t$. In the limit of a large number of macroparticles, the total interaction probability for each particle $i$ should approach \begin{equation} \label{probTotal.eq} P_i = \sum_{j} P_{ij} \,. \end{equation} We show in \S 3 that this approach correctly reproduces the expected number of scatterings in an idealized test case. Our method for simulating scattering differs from previous approaches in a few key ways. It is most similar to that of \citet{daveetal01} in that we both directly consider interactions between pairs of phase-space patches and rely on a scattering rate similar in form to Equation \ref{gameq.eq}. The difference is that their geometric factor $g_{ji}$ is not the same: our factor arises explicitly from the overlap in patches of phase space between neighboring macroparticles, as derived from the collision term in the Boltzmann equation (see Appendix A for details). Other authors determine the scattering rate $\Gamma$ of individual phase-space patches based on estimates of the local mass density (typically using some number of nearest neighbors or using an SPH kernel). The Monte Carlo is then based on an estimated scattering rate of an individual particle on the background, and a scattering partner is only chosen after a scattering event is determined to have occurred \citep{kochanek2000, yoshida00, colinetal02,randalletal08}. The scattering probability in this latter approach is not symmetric. For macroparticles of identical mass, $P(i|j) = P(j|i)$ explicitly in our approach, but not in the other approach, because the density estimated at the position of macroparticle $i$ need not be the same as that estimated at the position of particle $j$. In the future, there should be a direct comparison among these scattering algorithms to determine if they yield consistent results. \begin{figure} \begin {center} \includegraphics[width=0.5\textwidth]{figures/h_converge.eps} \end {center} \caption{Fraction of the expected total number of interactions that are computed in our test simulation as a function of the self-interaction smoothing length. The self-interaction cross section for each run is shown in units of cm$^2$/g in the legend. The code converges to the expected number of interactions when the smoothing length approaches the background inter-particle separation, i.e. when $h_\mathrm{si} (\rho_\mathrm{bg}/m_\mathrm{p})^{1/3} \gtrsim 0.2$.} \label{hconvtestFig} \end{figure} We have implemented our algorithm in the publicly available version of the cosmological simulation code GADGET-2 \citep{springel05}. GADGET-2 computes the short-range gravitational interactions by means of a hierarchical multipole expansion, also known as a tree algorithm. Particles are grouped hierarchically by a repeated subdivision of space, so their gravitational contribution can be accounted for by means of a single multipole force computation.
A cubical root node encompasses the full mass distribution. The node is repeatedly subdivided into eight daughter nodes of half the side length each (an oct-tree) until one ends up with ``leaf'' nodes containing single particles. Forces for a given particle are then obtained by ``walking'' the tree, opening nodes that are too close for their multipole expansion to be a correct approximation to their gravitational contribution. In GADGET-2, spurious strong close encounters by particles are avoided by convolving the single point particle density distribution with a normalized spline kernel (``gravitational softening''). To implement our algorithm, we take advantage of the tree-walk already built into GADGET-2, computing self interactions during the calculation of the gravitational interactions. For this to work we have to modify the opening criterion such that nodes are opened if they could contain particles closer than $2 h_\mathrm{si}$ from a target scatterer (or $h_i + h_j$ if particles have different self-interaction smoothing lengths). When computing the probability of interaction we use the same spline kernel used in GADGET-2 \citep{monaghan&lattazio85}, defined as \begin{equation} W(r,h)=\frac{8}{\pi h^3} \left\{ \begin{array}{ll} 1-6\left(\frac{r}{h}\right)^2 + 6\left(\frac{r}{h}\right)^3, & 0\le\frac{r}{h}\le\frac{1}{2} ,\\ 2\left(1-\frac{r}{h}\right)^3, & \frac{1}{2}<\frac{r}{h}\le 1 ,\\ 0 , & \frac{r}{h}>1 . \end{array} \right. \label{eqkernel} \end{equation} If a pair interacts, we give both particles a kick consistent with an elastic scattering that is isotropic in the center of mass frame. The post-scatter particle velocities are \begin{align} \mathbf{v}_0^\prime &= \mathbf{v}_c + \frac{m_1}{m_0+m_1}V \mathbf{e}, \nonumber \\ \mathbf{v}_1^\prime &= \mathbf{v}_c - \frac{m_0}{m_0+m_1}V \mathbf{e}, \end{align} where $\mathbf{v}_c$ is the center of mass velocity, $V$ is the relative speed of the particles (conserved for elastic collisions) and $\mathbf{e}$ is a randomly chosen direction. The time-step criterion is also modified to ensure that the scattering probability for any pair of particles is small, $P = \Gamma \, \delta t \ll 1$. An individual particle time-step is decreased by a factor of 2 if during the last tree-walk the maximum probability of interaction for any pair involving such a particle was $P_\mathrm{max} > 0.2$. Once a particle time-step is modified due to the previous restriction, if $P_\mathrm{max} < 0.1$ for such a particle and its current time-step is smaller than the one given by the standard criterion in GADGET-2, we increase it by a factor of 2.
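The scattering kick itself is compact enough to state in code; the sketch below (ours, not the GADGET-2 source) implements the post-scatter velocities written above, with the direction $\mathbf{e}$ drawn uniformly on the unit sphere.
\begin{verbatim}
import numpy as np

def elastic_kick(v0, v1, m0, m1, rng=np.random.default_rng()):
    # isotropic elastic scatter: new velocities from the center-of-mass
    # velocity v_c, the conserved relative speed V, and a random direction e
    v0, v1 = np.asarray(v0, float), np.asarray(v1, float)
    vc = (m0 * v0 + m1 * v1) / (m0 + m1)
    V = np.linalg.norm(v0 - v1)
    e = rng.normal(size=3)
    e /= np.linalg.norm(e)          # uniform direction on the unit sphere
    return (vc + m1 / (m0 + m1) * V * e,
            vc - m0 / (m0 + m1) * V * e)
\end{verbatim}
By construction this conserves momentum (the center-of-mass velocity is untouched) and energy (the relative speed $V$ is unchanged).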
\begin{table*} \label{sims.tab} {\bf Table 1:} Simulations discussed in this paper.\\ \centering \begin{tabular}{lcccccc} Name & Volume & Number of Particles & Particle Mass &Force Softening & Smoothing Length & Cross-section \\ & $L_\mathrm{Box}$ [$h^{-1} \, {\mathrm{Mpc}}$] & $N_\mathrm{p}$ & $m_\mathrm{p}$ [$h^{-1} \, {\mathrm{M}}_\odot$] & $\epsilon$ [$h^{-1} \, {\mathrm{kpc}}$] & $h_\mathrm{si}$ [$h^{-1} \, {\mathrm{kpc}}$] & $\sigma/m$ [$\, {{\rm cm}^2/{\rm g}}$] \\ \hline \hline CDM-50 & $50$ & $512^3$ & $6.88\times10^7$ & $1.0$ & $-$ & 0\\ CDM-25 & $25$ & $512^3$ & $8.59\times10^6$ & $0.4$ & $-$ & 0\\ CDM-Z11 & $(3 R_{\rm vir})$* & $2.5\times10^6$* & $1.07\times10^6$*& $0.3$ & $-$ & 0\\ CDM-Z12 & $(3 R_{\rm vir})$* & $5.6\times10^7$* & $1.34\times10^5$* & $0.1$ & $-$ & 0\\ \hline SIDM$_{0.1}$-50 & $50$ & $512^3$ & $6.88\times10^7$ & $1.0$ & $2.8 \ \epsilon$ & 0.1\\ SIDM$_{0.1}$-25 & $25$ & $512^3$ & $8.59\times10^6$ & $0.4$ & $2.8 \ \epsilon$ & 0.1\\ SIDM$_{0.1}$-Z11 & $(3 R_{\rm vir})$* & $2.5\times10^6$* & $1.07\times10^6$* & $0.3$ & $2.8 \ \epsilon$ & 0.1\\ SIDM$_{0.1}$-Z12 & $(3 R_{\rm vir})$* & $5.6\times10^7$* & $1.34\times10^5$* & $0.1$ & $1.4 \ \epsilon$ & 0.1 \\ \hline SIDM$_1$-50 & $50$ & $512^3$ & $6.88\times10^7$ & $1.0$ & $2.8 \ \epsilon$ & 1\\ SIDM$_1$-25 & $25$ & $512^3$ & $8.59\times10^6$ & $0.4$ & $2.8 \ \epsilon$ & 1\\ SIDM$_1$-Z11 & $(3 R_{\rm vir})$* & $2.5\times10^6$* & $1.07\times10^6$* & $0.3$ & $2.8 \ \epsilon$ & 1\\ SIDM$_1$-Z12 & $(3 R_{\rm vir})$* & $5.6\times10^7$* & $1.34\times10^5$* & $0.1$ & $1.4 \ \epsilon$ & 1\\ \end{tabular} \vskip 0.5 cm *Note: The Z11 and Z12 runs are zoom simulations with multiple particle species concentrating on halos of mass $M_{\rm vir} = 5 \times 10^{11}$ M$_\odot$ and $1.0 \times 10^{12}$ M$_\odot$, respectively (no $h$). The volumes listed refer to the number of virial radii used to find the Lagrangian volumes associated with the zoom. The particle properties listed are for the highest resolution particles only. \end{table*} \section{Test of the SIDM Implementation} \label{test.sec} Before performing cosmological simulations, we carried out a controlled test of the implementation in order to make sure the scattering rate and kinematics are correctly followed in the code, and to determine the optimum value of the SIDM softening kernel length $h_\mathrm{si}$. The simplest and cleanest scenario for testing our implementation consists of a uniform sphere of particles moving through a uniform field of stationary background particles. The coordinate system is defined such that the sphere is moving along the positive z-direction with constant velocity $v_s$. The particles forming the sphere and the particles forming the background field are tagged as different types within the code and here we will refer to them simply as \textit{sphere} (s) and \textit{background} (bg) particles, respectively. We only allow scatterings involving two different types of particles (i.e. sphere-background interactions only) and turn off gravitational forces among all of the particles. Furthermore all particles have the same mass $m_\mathrm{p}$. The expected number of interactions for this case is given by \begin{equation} \label{expNinteractEq} N_{\rm exp}(t) = \sum_{i\in \textrm{s}, j \in \textrm{bg}} P_{ij}= N_s (\sigma/m) \rho_\mathrm{bg} v_s \, t \, , \end{equation} where $N_s$ is the total number of sphere particles, $\rho_\mathrm{bg}$ is the density of the background field and $t$ is the elapsed time from the beginning of the simulation.
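As a sanity check on the magnitudes involved, Equation (\ref{expNinteractEq}) can be evaluated directly; the numbers below are purely illustrative (cgs units) and are not the parameters of our actual test runs.
\begin{verbatim}
# Back-of-the-envelope evaluation of Eq. (expNinteractEq), illustrative values
N_s    = 1.0e4       # number of sphere particles
sig_m  = 1.0         # sigma/m            [cm^2 g^-1]
rho_bg = 1.0e-24     # background density [g cm^-3]
v_s    = 1.0e7       # sphere speed, 100 km/s in [cm s^-1]
t      = 3.156e16    # elapsed time, 1 Gyr in [s]
N_exp = N_s * sig_m * rho_bg * v_s * t
print(N_exp)         # ~3.2e3 expected scatterings for these inputs
\end{verbatim}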
From this experiment we have found that the number of interactions computed by the code depends on the self-interaction smoothing length $h_\mathrm{si}$ (see Figure \ref{hconvtestFig}), which is fixed to be the same for all particles in this test. The number of interactions converges to the expected value given by Equation (\ref{expNinteractEq}) as $h_\mathrm{si}$ becomes comparable to the background inter-particle separation, specifically when $h_\mathrm{si} (\rho_\mathrm{bg}/m_\mathrm{p})^{1/3} \gtrsim 0.2$. For $h_\mathrm{si} (\rho_\mathrm{bg}/m_\mathrm{p})^{1/3} \gtrsim 0.5$ the accuracy of the algorithm does not improve by much and the time of the calculations increases rapidly, $\propto h_\mathrm{si}^3$. Apart from the expense, using larger values of $h_\mathrm{si}$ would lead to increasingly non-local interactions among particles, which is inconsistent with the model under consideration. We also check the kinematics of the scattering events in this test simulation and describe the results in Appendix \ref{appendixB}. The resulting kinematics and number of interactions from our test simulation agree well with the expectations from the theory as long as $h_\mathrm{si} (\rho_\mathrm{bg}/m_\mathrm{p})^{1/3} \gtrsim 0.2$. \begin{figure*} \begin{center} \includegraphics[width=0.45\textwidth]{figures/CDM_50box_10MpcSlice.eps} \includegraphics[width=0.45\textwidth]{figures/SIDM_50box_10MpcSlice.eps} \vskip.05cm \includegraphics[width=0.45\textwidth]{figures/CDM_Z12_host.eps} \includegraphics[width=0.45\textwidth]{figures/SIDM_Z12_host.eps} \end {center} \caption{Top: Large scale structure in CDM (left) and SIDM$_1$ (right) shown as a $50\times50 \, h^{-1} \, {\mathrm{Mpc}}$ slice with $10 \, h^{-1} \, {\mathrm{Mpc}}$ thickness through our cosmological simulations. Particles are colored according to their local phase-space density. There are no visible differences between the two cases. Bottom: Small scale structure in a Milky Way mass halo (Z12) simulated with CDM (left) and SIDM$_1$ (right), including all particles within $200 h^{-1} \, {\mathrm{kpc}}$ of the halo centers. The magnitude of the central phase-space density is lower in SIDM because the physical density is lower {\em and} the velocity dispersion is higher. The core of the SIDM halo is also slightly rounder. Note that substructure content is quite similar except in the central regions.} \label{Viz.fig} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.498\textwidth]{{figures/2ptFunction.eps}} \includegraphics[width=0.498\textwidth]{{figures/VmaxFunction_hiZ.eps}} \caption{Large-scale characteristics. {\em Left:} Dark matter two-point correlation functions from our CDM-50 (CDM-25) and SIDM$_1$-50 (SIDM$_1$-25) simulations in black (grey) and blue (cyan) colors, respectively. There are no noticeable differences between the CDM and SIDM$_1$ dark matter clustering over the scales plotted. {\em Right:} Cumulative number density of dark matter halos as a function of their maximum circular velocity ($V_{\mathrm{max}}$) at different redshifts for our CDM-50 (solid) and SIDM$_1$-50 (dashed) simulations. There are no significant differences in the $V_{\mathrm{max}}$ functions of CDM and SIDM$_1$ at any redshift. } \label{lss.fig} \end{center} \end{figure*} \section{Overview of Cosmological Simulations} \label{sims.sec} We initialize our cosmological simulations using the Multi-Scale Initial Conditions (MUSIC) code of \cite{hahn&abel11}. We have a total of four initial condition sets, each run with both CDM and SIDM.
The first two are cubic volumes of $25 h^{-1} \, {\mathrm{Mpc}}$ and $50 h^{-1} \, {\mathrm{Mpc}}$ on a side, each with $512^3$ particles. As discussed below, these simulations allow us to resolve the structure of a statistical sample of group ($\sim 10^{13} \,\mathrm{M}_{\odot}$) and cluster ($\sim 10^{14} \,\mathrm{M}_{\odot}$) halos. The second two initial conditions concentrate computational power on zoom regions \citep{katz&white93} drawn from the $50 h^{-1} \, {\mathrm{Mpc}}$ box, specifically aimed at exploring the density structure of two smaller halos, one with virial mass \footnote{We define $M_{\mathrm{vir}}$ as $M_{\mathrm{vir}} = \frac{4}{3} \pi \rho_b \Delta_\mathrm{vir}(z) r_{\mathrm{vir}}^3$, and $r_{\mathrm{vir}}$ as $\tilde{\rho}(r_{\mathrm{vir}}) = \Delta_\mathrm{vir}(z) \rho_b$, where $\tilde{\rho}(r_{\mathrm{vir}})$ denotes the overdensity within $r_{\mathrm{vir}}$, $\rho_b$ is the background density and $\Delta_\mathrm{vir}$ the virial overdensity.} $M_{\mathrm{vir}} = 7.1 \times 10^{11} h^{-1} \, {\mathrm{M}}_\odot = 1 \times 10^{12} \,\mathrm{M}_{\odot}$ (Z12) and one with $M_{\mathrm{vir}} = 3.5 \times 10^{11} h^{-1} \, {\mathrm{M}}_\odot = 5 \times 10^{11} \,\mathrm{M}_{\odot}$ (Z11). The Z12 run in particular is fairly high resolution, with more than five million particles in the virial radius. Table 1 summarizes the simulation parameters. The cosmology used is based on WMAP7 results for a $\Lambda$CDM Universe: $h = 0.71$, $\Omega_\mathrm{m} = 0.266$, $\Omega_{\Lambda} = 0.734$, $\Omega_\mathrm{b} = 0.0449$, $n_\mathrm{s} = 0.963$, $\sigma_8 = 0.801$ \citep{komatsu11}. Each of our four initial conditions has been evolved from redshift $z = 250$ to redshift $z = 0$ with collisionless dark matter (labeled CDM) and with two types of self-interacting dark matter: one with $\sigma/m = 1 \ \, {{\rm cm}^2/{\rm g}}$ (labeled SIDM$_1$) and another with $\sigma/m = 0.1 \ \, {{\rm cm}^2/{\rm g}}$ (labeled SIDM$_{0.1}$). We can use the same initial conditions for CDM and SIDM because at high redshift the low densities and low relative velocities of the dark matter make self-interactions insignificant. Table \ref{sims.tab} lists all the simulations used for this study and details their force, mass, and self-interaction resolution. In addition to the simulations listed in the table, we also ran the cosmological boxes with an SIDM cross section $\sigma/m = 0.03\hbox{ cm}^2/\hbox{g}$. We do not present results from these low cross section runs here because no core density differences were resolved within the numerical convergence radii of our simulations. As shown in \S \ref{test.sec}, the self-interaction smoothing length $h_\mathrm{si}$ must be larger than $\sim 20\%$ of the inter-particle separation in order to achieve convergence on the interaction rate. All the work for this paper was done with a fixed $h_\mathrm{si}$ for all particles, carefully chosen for each simulation so that the self-interactions are well resolved at densities a few times to an order of magnitude lower than the lowest densities for which self-interactions are significant. We have run the cosmological boxes with different choices for $h_\mathrm{si}$ (changes by factors of 2 to 4) and have found that our results are unaffected. We have also run tests on isolated halos with varying smoothing lengths and again find that the effects of self-interactions are robust to reasonable changes in $h_\mathrm{si}$.
All of our halo catalogs and density profiles are derived using the publicly available code Amiga Halo Finder (AHF) \citep{knollmannandknebe09}. \section{Simulation Results} \subsection{Preliminary Illustrations} \label{prelim.sec} Before presenting any quantitative comparisons between our CDM and SIDM runs, we provide some simulation renderings in order to help communicate the qualitative differences. The upper panels of Figure \ref{Viz.fig} show a large-scale comparison: two ($50\times50\times10$) $h^{-1} \, {\mathrm{Mpc}}$ slices from the CDM-50 and SIDM$_1$-50 boxes side-by-side at $z=0$. The structures are color-coded by local phase-space density ($\propto \rho/v_{\rm rms}^3$). It is evident that there are no observable differences in the large-scale characteristics of CDM and SIDM$_1$. We discuss this result in more quantitative terms in \S \ref{lss:sec} but of course this is expected. The SIDM models we explore do not have appreciable rates of interaction for densities outside the cores of dark matter halos. The upper panels of Figure \ref{Viz.fig} provide a visual reminder that the SIDM models we consider are effectively identical to CDM on large scales. The differences between CDM and SIDM become apparent only when one considers the internal structure of individual halos. The lower panels of Figure \ref{Viz.fig} provide side-by-side images of a Milky-Way mass halo (Z12) simulated with CDM (left) and SIDM$_1$ (right). SIDM tends to make the cores of halos less dense {\em and} kinetically hotter (see \S \ref{halos.sec}), and these two differences are enhanced multiplicatively in the phase-space density renderings. The central regions of the host halo are also slightly rounder in the SIDM case (Peter et al. 2012). Importantly, the differences in substructure characteristics are minimal, especially at larger radii. We return to a quantitative description of substructure differences in \S \ref{subhalos.sec}. \subsection{Large Scale Structure and Halo Abundances} \label{lss:sec} Figure \ref{lss.fig} provides a quantitative comparison of both the clustering properties (left) and halo abundance evolution (right) between our full-box CDM and SIDM$_1$ simulations. The left panel shows the two-point function of dark matter particles in both cosmological runs for CDM and SIDM$_1$. There are no discernible differences between SIDM and CDM over the scales plotted, though of course the different box sizes (and associated resolutions) mean that the boxes themselves only overlap for a limited range of scales. For a given set of initial conditions, however, SIDM and CDM give identical results. The right panel of Figure \ref{lss.fig} shows the cumulative number density of dark-matter halos (including subhalos) as a function of their peak circular velocity ($V_{\mathrm{max}}$) for the CDM-50 (solid) and SIDM$_1$-50 (dashed) simulations at various redshifts. Remarkably, this comparison shows no significant difference either, indicating that SIDM with cross sections as large as $1 \, {{\rm cm}^2/{\rm g}}$ does not strongly affect the maximum circular velocities of individual halos. The two panels of Figure \ref{lss.fig} demonstrate that for large-scale comparisons, including analyses involving field halo mass functions, SIDM and CDM yield identical results. The implication is that observations of large-scale structure are just as much a ``verification'' of SIDM as they are of CDM.
\subsection{Halo Structure} \label{halos.sec} Before presenting statistics on halo structure, we focus on six well resolved halos that span our full mass range $M_{\rm vir} = 5\times 10^{11}-2\times 10^{14} \,\mathrm{M}_{\odot}$, selected from our full simulation suite, including our two zoom-simulation halos (Z12 and Z11). Figures \ref{densProfiles.fig} through \ref{vrmsProfiles.fig} show radial profiles for the density, circular velocity and velocity dispersion for all three dark matter cases. In each figure, black circles correspond to CDM, green triangles to SIDM$_{0.1}$, and blue stars to SIDM$_1$. All profiles are shown down to the innermost resolved radius for which the average two-body relaxation time roughly matches the age of the Universe \citep{poweretal03}. We begin with the density profiles of halos shown in the six-panel Figure \ref{densProfiles.fig}. For each halo in the CDM run we have fit an NFW profile \citep{nfw97} to its radial density structure: \begin{equation} \rho_{\rm NFW}(r) = \frac{\rho_s \, r_s^3}{r(r_s+r)^2}, \end{equation} and recorded its corresponding scale radius $r_s$. The CDM-fit $r_s$ value for each halo is given in its associated panel along with the halo virial mass. The radial profiles for each halo (in both the CDM and SIDM runs) are normalized with respect to the CDM $r_s$ value in the plot. This allows our full range of halo masses to be plotted on identical axes. The SIDM versions of each halo show remarkable similarity to their CDM counterparts at large radii. However, the SIDM$_1$ cases clearly begin to roll towards constant-density cores at small radii. The best resolved halos in the SIDM$_{0.1}$ runs also demonstrate lower central densities compared to CDM, though the differences are at the factor of $\sim 2$ level even in our best resolved systems. Clearly, higher resolution simulations will be required in order to fully quantify the expected differences between CDM and SIDM for $\sigma/m \sim 0.1 \, {{\rm cm}^2/{\rm g}}$. \begin{figure*} \begin {center} \includegraphics[height=0.45\textheight]{figures/Density-R.eps} \end {center} \caption{Density profiles for our six example halos from our SIDM$_1$ (blue stars) and SIDM$_{0.1}$ (green triangles) simulations and their CDM counterparts. With self-interactions turned on, halo central densities decrease, forming cored density profiles. Solid lines are for the best NFW (black) and Burkert (blue) fits, with the points representing the density at each radial bin found by AHF. The arrow indicates the location of the Burkert core radius $r_\mathrm{b}$. $r_\mathrm{s}$ is the NFW scale radius of the corresponding CDM halo density profile (black solid line). Burkert profiles provide a reasonable fit to our SIDM$_1$ halos only because $r_\mathrm{b} \approx r_\mathrm{s}$ for $\sigma/m=1 \, {{\rm cm}^2/{\rm g}}$, so a cored profile with a single scale radius works. As discussed in \S \ref{analytic.sec} this is not the case for $\sigma/m=0.1 \, {{\rm cm}^2/{\rm g}}$ and thus Burkert profiles are not a good fit to our SIDM$_{0.1}$ halos.} \label{densProfiles.fig} \end{figure*} For the SIDM$_1$ cases we can quantify the halo cores by fitting them to \citet{burkert1995} profiles \begin{equation} \label{Burkert.Eq} \rho_{\rm B}(r) = \frac{\rho_\mathrm{b}r_\mathrm{b}^3}{(r+r_\mathrm{b})(r^2+r_\mathrm{b}^2)}. \end{equation} These Burkert fits are shown as blue dashed lines. They are good fits for radii within $r\sim2-3 \ r_{\mathrm{s}}$, but the quality of the fits gets worse at large radii. 
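For reference, both profile fits can be reproduced with a few lines of Python. The sketch below (ours) assumes binned $(r, \rho)$ profile data, such as the AHF profiles used here, and fits in log space so that radial bins are weighted roughly evenly; the initial guesses and units (kpc, $\mathrm{M}_\odot \, \mathrm{kpc}^{-3}$) are hypothetical.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def log_nfw(r, log_rho_s, r_s):          # NFW profile, fit in log space
    return np.log10(10**log_rho_s * r_s**3 / (r * (r_s + r)**2))

def log_burkert(r, log_rho_b, r_b):      # Eq. (Burkert.Eq)
    return np.log10(10**log_rho_b * r_b**3
                    / ((r + r_b) * (r**2 + r_b**2)))

def fit_profiles(r, rho, p0=(7.0, 20.0)):
    # r, rho: binned halo profile (e.g. AHF output); p0 is a rough guess
    nfw, _ = curve_fit(log_nfw, r, np.log10(rho), p0=p0)
    bur, _ = curve_fit(log_burkert, r, np.log10(rho), p0=p0)
    return {"rho_s": 10**nfw[0], "r_s": nfw[1],
            "rho_b": 10**bur[0], "r_b": bur[1]}
\end{verbatim}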
The blue arrows in each panel show the value of the best-fit Burkert core radius for the SIDM$_{1}$ halos. Note that the values are remarkably stable in proportion to the CDM $r_s$ value at $r_b \simeq 0.7 \, r_s$. As explained in \S \ref{analytic.sec}, the fact that the SIDM$_1$ profiles are reasonably well characterized by a single scale-radius Burkert profile may be a lucky accident, only valid for cross sections near $1 \, {{\rm cm}^2/{\rm g}}$. It just so happens that for this cross section significant scattering sets in at $r \sim r_{\mathrm{s}}$ (see Figure \ref{sigvRhoProfiles.fig} and related discussion). For a smaller cross section (with a correspondingly smaller core) a multiple parameter fit may be necessary. Given the beginnings of very small cores we are seeing in the SIDM$_{0.1}$ runs, it would appear that we would need one scale radius to define an $r_s$ bend and a second scale radius to define a distinct core. Another qualitative fact worth noting is that the density profiles of the SIDM$_1$ halos overshoot the CDM density profiles near the Burkert core radius (not as much as the Burkert fits do, but the difference in the data points is noticeable). This is due to the fact that as particles scatter in the center, those that gain energy are pushed to larger apocenter orbits. This observation invites us to consider a toy model for SIDM halos where the effect of SIDM is confined to a region (smaller than a radius of about $r_b$) wherein particles redistribute energy and move towards a constant-density isothermal core. We will develop this model further to explain the scaling relations between core size and halo mass in \S \ref{scaling.sec}. \begin{figure*} \begin {center} \includegraphics[height=0.45\textheight]{figures/Vcirc-R_uptors.eps} \end {center} \caption{Circular velocity profiles for our example selection of six well resolved halos from our CDM, SIDM$_1$ and SIDM$_{0.1}$ simulations. The magnitude of the circular velocity at small radii $r \lesssim r_\mathrm{s}$ is lowered for all halos when self-interactions are turned on. $r_\mathrm{s}$ is the NFW scale radius of the corresponding CDM halo density profile.} \label{vcircProfiles.fig} \end{figure*} The circular velocity curves for the same set of halos discussed above are shown in Figure \ref{vcircProfiles.fig}. The SIDM rotation curves rise more steeply and have a lower normalization than for CDM within the NFW scale radius $r_s$. This brings to mind the rotation curves observed for low surface brightness galaxies, and we will explore this connection later. Note though that the peak circular velocity $V_{\mathrm{max}}$ is actually slightly higher for the SIDM$_1$ case because of the mass rearrangement (evident in the density profiles in Figure \ref{densProfiles.fig}) briefly discussed in the last paragraph. At radii well outside the core radius, the rotation curves of the CDM and SIDM$_1$ halos converge, though this convergence occurs beyond the plot axes ($> r_s$) for most of the halos shown. An appreciation of why the density profiles of SIDM halos become cored can be gained from studying their velocity dispersion profiles compared to their CDM counterparts, as illustrated in Figure \ref{vrmsProfiles.fig}. Here $v_{\rm rms}$ is defined as the root-mean-square speed of all particles within radius $r$.
While the CDM halos (black) are colder in the center than in their outer parts (reflecting a cuspy density profile), the SIDM halos have hotter cores, indicative of heat transport from the outside in. Moreover, the SIDM halos are slightly {\em colder} at large radii, again reflecting a redistribution of energy. As discussed in the introduction, it is this heat transport that is the key to understanding why CDM halos differ from SIDM halos in their density structure \citep{balberg2002,colinetal02,ahn2005,koda2011}. The added thermal pressure at small radii is what gives rise to the core. The SIDM$_1$ simulations have sufficient interactions that they have been driven to isothermal profiles for $r/r_{\mathrm{s}} \lesssim 1$, while for SIDM$_{0.1}$ the $v_{\rm rms}$ profiles typically begin to deviate from the CDM lines only at smaller radii, $r/r_{\mathrm{s}} \sim 0.2$, reflecting the relatively lower scattering rate. The deviations in the SIDM $v_{\rm rms}$ profiles compared to CDM appear to set in at approximately the radius where we expect every particle to have interacted once in a Hubble time. This is explored directly in Figure \ref{sigvRhoProfiles.fig}, where we present a proxy for the local scattering rate as a function of distance from the halo center: \begin{equation} \rho(r) \, v_{\mathrm{rms}}(r) \propto \Gamma(r) \, (\sigma/m)^{-1}. \end{equation} We have divided out the cross section so it is easier to compare the SIDM$_{0.1}$ and SIDM$_1$ cases. Figure \ref{sigvRhoProfiles.fig} presents this rate proxy in units of $1 \hbox{ Gyr cm}^2/\hbox{g}$: for the SIDM$_1$ case (with $\sigma/m = 1 \, {{\rm cm}^2/{\rm g}}$) the radius where a typical particle will have scattered once over a 10 Gyr halo lifetime is set by $\rho(r)v_{\mathrm{rms}}(r) = 0.1$. For the SIDM$_{0.1}$ case (with $\sigma/m = 0.1 \, \hbox{cm}^2/\hbox{g}$), the ordinate needs to be ten times higher ($\sim 1$) in order to achieve the same scattering rate. By comparing Figure \ref{sigvRhoProfiles.fig} to Figure \ref{vrmsProfiles.fig} (and to some extent to all of Figures 4-6), we see that the effects of self-interactions do become evident at radii corresponding to $\rho \, v_{\rm rms} \sim 0.1$ for SIDM$_1$ (at $r/r_{\mathrm{s}} \sim 0.8$) and $\rho \, v_{\rm rms} \sim 1$ for SIDM$_{0.1}$ (at $r/r_{\mathrm{s}} \sim 0.2$). Interestingly, for the SIDM$_1$ halos this interaction radius is fairly close to the Burkert scale radius (shown by the blue arrows). It should be kept in mind, however, that the structure of halos can be affected to larger radii because particles scattering in the inner regions can gain energy and move to larger orbits. A careful inspection of the density and rotation velocity profiles shows that this is indeed the case. We will discuss these findings in more detail in Sections \ref{scaling.sec} and \ref{analytic.sec}. In particular, in \S \ref{analytic.sec} we present an analytic model aimed at understanding how the central densities and scale radii of SIDM halos are set in the context of energetics. But before moving on to those issues, we first explore halo substructure in SIDM. \begin{figure*} \begin {center} \includegraphics[height=0.45\textheight]{figures/vrms-R.eps} \end {center} \caption{Velocity dispersion profiles for our six example halos from our SIDM$_1$ and SIDM$_{0.1}$ simulations over-plotted with their CDM counterparts. The velocity dispersion is inflated at small radii and slightly suppressed at large radii.
The effects set in at approximately the radius where SIDM particles experience at least one interaction on average over the lifetime of the halo (see Figure \ref{sigvRhoProfiles.fig}).} \label{vrmsProfiles.fig} \end{figure*} \subsection{Substructure} \label{subhalos.sec} The question of halo substructure is an important one for SIDM. One of the original motivations for SIDM was to reduce the number of subhalos in the Milky-Way halo in order to match the relative dearth of observed satellite galaxies \citep{spergelandsteinhardt00}. However, the over-reduction of halo substructure is now recognized as a negative feature of SIDM compared to CDM, given the clear evidence for galaxy-size subhalos throughout galaxy clusters \citep{natarajan2009} and the new discoveries of ultra-faint galaxies around the Milky Way (see \citet{willman2010} and \citet{bullock2010} for reviews). In fact, one of the most stringent constraints on the self-interaction cross section comes from analytic subhalo-evaporation arguments \citep{gnedinandostriker01}. Figure \ref{subVmaxFunct.fig} demonstrates that the effects of subhalo evaporation in SIDM are not as strong as previously suggested on analytic grounds. Here we show the cumulative number of subhalos larger than a given $V_{\mathrm{max}}$ for a sample of well-resolved halos in our CDM (solid), SIDM$_{0.1}$ (dotted), and SIDM$_1$ (dashed) simulations. The associated virial masses for each host halo are shown in the legend. The left panel presents the $V_{\mathrm{max}}$ function for all subhalos within the virial radius of each host and the right panel restricts the analysis to subhalos within half of the virial radius. We see that generally the reduction in substructure counts at a fixed $V_{\mathrm{max}}$ is small but non-zero and that the effects appear to be stronger at small radii than at large radii. Similarly, there appears to be slightly more reduction of substructure in the SIDM cluster halos compared to the galaxy-size systems. We can understand both trends, 1) the increase in the difference between the CDM and SIDM $V_{\mathrm{max}}$ functions as $M_{\mathrm{vir}}$ increases and 2) the increase in the difference as one looks at the central regions of the halo, using the results from the previous section as a guide. The typical probability that a particle in an SIDM subhalo will interact with a particle in the background halo is \begin{equation} P \approx \langle \rho_{host}(\mathrm{r}) (\sigma/m) v_{orb}(\mathrm{r})\rangle_{T} \, T, \end{equation} where $v_{orb}(\mathrm{r})$ is the orbital speed of the subhalo at position $\mathrm{r}$, $\rho_{host}$ is the mass density of the host halo, and $T$ is the orbital period. The typical speed of the subhalo is similar to the rms speed of the smooth component of the halo, and thus $\rho_{host}(\mathrm{r}) (\sigma/m) v_{orb}(\mathrm{r})$ should be similar to the function we show in Figure \ref{sigvRhoProfiles.fig}. At fixed $r/r_{\mathrm{s}}$ we expect $P$ to scale with $V_{\mathrm{max}}$ as $V_{\mathrm{max}}^3/r_{\mathrm{max}}^2$ (given that $\rho_{\mathrm{s}} \propto V_{\mathrm{max}}^2/r_{\mathrm{max}}^2$), which is a very mildly increasing function of $V_{\mathrm{max}}$ over the range of halo masses we have simulated. Note though that we expect scatter at fixed halo mass because of the scatter in the $V_{\mathrm{max}}-r_{\mathrm{max}}$ relation \citep{bullock2001}.
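To put a rough number on this estimate, the toy Python sketch below evaluates $P$ for a subhalo on a circular orbit in an NFW host, in which case the local density and orbital speed are constant over the orbit and the time average is trivial; the host parameters are purely illustrative, and the constants are standard unit conversions:
\begin{verbatim}
MSUN_G = 1.989e33   # g per Msun
PC_CM  = 3.086e18   # cm per pc
GYR_S  = 3.156e16   # s per Gyr

def nfw_density(r_kpc, rho_s, r_s_kpc):
    # NFW density in Msun/pc^3 (rho_s in Msun/pc^3, radii in kpc)
    x = r_kpc / r_s_kpc
    return rho_s / (x * (1.0 + x)**2)

def scatter_probability(r_kpc, rho_s, r_s_kpc, v_orb_kms,
                        sigma_m_cm2g=1.0, T_gyr=1.0):
    # P ~ rho_host * (sigma/m) * v_orb * T for a circular orbit
    rho_cgs = nfw_density(r_kpc, rho_s, r_s_kpc) * MSUN_G / PC_CM**3
    rate = rho_cgs * sigma_m_cm2g * (v_orb_kms * 1.0e5)  # per second
    return rate * T_gyr * GYR_S

# Hypothetical cluster host (rho_s = 0.005 Msun/pc^3, r_s = 300 kpc):
# an orbit at r = 0.5 r_s with v_orb = 1000 km/s gives P ~ 1 per Gyr.
print(scatter_probability(150.0, 0.005, 300.0, 1000.0))
\end{verbatim}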
While the increase in destruction of subhalos with host halo mass is not strong, it is clear from the above arguments that subhalos in the inner parts of the halo ($r/r_{\mathrm{s}} \ll 1$) should be destroyed but the bulk of the subhalos around $r/r_{\mathrm{s}} \sim 1$ and beyond should survive for $\sigma/m=1\, {{\rm cm}^2/{\rm g}}$. This effect is strengthened by the fact that subhalos in the innermost region of the halo were accreted much longer ago than subhalos in the outskirts, so they have experienced many more orbits \citep{rochaetal11}. These arguments explain the comparisons between the subhalo mass functions plotted in Figure \ref{subVmaxFunct.fig}. Our arguments demonstrate that a large fraction of the subhalos found in CDM halos (most of which are in the outer parts) would still survive in SIDM halos for $\sigma/m$ values around or below $1\, {{\rm cm}^2/{\rm g}}$. \begin{figure*} \begin {center} \includegraphics[height=0.45\textheight]{figures/sigv_rho-R.eps} \end {center} \caption{Estimate of the local scattering rate modulo the cross section $\rho v_{\mathrm{rms}} = \Gamma (\sigma/m)^{-1}$ for six well resolved halos from our CDM, SIDM$_{0.1}$, and SIDM$_1$ simulations. The quantity is scaled by $1 \hbox{ Gyr cm}^2/\hbox{g}$, such that $1$ in these units means that each particle has roughly one interaction per Gyr in SIDM$_{1}$ and $0.1$ per Gyr in SIDM$_{0.1}$. Based on this argument, the effects of self-interactions on the properties of halos over $\sim 10$ Gyr should start to become important when the ordinate is greater than about 0.1 in SIDM$_{1}$ ($r/r_{\mathrm{s}} \sim 0.8$) and greater than about 1 in SIDM$_{0.1}$ ($r/r_{\mathrm{s}} \sim 0.2$). Comparisons to Figures 4-6 indicate that this is indeed the case.} \label{sigvRhoProfiles.fig} \end{figure*} \vskip 0.2cm \noindent Overall, in the previous two sections we have seen that the effects of self-interactions between dark matter particles in cosmological simulations are confined primarily to the central regions of dark matter halos, leaving the large-scale structure identical to that in our non-interacting CDM simulations. Thus we retain the desirable features of CDM on large scales while revealing different phenomenology near halo centers. In the following section we explore how the properties of SIDM halos presented here scale with halo mass. \section{Scaling Relations} \label{scaling.sec} In the previous section we saw that while SIDM preserves the CDM large scale properties of dark matter halos, self-interactions in the central regions of halos result in a decrease of central densities and the formation of cores in their density profiles. We found that the density profiles of halos from our SIDM$_1$ simulations can be relatively well fit by Burkert density profiles inside $r \sim 2-3 \, r_{\mathrm{s}}$ (see Figure \ref{densProfiles.fig}). Here we define a sample of well resolved halos from all our SIDM$_1$ simulations and use Burkert fits to their density profiles in order to quantify their central densities and core sizes. We then provide scaling relations of dark matter halo properties with maximum circular velocity $V_{\mathrm{max}}$. The sample of halos used for the rest of this section consists of the two host halos in our SIDM$_1$-Z11 and SIDM$_1$-Z12 simulations together with the 25 most massive halos from our SIDM$_1$-50 and the 25 most massive halos from our SIDM$_1$-25 simulations.
That gives us a total of 52 halos spanning a range $V_{\mathrm{max}} = 130-860 \ \, {\rm km}/{\rm s}$ or $M_{\mathrm{vir}} = 5\times 10^{11} - 2\times 10^{14} \ \,\mathrm{M}_{\odot}$. For this set of halos the innermost resolved radius, defined by Equation 20 in \citet{poweretal03}, is always smaller than one third of the Burkert scale radius from which we define the sizes of cores. It is vital that we make a conservative comparison to the \citet{poweretal03} radius because both numerical gravitational scattering and physical self-interactions lead to the same phenomenological result of constant density cores. Most of the halos (other than the 52 we select here) do not pass this test well enough for the core set by self-interactions to be resolved with confidence. This desire to be conservative in our presentation of scaling relations forces us to find these relations from only a small sample of halos for SIDM$_{1}$ and leaves us with basically no halos to find scalings for SIDM$_{0.1}$. One also has to keep in mind that our SIDM$_{1}$ relations could be biased by selecting only the most massive halos in our full box simulations. Clearly, higher resolution simulations are necessary to find definitive answers. It is reassuring, however, that the scaling relations derived from our analytical arguments in \S \ref{analytic.sec} agree so well with the ones presented here for $\sigma/m = 1 \, {{\rm cm}^2/{\rm g}}$. We have checked that for all of our halos we resolve the scattering rate out to at least four times the Burkert scale radius. Outside of this point the scattering rate is underestimated because of our choice of the self-interaction smoothing length relative to the interparticle spacing (see \S \ref{test.sec}). However, the expected scattering rate is negligible with respect to the Hubble rate outside that radius (Figure \ref{sigvRhoProfiles.fig}). Moreover, we have re-run our $50 h^{-1} \, {\mathrm{Mpc}}$ boxes for a range of SIDM smoothing values and found identical results. Thus we consider our sample to be well resolved. Eight halos in our sample are undergoing significant interactions and have density profiles that are clearly perturbed even in the CDM runs. We include these eight systems in all of the following plots but indicate them with open symbols. We do not use them in the best fits for the scaling relations that we provide. \begin{figure*} \begin {center} \includegraphics[width=\textwidth]{figures/subVmaxFunction.eps} \end {center} \caption{ Cumulative number of subhalos as a function of subhalo peak circular velocity ($V_{\mathrm{max}}$) for several well-resolved halos in our CDM (solid), SIDM$_{0.1}$ (dotted), and SIDM$_1$ (dashed) simulations. When looking at all subhalos within $r < r_{\mathrm{vir}}$ (left), the differences are small and the slope of the subhalo $V_{\mathrm{max}}$ function is the same for the CDM and SIDM cases. The offset in the subhalo $V_{\mathrm{max}}$ function increases when we look only at subhalos inside $r < 0.5 \ r_{\mathrm{vir}}$ (right panel), showing that SIDM suppresses the number of subhalos in the central regions of halos more strongly.} \label{subVmaxFunct.fig} \end{figure*} We start by examining the global structure of halos as characterized by the maximum circular velocity $V_{\mathrm{max}}$ and the radius where the rotation curve peaks, $r_{\mathrm{max}}$. The relationship between $V_{\mathrm{max}}$ and $r_{\mathrm{max}}$ provides a simple, intermediate-scale measure of halo concentration, and we aim to investigate any differences between SIDM and CDM.
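(Operationally, $V_{\mathrm{max}}$ and $r_{\mathrm{max}}$ are simply read off the peak of the circular velocity curve; a minimal sketch, assuming an enclosed-mass profile already measured on a radial grid:)
\begin{verbatim}
import numpy as np

G = 4.301e-6  # kpc (km/s)^2 / Msun

def vmax_rmax(radii_kpc, m_enclosed_msun):
    # Circular velocity curve V_c(r) = sqrt(G M(<r) / r); its peak
    # defines (Vmax, rmax) up to the resolution of the radial grid.
    vc = np.sqrt(G * m_enclosed_msun / radii_kpc)
    i = np.argmax(vc)
    return vc[i], radii_kpc[i]   # Vmax in km/s, rmax in kpc
\end{verbatim}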
Figure \ref{rmaxVmax.fig} shows the $V_{\mathrm{max}} - r_{\mathrm{max}}$ relation for CDM (black) and SIDM$_1$ (blue) halos. We can see that small differences of about $10\%$ exist in both $V_{\mathrm{max}}$ and $r_{\mathrm{max}}$, with SIDM$_1$ halos having larger values of $V_{\mathrm{max}}$ and smaller values of $r_{\mathrm{max}}$. This was already evident in Figure \ref{vcircProfiles.fig}, where the circular velocity curves of SIDM$_1$ halos seem to peak at slightly smaller radii and slightly larger velocities than their CDM analogs, even though the SIDM$_1$ curves fall more steeply toward the center. The apparent difference is consistent with a picture where energy exchange due to scattering redistributes the SIDM dark matter particles, with many of the tightly bound particles scattered onto less bound, high apocenter orbits. Since the radius at which self-interactions are significant (see Figure \ref{sigvRhoProfiles.fig}) is smaller than (but close to) $r_{\mathrm{s}}$, it is entirely reasonable that the scattered particles lead to a new $r_{\mathrm{max}}$ for SIDM$_1$ that is smaller than the CDM $r_{\mathrm{max}}$ and a $V_{\mathrm{max}}$ that is larger. Notice that the slope of the $V_{\mathrm{max}}-r_{\mathrm{max}}$ relation is unchanged from CDM to SIDM$_1$. The best-fit relations are: \begin{align}\label{RmaxVmax.eq} r_{\mathrm{max}} &= 26.21 \, {\rm kpc} \ \left(\frac{V_{\mathrm{max}}}{100 \, {\rm km}/{\rm s}}\right)^{1.45} \ \ (\mathrm{CDM})\,, \nonumber \\ r_{\mathrm{max}} &= 22.46 \, {\rm kpc} \ \left(\frac{V_{\mathrm{max}}}{100 \, {\rm km}/{\rm s}}\right)^{1.46} \ \ (\mathrm{SIDM}_1). \end{align} We continue this discussion by considering the sizes of cores in our SIDM$_1$ simulations as a function of $V_{\mathrm{max}}$. The core sizes of halos are quantified by the scale radius in the Burkert fit to their density profiles, namely $r_\mathrm{b}$ in Equation \ref{Burkert.Eq}. Figure \ref{rbVmax.fig} shows that for this relation a single power law holds over the whole range of our sample. We will come back to this result in our discussion section (\S \ref{discussion.sec}) when extrapolating to smaller and larger $V_{\mathrm{max}}$ values to make contact with observations of cores in galaxies and clusters. The power law that best fits our data is given by \begin{align} \label{rb-vmax.eq} r_\mathrm{b} &= 7.50 \, {\rm kpc} \ \left(\frac{V_{\mathrm{max}}}{100 \, {\rm km}/{\rm s}}\right)^{1.31}. \end{align} If we fit to $M_{\mathrm{vir}}$ instead of $V_{\mathrm{max}}$ we get \begin{align} \label{rb-mvir.eq} r_\mathrm{b} &= 2.21 \, {\rm kpc} \ \left(\frac{M_{\mathrm{vir}}}{10^{10} \,\mathrm{M}_{\odot}}\right)^{0.43}. \end{align} We note that the scaling with $V_{\mathrm{max}}$ is close to that expected for $r_{\mathrm{max}}$ or $r_{\mathrm{s}}$. We show this explicitly by fitting for the core size of SIDM$_1$ halos $r_\mathrm{b}$ as a function of the NFW scale radius $r_\mathrm{s}$ of their CDM counterparts, as shown in Figure \ref{rbRs.fig}. We find that the ratio of the core size of a SIDM$_1$ halo to the scale radius of the corresponding CDM halo varies only mildly with $V_{\mathrm{max}}$. In other words, the core sizes are very nearly a fixed fraction of the CDM halo scale radius. The relation that best fits our data is given by \begin{align} \frac{r_\mathrm{b}}{r_{\mathrm{s}}} &= 0.71 \ \left(\frac{r_\mathrm{s}}{10 \, {\rm kpc}}\right)^{-0.08}.
\end{align} This underscores the point that $r_\mathrm{b}$ and $r_{\mathrm{s}}$ are closely tied to each other, and the fact that they are numerically so close to each other is the reason why a cored profile with a single scale (like a Burkert profile) provides a reasonable fit to our SIDM$_1$ halos. We will explain this striking behavior using an analytic model in the next section. The central densities in SIDM$_1$ halos can be defined either as the Burkert profile's scale density or as the density at the innermost resolved radius. We have found that the two definitions give similar results. In Figure \ref{rhobVmax.fig}, we show how the Burkert scale density $\rho_\mathrm{b}$ scales with $V_{\mathrm{max}}$. The trend in the $\rho_\mathrm{b}-V_{\mathrm{max}}$ relation is not as strong as for the $r_\mathrm{b}-V_{\mathrm{max}}$ relation, with a scatter as large as about a factor of 3. We will come back to the implications of this result in our discussion section (\S \ref{discussion.sec}). The relation that best fits our data is given by \begin{align} \label{rhob-vmax.eq} \rho_\mathrm{b} &= 0.015 \,\mathrm{M}_{\odot}/\mathrm{pc}^3 \ \left(\frac{V_{\mathrm{max}}}{100 \, {\rm km}/{\rm s}}\right)^{-0.55}. \end{align} If we fit to $M_{\mathrm{vir}}$ instead of $V_{\mathrm{max}}$ we get \begin{align} \label{rhob-mvir.eq} \rho_\mathrm{b} &= 0.029 \,\mathrm{M}_{\odot}/\mathrm{pc}^3 \ \left(\frac{M_{\mathrm{vir}}}{10^{10} \,\mathrm{M}_{\odot}}\right)^{-0.19}. \end{align} We urge caution when using the above fits to the central densities, as they are likely to be affected by our small sample size, given the large scatter. The toy model discussed in the next section predicts a slightly stronger scaling with $V_{\mathrm{max}}$. However, the typical densities of order $0.01 \,\mathrm{M}_{\odot}/\mathrm{pc}^3$ for galaxy halos and $0.001 \,\mathrm{M}_{\odot}/\mathrm{pc}^3$ for cluster halos (see Figure \ref{rhobVmax.fig}) are in line with the predictions of the analytic model. \bigskip \noindent In this section we have presented scaling relations for the properties of halos in our SIDM$_1$ simulations. Our limited resolution allows us to use only 52 halos spanning a modest mass range, from which we throw out eight systems that are undergoing mergers. Admittedly, this sample is not large enough to be definitive, especially in regards to scatter. However, the strong correlation between the SIDM core radius $r_\mathrm{b}$ and the counterpart CDM scale radius $r_{\mathrm{s}}$ is clearly statistically significant, and the general trends provide a useful guide for tentative observational comparisons -- a subject we will return to in the final section below. \section{Analytic Model to Explain the Scaling Relations} \label{analytic.sec} In this section we develop a simple model to understand the scaling relations shown in \S \ref{scaling.sec}. This model is based on identifying an appropriate radius $r_1$ within which self-interactions are effective and demanding that the mass as well as the average velocity dispersion within this radius are set by the mass and the average velocity dispersion (within the same radius) of the {\em same halo in the absence of self-scatterings}. The mass loss due to scatterings in the core should be insignificant because particles rarely get enough energy to escape, and this implies that the mass within $r_1$ should be close to what it would have been in the absence of self-interactions.
This also implies that the potential outside $r_1$ is unchanged from its CDM model prediction, but tends to a constant value faster inside $r_1$. Within this set of approximations, the dominant effect due to scatterings is to re-distribute kinetic energy in the core, while keeping the total kinetic energy within $r_1$ the same as it would have been in the absence of self-interactions. We have looked at the kinetic energy profiles in the best-resolved halos in our simulations and have confirmed that this is indeed a good approximation. Note that in this picture, there is a clear demarcation of time-scales such that the inner halo structure (say $r \lesssim r_{\mathrm{s}}$) is set (the same way as in the CDM model) well before self-interactions become important. For cross sections much larger than what we are interested in here, this need not hold. To set up the model, we start by recalling that self-interactions work to create an isothermal core (see Figure \ref{vrmsProfiles.fig}) that is isotropic (both spatially and in velocity space). Using the spherical Jeans equation, one can then see that for a system with these properties \begin{equation} v_{\rm rms,0}^2 = 3\sigma_{r}(0)^2=2\pi \xi^{-1} G \rho(0) r_0^2\,, \end{equation} where we have defined $r_0$ to be the expansion parameter such that $\rho(r)\sigma_r(r)^2 = \rho(0)\sigma_r(0)^2(1-\xi(r/r_0)^2)$ when $r\ll r_0$, and $\sigma_r$ is the radial velocity dispersion. The form of the Taylor expansion for $\rho(r)\sigma_r(r)^2$ is dictated by the Jeans equation for density profiles that tend to a constant value, as may be readily ascertained by taking the derivative of $\rho(r)\sigma_r(r)^2$. To fix $r_0$, we will choose it to be equivalent to the Burkert scale radius, where the density is one-fourth of the central density. The parameter $\xi$ encapsulates uncertainties from the profile and velocity dispersion anisotropy in the outer parts of the halo. We test various models and find that a range of 2-3 for $\xi$ is largely consistent with most parameterizations, and hence we fix $\xi=2.5$. If we specify the central velocity dispersion, then with an additional constraint on the core region ({\em i.e.}, $r_1$), we would be able to back out both the core radius and the core density. \begin{figure} \begin {center} \includegraphics[width=0.5\textwidth]{figures/Rmax-Vmax.eps} \end {center} \caption{$r_{\mathrm{max}}$ vs. $V_{\mathrm{max}}$ for our combined sample of well resolved halos from our SIDM$_1$ and CDM simulations. Open symbols correspond to halos whose density profiles showed signs of being perturbed; they were not included in the best fit of the relation. Small differences of about $10\%$ exist in both $V_{\mathrm{max}}$ and $r_{\mathrm{max}}$; however, the slope of the $V_{\mathrm{max}}$-$r_{\mathrm{max}}$ relation is unchanged from CDM to SIDM$_1$.} \label{rmaxVmax.fig} \end{figure} We then set $v_{\rm rms,0}^2$ equal to the average velocity dispersion squared ({\em i.e.}, two times the kinetic energy divided by the mass) within the region $r_1$ in the absence of self-interactions. This basically demands that the kinetic energy within $r_1$ is unchanged from the value it would have had in the absence of self-interactions. Note, however, that we are setting the average velocity dispersion squared equal to $v_{\rm rms,0}^2$ and not the corresponding average in the SIDM halo. This is an approximation, but one that is degenerate with choosing the $\xi$ parameter.
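As a numerical sanity check on this relation (with our $\xi=2.5$), evaluating it with the cluster-scale Burkert fit values implied by \S \ref{scaling.sec} at $V_{\mathrm{max}} \simeq 700 \, {\rm km}/{\rm s}$, namely $\rho_\mathrm{b} \simeq 0.005 \,\mathrm{M}_{\odot}/\mathrm{pc}^3$ and $r_\mathrm{b} \simeq 95 \, {\rm kpc}$, indeed returns a central dispersion close to $V_{\mathrm{max}}$:
\begin{verbatim}
import math

G = 4.301e-3  # Newton's constant in pc (km/s)^2 / Msun

def v_rms_central(rho0, r0, xi=2.5):
    # v_rms,0 = sqrt(2 pi G rho(0) r0^2 / xi);
    # rho0 in Msun/pc^3, r0 in pc, result in km/s
    return math.sqrt(2.0 * math.pi * G * rho0 * r0**2 / xi)

print(v_rms_central(0.005, 95e3))  # ~700 km/s, i.e. ~Vmax
\end{verbatim}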
To finish specifying this model, we need a density profile for the region inside $r_1$. A Burkert profile has a velocity dispersion profile (assuming isotropy) that asymptotes very slowly to the central dispersion. For small radii, the radial dispersion profile is slowly increasing (with radius) because of the $r/r_\mathrm{b}$ term in the Taylor expansion for the density profile. If we want a flatter central dispersion profile (as is observed for the SIDM$_1$ halos), we can fix this by either assuming an isothermal profile or something like $1/(1+1.52(r/r_0)^2)^{3/2}$. The final results turn out to be qualitatively similar for these profiles. Hence we adopt a Burkert profile for ease of comparison to the fits presented here and then check the results with more appropriate profiles later. Our two constraints (on the radial velocity dispersion and mass) fully specify the density and radial scales of the Burkert profile. \begin{figure} \begin {center} \includegraphics[width=0.5\textwidth]{figures/rb-Vmax.eps} \end {center} \caption{Burkert scale radius vs. $V_{\mathrm{max}}$ for our combined sample of well resolved halos from our SIDM$_1$-50 (blue circles), SIDM$_1$-25 (green stars), SIDM$_1$-Z12 (cyan square) and SIDM$_1$-Z11 (red triangle) simulations. Open symbols correspond to halos that are undergoing mergers. These perturbed halos were not included in the fit for the scaling relation. A single power law holds over the whole range of our sample, suggesting that this dependence continues towards smaller and larger $V_{\mathrm{max}}$ values.} \label{rbVmax.fig} \end{figure} In order to obtain scaling relations we need to estimate $r_1$, which demarcates the inner region where self-interactions are effective from the outer region that is mostly undisturbed by the self-interactions. In reality, this divide will not be sharp, but we will see that the main features of the scaling relations are well-captured by this simple model. We define $r_1$ to be the radius within which each particle has, on average, suffered one interaction. Since the region outside is assumed to be unperturbed by interactions, we may estimate $r_1$ as: \begin{equation} \Gamma(r_1) t_{\rm age}=1.3 \rho_{\rm CDM}(r_1) v_{\rm rms,CDM}(r_1) \frac{\sigma}{m} t_{\rm age}=1\,, \label{gamma1.eq} \end{equation} where we set the age ($t_{\rm age}$) to 10 Gyr for now, keeping in mind that larger halos have a shorter age and that major mergers can reset the timer. We will consider what happens when $t_{\rm age}$ is a function of halo mass shortly. The factor 1.3 is $\langle \left | \vec{v}-\vec{u}\right | \rangle/\sqrt{\langle v^2\rangle}$ for a Maxwellian distribution, where $\vec{u}$ and $\vec{v}$ are the velocities of the two interacting dark matter particles. We have not attempted to use a more realistic velocity distribution since the dependence of this factor on a possible high-velocity cut-off to the distribution function was found to be fairly mild. For the density profile in the absence of self-interactions, we assume an NFW profile, and to fix the velocity dispersion we use the observed fact that the phase space density is a power law in radius \citep{taylor2001}. By noting that $v_{\rm rms,CDM}(r) = (\rho_{\rm CDM}(r)/Q(r))^{1/3}$ and using a phase-space density profile $Q(r) = Q(r_\mathrm{s}) (r/r_\mathrm{s})^{-\eta}$ \citep{taylor2001,rasia2004,ascasibar2004,dehnen2005,ascasibar2008}, we may fully specify the dependence of $r_1$ on the cross-section and halo parameters (say $V_{\mathrm{max}}$ and $r_{\mathrm{max}}$).
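The determination of $r_1$ is straightforward to reproduce numerically. The sketch below solves Equation \ref{gamma1.eq} for an NFW halo on the median CDM $V_{\mathrm{max}}-r_{\mathrm{max}}$ relation of Equation \ref{RmaxVmax.eq}, anticipating the phase-space parameter values quoted in the next paragraph ($\eta=2$ and $Q(r_\mathrm{s})=0.3/(G V_{\mathrm{max}} r_{\mathrm{max}}^2)$); it is an illustration of the calculation described in the text, not our actual code:
\begin{verbatim}
from scipy.optimize import brentq

G = 4.301e-3    # pc (km/s)^2 / Msun
RATE = 0.2136   # 1 (Msun/pc^3)(km/s)(cm^2/g) expressed in Gyr^-1

def r1_over_rs(vmax, sigma_m=1.0, t_age=10.0):
    rmax = 26.21e3 * (vmax / 100.0)**1.45   # pc, CDM fit above
    rs = rmax / 2.163
    rho_s = 1.72 * vmax**2 / (G * rmax**2)  # Msun/pc^3
    Q_s = 0.3 / (G * vmax * rmax**2)        # Msun/pc^3 (km/s)^-3

    def gamma_t(x):  # x = r/rs; Q ~ x^-2, so v_rms = (rho x^2/Q_s)^(1/3)
        rho = rho_s / (x * (1.0 + x)**2)
        vrms = (rho * x**2 / Q_s)**(1.0 / 3.0)
        return 1.3 * rho * vrms * sigma_m * RATE * t_age

    return brentq(lambda x: gamma_t(x) - 1.0, 1e-3, 50.0)

print(r1_over_rs(700.0))  # ~0.75; stays near 0.7-0.8 for 100-1000 km/s
\end{verbatim}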
For the phase-space density profile we use a power-law index $\eta=2$ and $Q(r_\mathrm{s})=0.3/(G V_{\mathrm{max}} r_{\mathrm{max}}^2)$ derived from jointly fitting our relaxed CDM halos; these parameters are very similar to the fits provided in \citet{ascasibar2008}. Let us first look at how $r_1$ scales with $r_{\mathrm{s}}$ in the NFW density profile. One notes that $\rho_{\mathrm{s}} =1.72 V_{\mathrm{max}}^2/(G r_{\mathrm{max}}^2)$ and hence $\rho_{\mathrm{s}} V_{\mathrm{max}} \propto V_{\mathrm{max}}^3/r_{\mathrm{max}}^2$, which is a very mildly increasing function of $V_{\mathrm{max}}$, as our Equation \ref{RmaxVmax.eq} shows. Thus Equation \ref{gamma1.eq} implies that $r_1/r_{\mathrm{s}}$ should be roughly a constant. Numerically, we find that $r_1/r_{\mathrm{s}} \simeq 0.7-0.8$ over the range of $V_{\mathrm{max}}$ of interest for $\sigma/m =1 \, {{\rm cm}^2/{\rm g}}$. Having now specified $r_1$, we are ready to look at the scalings of $r_\mathrm{b}$ and $\rho_\mathrm{b}$. For our assumed value of $\xi$, $v_{\rm rms,0}^2\simeq 2.5 G \rho_\mathrm{b} r_\mathrm{b}^2$. Thus we are looking for the value of $r_\mathrm{b}/r_\mathrm{s}$ that solves \begin{equation} \left\langle \frac{\rho_\mathrm{s}}{(r/r_\mathrm{s})(1+r/r_\mathrm{s})^2} \frac{(r/r_\mathrm{s})^{\eta}}{Q(r_\mathrm{s})} \right\rangle(r_1)= (v_{\rm rms,0})^3\,, \label{constraint.eq} \end{equation} with the constraint that $M_\mathrm{b}(r_1)=M_\mathrm{NFW}(r_1)$, where $M_\mathrm{NFW}(r)$ and $M_\mathrm{b}(r)$ are the masses enclosed within radius $r$ for the NFW and Burkert profiles, respectively. We note that if $r_\mathrm{b}/r_{\mathrm{s}}$ is not a strong function of $V_{\mathrm{max}}$, and since we know $r_1/r_{\mathrm{s}}$ is a mild function of $V_{\mathrm{max}}$, then the mass constraint essentially sets $\rho_\mathrm{b}r_\mathrm{b}^3/(\rho_{\mathrm{s}} r_{\mathrm{s}}^3)$ to be a constant, or $\rho_\mathrm{b}r_\mathrm{b}^3 \propto \rho_{\mathrm{s}} r_{\mathrm{s}}^3$. This implies $v_{\rm rms,0}^2 r_\mathrm{b} \propto V_{\mathrm{max}}^2 r_{\mathrm{max}}$. Now Equation \ref{constraint.eq} sets $v_{\rm rms,0} \simeq V_{\mathrm{max}}$ because $r_1/r_{\mathrm{s}}$ is a mild function of $V_{\mathrm{max}}$, and it therefore follows that $r_\mathrm{b} \propto r_{\mathrm{s}}$ is a consistent solution to the above equations. As a check we note that assuming $r_1/r_{\mathrm{s}}=0.7-0.8$ gives $v_{\rm rms,0} \simeq 1.1 V_{\mathrm{max}}$, in reasonable agreement with our SIDM$_1$ simulation results (see Figure \ref{vrmsProfiles.fig}). This simple model thus predicts that $r_\mathrm{b}/r_\mathrm{s}$ should not vary much with $V_{\mathrm{max}}$, in agreement with the observed scaling relations from the SIDM$_1$ simulation. In detail, the model predicts $r_\mathrm{b}/r_{\mathrm{s}}=0.5-0.6$ from dwarf to cluster halos, in good agreement with the fits to our SIDM$_1$ halos, though about 25\% smaller at $V_{\mathrm{max}} \sim 100\, \rm km/s$. It departs from the results of the simulation in predicting that $r_\mathrm{b}/r_{\mathrm{s}}$ increases gently with $V_{\mathrm{max}}$, whereas Figure \ref{rbRs.fig} shows that this ratio should decrease gently with $V_{\mathrm{max}}$. We find that this departure from the simulations is likely related to the assumption of a constant age for all halos. To generalize our model, we use the results of \citet{wechsler2002}, who show that the virial concentrations of halos are correlated with their formation times, and in particular $c_{\rm vir}=4.1 (1+z_{\rm form})$ for a particular definition of formation time.
We invert this equation to derive an estimate of the halo age using $z_{\rm form}$. With the age thus specified in Equation~\ref{gamma1.eq}, we find that now $r_\mathrm{b}/r_{\mathrm{s}}$ decreases gently with $V_{\mathrm{max}}$, in substantial agreement with the fit to our simulations. Thus the reason that larger halos have a smaller $r_\mathrm{b}/r_{\mathrm{s}}$ is that self-interactions have had less time to operate. We note that the values for the core radius in the analytic model with halo-mass-dependent $t_{\rm age}$ are uniformly about 25\% smaller, but this should not be a cause for concern given the approximation in demanding a sharp transition at $r_1$. \begin{figure} \begin {center} \includegraphics[width=0.5\textwidth]{figures/rb-rs.eps} \end {center} \caption{Burkert scale radius in SIDM$_1$ halos vs. the NFW scale radius in their CDM counterparts. Points and labels are the same as in Figure \ref{rbVmax.fig}. There is a one-to-one correlation, indicating that the core size of SIDM$_1$ halos scales with $V_{\mathrm{max}}$ in the same way as the scale radius of CDM halos.} \label{rbRs.fig} \end{figure} Given the Burkert core radius $r_\mathrm{b}$ and the central velocity dispersion $v_{\rm rms,0}$, one can easily check that the central density $\rho_\mathrm{b}$ is about $0.01 \,\mathrm{M}_{\odot}/\mathrm{pc}^3$ for $V_{\mathrm{max}}=300 \, {\rm km}/{\rm s}$ halos and $0.005 \,\mathrm{M}_{\odot}/\mathrm{pc}^3$ for $V_{\mathrm{max}}=1000\, {\rm km}/{\rm s}$ in this analytic model. These numbers and the scaling with $V_{\mathrm{max}}$ for $\rho_\mathrm{b}$ (when including the halo-mass-dependent $t_\mathrm{age}$) are in good agreement with the densities in Figure \ref{rhobVmax.fig} and the fit in Equation \ref{rhob-vmax.eq}. As we have indicated before, the scaling relation for the central density should be interpreted with care given the large scatter. Given the tight correlation between the core radius and $r_{\mathrm{s}}$, it is possible that the substantial scatter in the central density arises in large part from the scatter introduced by the assembly history in the concentration-mass relation. This has important implications for fitting to the rotation velocity profiles of low-surface brightness spirals \citep{kuzioetal10} and deserves more work. The simple model constructed above also provides insight into the core collapse time scales. In particular, as long as the outer part (the region outside $r_1$) dominates the potential well and sets the average central temperature (or the total kinetic energy in the core), we do not expect core collapse. This is simply because core collapse requires an uncontrolled decrease in temperature, which is prohibited here. Once $r_1$ moves out well beyond $r_{\mathrm{max}}$ or to the virial radius, there is significant loss of particles and core collapse may occur if there are no further major mergers. The time scale for this process is much longer than the age of the universe for $\sigma/m=1\, {{\rm cm}^2/{\rm g}}$ because the inner core is at $r_1 < r_{\mathrm{s}}$ after 10 Gyr for this self-interaction strength and we see no evidence for significant mass loss. \begin{figure} \begin {center} \includegraphics[width=0.5\textwidth]{figures/rhob-Vmax.eps} \end {center} \caption{Burkert scale density vs. $V_{\mathrm{max}}$. Points and labels are the same as in Figure \ref{rbVmax.fig}.
The trend in the $\rho_\mathrm{b}-V_{\mathrm{max}}$ relation is not as clear as for the $r_\mathrm{b}-V_{\mathrm{max}}$ relation, with a scatter of up to a factor of 3.} \label{rhobVmax.fig} \end{figure} \section{Observational Comparisons} \label{discussion.sec} The goal of this section is to discuss our results in comparison to observationally inferred properties of dark-matter density profiles. In particular, we will focus on the core densities and core sizes. \S \ref{discussion_predict.sec} presents our expectations for SIDM$_1$ and SIDM$_{0.1}$. Our predictions for $\sigma/m = 1\hbox{ cm}^2/\hbox{g}$ are anchored robustly to our simulations, though they do require some extrapolation beyond the mass range directly probed by our simulations ($V_{\mathrm{max}} = 130 - 860 \ \, {\rm km}/{\rm s}$). For $\sigma/m = 0.1 \hbox{ cm}^2/\hbox{g}$ the predictions are much less secure because the associated core sizes are of order our resolution limit; thus we rely on our analytic model more directly here. In \S \ref{discussion_obs.sec}, we discuss our predictions in light of observations of dark-matter halos for a wide range of halo masses. In \S \ref{discussion_subhalo.sec}, we discuss our results on subhalos in the context of past work and constraints on SIDM based on subhalo properties. Before proceeding with this discussion we would like to clarify how we quantify core sizes. In this work, we have fit the $\sigma/m=1 \, {{\rm cm}^2/{\rm g}}$ halos with Burkert density profiles. However, many observational constraints on cores on galaxy scales come from fitting pseudo-isothermal density profiles with core size $r_c$ to data \citep[e.g.,][]{simonetal05,kuzioetal08}, although some constraints do come from Burkert modeling \citep{saluccietal12}. We found that pseudo-isothermal density profiles also give good fits to the inner regions of the SIDM$_1$ halos, but Burkert fits are better because of the Burkert profile's $\rho \propto r^{-3}$ dependence at large radii. For a pseudo-isothermal density profile ($\propto 1/(r_c^2+r^2)$), the density decreases to one-fourth of the central density at 1.73 times its core radius $r_c$. Thus, as a crude approximation, one may convert the Burkert radius to the equivalent pseudo-isothermal core radius by multiplying by a factor of 0.58 ($r_c \simeq r_b/1.73$). \subsection{Predicted Core Sizes and Central Densities in SIDM}\label{discussion_predict.sec} \subsubsection{SIDM with $\sigma/m = 1 \ \, {{\rm cm}^2/{\rm g}}$} The central properties of dark-matter halos have been inferred from observations from tiny Milky Way dwarf spheroidal (dSph) galaxies ($V_{\mathrm{max}} \lesssim 50\hbox{ km}/\hbox{s}$) to galaxy clusters ($V_{\mathrm{max}} \gtrsim 1000\hbox{ km}/\hbox{s}$). If we extrapolate the results from our set of SIDM$_1$ simulations using Eqs.
(\ref{rb-vmax.eq})-(\ref{rhob-mvir.eq}) we predict that SIDM halos with $\sigma/m = 1 \, {{\rm cm}^2/{\rm g}}$ would have the following (Burkert) core sizes and central densities: \bigskip \noindent For galaxy clusters $(V_{\mathrm{max}} \simeq 700-1000\hbox{ km}/\hbox{s})$: \begin{equation} r_\mathrm{b} \simeq (95-155) \, {\rm kpc} \: ; \: \rho_\mathrm{b} \simeq (0.005-0.004) \,\mathrm{M}_{\odot} \mathrm{pc}^{-3} \nonumber \end{equation} \noindent For low-mass spirals $(V_{\mathrm{max}} \simeq 50-130 \, {\rm km}/{\rm s})$: \begin{equation} r_\mathrm{b} \simeq (3-10) \, {\rm kpc} \, ; \, \rho_\mathrm{b} \simeq (0.02-0.01) \,\mathrm{M}_{\odot} \mathrm{pc}^{-3} \nonumber \end{equation} \noindent For dwarf spheroidal galaxies $(V_{\mathrm{max}} \simeq 20-50 \, {\rm km}/{\rm s})$: \begin{equation} r_\mathrm{b} \simeq (0.9-3) \, {\rm kpc} \, ; \, \rho_\mathrm{b} \simeq \: (0.04-0.02) \,\mathrm{M}_{\odot} \mathrm{pc}^{-3} \nonumber \end{equation} \bigskip \noindent Although we cannot completely determine the scatter in our scaling relations due to low number statistics, it is important to note from Figs. \ref{rbVmax.fig} and \ref{rhobVmax.fig} that a scatter of at least a factor of 2 in core sizes, and at least a factor of 3 in central densities, is expected for a given $V_{\mathrm{max}}$. We suspect that these differences are in large part a result of the diversity of merger histories of dark-matter halos. Note that the $V_{\mathrm{max}}$-$r_{\mathrm{max}}$ and $Q(r_{\mathrm{s}})$ scalings assumed in the analytic model are the median values. The strong dependence of the SIDM halo profiles on these quantities makes it clear that the scatter in these relations will introduce significant scatter in the halo core sizes and core densities. Thus the analytic model should also provide a simple way to understand (some of the) scatter seen for SIDM$_1$ halo properties. In future work we will characterize the relation between the core properties and merger history in the context of a detailed discussion of the scatter in the scaling relations, especially on scales that we do not resolve with our current simulations. \subsubsection{SIDM with $\sigma/m = 0.1 \ \, {{\rm cm}^2/{\rm g}}$} As discussed in \S \ref{halos.sec}, our SIDM$_{0.1}$ simulations are not well enough resolved to definitively measure a core radius for any of our halos, much less define the scatter in that quantity. Nevertheless, our best resolved systems do demonstrate some clear deviations from CDM and allow us to cautiously estimate individual core densities. Referring back to Figure 4, we see that in our two best resolved cluster halos (at $M_{\mathrm{vir}} \simeq 10^{14} \,\mathrm{M}_{\odot}$) the SIDM$_{0.1}$ core densities approach $\sim 0.01 \, \,\mathrm{M}_{\odot}/\mathrm{pc}^3$ -- each at least a factor of $\sim 3$ denser than their SIDM$_1$ counterparts. Similarly, in our Z12 Milky-Way case, the SIDM$_{0.1}$ core density appears to be approaching $\sim 0.1 \, \,\mathrm{M}_{\odot}/\mathrm{pc}^3$ compared to $\sim 0.02 \, \,\mathrm{M}_{\odot}/\mathrm{pc}^3$ in the SIDM$_1$ case. Given the lack of well-resolved halo profiles, it is worth appealing to the analytic model presented in \S \ref{analytic.sec} to estimate core radii for SIDM$_{0.1}$. Using exactly the same arguments (including the halo-mass-dependent age), we find that $r_1/r_{\mathrm{s}} \simeq 0.05-0.12$ in the 100-1000 km/s $V_{\mathrm{max}}$ range and a corresponding Burkert core radius $r_b/r_{\mathrm{s}} \simeq 0.09-0.17$.
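For orientation, the SIDM$_1$ values quoted at the start of this section follow from a trivial evaluation of Equations \ref{rb-vmax.eq} and \ref{rhob-vmax.eq}; the sketch below also applies the crude pseudo-isothermal conversion $r_c \simeq r_\mathrm{b}/1.73$ discussed earlier (the descriptive labels are illustrative):
\begin{verbatim}
def burkert_core_fits(vmax):
    # Best-fit SIDM_1 scalings: vmax in km/s,
    # r_b in kpc, rho_b in Msun/pc^3
    r_b = 7.50 * (vmax / 100.0)**1.31
    rho_b = 0.015 * (vmax / 100.0)**(-0.55)
    return r_b, rho_b

for label, vmax in [("dSph", 20), ("dSph", 50), ("spiral", 130),
                    ("cluster", 700), ("cluster", 1000)]:
    r_b, rho_b = burkert_core_fits(vmax)
    print(label, vmax, round(r_b, 1), round(rho_b, 3),
          round(r_b / 1.73, 1))  # last column: pseudo-isothermal r_c
\end{verbatim}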
We note that this SIDM$_{0.1}$ Burkert radius is close to, but slightly larger than, $r_1$. It is important to keep in mind that in this analytic model we are only explicitly fitting the inner ``self-interaction zone'' of $r<r_1$. This does not imply that the entire halo has to be well-fit by the Burkert profile. Recall that a single-scale Burkert profile only works as well as it does for $\sigma/m = 1\hbox{ cm}^2/\hbox{g}$ because $r_b \approx r_s$, such that to a good approximation there is only one relevant length scale. For the smaller cross section that we are now considering we expect the core and NFW scale radii to be widely separated, suggesting that a generic functional form for SIDM halos should have two scale radii. A wide separation between the SIDM$_{0.1}$ core and $r_s$ does appear to be consistent with the highest resolved halos presented in Figure 4. However, we note that given the strong correlation between $r_\mathrm{b}$ and $r_{\mathrm{s}}$, we still expect a one-parameter family of models for a given $\sigma/m$. To see how dependent our results are on the shape of the {\em inner} halo profile, we modify the analytic model to include a density profile that decreases with radius as $1/(1+(r/r_c)^2)^{\alpha/2}$. For this density profile, the velocity dispersion profile has the right form to match our simulation results. The price we pay is the introduction of a new parameter $\alpha$. We set this parameter $\alpha$ by additionally demanding that the slope of the mass profile (i.e., the density) is continuous at $r_1$, so that the mass profile joins smoothly with the NFW mass profile. This picks out a narrow range $\alpha=5.5-7.0$ as the solution over most of the $V_{\mathrm{max}}$ range of interest (with smaller values corresponding to lower $V_{\mathrm{max}}$). Interestingly, this implies that at $r_1$, the slope of the density profile is very close to $-2$ for the entire range of $V_{\mathrm{max}}$ values of interest. Note that while the mass profile is continuous, the slope of the density profile is not matched smoothly at $r_1$ (since the slope of the NFW profile would be closer to $-1$ at $r_1 \ll r_{\mathrm{s}}$). This probably signals that if the matching were not done sharply (at $r_1$), the density profile of SIDM would overshoot that of CDM and catch up at some radius beyond $r_1$ (as is seen in the comparison of the SIDM$_1$ and CDM density profiles). As a check we apply this $\alpha$-model to the $\sigma/m =1 \ \, {{\rm cm}^2/{\rm g}}$ case and find that the results are qualitatively the same as for the model with the Burkert profile. The quantitative differences are at the 20\% level, with the densities being smaller and the inferred Burkert core radii (where the density is 1/4 of the central density) larger compared to the Burkert profile model. The predicted slope of the density profile at $r_1$ is close to $-2.5$, implying a smoother transition to the NFW profile (since $r_1 \sim r_{\mathrm{s}}$ for $\sigma/m = 1\ \, {{\rm cm}^2/{\rm g}}$), as is seen in Figure \ref{densProfiles.fig}. For the $\sigma/m =0.1 \ \, {{\rm cm}^2/{\rm g}}$ case, we obtain $r_c/r_{\mathrm{s}}=0.08-0.17$ and an equivalent Burkert core radius (where the density is one-fourth of the central density) $r_\mathrm{b}/r_s=0.06-0.14$, in substantial agreement with the results we obtained using the Burkert profile. Thus our analysis suggests core sizes of $\sim 0.1\,r_{\mathrm{s}}$ for $\sigma/m=0.1\ \, {{\rm cm}^2/{\rm g}}$.
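As a quick consistency check on these numbers, the radius at which the $\alpha$-profile falls to one-fourth of its central density has the closed form $r/r_c = \sqrt{4^{2/\alpha}-1}$; over the $\alpha$ range quoted above this gives conversion factors of roughly $0.7-0.8$, consistent with the $r_\mathrm{b}/r_s$ and $r_c/r_{\mathrm{s}}$ ranges just given:
\begin{verbatim}
import math

def quarter_density_radius(alpha):
    # r/r_c where rho = rho(0)/4 for rho ~ [1 + (r/r_c)^2]^(-alpha/2)
    return math.sqrt(4.0**(2.0 / alpha) - 1.0)

for alpha in (5.5, 7.0):
    print(alpha, quarter_density_radius(alpha))  # ~0.81 and ~0.70
\end{verbatim}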
The results from the analytic model for $\sigma/m=0.1\ \, {{\rm cm}^2/{\rm g}}$ also seem consistent with our simulation results; see Figure 6, where the $v_{\rm rms}$ profiles for SIDM$_{0.1}$ start to deviate from CDM at $\sim 0.2 r_s$. Based on the discussion above we conclude that for $\sigma/m = 0.1 \ \, {{\rm cm}^2/{\rm g}}$ we expect: \bigskip \noindent For galaxy clusters $(V_{\mathrm{max}} \simeq 700-1000 \hbox{ km}/\hbox{s})$: \begin{equation} r_\mathrm{b} \sim (16-20) \, {\rm kpc} \: ; \: \rho_\mathrm{b} \sim 0.04 \,\mathrm{M}_{\odot} \mathrm{pc}^{-3} \nonumber \end{equation} \noindent For low-mass spirals $(V_{\mathrm{max}} \simeq 50-130 \, {\rm km}/{\rm s})$: \begin{equation} r_\mathrm{b} \sim (0.6 - 2.5) \, {\rm kpc} \, ; \, \rho_\mathrm{b} \sim 0.2-0.1 \,\mathrm{M}_{\odot} \mathrm{pc}^{-3} \nonumber \end{equation} \noindent For dwarf spheroidal galaxies $(V_{\mathrm{max}} \simeq 20-50 \, {\rm km}/{\rm s})$: \begin{equation} r_\mathrm{b} \sim (0.2-0.6) \, {\rm kpc} \, ; \, \rho_\mathrm{b} \sim 0.5-0.2 \,\mathrm{M}_{\odot} \mathrm{pc}^{-3} \nonumber \end{equation} \bigskip \noindent These values do not include the scatter from mass assembly history. It is probably reasonable to assume a factor of 2 scatter for both core radii and core densities based on what we see in SIDM$_1$. It is also possible that the core densities are $\sim 50\%$ smaller than what we would see in simulations, given that the SIDM$_1$ simulations have core densities that are somewhat larger than the predictions from the analytic model. For the dwarf spheroidal galaxies, the values should be interpreted with caution, as they are predictions for field halos in the $V_{\mathrm{max}}$ range $20-50 \, {\rm km}/{\rm s}$. While these values are somewhat tentative compared to those presented above for SIDM$_{1}$ (given our lack of direct simulation fits), two factors are reassuring. First, the analytic model is based on the simple assumption that scattering redistributes kinetic energy within the inner halo, and the non-trivial aspect of the model is defining this ``inner halo'' region. There is no reason to suspect that this assumption or the prescription breaks down for SIDM$_{0.1}$ halos when it works so well in describing the SIDM$_1$ halos. Second, the predicted densities are in line with those inferred for the best resolved halos in our SIDM$_{0.1}$ simulations (shown in Figure 4 and discussed above). For the core radii, we reiterate that the label ``$r_{\rm b}$'' should be interpreted (according to its definition in the analytic model) as the radius where the density reaches one-fourth of the asymptotic core density. The overall profile of a halo with such a small core compared to $r_{\mathrm{s}}$ will not be fit by the Burkert form. Note that the strong correlations we predict between the core radius and the NFW scale radius raise the intriguing possibility that SIDM halos may also be well fit (modulo scatter) by a single parameter profile, as is the case for CDM. Next, we compare our predictions for SIDM core properties against observations and show that the predicted core radii and densities appear to be consistent with those seen in real data, motivating future simulations with high enough resolution to resolve cores in SIDM$_{0.1}$ halos. \subsection{Observed Core Sizes and Central Densities vs. SIDM}\label{discussion_obs.sec} In this section, we explore the predictions for the properties of density profiles with SIDM in the context of observational constraints on density profiles.
We also revisit previous constraints on SIDM from observations in light of our simulation suite. \subsubsection{Clusters} One of the tightest SIDM constraints from the first generation of SIDM studies emerged from one cluster simulation and one observed galaxy cluster. Specifically, \citet{yoshida00} simulated an individual galaxy cluster with different SIDM cross sections. When comparing the core size of this simulated cluster to the core size estimated by \citet{tysonetal98} for CL 0024+1654, they found that the observed core in CL 0024+1654 would be consistent with SIDM only if $\sigma/m \lesssim 0.1\hbox{ cm}^2/\hbox{g}$. Since that time, evidence has emerged that this particular cluster is undergoing a merger along the line of sight \citep{czoske2001,czoske2002,zhang2005,jee2007,jee2010,umetsu2010}. Thus, this cluster is not the ideal candidate for SIDM constraints based on the properties of relaxed halos, and the \citet{yoshida00} constraint is not valid in this context. Using X-ray emission, weak lensing, strong lensing, stellar kinematics of the brightest cluster galaxy (BCG), or some combination thereof, the mass distributions within a number of galaxy clusters have been mapped in the past decade. \citet{arabadjisetal02} placed a conservative upper limit of $75 \, {\rm kpc}$ on the size of any constant-density core, and an average density within the inner $50 \, {\rm kpc}$ of $\sim 0.025 \,\mathrm{M}_{\odot}/\mathrm{pc}^3$, for a halo with an estimated mass $M\sim 4 \times 10^{14} \,\mathrm{M}_{\odot}$. \citet{sandetal04}, \citet{sand2008}, \citet{newmanetal09}, and \citet{newmanetal11} all find central density profiles in clusters shallower than the NFW CDM prediction. The difference between the work of these authors and others is that they use stellar kinematics of the BCG to constrain the density profile of the cluster dark-matter halo on small scales. While this probe of the density profile is more sensitive on small scales than strong lensing is, proper inference of the dark halo properties depends on accurate modeling of the BCG density profile and equilibrium structure. They have typically assumed a ``gNFW'' profile in order to constrain the central densities: $\rho(r) \propto 1/(x^g (1+x)^{3-g})$ with $r=xr_{\mathrm{s}}$, the NFW form being obtained when $g=1$. The \citet{newmanetal09} and \citet{newmanetal11} mass models of $M\sim10^{15} \,\mathrm{M}_{\odot}$ clusters show average dark matter central densities within $10$ kpc of $\sim 0.03-0.06 \,\mathrm{M}_{\odot}/\mathrm{pc}^3$ and $r_s$ of order 100 kpc. Note that 10 kpc is typically the smallest radius our simulations can resolve. \citet{saha2006} and \citet{saha2009} studied the mass structure of three cluster halos from gravitational lensing and obtained density profiles that are consistent with $\rho \propto r^{-1}$ outside the inner $10-20 \, {\rm kpc}$ regions. Similarly, \citet{morandi2010} and \citet{morandi2012} find that the radial mass distributions of cluster dark-matter halos are consistent with NFW predictions outside 30 kpc in projection. The CLASH multi-cycle treasury program on the \emph{Hubble Space Telescope} is finding many new strongly lensed galaxies in a set of about 25 massive clusters \citep{postman2012}.
Initial results from this program show that the total density profiles of these clusters (or the total density minus the brightest cluster galaxy), if modeled as spherically symmetric, are consistent with NFW predictions for the halo alone if the gNFW functional form is used in the fit \citep{zitrin2011,coe2012,umetsu2012}. However, \citet{morandi2010} and \citet{morandi2011a} find that spherical mass modeling of galaxy clusters typically results in an overestimate of the cuspiness of the density profile, although axially-symmetric modeling is found to lead to underestimates \citep{meneghetti2007}. Thus, the present status of the density profiles of the CLASH clusters is unclear, but this is clearly an interesting data set to look forward to. We note here a complexity involved in using the lensing results to constrain SIDM models. Lensing provides the mass in cylinders along the line of sight, and this 2D mass profile is sensitive to mass from a large range of radii. As an example, let us consider the mass within 30 kpc in projection. If we were to do something extreme and create a zero density core inside a $30 \, {\rm kpc}$ sphere, the differences in the 2D mass profile would be less than a factor of 2 for clusters in the $10^{14-15} \,\mathrm{M}_{\odot}$ mass range. For SIDM$_{0.1}$, the differences are comparatively benign. Our analytic model predicts that differences relative to CDM at about $0.1 r_{\mathrm{s}}$ (which is $10-40 \, {\rm kpc}$ for the $10^{14-15} \,\mathrm{M}_{\odot}$ virial mass range) are 20-30\%, which implies that SIDM$_{0.1}$ surface mass density profiles are very similar to CDM on these scales. But for SIDM$_1$ the expected differences would be measurably large. On a related technical note, we discourage the use of the gNFW functional form when thinking about models that deviate from the CDM paradigm. In the SIDM case, for $\sigma/m < 1\ \, {{\rm cm}^2/{\rm g}}$, there will generically be two scale radii: one is the NFW-like scale radius which is the result of hierarchical structure formation \citep{lithwick2011}, and the second is the core radius from dark-matter self-scattering. For $\sigma/m = 1 \ \, {{\rm cm}^2/{\rm g}}$, as we explained in detail in \S \ref{analytic.sec}, the two scales are about the same. If most of the cluster data constrain the density profile beyond a SIDM core, as they may for weak lensing and X-ray studies, the gNFW or NFW fit is dominated by those data, and a core will not be ``detected'' in the fit. In future work, we will simulate halos with a broader range of $\sigma/m$ and provide SIDM-inspired density profiles to the community. The results discussed above seem to suggest that the density profile beyond about 25 kpc should be close to the predictions from the NFW profile. To test this we plot the average physical density within 25 kpc for well-resolved halos in our CDM (black), SIDM$_{0.1}$ (green), and SIDM$_1$ (blue) simulations in Figure \ref{rho25Vmax.fig}. We see that for the most massive halos, the $\sigma/m = 1 \ \, {{\rm cm}^2/{\rm g}}$ run produces densities at $25$ kpc that are $\sim 2-3$ times lower than their CDM counterparts. Thus it seems that the measured densities in clusters rule out the $\sigma/m=1\ \, {{\rm cm}^2/{\rm g}}$ SIDM model. At the same time the $\sigma/m = 0.1 \ \, {{\rm cm}^2/{\rm g}}$ simulations are quite similar to CDM at these radii, though beginning to show some differences as we discussed earlier in this section.
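The magnitude of this suppression is easy to estimate with the profiles used in this paper. The sketch below compares the mean density within 25 kpc for an NFW halo and for a Burkert halo assembled from our fits (an illustrative $V_{\mathrm{max}} \simeq 700 \, {\rm km}/{\rm s}$ cluster with $r_\mathrm{b} \simeq 95 \, {\rm kpc}$, $\rho_\mathrm{b} \simeq 0.005 \,\mathrm{M}_{\odot}/\mathrm{pc}^3$, $r_{\mathrm{s}} = r_\mathrm{b}/0.71$ and $\rho_{\mathrm{s}} = 1.72 V_{\mathrm{max}}^2/(G r_{\mathrm{max}}^2)$); it returns a suppression of a factor of $\sim 3-4$, in the same ballpark as the $\sim 2-3$ seen in the simulations:
\begin{verbatim}
import math

G = 4.301e-3  # pc (km/s)^2 / Msun

def mean_density_nfw(r, rho_s, rs):
    x = r / rs                 # enclosed mass: 4 pi rho_s rs^3 m(x)
    m = math.log(1.0 + x) - x / (1.0 + x)
    return 3.0 * rho_s * m / x**3

def mean_density_burkert(r, rho_b, rb):
    x = r / rb                 # enclosed mass: 2 pi rho_b rb^3 f(x)
    f = math.log(1.0 + x) + 0.5 * math.log(1.0 + x * x) - math.atan(x)
    return 1.5 * rho_b * f / x**3

rb, rho_b = 95e3, 0.005        # pc, Msun/pc^3
rs = rb / 0.71
rho_s = 1.72 * 700.0**2 / (G * (2.163 * rs)**2)
print(mean_density_nfw(25e3, rho_s, rs) /
      mean_density_burkert(25e3, rho_b, rb))  # ~3-4
\end{verbatim}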
Analyses that combine information from X-rays, lensing, and BCG stellar kinematics seem to suggest lowered densities (e.g., \citet{newmanetal11}) that would be compatible with SIDM$_{0.1}$. Given this outlook, it is reasonable to conclude that estimates of the central dark matter density in clusters will provide essential tests of interesting SIDM models. \begin{figure} \begin {center} \includegraphics[width=0.5\textwidth]{figures/rho_25kpc-Vmax.eps} \end {center} \caption{ Dark matter average density within 25 kpc vs. $V_{\mathrm{max}}$ for resolved halos in our CDM, SIDM$_{0.1}$, and SIDM$_1$ simulations. It is clear that SIDM$_1$ halos have significantly lower densities than CDM halos at group and cluster scales. For the SIDM$_{0.1}$ model, the differences are muted and only appear on cluster scales. Thus observations of central densities in clusters likely provide the most promising avenue to look for signatures of SIDM with cross sections in the vicinity of $0.1 \ \, {{\rm cm}^2/{\rm g}}$.} \label{rho25Vmax.fig} \end{figure} \subsubsection{Low-Mass Spirals} For low-mass spirals with maximum circular velocities in the range $50-130 \, {\rm km}/{\rm s}$, constant-density cores with sizes of $\sim 0.5-8 \, {\rm kpc}$ and central densities of $\sim 0.01-0.5 \,\mathrm{M}_{\odot}/\mathrm{pc}^3$ have been observed \citep{debloketal01, simonetal05, sanchezsalcedo05, kuzioetal08, kuzioetal10, ohetal11a, saluccietal12}. Similar to what we found on cluster scales, SIDM with $\sigma/m = 1 \ \, {{\rm cm}^2/{\rm g}}$ would be able to reproduce the {\em largest} core sizes observed in low-mass galaxies but it predicts central densities that are too low. SIDM with $\sigma/m = 0.1 \ \, {{\rm cm}^2/{\rm g}}$ would be much more consistent. Moreover, the predicted log-slope of the density profile at 500 parsecs for $\sigma/m = 0.1 \ \, {{\rm cm}^2/{\rm g}}$ halos in the 50-130 $\, {\rm km}/{\rm s}$ range is $-0.5$ to 0; both facts are consistent with results from THINGS \citep{ohetal11a}. Note that the slope at 500 pc for the $\sigma/m =1 \ \, {{\rm cm}^2/{\rm g}}$ model is 0 in the same $V_{\mathrm{max}}$ range, which is not consistent with the scatter seen in the data. We conclude, as before, that the observed densities and core radii are not consistent with SIDM$_{1}$ but are fairly well reproduced in SIDM models with $\sigma/m \simeq 0.1 \ \, {{\rm cm}^2/{\rm g}}$. \subsubsection{Dwarf Spheroidals in the Milky Way Halo} The least massive and most dark-matter-dominated galaxies provide an excellent setting to confront the predictions of different dark matter models with observations. Recent work by \citet{mbketal11a, mbketal11b} has found that the estimated central densities of the bright Milky Way dwarf spheroidal satellites are lower than the densities of the massive subhalos in dark-matter-only simulations. SIDM offers a way to solve this problem because it reduces the central density of halos. Thus in SIDM, the massive subhalos \emph{do} host the luminous dSphs but have shallower density profiles than predicted in CDM simulations. This has recently been demonstrated by \citet{vogelsberger12}. We do not directly compare to \citet{vogelsberger12} because their work is focused on the subhalos of the Milky Way and the velocity-independent cross section that they simulate ($\sigma/m = 10 \, {{\rm cm}^2/{\rm g}}$) is larger than the cross sections considered in our work.
Regardless of whether Milky Way dSphs have cuspy or cored dark-matter halos, we may estimate the enclosed mass, and hence the average density, around the half-light radius of the stellar distribution. Mass estimates within $300 \, \mathrm{pc}$, and mass profile modeling using stellar kinematics of chemo-dynamically distinct stellar subcomponents of Milky Way dwarf spheroidal galaxies, suggest central densities of $\sim 0.1 \,\mathrm{M}_{\odot}/\mathrm{pc}^3$ \citep{strigarietal08, wolfetal10, walkerandpenarrubia11, amoriscoandevans12, wolfandbullock12}. For the faintest dSph, Segue 1, the density within the half-light radius (about 40 pc) is measured to be about $2.5^{+4.1}_{-1.9}\,\mathrm{M}_{\odot}/\mathrm{pc}^3$ \citep{Simon2011,Martinez2011}. The errors on the Segue 1 density are large, but it is clear that if SIDM is to accommodate this result, it must allow for large scatter in the core sizes and densities for small $V_{\mathrm{max}}$ halos. With a factor of 2-3 scatter in the densities quoted earlier for SIDM$_{0.1}$ halos, Segue 1 would appear to be compatible with SIDM$_{0.1}$ if its $V_{\mathrm{max}}$ value is towards the lower end of the $20-50 \, {\rm km}/{\rm s}$ range. For the two dSph galaxies that appear to have cored density profiles (Fornax and Sculptor), the core sizes must be of order $\sim 0.2-1 \, {\rm kpc}$ \citep{walkerandpenarrubia11}. For small halos with circular velocities in the $20-50 \, {\rm km}/{\rm s}$ range, which is close to the expected peak circular velocities of dwarf spheroidal halos before infall into the Milky Way host halo, SIDM with $\sigma/m = 1 \ \, {{\rm cm}^2/{\rm g}}$ predicts core sizes of order $\sim0.8-3.0 \, {\rm kpc}$, with central densities of about $\sim0.02-0.04 \,\mathrm{M}_{\odot}/\mathrm{pc}^3$. Therefore, we find again that $\sigma/m = 1 \ \, {{\rm cm}^2/{\rm g}}$ cannot reproduce the observed high central densities. On the other hand, our estimates suggest that an SIDM model with $\sigma/m = 0.1\ \, {{\rm cm}^2/{\rm g}}$ would produce central densities and core sizes consistent with the Milky Way dSphs.\\ \noindent In this last section we have used the analytic results that explain the scaling relations for the core sizes and central densities of halos in our SIDM$_1$ and SIDM$_{0.1}$ simulations to extrapolate our results to scales ranging from galaxy clusters to dwarf spheroidal galaxies and to lower cross sections. We have found that $\sigma/m = 1 \ \, {{\rm cm}^2/{\rm g}}$ would be unable to reproduce the observed high central densities. Remarkably, we find that the observations should be consistent with the predictions of a self-interacting dark matter model with a cross section in the ballpark of $\sigma/m = 0.1 \ \, {{\rm cm}^2/{\rm g}}$. These expectations are based on the scaling relations seen in the SIDM$_1$ simulations and our analytic model, which is consistent with the results from our direct $\sigma/m = 0.1 \ \, {{\rm cm}^2/{\rm g}}$ simulations at the radii where we can trust our simulations. This deserves further study, both in terms of simulations with SIDM cross section values smaller than $1 \ \, {{\rm cm}^2/{\rm g}}$ and more detailed comparisons to observations. Our current look at the global data does not suggest a need for a velocity-dependent cross section, as has been previously proposed. In the companion paper (Peter, Rocha, Bullock and Kaplinghat, 2012) we show that these SIDM models are also consistent with observations of halo shapes. \subsection{Observed Substructure vs.
\subsection{Observed Substructure vs.\ SIDM}\label{discussion_subhalo.sec} In Figure \ref{subVmaxFunct.fig} we showed that the number of subhalos for $\sigma/m = 1 \ \, {{\rm cm}^2/{\rm g}}$ is not significantly different from CDM predictions, especially in galaxy-scale halos. This is interesting because it means that SIDM fails to deliver on one of the original motivations for considering this model of dark matter. Recall that \citet{spergelandsteinhardt00} originally promoted SIDM as a solution to the missing satellites problem \citep{klypin1999,moore1999}, arguing that many subhalos would be evaporated by interactions with the background halo. Given the new discoveries of ultra-faint galaxies around the Milky Way and the high likelihood of many more discoveries from surveys like LSST \citep{willman2010,bullocketal2010}, a significant reduction in substructure counts may very well be a negative characteristic of any non-CDM model \citep{tollerud2008}. However, in Milky Way-mass halos, SIDM with $\sigma/m = 1\ \, {{\rm cm}^2/{\rm g}}$ will yield a significant probability for subhalo particle scattering only for systems that pass within $\sim 10\hbox{ kpc}$ of the host halo center. Thus, for this cross section, we can form interesting-sized cores but largely leave the subhalo mass function unaffected in Milky Way-mass halos. For smaller cross sections, the differences between SIDM and CDM subhalo mass functions will be even smaller. We note that we are not the first to find that SIDM can form cores without solving the missing satellites problem; this was first discussed by \citet{donghia2003}. This finding is also interesting in the context of other alternatives to CDM. Warm dark matter (WDM) models, whose defining difference from CDM is that dark-matter particles have high speeds at matter-radiation equality and a related free-streaming cutoff in the matter power spectrum, predict a suppression in the halo (and subhalo) mass function at small scales. Otherwise, the abundance and structure of halos and subhalos is nearly indistinguishable from CDM \citep{villaescusa2011,maccio2012}. WDM halos may be less concentrated than CDM halos on scales not much larger than the free-streaming scale, but are still \emph{cusped}. They are only significantly cored right at the free-streaming scale, at which the halo and subhalo abundance is highly suppressed. Thus, each of the two leading modifications to CDM can solve only one of the two historical motivations for looking beyond the CDM paradigm. The lack of subhalo suppression for $\sigma/m \lesssim 1 \ \, {{\rm cm}^2/{\rm g}}$ has implications for another of the SIDM halo constraints from a decade ago. \citet{gnedinandostriker01} set a constraint excluding the range $0.3 < \sigma/m < 10^4 \ \, {{\rm cm}^2/{\rm g}}$ based on the fundamental plane of elliptical galaxies. The argument rests on the observation that there are no significant differences between the fundamental plane of field ellipticals and that of cluster ellipticals \citep[e.g.,][]{kochanek2000,bernardi2003,labarbera2010}. Elliptical galaxies have a significant amount of dark matter within their half-light radii, with more massive ellipticals having larger mass-to-light ratios, caused either by varying stellar mass-to-light ratios or by varying dark matter content \citep{padmanabhan2004,Tollerud2011,Conroy2012}.
\citet{gnedinandostriker01} argue that elliptical galaxies falling into cluster-mass halos should have dark matter evaporated from their centers if $\sigma/m \neq 0$, which would cause the stars in the elliptical galaxy to adiabatically expand and hence move the galaxy off the fundamental plane. However, in our simulations we find that few subhalos are fully evaporated, and that the subhalo $V_{\mathrm{max}}$ function for $\sigma/m = 1 \ \, {{\rm cm}^2/{\rm g}}$ is not greatly different from CDM. In addition, our analytic arguments show that the trend with (host) halo mass for the evaporation of subhalos at fixed $r/r_{\mathrm{s}}$ is mild. This suggests that the \citet{gnedinandostriker01} constraints are overly conservative even at the $\sigma/m \simeq 1 \ \, {{\rm cm}^2/{\rm g}}$ level. The main caveats are that the suppression of the subhalo $V_{\mathrm{max}}$ function is higher in more massive clusters and that the suppression is highest at the center of the cluster halo. It would also be interesting to see if there are any differences in the fundamental plane as a function of projected distance in the cluster, both observationally and in simulations. For all of these reasons, it would be worthwhile to perform simulations of elliptical galaxies in clusters with SIDM and explore the fundamental-plane constraints in more depth. \bigskip To summarize, although we have not fully resolved the cores of $\sigma/m=0.1 \, {{\rm cm}^2/{\rm g}}$ SIDM halos, the intuition gleaned from our analytic model (tested against the SIDM$_{1}$ results) and our moderately-resolved simulation results suggest that $\sigma/m = 0.1 \, {{\rm cm}^2/{\rm g}}$ is an excellent fit to the data across the range of halo masses from dwarf satellites of the Milky Way to clusters of galaxies. Values of the cross section over dark matter particle mass in this range are fully consistent with the {\em published} Bullet cluster constraints (cf. \S\ref{intro.sec}), measurements of dark matter density on small scales, and subhalo survival requirements. In a companion paper (Peter, Rocha, Bullock and Kaplinghat 2012), we show that this model is also consistent with halo shape estimates. It is therefore important to simulate galaxy and cluster halos with cross sections in the $0.1 \, {{\rm cm}^2/{\rm g}}$ range. \section{Summary and Conclusions} \label{sumandconc.sec} We have presented a new algorithm to include elastic self-scattering of dark matter particles in N-body codes and used it to study the structure of self-interacting dark matter (SIDM) halos simulated in a full cosmological context. Our suite of simulations (summarized in Table 1) relies on identical initial conditions to explore SIDM models with velocity-independent cross sections $\sigma/m = 1 \ \, {{\rm cm}^2/{\rm g}}$ and $\sigma/m = 0.1 \ \, {{\rm cm}^2/{\rm g}}$ as well as a comparison set of standard CDM simulations (with $\sigma/m = 0$). Our primary conclusion is that while SIDM looks identical to CDM on large scales, SIDM halos have constant-density cores, with core radii that scale in proportion to the standard CDM scale radius ($r_{\rm core} \simeq \epsilon \, r_{\mathrm{s}}$). The relative size of the core increases with increasing cross section ($\epsilon \simeq 0.7$ for $\sigma/m = 1 \ \, {{\rm cm}^2/{\rm g}}$ and $\epsilon \sim 0.2$ for $\sigma/m = 0.1 \ \, {{\rm cm}^2/{\rm g}}$). Correspondingly, at fixed halo mass, core densities decrease with increasing SIDM cross section.
For both core radii and core densities, there is significant scatter about the scaling with $V_{\mathrm{max}}$ of the halo. The scaling relationship is strong enough that measurements of dark matter densities in the cores of dark matter dominated galaxies and large galaxy clusters likely provide the most robust constraints on the dark matter cross section at this time. In a companion paper (Peter, Rocha, Bullock and Kaplinghat, 2012) we demonstrate, contrary to previous claims, that SIDM constraints from halo shape measurements may be less restrictive than (or at least similar to) those from measurements of absolute core densities alone. Based on our simulation results we conclude that the dark matter self-scattering cross section must be smaller than $1 \ \, {{\rm cm}^2/{\rm g}}$ in order to avoid under-predicting the observed core densities in galaxy clusters, low surface brightness spirals (LSBs), and dwarf spheroidal galaxies. However, an SIDM model with a {\em velocity-independent} cross section of about $\sigma/m = 0.1 \ \, {{\rm cm}^2/{\rm g}}$ appears capable of reproducing reported core sizes and central densities of dwarfs, LSBs, and galaxy clusters. Higher resolution simulations with better statistics will be needed to confirm this expectation. \bigskip \noindent An accounting of our results is as follows: \begin{itemize} \item Outside of the central regions of dark matter halos ($r \gtrsim 0.5 R_{\rm vir}$) the large-scale properties of SIDM cosmological simulations are effectively identical to those of CDM simulations. This implies that all of the large-scale confirmations of the CDM theory apply to SIDM as well. \\ \item The subhalo $V_{\mathrm{max}}$ function in SIDM with $\sigma/m = 1 \ \, {{\rm cm}^2/{\rm g}}$ differs by less than $\sim 30\%$ from CDM across the mass range $5\times 10^{11}M_\odot - 2\times 10^{14}M_\odot$ studied directly with our simulations. Differences in the $V_{\mathrm{max}}$ function with respect to CDM are only apparent deep within the centers of large dark-matter halos. Thus, although it is possible, it will be difficult to constrain SIDM models based on the effects of subhalo evaporation. \\ \item SIDM produces halos with constant-density cores, with correspondingly lower central densities than CDM halos of the same mass. For $\sigma/m = 1 \ \, {{\rm cm}^2/{\rm g}}$, our simulated halo density structure is reasonably well characterized by a Burkert (1995) profile fit with a core size $r_{\rm b} \simeq 0.7 r_{\mathrm{s}}$, where $r_{\mathrm{s}}$ is the NFW scale radius of the same halo in the absence of self-interactions. Core densities tend to increase with decreasing halo mass ($\rho_b \propto M_{\mathrm{vir}}^{-0.2}$) but show about a factor of $\sim 3$ scatter at fixed mass (likely owing to the intrinsic scatter in dark matter halo concentrations). \\ \item SIDM halo core sizes, central densities, and associated scaling relations can be understood in the context of a simple analytic model. The model treats the SIDM halo as consisting of a core region, where self-interactions have redistributed kinetic energy to create an approximately isothermal cored density profile, and an outer region, where self-interactions are not effective. The transition between these regions is set by the strength of the self-interactions, and this model allows us to make quantitative predictions for smaller cross sections where the cores are not resolved by our simulations.
Based on this model and a few of our best resolved simulated halos, we find core sizes $\sim 0.1 r_{\mathrm{s}}$ for $\sigma/m = 0.1 \ \, {{\rm cm}^2/{\rm g}}$. \\ \item Halo core densities over the mass range $10^{15} - 10^{10} \,\mathrm{M}_{\odot}$ in SIDM with $\sigma/m = 1 \ \, {{\rm cm}^2/{\rm g}}$ are too low ($\sim 0.005 - 0.04 \,\mathrm{M}_{\odot}/\mathrm{pc}^3$) to match observed central densities in galaxy clusters ($\sim 0.03 \,\mathrm{M}_{\odot}/\mathrm{pc}^3$) and dwarf spheroidals ($\sim 0.1 \,\mathrm{M}_{\odot}/\mathrm{pc}^3$). \\ \item Halo core central densities in SIDM with $\sigma/m = 0.1 \ \, {{\rm cm}^2/{\rm g}}$ are in line with those observed from galaxy clusters to tiny dwarfs ($0.02 - 0.5 \,\mathrm{M}_{\odot}/\mathrm{pc}^3$) without the need for any velocity dependence. The densities are more consistent with observations than those predicted in dissipationless CDM simulations, which are generically too high. SIDM models with this value of the cross section over dark matter particle mass are consistent with Bullet cluster observations, subhalo survival requirements and, as we show in a companion paper (Peter, Rocha, Bullock and Kaplinghat, 2012), measurements of dark matter halo shapes. \end{itemize} Future work is necessary to expand the dynamic range of our simulations both in halo mass and resolution and in cross section. These simulations are necessary in order to make detailed comparisons with observations, given the exciting possibility that dark matter self-interaction with $\sigma/m$ in the ballpark of $0.1 \ \, {{\rm cm}^2/{\rm g}}$ could be an excellent fit to the central densities of halos over 4-5 orders of magnitude in mass. \section*{Acknowledgments} MR was supported by a CONACYT doctoral Fellowship and NASA grant NNX09AG01G. AHGP is supported by a Gary McCue Fellowship through the Center for Cosmology at UC Irvine, NASA Grant No. NNX09AD09G at UCI, National Science Foundation (NSF) grant 0855462 at UC Irvine, and the NSF under Grant No. NSF PHY11-25915 while visiting the Kavli Institute for Theoretical Physics. JSB was partially supported by the Miller Institute for Basic Research in Science during a Visiting Miller Professorship in the Department of Astronomy at the University of California Berkeley. JO was supported by a Fulbright-MICINN Postdoctoral Fellowship. MK is supported by NASA grant NNX09AD09G and NSF grant 0855462. This research was supported in part by the Perimeter Institute for Theoretical Physics during a visit by MK. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Economic Development and Innovation. The work of LAM was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. LAM acknowledges NASA ATFP support. Simulations were performed on the Pleiades supercomputer of the NASA Advanced Supercomputing (NAS) Division and on the Kraken supercomputer of the National Institute for Computational Sciences (NICS) through an XSEDE allocation.
\section{Introduction} The differential geometry of normed spaces is a topic of research that was studied by authors like Busemann \cite{Bus3}, Guggenheimer \cite{Gug2}, and Petty \cite{Pet}, and it is still far from being comprehensively investigated. Its relation to Finsler geometry is nicely described in \cite{Bus2} and \cite{Bus7}; see also the more recent references \cite{shen} and \cite{shen2}. This paper is the second in a series of three papers devoted to studying this topic (the other two papers are \cite{diffgeom} and \cite{diffgeom3}; see also \cite{Ba-Ma-Sho}). In the first paper \cite{diffgeom} we studied the differential geometry of surfaces immersed in normed spaces from the viewpoint of classical differential geometry. However, the methods used to define some curvature concepts came from affine differential geometry, and hence many questions related to this latter subject emerged. In the present paper we aim to address and answer some of these questions. \\ We begin by briefly describing the theory developed in \cite{diffgeom}. We work with an immersion $f:M\rightarrow(\mathbb{R}^3,||\cdot||)$ of a surface $M$ in the space $\mathbb{R}^3$ endowed with a norm $||\cdot||$, which is assumed to be \emph{admissible}. This means that the \emph{unit sphere} $\partial B:=\{x\in\mathbb{R}^3:||x||=1\}$ of the \emph{normed} or \emph{Minkowski space} $(\mathbb{R}^3,||\cdot||)$ has strictly positive Gaussian curvature as a surface of the Euclidean space $(\mathbb{R}^3,\langle \cdot,\cdot\rangle)$, where $\langle\cdot,\cdot\rangle$ denotes the usual \emph{inner product} in $\mathbb{R}^3$. Note that the unit sphere is the boundary of the \emph{unit ball} $B:=\{x \in \mathbb{R}^3:||x|| \leq 1\}$, which is a compact, convex set with interior points, centered at the origin. Respective homothetical copies are called \emph{Minkowski spheres} and \emph{Minkowski balls}. We say that a vector $v\in\mathbb{R}^3$ is \emph{Birkhoff orthogonal} to a plane $P \subseteq \mathbb{R}^3$ if $||v + tw|| \geq ||v||$ for each $w \in P$ and any $t \in \mathbb{R}$ (see \cite{alonso}). Geometrically, a vector $v$ is Birkhoff orthogonal to a plane $P$ if $P$ supports the unit ball of $(\mathbb{R}^3,||\cdot||)$ at $v/||v||$. Due to the admissibility of the norm, it follows that Birkhoff orthogonality is unique both on the left and on the right. \\ The \emph{Birkhoff-Gauss map} of $M$ is an analogue of the Gauss map defined in terms of Birkhoff orthogonality as follows: for each $p \in M$, the Birkhoff normal vector to $M$ at $p$ is a vector $\eta(p) \in \partial B$ which is Birkhoff orthogonal to the tangent plane to $M$ at $p$. Such a vector field can be globally defined if $M$ is orientable, and hence we will always assume this hypothesis. The immersion $f:M\rightarrow(\mathbb{R}^3,||\cdot||)$ with the Birkhoff normal vector field is an \emph{equiaffine immersion} in the sense of \cite{nomizu} (see \cite{diffgeom} for a proof). \\ At each point, the eigenvalues of the differential map $d\eta_p$ are called \emph{principal curvatures}. Their product is the \emph{Minkowski Gaussian curvature}, and their arithmetic mean is the \emph{Minkowski mean curvature}. We also endow $M$ with an induced connection $\nabla$ by means of the \emph{Gauss equation} \begin{align*} D_XY = \nabla_XY + h(X,Y)\eta, \end{align*} where $X,Y$ are smooth vector fields on $M$, and $h(X,Y)$ is a symmetric bilinear form which can be regarded as the second fundamental form in our context.
We say that the immersion is \emph{nondegenerate} if the rank of $h$ equals $2$. For this bilinear form, we have the formula \begin{align}\label{exph} h(X,Y) = \frac{\langle D_XY,\xi\rangle}{\langle \eta,\xi\rangle} = -\frac{\langle Y,d\xi_pX\rangle}{\langle\eta,\xi\rangle} = -\frac{\langle du^{-1}_{\eta(p)}Y,d\eta_pX\rangle}{\langle\eta,\xi\rangle}, \end{align} where $\xi$ denotes the usual Euclidean Gauss map of $M$, and $u^{-1}$ is the Euclidean Gauss map of the unit sphere $\partial B$. Notice that we have $\eta = u\circ\xi$ (where $\circ$ denotes the usual composition of maps). We also define the \emph{normal curvature} $k_{M,p}(X)$ of $M$ at a point $p$ in direction $X$ to be the circular curvature of the curve obtained by intersecting $M$ with the plane spanned by $\eta(p)$ and $X$ (translated to pass through $p$, of course). For the normal curvature we have the equality \begin{align}\label{normcurv} k_{M,p}(X) = \frac{\langle du^{-1}_{\eta(p)}X,d\eta_pX\rangle}{\langle du^{-1}_{\eta(p)}X,X\rangle}. \end{align} Now we describe the structure of the paper. In Section \ref{riemann} we endow the surface with a Riemannian metric which has many interesting relations with its Minkowski normal curvature. This metric will be very useful in Section \ref{distance}, where we re-obtain the (Minkowski) curvature concepts by means of ambient affine distance functions. Section \ref{blaschke} is devoted to the question of when the Birkhoff normal field of a surface coincides with the \emph{affine normal field}. We prove that this is the case if and only if the ambient geometry is Euclidean and the surface is a Euclidean sphere. \\ As mentioned, in this paper we continue our considerations from \cite{diffgeom}, dealing further with these concepts. Other important references devoted to \emph{Minkowski geometry} (i.e., the geometry of finite dimensional real Banach spaces) are \cite{martini2}, \cite{martini1}, and \cite{thompson}. Regarding differential geometry, our main references are \cite{manfredo} and \cite{nomizu}. \section{A related Riemannian metric}\label{riemann} In this section we endow an immersed surface with a certain Riemannian metric which appears naturally when studying the Minkowski normal curvature of a surface. \\ Let $f:M\rightarrow\mathbb{R}^3$ be an immersed surface with (Euclidean) Gauss map given by $\xi:M\rightarrow\partial B_e$. The \emph{Dupin indicatrix} of $M$ at $p$ is the curve in $T_pM$ formed by the vectors $V \in T_pM$ such that $\langle V,d\xi_pV\rangle = \pm1$. Since the unit sphere of the Minkowski norm is an immersed surface whose Gauss map is $u^{-1}$, we have that for each $q \in \partial B$ its Dupin indicatrix is determined by the solutions of the equation $\langle du^{-1}_qV,V\rangle = 1$ in $T_q\partial B$ (where we may consider only the positive sign, since we are assuming that the norm is admissible, and hence the Gaussian curvature of $\partial B$ is strictly positive). It follows that the Dupin indicatrix of $\partial B$ at each point is an ellipse, and therefore induces a Euclidean metric (which may differ, however, from the ambient Euclidean metric). We will endow an immersed surface with a Riemannian metric by considering, in each of its tangent spaces, the metric given by the Dupin indicatrix of $\partial B$ at the parallel tangent space.
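To get a concrete feel for this construction, the following numerical sketch (ours; the $\ell^4$ norm and the base point are arbitrary choices) computes the Dupin indicatrix of the unit sphere $\partial B = \{x_1^4+x_2^4+x_3^4 = 1\}$ at a point $q$. Since $u^{-1}$ is the Euclidean Gauss map of $\partial B$, the quadratic form $V \mapsto \langle du^{-1}_qV,V\rangle$ is the Euclidean second fundamental form of $\partial B$ at $q$, which for a level set $\{G = 1\}$ is given by $V^{\top}(\mathrm{Hess}\,G(q))V/|\nabla G(q)|$ on the tangent plane:
\begin{verbatim}
import numpy as np

# unit sphere of the l^4 norm: G(x) = x1^4 + x2^4 + x3^4 = 1
def grad_G(p):
    return 4.0 * p**3

def hess_G(p):
    return np.diag(12.0 * p**2)

q = np.array([0.5, 0.5, (7 / 8)**0.25])   # G(q) = 1/16 + 1/16 + 7/8 = 1
g = grad_G(q)
n = g / np.linalg.norm(g)                 # Euclidean unit normal at q

# orthonormal basis {e1, e2} of the tangent plane T_q(dB)
e1 = np.cross(n, np.array([0.0, 0.0, 1.0]))
e1 /= np.linalg.norm(e1)
e2 = np.cross(n, e1)

# matrix of V -> <du^{-1}_q V, V> in the basis {e1, e2}
H = hess_G(q) / np.linalg.norm(g)
B = np.array([[e1 @ H @ e1, e1 @ H @ e2],
              [e2 @ H @ e1, e2 @ H @ e2]])

w = np.linalg.eigvalsh(B)
print("positive definite (admissible norm):", bool(np.all(w > 0)))
print("semi-axes of the Dupin indicatrix:", 1.0 / np.sqrt(w))
\end{verbatim}
The form is positive definite precisely because the norm is admissible, so the unit level set $\{V : \langle du^{-1}_qV,V\rangle = 1\}$ is an ellipse whose semi-axes describe the metric discussed above.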
At first glance this seems to be a somewhat artificial construction, but we can sharply describe the Dupin indicatrix of the \emph{Minkowski sphere} in terms of the principal directions of the \emph{surface}.\\ Let $f:M\rightarrow(\mathbb{R}^3,||\cdot||)$ be an immersed surface, let $p \in M$, and let $V_1,V_2 \in T_pM$ be principal directions associated to the (Minkowski) principal curvatures $\lambda_1,\lambda_2 \in \mathbb{R}$, respectively. We may assume that $h(V_1,V_2) = 0$, since this is the case when $p$ is non-umbilic; if $p$ is umbilic, then every direction is principal (see \cite[Section 4]{diffgeom}). Now we re-scale $V_1$ and $V_2$ in order to have $\langle du^{-1}_{\eta(p)}V_1,V_1\rangle = \langle du^{-1}_{\eta(p)}V_2,V_2\rangle = 1$, where we recall the identification $T_{\eta(p)}\partial B \simeq T_pM$. From the proof of \cite[Theorem 5.2]{diffgeom} we have that $\langle du^{-1}_{\eta(p)}V_1,V_2 \rangle = \langle du^{-1}_{\eta(p)}V_2,V_1\rangle = 0$. Hence the Dupin indicatrix of $\partial B$ at $\eta(p)$ is the curve parametrized as \begin{align}\label{dupin} [0,2\pi]\ni\theta \mapsto V(\theta) = V_1\cos\theta + V_2\sin\theta \in T_{\eta(p)}\partial B\simeq T_pM. \end{align} As mentioned previously, it is clear that this curve is an ellipse, and it is the unit circle of the metric induced by the inner product $\langle\cdot,\cdot\rangle_p:T_pM\times T_pM\rightarrow\mathbb{R}$ defined by setting $\langle V_1,V_1\rangle_p = \langle V_2,V_2\rangle_p = 1$ and $\langle V_1,V_2\rangle_p = 0$ (which is merely the inner product $(X,Y)\mapsto\langle du^{-1}_{\eta(p)}X,Y\rangle$). From now on, we refer to this curve as the \emph{Dupin indicatrix at} $T_pM$, and to the associated metric as the \emph{Dupin metric at} $T_pM$. Notice that, in the classical setting, this construction naturally recovers the restriction of the ambient metric to each tangent space. \\ The first application of this metric is a way to calculate the Minkowski mean curvature, analogous to the Euclidean subcase. In the Euclidean case, it is known that the mean curvature can be calculated as the mean of the normal curvature over the unit circle of the tangent space. In other words, if $k_n(\theta)$ denotes the (Euclidean) normal curvature in the direction of a vector forming an angle $\theta$ with a fixed direction, then \begin{align*} H_e = \frac{1}{2\pi}\int_0^{2\pi}k_n(\theta) \ d\theta, \end{align*} where $H_e$ denotes the Euclidean mean curvature of $M$. We obtain something similar for the general Minkowski case, but now with the Dupin indicatrix of $T_pM$. In what follows, $H$ denotes the Minkowski mean curvature of $M$. \begin{prop} The Minkowski mean curvature of an immersed surface $M$ at a point $p \in M$ is the mean of the normal curvature regarded as a function on the Dupin indicatrix of $T_pM$. \end{prop} \begin{proof} From (\ref{normcurv}) we get the equality \begin{align*} k_{M,p}(\theta) := k_{M,p}(V(\theta)) = \lambda_1(\cos\theta)^2 + \lambda_2(\sin\theta)^2. \end{align*} Hence, a simple calculation gives \begin{align*} \frac{1}{2\pi}\int_{0}^{2\pi}k_{M,p}(\theta) \ d\theta = \frac{\lambda_1}{2\pi}\int_{0}^{2\pi}(\cos\theta)^2d\theta + \frac{\lambda_2}{2\pi}\int_0^{2\pi}(\sin\theta)^2d\theta = \frac{\lambda_1+\lambda_2}{2} = H, \end{align*} as we claimed. Notice that $2\pi$ equals the length of the Dupin indicatrix in the metric derived from it.
\end{proof} The Dupin metric in a tangent space $T_pM$ gives rise to a natural orthogonality relation: we say that $X,Y \in T_pM$ are \emph{Dupin orthogonal} whenever $\langle X,Y\rangle_p = 0$. Writing $X$ and $Y$ (up to re-scaling) as points $V(\theta_0)$ and $V(\theta_1)$ of the Dupin indicatrix, we say moreover that $X$ and $Y$ are \emph{Dupin complementary} whenever $\cos(\theta_0+\theta_1) = 0$. In the Euclidean subcase, it is well known that the sum of the normal curvatures of $M$ at $p$ in a pair of orthogonal directions equals twice the mean curvature. We will show that this is true in the Minkowski case if one replaces usual orthogonality by Dupin orthogonality. \begin{prop} Let $X,Y \in T_pM$ be non-zero vectors. Then \begin{align*} k_{M,p}(X) + k_{M,p}(Y) = 2H \end{align*} if and only if $X$ and $Y$ are Dupin orthogonal or Dupin complementary. \end{prop} \begin{proof} Let $X$ and $Y$ be given in the Dupin indicatrix of $T_pM$ as $V(\theta_0)$ and $V(\theta_1)$, respectively. Then Dupin orthogonality and Dupin complementarity of $X$ and $Y$ read $\cos(\theta_0 - \theta_1) = 0$ and $\cos(\theta_0+\theta_1) = 0$, respectively. In both cases we have $\cos^2\theta_1 = \sin^2\theta_0$ and $\sin^2\theta_1 = \cos^2\theta_0$. It follows that \begin{align*} k_{M,p}(X) + k_{M,p}(Y) = \lambda_1(\cos^2\theta_0+ \sin^2\theta_0)+\lambda_2(\sin^2\theta_0+ \cos^2\theta_0) = \lambda_1+\lambda_2 = 2H. \end{align*} Now assume that $k_{M,p}(X) + k_{M,p}(Y) = 2H$. Suppose also that $\lambda_2\neq\lambda_1$, since the equality case is trivial. Then we may write \begin{align*} \lambda_1(\cos^2\theta_0 + \cos^2\theta_1) + \lambda_2(\sin^2\theta_0 + \sin^2\theta_1) = \lambda_1 + \lambda_2, \end{align*} and this can be easily rewritten as \begin{align*} (\lambda_2-\lambda_1)(\sin^2\theta_0 + \sin^2\theta_1) = \lambda_2-\lambda_1. \end{align*} It follows that $\cos(\theta_0-\theta_1) = 0$ or $\cos(\theta_0+\theta_1) = 0$, and therefore $X$ and $Y$ are Dupin orthogonal or Dupin complementary. \end{proof} \begin{coro} Let $f:M\rightarrow(\mathbb{R}^3,||\cdot||)$ be an immersed surface whose Minkowski Gaussian curvature is negative. If the Minkowski mean curvature of $M$ at $p \in M$ equals $0$, then the asymptotic directions of $M$ at $p$ are Dupin orthogonal. \end{coro} \begin{proof} One just has to recall that, due to \cite[Corollary 5.3]{diffgeom}, a direction $X \in T_pM$ is \emph{asymptotic} if and only if $k_{M,p}(X) = 0$. Hence the result comes directly from the proposition above. \end{proof} \begin{remark} In the Euclidean subcase, the sum of the normal curvatures of an immersed surface $M$ at a point $p \in M$ in two orthogonal directions is always constant (see \cite{manfredo}). In the general Minkowski case, the Dupin indicatrix at each tangent plane (which depends only on the ambient Minkowski metric, and not on the surface) somehow ``organizes'' the directions in an analogous way. \\ \end{remark} From the affine viewpoint, it will be better to work with another Riemannian metric, obtained from the Dupin metric (and which preserves its orthogonality relation). We call this new metric the \emph{weighted Dupin metric}, and it is defined as \begin{align*} b(X,Y) := \frac{\langle du^{-1}_{\eta(p)}X,Y\rangle}{\langle\eta(p),\xi(p)\rangle}, \end{align*} for $p \in M$ and $X,Y \in T_pM$. Now we will establish a relation between the weighted Dupin metric of an immersed surface and its Minkowski Gaussian curvature. In classical differential geometry, it is well known that the (Euclidean) Gaussian curvature of a surface equals the ratio between the determinants (with respect to any fixed basis) of the second and first fundamental forms, respectively.
\begin{prop} Let $(h_{ij})$ and $(b_{ij})$ denote the matrices of the affine fundamental form $h$ and of the weighted Dupin metric $b$ with respect to a fixed basis. Then \begin{align*} K = \frac{\mathrm{det}(h_{ij})}{\mathrm{det}(b_{ij})}, \end{align*} where $K$ is the Minkowski Gaussian curvature. \end{prop} \begin{proof} We may assume that the Gaussian curvature is non-zero, since that case is straightforward. Under this assumption, we can take a local frame $\{V_1,V_2\}$ of (Minkowski) principal directions, i.e., such that $d\eta_qV_1 = \lambda_1V_1$ and $d\eta_qV_2 = \lambda_2V_2$ for each $q$ in a small neighborhood where the principal curvatures $\lambda_1$ and $\lambda_2$ do not vanish. From the proof of \cite[Theorem 5.2]{diffgeom} we have \begin{align*} \langle du^{-1}_{\eta(p)}V_1,V_2\rangle = \langle V_1,du^{-1}_{\eta(p)}V_2\rangle = 0, \end{align*} and since $V_1$ and $V_2$ are conjugate in the classical sense (see \cite[Lemma 4.2]{diffgeom}), we also have $h(V_1,V_2) = 0$ (in case $p$ is umbilic, we just choose $V_1$ and $V_2$ to be conjugate). Now we compute \begin{align*} h(V_1,V_1) = -\frac{\langle V_1,d\xi_pV_1\rangle}{\langle \eta,\xi\rangle} = -\frac{\langle V_1,du^{-1}_{\eta(p)}\circ d\eta_pV_1\rangle}{\langle \eta,\xi\rangle} = -\frac{\lambda_1\langle V_1,du^{-1}_{\eta(p)}V_1\rangle}{\langle\eta,\xi\rangle}, \end{align*} and an analogue holds for $h(V_2,V_2)$. Finally, in the basis $\{V_1,V_2\}$ we have \begin{align*} \frac{\mathrm{det}(h_{ij})}{\mathrm{det}(b_{ij})} = \frac{h(V_1,V_1)h(V_2,V_2)}{b(V_1,V_1)b(V_2,V_2)} = \frac{\lambda_1\lambda_2\langle du^{-1}_{\eta(p)}V_1,V_1\rangle \langle du^{-1}_{\eta(p)}V_2,V_2\rangle \langle\eta,\xi\rangle^{-2}}{\langle du^{-1}_{\eta(p)}V_1,V_1\rangle \langle du^{-1}_{\eta(p)}V_2,V_2\rangle \langle\eta,\xi\rangle^{-2}} = \lambda_1\lambda_2, \end{align*} and the latter is the Minkowski Gaussian curvature, as we wanted to verify. \end{proof} \section{Distance functions and curvatures}\label{distance} In this section we obtain the Minkowski curvatures of a surface in terms of distance functions. Recall that a point $p \in M$ in the domain of a function $g:M\rightarrow\mathbb{R}$ is said to be a \emph{critical point} if $dg_p = 0$. Let $p \in M$ be a fixed point. For each $q \in M$, we can decompose the vector $p-q$ as follows: \begin{align}\label{distfunc} p-q = g(q)\eta(p) + V(q), \end{align} where $V(q) \in T_pM$ is the projection of $p-q$ onto $T_pM$ and $g:M\rightarrow\mathbb{R}$ is a smooth function. The function $g$ can be regarded as the Minkowski distance from $q \in M$ to the plane $p + T_pM$ (geometrically, we are simply translating $T_pM$ so that its origin lies at $p$). \begin{lemma} The point $p \in M$ is a critical point of the function $g$ defined above. \end{lemma} \begin{proof} Let $X$ denote a smooth extension of a fixed vector $X \in T_pM$ to a neighborhood of $p$ (with a slight abuse of notation). Differentiating (\ref{distfunc}) with respect to $X$ and evaluating at $p$, we have \begin{align*} -X = (Xg)\eta + D_XV = (Xg)\eta + \nabla_XV + h(X,V)\eta. \end{align*} It follows that $dg_pX = -h(X,V) = 0$, since $V(p) = 0$. \end{proof} In standard affine differential geometry, one can define an analogue of the classical Hessian on a manifold $M$ endowed with a nondegenerate symmetric bilinear form $h$ on its tangent spaces.
Namely, the \emph{h-Hessian} $\mathrm{hess}_hf$ of a map $f:(M,h)\rightarrow\mathbb{R}$ is defined as \begin{align}\label{hessh} \mathrm{hess}_hf(X,Y) := X(Yf) - (\bar{\nabla}_XY)f, \end{align} for any $p \in M$ and any $X,Y \in T_pM$, where $\bar{\nabla}$ is the Levi-Civita connection of $h$. For our purpose, we will consider the Hessian in $M$ with respect to the weighted Dupin metric. Denoting by $\hat{\nabla}$ the Levi-Civita connection of the weighted Dupin metric, we define the \emph{b-Hessian} of a function $f:M\rightarrow\mathbb{R}$ to be \begin{align}\label{hessb} \mathrm{hess}_bf(X,Y) := X(Yf) - (\hat{\nabla}_XY)f. \end{align} As a consequence of the previous lemma, it follows that the $b$-Hessian $\mathrm{hess}_bg$ of $g$ at $p$ is given by $\mathrm{hess}_bg(X,Y)|_p = X(Yg)|_p$, since $p$ is a critical point of $g$. \begin{teo} The $b$-Hessian of the function $g$ defined above at $p \in M$ equals $-h$ at $p$, where $h$ is the affine fundamental form. In particular, if $X$ is unit in the weighted Dupin metric, then \begin{align*} k_{M,p}(X) = \mathrm{hess}_bg(X,X)|_p. \end{align*} \end{teo} \begin{proof} We evaluate the (Euclidean) inner product of (\ref{distfunc}) with $\xi(p)$ to obtain \begin{align*} \langle p-q,\xi(p)\rangle = g(q)\langle \eta(p),\xi(p)\rangle. \end{align*} Let $X,Y \in T_pM$ and denote by the same letters smooth extensions of these vectors to a neighborhood of $p$. Differentiating the above expression with respect to $Y$ and $X$, respectively, and evaluating at $p$ yields \begin{align*} -\langle D_XY,\xi\rangle = X(Y(g))\langle\eta,\xi\rangle, \end{align*} where the reader may notice that $\eta$ and $\xi$ are always evaluated at $p$; that is why their derivatives vanish. Since $p$ is a critical point of $g$, from (\ref{exph}) we get \begin{align}\label{hessandh} \mathrm{hess}_bg(X,Y)|_p = X(Yg)|_p = - \frac{\langle D_XY,\xi\rangle}{\langle \eta,\xi\rangle} = -h(X,Y). \end{align} The claim on the normal curvature comes straightforwardly from the formula \begin{align}\label{normh} k_{M,p}(X) = -\frac{h(X,X)\langle\eta,\xi\rangle}{\langle du^{-1}_{\eta(p)}X,X\rangle}, \end{align} obtained in \cite[Corollary 5.3]{diffgeom}. The reader may notice that here we are generalizing a well known result from classical differential geometry by regarding the affine fundamental form as the second fundamental form, and normalizing with respect to the weighted Dupin metric (instead of the usual metric). \end{proof} \begin{remark} Notice that equality (\ref{normh}) can be written as \begin{align*} k_{M,p}(X) = -\frac{h(X,X)}{b(X,X)}, \end{align*} which makes the Minkowski normal curvature analogous to the usual Euclidean normal curvature if one regards $h$ as the second fundamental form and the weighted Dupin metric as the first fundamental form. Of course, this is indeed the case if the norm in $\mathbb{R}^3$ is Euclidean.\\ \end{remark} As in the Euclidean subcase, we will obtain the curvatures of an immersed surface in terms of the distances of the points of the surface to a fixed point in $\mathbb{R}^3$. Let $a \in \mathbb{R}^3\setminus M$ be a fixed point, and define the \emph{distance function} $D_a:M\rightarrow\mathbb{R}$ of $M$ to $a$ as $D_a(q) = ||q-a||$, for $q \in M$. Notice that the level sets of $D_a$ are the spheres $S_{\rho}(a):=\{x\in \mathbb{R}^3:||x-a|| = \rho\}$, $\rho \geq 0$, and hence a point $p \in M$ is a critical point of $D_a$ if and only if $T_pS_{||p-a||}(a) = T_pM$.
In other words, $p \in M$ is a critical point of $D_a$ if and only if $p-a$ is Birkhoff orthogonal to $T_pM$. \\ \begin{prop} Let $p\in M$ be a critical point of the distance function $D_a:M\rightarrow\mathbb{R}$, where $a \in \mathbb{R}^3\setminus M$. Then we have the equivalence \begin{align*} \mathrm{hess}_bD_a(V,V)|_p = 0 \ \Leftrightarrow \ k_{M,p}(V) = \frac{1}{D_a(p)} \end{align*} for any nonzero vector $V \in T_pM$. \end{prop} \begin{proof} Let $\gamma:(-\varepsilon,\varepsilon)\rightarrow M$ be an arc-length parametrization of the curve obtained as the intersection of $M$ with the (translated, to pass through $p$) plane spanned by $\eta(p)$ and $V$. Assume also that $\gamma(0) = p$ and that $\gamma'(0) = V$ (in other words, we are assuming that $V$ is unit). By definition, the normal curvature $k_{M,p}(V)$ is the circular curvature of $\gamma(s)$ at $s = 0$ (see \cite{diffgeom}). Observe that, since $p$ is a critical point of $D_a$, it follows that $p - a$ is in the direction of $\eta(p)$, and hence $a$ is a point of the (translated, to pass through $p$) plane spanned by $V$ and $\eta(p)$. From \cite[Proposition 9.1]{Ba-Ma-Sho} we have immediately that \begin{align*} \left.\frac{d^2}{ds^2}\left(D_a\circ\gamma\right)\right|_{s=0} = 0 \ \Leftrightarrow \ k_{M,p}(V) = \frac{1}{D_a(p)}. \end{align*} Since $\mathrm{hess}_bD_a(V,V)|_p$ is precisely the second derivative on the left hand side of this equivalence, the proof is complete. \end{proof} \begin{remark} A concept of \emph{affine normal curvature} for \emph{Blaschke immersions} is defined in \cite[Definition 5.2]{davis}. The reader may notice that the above proposition shows that the concept of Minkowski normal curvature is somehow analogous to it. Moreover, the relations between the affine normal curvature and the affine shape operator are very similar to the relations between the Minkowski normal curvature and the derivative of the Birkhoff-Gauss map; see \cite[Section 5]{davis} and \cite[Section 5]{diffgeom}.\\ \end{remark} Let $f:M\rightarrow(\mathbb{R}^3,||\cdot||)$ be a \emph{nondegenerate} immersed surface (meaning that the affine fundamental form $h$ of $M$ has rank $2$), and fix a point $a \in \mathbb{R}^3\setminus M$. The \emph{affine distance function} from $a$ to $M$ is the function $\rho:M\rightarrow\mathbb{R}$ defined by the decomposition \begin{align}\label{affinedist} p - a = \rho(p)\eta(p) + V(p), \end{align} where $V(p) \in T_pM$. We aim to find a result similar to \cite[Proposition 6.2]{nomizu} for the affine distance function, since this would give another expression for the Minkowski mean curvature. However, since in general our immersion is not a \emph{Blaschke immersion} (see Section \ref{blaschke} for the definition), we cannot expect its \emph{cubic form} to vanish, and hence we may need a Laplacian concept other than the one used in the mentioned result. This is indeed the case.
We define the \emph{$\nabla$-Laplacian} of a function $f:M\rightarrow\mathbb{R}$ to be \begin{align*} \Delta f := \mathrm{div}_{\nabla}(\mathrm{grad}_hf), \end{align*} where $\mathrm{grad}_hf:M\rightarrow TM$ is the \emph{gradient of $f$ with respect to $h$}, defined to be the (unique) section of $TM$ such that $Xf = h(X,\mathrm{grad}_hf)$, and $\mathrm{div}_{\nabla}:C^{\infty}(TM)\rightarrow C^{\infty}(M)$ is the \emph{divergence operator with respect to the induced connection $\nabla$}, defined formally as \begin{align*} \mathrm{div}_{\nabla}X|_p = \mathrm{tr}\{Y\mapsto \nabla_YX:Y\in T_pM\} \end{align*} for sections $X \in C^{\infty}(TM)$. We can re-obtain the Minkowski mean curvature in terms of the $\nabla$-Laplacian of an affine distance function, as we will see next. \begin{teo} Let $\rho:M\rightarrow\mathbb{R}$ be the affine distance function with respect to a given point $a \in \mathbb{R}^3 \setminus M$, and assume that $M$ is nondegenerate. Then the following equality holds: \begin{align*} \Delta\rho = 2(H\rho - 1). \end{align*} \end{teo} \begin{proof} Differentiating (\ref{affinedist}) with respect to $X \in T_pM$, we get \begin{align*} X = (X\rho)\eta + \rho D_X\eta + D_XV = (X\rho)\eta + \rho D_X\eta + \nabla_XV + h(X,V)\eta. \end{align*} Since $D_X\eta$ is tangential, we have that $(X\rho) = -h(X,V)$, and from this we get $\mathrm{grad}_h\rho = -V$. We also have \begin{align}\label{eqaff} X = \rho D_X\eta + \nabla_XV. \end{align} To calculate the divergence, we may take the trace with respect to any positive definite bilinear form, and hence we use $b$. As usual, let $V_1$ and $V_2$ be principal directions of $M$ at $p$ associated to the principal curvatures $\lambda_1,\lambda_2 \in \mathbb{R}$, respectively. Assume that both vectors are normalized with respect to $b$. We have \begin{align*} \mathrm{div}_{\nabla}(\mathrm{grad}_h\rho) = b(-\nabla_{V_1}V,V_1) + b(-\nabla_{V_2}V,V_2) = b(\rho\lambda_1V_1 - V_1,V_1) + b(\rho\lambda_2V_2 - V_2, V_2) = \\ = \rho\lambda_1 + \rho\lambda_2 - 2 = 2(H\rho - 1), \end{align*} where the second equality comes from (\ref{eqaff}). \end{proof} \begin{remark} We say that an immersed surface is \emph{minimal} (in the Minkowski sense) if its Minkowski mean curvature vanishes everywhere. The main interest in the above theorem is that it implies, in particular, that a nondegenerate immersed surface $M$ is minimal if and only if $\Delta\rho = -2$. This characterizes nondegenerate Minkowski minimal surfaces in terms of a partial differential equation for the affine distance function. \\ \end{remark} In affine differential geometry, it is known that the affine distance function from a fixed point $a \in \mathbb{R}^{n+1}$ to a nondegenerate Blaschke hypersurface $M$ is constant if and only if $M$ is a \emph{proper affine hypersphere} with center $a$ (see \cite[Proposition 5.10]{nomizu}). Next we prove something analogous for normed spaces, characterizing Minkowski spheres. \begin{prop} Let $f:M\rightarrow(\mathbb{R}^3,||\cdot||)$ be a nondegenerate surface immersion, and let $a \in \mathbb{R}^3$. Denote by $\rho:M\rightarrow\mathbb{R}$ the affine distance function from $a$ to $M$, as defined in \emph{(\ref{affinedist})}. Then $\rho$ is constant if and only if $M$ is contained in a Minkowski sphere centered at $a$. \end{prop} \begin{proof} The ``only if'' part is immediate.
Differentiating (\ref{affinedist}) at $p \in M$ with respect to a direction $X \in T_pM$ yields the equality \begin{align*} X = X(\rho)\eta + \rho D_X\eta + \nabla_XV + h(X,V)\eta, \end{align*} and hence $X(\rho) = -h(X,V)$. Since $M$ is nondegenerate, if $\rho$ is constant we have $V = 0$. Therefore $X = \rho D_X\eta$, and hence \begin{align*} d\eta_p(X) = \frac{1}{\rho}X, \end{align*} for any $p \in M$ and $X \in T_pM$. As a consequence, all points of $M$ are umbilic, and this characterizes Minkowski spheres (see \cite[Proposition 4.5]{diffgeom}). \end{proof} \section{The Birkhoff normal as the affine normal} \label{blaschke} A transversal vector field $\xi$ on an immersed surface $f:M\rightarrow(\mathbb{R}^3,||\cdot||)$ induces two natural volume elements. The \emph{induced volume} is the $2$-form given by $\omega(X,Y):=\mathrm{det}[X,Y,\xi]$, where $\mathrm{det}$ denotes the usual determinant in $\mathbb{R}^3$. Also, the affine fundamental form $h$ associated to $\xi$ induces the volume form $\omega_h(X,Y):=|\mathrm{det}[h_{ij}]|^{1/2}$, where $[h_{ij}]$ is the matrix of $h$ with respect to the vectors $X$ and $Y$. A \emph{Blaschke immersion} is an immersion endowed with an equiaffine transversal vector field for which $|\omega| = \omega_h$, where $|\cdot|$ denotes the usual absolute value in $\mathbb{R}$ (see \cite{nomizu} for more information on Blaschke immersions).\\ It is well known that for a nondegenerate immersed surface $f:M\rightarrow \mathbb{R}^3$ there exists (locally) a transversal vector field that makes $f$ a Blaschke immersion, and that this vector field is unique up to sign (see \cite{nomizu} for a proof). We call it the \emph{affine normal field} of $M$. It is also clear that the affine normal field can be globally defined if and only if $M$ is orientable. The natural question that arises here is: when is the Birkhoff normal vector field the affine normal field of an immersed surface? A first result in this direction discusses whether the Birkhoff normal of the unit sphere is the affine normal. This question is independent of Minkowski geometry: we are asking whether the position vector of a centrally symmetric, smooth and strictly convex body is its affine normal. We consider first the planar case, where the immersed hypersurfaces are curves. \begin{teo} The unit circle of a normed plane with the Birkhoff normal field as transversal field is a Blaschke immersion if and only if the plane is Euclidean. \end{teo} \begin{proof} Assume that we have a usual auxiliary Euclidean structure in $\mathbb{R}^2$, and let $S$ denote the unit circle of the norm. Let $\varphi(s):[0,l_e(S)]\rightarrow (\mathbb{R}^2,||\cdot||)$ be a parametrization of $S$ by \emph{Euclidean} arc-length, where $l_e(S)$ denotes the Euclidean length of $S$. The Gauss equation reads \begin{align*} \varphi''(s) = f(s)\varphi'(s) + h(s)\varphi(s), \end{align*} for some functions $f,h:[0,l_e(S)]\rightarrow\mathbb{R}$ (here, $f\varphi'$ is the induced connection, and the position vector $\varphi$ is the transversal field, since the Birkhoff normal of the unit circle is its position vector). Let $\xi:[0,l_e(S)]\rightarrow \mathbb{R}^2$ be the Euclidean unit normal field (outward pointing). Taking inner products, we have \begin{align*} h = \frac{\langle\varphi'',\xi\rangle}{\langle \varphi,\xi\rangle}, \end{align*} where we omit the parameter for simplicity. Since $\xi$ is unit and $\varphi$ is a parametrization by Euclidean arc-length, it follows that $\langle \varphi'',\xi\rangle = -k_e$, where $k_e$ is the Euclidean curvature of $S$. Notice that, moreover, the function $g:=\langle\varphi,\xi\rangle$ is the usual support function of $S$.
Now, if $\varphi$ is the affine normal, we get the equality \begin{align*} \frac{k_e}{g} = |h| = [\varphi',\varphi]^2 = [\varphi',\langle\varphi,\xi\rangle\xi + \langle\varphi,\varphi'\rangle\varphi']^2 = \langle\varphi,\xi\rangle^2 = g^2. \end{align*} It follows that $k_e = g^3$. Let now $\theta$ be the parameter of $S$ given by the angle between the tangent direction and the $x$-axis. Then it is known that the Euclidean curvature is given in terms of the support function by \begin{align*}k_e^{-1} = \frac{d^2g}{d\theta^2} + g, \end{align*} and hence the Euclidean support function of the unit circle is a $\pi$-periodic solution of the Ermakov-Pinney equation (see \cite{pinney}), namely \begin{align*} \frac{d^2g}{d\theta^2} + g = g^{-3}. \end{align*} Therefore, by uniqueness it follows that $g = 1$ (cf. \cite[Theorem 5.1]{Ba-Ma-Sho}). Recalling that $g$ is the Euclidean support function of $S$, we have that $S$ is indeed the Euclidean unit circle. \end{proof} Now we prove the three-dimensional version of the previous theorem. \begin{teo} Let $||\cdot||$ be an admissible norm in $\mathbb{R}^3$. Then the Birkhoff normal vector field of the unit sphere $\partial B$ is the affine normal field if and only if the norm is derived from an inner product. \end{teo} \begin{proof} Before we start, we underline that our proof is quite independent of the theory developed here, which is used only as inspiration. Let $p \in \partial B$, and let $E_1,E_2\in C^{\infty}(U)$ be the vector fields given by the (Euclidean) orthonormal principal directions of $\partial B$ at each point of a neighborhood $U$ of $p$. Also, let $\lambda_1,\lambda_2$ denote the (Euclidean) principal curvatures of $\partial B$ at each point. If $\eta$ denotes the Birkhoff-Gauss map of $\partial B$ (which is precisely the position vector, since we are considering the geometry given by $\partial B$), then equality (\ref{exph}) gives the following equations: \begin{align*} h(E_1,E_1) = -\frac{\langle E_1,d\xi_qE_1\rangle}{\langle\eta,\xi\rangle} = -\frac{\lambda_1}{\langle\eta,\xi\rangle} \ \ \mathrm{and}\\ h(E_2,E_2) = -\frac{\langle E_2,d\xi_qE_2\rangle}{\langle\eta,\xi\rangle} = -\frac{\lambda_2}{\langle\eta,\xi\rangle}, \end{align*} for any $q \in U$, where $h$ is, as usual, the affine fundamental form induced by the transversal vector field $\eta$. Also, since $E_1$ and $E_2$ are conjugate directions, it follows that $D_{E_1}E_2$ is tangential. Therefore, we have $h(E_1,E_2) = 0$. Thus, the volume form induced by $h$ is given by \begin{align*} \omega_h(E_1,E_2) = \left(\frac{\lambda_1\lambda_2}{\langle\eta,\xi\rangle^2}\right)^{\frac{1}{2}} = \frac{(\lambda_1\lambda_2)^{\frac{1}{2}}}{\langle\eta,\xi\rangle}. \end{align*} On the other hand, the volume form $\omega$ induced by the Birkhoff normal vector field is defined as \begin{align*} \omega(E_1,E_2) = \mathrm{det}[E_1,E_2,\eta] = \langle\eta,\xi\rangle, \end{align*} where, up to a re-orientation, we may assume $\langle\eta,\xi\rangle > 0$. By definition, $\eta$ is the affine normal if and only if $\omega_h = \omega$. Then the assumption that $\eta$ is the affine normal of $\partial B$ yields \begin{align*} \langle\eta,\xi\rangle = (\lambda_1\lambda_2)^{\frac{1}{4}}. \end{align*} Notice that $\lambda_1\lambda_2$ is the (Euclidean) Gaussian curvature of $\partial B$. Let us use the notation $K := \lambda_1\lambda_2$.
Following \cite[Example 3.4]{nomizu}, the affine normal of $\partial B$ must be given by \begin{align}\label{affnoreq} \eta = K^{\frac{1}{4}}\xi + Z, \end{align} where $\xi$ is the Euclidean normal vector, and $Z$ is the gradient of the function $K^{\frac{1}{4}}$ with respect to $h$. Thus, $Z\in T_qM$ is, at each point $q \in U$, the vector for which \begin{align*} h(Z,X) = X\left(K^{\frac{1}{4}}\right) \end{align*} holds for every smooth vector field $X$. Since $K^{\frac{1}{4}} = \langle\eta,\xi\rangle$, this equality reads \begin{align*} h(Z,X) = X(\langle\eta,\xi\rangle) = \langle D_X\eta,\xi\rangle + \langle \eta,D_X\xi\rangle = \langle \eta,d\xi_qX\rangle. \end{align*} Then, replacing $X$ by $E_1$ and $E_2$, respectively, and using again formula (\ref{exph}) (which gives $h(Z,E_i) = -\lambda_i\langle Z,E_i\rangle/\langle\eta,\xi\rangle$, while the right hand side above equals $\lambda_i\langle\eta,E_i\rangle$), we have \begin{align*} \langle Z,E_1\rangle = -\langle\eta,\xi\rangle\langle \eta,E_1\rangle \ \ \mathrm{and} \\ \langle Z,E_2\rangle = -\langle\eta,\xi\rangle\langle\eta,E_2\rangle. \end{align*} Finally, (\ref{affnoreq}) and the decompositions of $Z$ in the basis $\{E_1,E_2\}$ and of $\eta$ in the basis $\{E_1,E_2,\xi\}$ yield \begin{align*} K^{\frac{1}{4}}\xi + Z = \langle\eta,\xi\rangle\xi -\langle\eta,\xi\rangle\langle \eta,E_1\rangle E_1 -\langle\eta,\xi\rangle\langle\eta,E_2\rangle E_2 = \\ = \langle\eta,\xi\rangle\xi + \langle\eta,E_1\rangle E_1 + \langle\eta,E_2\rangle E_2. \end{align*} Comparing coefficients, we get $(1+\langle\eta,\xi\rangle)\langle\eta,E_i\rangle = 0$ for $i = 1,2$, and since $\langle\eta,\xi\rangle > 0$ it follows that $\langle\eta,E_1\rangle = \langle\eta,E_2\rangle = 0$. Hence $\eta = \langle\eta,\xi\rangle\xi$ at every point, i.e., the position vector of $\partial B$ is always a Euclidean normal vector. Differentiating $\langle\eta,\eta\rangle$ in a tangent direction $X$ then gives $2\langle d\eta(X),\eta\rangle = 2\langle\eta,\xi\rangle\langle d\eta(X),\xi\rangle = 0$, so the Euclidean norm of $\eta$ is constant. Therefore, $\partial B$ must be a Euclidean sphere. \end{proof} Our next concern is an existence problem: Let $||\cdot||$ be an admissible norm yielding a Minkowski geometry in $\mathbb{R}^3$. Can we always guarantee that an immersed surface exists which, when endowed with the Birkhoff-Gauss map, is a Blaschke immersion? We show now that the answer is no, except for the Euclidean subcase. \begin{teo} Assume that $f:M\rightarrow(\mathbb{R}^3,||\cdot||)$ is a connected, compact and immersed surface without boundary, where $||\cdot||$ is admissible. Then the affine normal of $M$ equals its Birkhoff normal if and only if the norm is Euclidean and $M$ is a sphere. \end{teo} \begin{proof} Let $K$ be the (Euclidean) Gaussian curvature of $M$, and let $\eta:M\rightarrow\partial B$ be its Birkhoff normal field. If $\eta$ is the affine normal field, then from the proof of the last theorem we have \begin{align*} \eta = K^{\frac{1}{4}}\xi + Z, \end{align*} where $Z \in T_pM$ is the vector for which $h(Z,X) = X(K^{\frac{1}{4}})$ holds for each $X \in T_pM$. Our result will be a consequence of the fact that the derivatives of $\eta$ and $\xi$ are always tangential. Indeed, for any $p \in M$ and $X \in T_pM$ we have the equality \begin{align*} D_X\eta = X(K^{\frac{1}{4}})\xi + K^{\frac{1}{4}}D_X\xi + D_XZ = h(Z,X)\xi + K^{\frac{1}{4}}D_X\xi + \nabla_XZ + h(X,Z)\eta, \end{align*} and hence $0 = \langle D_X\eta,\xi\rangle = h(Z,X)(1+\langle\eta,\xi\rangle)$. Since $\langle\eta,\xi\rangle$ can be assumed to be positive (after re-orienting $\eta$, if necessary), it follows that $h(Z,X) = 0$ for any $p \in M$ and $X \in T_pM$. Therefore, the Euclidean Gaussian curvature of $M$ is constant. It follows that $M$ must be contained in a Euclidean sphere. Also, we have that $\langle\eta,\xi\rangle$ is constant, since $\langle\eta,\xi\rangle = K^{\frac{1}{4}} = c$, say.
Since $\eta$ can be regarded as the position vector of $\partial B$, and since the Gauss map $\xi$ of the compact surface $M$ (without boundary) is surjective, it follows that the (Euclidean) support function of $\partial B$ is constant. Thus $\partial B$ is a Euclidean sphere, so the norm is Euclidean; and $M$, being a compact surface without boundary contained in a Euclidean sphere, is a Euclidean sphere. \end{proof} \begin{remark} If we drop the hypothesis of $M$ being compact, then we would clearly still have that $M$ is contained in a Euclidean sphere, and that there is a ``portion'' of $\partial B$ which is a piece of a Euclidean sphere. More precisely, this would be the subset of $\partial B$ of points at which $\partial B$ is supported by a hyperplane parallel to some tangent space of $M$. \end{remark}
\section{Introduction} \label{sec:intro} Histopathology is regarded as the gold standard method for cancer diagnosis, covering almost all types of cancer, such as breast, lung, colon and prostate cancer \cite{rubin2008rubin, gurcan2009histopathological, he2012histology}. Suspicious tissue is biopsied, and the biopsy undergoes fixation, sectioning, and finally mounting on a slide. The biopsy section is then subjected to haematoxylin and eosin (H\&E) staining, a routinely used staining procedure that enhances tissue structure and cell morphology. A pathologist then thoroughly examines the H\&E stained slides under a microscope at multiple magnification levels, searching for morphological signatures that indicate the onset or progression of cancerous tissue and whose presence determines whether the tumor should be diagnosed as benign or malignant. The whole process, however, can be very time-consuming, since it often requires the pathologist to switch between magnification levels and jump among different image locations \cite{roa2010experimental}. In addition, the diagnosis can sometimes be subjective, depending heavily on the experience of the pathologist \cite{he2012histology}. In order to address the above problems, computer aided diagnosis (CAD) systems have been proposed to facilitate cancer diagnosis, not only to reduce the workload of the pathologist, but also to improve objectivity and consistency. Despite the work that has been done in the last few decades \cite{demir2005automated, gurcan2009histopathological, he2012histology, veta2014breast}, tumor malignancy classification still remains a challenge for most automatic cancer diagnosis applications due to the tremendous complexity of histopathological images, which arises for various reasons, including staining variations in the specimen treatment process \cite{mccann2015automated} and the diversity of tissue characteristics in different cancers. Therefore, a robust and reliable CAD system for cancer diagnosis has to be designed to capture all discriminative features in histopathological images effectively. However, as has been pointed out by many researchers \cite{demir2005automated, gurcan2009histopathological, he2012histology, spanhol2016breast}, when using traditional classification approaches, the feature engineering step can be very difficult and requires a fair amount of expert domain knowledge. Recently, a wide variety of new deep learning technologies \cite{lecun2015deep, krizhevsky2012imagenet}, such as the convolutional neural network (CNN), first developed by LeCun \emph{et al}\bmvaOneDot \cite{lecun1989backpropagation}, have achieved great success on various computer vision and pattern recognition tasks. Indeed, the CNN has become the state-of-the-art method for image-based classification problems, consistently outperforming traditional machine learning methods. More importantly, a CNN can automatically extract discriminative features from images. As a result, no hand-crafted feature engineering step is required anymore, which saves considerable effort in most applications, including histopathological image classification. \section{Previous work} Due to its superior performance compared to traditional machine learning methods, the CNN has been widely applied to histopathological cancer diagnosis problems. Cire{\c{s}}an \emph{et al}\bmvaOneDot \cite{cirecsan2013mitosis} use a CNN to detect mitosis in breast cancer histological images, winning the ICPR 2012 mitosis detection competition.
Sirinukunwattana \emph{et al}\bmvaOneDot \cite{sirinukunwattana2016locality} propose a spatially constrained CNN for nucleus detection and then a Neighboring Ensemble Predictor (NEP) coupled with a CNN for nucleus classification in colon cancer histological images, and achieve the highest average F1 score for this problem compared to other methods. Although neither of the above two papers works directly on tumor malignancy classification, their results could undoubtedly benefit cancer diagnosis, since both mitosis and nuclear characteristics are important indicators for cancerous tissue detection. Direct work on malignancy classification has also been published. For example, Cruz-Roa \emph{et al}\bmvaOneDot \cite{cruz2014automatic} show that a CNN classifier achieves a balanced accuracy of 84.23\% for the detection of invasive ductal carcinoma, where the best performance of methods using handcrafted features and classifiers is 78.74\%. Similarly, Litjens \emph{et al}\bmvaOneDot \cite{litjens2016deep} also demonstrate that CNNs improve the efficacy of prostate cancer diagnosis. We note that the previous work mentioned above on histopathological image classification using convolutional neural networks was done on whole slide images (WSI), with the patches used for training extracted from the original images at a certain fixed magnification level. However, an experienced pathologist would not determine a diagnosis based on a single magnification level. In practice, it is often required that pathologists evaluate the histopathological slides at multiple magnification levels \cite{roa2010experimental, romo2014discriminant}, as different magnifications give different features. For instance, lower magnification gives global texture information and tissue structure, while higher magnification resolves more of the cellular morphology and sub-cellular details \cite{gurcan2009histopathological}. Sometimes it is difficult to determine a diagnosis based merely on a single magnification level; only by integrating the features from multiple magnification levels can a confident diagnosis be determined. Recently, an image dataset called BreaKHis was released \cite{spanhol2016dataset}, which provides histopathological images of breast tumors at multiple magnification levels (40$\times$, 100$\times$, 200$\times$ and 400$\times$). Both traditional methods using handcrafted features \cite{spanhol2016dataset} and CNN methods \cite{spanhol2016breast} have been applied to this dataset for malignancy classification, and it has been shown that by combining different CNNs using fusion rules, classification accuracy improves by 6\% compared to traditional methods. However, one disadvantage of this approach \cite{spanhol2016breast} is that four CNN classifiers have to be trained, with one classifier specialized for each of the four magnifications. Seeking a better solution to this problem, Bayramoglu \emph{et al}\bmvaOneDot \cite{bayramogludeep} propose a magnification independent approach with both single-task (malignancy) and multi-task (malignancy and magnification) classification, where they ignore the magnification information of the image and train a single CNN classifier for all magnifications. Although the performance is slightly impaired, this indeed improves efficiency.
Nevertheless, when evaluated on the testing sets, both of the previous works \cite{spanhol2016breast, bayramogludeep} on the BreaKHis dataset using CNNs fail to determine a diagnosis for a patient based on features from multiple magnification levels at the same time. Instead, they give a separate classification accuracy for each individual magnification, independent of the other available magnifications. \section{Histopathological case-based classification} \label{sec:approach} In order to build a more reasonable and reliable computer aided diagnosis system, we propose a case-based approach for histopathological malignancy classification, where a case is defined as a sequence of images including one or more images from each of the available levels of magnification for a given dataset. For example, for the BreaKHis dataset, a typical case could consist of one or more images at each of the following magnifications in order: 40$\times$, 100$\times$, 200$\times$ and 400$\times$ (Figure \ref{fig:showcaseimage}). A trained classifier should be able to learn all the features from the different magnification images and give a unique and more accurate result based on all the information given (e.g. tissue structure at lower magnification, cell phenotype at higher magnification), just as a histopathology expert would perform analysis at multiple magnification levels. In this section, we first present our algorithm that constructs a case-based image set from any given histopathological image dataset with multiple magnifications and malignancies (Section \ref{sec:algorithm}). We then introduce a CNN model to classify our histopathological cases (Section \ref{sec:classifier}). Finally, we describe the three performance metrics that will be used for the evaluation of histopathological case-based classification (Section \ref{sec:metrics}). \begin{figure}[ht] \centering \includegraphics[scale=0.32]{figure_showcaseimage} \caption{ A typical histopathological case of breast tumor with different magnifications. } \label{fig:showcaseimage} \end{figure} \subsection{Case-based image set initialization} \label{sec:algorithm} Histopathological image datasets are often given as images at multiple separate magnifications, but not as cases. Therefore, the first step is to build an appropriate number of histopathological cases from the given dataset. To limit the size of the input set, the cases will include exactly one image from each magnification level. Algorithm \ref{algo:casebuild} describes the initialization of a case-based image set from the original dataset with multiple magnifications and malignancies. Put simply, for each case built, the algorithm randomly chooses one image from each subset of images belonging to a given magnification, with the restriction that all images in the same case must have the same malignancy label, which will also be the final class label for the resulting case. For simplicity, we illustrate Algorithm \ref{algo:casebuild} assuming that two types of malignancy (benign and malignant) and four levels of magnification (40$\times$, 100$\times$, 200$\times$ and 400$\times$) are available, which is the case for the BreaKHis dataset. However, this algorithm can be applied to any number of malignancy types and magnification levels. The only parameter passed to the algorithm is the expected number of output cases $k$ (which we assume to be a multiple of the number of malignancy types).
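As an illustration, the construction can be written in a few lines of Python. The following is a minimal sketch of the case-construction procedure rather than our exact implementation; it assumes that images are represented by hashable identifiers (e.g. file paths) collected in a dictionary \texttt{image\_sets} keyed by (malignancy, magnification) pairs.
\begin{verbatim}
import random

def build_cases(image_sets, k):
    """Draw k cases, one image per magnification,
    all sharing the same malignancy label."""
    malignancies = sorted({mal for mal, _ in image_sets})
    magnifications = sorted({mag for _, mag in image_sets})
    X, y, seen = [], [], set()
    for mal in malignancies:
        counter = 0
        while counter < k // len(malignancies):
            case = tuple(random.choice(image_sets[(mal, mag)])
                         for mag in magnifications)
            if case not in seen:   # reject duplicate combinations
                seen.add(case)
                X.append(case)
                y.append(mal)
                counter += 1
    return X, y
\end{verbatim}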
In training set initialization, we want this set size, which we will denote as $k_{\mbox{\scriptsize{\it train}}}$, to be relatively large in order to avoid over-fitting our model later on, but also not too large due to limited computational resources and running time. Therefore, this parameter needs to be fine-tuned for different problem settings, as we will show in more detail in Section \ref{sec:experiments}. Algorithm \ref{algo:casebuild} can be applied to both training and testing sets, depending on the input image sets. Note that in the training phase, a case consists of one single image from each magnification level, but not necessarily from the same particular patient. The images can be randomly selected from different patients as long as they share the same malignancy. This is why patient information does not appear in Algorithm \ref{algo:casebuild}. However, in the testing phase, we may want the cases to be patient specific, which can be achieved by setting patient specific images as the input to Algorithm \ref{algo:casebuild}. After the whole process, the initialized case-based image sets are ready for training or evaluation. \begin{algorithm} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{image sets I$_{\mbox{\it Malignancy} \times \mbox{\it Magnification}}$, where {\it Malignancy} is the set of malignancy types, e.g. \{benign, malignant\}, and {\it Magnification} is the set of magnifications, e.g. \{40, 100, 200, 400\}} \Output{$X$ $\leftarrow$ data, $y$ $\leftarrow$ label} \Parameter{\textit{k} = expected number of output cases} \BlankLine \emph{Initialize i = 0}\; \ForEach{$mal \in \mbox{\it Malignancy}$}{ \emph{Initialize counter$_{mal}$ = 0} \; \Repeat{counter$_{mal}$ $\geq$ $k/|\mbox{\it Malignancy}|$}{ \emph{Initialize new current combination $X_i$}\; \ForEach{$mag \in \mbox{\it Magnification}$}{ randomly pick an image from image set I$_\text{\it (mal, mag)}$ and add to $X_i$; } \If{current combination $X_i$ not in $X$}{ add $X_i$ to $X$;\\ $y_i$ = $mal$; \\ $i$ += 1;\\ \textit{counter$_{mal}$} += 1; } } } \caption{Case-based image set initialization} \label{algo:casebuild} \end{algorithm} \subsection{ResNet-based classifier} \label{sec:classifier} We choose deep residual neural networks (ResNets) to classify the histopathological cases. ResNets are a special kind of convolutional neural network that place shortcut connections in parallel to regular convolutional layers. The design of residual units is quite flexible, and they can be further engineered to obtain better performance \cite{he2016deep, he2016identity}. We start with a simple 18-layer ResNet model (ResNet-18), as this model can easily be adapted to even deeper models (e.g. 152 layers) if required. The overall architecture of ResNet-18 is shown in Figure \ref{fig:architecture}. The model contains two types of residual units: the residual unit with an identity shortcut and the residual unit with a projection shortcut. The only difference between the two types is that in the projection shortcut, an additional convolutional layer is required due to the change of dimension from input to output. Each residual unit contains six sequential components: Batch Normalization, Rectified Linear Unit (ReLU), Convolution, Batch Normalization, ReLU and Convolution. An average pooling layer is used before the final fully connected layer.
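For concreteness, a single residual unit of this form can be sketched with the Keras functional API (Keras 2-style layer names shown). This is an illustrative sketch rather than our exact implementation; 3$\times$3 convolutions are assumed, and the \texttt{project} flag selects the projection shortcut of Figure \ref{fig:architecture}b.
\begin{verbatim}
from keras.layers import Conv2D, BatchNormalization, Activation, add

def residual_unit(x, filters, stride=1, project=False):
    # Six sequential components: BN, ReLU, Conv, BN, ReLU, Conv.
    y = BatchNormalization()(x)
    y = Activation('relu')(y)
    y = Conv2D(filters, (3, 3), strides=stride, padding='same')(y)
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = Conv2D(filters, (3, 3), padding='same')(y)
    if project:
        # Projection shortcut: an extra convolution matches the
        # change of dimension from input to output.
        x = Conv2D(filters, (1, 1), strides=stride, padding='same')(x)
    return add([x, y])  # the shortcut joins the residual branch here
\end{verbatim}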
\begin{figure}[ht] \centering \includegraphics[scale=0.45]{figure_architecture} \caption{ (a) Residual unit with identity shortcut; (b) Residual unit with projection shortcut; (c) Overall architecture of the ResNet used in the paper. } \label{fig:architecture} \end{figure} \subsection{Metrics} \label{sec:metrics} Spanhol \emph{et al}\bmvaOneDot \cite{spanhol2016breast} have introduced two ways to report method performance for medical image classification: the image recognition rate and the patient recognition rate. Here, to accommodate our case-based approach, we use a case-level metric instead of an image-level metric. The case recognition rate is defined as follows: \begin{equation*} \text{Case Recognition Rate} = \frac{N_{rec}}{N_{all}} \tag{1}\label{eq:1} \end{equation*} where \textit{N}$_{all}$ is the total number of cases constructed for the testing set, and \textit{N}$_{rec}$ is the number of correctly classified cases. Unlike the case recognition rate, the patient recognition rate takes patient information into account. For each patient $p$ in the testing set, let \textit{N}$_{p_{all}}$ be the total number of cases that belong to patient $p$, and \textit{N}$_{p_{rec}}$ be the number of correctly classified cases for patient $p$; then the patient recognition rate can be defined as \cite{spanhol2016breast}: \begin{equation*} \text{Patient Recognition Rate} = \frac{\sum_{p} (N_{p_{rec}} / N_{p_{all}})} {\text{Total Number of Patients}}. \tag{2}\label{eq:2} \end{equation*} In addition to the above two recognition rates, we also introduce a new metric defined at the diagnosis level. First, we give a final diagnosis to each patient in the testing set based on a simple voting strategy, where the diagnosis is benign if the fraction of cases classified as benign for patient $p$ exceeds a threshold, $\mbox{\it malignancy\_threshold}$: \begin{equation*} \text{Patient Diagnosis}_{p} =\left\{ \begin{array}{@{}ll@{}} benign, & \text{if}\ \frac{N_{p_{benign}}}{N_{p_{all}}} > \mbox{\it malignancy\_threshold} \\ malignant, & \text{otherwise} \end{array}\right. \tag{3}\label{eq:3} \end{equation*} where \textit{N}$_{p_{benign}}$ is the number of cases classified as benign for patient $p$. For example, if $\mbox{\it malignancy\_threshold}$ is set to 0.5, patient $p$ is assigned a diagnosis of benign if more than half of the cases for patient $p$ are classified as benign. Based on the diagnoses assigned to the patients, the diagnosis accuracy for the classification is defined as follows: \begin{equation*} \text{Diagnosis Accuracy} = \frac{\text{Number of Correctly Diagnosed Patients}}{\text{Total Number of Patients}}. \tag{4}\label{eq:4} \end{equation*} We believe the diagnosis accuracy metric should be emphasized more in future research on histopathological diagnosis problems: it is of utmost clinical importance that a computer-aided diagnosis system be able to give a final diagnosis for a patient, and the rate at which this diagnosis is correct is a direct measure of the system's performance. \section{Experiments and Results} \label{sec:experiments} This section evaluates the case-based approach for histopathological diagnosis proposed in Section \ref{sec:approach}. \subsection{Dataset} To test the proposed case-based approach for histopathological diagnosis, we use the BreaKHis database \cite{spanhol2016dataset}, a recently released dataset of breast tumor histopathological images.
BreaKHis contains both benign and malignant breast tumor images, collected from 82 patients at multiple magnification levels (40$\times$, 100$\times$, 200$\times$ and 400$\times$). Each patient may have a different number of images for each magnification. In total, there are 2480 benign and 5429 malignant images, with each image acquired in three channels (RGB). Besides the histopathological images, BreaKHis also provides a five-fold protocol for testing. We use the same testing protocol as previous work \cite{spanhol2016breast, bayramogludeep}, where the whole dataset is split into a training (70\%) and a testing (30\%) set for five trials, such that none of the images associated with the patients in the training set are used in the testing set. In the end, 54 of the 82 patients are grouped into the training set, and the remaining 28 patients are used as evaluation samples in all five folds. The BreaKHis images are originally of size 700$\times$460$\times$3. To speed up the processing times and lessen the memory requirements, the images are resized to 100$\times$100$\times$3 for both the training and testing sets. \subsection{Implementation} With all images from BreaKHis, we apply Algorithm \ref{algo:casebuild} to build histopathological cases for both the training and testing sets. To find the best parameter $k_{\mbox{\scriptsize{\it train}}}$ for Algorithm \ref{algo:casebuild} when initializing the training sets, we utilize fold 1 for a series of experiments, setting the number of output cases over a range of values from 100 to 40,000, as shown in Figure \ref{fig:choosecasenumber}. After comparing the case-level accuracies for the different sizes of training sets, we choose the smallest training set size that gives the best accuracy as our final $k_{\mbox{\scriptsize{\it train}}}$. Note that for some of the smaller training set sizes, we repeat some of the experiments independently three or five times, since the performance of the trained model can vary considerably at these sizes. The final chosen parameter $k_{\mbox{\scriptsize{\it train}}}$ for training set initialization achieves a balance between computational resource requirements and performance. On the other hand, for testing set initialization, we simply use $k_{\mbox{\scriptsize{\it test}}}$ = 30,000 for the size of the testing sets, as evaluation on thirty thousand cases gives a quite stable estimate of model performance according to our trial experiments. For all experiments, we implement our ResNet classifier using Keras, a deep learning library written in Python with either TensorFlow or Theano as a backend \cite{chollet2015keras}; we use Theano as the backend in this paper. To optimize the weights, we use stochastic gradient descent, with a batch size of 100 to compute the gradients using backpropagation. The initial learning rate is set to 0.001, decays by 1e-6 with each update, and Nesterov momentum is set to 0.9. We train our neural network for 100 epochs. All experiments are done on 4 Intel Xeon(R) E3-1271 v3 processors with an NVIDIA Quadro K2000/PCIe/SSE2 GPU with CUDA 7.5 installed, running Ubuntu 16.04 LTS.
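For reference, the optimizer configuration above corresponds to the following Keras calls. This is a sketch rather than our verbatim training script: \texttt{model}, \texttt{X\_train} and \texttt{y\_train} are placeholders, and the binary cross-entropy loss assumes a single sigmoid output unit for the benign/malignant decision.
\begin{verbatim}
from keras.optimizers import SGD

# Initial learning rate 0.001, decay 1e-6 per update,
# Nesterov momentum 0.9, as described above.
sgd = SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer=sgd, loss='binary_crossentropy',
              metrics=['accuracy'])
# Batches of 100 cases, trained for 100 epochs.
model.fit(X_train, y_train, batch_size=100, epochs=100)
\end{verbatim}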
\subsection{Results} First, regarding the choice of $k_{\mbox{\scriptsize{\it train}}}$: as shown in Figure \ref{fig:choosecasenumber}, as the total number of cases used for training increases, the testing accuracy also increases and finally reaches a plateau. When the model is trained on only 100 cases, there is a large variation in model performance across five independently conducted experiments. In the worst case, for $100$ cases, the performance is not much better than a random guess. However, the model performance is significantly improved when trained on a large number of cases, and the variation becomes smaller as well. From the bottom plot in Figure \ref{fig:choosecasenumber}, we can see that the curve starts to converge when the number of cases is increased to $5,000$, and reaches a maximum at around $10,000$. To understand the effect of the choice of $k_{\mbox{\scriptsize{\it train}}}$ on the running time: when $k_{\mbox{\scriptsize{\it train}}}$ is set to $40,000$, the total training time required for a single fold is around 900 seconds per epoch, while it is 2 seconds per epoch for $k_{\mbox{\scriptsize{\it train}}}$ set to $100$. By setting our parameter $k_{\mbox{\scriptsize{\it train}}}$ to $10,000$, we reduce the running time by around a factor of four, from 900 seconds to 226 seconds, compared to $k_{\mbox{\scriptsize{\it train}}}$ equal to $40,000$, without sacrificing accuracy. Therefore, we set our parameter $k_{\mbox{\scriptsize{\it train}}}$ to $10,000$ in the training set initialization algorithm for all the following experiments. \begin{figure}[ht] \centering \includegraphics[scale=0.4]{figure_choosecasenumber} \caption{Performance in terms of case-level accuracy versus the number of histopathological cases $k_{\mbox{\scriptsize{\it train}}}$ used for training. The top table shows the testing accuracies of each experiment; the bottom plot is a visualization of the table. } \label{fig:choosecasenumber} \end{figure} With the parameters $k_{\mbox{\scriptsize{\it train}}}$ and $k_{\mbox{\scriptsize{\it test}}}$ set, we can then thoroughly evaluate our case-based approach under the five-fold testing protocol, using the metrics described in Section \ref{sec:metrics}. For each fold, we first build the case-based training and testing sets using Algorithm \ref{algo:casebuild}, setting $k_{\mbox{\scriptsize{\it train}}}$ = 10,000 for the training set and $k_{\mbox{\scriptsize{\it test}}}$ = 30,000 for the testing set. Note that both the training and testing sets are balanced in terms of the different malignancy types. After the models are trained, we then evaluate the model performance using the following three metrics: case recognition rate, patient recognition rate, and diagnosis accuracy. For the diagnosis of benign or malignant, we set $\mbox{\it malignancy\_threshold}$ to 0.5. Table \ref{table:accuracies} shows the final results. \begin{table}[ht] \caption{ Performance of the case-based approach for histopathological malignancy diagnosis based on case-level, patient-level and diagnosis-level accuracy. } \centering \begin{tabular}{l C{1.2cm} C{1.0cm} C{1.0cm} C{1.0cm} C{1.0cm} | C{1.8cm}} \hline\noalign{\smallskip} Accuracy Type & Fold 1 & Fold 2 & Fold 3 & Fold 4 & Fold 5 & Average\\ \noalign{\smallskip} \hline \noalign{\smallskip} Case Recogn. Rate & 0.9246 & 0.8596 & 0.9355 & 0.9220 & 0.9323 & 0.9148 \\ Patient Recogn. Rate & 0.8731 & 0.8424 & 0.8753 & 0.8090 & 0.9182 & 0.8636 \\ Diagnosis Accuracy & $25/28$ & $23/28$ & $26/28$ & $23/28$ & $27/28$ & 0.8857 \\ \hline \end{tabular} \label{table:accuracies} \end{table} Our case-based approach gives average accuracies of 91.48\% (case-level), 86.36\% (patient-level) and 88.57\% (diagnosis-level) on the testing sets.
As we are the first to use case-level and diagnosis-level accuracies, we cannot compare the results for these metrics with previous results. However, based on patient-level accuracy, our case-based approach (86.36\%) outperforms the multi-task CNN method (82.13\%, averaged over four magnifications) \cite{bayramogludeep} and the magnification independent single-task CNN method (83.25\%, averaged over four magnifications) \cite{bayramogludeep}, and achieves performance comparable to the best results obtained from the combination of four patch image extraction strategies and three fusion rules using a patch-based method for specific magnifications (40$\times$: 90.0\%; 100$\times$: 88.4\%; 200$\times$: 84.6\%; 400$\times$: 86.1\%) \cite{spanhol2016breast}. We further investigate the misclassified patients in terms of malignancy diagnosis across all five folds and summarize the results as confusion matrices in Figure \ref{fig:confusionmatrix}. In total, 16 out of 140 patient samples over the five folds are misclassified, with a false positive rate of 5.0\% and a false negative rate of 6.43\%. \begin{figure}[ht] \centering \includegraphics[scale=0.33]{figure_confusionmatrix} \caption{ The confusion matrices of the case-based approach for histopathological malignancy diagnosis in five folds. } \label{fig:confusionmatrix} \end{figure} \section{Conclusion} In this paper, we propose a case-based approach for histopathological malignancy diagnosis using deep residual neural networks. We first introduce an algorithm for case-based image set initialization for both training and testing based on histopathological images at multiple magnification levels, and then present a ResNet-based classifier and three metrics for reporting method performance on medical image classification. Finally, we evaluate our proposed approach using the breast tumor histopathological image dataset BreaKHis. Our results show that the case-based approach achieves better performance than state-of-the-art methods. Moreover, we believe our case-based approach is a more reasonable way to perform histopathological malignancy classification, since it makes the diagnosis decision based on features learned at multiple magnifications. Another principal advantage of our work over the previous work \cite{spanhol2016breast, bayramogludeep} is that our method gives a single diagnosis for the patient, whereas in the previous work four potentially differing diagnoses are given for the same patient, one for each of the four magnification levels. To be clinically applicable, these latter approaches would then require a final voting or similar diagnosis selection step, which is not discussed in those papers \cite{spanhol2016breast, bayramogludeep}. For future work, more complex deep CNN architectures will be investigated.
\section{Introduction} SoCs composed of a pool of heterogeneous processing elements offer performance gains over their homogeneous counterparts, as they allow pairing each task or execution phase of an application with a suitable processing element (PE) based on the state of the system resources. To harness this flexibility, programming models have been introduced where application developers or domain experts guide the compilation process by making task-to-PE mapping decisions based on offline profiling. For example, in CUDA-based programming, programmers have to understand the application, partition it into independent tasks, and manually map them to threads and blocks on the GPU. This model of computation results in a static execution flow and a hand-crafted schedule that is greedily tuned for a single application. In this study, we introduce an integrated compile time and runtime environment that automatically detects parallelism in the user application, transforms the program into a parallel representation, and provides the runtime system with a flexible binary structure. This allows the runtime system to dynamically schedule and launch these tasks in parallel on heterogeneous resources based on the system state rather than rely on a hand-crafted static schedule. \begin{table}[!t] \centering \scalebox{0.73}{ \begin{tabular}{|l|c|cc|cc|c|} \hline \multirow{2}{*}{\textbf{}} & \multicolumn{1}{c|}{\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}} Dedicated\\ Framework\end{tabular}}}} & \multicolumn{2}{c|}{\textbf{Program Analysis}} & \multicolumn{2}{c|}{\textbf{Target Architecture}} & \multirow{2}{*}{\textbf{End-to-End}} \\ \cline{3-6} \multicolumn{1}{|c|}{} & & \multicolumn{1}{c|}{\textbf{Static}} & \textbf{Dynamic} & \multicolumn{1}{c|}{\textbf{Multicore}} & \textbf{Hetero. SoC} & \\ \hline Tensorflow\cite{abadi2016tensorflow} & \CheckmarkBold & \multicolumn{1}{c|}{\CheckmarkBold} & & \multicolumn{1}{c|}{\CheckmarkBold} & \CheckmarkBold & \CheckmarkBold \\ \hline Halide\cite{ragan2013halide} & \CheckmarkBold & \multicolumn{1}{c|}{\CheckmarkBold} & & \multicolumn{1}{c|}{\CheckmarkBold} & & \CheckmarkBold \\ \hline HPVM\cite{kotsifakou2018hpvm} & \CheckmarkBold & \multicolumn{1}{c|}{\CheckmarkBold} & & \multicolumn{1}{c|}{\CheckmarkBold} & \CheckmarkBold & \CheckmarkBold \\ \hline Chi et al.\cite{chi2021extending} & \CheckmarkBold & \multicolumn{1}{c|}{\CheckmarkBold} & & \multicolumn{1}{c|}{} & \CheckmarkBold & \CheckmarkBold \\ \hline Parwiz \cite{ketterlin2012profiling} & & \multicolumn{1}{c|}{\CheckmarkBold} & \CheckmarkBold & \multicolumn{1}{c|}{\CheckmarkBold} & & \\ \hline SD3 \cite{kim2010sd3} & & \multicolumn{1}{c|}{\CheckmarkBold} & \CheckmarkBold & \multicolumn{1}{c|}{\CheckmarkBold} & & \\ \hline Wang et al.
\cite{wang2014integrating} & & \multicolumn{1}{c|}{\CheckmarkBold} & \CheckmarkBold & \multicolumn{1}{c|}{\CheckmarkBold} & & \CheckmarkBold \\ \hline Kremlin \cite{garcia2011kremlin} & & \multicolumn{1}{c|}{\CheckmarkBold} & \CheckmarkBold & \multicolumn{1}{c|}{\CheckmarkBold} & & \CheckmarkBold \\ \hline Ours & & \multicolumn{1}{c|}{\CheckmarkBold} & \CheckmarkBold & \multicolumn{1}{c|}{\CheckmarkBold} & \CheckmarkBold & \CheckmarkBold \\ \hline \end{tabular} } \caption{Related work on program analysis and workload parallelization} \vspace{-6mm} \label{tab:related-work} \end{table} Programming models such as OpenMP\cite{dagum1998openmp} and Pthreads\cite{butenhof1997programming} offer interfaces such as pragma labels and thread bindings through which users explicitly specify the parallelism in their applications. Program analysis tools have been introduced to enable transforming a user application into a parallel representation based on static~\cite{ragan2013halide, abadi2016tensorflow, chi2021extending, kotsifakou2018hpvm} or dynamic \cite{kim2010sd3,garcia2011kremlin,wang2014integrating,ketterlin2012profiling} analysis, as listed in Table~\ref{tab:related-work}. These tools also vary in their approach to parallelism detection, from instruction-level granularity targeted at multi-core architectures to task-level granularity for heterogeneous architectures. Static parallelization methods require the user to express the application in an explicit data flow style, either using a domain-specific language (DSL) or a dedicated framework. Among the dynamic methods, our approach is the only one that targets both homogeneous multi-core architectures and heterogeneous SoCs in an end-to-end integrated compile time and runtime flow. Our approach employs profile-guided dynamic analysis of memory access patterns, using a combination of memory tuples, loop-access patterns and function pointers to infer the runtime control states that are necessary for task-level dependence analysis. It offers the ability to retarget user applications with coarse-grained computation tasks to heterogeneous architectures through code transformations that enable parallel task execution in one unified compile time and runtime framework. The key technical contributions of this study are as follows: \begin{itemize} \item We introduce a novel profile-guided memory analysis approach to detect the data dependencies among coarse-grained tasks in a given application and expose the parallelism among those tasks. \item We present a methodology that partitions the user application into serial and parallel tasks following a fork-join programming model and compiles it into an application binary representation with embedded parallelism and instrumentation, such that the runtime system can issue and execute all independent tasks concurrently. \item We integrate the profile-guided parallel program generation tool flow with an open-source runtime and demonstrate an end-to-end system that is able to compile and execute real-life applications on off-the-shelf platforms. \end{itemize} We demonstrate the ability to identify task-level parallelism in real-life applications, transform the user application for parallel execution, and successfully execute those tasks in parallel, first using the event-driven simulation environment DS3\cite{arda2020ds3}.
After validating the ability to extract parallelism, we demonstrate functionally correct task-level parallelism in our runtime for the parallelized applications on a homogeneous 8-core Xeon processor. Finally, we demonstrate functionally correct parallel execution on an emulated heterogeneous SoC composed of 3 ARM CPUs and an FPGA-based accelerator. This emulation platform illustrates not only our ability to process a single application with a parallel execution flow but also our ability to execute multiple dynamically arriving applications, supporting parallel execution at both the task and application levels across a heterogeneous set of resources. \begin{figure*}[!h] \centerline{\includegraphics[width=0.98\textwidth]{updated-figure/toolflow.pdf}} \caption{Overall tool flow involves \emph{Pre-processing} (Section~\ref{sec:preprocess}), \emph{Data DAG Generation} (Section~\ref{sec:data-dag}), followed by \emph{Parallel Code Generation} (Section~\ref{sec:codegen}).} \label{fig:ToolFlow} \end{figure*} \begin{figure*}[!ht] \centerline{\includegraphics[width=1.0\textwidth]{updated-figure/code-example.pdf}} \caption{The state of the user application as it is processed through the tool flow shown in Figure~\ref{fig:ToolFlow}. (a) User application with function pointers, pointers for passing variables and an unknown loop iteration count; (b) Instrumented user application for tracing during pre-processing; (c) Flattened user application after the pre-processing step; (d) Parallelized user application instrumented with a counter-based conditional wait.} \vspace{-4mm} \label{fig:code-example} \end{figure*} \section{Profile Guided Parallel Program Generation} \label{sec:methodology} \subsection{Overview}\label{sec:overview} The overall flow of profile-guided parallel program generation, as illustrated in Figure~\ref{fig:ToolFlow}, involves \textit{Preprocessing} (Step-1) and \textit{Application Data DAG Generation} (Step-2), followed by \textit{Parallel Code Generation} (Step-3). In this design flow, we leverage TraceAtlas~\cite{uhrie2020automated} for profile-based program analysis and extend the Compiler Integrated Extensible DSSoC Runtime (CEDR)~\cite{MackTECS22} for runtime task scheduling. TraceAtlas\cite{uhrie2020automated} offers flexible interfaces that support rapid dynamic profiling and trace analysis of LLVM IR~\cite{lattner2004llvm} with lower resource requirements than other frameworks~\cite{aladdinShao2014,Wasabi2019Lehmann,luk2005pin}. We use TraceAtlas to collect the memory address ranges accessed by each basic block in the IR along with the runtime control states of those blocks. We then use this information to identify tasks that can be executed in parallel during a task-level memory analysis of the user application. The CEDR ecosystem allows integrating compile-time application analysis with a Linux-based runtime system. We choose CEDR over other runtime frameworks~\cite{augonnet2011starpu, donyanavard_sparta_2016, donyanavard_sosa_2019, maity_2021_SEAMSSelfOptimizing, martins_hierarchical_2019,tan_picos++_2019, moazzemi_2019_HESSLEFREEHeterogeneous} as it enables compilation and development of user applications for heterogeneous SoCs and evaluation of pre-silicon heterogeneous hardware configurations under dynamically arriving workload scenarios through distinct plug-and-play integration points in a unified workflow.
\subsection{Program Model} Our system partitions the user application into two types of tasks, \emph{Type-1 Task} and \emph{Type-2 Task} regions, to represent the execution flow as described below. \subsubsection{Type-1 Task} These tasks represent code segments that have low computational complexity, serve auxiliary functions, and are suitable for executing on a CPU core in a single thread. \subsubsection{Type-2 Task} These tasks represent compute-intensive segments of the execution at the task level, such as FFT and matrix multiplication. We assume that there is accelerator support for \emph{Type-2 Tasks} and that they can be executed in parallel using multi-threading on the CPU or on the accelerator, depending on the state of the system resources. Users are required to register Type-2 tasks with C-models and interface functions to invoke the driver of any supported accelerators. Here the C-model is flexible in the sense that users can either use third-party libraries or implement it manually. \subsection{Running Example} We utilize a running example that consists of commonly used program structures with a representative computation task in our problem domain, namely FFT (Fast Fourier Transform). Figure~\ref{fig:code-example} presents the workings of our tool flow for the user application. Figure~\ref{fig:code-example}a is crafted in such a way that it includes features for which static analysis is not feasible for detecting data dependence, such as the use of pointers for passing variables (lines 2 and 8), function pointers for tasks (lines 6 and 11), different function spaces while calling the tasks (lines 18 and 20), and a loop iteration count that is not known at compile time (line 19). The pre-processing step refactors the user application by dynamically inlining functions, redirecting function pointers, and flattening loops that iterate over \emph{Type-2 Tasks}. This prepares the application for data DAG generation, which captures concurrent execution paths over \emph{Type-2 Tasks} through dynamic memory analysis followed by control and data flow analysis. Finally, the refactored code is instrumented with pthread insertions so that the runtime system can schedule the detected parallel Type-2 Tasks concurrently. In the following subsections, we present the details of our approach by walking through the steps of transforming a user application written in C/C++ into a representation that allows the runtime system to execute tasks without dependencies in parallel. \subsection{\textbf{Step 1}: Pre-processing} \label{sec:preprocess} In this step, we implement a profile-guided method to flatten the loops and inline the functions that interface with \emph{Type-2 Tasks}. The pre-processing step parses the user application and automatically instruments it with functions, as illustrated in Figure~\ref{fig:code-example}b (lines 3, 15, 18 and 20), to be able to trace each Type-2 Task and loop identifier. The instrumented user application is compiled and executed to collect profiling data on loop trip counts and function pointer destinations. The pre-processing step then parses the instrumented user application a second time, flattens the loops, and redirects the Type-2 Task calls. In our running example, assuming that the loop trip count is two, the output of the pre-processing step is illustrated in Figure~\ref{fig:code-example}c.
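To make the profile-guided rewriting concrete, the toy Python sketch below shows how the collected profile (loop trip counts and resolved function-pointer targets, as gathered from the instrumented run of Figure~\ref{fig:code-example}b) can drive the flattening pass. The identifiers and the string-template representation of statements are purely illustrative and are not the format used by our tool, which rewrites the parsed source instead.
\begin{verbatim}
# Hypothetical profile from one run of the instrumented binary:
# loop ids map to observed trip counts, call sites map to the
# task each function pointer resolved to.
profile = {
    "loops": {"main:19": 2},
    "calls": {"main:18": "ReadData", "main:20": "FFT"},
}

def flatten_loop(loop_id, body, profile):
    """Unroll a loop over Type-2 tasks using the profiled trip count."""
    flat = []
    for i in range(profile["loops"][loop_id]):
        flat.extend(stmt.format(i=i) for stmt in body)
    return flat

# Produces ReadData(a0), FFT(a0, b0), WriteData(b0), ReadData(a1), ...
# matching the flattened code of the running example.
print(flatten_loop("main:19",
                   ["ReadData(a{i})", "FFT(a{i}, b{i})", "WriteData(b{i})"],
                   profile))
\end{verbatim}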
\subsection{\textbf{Step 2}: Data DAG Generation} \label{sec:data-dag} The implementation flow for this step is illustrated in Figure~\ref{fig:ToolFlow}b, where we start by compiling the preprocessed application with Clang to generate the LLVM IR. Recall that a user application is represented by a \emph{Control DAG}, where each node represents either a \emph{Type-1} or a \emph{Type-2} task and the edges between the nodes represent the flow of execution. Here a task can contain multiple basic blocks. We trace basic block control flags such as \texttt{BasicBlockEnter} and \texttt{BasicBlockExit}, along with task control flags such as \texttt{TaskEnter} and \texttt{TaskExit}, using TraceAtlas~\cite{uhrie2020automated} in the \emph{Dynamic Profiling} stage. We generate the \emph{Control DAG} structure of the application based on these four control flags. In our running example, the serial flow of program execution (\emph{ReadData(a0), FFT(a0,b0,.,.), WriteData(b0), ReadData(a1), FFT(a1,b1,.,.), WriteData(b1)}) illustrated in Figure~\ref{fig:code-example}c corresponds to the \emph{Control DAG} with six nodes ($0\rightarrow1\rightarrow2 \rightarrow3\rightarrow4 \rightarrow5$) shown in Figure~\ref{fig:ToolFlow}b. The entry and exit points collected at the task level allow segmenting the program into Type-1 and Type-2 tasks. Each time \texttt{TaskEnter} or \texttt{TaskExit} is seen, a new node is pushed into the Control DAG, and the basic blocks that appear between \texttt{TaskEnter} and \texttt{TaskExit} are recorded as the basic block elements of the node. The traced memory instructions in TraceAtlas include \texttt{LoadAddress} and \texttt{StoreAddress}, along with a set of LLVM intrinsic memory instructions such as \texttt{MemCpy}, \texttt{MemSet}, and \texttt{MemMove}. The addresses of the memory instructions, the control flags, and the \emph{Control DAG} are used during the \emph{Memory Analysis} stage to detect data dependencies and expose parallelism among the tasks. The \emph{Memory Analysis} parses the \emph{Control DAG} and uses the trace information collected from \emph{TraceAtlas} to generate the load/store memory tuple sets for each node. We define a \emph{load memory tuple set} and a \emph{store memory tuple set} for each task of the Control DAG to represent the contiguously accessed memory space for read and write activities, respectively. Each tuple for a task is composed of a \emph{start address}, an \emph{end address}, the \emph{number of memory accesses}, the \emph{number of bytes accessed}, and a \emph{task label}. The memory tuples are stored in a red-black tree keyed by the start address to reduce indexing time. A visual representation of the memory analysis is shown in Figure~\ref{fig:memory-analysis}, where the y-axis shows the index of each node in the \emph{Control DAG}, and the x-axis shows the address range accessed (load or store) by each node through labeled rectangles in a normalized form. For example, node index one is the first FFT task in the user application, with 2048 samples and a 16,384-byte memory footprint, starting with the load address range followed by the store address range, corresponding to read and write operations respectively. A read-after-write dependence is identified by checking whether the load tuple set of the successor node overlaps with the store tuple set of the predecessor node. A data dependency between predecessor and successor nodes exists only if the predecessor node is the last node to write into the load address space of the successor node.
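The following Python sketch captures the essence of this check; it assumes each Control DAG node carries its load/store tuple sets and an index, and that the \emph{Data DAG} object exposes an \texttt{add\_edge} method. For brevity, a plain dictionary stands in for the red-black tree keyed by start address.
\begin{verbatim}
from collections import namedtuple

# One contiguously accessed address range of a task.
MemTuple = namedtuple("MemTuple",
                      "start end n_accesses n_bytes task")

def overlaps(a, b):
    # Two ranges overlap iff neither ends before the other starts.
    return a.start <= b.end and b.start <= a.end

def process_node(node, last_writer, data_dag):
    """Read-after-write check against the LastWriterMap (sketch)."""
    for ld in node.loads:
        for rng, writer in last_writer.items():
            if overlaps(ld, rng):          # predecessor was last writer
                data_dag.add_edge(writer, node.index)
    for st in node.stores:                 # node becomes the last writer
        for rng in [r for r in last_writer if overlaps(st, r)]:
            del last_writer[rng]
        last_writer[st] = node.index
\end{verbatim}
Because overlapping ranges are evicted before the new store tuple is recorded, an edge is only ever drawn from the \emph{last} writer of an address range, matching the dependence rule above.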
In the running example, referring to the memory analysis result in Figure~\ref{fig:memory-analysis}, Nodes 1 and 3 (\emph{FFT()} and \emph{ReadData(a1)}) have the same store address space. Based on the order of execution, the true dependency is between Nodes 3 and 4 (the second FFT task). The second FFT task (Node 4) has a load address space that does not overlap with the store address space of the first FFT task (Node 1). Finally, we implement the \texttt{LastWriterMap} data structure, as illustrated in Figure~\ref{fig:DAGGen}, to keep track of the address spaces modified by each node, and utilize this representation to identify Type-2 Tasks that can be executed in parallel. The \emph{Data DAG} stage in this phase detects the \emph{Type-2 Tasks} that can be executed in parallel and generates the corresponding \emph{Data DAG}. The \emph{Data DAG} is built on the \emph{Control DAG} structure, with each edge representing a data dependency between tasks. \begin{figure}[t] \centerline{\includegraphics[width=0.98\columnwidth]{updated-figure/memory-analysis.pdf}} \vspace{-2mm} \caption{ Memory Dependence Analysis. The address ranges that a node accesses when reading and writing are shown with the load and store tuple sets. When the load tuple set of the successor does not overlap with the store tuple set of the predecessor, they do not have a data dependency.} \vspace{-2mm} \label{fig:memory-analysis} \end{figure} \begin{figure}[t] \centerline{\includegraphics[width=1\columnwidth]{DAGGen.pdf}} \caption{ Step-by-step (\textbf{t1} to \textbf{t3}) \emph{Data DAG} generation for the \emph{Control DAG} of the $A \rightarrow B \rightarrow C$ scenario. \vspace{-4mm} } \label{fig:DAGGen} \end{figure} Figure~\ref{fig:DAGGen} shows the state of the \texttt{LastWriterMap} and the \emph{Data DAG} generation in three steps ($t1$ to $t3$) for a simple scenario composed of independent Tasks A and B feeding their data to Task C, with a \emph{Control DAG} of $A \rightarrow B \rightarrow C$. Each time a new node is visited in the \emph{Control DAG}, its store memory tuple set is written into the \texttt{LastWriterMap}. We overwrite the existing memory tuples and change the pointed-to value to the new node's index if the store memory tuple set of the new node overlaps with the store memory tuple set maintained in the current \texttt{LastWriterMap}. At the end of the second step, the \emph{Data DAG} with two independent nodes is generated, as the load address space of \emph{Task B} ($B.ld$) does not overlap with any store address space in the \texttt{LastWriterMap}. On the other hand, in step 3, edges from \emph{Task A} and \emph{Task B} to \emph{Task C} are generated due to the load-store overlap between $C.ld$ and the \emph{store memory tuples} in the \texttt{LastWriterMap} that point to \emph{Task A} and \emph{Task B}. \begin{figure}[t] \centerline{\includegraphics[width=0.8\columnwidth]{taskReorder.pdf}} \caption{ Task Reordering for the running example from Figure~\ref{fig:code-example}: Nodes 1 and 4 represent two independent FFTs (Type-2 Tasks) with distinct IOs, Nodes 0 and 3 represent reading input data (Type-1 Tasks), and Nodes 2 and 5 represent writing output data (Type-1 Tasks). The algorithm in Figure~\ref{fig:schedule} generates a schedule with the Control DAG nodes reordered following the fork-and-join model.
} \vspace{-6mm} \label{fig:taskReord} \end{figure} \begin{figure}[t] \centering \begin{subfigure}[b]{0.47\textwidth} \centering \includegraphics[width=1\columnwidth]{scheduleAlg.pdf} \caption{Schedule Generation algorithm flow.} \label{fig:schedAlg} \end{subfigure} \hfill \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=1\columnwidth]{scheduleExpl.pdf} \caption{State of the \emph{Schedule} as Schedule Generation progresses.} \label{fig:schedExm} \end{subfigure} \caption{ The Schedule Generation algorithm reorders the DAG nodes for the running example from Figure~\ref{fig:code-example}. } \vspace{-2mm} \label{fig:schedule} \end{figure} \subsection{\textbf{Step 3:} Parallel Code Generation} \label{sec:codegen} We define a group of Type-1 tasks that can be scheduled for execution before a Type-2 task as a \emph{Type-1 Region}, and a group of Type-2 tasks that can be scheduled before a Type-1 task as a \emph{Type-2 Region}. After identifying the Type-2 tasks that can be executed in parallel, we transform the user application into a series of code sections composed of successive Type-1 and Type-2 regions in the \emph{Task Reordering} stage. We implement the \emph{Schedule Generation} algorithm for this transformation and generate a \emph{Schedule DAG} that follows the fork-and-join parallel model, as illustrated in Figure~\ref{fig:taskReord} for our running example from Figure~\ref{fig:code-example}. The \emph{Schedule Generation} algorithm is realized in seven steps, as illustrated in Figure~\ref{fig:schedule}, using the \emph{Data DAG} and \emph{Control DAG} as its inputs. We define a task as ready if all of its parent tasks in the \emph{Data DAG} have been scheduled. Steps 1-3 generate the \emph{Schedule DAG} for the Type-1 Tasks (Nodes 0 and 3) and mark these nodes as visited. Steps 4-6 update the \emph{Schedule DAG} with the Type-2 Tasks (Nodes 1 and 4) and mark these nodes as visited. All steps are repeated, visiting the remaining Type-1 Tasks followed by the remaining Type-2 Tasks, until all DAG nodes have been visited. While our running example in Figure~\ref{fig:code-example}c involves two parallel Type-2 Tasks, the \emph{Schedule Generation} algorithm implements a generalized solution that is capable of clustering \emph{N} parallel Type-2 Tasks into a single region to realize \emph{N-way} parallelism. \begin{figure}[t] \centering \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=0.84\columnwidth]{taskOrg.pdf} \caption{Type-2 tasks execute serially.} \label{fig:serial} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=0.9\columnwidth]{taskNew.pdf} \caption{Tasks rearranged and merged.} \label{fig:counter} \end{subfigure} \hfill \caption{ Code transformation for parallel execution through task merging. } \vspace{-4mm} \label{fig:kernelrep} \end{figure} Referring to the user application shown in Figure~\ref{fig:code-example}c, during the application refactoring process of \emph{Pthread Insertion}, all Type-2 Tasks (FFTs) are swapped with the \textit{enqueue kernel}, as shown in the tool-generated code of Figure~\ref{fig:code-example}d. This representation is our interface for the Type-2 tasks to be scheduled by the runtime resource manager; a sketch of the resulting execution pattern is shown below.
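For illustration, the shape of the transformed Type-2 region in Figure~\ref{fig:code-example}d can be paraphrased as follows. The generated code is C instrumented with pthreads; the Python \texttt{threading} analogue below is only a sketch of the pattern, and \texttt{fft}, \texttt{a0}, \texttt{b0}, \texttt{a1} and \texttt{b1} are placeholders for the registered task and its buffers.
\begin{verbatim}
import threading

counter, cond = 0, threading.Condition()

def worker(task, args):
    global counter
    task(*args)              # Type-2 task runs on the selected PE
    with cond:
        counter += 1         # worker reports completion
        cond.notify()

def enqueue_kernel(task, args):
    # Stand-in for handing the task to the runtime's ready queue.
    threading.Thread(target=worker, args=(task, args)).start()

# Type-2 region: two independent FFTs, no barrier between enqueues.
enqueue_kernel(fft, (a0, b0))
enqueue_kernel(fft, (a1, b1))
with cond:
    while counter < 2:       # counter-based conditional wait
        cond.wait()
# The Type-1 region resumes here and may consume b0 and b1.
\end{verbatim}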
CEDR operates as a background \emph{Daemon Process} in the Linux user space, and applications are submitted through inter-process communication (IPC). The CEDR \emph{Management Thread} parses the dynamically arriving application binaries and, through the \textit{enqueue kernel} function, places Type-2 Tasks whose dependencies are resolved into the ready queue. The scheduler makes task-to-PE mapping decisions for the tasks in the ready queue. CEDR then launches a worker thread for each Type-2 Task to execute on the selected PE and monitors the state of execution through the \emph{PE Tracker}. The trace-based analysis and the \emph{Data DAG} representation establish that the user application in Figure~\ref{fig:code-example} can be refactored from the original structure shown in Figure~\ref{fig:serial} into the Type-1 and Type-2 regions illustrated in Figure~\ref{fig:counter}. Along with this transformation, the runtime environment needs to know the number of independent Type-2 Tasks grouped together into a single region. For this, during the \emph{Pthread Insertion} stage, the code refactoring also automatically inserts \texttt{pthread} initialization and \texttt{conditional wait statements} into the transformed application. In this final code representation, the two enqueue kernels are placed one after another without barrier synchronization, but a while loop waits for the two FFT functions to complete. At run time, the OS thread initializes the counter value to 0 and waits for the counter to meet the condition; for our running example, the conditional wait value is set to two. We updated CEDR to support the execution of parallel \emph{Type-2 Tasks} in a \emph{Type-2 Region}. Given that the data initializations are completed by the OS thread, CEDR places both FFTs into the ready queue, as there is no barrier between the two FFTs. The scheduler then picks up both FFTs from the ready queue and makes the task-to-PE mapping decisions. For each FFT, a worker thread is launched, and both functions are executed in parallel. Through the \emph{PE Tracker}, the CEDR management thread monitors the execution of each FFT. As soon as an FFT task completes, the CEDR worker thread increments the counter value by one and passes this information back to the OS thread. When the counter reaches two, the OS thread resumes its Type-1 region, possibly the next phase that needs the outputs of the two FFTs. The runtime system supports \emph{N-way} parallelism, as the counter value is a parameter generated during the \emph{Pthread Insertion} stage. \begin{figure}[b] \centering \includegraphics[width=0.9\columnwidth]{updated-figure/DAG-figure.pdf} \caption{The tool-generated \emph{Data DAG} of the test applications.} \label{fig:DAG-figure} \end{figure} \section{Experimental Setup} \label{sec:setup} \begin{table}[] \centering \begin{tabular}{|l|c|c|c|} \hline Application & Type-2 task & Phases & Parallelism/Phase \\ \hline 4-Pulse Doppler & FFT(256pt) & 3 & 8, 4, 4 \\ \hline 256-Pulse Doppler & FFT(256pt) & 3 & 512, 256, 256 \\ \hline WiFi-TX & FFT(128pt) & 1 & 10 \\ \hline Radar Correlator & FFT(512pt) & 2 & 2, 1 \\ \hline Temporal Mitigation & GeMM(4x64) & 2 & 2, 1 \\ \hline \end{tabular} \caption{Benchmark applications with the \emph{Type-2 task}, the number of \emph{Type-2 task} phases, and the maximum number of parallel \emph{Type-2 tasks} in each phase.} \label{tab:benchmark} \vspace{-6mm} \end{table} \subsection{Applications} For our analysis, we use Pulse Doppler, WiFi-TX, Radar Correlator and Temporal Mitigation as real-world applications from the domain of software defined radio.
These applications have varying levels of parallelism and help illustrate the generalizability of our approach. \\ {\it Pulse Doppler} determines both the distance and the velocity of an object based on a series of short emitted radar pulses; the user application observes the shift in the frequencies of the return pulses with respect to the input pulse. \\ {\it WiFi TX} implements a WiFi transmit chain, generating a single packet from 64 bits of input data through scrambling, encoding and modulation, followed by forward error correction. \\ {\it Radar Correlator} models the use of a radar pulse to determine the distance to an object by looking at the time delay in the received pulse compared to the input pulse.\\ {\it Temporal Interference Mitigation} receives a signal consisting of low-energy radar signals combined with high-energy communications data and applies a technique known as \textit{successive interference cancellation} to cancel out the communications data and extract the radar signals for further processing. Figure~\ref{fig:DAG-figure} shows the execution phases as a data flow graph (DFG) for each application generated by our tool chain, where a phase is defined as a distinct \emph{Type-2 Region}. For the sake of simplicity, we show the DFG for the 4-Pulse Doppler version, which has three execution phases, but in our evaluations we also use the full-scale implementation with 256 pulses. In Table~\ref{tab:benchmark} we summarize the type of task, the number of execution phases, and the degree of task-level parallelism observed during each execution phase. \subsection{Evaluation Platforms} For our evaluations, we utilize three diverse platforms: DS3~\cite{arda2020ds3}, an event-driven simulator; a homogeneous architecture based on an Intel(R) Xeon(R) CPU E5-2650 v3 multicore processor; and a Xilinx Zynq Ultrascale+ ZCU102 MPSoC development board. DS3 simulates the execution of an application represented as a DAG over a user-specified heterogeneous SoC configuration, where the task-to-processing-element mapping decisions are handled by its built-in Earliest Finish Time (EFT) scheduler. This environment serves as a suitable platform to estimate the performance gains of the task-level parallelism extracted by our tool chain. On the Xeon(R) CPU, each application is processed through our compiler tool chain and then executed through the CEDR environment over 8 cores. This setup allows us to validate the end-to-end integrated compiler and runtime flow on a homogeneous architecture and evaluate the performance gain with respect to serial execution. The Xilinx Zynq Ultrascale+ ZCU102 MPSoC development board is used to emulate a heterogeneous SoC with 3 ARM CPU cores and 1 FFT accelerator. This setup serves three purposes. First, it allows validating our ability to compile a user application for execution on a heterogeneous SoC. Second, we demonstrate our ability to execute parallel FFT tasks in a single application across a pool of heterogeneous resources concurrently. Third, we demonstrate our ability to manage workload scenarios where multiple independent user applications arrive dynamically, realizing parallel execution at both the application and task levels through our integrated compile time and runtime flow. To facilitate data transfer to and from the accelerators, we use direct memory access (DMA) blocks to move data between the host ARM core and the FFT accelerator via the AXI4-Stream protocol.
On the host side, we utilize \textit{udmabuf} to enable contiguous userspace-accessible buffers for transferring data to and from the hardware accelerators. A user application communicates with the accelerators by writing data into a udmabuf buffer, and a DMA engine is then configured to move the data from this buffer into an accelerator for processing. This setup, while allowing experimentation on a heterogeneous hardware configuration, does not offer a realistic SoC representation, since we are not emulating a dedicated NoC. Therefore, we use it strictly for functional verification rather than for realistic performance evaluations. For the FPGA evaluation, we use the Radar Correlator and WiFi-TX. We process them through our compiler tool chain and submit them as single-instance or 100-instance jobs, where the CEDR management thread parses the application binaries, monitors system resources, and schedules tasks based on the Minimum Execution Time (MET), Round Robin (RR) and Earliest Finish Time (EFT) schedulers. We measure application execution time as the difference between the end of the last task and the start of the first task of an application, including the overhead of all scheduling decisions and data transfers to and from the accelerator in between. \section{Results and Analysis} \label{sec:results} \begin{figure*}[t] \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=0.9\columnwidth]{updated-figure/8-pulse-4core.pdf} \caption{4-Pulse Doppler on an SoC with 4 FFTs. } \label{fig:4-pulse-4fft} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=0.9\columnwidth]{updated-figure/8-pulse-8core.pdf} \caption{4-Pulse Doppler on an SoC with 8 FFTs.} \label{fig:4-pulse-8fft} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=0.95\columnwidth]{updated-figure/256-pulse-ds3.pdf} \caption{256-Pulse Doppler on an SoC with 8 FFTs.} \label{fig:256-pulse-ds3} \end{subfigure} \caption{ Execution flow for Pulse Doppler on DS3-simulated SoCs with varying numbers of FFT accelerators: (a) the 8 parallel FFTs in the first stage are executed over 4 FFT accelerators in two rounds, followed by single-round execution in the second and third stages. (b) The 8 parallel FFTs in the first stage are executed over 8 FFT accelerators in one round, followed by single-round execution in the following two stages. (c) Full-scale Pulse Doppler on an SoC with 8 FFTs, where the 512 FFTs in the first stage are evenly distributed as 64 FFT rounds over the 8 accelerators, followed by 32 FFT rounds in each of the subsequent two stages.
} \label{fig:pulse-gantt} \vspace{-4mm} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{updated-figure/256-pulse-cpu.pdf} \caption{ The execution flow of the 256-Pulse Doppler in three stages over the 8-core Xeon processor managed by CEDR matches the execution obtained from DS3 with 8 FFT accelerators, illustrated in Figure~\ref{fig:256-pulse-ds3}. } \label{fig:256-pulse-cpu} \vspace{-2mm} \end{figure} \begin{figure*}[t] \centering \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=0.95\columnwidth]{images/100_Radar_Correlators_MET_exp25.pdf} \vspace{-3mm} \caption{Minimum Execution Time - MET} \label{fig:parallel_radar_correlators_met} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=0.95\columnwidth]{images/100_Radar_Correlators_RR_exp26.pdf} \vspace{-3mm} \caption{Round Robin - RR} \label{fig:parallel_radar_correlators_rr} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=0.95\columnwidth]{images/100_Radar_Correlators_EFT_exp27.pdf} \vspace{-3mm} \caption{Earliest Finish Time - EFT} \label{fig:parallel_radar_correlators_eft} \end{subfigure} \caption{100 instances of the auto-parallelized Radar Correlator application running on the ZCU102 FPGA using three types of schedulers. Coloring encodes the application instance number modulo 10.} \vspace{-6mm} \label{fig:parallel_radar_correlators} \end{figure*} \begin{figure} \centering \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=0.9\columnwidth]{images/Two_Radar_Correlators_EFT_exp51.pdf} \vspace{-3mm} \caption{Two instances of the parallel Radar Correlator running with the EFT scheduler.} \label{fig:two_parallel_radar_correlator_eft} \end{subfigure} \hfill \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=0.9\columnwidth]{images/RadarCorr_and_WiFi_EFT_exp58.pdf} \vspace{-3mm} \caption{ Single instances of the auto-parallelized Radar Correlator with the serial WiFi TX.} \label{fig:radar_correlator_and_wifi_tx_eft} \end{subfigure} \caption{ Demonstration of inter- and intra-application parallelism via simultaneous execution on the ZCU102 FPGA.} \vspace{-4mm} \label{fig:parallel_radar_correlator} \end{figure} \subsection{Functional Verification and Performance Analysis} Pulse Doppler shows a higher degree of parallelism than the other applications we use in this study. Therefore, we start with functional verification based on Pulse Doppler execution through DS3-based simulation over SoCs with 4 and 8 FFT accelerators, as shown in Figure~\ref{fig:4-pulse-4fft} and Figure~\ref{fig:4-pulse-8fft}, respectively. Since the 4-pulse version has 8 FFTs during the first stage (Figure~\ref{fig:4-pulse-4fft}), an SoC with 4 FFT accelerators takes two rounds to complete the first stage, whereas the 8 FFT configuration takes one round. These two figures together illustrate the ability to launch FFT tasks in parallel. Figure~\ref{fig:256-pulse-ds3} shows the 256-Pulse Doppler execution over the 8 FFT SoC configuration, where the first stage of the execution takes 64 rounds to complete and the subsequent two phases take 32 rounds each. Overall, the plots in Figure~\ref{fig:pulse-gantt} show that our tool flow is able to extract parallelism in each stage of the execution and distribute the parallel tasks over the available compute resources.
We demonstrate the execution of the real 256-Pulse Doppler implementation compiled for the 8-core Xeon processor configuration in Figure~\ref{fig:256-pulse-cpu}, where the execution of the parallelized application is managed by CEDR. This plot shows the same flow as Figure~\ref{fig:256-pulse-ds3}, with a three-stage execution in which the first stage takes 64 rounds and the subsequent two stages take 32 rounds each. This plot validates the runtime's ability to distribute FFT tasks as expected. Here we note that the DS3-based execution shows a faster execution time than the Xeon processor-based execution. There are two key factors behind this observation. First, DS3 uses model-based execution, where the FFT compute time is based on its actual execution over the Xilinx FFT IP Core synthesized for the ZCU102 FPGA. The compute time for the FFT accelerator is 128 ns, whereas the execution time for the FFT task on the Xeon processor is 25,000 ns. Furthermore, the overhead associated with the CEDR environment, in terms of parsing the application binary and dispatching tasks to the compute resources, contributes to the increased overall execution time. Such runtime overhead is not modeled in DS3. \begin{table}[t] \centering \begin{tabular}{|l|cc|cc|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{Application}} & \multicolumn{2}{c|}{Whole Application} & \multicolumn{2}{c|}{Only Tasks} \\ \cline{2-5} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{Xeon CPU} & DS3 & \multicolumn{1}{c|}{Xeon CPU} & DS3 \\ \hline Radar Correlator & \multicolumn{1}{c|}{9.6} & 14.8 & \multicolumn{1}{c|}{30.5} & 91.6 \\ \hline Pulse Doppler & \multicolumn{1}{c|}{59.0} & 67.3 & \multicolumn{1}{c|}{73.5} & 84.0 \\ \hline Temporal Mitigation & \multicolumn{1}{c|}{3.9} & 15.5 & \multicolumn{1}{c|}{48.8} & 97.5 \\ \hline WiFi-TX & \multicolumn{1}{c|}{17.0} & 21.2 & \multicolumn{1}{c|}{23.4} & 29.4 \\ \hline Average & \multicolumn{1}{c|}{22.4} & 29.7 & \multicolumn{1}{c|}{44.1} & 75.6 \\ \hline \end{tabular} \caption{ Execution time reduction (\%) with respect to serial execution on a single Xeon core, covering the end-to-end execution of the whole application and the time spent only on the FFT or GeMM tasks.} \vspace{-4mm} \label{tab:speed-result} \end{table} Finally, Table~\ref{tab:speed-result} shows the reduction in execution time with respect to serial single-Xeon-core execution for each application through CEDR on the Xeon processor and through DS3, in terms of the time taken by the whole application and the time spent only on the Type-1 and Type-2 tasks, excluding data loads/stores from/to the disk. Consistent with the observations on the Pulse Doppler plots (Figure~\ref{fig:256-pulse-ds3} and Figure~\ref{fig:256-pulse-cpu}), as shown in Table~\ref{tab:speed-result}, for the computation tasks we see an average of 44.1\% and 75.6\% reduction in execution time with respect to serial execution on the x86-based multi-core processor and in the DS3-based simulation, respectively. For the whole application, the average reduction in execution time drops to 22.4\% and 29.7\%, respectively, because reading from and writing to the disk prolong the execution time. Note that WiFi-TX shows smaller time savings than the other applications; the reason is that the time WiFi-TX spends on FFT is relatively small, as this application involves other compute phases such as Scrambler, Interleaver, and Pilot-Insertion.
\subsection{FPGA-based Emulation and Analysis}
In Figure~\ref{fig:parallel_radar_correlators}, we show a timing analysis for the Radar Correlator that is refactored into parallel execution form through our compiler tool chain. The three plots illustrate the makespan for completing a workload composed of 100 instances of Radar Correlator executed with the MET, RR, and EFT schedulers. This experiment shows that making a greedy choice favoring the accelerator (common in hand-crafted code) only results in FFT tasks starving for the accelerator to become available. MET enforces serial execution on the accelerator and also serves as a baseline for total execution time. The RR and EFT schedulers allow for utilizing the parallelism embedded in the application binary, completing the workload in 41.61 ms (1.48x faster) and 40.66 ms (1.51x faster), respectively. Figure~\ref{fig:parallel_radar_correlator} illustrates the runtime system's ability to execute FFT tasks from two applications, as well as the independent FFTs within an application, concurrently. In Figure~\ref{fig:two_parallel_radar_correlator_eft}, we show the execution of two instances of Radar Correlator arriving concurrently, where a total of four FFT tasks are dispatched to all PEs concurrently. Finally, in Figure~\ref{fig:radar_correlator_and_wifi_tx_eft}, we show the PE utilization based on the concurrent execution of the WiFi TX and Radar Correlator applications. EFT in this case favors assigning the FFT accelerator to WiFi TX, as its FFT has higher latency than that of the Radar Correlator. It splits the use of the ARM cores among the two applications and assigns Cores 2 and 3 to WiFi TX after the completion of the Radar Correlator.
\section{Conclusions} \label{sec:conclusions}
In order to make heterogeneous SoCs accessible, there is a need to integrate the application development, compilation, and runtime processes vertically into a unified ecosystem that enables productive application deployment without requiring users to become hardware experts in the process. Towards this goal, in this study we present an integrated flow where we expose parallelism in a user application through dynamic profiling and memory analysis, and design a flexible binary structure where an application task can be invoked on any of its supported processing elements in the target SoC. We pass the parallelized binary to the runtime system, which handles parsing dynamically arriving applications, scheduling tasks, and completing the workloads. We validate our approach through real-life radar applications executed on a diverse set of platforms. We believe that our integrated end-to-end system allows hardware-agnostic application development, enables exposing parallelism automatically in the user application, and supports successful deployment on both homogeneous multi-core and heterogeneous architectures. The proposed dynamic analysis based method is sensitive to the nature of the inputs, which may trigger different control flows for the same application. Future work will focus on addressing this open problem by merging the \emph{Control DAGs} obtained for different inputs and setting up a back-up control flow to recover the program execution and resolve unpredictable application behavior.
\bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:introduction}
\IEEEPARstart{M}{alwares} have canonically been lumped into categories such as viruses, worms, Trojans, rootkits, etc. Today's advanced malwares, however, often include many components with different functionalities. For example, the same malware might behave as a virus when spreading over a host, behave as a worm when propagating through a network, exhibit \emph{botnet} behavior when communicating with command and control (C2) servers or synchronizing with other infected machines, and exhibit \emph{rootkit} behavior when concealing itself from an intrusion detection system (IDS). A thorough study of all aspects of malware is important for developing security products and computer forensics solutions, but stealth components pose particularly difficult challenges. The ease or difficulty of reparative measures is irrelevant if the malware can evade detection in the first place. While some authors refer to all stealth malwares as \emph{rootkits}, the term rootkit properly refers to the modules that redirect code execution and subvert expected operating system functionalities for the purpose of maintaining stealth. With respect to this usage of the term, rootkits deviate from other stealth features such as elaborate code mutation engines that aim to change the appearance of malicious code so as to evade signature detection without changing the underlying functionality. As malwares continue to increase in quantity and sophistication, solutions that generalize better to previously unseen malware samples/types and that offer sufficient diagnostic information to resolve threats with as little human burden as possible are becoming increasingly desirable. Machine learning offers tremendous potential to aid in stealth malware intrusion recognition, but there are still serious disconnects between many machine learning based intrusion detection ``solutions'' presented by the research community and those actually fielded in IDS software. Sommer and Paxson\cite{sommer2010outside} discuss several factors that contribute to this disconnect and suggest useful guidelines for applying machine learning in practical IDS settings. Although their suggestions are a good start, we contend that refinements must be made to machine learning algorithms themselves in order to effectively apply such algorithms to the recognition of stealth malware. Specifically, there are several flawed assumptions inherent to many algorithms that distort their mappings to realistic stealth malware intrusion recognition problems. The chief among these is the \emph{closed-world} assumption -- that only a fixed number of known classes that are available in the training set will be present at classification time. Our contributions are as follows: \begin{itemize} \item \textit{We present the first comprehensive academic survey of stealth malware technologies and countermeasures.} There have been several light and narrowly-scoped academic surveys on rootkits\cite{li2011survey,kim2012brief,shields2008survey}, and many broader surveys on the problem of intrusion detection, e.g. \cite{axelsson2000intrusion,vasilomanolakis2015taxonomy,zuech2015intrusion}, some specifically discussing machine learning intrusion detection techniques\cite{tsai2009intrusion,garcia2009anomaly,lee1999data,sommer2010outside}.
However, none of these works come close to addressing the mechanics of stealth malware and countermeasures with the level of technical and mathematical detail that we provide. Our survey is \emph{broader in scope} and \emph{more rigorous in detail} than any existing academic rootkit survey and provides not only a detailed discussion of the mechanics of stealth malwares that goes far beyond rootkits, but also an overview of countermeasures, with rigorous mathematical detail and examples for applied machine learning countermeasures. \item \textit{We analyze six flawed assumptions inherent to many machine learning algorithms} that hinder their application to stealth malware intrusion recognition and other IDS domains. \item \textit{We propose an adaptive open world mathematical framework for stealth malware recognition} that obviates the six inappropriate assumptions. Mathematical proofs of relationships to other intrusion recognition algorithms/frameworks are included, and the formal treatment of open world recognition is mathematically generalized beyond previous work on the subject. \end{itemize} Throughout this work, we will mainly provide examples for the \Name{Microsoft Windows} family of operating systems, supplementing where appropriate with examples from other OS types. Our primary rationale for this decision is that, according to numerous recent tech reports from anti-malware vendors and research groups \cite{kaspersky2015,hpe2015,hpe2016,ibm2016,symantecistr2015,symantecistr2016,mcaffee2016,microsoft_security,mandiant_consulting}, malware for the \Name{Windows} family of operating systems is still far more prevalent than for any other OS type (cf. Fig.~\ref{fig:malware_proportions}). Our secondary rationale is that within the academic literature that we examined, we found comparatively little research discussing \Name{Windows} security. We believe that this gap needs to be filled. Note that many of the stealth malware attacks and defenses that apply to \Name{Windows} have their respective analogs in other systems, but each system has its unique strengths and susceptibilities. This can be seen by comparing our survey to parts of \cite{faruki2015android}, in which Faruki et al. provide a survey of generic Android security issues and defenses. Nonetheless, since our survey is about stealth malware, not exclusively \Name{Windows} stealth malware, we shall occasionally highlight techniques specific to other systems and mention discrepancies between systems. \Name{Unix/Linux} rootkits shall also be discussed because the development of \Name{Unix} rootkits pre-dates the development of \Name{Windows}. Any system call will be marked in \Windows{a special font}, while proper nouns are \Name{highlighted differently}. A complete list of \Name{Windows} system calls discussed in this paper is given in Tab.~\ref{tab:SysCalls} of the appendix. \begin{figure}[ht] \centering \subfloat[\label{fig:malware:proportions}Proportion of Malware by Platform Type]{\includegraphics[width=\linewidth]{doughnut}}\\ \subfloat[\label{fig:malware:rates}Growth Rate in Malware Proportion by Platform Type]{\includegraphics[width=\linewidth]{trends}} \cap{fig:malware_proportions}{Malware proportions and rates by platform}{ As shown in \protect\subref*{fig:malware:proportions}, malware designed specifically for the \Name{Microsoft Windows} family of operating systems accounts for over 90\,\% of all malware.
While malware for other platforms is growing rapidly, at the current growth rates shown in \protect\subref*{fig:malware:rates}, the quantity of malware designed for any other platform is unlikely to surpass the quantity of \Name{Windows} malware any time soon. Examining overall growth rates per platform, we see higher growth rates for non-\Name{Windows} malware, but a high growth rate on a small base is still quite small in terms of overall impact. For \Name{Windows}, the 88\% growth rate is on a base of 135 million malware samples, which translates into about 118 million new \Name{Windows} malwares. In comparison, the high growth rate for \Name{Apple iOS}, an increase of more than 230\%, is on a base of 30,400 samples, with the total number of discovered \Name{Apple} malware samples in 2015 just under 70,000, which is very small compared to the number of \Name{Windows} malware samples as well as the 4.5 million \Name{Android} malware samples. Numbers for these plots were obtained from a 2016 HP Enterprise threat report \cite{hpe2016}. } \end{figure}
The remainder of this paper is structured as follows: In Sec.~\ref{sec:stealth_survey} we present the problems inherent to stealth malware by providing a comprehensive survey of stealth malware technologies, with an emphasis on rootkits and code obfuscation. In Sec.~\ref{sec:countermeasure_survey}, we discuss stealth malware countermeasures, which aim to protect the integrity of areas of systems known to be vulnerable to attacks. These include network intrusion recognition countermeasures as well as host intrusion recognition countermeasures. Our discussion highlights the need for these methods to be combined with more generic recognition techniques. In Sec.~\ref{sec:signatures_heuristics}, we discuss some of these more generic stealth malware countermeasures in the research literature, many of which are based on machine learning. In Sec.~\ref{sec:open_world_ids}, we identify six critical flawed algorithmic assumptions that hinder the utility of machine learning approaches for malware recognition and more generic IDS domains. We then formalize an \emph{adaptive open world framework} for stealth malware recognition, bringing together recent advances in several areas of machine learning literature including intrusion detection, novelty detection, and other recognition domains. Finally, Sec.~\ref{sec:conclusion} concludes this survey.
\section{A Survey of Existing Stealth Malware} \label{sec:stealth_survey}
We discuss four types of stealth technology: rootkits, code mutation, anti-emulation, and targeting mechanisms. Before getting into the details of each, we summarize them at a high level. Note that current malware usually uses a mixture of several or all concepts that are described in this section. For example, a rootkit might maintain malicious files on disk that survive reboots, while using hooking techniques to hide these files and processes so that they cannot be easily detected and removed, and applying code mutation techniques to prevent anti-malware systems from detecting running code. \emph{Rootkit technology} refers to software designed for two purposes: maintaining stealth presence and allowing continued access to a computer system. The stealth functionality includes hiding files, hiding directories, hiding processes, masking resource utilization, masking network connections, and hiding registry keys. Not all rootkit technology is malicious; for example, some anti-malware suites use their own rootkit technologies to evade detection by malware.
\Name{Samhain} \cite{chuvakin2003ups,petroni2004copilot}, for example, was one of the first pieces of anti-malware (specifically anti-rootkit) software to hide its own presence from the system, such that a malware or hacker would not be able to detect and, thus, kill off the \Name{Samhain} process. Whether rootkit implementations are designed for malicious or benign applications, many of the underlying technologies are the same. In short, rootkits can be thought of as performing man-in-the-middle attacks between different components of the operating system. In doing so, different rootkit technologies employ radically different techniques. In this section, we review four different types of rootkits. Unlike rootkit technologies, code mutation does not aim to change the dynamic functionality of the code. Instead, it aims to alter the appearance of code with each generation, generally at the binary level, so that copies of the code cannot be recognized by simple pattern-matching algorithms. Due in part to the difficulties of static code analysis, and in part to protect system resources, the behavior of suspicious executables is often analyzed by running these executables in virtual sandboxed environments. \emph{Anti-emulation} technologies aim to detect these sandboxes; if a sandbox is detected, they alter the execution flow of malicious code in order to stay hidden. Finally, \emph{targeting mechanisms} seek to manage the spread of malware and therefore minimize the risk of detection and collateral damage, allowing the malware to remain in the wild for a longer period of time.
\subsection{Type 1 Rootkits: Malicious System Files on Disk} \sad{Mimic system process files.}{Easy to install, survives reboots.}{Easy to detect and remove.}
The first generation of rootkits masqueraded as disk-resident system programs (e.g., \Windows{ls}, \Windows{top}) on early \Name{Unix} machines, pre-dating the development of \Name{Windows}. These early implementations were predominantly designed to obtain elevated privileges, hence the name ``rootkit''. Modern rootkit technologies are designed to maintain stealth, perform activity logging (e.g., key logging), and set up backdoors and covert channels for command and control (C2) server communication \cite{kim2012brief}. Although modern rootkits (types 2, 3, and 4) rely on privilege escalation for their functionalities, their main objective is stealth; privilege escalation is typically assumed rather than provided \cite{butler2005windows1}. Since first-generation rootkits reside on disk, they are easily detectable via a comparison of their hashes or checksums against those of legitimate system files. Due to early file integrity checkers such as \Name{Tripwire} \cite{kim1994tripwire}, first-generation rootkits have greatly decreased in prevalence and modern rootkits have trended toward memory residency over disk residency\cite{hoglund2005rootkits,szor2005theart}. This should not be conflated with saying that modern malwares are trending away from disk presence -- e.g. \Name{Gapz}\cite{Gaptz} and \Name{Olmasco} \cite{Olmasco} are advanced bootkits with persistent disk data. As we shall see below, many modern rootkits are specifically designed to intercept calls that enumerate files associated with a specific malware and hide these files from the file listing.
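To make the integrity-checking defense mentioned above concrete, the following minimal C sketch (our illustration, not \Name{Tripwire}'s implementation; the baseline digest and file path are hypothetical) flags a system binary whose content digest no longer matches a known-good baseline:
\begin{verbatim}
#include <stdio.h>
#include <stdint.h>

/* FNV-1a digest of a file's bytes; real checkers use cryptographic
 * hashes (e.g., SHA-2), but the detection logic is identical. */
static uint64_t fnv1a_file(const char *path)
{
    uint64_t h = 14695981039346656037ULL;   /* FNV offset basis */
    FILE *f = fopen(path, "rb");
    if (!f) return 0;
    int c;
    while ((c = fgetc(f)) != EOF) {
        h ^= (uint8_t)c;
        h *= 1099511628211ULL;               /* FNV prime */
    }
    fclose(f);
    return h;
}

int main(void)
{
    /* Hypothetical known-good digest recorded at install time. */
    const uint64_t baseline = 0x9b3f1a2c77e01d45ULL;
    uint64_t now = fnv1a_file("C:\\Windows\\System32\\tasklist.exe");
    printf(now == baseline ? "intact\n" : "MODIFIED or unreadable\n");
    return 0;
}
\end{verbatim}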
\subsection{Type 2 Rootkits: Hooking and in-Memory Redirection of Code Execution} \label{sec:hooking} \sad{Code injection by modifying pointers to libraries/functions or by explicit insertion of code.}{Difficult to differentiate genuine and malicious hooking.}{Difficult to inject.}
Second-generation rootkits hijack process memory and divert the flow of code execution so that malicious code gets executed. This rootkit technique is generally referred to as \emph{hooking}, and can be done in several ways \cite{butler2005windows1}; e.g., via modification of function pointers to point to malicious code, or via inline function patching, an approach that overwrites code itself, not just pointers\cite{butler2004vice}. For readability, however, we use the term hooking to refer to any in-memory redirection of code execution. Rootkits use hooking to alter memory so that malicious code gets executed, usually prior to or after the execution of a legitimate operating system call\cite{butler2004vice,szor2005theart}. This allows the rootkit to filter return values or functionality requested from the operating system. There are three types of hooking\cite{hoglund2005rootkits}: user-mode hooking, kernel-mode hooking, and hybrid hooking. Hooking in general is not an inherently malicious technique. Legitimate uses for hooking exist, including hot patching, monitoring, profiling, and debugging. Hooking is generally straightforward to detect, but distinguishing legitimate hooking instances from (malicious) rootkit hooking is a challenging task\cite{hoglund2005rootkits,szor2005theart}.
\subsubsection{User-Mode Hooking} \label{sec:userhooking} \sad{Injection of code into User DLLs.}{Difficult to classify as malicious.}{Easy to detect.}
\begin{figure*}[t] \centering \subfloat[\label{fig:Hooking:Normal}Normal Operation]{\begin{minipage}{.39\textwidth}\centering\includegraphics[scale=0.23]{Hooks_normal}\end{minipage}} \subfloat[\label{fig:Hooking:Infected}Infected Operation]{\begin{minipage}{.59\textwidth}\centering\includegraphics[scale=0.23]{Hooks_modified}\end{minipage}} \cap{fig:Hooking}{Hooking}{This figure shows an example of code redirection on shared library imports. \protect\subref*{fig:Hooking:Normal} displays the normal operation of API calls, where API pointers of the DLL's \Windows{EAT} are copied into the executable's \Windows{IAT}. \protect\subref*{fig:Hooking:Infected} shows how \Windows{IAT} hooking injects (malicious) code from a user DLL before executing the original \texttt{API call 1}.} \end{figure*}
To improve resource utilization and to provide an organized interface for requesting kernel resources from user space, much of the \Name{Win32} API is implemented as dynamically linked libraries (DLLs) whose callable functions are accessible via tables of function pointers. DLLs allow multiple programs to share the same code in memory without requiring the code to be resident in each program's binary \cite{microsoft2007what}. In and of themselves, DLLs are nothing more than special types of portable executable (PE) files. Each DLL contains an \Windows{Export Address Table} (\Windows{EAT}) of pointers to functions that can be called outside of the DLL. Other programs that wish to call these exported functions generally have an \Windows{Import Address Table} (\Windows{IAT}) containing pointers to the DLL functions in their PE images in memory.
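To make this layout concrete, the following user-mode C sketch (our simplified illustration; error handling elided) walks a module's import descriptors and prints each \Windows{IAT} slot: exactly the pointers that an \Windows{IAT} hook overwrites.
\begin{verbatim}
#include <windows.h>
#include <stdio.h>

/* Walk the import descriptors of a loaded module and print each
 * IAT slot, i.e., the function pointers that IAT hooks overwrite. */
void DumpIat(HMODULE mod)
{
    BYTE *base = (BYTE *)mod;
    IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)base;
    IMAGE_NT_HEADERS *nt  = (IMAGE_NT_HEADERS *)(base + dos->e_lfanew);
    DWORD rva = nt->OptionalHeader
                  .DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT].VirtualAddress;
    IMAGE_IMPORT_DESCRIPTOR *imp = (IMAGE_IMPORT_DESCRIPTOR *)(base + rva);

    for (; imp->Name; imp++) {                 /* one descriptor per DLL */
        printf("%s\n", (char *)(base + imp->Name));
        IMAGE_THUNK_DATA *iat = (IMAGE_THUNK_DATA *)(base + imp->FirstThunk);
        for (; iat->u1.Function; iat++)        /* one slot per import */
            printf("  IAT slot at %p -> %p\n",
                   (void *)iat, (void *)iat->u1.Function);
    }
}

int main(void)
{
    DumpIat(GetModuleHandle(NULL));  /* inspect our own executable */
    return 0;
}
\end{verbatim}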
These tables lay the groundwork for the popular user-mode rootkit exploit known as \emph{\Windows{IAT} hooking}\cite{kim2012brief}, in which the rootkit changes the function pointers within the \Windows{IAT} to point to malicious code. Fig.~\ref{fig:Hooking} illustrates both malicious and legitimate usage of \Windows{IAT} hooks. In the context of rootkit \Windows{IAT} hooking, the functions hooked are almost always operating system API functions, and the malicious code pointed to by the overwritten \Windows{IAT} entry, in addition to its malicious behavior, almost always makes a call to the original API function in order to spoof its functionality\cite{hoglund2005rootkits,kim2012brief}. Prior to or after the original API call, however, the malicious code causes the result of the library call to be changed or filtered. By interposing the \Windows{FindFirstFile} and \Windows{FindNextFile} \Name{Win32} API calls, for example, a rootkit can selectively filter files with specific Unicode identifiers so that they will not be seen by the caller. This particular exploit might involve calling \Windows{FindNextFile} multiple times within the malicious code to skip over malicious files and protect its stealth. \Windows{IAT} hooking is nontrivial and has its limitations\cite{hoglund2005rootkits,leitch2011iat,microsoft2007what}. For example, it requires the PE header of the target binary to be parsed and the correct addresses of target functions to be identified. Practically, \Windows{IAT} hooking is restricted to OS API calls, unless specifically engineered for a particular non-API DLL\cite{hoglund2005rootkits,leitch2011iat}. An additional difficulty of \Windows{IAT} hooking is that DLLs can be loaded at different times with respect to the executable launch \cite{hoglund2005rootkits}. DLLs can be loaded either at load time or at runtime of the executable. In the latter case, the \Windows{IAT} does not contain function pointers until just before they are used, so hooking the \Windows{IAT} is considerably more difficult. Further, by loading DLLs with the \Name{Win32} API calls \Windows{LoadLibrary} and \Windows{GetProcAddress}, no entries will be created in the \Windows{IAT}, making the loaded DLLs impervious to \Windows{IAT} hooking\cite{hoglund2005rootkits,microsoft2007what}. Inline function patching, a.k.a. \emph{detouring}, is another common second-generation technique, which avoids some of the shortcomings of \Windows{IAT} hooking \cite{hoglund2005rootkits}. Unlike function pointer modification, detouring directly modifies code in memory. It involves overwriting a snippet of code with an unconditional jump to malicious code, saving the stub of code that was overwritten, executing the stub after the malicious code, and possibly jumping back to the point of departure so that the original code gets executed -- a technique known as \emph{trampolining} \cite{hoglund2005rootkits}. In practice, overwriting generic code segments is difficult for several reasons. First, stub-saving without corrupting memory is inherently difficult\cite{szor2005theart}. Second, the most common instruction sets, including \Name{x86} and \Name{x86-64}, are variable-length instruction sets, meaning that disassembly is necessary to avoid splitting instructions in a generic solution \cite{x64_introduction}.
Not only is disassembly a high-overhead task for stealth software, but even with an effective disassembly strategy, performing arbitrary jumps to malicious code can result in unexpected behavior that can compromise the stealth of the rootkit\cite{szor2005theart}. Consider, for example, mistakenly placing a jump to shell code and back into a loop that executes for many iterations. One execution of the shell code might have negligible overhead, but a detour placed within an otherwise tight loop may have a noticeable effect on system behavior. Almost all existing \Name{Windows} rootkits that rely on inline function patching consequently hook in the first few bytes of the target function \cite{hoglund2005rootkits}. In addition to the fact that an immediate detour limits the potential for causing strange behaviors, many \Name{Windows} compilers for \Name{x86} leave 5 \Windows{NOP} bytes at the beginning of each function. These bytes can easily be replaced by a single-byte jump opcode and a 32-bit address. This is not an oversight. Rather, like hooking in general, detours are not inherently malicious and have a legitimate application, namely \emph{hot patching}\cite{microsoft1999detours}. Hot patching is a technique that uses detours to update binaries in memory. During hot patching, an updated copy of the function is placed elsewhere in memory, and a jump instruction with the address of the updated copy as an argument is placed at the beginning of the original function. The purpose of hot patching is to increase availability without the need for program suspension or reboot \cite{hunt1999detours}. \Name{Microsoft Research} even produced a software package called \Name{Detours} specifically designed for hot patching \cite{hunt1999detours,microsoft1999detours}. In addition, detours are also used in anti-malware \cite{hoglund2005rootkits}. Like \Windows{IAT} hooks, detours are relatively easy to detect. However, the legitimate applications of detours are difficult to distinguish from rootkit uses of detours\cite{hunt1999detours,hoglund2005rootkits}. Detours are also not limited in scope to user mode API functions. They can also be used to hook kernel functions\cite{hoglund2005rootkits}. Regardless of the hooking strategy, when working in user mode, a rootkit must place malicious code into the address space of the target process. This is usually orchestrated through \emph{DLL injection}, i.e., by making a target process load a DLL into its address space. Having a process load a DLL is common, so common that DLL injection can simply be performed using \Name{Win32} API functionality. This makes DLL injections easy to detect. However, discerning benign DLL injections from malicious DLL injections is a more daunting task\cite{protean2013api, protean2013hookex, protean2013remotethread}. Three of the most common DLL injection techniques are detailed in \cite{protean2013api, protean2013hookex, protean2013remotethread}. The simplest technique exploits the \Name{AppInit\_DLLs} registry key and proceeds as follows: a DLL containing a \Windows{DllMain} function is written, optionally with a payload to be executed. The \Windows{DllMain} function takes three arguments: a DLL handle, the reason for the call (process attach/detach or thread attach/detach), and a third value that depends on whether the DLL is statically or dynamically loaded.
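A minimal injectable DLL therefore has the following shape (a hedged sketch; the \texttt{Payload} routine is hypothetical and stands in for whatever the rootkit actually does on attach):
\begin{verbatim}
#include <windows.h>

/* Hypothetical payload; a real rootkit would install hooks here. */
static void Payload(void) { /* ... */ }

BOOL WINAPI DllMain(HINSTANCE hinstDLL,  /* handle to the DLL module   */
                    DWORD fdwReason,     /* reason for calling DllMain */
                    LPVOID lpvReserved)  /* static vs. dynamic load    */
{
    (void)hinstDLL; (void)lpvReserved;
    if (fdwReason == DLL_PROCESS_ATTACH)
        Payload();       /* runs inside every process that loads us */
    return TRUE;         /* FALSE would abort the load */
}
\end{verbatim}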
By changing the value of the \Name{AppInit\_DLLs} registry key to include the path to the DLL to be executed, and changing the \Windows{LoadAppInit\_DLLs} registry key's value to 1, whenever any process loads \Windows{user32.dll}, the injected DLL will also be loaded and the \Windows{DllMain} functionality will be executed. Although the DLL gets injected only when a program loads \Windows{user32.dll}, \Windows{user32.dll} is prevalent in many applications, since it is responsible for key user interface functionality. Whether or not \Windows{DllMain} calls malicious functionality, the \Name{AppInit} technique can be used to inject a DLL into an arbitrary process' address space, as long as that process calls functionality from \Windows{user32.dll}. Note that, although the injection itself involves setting a registry key, which could indicate the presence of a rootkit, the rootkit can change the value of the registry key once resident in the target process' address space \cite{hoglund2005rootkits}.
\begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{keylogger_s_fixed} \cap{fig:keylogger}{Exploiting event hook chains}{A prototypical keylogger application gets the target process' context, injects a malicious DLL into its address space, and prepends a function pointer to code within this DLL to the keypress event hook chain. Whenever a key is pressed, the newly introduced callback is triggered, thereby allowing the malicious code to log every keystroke.} \end{figure*}
A second method of DLL injection exploits \Name{Windows} event hook chains \cite{protean2013hookex,msdnSet,msdnHooks}. Event hook chains are linked lists containing function pointers to application-defined callbacks. Different hook chains exist for different types of events, including key presses, mouse motions, mouse clicks, and messages\cite{msdnHooks}. Additional procedures can be inserted into hook chains using the \Windows{SetWindowsHookEx} \Name{Win32} API call. By default, inserted hook procedures are placed at the front of a hook chain. Analogous redirections occur at other levels: prominent rootkits/bootkits~\cite{Olmasco,TDL4-reboot} overwrite the pointers to the handlers in the \Windows{DRIVER\_OBJECT} structure, while some rootkits, e.g., \Name{Win32/Gapz}\cite{Gaptz}, use splicing, patching the handlers' code themselves. Upon an event associated with a particular hook chain, the operating system sequentially calls the functions within the hook chain. Each hook function determines whether it is designed to handle the event. If not, it calls the \Windows{CallNextHookEx} API function, which invokes the next procedure within the hook chain. There are two specific types of hook chains: global and thread-specific. Global hooks monitor events for all threads within the calling thread's desktop, whereas thread-specific hooks monitor events for individual threads. Global hook procedures must reside in a DLL disjoint from the calling thread's code, whereas local hook procedures may reside within the code of the calling thread or within a DLL \cite{msdnHooks}. For hook chain DLL injection, a support program is required, as well as a DLL exporting the functionality to be hooked. The attack proceeds as follows\cite{protean2013hookex}: the support program gets a handle to the DLL and obtains the address of one of the exported functions through \Name{Win32} API calls. The support program then calls the \Windows{SetWindowsHookEx} API function, passing it the action to be hooked and the address of the exported function from the DLL.
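In outline, the support program's side of this attack might look like the following sketch (the DLL name \texttt{hookdll.dll}, its export \texttt{KeyProc}, and the choice of the keyboard chain are hypothetical placeholders):
\begin{verbatim}
#include <windows.h>

/* Support program: load the DLL, resolve its exported hook
 * procedure, and prepend it to the system-wide keyboard chain. */
int main(void)
{
    HMODULE dll = LoadLibrary(TEXT("hookdll.dll"));           /* hypothetical */
    if (!dll) return 1;
    HOOKPROC proc = (HOOKPROC)GetProcAddress(dll, "KeyProc"); /* hypothetical */
    if (!proc) return 1;

    /* Global (thread id 0) keyboard hook: the OS maps the DLL into
     * each process that receives keyboard events. */
    HHOOK h = SetWindowsHookEx(WH_KEYBOARD, proc, dll, 0);
    if (!h) return 1;

    MSG msg;                              /* pump messages so hooks run */
    while (GetMessage(&msg, NULL, 0, 0) > 0)
        DispatchMessage(&msg);
    UnhookWindowsHookEx(h);
    return 0;
}
\end{verbatim}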
\Windows{SetWindowsHookEx} places the hook routine into the hook chain of the victim process, so that the DLL functionality is invoked whenever the specified event is triggered. When the event first occurs, the OS injects the specified DLL into the process' address space, which automatically causes the \Windows{DllMain} function to be called. Subsequent events do not require the DLL to be reloaded, since the DLL's exports get called from within the victim process' address space. An example keylogger rootkit is shown in Fig.~\ref{fig:keylogger}; it logs the pressed key and then calls \Windows{CallNextHookEx} to trigger the default handling of the keystroke. Again, benign addition of hook chain functions via \Windows{SetWindowsHookEx} is common, e.g., to decide which window/process should get the keystroke; the difficult task is determining whether any of the added functions are malicious. A third common DLL injection strategy involves creating a remote thread inside the virtual address space of a target process using the \Windows{CreateRemoteThread} \Name{Win32} API call \cite{protean2013remotethread}. The injection proceeds as follows\cite{protean2013remotethread}: a support program controlled by the malware calls \Windows{OpenProcess}, which returns a handle to the target process. The support program then calls \Windows{GetProcAddress} for the API function \Windows{LoadLibrary}. \Windows{LoadLibrary} will be accessible from the target process because this API function is part of \Windows{kernel32.dll}, a user space DLL from which every \Name{Win32} user space process imports functionality. To insert the name of the DLL to be loaded into the target process' address space, the malicious process must call the \Windows{VirtualAllocEx} API function. This API function allocates a virtual memory range within the target process' address space; the allocation is required in order to store the path of the rootkit DLL. The \Windows{WriteProcessMemory} API call is then used to place the path of the malicious DLL into the target process' address space. Finally, the \Windows{CreateRemoteThread} API function starts a thread in the target process that calls \Windows{LoadLibrary} on this path, injecting the rootkit DLL. Like event chains, the \Windows{CreateRemoteThread} API call has legitimate uses. For example, a debugger might fire off a remote thread in a target process' address space for profiling and state inspection. An anti-malware module might perform similar behavior. Finally, I/O might be handled through pointers to callbacks exchanged by several processes, where the callback method is intended to execute in another process' address space. The fact that the API call has so many potentially legitimate uses makes malicious exploits particularly difficult to detect.
\subsubsection{Kernel-Mode Hooking} \sad{Injection of code into the Kernel via device drivers.}{Difficult to detect by user-mode IDSs.}{Intricate to implement correctly.}
Rootkits implementing kernel hooks are more difficult to detect than those implementing user space hooks. In addition to the extended functionality afforded to the rootkit, user space anti-malwares cannot detect kernel hooks because they do not have the requisite permissions to access kernel memory\cite{hoglund2005rootkits,szor2005theart}. Kernel memory resides in the top part of a process' address space.
For 32-bit \Name{Windows}, this usually corresponds to addresses above \Name{0x80000000}, but can correspond to addresses above \Name{0xC0000000} if processes are configured for 3GB rather than 2GB of user space memory allocation. All kernel memory across all processes maps to the same physical location, and without special permissions, processes cannot directly access this memory. Kernel hooks are most commonly implemented as device drivers\cite{hoglund2005rootkits}. Popular places to hook into the kernel include the \Windows{System Service Descriptor Table} (\Windows{SSDT}), the \Windows{Interrupt Descriptor Table} (\Windows{IDT}), and the \Windows{I/O Request Packet} (\Windows{IRP}) function tables of device drivers\cite{hoglund2005rootkits}. The \Windows{SSDT} was the hooking mechanism used in the classic \Name{Sony DRM Rootkit} \cite{sonydrm} and the more recent \Name{Necurs} malware \cite{necurs}, and is often used as part of more complex multi-exploitation kits such as the \Name{RTF} zero-day (\Name{CVE-2014-1761}) attack\cite{rtf-necurs}, which was detected in the wild. The \Windows{SSDT} is a \Name{Windows} kernel memory data structure of function pointers to system calls. Upon a system call, the operating system indexes into the table by the function call ID number, left-shifted 2 bits, to find the appropriate function in memory. The \Windows{System Service Parameter Table} (\Windows{SSPT}) stores the number of bytes that the arguments require for each system call. Since \Windows{SSPT} entries are one byte each, the arguments of a system call can occupy up to 255 bytes. The \Windows{KeServiceDescriptorTable} contains pointers to both the \Windows{SSDT} and the \Windows{SSPT}. When a user space program performs a system call, it invokes functionality within \Windows{ntdll.dll}, which is the main interface library between user space and kernel space. The \Name{EAX} register is filled with the system function call ID and the \Name{EDX} register is filled with the memory address of the arguments. After performing a range check, the value of \Name{EAX} is used by the OS to index into the \Windows{SSDT}. The program counter register \Name{EIP} is then filled with the appropriate address from the \Windows{SSDT}, executing the dispatched call. Dispatches are triggered by the \Windows{SYSENTER} processor instruction or the more dated \Windows{INT 2E} interrupt. \Windows{SSDT} hooks are particularly dangerous because they can supplement any system call with their own functionality. Hoglund and Butler \cite{hoglund2005rootkits} provide an example of process hiding via an \Windows{SSDT} hook, in which the \Windows{NtQuerySystemInformation} \Name{NTOS} system call is hooked to point to shell code, which filters the process records returned by \Windows{ZwQuerySystemInformation} by their Unicode string identifiers. Selected processes can be hidden by changing pointers in this data structure. \Name{Windows} provides some protection against \Windows{SSDT} hooks by making \Windows{SSDT} memory read-only. Although this protection makes the attacker's job more difficult, there are ways around it. One method is to change the memory descriptor list (MDL)\cite{ssdt_hooking,hoglund2005rootkits} for the requisite area in memory. This involves casting the \Windows{KeServiceDescriptorTable} to the appropriate data structure, and using it to build an MDL from non-paged memory. By locking the memory pages and changing the flags on the MDL, one can change the permissions on memory pages.
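In the style of \cite{ssdt_hooking,hoglund2005rootkits}, the fragment below sketches this MDL remapping for a 32-bit WDK driver (the \Windows{KeServiceDescriptorTable} layout is undocumented; this is an untested illustration rather than a working driver):
\begin{verbatim}
#include <ntddk.h>

/* Undocumented layout of the service descriptor table (32-bit). */
typedef struct _SERVICE_DESCRIPTOR_TABLE {
    PULONG ServiceTableBase;        /* the SSDT itself          */
    PULONG ServiceCounterTableBase;
    ULONG  NumberOfServices;
    PUCHAR ParamTableBase;          /* the SSPT: argument bytes */
} SERVICE_DESCRIPTOR_TABLE;

__declspec(dllimport) SERVICE_DESCRIPTOR_TABLE KeServiceDescriptorTable;

/* Build an MDL over the write-protected SSDT and return a writable
 * mapping of it, so that entries can be swapped. */
PULONG MapSsdtWritable(PMDL *mdlOut)
{
    PMDL mdl = IoAllocateMdl(KeServiceDescriptorTable.ServiceTableBase,
                             KeServiceDescriptorTable.NumberOfServices
                                 * sizeof(ULONG),
                             FALSE, FALSE, NULL);
    if (!mdl)
        return NULL;
    MmBuildMdlForNonPagedPool(mdl);            /* SSDT is non-paged  */
    mdl->MdlFlags |= MDL_MAPPED_TO_SYSTEM_VA;  /* classic flag trick */
    *mdlOut = mdl;
    return (PULONG)MmMapLockedPages(mdl, KernelMode);
}
\end{verbatim}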
Another method of disabling memory protections is zeroing the write protection bit in control register \Name{CR0}. The \Windows{Interrupt Descriptor Table} (\Windows{IDT}) is another popular hook target\cite{idt_hooking}. The interrupt table contains pointers to callbacks that are invoked upon an interrupt. Interrupts can be triggered by both hardware and software. Because interrupts have no return values, \Windows{IDT} hooks are limited in functionality to denying interrupt requests; they cannot perform data filtration. Multiprocessing systems have made \Windows{IDT} hooking more difficult\cite{hoglund2005rootkits}. Since each CPU has its own \Windows{IDT}, an attacker must usually hook the \Windows{IDT}s of all CPUs to be successful. Hooking only one of multiple \Windows{IDT}s causes an attack to have only limited impact. A final popular kernel hook target discussed by Hoglund and Butler \cite{hoglund2005rootkits} is the \Windows{IRP} dispatch table of a device driver. Since many devices access kernel memory directly, \Name{Windows} abstracts devices as device objects. Device objects may represent either physical hardware devices such as buses, or software ``devices'' such as network protocol stacks or updates to the \Name{Windows} kernel. Device objects may even correspond to anti-virus components that monitor the kernel. Each device object has at least one device driver. Communication between the kernel and a device driver is performed via the I/O manager. The I/O manager calls the driver, passing pointers to the device object and the I/O request. The I/O request is passed in a standardized data structure called an \Windows{I/O Request Packet} (\Windows{IRP}). Within the device object is a pointer to the driver object. Drivers themselves are nothing more than special types of DLLs. The I/O manager passes the device object and the \Windows{IRP} to the driver. How the driver behaves depends on the contents and flags of the \Windows{IRP}. The function pointers for the various \Windows{IRP} types are stored in a dispatch table within the driver. A rootkit can subvert the kernel by changing these function pointers to point to shell code \cite{hoglund2005rootkits}. An anti-virus implemented as a filter driver, for example, may be subverted by rewriting its dispatch table. Hoglund and Butler \cite{hoglund2005rootkits} provide an in-depth example of using driver function table hooking to hide TCP connections. Essentially any kernel service that uses a device driver can be subverted by hooking the \Windows{IRP} dispatch table in a similar manner. Note that while hooking device driver dispatch tables sounds relatively simple, the technique requires sophistication to be implemented correctly \cite{hoglund2005rootkits}. First, implementing bug-free kernel driver code is an involved task to begin with. Since drivers share the same memory address space as the kernel, a small implementation error can corrupt kernel memory and result in a kernel crash. This is one of the reasons that \Name{Microsoft} has gravitated to user-mode drivers when possible \cite{tanenbaum2007modern}. Second, in many applications drivers are stacked. When dealing with physical devices, the lowest level driver on the stack serves the purpose of abstracting bus-specific behavior into an intermediate interface for the upper level driver.
Even in software, drivers may be stacked; for example, anti-virus I/O filtering or file system encryption/decryption can be performed by a filter driver residing in the middle of the stack\cite{hoglund2005rootkits}. A successful rootkit author must therefore understand how the device stack behaves, where in the device stack to hook, and how to perform I/O completion such that the hook does not result in noticeably different behavior.
\subsubsection{Hybrid Hooking} \label{sec:hybrid} \sad{Hook user functions into kernel DLLs.}{Even more difficult to detect than kernel hooking.}{More difficult to implement than kernel hooking.}
Hybrid hooks aim to circumvent anti-malwares by attacking user space and kernel space simultaneously. They involve installing a user space hook whose target code resides in kernel-controlled memory. Hoglund and Butler \cite{hoglund2005rootkits} discuss a technique to hook the user space \Windows{IAT} of a process from the kernel. The motivation behind this technique is based on the observation that user space \Windows{IAT} hooks are detectable because one needs to allocate memory within the process' context or inject a DLL for the same effect. But is there some means to hook the \Windows{IAT} through the kernel, without the need to allocate user space memory for \Windows{IAT} hooks? The answer is yes: the attack in \cite{hoglund2005rootkits} leverages two aspects of the \Name{Windows} architecture. First, it uses \Windows{PsSetLoadImageNotifyRoutine}, a kernel mode support routine that registers driver callback functions to be called whenever a PE image gets loaded into memory \cite{msdnPs}. The callback is called within the target process' context after loading but before execution of the PE. By parsing the PE image in memory, an attacker can change the \Windows{IAT}. The question then becomes: how to run malicious code without overt memory allocation or DLL injection into the process' address space? One solution uses a specific page of memory \cite{jack2005step}: in \Name{Windows} there exists a physical memory address shared by both kernel space and user space, which the kernel can write to, but the user cannot. The user and kernel mode addresses are \Name{0x7FFE0000} and \Name{0xFFDF0000}, respectively. The reason for this virtual $\leftrightarrow$ physical mapping convention stems from the introduction of the \Windows{SYSENTER} and \Windows{SYSEXIT} processor instructions for fast switches between user mode and kernel mode. Approximately 1kB of this page is used by the kernel \cite{hoglund2005rootkits}, but the remaining 3kB are blank. Writing malicious code to addresses in the page starting at \Name{0xFFDF0000} in kernel space and placing a function pointer to the beginning of the code at the corresponding address in user space allows the rootkit to hook the \Windows{IAT} without allocating memory or performing DLL injection. Another hybrid attack is discussed in \cite{srivastava2011operating}. The attack is called \Name{Illusion}, and involves both kernel space and user space components. The motivation behind the attack is to circumvent intrusion detection systems that rely on system call analysis (cf. Sec.~\ref{sec:countermeasure_survey}). To understand the \Name{Illusion} attack, we review the steps involved in performing a system call: first, a user space application issues an \Windows{INT 2E} interrupt or a \Windows{SYSENTER} processor instruction, which causes the processor to switch to kernel mode and execute the dispatcher.
The dispatcher indexes into the \Windows{SSDT} to find the handler for the system call in question. The handler performs its functionality and returns to the dispatcher. The dispatcher then passes the return values to the user space application and returns the processor to user space. These steps should be familiar from the prior discussion of hooking the \Windows{SSDT}. \Name{Illusion} works by creating a one-to-all mapping of potential execution paths between system calls that take array buffer arguments and the function pointers of the \Windows{SSDT}. Although the same effect could be obtained by making changes directly to the \Windows{SSDT}, the \Name{Illusion} approach, unlike \Windows{SSDT} hooking, cannot be detected using the techniques discussed in Sec.~\ref{sec:countermeasure_survey}. \Name{Illusion} exploits system calls such as \Windows{DeviceIoControl}, which is used to exchange data buffers between an application and a kernel driver. Parts of the rootkit reside both in kernel space and in user space. Messages are passed between the user space rootkit and the kernel space rootkit by filling the buffer, with communication managed via a dedicated protocol. This allows the user space rootkit to have system calls executed on its behalf without changing the \Windows{SSDT}. Further, metamorphic code as described in Sec.~\ref{sec:code_mutation} can be leveraged to change the communication protocol at each execution.
\subsection{Type 3 Rootkits: Direct Kernel Object Manipulation} \sad{Modify dynamic kernel data structures.}{Extremely difficult to detect.}{Has limited applications.}
\begin{figure}[t] \centering \includegraphics[width=.48\textwidth]{DKOM} \cap{fig:DKOM}{DKOM attack}{This figure displays a successful DKOM attack, where the malicious code of \texttt{Process 2} is hidden from the system yet continues to execute.} \end{figure}
Although second-generation rootkits remain ubiquitous, they are not without their limitations. Their modification of overt function behavior inherently leaves a detectable footprint, because it introduces malicious code -- either in user space or in kernel space -- which can be detected and analyzed\cite{butler2005windows1}. Third-generation \emph{direct kernel object manipulation} (DKOM) attacks take a different approach. DKOM aims to subvert the integrity of the kernel by targeting the dynamic kernel data structures responsible for bookkeeping operations \cite{butler2005windows1}. Like kernel space hooks, DKOM attacks are immune to user space anti-malware, which assumes a trusted kernel. DKOM attacks are also much harder to detect than kernel hooks because they target dynamic data structures whose values change during normal runtime operation. By contrast, hooking static areas of the kernel like the \Windows{SSDT} can be detected with relative ease because these areas should remain constant during normal operation\cite{butler2005windows1}. The canonical example of DKOM is process hiding. The attack can be carried out on most operating systems, and relies on the fact that schedulers use different data structures to track processes than the data structures used for resource bookkeeping operations\cite{beck2005detecting}. In the \Name{Windows NTOS} kernel, for example, the kernel layer\footnote{The kernel itself has three layers, one of which is called the \textit{kernel layer}.
The other two layers of the kernel are the \textit{executive layer} and the \textit{hardware abstraction layer}.} is responsible for managing thread scheduling, whereas the executive layer, which contains the memory manager, the object manager, and the I/O manager, is responsible for resource management \cite{tanenbaum2007modern}. Since the executive layer allocates resources (e.g., memory) on a per-process basis, it views processes as \Windows{EPROCESS} (executive process) data structures, maintained in circular doubly linked lists. The scheduler, however, operates on a per-thread basis, and consequently maintains threads in its own circular doubly linked list of \Windows{KTHREAD} (kernel thread) data structures. By modifying pointers, a rootkit with control over kernel memory can decouple an \Windows{EPROCESS} node from the linked list, re-coupling the next and previous \Windows{EPROCESS} structures' pointers. Consequently, the process will no longer be visible to the executive layer, and calls by the \Name{Win32} API will, therefore, not display the process. However, the thread scheduler will continue CPU quantum allocation to the threads corresponding to the hidden \Windows{EPROCESS} node. The process will, thus, be effectively invisible to both user and kernel mode programs -- yet it will still continue to run. This attack is depicted in Fig.~\ref{fig:DKOM}. While process hiding is the canonical DKOM example, it is just one of several DKOM attack possibilities. Baliga et al.~\cite{baliga2011data} discuss several known DKOM attack variants, including zeroing entropy pools for pseudorandom number generator seeds, disabling pseudorandom number generators, resource waste and intrinsic DoS, adding new binary formats, disabling firewalls, and spoofing in-memory signature scans by providing a false view of memory. Proper DKOM implementations are extremely difficult to detect. Fortunately, DKOM is not without its shortcomings and difficulties from the rootkit developer's perspective. Changing OS kernel data structures is no easy task, and incorrect implementations can easily result in kernel crashes, thereby causing an overt indication of a malware's presence. Also, DKOM introduces no new code to the kernel apart from the code to modify kernel data structures to begin with. Therefore, inherent limitations on the scope of a DKOM attack are imposed by the manner in which the kernel uses its data structures. For example, one usually cannot hide disk resident files via DKOM because most modern operating systems do not have kernel level data structures corresponding to lists of files.
\subsection{Type 4 Rootkits: Cross Platform Rootkits and Rootkits in Hardware} \sad{Attack systems using very low-level rootkits.}{Undetectable by conventional software countermeasures.}{Requires custom low-level hypervisor, BIOS, hardware or physical/supply chain compromise to be effective.}
Fourth-generation rootkit technologies operate at the virtualization layer, in the BIOS, and in hardware \cite{seifried2008fourth}. To our knowledge, fourth-generation rootkits have been developed only in proof-of-concept settings, as we could not find any documentation of fourth-generation rootkits in the wild. Because they reside at a lower level than the operating system, they cannot be detected through the operating system and are, therefore, OS independent. However, they are still dependent on the BIOS version, instruction set, and hardware\cite{seifried2008fourth}.
Since fourth-generation rootkits are theoretical in nature -- at least as of now -- we consider them outside the scope of this survey. We mention them in this section for completeness and because they may become relevant after the publication of this survey.
\subsection{Code Mutation} \label{sec:code_mutation} \sad{Self-modifying malicious code.}{Avoids simple signature matching.}{Greater runtime overhead and detectable via emulation.}
\begin{figure*}[!htbp] \centering \subfloat[\label{fig:metamorphic1}]{\includegraphics[width=0.48\textwidth]{metamorphic1}}\hspace*{.02\textwidth} \subfloat[\label{fig:metamorphic2}]{\includegraphics[width=0.48\textwidth]{metamorphic2}}\\ \subfloat[\label{fig:metamorphic3}]{\includegraphics[width=0.32\textwidth]{metamorphic3}}\hspace*{.01\textwidth} \subfloat[\label{fig:metamorphic4}]{\includegraphics[width=0.32\textwidth]{metamorphic4}}\hspace*{.01\textwidth} \subfloat[\label{fig:metamorphic5}]{\includegraphics[width=0.32\textwidth]{metamorphic5}} \cap{fig:metamorphic}{Metamorphic code obfuscation}{Five techniques employed by metamorphic engines to evade signature scans across malware generations. \protect\subref*{fig:metamorphic1} Register swap: exchanging registers as demonstrated by code fragments from the \Name{RegSwap} virus \cite{szor2001hunting}. \protect\subref*{fig:metamorphic2} Subroutine permutation: reordering subroutines of the virus code. \protect\subref*{fig:metamorphic3} Transposition: modifying the execution order of independent instructions. \protect\subref*{fig:metamorphic4} Semantic NOP-insertion: injecting NOPs or instructions that are semantically identical to NOPs. \protect\subref*{fig:metamorphic5} Code mutation: replacing instructions with semantically equivalent code. } \end{figure*}
In early viruses, the viral code was often appended to the end of an executable file, with the entry point changed to jump to the viral code before running the original executable\cite{szor2005theart}. Once executed, the virus code in turn would jump to the beginning of the body of the executable so that the executable was run post-replication. The user would be none the wiser until the host system had been thoroughly infected. Anti-malware companies soon got wise and started checking hashes of code blocks -- generally at the end of files. To counter this, malware authors began to encrypt the text of the viruses. This required a decryption routine to be called at the beginning of execution. The virus was then re-encrypted with a different key upon each replication\cite{szor2005theart}. These encrypted viruses had a fatal flaw: a jump to the (unencrypted) decryption routine had to appear somewhere in the executable, so anti-malware solutions merely had to look for the decrypter. Thus, \emph{polymorphic} engines were created, in which the decryption engine mutated itself at each generation, no longer matching a fixed signature. However, polymorphic viruses were still susceptible to detection\cite{szor2005theart}: although the decrypter mutated, the size of the malicious code did not change, the code was still placed at the end of the file, and it remained susceptible to entropy analysis, depending on the encryption technique. To this end, entry-point obscuring (EPO) viruses were created, where the body of the viral code is placed arbitrarily in the executable, sometimes in a distributed fashion\cite{szor2005theart}.
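As a toy illustration of the encryption idea (our own sketch, not code from any real virus), the following C fragment shows why signature matching over the stored body fails across generations, while the constant decryption loop itself remains a detectable signature unless it, too, is mutated:
\begin{verbatim}
#include <stdint.h>
#include <stddef.h>

/* Toy model of an encrypted virus body: the stored bytes change with
 * every replication (a new key is drawn), but the behavior after
 * decryption does not. The loop below is the constant that early
 * scanners searched for, motivating polymorphic decrypter mutation. */
static void xor_body(uint8_t *body, size_t len, uint8_t key)
{
    for (size_t i = 0; i < len; i++)
        body[i] ^= key;        /* same routine encrypts and decrypts */
}
\end{verbatim}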
To counter the threats from polymorphic viruses, Kaspersky (of \Name{Kaspersky Lab} fame) and others \cite{beaucamps2007advanced} created emulation engines, which run potentially malicious code in a virtual machine. In order to run, the body of the viral code must decrypt itself in memory in some form or another, and when it does, the body of the malicious code is laid bare for hashed signature comparison as well as behavioral/heuristic analysis. To combat emulation, \emph{metamorphic} engines were developed. Just as polymorphic malwares mutate their decryption engines at each generation, metamorphic engines mutate the full body of their code and, unlike polymorphics, change the size of the code body from one generation to another\cite{szor2005theart}. Some malwares still encrypt metamorphic code, or parts of metamorphic code, while others do not, as encryption and run-time packing techniques can reveal the existence of malicious code\cite{szor2005theart}. Metamorphic code mutation techniques, as shown in Fig.~\ref{fig:metamorphic}, include register swaps, subroutine permutations, transpositions of independent instructions, insertion of \Name{NOP}s or instruction sequences that behave as \Name{NOP}s, and parser-like mutations by context-free grammars (or other grammar types) \cite{sridhara2013metamorphic,filiol2007metamorphism,zbitskiy2009code,beaucamps2007advanced}. Many metamorphic techniques are similar to compilation techniques, but serve a much different purpose. The metamorphic engine in the \Name{MetaPHOR} worm, for example, disassembles executable code into its own intermediate representation and uses its own formal grammar to carry out this mutation \cite{beaucamps2007advanced}. Code transformation techniques are not particular to native code either: Faruki et al.~\cite{faruki2014evaluation} presented several \Name{Dalvik} bytecode obfuscation techniques and tested them against several \Name{Android} security suites, which often failed at recognizing transformed malicious code. While some of the transformation targets are unique to obfuscation on \Name{Android} devices -- e.g., renaming packages and encrypting resource files -- the control, data, and layout transformations in \cite{faruki2014evaluation} follow the same principles of code obfuscation at the native level.
\subsection{Anti-Emulation} \label{sec:antiemulation} \sad{Malware behaves differently when running in an emulated environment.}{Malware evades detection during emulation.}{Needs to detect the presence of the emulator reliably. May not run in certain virtualized environments.}
Mutation engines, including metamorphics and polymorphics, change the instructions in the target code itself and, naturally, its run time\cite{szor2001hunting}. However, they do not change the underlying functionality. Therefore, during emulation (cf.~Sec.~\ref{sec:virtualization}), behavioral and heuristic techniques can be used to fingerprint malicious code, for example, if the malware conducts a strange series of system calls, or if it attempts to establish a connection with a C2 server at a known malicious address. Hence, malware can be spotted regardless of the degree of obfuscation present in the code \cite{szor2005theart}.
The success of early emulation techniques led to the usage of malicious anti-emulation tactics, which include attempts to detect the emulator by examining machine configurations -- e.g., volume identifiers and network interfaces -- and the use of difficult-to-emulate functionality, e.g., invoking the GPU \cite{faruki2016droidanalyst,szor2005theart}. In turn, emulation strategies have become more advanced. For example, in their \Name{DroidAnalyst} framework \cite{faruki2016droidanalyst} for \Name{Android}, Faruki et al. implement a realistic emulation platform by overloading default serial numbers, phone numbers, geolocations, system time, email accounts, and multimedia files to make their emulator more difficult to detect. A realistic emulation environment is a good start to avoid emulator detection based on hardware characteristics, but it alone is insufficient to defeat all types of anti-emulation. For example, \Name{Duqu} executes certain components only after the system has been idle for 10 minutes and certain requirements are met \cite{bencsath2012cousins}. Similarly, the \Name{Kelihos} botnet \cite{kelihos} and the \Name{Nap} Trojan \cite{nap} use the \Windows{SleepEx} and \Windows{NtDelayExecution} API calls to delay malicious execution beyond the time that a typical emulator will devote to analysis. \Name{PoisonIvy} \cite{poisonivy} and similarly \Name{UpClicker} \cite{upclicker} establish malicious connections only when the left mouse button is released. \Name{PushDo} takes a more offensive approach, using \Windows{PspCreateProcessNotify} to de-register sandbox monitoring routines\cite{singh2013hot}. Other malwares take advantage of dialog boxes and scrolling\cite{singh2013hot}. Even mouse movements are taken into consideration: malware can differentiate between human and simulated mouse movements by assessing speed, curvature, and other features\cite{singh2013hot}. Thus, emulated environments for stealth malware detection face a tradeoff between realistic emulation and implementation cost. Anti-emulation in turn faces a different problem: with the explosion of virtualization technology, thanks largely to the heavy drive toward cloud computing, virtualized (emulated) environments are seeing increased general-purpose use. This calls into question the effectiveness of anti-emulation as a stealth technique: if malicious code will not run in a virtual environment, then the attack may simply fail when the targeted machine is virtualized. \subsection{Targeting Mechanisms} \sad{Malware runs on or spreads to only chosen systems.}{Decreases risk of detection.}{Malware spreads at a lower rate. Motivation for the attack is given away if detected.} Stealth targeted attacks -- which aim to compromise specific targets -- are becoming more advanced and more widespread\cite{istr2016symantec}. While targeting mechanisms are not necessarily designed for stealth purposes, they have the effect of prolonging the amount of time that malware can remain undetected in the wild. This is done by allowing the malware to spread/execute only on certain high-value systems, thus minimizing the likelihood of detection while maximizing the impact of the attack. For example, recent point of sale (POS) compromises \cite{pos} targeted only specific corporations. The \Name{DarkHotel} \cite{darkhotel} \textit{advanced persistent threat} (APT) targets only certain individuals (e.g., business executives).
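To make these tactics concrete, the sketch below (Python) combines the two ideas in deliberately simplified form; it is an illustrative composite rather than code from any cited malware, and the host name is hypothetical. The first check probes for sleep acceleration of the kind used to outwait emulators; the second gates execution on a machine-specific profile, as targeting mechanisms do:
\begin{verbatim}
import socket, time

def sleep_was_accelerated(seconds: float = 2.0) -> bool:
    # Sandboxes often fast-forward sleeps to save analysis time; a
    # large gap between requested and observed delay betrays them.
    start = time.monotonic()
    time.sleep(seconds)
    return (time.monotonic() - start) < 0.75 * seconds

def host_is_target(expected: str = "TARGET-HOST") -> bool:
    # Targeting gate: proceed only on a matching machine profile.
    return socket.gethostname() == expected

if not sleep_was_accelerated() and host_is_target():
    pass  # a payload would execute only here
\end{verbatim}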
The notorious \Name{Stuxnet} worm and its relatives \Name{Duqu}, \Name{Flame}, and \Name{Gauss} employed sophisticated targeting mechanisms \cite{bencsath2012cousins}, preventing the malwares from executing on un-targeted systems. \Name{Stuxnet} checks the system configuration prior to execution; malicious components simply will not execute if the detected environment is not correct, rather than attempting to execute and failing\cite{falliere2011w32}. \Name{Gauss}'s \Name{G\"odel} module is even encrypted with an RC4 cipher, with a key derived from system-specific data; thus, much of the functionality of the malware remains unknown, since a large part of the body of the code can only be decrypted with knowledge of the targeted machines\cite{gauss_abnormal}. Hence, IDS developers and anti-malware researchers cannot get the malicious code running on un-targeted machines. Targeting mechanisms may also change the behavior of the malware depending on the configuration of the machine so as to evade detection. For example, \Name{Flame} dynamically changes file extensions depending on the type of anti-malware that it detects on the machine\cite{bencsath2012skywiper}. Other malwares may simply not run or choose to uninstall themselves to evade detection, while others will execute only under certain conditions on time, date, and geolocation\cite{szor2005theart,bencsath2012cousins}. \section{Component-Based Stealth Malware Countermeasures} \label{sec:countermeasure_survey} In this section, we discuss anti-stealth malware techniques that aim to protect the integrity of areas of systems that are known to be vulnerable to attack. These techniques include hook detection, cross-view detection, invariant specification, and hardware and virtualization solutions. When assessing the effectiveness of any malware recognition system, it is important to consider the system's respective precision/recall tradeoff. Recall refers to the proportion of malicious samples of a given type that are correctly detected as such, while precision refers to the proportion of samples flagged as a given malicious type that actually belong to that type. Increased recall tends to decrease precision, whereas increased precision tends to decrease recall. The ``optimal'' tradeoff between precision and recall for a given system depends on the application at hand. The integrity-based solutions discussed in this section tend to offer higher precision rates than the pattern recognition techniques discussed in Sec.~\ref{sec:signatures_heuristics}, but they are difficult to update because custom changes to hardware and software are required, making scalability an issue. It is important to realize that the \emph{component protection} techniques presented in this section are in practice often combined with the more generic pattern recognition techniques discussed in Sec.~\ref{sec:signatures_heuristics}\cite{szor2005theart,sommer2010outside,singh2013hot}; for example, hardware and virtualization solutions might be used to achieve a clean view of memory, on which a signature scan can be run \cite{petroni2004copilot,garfinkel2003virtual}. \subsection{Detecting Hooks} \sad{Detect malwares that use hooking.}{Easy to implement.}{High false positive rates from legitimate benign hooks.} If a stealth malware uses in-memory hooks as described in Sec.~\ref{sec:hooking}, IDSs can detect the malware by detecting its hooks.
Unfortunately, methods that simply detect hooks trigger high false alarm rates since hooks are not inherently malicious. This makes weeding out false positives a challenging task. Also, since DKOM is not a form of hooking, hook detection techniques cannot detect DKOM attacks. Ironically, an effective approach to detect hooks is to hook common attack points. By doing so, an anti-malware may not only be able to detect a rootkit loading into memory, but may also be able to preempt the attack. This might be accomplished by hooking the API functions used to inject DLLs into a target process' context (cf. Sec.~\ref{sec:userhooking}) \cite{hoglund2005rootkits}. However, one must know what functions to hook and where to look for malicious attacks. Pinpointing attack vectors is not easy. For example, symbolic links are often not resolved to a common name until system call hooks have been executed\cite{hoglund2005rootkits}. Therefore, if the anti-malware relies on hooking the \Windows{SSDT} alone and matching the name of the target in the hook routine, an attacker can simply use an alias. Once hooks are observed, some tradeoff between precision and recall must be made: one can easily catch all rootkits loading into memory, and in doing so, create a completely unusable system (i.e., very high recall rates but extremely low precision rates). Hook detection can be combined with signature and heuristic scans (discussed in Sec.~\ref{sec:signatures_heuristics}) for ingress point monitoring. Based on a signature of the hooked code, the ingress point monitoring system can determine whether or not to raise an alarm. In contrast to trying to detect rootkit hooks as a rootkit loads, \Name{VICE} \cite{hoglund2005rootkits,butler2004vice} uses memory scanning techniques that periodically inspect likely target locations of hooks such as the \Windows{IAT}, \Windows{SSDT}, or \Windows{IDT}. \Name{VICE} detects hooks based on the presence of unconditional jumps to addresses outside of acceptable ranges. Acceptable ranges can be defined by \Windows{IAT} module ranges, driver address ranges, and kernel process address ranges. For example, a system call in the \Windows{SSDT} should not point to an address outside \Windows{ntoskrnl.exe}. Generic inline hooks cannot feasibly be detected via this method. Fortunately, as we discussed in Sec.~\ref{sec:hooking}, hooks beyond the first few bytes of a function are rare, since they can result in strange behaviors, including noticeable slowdown and outright program failure. For \Windows{SSDT} functions, unconditional jumps within the first few bytes to addresses outside of \Windows{ntoskrnl.exe} are indicators of hooks. \Windows{IAT} range checks require context switching into the process in question, enumerating the address ranges of the loaded DLLs, checking whether the function pointers in the \Windows{IAT} fall outside of their corresponding ranges, and recursively repeating this for all loaded DLLs. A similar approach to \Name{VICE} was taken in the implementation of \Name{System Virginity Verifier} \cite{rutkowska2005system}, which attempts to separate malicious hooking from benign hooking by comparing the in-memory code sections of drivers and DLLs to their disk images. Since these sections are supposed to be read-only, they should match in most cases, with the exception of a few lines of self-modifying kernel code in the \Name{NTOS} kernel and hardware abstraction layers.
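The core of such a comparison is simple to sketch (Python; the byte strings are hypothetical five-byte x86 function prologues, and a real implementation must parse PE sections and account for relocations):
\begin{verbatim}
import hashlib

def section_digest(code: bytes) -> str:
    return hashlib.sha256(code).hexdigest()

def virginity_check(module: str, in_memory: bytes, on_disk: bytes):
    # Read-only code sections should be byte-identical in memory
    # and on disk; a mismatch suggests an inline hook.
    if section_digest(in_memory) != section_digest(on_disk):
        print("discrepancy in " + module + ": possible hook")

clean  = bytes.fromhex("8bff558bec")  # mov edi,edi; push ebp; mov ebp,esp
hooked = bytes.fromhex("e978563412")  # prologue overwritten with jmp rel32
virginity_check("example.dll", in_memory=hooked, on_disk=clean)
\end{verbatim}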
Malicious hooks distinguish themselves from benign hooks when they exhibit discrepancies between in-memory and on-disk PE images, which will not occur under benign hooking \cite{rutkowska2005system}. Additionally, if the disk image is hidden, then the hook likely corresponds to a rootkit. One must be careful in this case to distinguish missing files, which can occur in legitimate hooking applications, from hidden files. Other examples of image discrepancies associated with malicious hooks include failure of module attribution and code obfuscation. An indirect approach to detecting hooks was implemented in \Name{Patchfinder 2} \cite{rutkowska2004detecting}, in the form of API call tracing. This approach counts the number of instructions executed during an API call and compares the count to the number of instructions executed in a clean state. The intuition is based on the observation that, in the context of rootkits, hooks almost always add instructions\cite{rutkowska2004detecting}. The technique requires proper baselining, which presents two challenges: first, establishing that the system is in a non-hooked state to begin with is difficult, unless the system is fresh out of the box. Second, the \Name{Win32} API has many functions, which take many different arguments. Since enumerating all argument combination possibilities while acquiring the baseline is infeasible, API calls can vary substantially in instruction count even when unhooked. \subsection{Cross-View Detection and Specification Based Methods} \sad{Compare the output of API calls with that of low-level calls that are designed to do the same thing.}{Detects malware that hijacks API calls.}{Requires meticulous low-level code to replicate the functionality of most of the system API.} Cross-view detection is a technique aimed at revealing the presence of rootkits. The idea behind cross-view detection \cite{rutkowska2005thoughts} is to observe the same aspect of a system in multiple ways, analogous to interviewing witnesses at a crime scene: just as conflicting stories from multiple witnesses likely indicate the presence of a liar, if different observations of a system return different results, the presence of a rootkit is likely. First, OS objects -- processes, files, etc. -- are enumerated via system API calls. The result is then compared to that obtained using a different approach that does not rely on the system API. For example, when traversing the file system, if the results returned by \Windows{FindFirstFile} and \Windows{FindNextFile} are inconsistent with direct queries to the disk controller, then a rootkit that hides files from the system is likely present. One of the advantages of cross-view detection is that -- if implemented correctly -- maliciously hooked API calls can be detected with very few false positives because legitimate applications of API hooking rarely change the outputs of the API calls. Depending on the implementation, cross-view detection may or may not assume an intact kernel, and therefore may even be applied to detect DKOM. The main disadvantage of cross-view detection is that it is difficult to implement, especially for a commercial OS\cite{butler2005windows3}. API calls are provided for a reason: to simplify the interface to kernel and hardware resources. Cross-view detection must circumvent the API, in many cases providing its own implementation. Theoretically, in most cases combinations of other API calls could be used in place of a from-scratch implementation.
However, API call combinations risk relying on other hooked API calls, or on duplicate calls to the same underlying code for multiple API functions, a common feature of the \Name{Win32} API \cite{tanenbaum2007modern}. Several cross-view detection tools have been developed over the years. \Name{Rootkitrevealer} \cite{cogswell2006rootkitrevealer} by \Name{Windows SysInternals} applies a cross-view detection strategy for the purposes of detecting persistent rootkits, i.e., disk-resident rootkits that survive across reboots. \Name{Rootkitrevealer} uses the \Name{Windows} API to scan the file system and registry, and compares the results to a manual parsing of the file system volume and registry hive. \Name{Klister} \cite{rutkowska2004detecting} detects hidden processes in \Name{Windows 2000} by finding contradictions between executive process entries and kernel process entries used by the scheduler. \Name{Blacklight} \cite{butler2006raide} combines both hidden file detection and hidden process detection. \Name{Microsoft}'s \Name{Strider Ghostbuster} \cite{beck2005detecting} is similar to \Name{Rootkitrevealer}, except that it also detects hidden processes and it has the ability to compare an ``inside the box'' infected scan with an ``outside the box'' scan, in which the operating system is booted from a clean version. If properly applied, cross-view detection offers high-precision rootkit detection\cite{butler2005windows3}. However, cross-view detection alone provides little insight into the \emph{type of the rootkit} and must be combined with recognition methods (e.g., signature/behavioral) to attain this information \cite{szor2005theart,butler2005windows3}. Cross-view detection methods are also cumbersome to update because they require new code, often interfacing with the kernel. Determining which areas to cross-view is also a challenging task\cite{butler2005windows3}. \subsection{Invariant Specification} \sad{Define constraints of an uninfected system.}{Detects DKOM attacks reliably.}{Constraints need to be well-specified, often by hand, and are highly platform-dependent.} An approach related to cross-view detection, especially as applied to detecting DKOM, involves pinpointing kernel invariants -- aspects of the kernel that should not change under normal OS behavior -- and periodically monitoring these invariants. One example of a kernel invariant is that the lengths of the executive and kernel process linked lists should be equal, which is violated in the case of process hiding (cf. Fig.~\ref{fig:DKOM}). Petroni et al.~\cite{petroni2006architecture} introduce a framework for writing security specifications for dynamic kernel data structures. Their framework consists of five components: a low-level monitor used to access kernel memory, a model builder to synthesize the raw kernel memory binary into objects defined by the specification, a constraint verifier that checks the objects constructed by the model builder against the security specifications, response mechanisms that define the actions to take upon violation of a constraint, and finally, a specification compiler, which compiles specification constraints written in a high-level language into a form readily understood by the model builder. Compelling arguments can be made in favor of the kernel-invariant based security specification approaches described above\cite{petroni2006architecture}: first, they allow a decoupling of site-specific constraints from system-specific constraints.
An organization may have a security policy that forbids behavior not in direct violation of proper kernel function (e.g., no shell processes running with root UID in \Name{Linux}). Via a layered framework, specifications can be added without changing low-level implementations. Unlike signature-based approaches, which rely on rootkits having overlapping code fragments with other malwares, kernel-invariant specifications catch all DKOM attacks that violate particular specification constraints, with only a few false positives. The specification approach can even be extended beyond DKOM. However, using kernel invariant specification is not without its own difficulties. Proper and correct framework implementation is a tremendous programming effort in itself\cite{petroni2006architecture}. For closed-source operating systems like \Name{Windows}, full information about kernel data structures and their implementation is seldom available, unless the specification framework tool is being developed as part of or in cooperation with the operating system vendor. Specification approaches can also exhibit false positives: for example, if kernel memory is accessed asynchronously via an external PCI interface like \Name{Copilot} \cite{petroni2004copilot}, a legitimate kernel update to a data structure may trigger a false positive detection simply because the update has not completed. Finally, the degree to which the invariant-specification approach works depends on the quality of the specification\cite{petroni2006architecture}. Correct specifications require in-depth domain-specific knowledge about the kernel and/or about the organization's security policy. Due to the massive sizes and heterogeneities of operating systems, even across similar distributions, compiling a complete list of specifications without including incorrect ones that result in false positives is implausible. While a similar approach may have applications to other types of stealth malwares, Petroni et al.~\cite{petroni2006architecture} introduced invariant specification as a specific solution tailored to DKOM rootkits. Although invariant specification provides more readily available diagnostic information than cross-view detection, because it reports which invariants are violated, it cannot discern the type of DKOM rootkit. Hence, more generic signature/behavioral techniques are required. \subsection{Hardware Solutions} \sad{Via hardware interface, use a clean machine to monitor another machine for the presence of rootkits/stealth malware.}{Does not require an intact kernel on the monitored machine.}{Cannot interpose execution of malicious code.} The key motivation behind hardware-based integrity checking is quite simple: a well-designed rootkit that has successfully subverted the OS kernel, or theoretically even the virtual layer and BIOS of a host machine, can return a spurious view of memory to a host-based intrusion detection system (HIDS) such that the HIDS has no way of detecting the attack, because its correct operation requires an intact kernel. Rather than relying on the kernel to provide a correct view of kernel memory, hardware solutions have been developed. For example, \Name{Copilot} \cite{petroni2004copilot} uses direct memory access (DMA) via the PCI bus to access kernel memory from the hardware of the host machine itself and displays that view of memory to another machine. This in turn subverts any rootkit's ability to change the view of memory, barring a rootkit implemented in hardware itself.
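In outline, the checking loop is straightforward. The sketch below (Python) hashes hypothetical static regions of a raw memory image, as a coprocessor like \Name{Copilot}'s might obtain over the PCI bus, and reports any drift from a known-clean baseline; all addresses are made up for illustration:
\begin{verbatim}
import hashlib

# Hypothetical (start, end) offsets of static regions within a raw
# kernel memory image obtained via DMA.
STATIC_REGIONS = {"kernel_text":   (0x0000, 0x4000),
                  "syscall_table": (0x4000, 0x4400)}

def snapshot(memory: bytes) -> dict:
    return {name: hashlib.sha256(memory[s:e]).hexdigest()
            for name, (s, e) in STATIC_REGIONS.items()}

memory_image = bytes(0x4400)       # stand-in for a DMA read
baseline = snapshot(memory_image)  # taken in a known-clean state

tampered = bytearray(memory_image)
tampered[0x4010] ^= 0xFF           # a rootkit patches one pointer
for region, digest in snapshot(bytes(tampered)).items():
    if digest != baseline[region]:
        print("integrity violation in", region)  # report upstream
\end{verbatim}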
Depending on the hardware integrity checker in question, further analysis of kernel memory on the host machine may be performed via a supervisory machine alone, or alternatively with the aid of additional hardware. \Name{Copilot} uses a coprocessor to perform fast hashes over static kernel memory and reports violations to a supervisory machine. Analysis mechanisms similar to those in \cite{srivastava2011operating,siddiqui2008survey} are employed on the supervisory machine in conjunction with DMA in order to properly parse kernel memory. Using DMA to observe the memory layout of the host system from a supervisory system is appealing since a correct view of host memory is practically guaranteed. However, like all of the techniques that we have discussed, hardware-based integrity checking is no silver bullet. In addition to the added expense and annoyance of requiring a supervisory machine, DMA-based techniques can only detect rootkits; they cannot intervene in the host's execution. They have no access to the CPU and, therefore, cannot prevent or respond to attacks directly. This CPU access limitation not only means that CPU registers are invisible to DMA, it also means that the contents of the CPU cache cannot be inspected, leaving the theoretical possibility of a rootkit hiding malicious code in the cache. A more pressing concern, however, is that because DMA approaches operate at a lower level than the kernel, they do not have a clear view of dynamic kernel data structures; these structures must first be located in memory, a problem discussed in \cite{dolangavitt2009robust}. Even after locating the kernel data structures, there remains a synchronization issue between DMA operations and the host kernel: DMA cannot be used to acquire kernel locks on data structures. Consequently, race conditions result when the kernel updates a data structure contemporaneously with a DMA read. False positives were observed by Baliga et al.~\cite{baliga2011data} for precisely this reason. An inelegant solution \cite{petroni2004copilot} is to simply re-read memory locations containing suspicious values. Another consideration when implementing DMA approaches is the timing of DMA scans. Both \cite{petroni2004copilot} and \cite{baliga2011data} employed synchronous DMA scans, which are theoretically susceptible to timing attacks. Petroni et al.~\cite{petroni2004copilot} suggested introducing randomness to the scan interval timings to overcome this susceptibility. \subsection{Virtualization Techniques} \label{sec:virtualization} \sad{Use virtual environments to detect malware.}{Can be used to detect kernel-level rootkits and interpose state.}{Vulnerable to anti-emulation.} Virtualization, though technologically quite different from DMA, aims to satisfy the same goal of inspecting resources of the host machine without relying on the integrity of the operating system. Several techniques for rootkit detection, mitigation, and profiling that leverage virtualization have been developed, including \cite{srivastava2011operating, rutkowska2005system, garfinkel2003virtual, seshadri2007secvisor, riley2008guest}. The idea behind virtualization approaches is to involve a virtual machine monitor, a.k.a. the \emph{hypervisor}, in the inspection of system resources.
Since the hypervisor resides at a higher level of privilege than the guest OS -- either on the hardware itself or simulated in software -- and controls the guest OS's access to hardware resources, it can be used to inspect these resources even if the guest OS is entirely compromised. Unlike \Name{Copilot}'s approach, in which kernel writes and DMA reads are unsynchronized, the hypervisor and the guest OS kernel operate synchronously, since the guest OS relies on the hypervisor for resources. Moreover, the hypervisor has access to state information in the CPU, meaning that it can interpose state, a valuable ability not only for rootkit detection, prevention, and mitigation, but also for computer forensics. Additionally, the hypervisor can be used to enforce site-specific hardware policies; for example, the hypervisor can prevent promiscuous-mode network interface operation \cite{garfinkel2003virtual}. Hypervisors themselves may be vulnerable to attack, but the threat surface is much smaller than for an operating system: hypervisors have been written in as few as 30,000 lines of C code, as opposed to the tens of millions of lines of code in modern \Name{Windows} and \Name{Linux} distributions. Significant security validations on hypervisors have also been conducted by academia, private security firms, the open source community, and intelligence organizations (e.g., CIA, NSA) \cite{garfinkel2003virtual}. Garfinkel and Rosenblum \cite{garfinkel2003virtual} created \Name{Livewire}, a proof of concept intrusion detection system residing at the hypervisor layer. The authors refer to their approach as virtual machine introspection since the design utilizes an OS interface to translate raw hardware state into guest OS semantics and inspects guest OS objects via a policy engine, which interfaces with the view presented by the translation engine. The policy engine is effectively the intrusion detection system, which performs introspection on the virtual machine. The policy engine can monitor the machine and can also take mitigation steps such as pausing the state of the VM upon certain events or denying access to hardware resources. A particular advantage of virtualization is that it can be leveraged to prevent rootkits from executing code in kernel memory -- a task that all kernel rootkits must perform to load themselves into memory in the first place\cite{seshadri2007secvisor}. This includes DKOM rootkits: although the changes to kernel objects themselves cannot be detected as code changes to the kernel, code must be introduced at some point to make these changes. To this end, Seshadri et al.~\cite{seshadri2007secvisor} formulated \Name{SecVisor}. In contrast to the software-centric approach of \Name{Livewire}, \Name{SecVisor} leverages hardware support for virtualization of the \Name{x86} instruction set architecture as well as \Name{AMD}'s secure virtual machine technologies. \Name{SecVisor} intercepts code via modifications to the CPU's memory management unit (MMU) and the I/O memory management unit (IOMMU), so that only code conforming to a user-supplied policy will be executable. As such, kernel code violating the policy will not run on the hardware. In fact, \Name{SecVisor}'s modification to the IOMMU even protects the kernel from malicious writes via a DMA device. \Name{SecVisor} works by allowing transfer of control to kernel mode only at entry points designated in kernel data structures, then performing comparisons to shadow copies of entry point pointers.
This approach is analogous to that used in memory integrity checking modules of heavyweight dynamic binary instrumentation (DBI) frameworks like \Name{Valgrind} \cite{nethercote2007valgrind}. Unfortunately, \Name{SecVisor} has several drawbacks. First, modern \Name{Linux} and \Name{Windows} distributions mix code and data pages \cite{riley2008guest}, while \Name{SecVisor}'s approach -- enforcing \textit{write XOR execute} ($W \oplus X$) permissions for kernel code pages through hardware virtualization -- assumes that kernel code and data are not mixed within memory pages. The approach also fails for pages that contain self-modifying kernel code. Second, \Name{SecVisor} requires modifications to the kernel itself -- a difficult proposition for adoption on closed-source operating systems like \Name{Windows}. Riley et al.~\cite{riley2008guest} formulated \Name{NICKLE} (No Instruction Creeping into Kernel Level Executed), which, like \Name{SecVisor}, leverages virtualization to prevent execution of malicious code in kernel memory. \Name{NICKLE} approaches the problem via software virtualization and overcomes some of the limitations of \Name{SecVisor}. \Name{NICKLE} works by shadowing every byte of kernel code in a separate shadow memory writable only by the hypervisor. Because the hypervisor resides in a higher privilege domain than the kernel, even the kernel cannot modify the shadowed code. The shadowed code is authenticated either during bootstrapping, when the kernel is loaded into memory, or when drivers are mounted or unmounted. Authentication consists of cryptographic hash comparisons of code segments with known-good values provided by OS vendors or distribution maintainers. When the operating system requires access to kernel-level code, an indirection mechanism in the hypervisor reroutes this request to the shadow values. To maintain transparency to the guest OS, this guest memory address indirection is implemented after the ``virtual to physical'' address translation in the hypervisor's MMU. When the guest VM attempts to execute kernel code, a comparison is made to shadow memory. If the code is the same, then the shadow memory copy is executed. If the kernel memory and shadow memory code differ, then one of several responses can be taken, including logging and observing -- an approach extended by Riley et al.~\cite{riley2009multi} for rootkit profiling -- rewriting the malicious kernel code with shadow values and continuing execution, or breaking execution. \Name{NICKLE}'s approach has two key advantages over \Name{SecVisor}: first, it does not assume homogeneous code and data pages. Second, it does not require any modifications to kernel code. These benefits, however, incur speed penalties due to software virtualization and memory indirection costs, and require a two-fold increase in memory for kernel code\cite{riley2008guest}. An additional complication arises from code relocation: when driver code is relocated in kernel memory, cryptographic hashes change. Riley et al.~\cite{riley2008guest} handle this problem by tracking and ignoring relocated segments. Also, the \Name{NICKLE} implementation in \cite{riley2008guest} does not support kernel page swapping, which would need to ensure that swapped-in pages had the same cryptographic hash as when they were swapped out. Finally, \Name{NICKLE} is ineffective in protecting self-modifying kernel code, a phenomenon present in both \Name{Linux} and \Name{Windows} kernels.
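The shadow-memory indirection at the heart of \Name{NICKLE} can be sketched as follows (Python; addresses and byte strings are hypothetical, and a real implementation operates on pages inside the hypervisor's MMU logic):
\begin{verbatim}
shadow = {}  # page address -> authenticated kernel code bytes

def authenticate(page: int, code: bytes) -> None:
    # Performed at bootstrap or driver (un)load, after the code's
    # cryptographic hash is checked against known-good values.
    shadow[page] = bytes(code)

def fetch_for_execution(page: int, live: bytes) -> bytes:
    # Hypervisor indirection: execution is served from shadow
    # memory; divergence in the live bytes triggers a response.
    if shadow.get(page) != live:
        raise RuntimeError("unauthenticated kernel code at %#x" % page)
    return shadow[page]

authenticate(0x1000, b"\x55\x8b\xec")         # known-good page
fetch_for_execution(0x1000, b"\x55\x8b\xec")  # matches: run shadow copy
try:
    fetch_for_execution(0x1000, b"\xe9\x00\x00\x00\x00")  # injected jmp
except RuntimeError as err:
    print(err)  # log, rewrite with shadow values, or halt
\end{verbatim}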
Srivastava et al.~\cite{srivastava2011operating} leverage virtualization in their implementation of \Name{Sherlock} -- a defense system against the \Name{Illusion} attack mentioned in Sec.~\ref{sec:hybrid}. \Name{Sherlock} uses the \Name{Xen} hypervisor to monitor system call execution paths. Specifically, the guest OS is assumed to run on a virtual machine controlled by the \Name{Xen} hypervisor. Monitoring of memory is conducted by the hypervisor itself with the aid of a separate security VM for system call reconstruction, analysis, and notification of other intrusion detection systems. Watch points are manually and strategically placed in kernel memory off-line, and a \Name{B\"uchi} automaton \cite{srivastava2011operating} is constructed, which efficiently describes the expected and unexpected behavior of every system call in terms of watch points. Each watch point contains the \Windows{VMCALL} instruction, so that when it is hit, it notifies the hypervisor. Watch point identifiers are passed to the automaton as they are executed. During normal execution, the automaton remains in benign states and watch points are discarded. When a malicious state is reached, the hypervisor logs watch points and suspends the state of the guest VM. The function-specific parameters at each watch point corresponding to a malicious state are then passed to the security VM for further analysis. An important consideration in this implementation is where to place watch points to balance effectiveness and efficiency. Srivastava et al.~\cite{srivastava2011operating} manually chose watch point locations based on a reachability analysis of a kernel control flow graph, but suggest that an autonomous approach \cite{ganapathy2005automatic} could be implemented. \section{Pattern-Based Stealth Malware Countermeasures} \label{sec:signatures_heuristics} Pattern-based approaches aim to achieve more generic recognition of heterogeneous malwares. While these approaches offer potential for efficient updates and scalability beyond most component protection techniques, their increased generality tends to yield higher recall rates but lower precision rates. Pattern-based approaches can be applied to static code fragments, dynamic memory snapshots, or behavioral data (e.g., system/API calls, network connections, CPU usage, etc.) and may be coupled with component protection approaches \cite{szor2005theart,jang2016detecting,faruki2016droidanalyst}. Static analysis has the advantage that it is fast\cite{szor2005theart}, since the raw code is inspected but not executed; there is no need for an emulated environment. However, the dynamic code mutation mechanisms outlined in Sec.~\ref{sec:code_mutation} are often able to hide functionalities from static code analyzers \cite{jang2016detecting}, and the obfuscated code recognition techniques discussed in Sec.~\ref{sec:feature_state} often rely on an emulated environment for decryption. Dynamic analysis tools that leverage emulated environments (cf. Sec.~\ref{sec:virtualization}) are not fooled so easily, since much of the underlying code, data, and behavior of the malware is revealed. However, dynamic analysis is potentially vulnerable to anti-emulation techniques (cf. Sec.~\ref{sec:antiemulation}).
While dynamic analysis techniques generally suffer lower false-positive rates than static analysis techniques \cite{jang2016detecting}, dynamic techniques are far slower than static approaches \cite{szor2005theart} due to the need for an emulated environment, and can only feasibly be executed on a small number of code samples for short amounts of time. Consequently, hybrid approaches \cite{faruki2016droidanalyst} are often employed, in which static methods are used to select suspicious samples, which are then passed to a dynamic analysis framework for further categorization. \subsection{Signature Analysis} \sad{Compare code signatures with database of malicious signatures via exact-matching or machine-learnt techniques.}{Detects known malwares reliably.}{Difficult to detect novel malware types.} Code-signature-based malware defenses are techniques that compare malware signatures -- fragments of code or hashes of fragments of code -- to databases of signatures associated with known attacks. Although signatures cannot be directly used to discover new exploits \cite{butler2005windows3}, they can do so indirectly due to component overlap between malwares \cite{abouassaleh2004ngram,reddy2005new}. Ironically, shared stealth components have sometimes given away the presence of malwares that would have otherwise gone unnoticed \cite{szor2005theart}. Moreover, some byte sequences of length $n$ ($n$-grams) specific to a common type of exploit are often present even under metamorphism of the code. Machine learning approaches to malware classification via $n$-gram and sequence analysis have been widely studied and deployed as integral components of anti-malware systems for more than ten years \cite{szor2005theart,siddiqui2008survey}. While most in-memory rootkit signature recognition strategies behave much like on-disk signature strategies for detecting and classifying malicious code by matching raw bytes against samples from known malware, DKOM rootkit detection requires a different approach. Since DKOM involves changing existing data fields within OS data structures to hide them from view of certain parts of the OS, DKOM signature scanning techniques instead perform memory scans using signatures designed to pinpoint hidden data structures in kernel memory. Surprisingly, memory signature scans are useful in both live and forensic contexts. Chow et al.~\cite{chow2005shredding} demonstrated that structure data in kernel memory can survive up to 14 days after de-allocation, provided that the machine has not been rebooted. Schuster \cite{schuster2006searching} formulated a series of signature rules for detecting processes and threads in memory, for the general purpose of computer forensics. Several spinoffs of this approach have been implemented. Unfortunately, many of these signature approaches can be subverted by rootkits that change structure header information. Dolan-Gavitt et al.~\cite{dolangavitt2009robust} employed an approach to automatically obtain signatures for kernel data structures based on values in the structures that, if modified, cause the OS to crash. The approach includes data structure profiling and fuzzing stages. In the profiling stage, a clean version of the operating system is run while a variety of tasks are performed. Kernel data structure fields commonly accessed by the OS are logged. The goal of the profiling stage is to determine fields that the OS often accesses and to weed out rarely used fields from consideration as signatures.
The fuzzing stage consists of running the OS on a virtual machine, pausing execution, and modifying the values in the candidate structure. After resuming, candidate structure values are added to the signature list if they cause the kernel to crash. The approach in \cite{dolangavitt2009robust} is in many ways the complement of the kernel invariant approach in \cite{baliga2011data}. Instead of traversing kernel data structures and examining which invariants are violated, Dolan-Gavitt et al. scan all of kernel memory for plausible data structures. If certain byte offsets within the detected structures do not contain signatures consistent with certain values, then the detections cannot correspond to actual data structures used by the kernel, because otherwise they would crash the operating system. A limitation of the approach in \cite{dolangavitt2009robust} is that it is susceptible to attack by scattering copies or near-copies of data structures throughout kernel memory. \subsection{Behavioral/Heuristic Analysis} \sad{Derived from system behavior rather than code fragments.}{Not affected by attempts to hide malicious code.}{Cannot detect malware prior to execution.} On the host level, signatures are not the only heuristic used for intrusion detection. System call sequences for intrusion and anomaly detection \cite{feng2003anomaly,forrest1996sense,giffin2002detecting,hofmeyr1998intrusion,krohn2007information,mutz2007exploiting,sekar2001fast} are an especially popular alternative for rootkit analysis, since hooked \Windows{IAT} or \Windows{SSDT} entries often produce repetitive patterns of system calls. Interestingly, rootkits can also be detected by network intrusion detection systems (NIDSs), because rootkits in the wild are almost always small components of a larger malware. The larger malware often performs some sort of network activity such as C2 server communication and synchronization across infected machines, or infection propagation. This is even true for some of the most sophisticated stealth malwares that leverage rootkit technologies to hide network connections from the host \cite{bencsath2012cousins}: a rootkit can hide connections from the host, but not from the network. Therefore, signature scans at the network level as well as traffic flow analysis techniques can give away the presence of the larger malware as well as the underlying rootkit. Botnets with rootkits that effectively hide the behavior of an individual host may be easier to detect when analyzing macro, network-level traffic~\cite{yu2015fool,yu2015modeling}. NIDSs also have the advantage that they provide isolation between the malware and the intrusion detection system, reducing a malware's capacity to spoof or compromise the IDS. However, NIDSs have no way of inspecting the state of a host or interposing a host's execution at the network level. A hybrid approach, which extends the concept of cross-view detection, is to compare network connections from a host query with those detected at the network level \cite{fink2005visual}. A discrepancy indicates the presence of a rootkit. \subsection{Feature Space Models vs.
State Space Models} \label{sec:feature_state} \sad{Classify code sequences.}{Can detect similar malicious code patterns.}{Cannot detect unseen malicious code.} \begin{figure*}[!ht] \centering \subfloat[Binary OPCODE n-gram histogram classification\label{fig:FeatureSpace:SVM}]{\includegraphics[height=.15\textheight]{Histogram}\hspace*{.05\textwidth}\includegraphics[height=.15\textheight]{SVM}}\\ \subfloat[OPCODE sequence analysis\label{fig:FeatureSpace:HMM}]{% \begin{minipage}[c]{.35\textwidth}\centering\includegraphics[height=.17\textheight]{HMM}\end{minipage}% \begin{minipage}[c]{.28\textwidth}\small\input{A.tex}\end{minipage}% \begin{minipage}[c]{.32\textwidth}\small\input{B.tex}\end{minipage}% } \cap{fig:FeatureSpace}{Feature Space vs. State Space OPCODE classification}{This diagram depicts \protect\subref*{fig:FeatureSpace:SVM} a schematic interpretation of a linear classifier that separates benign and malicious OPCODE $n$-grams in feature space and \protect\subref*{fig:FeatureSpace:HMM} a sequential OPCODE analysis using a hidden Markov model. The feature space model must explicitly treat histograms of $n$-grams as independent dimensions for varying values of $n$ in order to capture sequential relationships. This approach is only scalable to a few sequence lengths. HMMs, on the other hand, impose a Markov assumption on a sequence of hidden variables that emit observations. State transition and observation probability matrices are inferred via expectation maximization on training sequences. An HMM factor graph is shown on the bottom left.} \end{figure*} As discussed above, code signatures and application behaviors/heuristics can be used in a variety of ways to detect and classify intrusions, and they operate across many levels of the intrusion detection hierarchy. For example, encrypted viruses are particularly robust against code signatures -- until they are decrypted -- but during emulation, once the virus is in memory, it may be quite susceptible to signature analysis. This analysis may range from a simple frequency count of OPCODEs to more sophisticated machine-learning techniques. Machine learning models can be divided into feature space and state space models. Examples of both are shown in Fig.~\ref{fig:FeatureSpace}, in which code fragments are classified as malicious or benign based on their OPCODE $n$-grams. Feature space models treat signature/behavioral features as spatial dimensions and parameterize a manifold within this high-dimensional feature space for each class. Feature space models can be further broken down into generative and discriminative models. Generative models aim to model the joint distribution $P(x,y)$ of target variable $y$ and spatial dimension $x$, and perform classification via the product rule of probability: $P(y|x) = \frac{P(x,y)}{P(x)}$. Discriminative classifiers aim to model $P(y|x)$ directly \cite{bishop2006pattern}. By treating the frequencies of distinct $n$-gram hashes as elements of a high-dimensional feature vector, for example, the \emph{input feature space} becomes the domain of these vectors. Support vector machines (SVMs), which are discriminative feature space classifiers, aim to separate classes by fracturing the input feature space (or some transformation thereof) with a hyperplane that maximizes soft class margins. An advantage of feature space models is that in high dimensions, even if only a few of the dimensions are relevant, different classes of data tend to separate\cite{bishop2006pattern}.
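A deliberately tiny, self-contained sketch of the feature space view follows (Python, standard library only; the opcode traces are made up, and a nearest-centroid rule stands in for a trained discriminative classifier such as an SVM):
\begin{verbatim}
from collections import Counter

def ngrams(seq, n=2):
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

def histogram(seq, vocab, n=2):
    # One dimension per distinct n-gram: the feature vector.
    counts = Counter(ngrams(seq, n))
    total = max(sum(counts.values()), 1)
    return [counts[g] / total for g in vocab]

benign    = ["push", "mov", "call", "pop", "ret"]   # toy traces
malicious = ["xor", "jmp", "xor", "jmp", "xor"]
vocab = sorted(set(ngrams(benign) + ngrams(malicious)))

def label(sample):
    h = histogram(sample, vocab)
    dist = lambda ref: sum((a - b) ** 2
                           for a, b in zip(h, histogram(ref, vocab)))
    return "malicious" if dist(malicious) < dist(benign) else "benign"

print(label(["xor", "jmp", "xor", "call"]))  # -> malicious
\end{verbatim}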
However, feature space models do not explicitly account for probabilistic dependencies, and a good feature space from a classification accuracy perspective is not necessarily intuitive. State space models are used to infer probabilities about \emph{sequences}. They leverage the fact that certain sequences of instructions exist within malicious binaries due to functional overlap as well as malware authors' general lack of creativity and tendency toward laziness. State space models can also be applied to functional sequences (e.g., sequences of system calls or network communications). The intuition is that we can use certain types of functional behaviors to describe classes of malware in terms of what they do; for example, ransomwares like \Name{CryptoLocker} typically generate a key that they use to encrypt files on disk and subsequently attempt to send that key to a C2 server. After a certain amount of time, they remove the local copy of the key and generate a ransom screen demanding money for the key \cite{ogorman2012ransomware}. State space models for intrusion recognition aim to recognize these sorts of malicious sequences. Most state space models are based in some form on the Markov assumption -- that recent events will be independent of events that happened in the far past. While the Markov assumption is not always valid, it makes sequential inference tractable and is often reasonable. For example, if the last fifty assembly instructions were devoted to adding elements from two arrays together and incrementing respective pointers, with no other knowledge, it is a reasonable assumption that the next few instructions will add array elements. On the other hand, knowing that ``hello world'' was printed to the screen a million instructions ago provides little information about the probability of the next instruction. Hidden Markov models (HMMs) are perhaps the most widely used type of Markov models \cite{bishop2006pattern} and have been particularly useful in code analysis, including recognition of metamorphic viruses\cite{sridhara2013metamorphic,venkatesan2008code,venkatachalam2010detecting,runwal2012opcode,lin2011hunting,desai2008towards,wong2006analysis,wong2006hunting,attaluri2009profile}. HMMs assume that latent variables, which take on \emph{states}, are linked in a Markov chain with conditional dependencies on the previous states. The \textit{order} of the HMM corresponds to the number of previous states on which the current state depends; for example, in an $n$-th order HMM the current state depends only on the previous $n$ states. In HMMs, previous states are fused with current states via a transition probability matrix $A$ governing the Markov chain, an observation probability matrix $B$ -- the probability of observing the data in a given state -- and an initial state vector $\pi$. $A$, $B$, and $\pi$ can be estimated via expectation maximization (EM) inference on observation sequences $O$, which aims to find the model parameters that maximize the likelihood of the observations, i.e., $\argmax_{\lambda}P(O|\lambda)$, where $\lambda=(A,B,\pi)$. Although EM is guaranteed to converge to a local likelihood maximum, it is not guaranteed to converge to the global optimum. In the context of HMMs, this inference is usually carried out via the Baum-Welch algorithm \cite{bishop2006pattern} (aka.
the Forward-Backward algorithm), which iterates between forward and backward passes and an update step until the likelihood of the observed sequence $O$ is maximized with respect to the model. The usage of HMMs for metamorphic virus detection has been documented in \cite{sridhara2013metamorphic,venkatesan2008code,venkatachalam2010detecting,runwal2012opcode,lin2011hunting,desai2008towards,wong2006analysis,wong2006hunting,attaluri2009profile}.\footnote{HMMs are used for many sequential learning problems and have several different notations. Here, we borrow notation from \cite{sridhara2013metamorphic}.} These works assume a predominantly decrypted virus body, i.e., little to no encryption within the body to begin with, or that a previously encrypted metamorphic has been decrypted inside an emulator. The number of hidden states, and therewith the size of the state transition matrix, is generally chosen to be small (2-3), while the observation matrix is larger, with rows consisting of conditional probabilities of OPCODEs for given states. For metamorphic detection, the semantic \emph{meaning} of the states themselves is unclear, as is the optimal number of hidden states -- they only reflect some latent structure within the code. This contrasts with other applications of HMMs; for example, in handwriting sequence recognition, the latent structure behind a noisy scrawl of an ``X'' is the letter ``X'' itself. Thus, with proper training data, there should be 26 latent variables (for the English alphabet) with transition probabilities corresponding to what one might expect from an English dictionary, e.g., a ``T'' $\rightarrow$ ``H'' transition is much more likely than a ``T'' $\rightarrow$ ``X'' transition. A common metamorphic virus recognition measure is the thresholded negative log-likelihood per \Name{OPCODE} \cite{wong2006analysis,runwal2012opcode,sridhara2013metamorphic} obtained from a \textit{forward} pass on an HMM, i.e.: $$-\frac{\log p(O_1,\hdots,O_N \mid \lambda)}{N},$$ where $O_1, \hdots, O_N$ are the \Name{OPCODE}s of an $N$-length program; the hidden variables $z_1, \hdots, z_N$ are marginalized out by the forward pass. The per-opcode normalization is required because different programs have different lengths. Most of the HMMs used in these works are first-order HMMs, in which the distribution of hidden variable $z_n$ is conditioned only on the value of $z_{n-1}$. For a $k$-th order HMM, the probability of $z_{n}$ is conditioned on $z_{n-1}, \hdots, z_{n-k}$. However, the time complexity of HMM inference increases exponentially with the order. Although the authors of \cite{sridhara2013metamorphic,venkatesan2008code,venkatachalam2010detecting,runwal2012opcode,lin2011hunting,desai2008towards,wong2006analysis,wong2006hunting,attaluri2009profile} report that the number of hidden variables did not seem to make a difference, it might if higher-order Markov chains were used. As Lin and Stamp \cite{lin2011hunting} discuss, one problem with HMMs is that they ultimately measure similarity between code sequences; if some exogenous factor -- e.g., non-viral code in the training and test sets that closely resembles the viral code -- shrinks the inter-class variation relative to the intra-class variation, then HMM readout may be error-prone.
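For concreteness, the following sketch (Python, standard library only) computes this per-\Name{OPCODE} score with a forward pass over a tiny, made-up first-order HMM. A real detector would train $\lambda=(A,B,\pi)$ with Baum-Welch on opcode sequences from a malware family, and would scale the forward variables to avoid numerical underflow on long programs:
\begin{verbatim}
import math

A  = [[0.7, 0.3],
      [0.4, 0.6]]                            # state transitions
B  = [{"mov": 0.5, "add": 0.3, "jmp": 0.2},
      {"mov": 0.1, "add": 0.2, "jmp": 0.7}]  # opcode emissions
pi = [0.5, 0.5]

def per_opcode_nll(opcodes, floor=1e-9):
    # Forward pass: alpha[s] = p(O_1..O_t, z_t = s); summing over
    # states marginalizes out the hidden variables.
    alpha = [pi[s] * B[s].get(opcodes[0], floor) for s in range(2)]
    for o in opcodes[1:]:
        alpha = [sum(alpha[t] * A[t][s] for t in range(2))
                 * B[s].get(o, floor) for s in range(2)]
    return -math.log(sum(alpha)) / len(opcodes)

# Lower scores indicate sequences more typical of the modeled family.
print(per_opcode_nll(["mov", "add", "jmp", "jmp"]))
print(per_opcode_nll(["jmp", "jmp", "jmp", "jmp"]))
\end{verbatim}
Thresholding this score against values obtained from known family members then yields the detector.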
\section{Toward Adaptive Models for Stealth Malware Recognition} \label{sec:open_world_ids} A large portion of the malware detected by both component protection and generic recognition techniques is previously observed malware with known signatures, deployed by \emph{script kiddies} -- attackers with little technical expertise who predominantly use pre-written scripts to propagate existing attacks \cite{zanero2004unsupervised}. Systems with up-to-date security profiles are not vulnerable to such attacks. Sophisticated stealth malwares, on the other hand, have propagated undetected for long periods of time because they do not match known signatures, do not attack protected system components with previously seen patterns, and mask harmful behaviors as benign. To reduce the amount of time that these previously unseen stealth malwares spend propagating in the wild, component protection and generic recognition techniques alike must be able to quickly recognize and adapt to new types of attacks. Typically, it is slower to adapt component techniques than it is to adapt generic recognition techniques, because new hardware and software are required. However, even more generic algorithmic techniques may take time to update, and this must be factored into the design of an intrusion recognition system. The choice of the algorithm for efficient updates is only one of several considerations that must be addressed in an intrusion recognition system. More elementary is how to autonomously decide that additional training data is needed and that the classifier \textit{needs to be updated} in the first place. In short, an intrusion recognition system must be \textit{adaptive} in order to efficiently mitigate the threat of stealth malware. It must also be \textit{interpretable} to yield actionable information to human operators and incident response modules. Unfortunately, many systems proposed in the literature are neither adaptive nor interpretable. We have isolated six flawed modeling assumptions, which we believe must be addressed at the algorithmic level. We discuss these flawed assumptions in Sec.~\ref{sec:flawed_assumptions}, and propose an algorithmic framework for attenuating them in Sec.~\ref{sec:adaptive}. \subsection{Six Flawed Assumptions} \label{sec:flawed_assumptions} \subsubsection{Intrusions are Closed Set} \begin{figure*}[!htbp] \centering \subfloat[\label{fig:openset1}]{\includegraphics[width=0.32\textwidth]{openset_1}}\hspace*{.01\textwidth} \subfloat[\label{fig:openset2}]{\includegraphics[width=0.32\textwidth]{openset_2}}\hspace*{.01\textwidth} \subfloat[\label{fig:openset3}]{\includegraphics[width=0.32\textwidth]{openset_3}} \cap{fig:openset}{Problems with the closed world assumption}{\protect\subref*{fig:openset1} Red, green, and blue points correspond to a training set of different classes of malicious or benign samples in feature space. The intersecting lines depict a decision boundary learnt from training a linear classifier on this data. \protect\subref*{fig:openset2} The classifier categorizes points from a novel class (gray) as a training class (blue) with high confidence, since the gray samples lie far on the blue side of the decision boundary and the classifier's label regions span feature space infinitely. \protect\subref*{fig:openset3} An idealized open world classifier bounds the amount of space ascribed to each class' label by the support of the training data, labeling unlabeled (white) space as ``unknown''.
With manually or automatically supplied labels, novel classes (gray) can be added to the classifier without retraining on the vast majority of data. } \end{figure*} Real intrusion recognition tasks have unseen classes at classification time. Neither all variations of malicious code nor all variations of benign behaviors can be known a priori. However, the majority of the intrusion recognition techniques cited in this paper implicitly assume that all classes seen at classification time are also present in the training set, evaluating recognition accuracy only for a fixed \emph{closed set} of classes. Consequently, good performance on IDS benchmarks does not necessarily translate into an effective classifier in a real application. In real \emph{open set} scenarios, a classifier trained on $M$ classes of interest is confronted at classification time with instances of classes sampled from a distribution of nearly infinitely many categories. Conventional classifiers are designed to separate classes from one another by dividing a hypothesis space into regions and assigning labels to those regions. Effective classifiers roughly seek to approximate the Bayesian optimal classifier on the posterior probability $P(y_i |x;{\cal C}_1, {\cal C}_2, \ldots, {\cal C}_M), i\in \{1,\ldots, M\}$, where $x$ is a feature vector, $y_i$ is a class label, and ${\cal C}_i$ is a particular known class. However, in the presence of $\Omega$ unknown classes, the optimal posterior model would become $P(y_i |x;{\cal C}_1, {\cal C}_2, \ldots, {\cal C}_M, U_{1}, \ldots, U_{\Omega})$. Unfortunately, our ability to model this posterior distribution is limited because $U_{1}, \hdots, U_{\Omega}$ are unknown. Mining negatives during training may help to define known but uninteresting classes (e.g., ${\cal C}_{M+1}$), but it is impossible to span all negative space, and the costs of negative training become infeasible with increasing numbers of feature dimensions. Consequently, a classifier may label space belonging to ${\cal C}_i$ far beyond the support of the training data for ${\cal C}_i$. This fundamental machine learning problem has been termed \emph{open space risk} \cite{scheirer2013towards}. Worse yet, if probability calibration is used, $x$ may be ascribed to ${\cal C}_i$ with high confidence as distance from the positive side of the decision boundary increases. Therefore, the optimal closed set classifier operating in an open set intrusion recognition regime is not only wrong; it can be wrong while being very confident that it is correct. An open set intrusion recognition system needs to separate $M$ known classes from one another, but must also manage open space risk by labeling a decision as ``unknown'' when far from known class data. Problems with the closed set assumption as well as desirable open set behavior are shown in Fig.~\ref{fig:openset}. The binary intrusion recognition task, i.e., \textit{intrusion detection}, appears to be a two-class closed set problem. However, each label -- \textit{intrusion} or \textit{no intrusion} -- is respectively a meta-label applied to a collection of many subclasses. While some of the subclasses will naturally be known, others will not, and the problem of open space risk still applies. \subsubsection{Anomalies Imply Class Labels} The incorrect assumption that anomalies imply class labels is largely related to the closed set assumption, and it is implicit to \emph{all} binary malicious/benign classification systems.
Anomalies constitute data points that deviate from the statistical support of the model in question. In the classification regime, anomalies are data points that are far from the class to which they belong. In the open set scenario, anomalies should be resolved by an operator, protocol, or other recognition modalities. Effective anomaly detection is necessary for open set intrusion recognition. Without it, the implicit closed set assumption can lead to undesirable classifications because it forces a decision to be made without the support of statistical evidence. The conflation between anomaly and intrusion assumes that anomalous behavior constitutes intrusive behavior and that intrusive behavior constitutes anomalous behavior. Often, neither of these assumptions holds. Especially in large networks, previously unseen benign behavior is common: new software installations are routine, new users with different usage patterns come and go, new servers and switches are added, thereby changing the network topology, etc. Stealth malwares, on the other hand, are specifically designed to exhibit normal behavior profiles and are less likely to be registered as anomalies than many previously unseen benign behaviors. \subsubsection{Static Models are Sufficient} \label{sec:incremental} In the anti-malware domain, the assumption of a static model, which is implicit to the closed set modeling assumption, is particularly insufficient because of the need to incorporate new nominal behavior profiles and malicious code signatures. The attacks that a system sees \emph{will} change over time. This problem is often referred to as \emph{concept drift} in the incremental learning literature \cite{masud2011classification}. Depending on the model, the time required for a full batch retrain may not be feasible. A $k$th-order HMM with $k \gg 1$, for example, may perform quite well for some intrusion recognition tasks, but at the same time may be expensive to retrain in terms of both time and hardware and may require enormous amounts of training data in order to generalize well. There is a temporal risk associated with the amount of time that it takes to update a classifier to recognize new malicious samples. Therefore, even if a classifier exhibits high accuracy, it may be vulnerable to temporal risk unless it possesses an efficient update mechanism. \subsubsection{No Feature Space Transformation is Required} \label{sec:meaningul_features} A key reason why machine learning algorithms are not overwhelmed by the curse of dimensionality is that, due to statistical correlations, classes of data tend to lie on manifolds that are highly non-linear, but effectively much smaller in dimension than the input space. Obtaining a good manifold representation via a feature transformation, whether hand-tuned or machine-learnt, is often critical to effective and discriminative classification. Many approaches in the intrusion detection literature simply pass raw log data or aggregated log data directly to a decision machine \cite{mukkamala2002intrusion,catania2012automatic,lee1998data,portnoy2001intrusion,lazarevic2003comparative,ertoz2004minds,rehak2009adaptive}. The inputs often mix nominal and continuous-valued features of heterogeneous scale, with aggregations that ignore temporal scale and varying spatial bandwidths.
We contend that, like any other machine learning task, fine-grained discriminative intrusion recognition requires a meaningful feature space transformation, whether learnt explicitly by the classifier or carried out as a pre-processing task. Feature spaces for intrusion recognition have been explored \cite{aggarwal2007data,helmer1998intelligent,ahmad2011feature,nguyen2010improving,lakhina2010feature,middlemiss2003feature,yu2010feature,mukkamala2003feature,stein2005decision}. While this research is a good start, we believe that much additional work is needed. \subsubsection{Model Interpretation is Optional} Effective feature space transformations must be balanced with semantically meaningful interpretation. Unfortunately, these two objectives are sometimes conflicting. Neural networks, which have been successfully applied to intrusion recognition tasks \cite{mukkamala2002intrusion,sung2003identifying,wang2010new,amini2006rt,liu2007letters,srinivasan2006self,shun2008network}, are appealing because they provide the ability to adapt a fixed set of basis functions to input data, thus optimizing the feature space in which the readout layer operates. However, these basis functions correspond to a composition of aggregations of non-linear projections/liftings onto a locally optimal manifold prior to final readout, and neither the semantic meaning of the space nor that of the final readout is well understood. Recent work has demonstrated that neural networks can be vulnerable to adversarial examples \cite{nguyen2015deep,szegedy2014intriguing,goodfellow2015explaining}, which are misclassified with high confidence, yet appear very similar to known class data. The lack of interpretability of such models means not only that intrusion recognition systems could be vulnerable to such adversarial examples, but, more critically, that machine learning techniques are not yet ``smart'' enough to resolve most potential intrusions. Instead, their role is to alert specialized anti-malware modules and human operators to take swift action. Fast response and resolution times are critical. Even if an intrusion detection system offers nearly perfect detection performance, if it cannot provide meaningful diagnostics to the operator, a temporal risk is induced, in which the operator or anti-malware module wastes valuable time trying to diagnose the problem \cite{sommer2010outside}. Also, as we have seen from previous sections, many potential malware signals (e.g., hooking) may stem from legitimate uses. It is important to know why an alarm was triggered and which features triggered it, in order to determine and refine the system's response to both malicious and benign behaviors. The interpretation and temporal risk problems are not unique to intrusion detection. They are a key reason why many diagnosis and troubleshooting systems rely on directed acyclic probabilistic graphical models such as Bayesian networks, as well as rule mining, instead of neural networks or support vector machines (SVMs) \cite{jensen2007bayesian}. To better resolve the model interpretation problem, intrusion detection should move to a more generic recognition framework, ideally providing additional diagnostic information. \subsubsection{Class Distributions are Gaussian} The majority of probabilistic models cited in this paper assume that class distributions are single or multi-modal Gaussian mixtures in feature space.
Although Gaussian mixtures often appear to capture class distributions, barring special cases, they generally fail to capture distribution tails \cite{kotz2000extreme}. There are several different types of anomalies. \emph{Empirical anomalies} are anomalous with respect to a probabilistic model of the training data, whereas \emph{idealized anomalies} are anomalous with respect to the true joint distribution from which the training data are drawn. Provided good modeling, these two anomaly types are equivalent. However, from an anomaly detection perspective, na\"ive Gaussian assumptions do not provide a good match between empirical and idealized anomalies because an anomaly is defined with respect to the tail of a joint distribution and tails tend to deviate from Gaussian \cite{kotz2000extreme}. Theorems from statistical extreme value theory (EVT) provide theoretically grounded functional forms for the classes of distributions that these class-tails can assume, provided that positive class outliers are \emph{process anomalies} -- rare occurrences from an underlying generating stochastic process -- and not noise exogenous to the process, e.g., previously unseen classes. \subsection{An Open Set Recognition Framework} \label{sec:open_set} Accommodating new attack profiles and normative behavior models requires a method for diagnosing when query data are unsupported by previously seen training samples. This diagnosis is commonly referred to as \emph{novelty detection} in the literature \cite{markou2003novelty}. Specifically in the IDS literature, novelty detection is often hailed as a means of detecting malicious samples with no prior knowledge. The intuition is that by spanning the space of normal behavior during training, any novel behavior will be either an attack or a serious system error. In practice, however, it is infeasible to span the space of benign behavior. Even on an individual host, ``normal'' benign behavior can change dramatically depending on configurations, software installations, and user turnover. The network situation is even more complicated. Even for a medium-sized network, services, protocols, switches, routers, and topologies vary routinely. We contend that novelty detection has a particularly useful role in the recognition of stealth malware, but the premise that we can span the entire benign input space a priori is as unrealistic as the premise that signatures of all known attacks solve the intrusion detection problem. Instead, novelty detection should be treated in terms of what it does mathematically -- as a tool to recognize samples that are unsupported by the training data and to quantify the degree of confidence to ascribe to a model's decision. Specifically, we propose treating the intrusion recognition task as an \emph{open set recognition} problem, performing discriminative multi-class recognition under the assumption of unknown classes at classification time. Scheirer et al.~\cite{scheirer2013towards,scheirer2014probability} formalized the open set recognition problem as a tradeoff between minimizing \emph{empirical risk} and \emph{open space risk} -- the risk of labeling unknown space -- or mathematically, the ratio of positively labeled space that should have been labeled ``unknown'' to the total extent of positively labeled space. A classifier that can arbitrarily control this ratio via an adjustable threshold is said to \textit{manage open space risk}.
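To make this tradeoff concrete, the following minimal sketch (our own illustration, not an implementation from the cited works; the function name and parameters are hypothetical) labels a query ``unknown'' unless some known class attains a monotonically abating similarity score above an adjustable threshold $\tau$:

\begin{verbatim}
import numpy as np

def open_set_predict(x, class_exemplars, tau, sigma=1.0):
    # class_exemplars: dict mapping label -> (n_k, d) array of
    # training exemplars. The Gaussian similarity decreases
    # monotonically with ||x - x'||, so thresholding it at tau
    # bounds the extent of each positively labeled region.
    best_label, best_score = "unknown", tau
    for label, exemplars in class_exemplars.items():
        d2 = np.sum((exemplars - x) ** 2, axis=1)
        score = np.max(np.exp(-d2 / (2.0 * sigma ** 2)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score
\end{verbatim}

Raising \texttt{tau} toward 1 shrinks the positively labeled volume -- and thereby the ratio that open space risk measures -- at the expense of recall on the known classes.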
Scheirer et al.~\cite{scheirer2013towards} extended the linear SVM objective to bound the data points belonging to each known class by two parallel hyperplanes: one corresponding to a discriminative decision boundary, managing empirical risk, and the other limiting the extent of the classification, managing open space risk. Unfortunately, this ``slab'' model is not easily extensible to a non-linear classifier. In later work \cite{scheirer2014probability,jain2014multiclass}, they extended their solution to multi-class open set recognition problems using non-linear kernels, via posterior EVT calibration and thresholding of nonlinear SVM decision scores. EVT-calibrated one-class SVMs are used in conjunction with multi-class SVMs to simultaneously bound open space risk and provide strong discriminative capability \cite{scheirer2014probability}. The authors refer to this combination as the \emph{W-SVM}. For our discussion, however, the theorems of Scheirer et al.~\cite{scheirer2014probability} are more interesting than the W-SVM itself. They prove that sum, product, min, and max fusions of compact abating probability (CAP) models again generate CAP models. Bendale and Boult \cite{bendale2015towards} extended this work to show that CAP models in linearly transformed spaces manage open space risk in the original input space. While these works are interesting, the formulations limit their application to probability distributions. Given the need for efficient model updates in an intrusion recognition setting, enforcing probabilistic constraints on the recognition problem might be non-trivial, because of the need to re-normalize at each increment. We therefore generalize the theorems of Scheirer et al.~\cite{scheirer2014probability} as follows. \begin{mythm}{{\bf Abating Bounds for Open Space Risk: }} \label{thm:abating_bounds} Assume a set of non-negative continuous bounded functions $\{g_1,\hdots,g_n\}$ where $g_k(x,x')$ decreases monotonically with $||x-x'||$. Then thresholding any positively weighted sum, product, min, or max fusion of a finite set of non-negative discrete or continuous functions $\{f_1,\hdots,f_n\}$ that satisfy $f_k(x,x') \leq g_k(x,x')\ \forall k$ manages open space risk, i.e., it allows us to constrain open space risk below any given $\epsilon$. \end{mythm} \begin{proof} Given $\tau > 0$, define \[ g'_k(x,x',\tau) \mathrel{\mathop:}= \left\{\def\arraystretch{1.2}% \begin{array}{@{}c@{\quad}l@{}} g_k(x,x') & \text{if $g_k(x,x') > \tau$}\\ 0 & \text{otherwise.}\\ \end{array}\right. \] \[ f'_k(x,x',\tau) \mathrel{\mathop:}= \left\{\def\arraystretch{1.2}% \begin{array}{@{}c@{\quad}l@{}} f_k(x,x') & \text{if $f_k(x,x') > \tau$}\\ 0 & \text{otherwise.}\\ \end{array}\right. \] This yields $f'_k(x,x',\tau) \leq g'_k(x,x',\tau)\ \forall k$. Because of the monotonicity of $g_k$, for any fixed constant $\delta$, $\exists \tau_{\delta} \colon \int g'_k(x,x',\tau_{\delta})\ dx \le \delta$. Combining this with $f'_k(x,x',\tau_{\delta}) \leq g'_k(x,x',\tau_{\delta})$ yields $\int f'_k(x,x',\tau_{\delta})\ dx \le \delta$. Since $f'_k > \tau_{\delta}$ on the positively labeled region, the measure of that region is at most $\delta/\tau_{\delta}$, which manages open space risk. Without loss of generality on $k$, it is easy to see that max and min fusion also manage open space risk. Because summation is a linear operator: $$\int \sum_k f'_k(x,x',\tau)\ dx = \sum_k \int f'_k(x,x',\tau)\ dx.$$ Since a finite sum of finite values is finite, and $\sum_k \int f'_k(x,x',\tau_{\delta})\ dx \le n\delta$, it follows that thresholded positively weighted sums of $f'_k$ manage open space risk.
In addition, $\prod_k f'_k$ manages open space risk: since each $g_k$ is bounded, $\exists \eta \colon g'_k(x,x',\tau) \le \eta\ \forall k$, and therefore $$\int \prod_k g'_k(x,x',\tau_{\delta})\ dx \le \eta^{n-1} \int g'_1(x,x',\tau_{\delta})\ dx \le \eta^{n-1}\delta.$$ This latter bound may not be tight, but since $\prod_k f'_k \le \prod_k g'_k$, it is sufficient to show that thresholding $\prod_k f_k$ manages open space risk. We have proven Theorem~\ref{thm:abating_bounds} without weights in the sums and products, but without loss of generality, non-negative weights can be incorporated directly into $g_k$. \end{proof} From Theorem~\ref{thm:abating_bounds}, it directly follows that many \emph{novelty detection} algorithms already in use by the IDS community provably manage open space risk and fit nicely into the open set recognition framework. For example, Scheirer et al.~\cite{scheirer2014probability} prove that thresholding neighbor methods by distance manages open space risk. Via such thresholding, clustering methods can be extended to an online regime, in which unknown classes $U_1, \hdots, U_\Omega$ are isolated \cite{markou2003novelty}. Similarly, thresholded kernel density estimation (KDE) of ``normal'' data distributions has been successfully applied to the IDS domain. Yeung and Chow \cite{yeung2002parzen} used kernel density estimates, in which they centered an isotropic Gaussian kernel on every data point $x_k$. It is easy to prove that such estimates also manage open space risk. \begin{mycorr}{{\bf Gaussian KDE Bounds for Open Space Risk.}} \label{cor:kde} Assume a Gaussian kernel density estimator where: $$p(x) = \frac{1}{N}\sum_{k=1}^{N} \frac{1}{(2\pi \sigma^2)^{D/2}}\exp\left(-\frac{||x-x_k||^2}{2\sigma^2}\right).$$ Thresholding $p(x)$ by $0 < \tau \leq 1$ manages open space risk. \end{mycorr} \begin{proof} When $N$ is the total number of points, each kernel is given by: $$f_k(x,x_k) = \frac{1}{N} \frac{1}{(2\pi \sigma^2)^{D/2}}\exp\left(-\frac{||x-x_k||^2}{2\sigma^2}\right).$$ By Theorem~\ref{thm:abating_bounds}, we can treat $f_k(x,x_k)$ as its own bound. When thresholded, $f_k(x,x_k)$ will finitely bound open space risk. The kernel density estimate: $$p(x) = \sum_{k=1}^{N}f_k(x,x_k) = \frac{1}{N}\sum_{k=1}^{N} \frac{1}{(2\pi \sigma^2)^{D/2}}\exp\left(-\frac{||x-x_k||^2}{2\sigma^2}\right)$$ also bounds open space risk because it is a positively weighted sum of functions that satisfy the bounding criteria in Theorem~\ref{thm:abating_bounds}. \end{proof} Thresholded nearest neighbor approaches and KDE require the selection of a meaningful $\sigma$ and a distance/probability threshold. They also implicitly assume local isotropy in the feature space, which highlights the need for a meaningful feature space representation. Neighbor and kernel density estimators are nonparametric models, but several parametric novelty detectors in use by the IDS community also provably manage open space risk. Thresholding density estimates from Gaussian mixture models (GMMs) is a popular parametric approach to novelty detection with a similar functional form to KDE \cite{alizadeh2015traffic,fan2013anomaly,gruhl2015building,lam2015outlier,yamanishi2004line}. For GMMs, however, the input data $x$ are assumed to be distributed as a superposition of a \textit{fixed} number of Gaussians: $$p(x) = \sum_k c_k N(x|\mu_k,\Sigma_k)$$ such that $\sum_k c_k = 1$. Unlike nonparametric Gaussian KDE, in which the kernel centers are the data points themselves and $\sigma$ is selected a priori, the Gaussians in a GMM are fit via an expectation maximization technique similar to that used by HMMs. By generalizing Corollary~\ref{cor:kde}, we can prove that thresholding GMM density estimates manages open space risk.
When GMMs integrate to one, they are also CAP models. Although this constraint often holds, it is not required. \begin{mycorr}{{\bf GMM Bounds for Open Space Risk.}} \label{cor:GMM} Assume a Gaussian mixture model. The thresholded density estimate from this model bounds open space risk. \end{mycorr} \begin{proof} The $k$th mode of a GMM, $$f_k(x,\mu_k) = c_k e^{-\frac{1}{2}(x-\mu_k)^T\Sigma_k^{-1}(x-\mu_k)},$$ is bounded above by $c_k e^{-\frac{\lambda_{\min}}{2}||x-\mu_k||^2}$, where $\lambda_{\min}$ is the smallest eigenvalue of $\Sigma_k^{-1}$; this bound decreases monotonically with $||x-\mu_k||$, so each mode possesses an abating bound in the sense of Theorem~\ref{thm:abating_bounds}. Because the superposition of all modes is a sum of non-negatively weighted functions, each with an abating bound, GMMs have an abating bound. Thresholding GMM density estimates therefore manages open space risk. \end{proof} Note that Corollary~\ref{cor:GMM} only holds for the density estimate from an individual GMM, and not necessarily for all recognition functions that leverage multiple GMMs. For example, the log ratio of the probabilities of two GMM estimates, $\log\frac{p_1(x)}{p_2(x)}$, does not bound open space risk, because $\frac{p_1(x)}{p_2(x)}$ diverges as $p_2(x) \rightarrow 0$. Similarly, a recognition function $p_1(x) > p_2(x)$ does not provably manage open space risk because $p_1(x) > p_2(x)$ can hold over unbounded $x$. There is a strong connection between GMMs and the aforementioned HMMs. Like HMMs, GMMs can be viewed as discrete latent variable models. Given input data $x$ and a multinomial random variable $z$, whose value corresponds to the generating Gaussian, the joint distribution factors according to the product rule: $p(x,z) = p(z)p(x|z)$, where $p(z)$ is determined by the Gaussian mixture coefficients $c_k$. Therefore, the factorization of GMMs can be viewed as a simplification of HMMs, with a factor graph in which the latent variables are not connected and are therefore treated independently of sequence. This raises two questions: first, can HMMs be used for novelty detection? And second, do HMMs manage open space risk? Indeed, HMMs \textit{can} be used for novelty detection on sequential data by running inference on sequences in the training set and thresholding the estimated joint probability (or the log of the estimated joint probability) outputs. This approach was taken by Yeung et al.~\cite{yeung2003host} for host-based intrusion detection using system call sequences. To assess whether HMMs manage open space risk, we need to consider the form of an HMM's estimated joint distribution. For an $N$-length sequence, an HMM factors as: $$p(x_1,..,x_N,z_1,..,z_N) = p(z_1) \prod_{n=2}^{N} p(z_n | z_{n-1}) \prod_{n=1}^N p(x_n | z_n)$$ where $x_1,\hdots,x_N$ are observations and $z_1,\hdots,z_N$ are latent variables. This leads to Corollary~\ref{cor:HMM}. \begin{mycorr}{{\bf HMM Bounds for Open Space Risk.}} \label{cor:HMM} Assume the HMM factors $p(z_1)$, $p(z_n | z_{n-1})$, and $p(x_n | z_n)$ satisfy the bounding constraints in Theorem~\ref{thm:abating_bounds}. Then thresholding the output of a forward pass of an HMM bounds open space risk. \end{mycorr} \begin{proof} Under the assumption that $p(z_1)$, $p(z_n | z_{n-1})$, and $p(x_n | z_n)$ satisfy the bounding constraints in Theorem~\ref{thm:abating_bounds}, the HMM factorization above is a product of such functions, which by Theorem~\ref{thm:abating_bounds} manages open space risk when thresholded. \end{proof} Corollary~\ref{cor:HMM} states that under certain assumptions on the form of the factors in HMMs, an HMM will provably manage open space risk.
Unfortunately, it is not immediately clear how to enforce such a form, so many HMMs, including those in \cite{yeung2003host}, are not proven to manage open space risk and may ascribe known labels to infinite open space. Formulating HMMs that manage open space risk and provide adequate modeling of data is an important topic, which we leave for future research. GMMs and HMMs are linear models. One-class SVMs are popular nonlinear models, which have been successfully applied to detecting novel intrusions \cite{amer2013enhancing,yang2015adaptive,heller2003one,wang2004anomaly,li2003improving,perdisci2006using}. In their Theorem 2, Scheirer et al.~\cite{scheirer2014probability} prove that one-class SVM density estimators \cite{scholkopf2001estimating} with a Gaussian radial-basis function (RBF) kernel manage open space risk. The decision functions for these machines are given by $\sum_k \alpha_k K(x,x_k)$, where $K(x,x_k)$ is the kernel function and $\alpha_k$ are the Lagrange multipliers. It is important to note that non-negative $\alpha_k$ are required to satisfy Theorem 1 in \cite{scheirer2014probability}, and that multi-class RBF SVMs and one-class SVMs under different objective functions are not proven to manage open space risk. \subsection{Open World Archetypes for Stealth Malware Intrusion Recognition} \label{sec:adaptive} The open set recognition framework introduced in Sec.~\ref{sec:open_set} can be \emph{incorporated} into existing intrusion recognition algorithms. This means that there is no need to abandon closed set algorithms in order to manage open space risk, provided that they are fused with open set recognition algorithms. Closed set techniques may be excellent solutions when they are well supported by training data, but open set algorithms are required in order to ascertain whether the closed set decisions are meaningful. Therefore, the open set problem can be addressed by using an algorithm that is inherently open set for novelty detection and rejecting any closed set decision as unknown if its support is below the open set threshold. A model with easily interpreted diagnostic information, e.g., a decision tree or Bayesian network, can be fused with the open set algorithm as well, in order to decrease response/mitigation times and to compensate for other discriminative algorithms that are not so readily interpretable. Note that many of the algorithms proposed by Scheirer et al. are discriminative classifiers themselves, but underperform the state of the art in a purely closed set setting. The interpretation of a thresholded open set decision is trivial, assuming that the recognition function represents some sort of density estimation. For a query point, if the maximum density with respect to every class is below the open set threshold, $\tau$, then the query is labeled ``unknown''. Otherwise, the query sample is ascribed the label corresponding to the class of maximum density. Under the open set formulation, the degree of \emph{openness} can be controlled by the value of $\tau$. The desired amount of openness will vary depending on the algorithm and the application's optimal precision/recall requirements. For example, a high-security, non-latency-sensitive virtualized environment that is administered by many security experts can label many examples as unknown and frequently interpose for an expert opinion. Systems that are latency sensitive, but for which the potential harm of intrusion is relatively low, might have much looser open space bounds.
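As a concrete illustration of this decision rule, the following minimal sketch (our own; the names and parameters are hypothetical) combines the per-class Gaussian KDE of Corollary~\ref{cor:kde} with the open set threshold $\tau$; any density estimator whose decision function satisfies Theorem~\ref{thm:abating_bounds} could be substituted for the KDE:

\begin{verbatim}
import numpy as np

def kde_density(x, train_pts, sigma):
    # Isotropic Gaussian KDE: kernels are centered on the training
    # points themselves, and only sigma is selected a priori.
    d = train_pts.shape[1]
    d2 = np.sum((train_pts - x) ** 2, axis=1)
    norm = (2.0 * np.pi * sigma ** 2) ** (d / 2.0)
    return np.mean(np.exp(-d2 / (2.0 * sigma ** 2))) / norm

def open_set_classify(x, class_data, tau, sigma=1.0):
    # Label with the class of maximum density, or "unknown" when
    # every class-conditional density falls below the threshold.
    dens = {c: kde_density(x, pts, sigma) for c, pts in class_data.items()}
    label = max(dens, key=dens.get)
    return label if dens[label] >= tau else "unknown"
\end{verbatim}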
Note that an open set density estimator can be applied with or without normalization to a probability distribution. However, we can only prove that it manages open space risk if the estimator's decision function satisfies Theorem~\ref{thm:abating_bounds}. Open set algorithms can also be applied under many different feature space transformations. When open set algorithms are fused with closed set algorithms, the two need not necessarily operate in the same feature space. Research has demonstrated \cite{rudd2015extreme,bendale2015towards} the effectiveness of the open set classification framework in machine-learnt feature spaces. Bendale and Boult \cite{bendale2015towards} bounded a nearest class mean (NCM) classifier in a metric-learnt transformed space, an algorithm they dubbed nearest non-outlier (NNO). They also proved that under a linear transformation, open space risk management in the transformed feature space will manage open space risk in the original input space. Rudd et al.~\cite{rudd2015extreme} formulated extreme value machine (EVM) classifiers to perform open set classification in a feature space learnt from a convolutional neural network. The EVM formulation performs a kernel-like transformation that supports variable data bandwidths and implicitly transforms \emph{any} feature space into a probabilistically meaningful representation. This research indicates that open set algorithms can support meaningful feature space transformations, although what constitutes a ``good'' feature space depends on the problem and classifier in question. Bendale and Boult \cite{bendale2015towards} and Rudd et al.~\cite{rudd2015extreme} also extended open set recognition algorithms to an online regime, which supports incremental model updates. They dubbed this recognition scenario \emph{open world recognition}, referring to online open set recognition. The incremental aspects of this work are in a similar vein to other online intrusion recognition techniques \cite{lane1998approaches,karthick2012adaptive,wang2004anomalous,zhong2007clustering,cannady2000applying,hu2014online,wang2013concept}, which, given a batch of training points $X_{t}$ at time $t$, aim to update the prior for time $t+1$ in terms of the posterior for time $t$, so that $P_{t+1}(\theta_{t+1}) \leftarrow P_{t}(\theta_{t}|X_{t},T_{t})$, where $T$ is the target variable, $P$ is a recognition function, and $\theta$ is a parameter vector. If $P$ is a probability, a Bayesian treatment can be adopted, where: {\small $$ P_{t + 1}(\theta_{t + 1}|X_{t + 1},T_{t + 1}) = \frac{P_{t+1}(T_{t+1}|\theta_{t+1},X_{t+1})P_{t}(\theta_{t}|X_{t},T_{t})}{P_{t+1}(T_{t+1})} $$% }% With a few exceptions, however, recognition functions in the incremental learning intrusion recognition literature generally do not satisfy Theorem~\ref{thm:abating_bounds}, and are not proven to manage open space risk. This means that they are not necessarily true \textit{open world} classifiers. Moreover, none of the work in \cite{lane1998approaches,karthick2012adaptive,wang2004anomalous,zhong2007clustering,cannady2000applying,hu2014online,wang2013concept} addresses the pressing need to \emph{prioritize the labeling of detected novel data} for incremental training. This is problematic because the objective of online learning is to adapt a model to recognize new attack variations and benign patterns -- insights that would otherwise be perishable within a useful time horizon.
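Once labels for novel samples have been obtained, folding them into an exemplar- or kernel-based open world model of the kind sketched above can be inexpensive. The following minimal sketch (our own, and deliberately simplistic: published formulations such as NNO and the EVM additionally prune redundant points and re-fit per-class parameters) illustrates why no batch retrain is needed:

\begin{verbatim}
import numpy as np

def incremental_update(class_data, new_points, label):
    # Fold newly labeled points into an exemplar-based model; a
    # previously unknown class simply adds a key. A probabilistically
    # normalized model would additionally need to re-normalize here.
    new_points = np.atleast_2d(new_points)
    if label in class_data:
        class_data[label] = np.vstack([class_data[label], new_points])
    else:
        class_data[label] = new_points
    return class_data
\end{verbatim}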
When intrusion recognition subsystems exhibit high recall rates, however, updating the model with new attack signatures is \emph{much more vital} than updating the model with novel benign profiles. Since labeling capacity is often limited by the number of knowledgeable security practitioners, we contend that the ``optimal'' labeling approach is to greedily rank the unknown samples in terms of their likelihood of being associated with known malicious classes. Given bounded radially abating functions from Theorem~\ref{thm:abating_bounds}, i.e., open set decision functions, we can do just that, prioritizing labeling by some malicious likelihood criterion (MLC). The intuition behind the MLC ranking is as follows: as discussed in Sec.~\ref{sec:signatures_heuristics}, malwares often share components; even for vastly different malwares, similar components yield similar patterns in feature space. Although minor code overlap will not necessarily cause (mis)categorizations of malware classes, it may cause novel malware classes to be close enough to known ones in feature space that they are ranked higher by the MLC criterion than most novel benign samples. Label prioritization by MLC ranking could, therefore, improve the resource allocation of security professionals and dramatically reduce the amount of time that stealth malwares are able to propagate unnoticed. Of course, other considerations besides MLC are relevant to a truly ``optimal'' ranking, including difficulty of diagnosis and likely degree of harm, but these properties are difficult to ascertain autonomously. A final useful aspect of the open world intrusion recognition framework is that it is not confined to na\"ive Gaussian assumptions. Mixtures of Gaussians can work well for modeling densities, but tend to deteriorate at the distribution tails, because the tails of the models tend toward the tails of unimodal Gaussians, whereas the tails of the data distributions generally do not. For recognition problems, however, accurate modeling of tail behavior is important -- in fact, more important than accurate modeling of class centers \cite{scheirer2010robust,scheirer2012multi}. To this end, researchers have turned to statistical extreme value theory techniques for density estimation, and open world recognition readily accommodates them. Both \cite{jain2014multiclass} and \cite{scheirer2014probability} apply EVT modeling to open set recognition scenarios based on posterior fitting of point distances to classifier decision boundaries, while \cite{rudd2015extreme} incorporated EVT calibration into a generative model, which performs loose density estimation as a mixture of EVT distributions. Importantly, the EVT distributions employed by Rudd et al.~\cite{rudd2015extreme}, unlike Gaussian kernels in an SVM or KDE application, are variable bandwidth functions of the data. They are also directly derived from EVT and incorporate higher-order statistics that Gaussian distributions cannot (e.g., skew, kurtosis). Finally, they provably manage open space risk. \ifCLASSOPTIONcaptionsoff \newpage \fi \section{Conclusions and Open Issues} \label{sec:conclusion} Stealth malwares are a growing threat because they exploit many system features that have legitimate uses and they can propagate undetected for long periods of time. We, therefore, felt the need to provide the first academic survey specifically focused on malicious stealth technologies and mitigation measures.
We hope that security professionals in both academic and industrial environments can draw on this work in their research and development efforts. Our work also highlights the need to combine countermeasures that aim to protect the integrity of system components with more generic machine learning solutions. We have identified flawed assumptions behind many machine learning algorithms and proposed steps to improve them based on research from other recognition domains. We encourage the security community to consider these suggestions in the future development of intrusion recognition algorithms. While we are the first to propose a mathematically formalized \textit{open world} approach to intrusion recognition, there are open issues that must be addressed through experimentation and implementation, including how tightly to bound open space risk and, more generally, how to determine the openness of the problem in operational scenarios. An overly aggressive bound may actually degrade performance for problems that are predominantly closed set, prioritizing the minimization of open space risk over the minimization of empirical risk. Another important consideration is the cost of misclassifying an unknown sample as belonging to a known class, which depends in part on the operational resources available to label novel classes, and in part on the degree of threat expected of novel classes. These tradeoffs are important subjects for experimental analysis and the operational deployment of open world anti-malware systems. For benchmarking and experimentation, good datasets that support open world protocols are vital for future research. While some effort has been made, e.g., \cite{creech2014semantic,creech2013generation,creech2014developing}, there are few modern publicly available datasets for intrusion detection, specifically of stealth malwares. We believe that the collection and distribution of modernized and realistic publicly available datasets containing stealth malware samples are vital to the furtherance of academic research in the field. While many corporate security companies have good reasons for keeping their datasets private, a guarded increase in collaboration with academia to allow extended -- yet still restricted -- sharing of data is in the best interest of all parties in developing better stealth malware countermeasures. We have proven that a number of existing algorithms currently used in the intrusion recognition domain already satisfy the requirements of an open set framework, and we believe that they should be leveraged and extended both in theory and in practice to address the flawed assumptions behind many existing algorithms that we detailed in Sec.~\ref{sec:flawed_assumptions}. Adopting an open world mathematical framework obviates the assumptions that \textit{intrusions are closed set}, that \textit{anomalies imply class labels}, and that \textit{static models are sufficient}. How to appropriately address the other assumptions requires further research. Although some progress has been made in open world algorithms, the question of how to obtain a highly discriminable feature space while accommodating a readily interpretable model merits future research. Finally, how to model class distributions without Gaussian assumptions demands further mathematical treatment -- statistical extreme value theory is a good start, but it has yet to be gracefully defined how to select distributional tail boundaries.
Also, with the exception of special cases, it is still not well formalized how to model the remainder of the distribution (the non-extreme values) of non-Gaussian data. \begin{table*}[htp] \scriptsize \centering \section*{Appendix: API Calls, Data Structures, and Registry Keys} \vspace{1em} \renewcommand\arraystretch{1.5} \setlength\tabcolsep{1em} \begin{tabularx}{.9\textwidth}{c|X} \bf \Name{Windows} API entry & \bf Documentation \\\hline \Windows{CallNextHookEx} & Passes the hook information to the next hook procedure in the current hook chain. A hook procedure can call this function either before or after processing the hook information.\\ \Windows{CreateRemoteThread} & Creates a thread that runs in the virtual address space of another process.\\ \Windows{DeviceIoControl} & Sends a control code directly to a specified device driver, causing the corresponding device to perform the corresponding operation.\\ \Windows{DllMain} & An optional entry point into a dynamic-link library (DLL). When the system starts or terminates a process or thread, it calls the entry-point function for each loaded DLL using the first thread of the process. The system also calls the entry-point function for a DLL when it is loaded or unloaded using the \Windows{LoadLibrary} and \Windows{FreeLibrary} functions.\\ \Windows{FindFirstFile} & Searches a directory for a file or subdirectory with a name that matches a specific name (or partial name if wildcards are used).\\ \Windows{FindNextFile} & Continues a file search from a previous call to the \Windows{FindFirstFile}, \Windows{FindFirstFileEx}, or \Windows{FindFirstFileTransacted} functions.\\ \Windows{GetProcAddress} & Retrieves the address of an exported function or variable from the specified dynamic-link library (DLL).\\ \Windows{LoadLibrary} & Loads the specified module into the address space of the calling process. The specified module may cause other modules to be loaded.\\ \Windows{NtDelayExecution} & \emph{Undocumented export of \Windows{ntdll.dll}.}\\ \Windows{NtQuerySystemInformation} & Retrieves the specified system information.\\ \Windows{OpenProcess} & Opens an existing local process object.\\ \Windows{PsSetLoadImageNotifyRoutine} & The \Windows{PsSetLoadImageNotifyRoutine} routine registers a driver-supplied callback that is subsequently notified whenever an image is loaded (or mapped into memory).\\ \Windows{PspCreateProcessNotify} & \emph{Completely undocumented function.}\\ \Windows{SetWindowsHookEx} & Installs an application-defined hook procedure into a hook chain. You would install a hook procedure to monitor the system for certain types of events. These events are associated either with a specific thread or with all threads in the same desktop as the calling thread.\\ \Windows{SleepEx} & Suspends the current thread until the specified condition is met.\\ \Windows{WriteProcessMemory} & Writes data to an area of memory in a specified process.
The entire area to be written to must be accessible or the operation fails.\\ \Windows{ZwQuerySystemInformation} & Retrieves the specified system information.\\\hline \Windows{EAT} & The \Windows{Export Address Table} is a table where functions exported by a module are placed so that they can be used by other modules.\\ \Windows{EPROCESS} & The \Windows{EPROCESS} structure is an opaque executive-layer structure that serves as the process object for a process.\\ \Windows{IAT} & The \Windows{Import Address Table} is where the dynamic linker writes the addresses of loaded modules such that each entry points to the memory locations of library functions.\\ \Windows{IDT} & The \Windows{Interrupt Descriptor Table} is a kernel-level table of function pointers to callbacks that are called upon interrupts/exceptions.\\ \Windows{IRP} & \Windows{I/O Request Packets} are used to communicate between device drivers and other areas of the kernel.\\ \Windows{KTHREAD} & The \Windows{KTHREAD} structure is an opaque kernel-layer structure that serves as the thread object for a thread.\\ \Windows{KeServiceDescriptorTable} & Contains pointers to the \Windows{SSDT} and \Windows{SSPT}. It is an undocumented export of \Windows{ntoskrnl.exe}.\\ \Windows{SSDT} & The \Windows{System Service Descriptor Table} is a kernel-level dispatch table of callbacks for system calls.\\ \Windows{SSPT} & The \Windows{System Service Parameter Table} is a kernel-level table containing the sizes (in bytes) of the arguments for \Windows{SSDT} callbacks.\\\hline \Windows{AppInit\_DLLs} & Space or comma delimited list of DLLs to load.\\ \Windows{LoadAppInit\_DLLs} & Globally enables or disables \Windows{AppInit\_DLLs}.\\\hline \Windows{kernel32.dll} & Exposes to applications most of the \Name{Win32} base APIs, such as memory management, input/output (I/O) operations, process and thread creation, and synchronization functions.\\ \Windows{ntdll.dll} & Exports the \Name{Windows Native API}. The \Name{Native API} is the interface used by user-mode components of the operating system that must run without support from \Name{Win32} or other API subsystems.\\ \Windows{ntoskrnl.exe} & Provides the kernel and executive layers of the \Name{Windows NT} kernel space, and is responsible for various system services such as hardware virtualization, process and memory management, thus making it a fundamental part of the system.\\ \Windows{user32.dll} & Implements the \Name{Windows} user component that creates and manipulates the standard elements of the \Name{Windows} user interface, such as the desktop, windows, and menus. \end{tabularx} \cap{tab:SysCalls}{System Calls}{This table explains the \Name{Windows} system calls, data structures, registry keys and system files (in this order) that are used by the malwares described in this paper, in alphabetical order. Many of the entries are copied directly from the \Name{Microsoft Developer Network} (MSDN) documentation\cite{MSDN} or from \Name{Wikipedia} for the file descriptions\cite{Wikipedia}. Others are summaries of descriptions given elsewhere in the text, with their own respective citations.} \end{table*} \newpage \clearpage \bibliographystyle{ieee}
\section{Introduction} Core-collapse (CC) supernovae (SNe) are caused by the gravitational collapse of the core in massive stars. The diversity of the events that we observe reflects the diversity of the progenitor stars and their surrounding circumstellar media (CSM). In particular, the extent to which the star has lost its hydrogen envelope has a profound impact on the observed properties of the SN. Through the presence or absence of hydrogen lines in their spectra these SNe are classified as Type II or Type I, respectively. The ejecta mass of Type I SNe tends to be smaller and thus the diffusion time shorter and the expansion velocity higher. The designation IIb is used for SNe which show a spectral transition from Type II (with hydrogen) at early times to Type Ib (without hydrogen but with helium) at later times. These SNe are thought to arise from stars that have lost most, but not all, of their hydrogen envelope. The prime example of such a SN is 1993J, where the progenitor star was a yellow (extended) supergiant proposed to have lost most of its hydrogen envelope through interaction with its blue (compact) companion star \citep{Pod93,Mau04,Sta09}. As Type IIb SNe are surprisingly common given the brief period single stars spend in the appropriate state, binary stars have been suggested as the main production channel -- but the issue remains unresolved. Bright and nearby Type IIb SNe are rare, but for these, detection of the progenitor star in archival pre-explosion images and, once the SN has faded, a search for the companion star are feasible. By comparison of the magnitude and colour of the progenitor star to predictions from stellar evolutionary models, basic properties such as the initial mass can be estimated \citep{Sma09}. High quality multi-wavelength monitoring of these SNe followed by detailed modelling of the data is crucial to improve our understanding of Type IIb SNe and their progenitor stars. This paper presents the first 100 days of the extensive optical and NIR dataset we have obtained for such a SN, the Type IIb 2011dh. Detailed hydrodynamical modelling of the SN using these data has been presented in \citet[hereafter \citetalias{Ber12}]{Ber12} and identification and analysis of the plausible progenitor star in \citet[hereafter \citetalias{Mau11}]{Mau11}. The remaining data and further modelling will be presented in forthcoming papers. \subsection{Supernova 2011dh} SN 2011dh was discovered by A. Riou on 2011 May 31.893 UT \citep{Gri11} in the nearby galaxy M51 at a distance of about 8 Mpc (Sect.~\ref{s_distance}). The latest non-detection reported in the literature is by the Palomar Transient Factory (PTF) from May 31.275 UT \citep[hereafter \citetalias{Arc11}]{Arc11}. We adopt May 31.5 UT as the epoch of explosion, and the phase of the SN will be expressed relative to this date throughout the paper. The host galaxy M51, also known as the Whirlpool galaxy, was the first galaxy for which the spiral structure was observed \citep{Ros50} and is frequently observed. Thus it is not surprising that excellent pre-explosion data were available in the Hubble Space Telescope (HST) archive. In \citetalias{Mau11} we used these data to identify a yellow (extended) supergiant progenitor candidate which, by comparison to stellar evolutionary models, corresponds to a star of $13 \pm 3$ M$_\odot$ initial mass.
A similar analysis by \citet{Dyk11} estimated an initial mass between 17 and 19 M$_\odot$, the difference mainly stemming from the different method used to identify the evolutionary track in the HR-diagram. Recent HST \citep{Dyk13b} and Nordic Optical Telescope (NOT) \citep[this paper]{Erg13} observations show that the yellow supergiant is now gone and indeed was the progenitor of SN 2011dh. We discuss this issue in Sect.~\ref{s_prog_dis} and provide details of the NOT observations in Appendix~\ref{a_prog_obs}. The SN has been extensively monitored from X-ray to radio wavelengths by several teams. Optical and NIR photometry and spectroscopy, mainly from the first 50 days, have been published by \citetalias{Arc11}, \citetalias{Mau11}, \citet[hereafter \citetalias{Tsv12}]{Tsv12}, \citet[hereafter \citetalias{Vin12}]{Vin12}, \citet[hereafter \citetalias{Mar13}]{Mar13}, \citet[hereafter \citetalias{Dyk13b}]{Dyk13b} and \citet[hereafter \citetalias{Sah13}]{Sah13}. Radio and millimeter observations have been published by \citet{Mar11}, \citet{Kra12}, \citet{Bie12}, \citet{Sod12} and \citet{Hor12}, and X-ray observations by \citet{Sod12}, \citet{Sas12} and \citet{Cam12}. The SN has been monitored in the ultraviolet (UV) using SWIFT, in the mid-infrared (MIR) using Spitzer and at sub-millimeter wavelengths using Herschel. In this paper we will focus on the UV to MIR emission. The nature of the progenitor star is an issue of great interest and there has been some debate in the literature. Using approximate models, \citetalias{Arc11} argued that the SN cooled too fast, and \citet{Sod12} that the speed of the shock was too high, to be consistent with an extended progenitor. However, in \citetalias{Ber12} we have used detailed hydrodynamical modelling to show that a 3.3-4 M$_\odot$ helium core with an attached thin and extended hydrogen envelope reproduces the early photometric evolution well and is also consistent with the temperature inferred from early spectra. The findings in \citetalias{Ber12} are in good agreement with those in \citetalias{Mau11}, and the issue now seems to be settled by the disappearance of the yellow supergiant. See also \citet{Mae12} for a discussion of the assumptions made in \citet{Sod12}. The presence or absence of a companion star (as for SN 1993J) is another issue of great interest. As shown in \citet{Ben12}, a binary interaction scenario that reproduces the observed and modelled properties of the yellow supergiant is certainly possible. Furthermore, it would be possible to confirm the predicted blue (compact) companion star using HST observations, preferably in the UV where the star would be at its brightest. The paper is organized as follows. In Sections \ref{s_distance} and \ref{s_extinction} we discuss the distance and extinction, in Sect.~\ref{s_obs} we present the observations and describe the reduction and calibration procedures, in Sect.~\ref{s_analysis} we analyse the observations and calculate the bolometric lightcurve, in Sect.~\ref{s_sn_comp} we compare the observations to other SNe, and in Sect.~\ref{s_discussion} we provide a discussion, mainly related to the hydrodynamical modelling in \citetalias{Ber12} and the disappearance of the progenitor. Finally, we conclude and summarize the paper in Sect.~\ref{s_conclusions}. In Appendix \ref{a_phot_cal} we provide details on the calibration of the photometry and in Appendix \ref{a_prog_obs} we provide details on the progenitor observations.
\subsection{Distance} \label{s_distance} In Table \ref{t_d} we list all estimates of the distance to M51 that we have found in the literature. As the sample is reasonably large, and as it is not clear how to judge the reliability of the individual estimates, we will simply use the median and the 16 and 84 percentiles to estimate the distance and the corresponding error bars. This gives a distance of 7.8$^{+1.1}_{-0.9}$ Mpc, which we will use throughout this paper. \begin{table*}[tb] \caption{Distance to M51. Literature values.} \begin{center} \begin{tabular}{lll} \toprule Distance & Method & Reference \\ (Mpc) & & \\ \midrule 9.60 $\pm{0.80}$ & Size of HII regions & \citet{San74} \\ 6.91 $\pm{0.67}$ & Young stellar clusters & \citet{Geo90} \\ 8.39 $\pm{0.60}$ & Planetary nebula luminosity function & \citet{Fel97} \\ 7.62 $\pm{0.60}$ & Planetary nebula luminosity function & \citet{Cia02} \\ 7.66 $\pm{1.01}$ & Surface brightness fluctuations & \citet{Ton01} \\ 7.59 $\pm{1.02}$ & Expanding photosphere method (SN 2005cs) &\citet{Tak06} \\ 6.36 $\pm{1.30}$ & Type IIP SN standard candle method (SN 2005cs) & \citet{Tak06} \\ 8.90 $\pm{0.50}$ & Spectral expanding photosphere method (SN 2005cs) & \citet{Des08} \\ 6.92 & Type Ic SN properties (SN 1994I) & \citet{Iwa94} \\ 7.90 $\pm{0.70}$ & Spectral expanding photosphere method (SN 2005cs) & \citet{Bar07} \\ 6.02 $\pm{1.92}$ & Spectral expanding photosphere method (SN 1994I) & \citet{Bar96} \\ 8.36 & Type IIP SN standard candle method (SN 2005cs) & \citet{Poz09} \\ 9.30 & Tully-Fisher relation & \citet{Tul88} \\ 8.40 $\pm{0.70}$ & Expanding photosphere method (SNe 2005cs and 2011dh) & \citet{Vin12} \\ \bottomrule \end{tabular} \end{center} \label{t_d} \end{table*} \subsection{Extinction} \label{s_extinction} The interstellar line-of-sight extinction towards SN 2011dh within the Milky Way, as given by the extinction maps presented by \citet[hereafter \citetalias{Sch98}]{Sch98} and recently recalibrated by \citet[hereafter \citetalias{Sch11}]{Sch11}, is $E$($B$-$V$)$_\mathrm{MW}$=0.031 mag. Here and in the following, the extinction within the Milky Way, the host galaxy and in total will be subscripted ``MW'', ``H'' and ``T'' respectively and, except where otherwise stated, refer to the interstellar line-of-sight extinction towards the SN. The extinction within host galaxies is generally difficult to estimate. One class of methods in use consists of empirical relations between the equivalent widths of the interstellar \ion{Na}{i} D absorption lines and $E$($B$-$V$). Relations calibrated to the extinction within other galaxies, such as the one by \citet{Tur03}, are based on low resolution spectroscopy and, as demonstrated by \citet{Poz11}, the scatter is very large. Relations based on high or medium resolution spectroscopy, such as the ones by \citet[hereafter \citetalias{Mun97}]{Mun97} and \citet[hereafter \citetalias{Poz12}]{Poz12}, show a surprisingly small scatter but are calibrated to the extinction within the Milky Way. Nevertheless, given the line-of-sight nature of the method and the rough similarity between M51 and the Milky Way, we will use these for an estimate of the extinction within M51. \citet{Rit12} presented high-resolution spectroscopy of SN 2011dh resolving 8 \ion{Na}{i} D components near the M51 recession velocity. The total widths of the \ion{Na}{i} D$_2$ and D$_1$ lines were $180.1 \pm 5.0$ and $106.2 \pm 5.1$ m\AA~respectively.
Using the \citetalias{Mun97} relations and summing the calculated extinction for all individual components (see discussions in \citetalias{Mun97} and \citetalias{Poz12}) we get $E$($B$-$V$)$_\mathrm{H}$=0.05 mag. Using the \citetalias{Poz12} relations for the total equivalent widths we get $E$($B$-$V$)$_\mathrm{H}$=0.03 mag. Taking the average of these two values and adding the extinction within the Milky Way (see above) gives $E$($B$-$V$)$_\mathrm{T}$=0.07 mag. Such a low extinction is supported by estimates from X-rays \citep{Cam12} and the progenitor Spectral Energy Distribution (SED) \citepalias{Mau11}, and we will use this value throughout the paper. The stellar population analysis done by \citet{Mur11} suggests a somewhat higher extinction ($E$($B$-$V$)$_\mathrm{T}$=0.14 mag). We will adopt that value and the extinction within the Milky Way as our upper and lower error bars, giving $E$($B$-$V$)$_\mathrm{T}$=0.07$^{+0.07}_{-0.04}$ mag. Further constraints on the extinction from the SN itself and comparisons to other SNe are discussed in Sect.~\ref{s_extinction_rev}. To calculate the extinction as a function of wavelength we have used the reddening law of \citet{Car89} and R$_V$=3.1. For broad-band photometry the extinction was calculated at the mean energy wavelength of the filters. In this paper we will consistently use the definitions from \citet[hereafter \citetalias{Bes12}]{Bes12} for the mean energy wavelength and other photometric quantities. \section{Observations} \label{s_obs} \subsection{Software} \label{s_software} Two different software packages have been used for the 2-D reductions, measurements and calibrations of the data: the {\sc iraf} based {\sc quba} pipeline \citep[hereafter \citetalias{Val11}]{Val11} and another {\sc iraf} based package developed during this work, which we will refer to as the {\sc sne} pipeline. The latter package has been developed with the particular aim to provide the high level of automation needed for large sets of data. \subsection{Imaging} \label{s_obs_image} An extensive campaign of optical and NIR imaging was initiated for SN 2011dh shortly after discovery using a multitude of different instruments. Data have been obtained with the Liverpool Telescope (LT), the Nordic Optical Telescope (NOT), the Telescopio Nazionale Galileo (TNG), the Telescopio Carlos Sanchez (TCS), the Calar Alto 3.5m and 2.2m telescopes, the Faulkes Telescope North (FTN), the Asiago 67/92cm Schmidt and 1.82m Copernico telescopes, the William Herschel Telescope (WHT), the Large Binocular Telescope (LBT) and the Telescopi Joan Oro (TJO). Amateur observations obtained at the Cantabria and Montcabrer observatories have also been included. The major contributors were the LT, the NOT, the TCS and the TNG. The dataset includes 85 epochs of optical imaging and 23 epochs of NIR imaging for the first 100 days and has been obtained thanks to a broad collaboration of European observers. \subsubsection{Reductions and calibration} \label{s_obs_image_red_cal} The optical raw data were reduced with the {\sc quba} pipeline, except for the LT data, for which the automatic telescope pipeline reductions have been used. The NIR raw data were reduced with the {\sc sne} pipeline, except for the UKIRT data, for which the reductions provided by CASU have been used. In addition to the standard procedures, the pipeline has support for second pass sky subtraction using an object mask, correction for field distortion and unsharp masking.
Correction for field distortion is necessary to allow co-addition of images with large dithering shifts and has been applied to the TNG data. Unsharp masking removes large scale structures (e.g. the host galaxy) in the images to facilitate the construction of a master sky in the case of large scale structure overlap. Given the (usually) small fields of view and the large size of the host galaxy, this technique has been applied to all data where separate sky frames were not obtained. Photometry was performed with the {\sc sne} pipeline. We have used aperture photometry on the reference stars as well as the SN, using a relatively small aperture (1.5$-$2.0 times the FWHM). A mild ($>$0.1 mag error) rejection of the reference stars as well as a mild (3 $\sigma$) rejection of the calculated zero points were also used. Both measurement and calibration errors were propagated using standard formulae. To ensure that the photometry is free from background contamination, we have, as a test, template-subtracted the NOT and LT data sets using a {\sc hotpants}\footnote{http://www.astro.washington.edu/users/becker/hotpants.html} based tool provided by the {\sc sne} pipeline and late-time ($\sim$200 days) SN subtracted images. The contamination was negligible in all bands, which is not surprising as the SN is still bright compared to the background at $\sim$100 days. The optical and NIR photometry was calibrated to the Johnson-Cousins (JC), Sloan Digital Sky Survey (SDSS) and 2 Micron All Sky Survey (2MASS) systems using reference stars in the SN field, in turn calibrated using standard fields. The calibration procedure is described in detail in Appendix~\ref{a_phot_cal}, where we also discuss the related uncertainties. The photometry was transformed to the standard systems using S-corrections \citep{Str02}, except for the JC $U$ and SDSS $u$ bands, which were transformed using linear colour-terms. We find the calibration to be accurate to within five percent in all bands, except for the early (0-40 days) NOT $U$ band observations, which show a systematic offset of $\sim$20 percent, possibly due to the lack of S-corrections in this band. Comparisons to S-corrected SWIFT JC photometry as well as the photometry published in \citetalias{Arc11}, \citetalias{Vin12}, \citetalias{Tsv12}, \citetalias{Mar13}, \citetalias{Dyk13b} and \citetalias{Sah13} support this conclusion, although some datasets show differences in the 15-30 percent range in some bands. Note that we have used JC-like $UBVRI$ filters and SDSS-like $gz$ filters at the NOT, whereas we have used JC-like $BV$ filters and SDSS-like $ugriz$ filters at the LT and FTN. The JC-like $URI$ and SDSS-like $uri$ photometry were then tied to both the JC and SDSS systems to produce full sets of JC and SDSS photometry. \subsubsection{Space Telescope Observations} \label{s_obs_image_space} We have also performed photometry on the Spitzer 3.6 and 4.5 $\mu$m imaging\footnote{Obtained through the DDT program by G. Helou.} and the SWIFT optical and UV imaging. For the Spitzer imaging we performed aperture photometry using the {\sc sne} pipeline and the zero points and standard aperture provided in the IRAC Instrument Handbook to calculate magnitudes in the natural (energy flux based) Vega system of IRAC. The Spitzer images were template subtracted using a {\sc hotpants} based tool provided by the {\sc sne} pipeline and templates constructed from archive images. Comparing with photometry on the original images, the background contamination was less than five percent in all bands.
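For illustration, the measurement step described above reduces to summing the counts within a circular aperture and applying a zero point. The following minimal sketch uses the community {\sc photutils} package (the actual measurements were made with the {\sc sne} pipeline; the function name and the zero point are placeholders, and background subtraction and aperture corrections are omitted):

\begin{verbatim}
import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

def aperture_magnitude(image, xy, radius, zero_point):
    # Sum the counts inside a circular aperture centered on the
    # source and convert to a magnitude, m = ZP - 2.5 log10(flux).
    aperture = CircularAperture(xy, r=radius)
    flux = float(aperture_photometry(image, aperture)["aperture_sum"][0])
    return zero_point - 2.5 * np.log10(flux)
\end{verbatim}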
For the SWIFT imaging we performed aperture photometry using the {\sc uvotsource} tool provided by the {\sc heasoft} package and the standard aperture of 5 arcsec to calculate magnitudes in the natural (photon count based) Vega system of UVOT. Observations were combined using the {\sc uvotimsum} tool provided by the {\sc heasoft} package and, after day 5, subsequently combined in sequences of three to increase the signal-to-noise ratio (SNR). The SWIFT UV images were template subtracted using a {\sc hotpants} based tool provided by the {\sc sne} pipeline and templates constructed from archive images ($UVW1$) and SN subtracted late-time ($\sim$80 days) images ($UVM2$ and $UVW2$). Comparing with photometry on the original images, the background contamination was negligible in the $UVW1$ band whereas the $UVM2$ and $UVW2$ bands were severely affected, differing by more than a magnitude at late times. Our SWIFT photometry is in good agreement with that published in \citetalias{Mar13} except in the $UVM2$ and $UVW2$ bands after $\sim$10 days, which is expected since \citetalias{Mar13} did not perform template subtraction. Our SWIFT photometry is also in good agreement with that published in \citetalias{Arc11} (given in the natural AB system of UVOT) except in the $UVM2$ and $UVW2$ bands after $\sim$30 days, the differences probably arising from differences in the template subtraction. \subsubsection{Results} \label{s_obs_image_results} The S-corrected optical (including SWIFT JC) and NIR magnitudes and their corresponding errors are listed in Tables \ref{t_jc}, \ref{t_jc_swift}, \ref{t_sloan} and \ref{t_nir} and the JC $UBVRI$, SDSS $gz$ and 2MASS $JHK$ magnitudes are shown in Fig. \ref{f_uv_opt_nir_mir}. The Spitzer 3.6 and 4.5 $\mu$m magnitudes and their corresponding errors are listed in Table~\ref{t_mir} and shown in Fig.~\ref{f_uv_opt_nir_mir}. The SWIFT UV magnitudes and their corresponding errors are listed in Table~\ref{t_uv} and the SWIFT $UVM2$ magnitudes are shown in Fig.~\ref{f_uv_opt_nir_mir}. As discussed in Appendix~\ref{a_phot_cal}, because of the red tail of the filters and the strong blueward slope of the SN spectrum, the $UVW1$ and $UVW2$ lightcurves do not reflect the evolution of the spectrum at their mean energy wavelengths. These bands will therefore be excluded from any subsequent discussion and from the calculation of the bolometric lightcurve in Sect.~\ref{s_bol_lightcurve}. Figure~\ref{f_uv_opt_nir_mir} also shows cubic spline fits using 3-5 point knot separation, error weighting and a 5 percent error floor. The standard deviation around the fitted splines is less than 5 percent and mostly less than a few percent, except for the SWIFT $UVM2$ band for which the standard deviation is between 5 and 10 percent on the tail. All calculations in Sect. \ref{s_analysis}, including the bolometric lightcurve, are based on these spline fits. In these calculations the errors have been estimated as the standard deviation around the fitted splines and then propagated. \begin{figure}[tb] \includegraphics[width=0.48\textwidth,angle=0]{figs/sn2011dh-UV-opt-NIR-MIR-app.pdf} \caption{Photometric evolution of SN 2011dh in the UV, optical, NIR and MIR. For clarity each band has been shifted in magnitude. Each lightcurve has been annotated with the name of the band and the shift applied.
We also show the S-corrected SWIFT JC photometry (crosses) and cubic spline fits (solid lines).} \label{f_uv_opt_nir_mir} \end{figure} \subsection{Spectroscopy} \label{s_obs_spec} An extensive campaign of optical and NIR spectroscopic observations was initiated for SN 2011dh shortly after discovery with data obtained from a multitude of telescopes. Data have been obtained with the NOT, the TNG, the WHT, the Calar Alto 2.2m telescope, the Asiago 1.82m Copernico and 1.22m Galileo telescopes and the LBT. The major contributors were the NOT and the TNG. Details of all spectroscopic observations, the telescope and instrument used, epoch and instrument characteristics are given in Table~\ref{t_speclog}. The dataset includes 55 optical spectra obtained at 26 epochs and 18 NIR spectra obtained at 10 epochs for the first 100 days. \subsubsection{Reductions and calibration} \label{s_obs_spec_red_cal} The optical and NIR raw data were reduced using the {\sc quba} and {\sc sne} pipelines respectively. Flats for NOT Grisms 4 and 5 were spatially shifted, typically by one pixel, to minimize the fringing in the reduced data. The flux of the optical and NIR spectra was extracted using the {\sc quba} and {\sc sne} pipelines respectively. A large aperture and error weighting were used to reduce the effect of the wavelength-dependent size of the PSF in the spatial direction. No corrections were made for this effect in the dispersion direction. The slit was always (initially) vertically aligned so the position of the PSF in the dispersion direction should not vary much. The optical spectra were flux calibrated using the {\sc quba} pipeline. A sensitivity function was derived using a spectroscopic standard star and corrected for the relative atmospheric extinction using tabulated values for each site. Telluric absorption was removed using a normalized absorption profile derived from the standard star. The significant second order contamination present in NOT Grism 4 spectra was corrected for using the method presented in \citet{Sta07}. The optical spectra were wavelength calibrated using arc lamp spectra and cross-correlated and shifted to match sky-lines. The NIR spectra were flux calibrated and the telluric absorption removed with the {\sc sne} pipeline. A sensitivity function was derived using solar or Vega analogue standard stars selected from the Hipparcos catalogue and spectra of the Sun and Vega. The interstellar extinction of the standards has been estimated from Hipparcos $BV$ photometry and corrected for when necessary. The NIR spectra were wavelength calibrated using arc lamp spectra and cross-correlated and shifted to match sky-lines. Finally, the absolute flux scale of all spectra has been calibrated against interpolated photometry using a least-squares fit to all bands for which the mean energy wavelength is at least half an equivalent width within the spectral range. \subsubsection{Results} \label{s_obs_spec_results} All reduced, extracted and calibrated spectra will be made available for download from the Weizmann Interactive Supernova data REPository\footnote{http://www.weizmann.ac.il/astrophysics/wiserep/} (WISeREP) \citep{Yar12}. Figure~\ref{f_spec_evo_opt_NIR_trad} shows the sequence of observed spectra where those obtained on the same night using the same telescope and instrument have been combined. For clarity, and as is motivated by the frequent sampling of spectra, all subsequent figures in this and the following sections are based on time-interpolations of the spectral sequence.
Interpolated spectra separated by more than half the sampling time from observed spectra are displayed in shaded colour and should be taken with some care, whereas interpolated spectra displayed in full colour are usually more or less indistinguishable from observed spectra. To further visualize the evolution, the spectra have been aligned to a time axis at the right border of the panels. The interpolations were done as follows. First all spectra were re-sampled to a common wavelength dispersion. Then, for each interpolation epoch the spectra closest in time before and after the epoch were identified, resulting in one or more wavelength ranges and associated pre- and post-epoch spectra. For each wavelength range the pre- and post-epoch spectra were then linearly interpolated and finally scaled and smoothly averaged using a 500 \AA~overlap range. Spectra interpolated using this method were also used in the calculations of the bolometric lightcurve (Sect.~\ref{s_bol_lightcurve}) and S-corrections (Appendix~\ref{a_phot_cal}). Figure~\ref{f_spec_evo_opt_NIR} shows the interpolated optical and NIR spectral evolution of SN 2011dh for days 5$-$100 with a 5-day sampling. All spectra in this and subsequent figures have been corrected for redshift and interstellar extinction. \begin{figure*}[tb] \includegraphics[width=1.0\textwidth,angle=0]{figs/sn2011dh-spec-evo-opt-NIR.pdf} \caption{Optical and NIR (interpolated) spectral evolution for SN 2011dh for days 5$-$100 with a 5-day sampling. Telluric absorption bands are marked with a $\oplus$ symbol in the optical and shown as grey regions in the NIR.} \label{f_spec_evo_opt_NIR} \end{figure*} \begin{figure*}[p] \includegraphics[width=1.0\textwidth,angle=0]{figs/sn2011dh-spec-evo-opt-NIR-trad.pdf} \caption{Sequence of the observed spectra for SN 2011dh. Spectra obtained on the same night using the same telescope and instrument have been combined and each spectrum has been labelled with the phase of the SN. Telluric absorption bands are marked with a $\oplus$ symbol in the optical and shown as grey regions in the NIR.} \label{f_spec_evo_opt_NIR_trad} \end{figure*} \section{Analysis} \label{s_analysis} \subsection{Photometric evolution} \label{s_analysis_phot} Absolute magnitudes were calculated as $M_i=m_i-\mu-A_i$, where $m_{i}$ is the apparent magnitude in band $i$, $\mu$ the distance modulus and $A_i$ the interstellar absorption at the mean energy wavelength of band $i$. The systematic errors stemming from this approximation (as determined from synthetic photometry) are less than a few percent and can be safely ignored. The systematic errors stemming from the uncertainty in distance (Sect.~\ref{s_distance}) and extinction (Sect.~\ref{s_extinction}), on the other hand, are at the 30 percent level and this should be kept in mind in the subsequent discussions. All bands except the SWIFT $UVM2$ band show a similar evolution (the Spitzer MIR imaging did not start until day 20) with a strong initial increase from day 3 to the peak followed by a decrease down to a tail with a roughly linear decline rate. The maximum occurs at increasingly later times for redder bands. The drop from the maximum down to the tail is more pronounced for bluer bands and is not seen for bands redder than $z$. Both these trends are reflections of the strong decrease in temperature seen between 10 and 40 days (Fig.~\ref{f_bb_T_evo}). The tail decline rates are highest for the reddest bands and almost zero for the bluest bands.
It is interesting to note that the Spitzer 4.5 $\mu$m band breaks this pattern and shows a markedly slower decline than the 3.6 $\mu$m and the NIR bands. Warm dust or CO fundamental band emission are two possible explanations (Sect.~\ref{s_45_micron_excess}). The times and absolute magnitudes of the maximum as well as the tail decline rates at 60 days are listed in Table~\ref{t_lc_char} as measured from cubic spline fits (Fig.~\ref{f_uv_opt_nir_mir}). Early time data for the first three days have been published in \citetalias{Arc11} and \citetalias{Tsv12} and show a strong decline in the $g$, $V$ and $R$ bands. This initial decline phase ends at about the same time as our observations begin. \begin{table}[tb] \caption{Times and absolute magnitudes of the maximum and tail decline rates at 60 days as measured from cubic spline fits.} \begin{center} \include{sn2011dh-lc-char-table} \end{center} \label{t_lc_char} \end{table} \subsection{Colour evolution and blackbody fits} \label{s_analysis_colour} Figure~\ref{f_colour_evo} shows the intrinsic $U$-$V$, $B$-$V$, $V$-$I$ and $V$-$K$ colour evolution of SN 2011dh given the adopted extinction. Initially we see a quite strong blueward trend in the $V$-$I$ and $V$-$K$ colours reaching a minimum at $\sim$10 days which is not reflected in the $U$-$V$ and $B$-$V$ colours. Subsequently all colours redden, reaching a maximum at $\sim$40 days for the $U$-$V$ and $B$-$V$ colours and $\sim$50 days for the $V$-$I$ and $V$-$K$ colours, followed by a slow blueward trend for all colours. Figures \ref{f_bb_T_evo} and \ref{f_bb_R_evo} show the evolution of blackbody temperature and radius as inferred from fits to the $V$, $I$, $J$, $H$ and $K$ bands given the adopted extinction. As discussed in Sect. \ref{s_analysis_spec}, the flux in bands blueward of $V$ is strongly reduced by the line opacity in this region, in particular between 10 and 30 days. Therefore we have excluded these bands from the fits, whereas the $R$ band has been excluded to avoid influence from H$\alpha$ emission at early times. Note that the temperature and radius obtained correspond to the surface of thermalization rather than the photosphere (total optical depth $\sim$1) and lose physical meaning when the ejecta become optically thin in the continuum. The evolution of the $V$-$I$ and $V$-$K$ colours is reflected in the evolution of the blackbody temperature, initially increasing from $\sim$7000 K at 3 days to a maximum of $\sim$9000 K at 8 days, subsequently decreasing to a minimum of $\sim$5000 K at $\sim$50 days, followed by a slow increase. The blackbody radius shows an almost linear increase from $\sim$0.4 $\times$10$^{15}$ cm to a maximum of $\sim$1.2 $\times$10$^{15}$ cm and a subsequent almost linear decrease. In Fig.~\ref{f_bb_R_evo} we also show the radius corresponding to the P-Cygni minimum of the \ion{Fe}{ii} 5169 \AA~line. Interpreting this (Sect.~\ref{s_analysis_spec}) as the photospheric radius and the blackbody radius as the thermalization radius, we see a fairly consistent evolution between 8 and 40 days corresponding to a dilution factor (ratio of photospheric and blackbody radius) increasing from $\sim$0.7 to $\sim$0.8 as the temperature decreases. The figure also suggests that such an interpretation breaks down for later epochs. Dilution factors for Type IIP SNe have been discussed extensively in the literature because of their importance for the EPM method \citep{Des05} but are not well known for Type IIb SNe.
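Before turning to the dilution factors, we note that blackbody fits of the kind described above are straightforward to reproduce. The following is a minimal Python sketch, assuming dereddened flux densities at the mean energy wavelengths of the $V$, $I$, $J$, $H$ and $K$ bands and, for definiteness, a distance of 7.8 Mpc; it is not the code actually used.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

h, c, kB = 6.626e-27, 2.998e10, 1.381e-16   # cgs constants
d = 7.8e6 * 3.086e18                        # assumed distance [cm]

def bb_flam(lam, T, R):
    # Blackbody flux density f_lambda [erg/s/cm2/A] at wavelength
    # lam [A] for temperature T [K] and radius R [cm] at distance d
    lcm = lam * 1e-8
    B = 2*h*c**2 / lcm**5 / (np.exp(h*c / (lcm*kB*T)) - 1.0)
    return np.pi * B * (R / d)**2 * 1e-8    # diluted, per Angstrom

def fit_bb(lam, flam, err):
    # Fit (T, R) to the measured VIJHK flux densities
    popt, _ = curve_fit(bb_flam, lam, flam, p0=(7000.0, 1e15),
                        sigma=err)
    return popt
\end{verbatim}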
In Fig.~\ref{f_bb_T_xi} we show dilution factors as a function of colour temperature as inferred from blackbody fits compared to the $\xi_{BV}$, $\xi_{BVI}$, $\xi_{VI}$ and $\xi_{JHK}$ dilution factors determined for Type IIP SNe using NLTE modelling by \citet{Des05}. The $VI$ and $JHK$ dilution factors are $\sim$10 percent higher and $\sim$10 percent lower on average as compared to $\xi_{VI}$ and $\xi_{JHK}$ respectively. If free-free absorption is dominating the absorptive opacity in the NIR but not in the optical, this is naively consistent with the lower charge density for helium core composition as compared to the hydrogen envelope composition of Type IIP SNe. The $BV$ and $BVI$ dilution factors are $\sim$25 and $\sim$40 percent higher on average as compared to $\xi_{BV}$ and $\xi_{BVI}$ respectively. The main reason for this is likely a stronger flux deficit (caused by a higher line opacity) in the $B$ band as compared to Type IIP SNe for a given thermalization temperature. \citetalias{Vin12} argue for higher values of the dilution factors as compared to Type IIP SNe because of the lower charge density and, as they point out, \citet{Bar95} have used NLTE modelling of SN 1993J to determine a $BV$ dilution factor $\sim$60 percent higher than for Type IIP SNe. This is similar to our (observational) result, although in our interpretation this is rather due to a stronger flux deficit in the $B$ band. In the end \citetalias{Vin12} chose a value of 1.0 for their $BVRI$ dilution factor, which is $\sim$10 percent higher than our average value, the difference being explained by the $\sim$10 percent longer distance they derive. Dilution factors can never be observationally determined with better accuracy than the distance is known, and NLTE modelling of Type IIb SNe is probably needed to accurately determine them. We find dilution factors involving bands redwards of $B$, in particular the $VI$ dilution factor, most promising for future use in the EPM method applied to Type IIb SNe. \begin{figure} \includegraphics[width=0.48\textwidth,angle=0]{figs/sn2011dh-colour-evo.pdf} \caption{$U$-$V$, $B$-$V$, $V$-$I$ and $V$-$K$ intrinsic colour evolution for SN 2011dh for the adopted extinction (black dots). The upper and lower error bars for the systematic error arising from extinction (black dashed lines) are also shown.} \label{f_colour_evo} \end{figure} \begin{figure} \includegraphics[width=0.48\textwidth,angle=0]{figs/sn2011dh-bb-T-evo.pdf} \caption{Evolution of the blackbody temperature for SN 2011dh as inferred from fits to the $V$, $I$, $J$, $H$ and $K$ bands for the adopted extinction (black dots). The upper and lower error bars for the systematic error arising from extinction (black dashed lines) and two higher extinction scenarios, $E$($B$-$V$)$_\mathrm{T}$=0.2 mag (red crosses) and $E$($B$-$V$)$_\mathrm{T}$=0.3 mag (blue pluses), discussed in Sect. \ref{s_extinction_rev}, are also shown.} \label{f_bb_T_evo} \end{figure} \begin{figure} \includegraphics[width=0.48\textwidth,angle=0]{figs/sn2011dh-bb-R-evo.pdf} \caption{Evolution of the blackbody radius for SN 2011dh as inferred from fits to the $V$, $I$, $J$, $H$ and $K$ bands for the adopted extinction.
The upper and lower error bars for the systematic error arising from extinction and distance (black dashed lines) and the radius corresponding to the P-Cygni minimum of the \ion{Fe}{ii} 5169 \AA~line (black dotted line) are also shown.} \label{f_bb_R_evo} \end{figure} \begin{figure} \includegraphics[width=0.48\textwidth,angle=0]{figs/sn2011dh-bb-T-xi.pdf} \caption{Dilution factors as a function of colour temperature as inferred from blackbody fits (dots) compared to the dilution factors for Type IIP SNe determined by \citet{Des05} (solid lines) for the $B$ and $V$ (upper left panel), the $B$, $V$ and $I$ (upper right panel), the $V$ and $I$ (lower left panel) and the $J$, $H$ and $K$ (lower right panel) bands. In all panels we also show the upper and lower error bars for the systematic error arising from the extinction and distance.} \label{f_bb_T_xi} \end{figure} \subsection{Bolometric evolution} \label{s_bol_lightcurve} To calculate the pseudo-bolometric lightcurve of SN 2011dh we have used a combination of two different methods. One, which we will refer to as the spectroscopic method, for wavelength regions with spectral information and one, which we will refer to as the photometric method, for wavelength regions without. The prefix pseudo here refers to the fact that a true bolometric lightcurve should be integrated over all wavelengths. We do not assume anything about the flux in wavelength regions not covered by data but discuss this issue at the end of the section. When using the spectroscopic method we divide the wavelength region into sub-regions corresponding to each photometric band. For each epoch of photometry in each of the sub-regions a bolometric correction $BC_{i}=M_{\mathrm{bol},i}^{\mathrm{syn}}-M_{i}^{\mathrm{syn}}$ is determined. Here $M_{i}^{\mathrm{syn}}$ and $M_{\mathrm{bol},i}^{\mathrm{syn}}$ are the absolute and bolometric magnitudes respectively, as determined from synthetic photometry and integration of the sub-region flux per wavelength using observed spectra. The bolometric magnitude in the region $M_{\mathrm{bol}}=-2.5 \log \sum 10^{-0.4(M_{i}+BC_{i})}$ is then calculated as the sum over all sub-regions, where $M_{i}$ is the absolute magnitude as determined from observed photometry. Spectra are linearly interpolated to match each epoch of photometry as described in Sect.~\ref{s_obs_spec_results}. This method makes use of both spectral and photometric information and is well motivated as long as the spectral sampling is good. When using the photometric method we log-linearly interpolate the flux per wavelength between the mean energy wavelengths of the filters. This is done under the constraint that the synthetic absolute magnitudes as determined from the interpolated SED equal the absolute magnitudes as determined from observed photometry. The solution is found by a simple iterative scheme. The total flux in the region is then calculated by integration of the interpolated flux per wavelength. The absolute magnitudes in each band were calculated using cubic spline fits as described in Sect.~\ref{s_obs_image_results}, which is justified by the frequent sampling in all bands. When necessary, as for the SWIFT UV and Spitzer MIR magnitudes, extrapolations were done assuming a constant colour. The filter response functions and zeropoints used to represent the different photometric systems are discussed in Appendix~\ref{a_phot_cal}.
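As an illustration of the photometric method, the following minimal Python sketch integrates an SED that is log-linear (a power law) in flux per wavelength between the mean energy wavelengths of adjacent bands. For clarity it anchors the SED directly at the mean energy wavelengths rather than iterating to match synthetic photometry, so it is a simplification of the actual scheme.
\begin{verbatim}
import numpy as np

def pseudo_bol_flux(lam, flam):
    # lam:  mean energy wavelengths [A], increasing
    # flam: flux densities f_lambda [erg/s/cm2/A] at those points
    total = 0.0
    for i in range(len(lam) - 1):
        l1, l2 = lam[i], lam[i + 1]
        f1, f2 = flam[i], flam[i + 1]
        a = np.log(f2 / f1) / np.log(l2 / l1)   # local slope
        if abs(a + 1.0) > 1e-6:
            # integral of f1*(l/l1)^a from l1 to l2
            total += f1 * l1 * ((l2 / l1)**(a + 1.0) - 1.0) / (a + 1.0)
        else:
            total += f1 * l1 * np.log(l2 / l1)
    return total                                # [erg/s/cm2]
\end{verbatim}
The pseudo-bolometric luminosity in the region then follows as $4\pi d^{2}$ times this flux.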
For SN 2011dh we have optical and NIR spectra with good sampling between 3 and 100 days and we have used the spectroscopic method in the $U$ to $K$ region and the photometric method in the UV and MIR regions. The pseudo-bolometric UV to MIR (1900-50000 \AA) lightcurve of SN 2011dh is shown in Fig.~\ref{f_UV_MIR_bol} and listed in Table~\ref{t_UV_MIR_bol} for reference. These data together with the photospheric velocity as estimated in Sect.~\ref{s_analysis_spec} provide the observational basis for the hydrodynamical modelling of SN 2011dh presented in \citetalias{Ber12}. For comparison we also show the pseudo-bolometric lightcurve calculated using the photometric method only. The difference is small but, as expected, increases slowly as the spectrum evolves to become more line dominated. The bolometric lightcurve shows the characteristics common to Type I and Type IIb SNe with a rise to peak luminosity followed by a decline phase and a subsequent tail phase with a roughly linear decline rate (Sect.~\ref{s_physics_sn_IIb}). The maximum occurs at 20.9 days at a pseudo-bolometric luminosity of 16.67$\pm{0.05}^{+6.59}_{-3.67}$$\times$10$^{41}$ erg s$^{-1}$, where the second error bars give the systematic error arising from the distance and extinction. The tail decline rates are 0.033, 0.021, 0.022 and 0.020 mag day$^{-1}$ at 40, 60, 80 and 100 days respectively. \begin{figure}[tb] \includegraphics[width=0.48\textwidth,angle=0]{figs/sn2011dh-UV-MIR-bol.pdf} \caption{Pseudo-bolometric UV to MIR lightcurve for SN 2011dh calculated with the spectroscopic (black dots) and photometric (red dots) method. The upper and lower error bars for the systematic error arising from extinction and distance (black dashed lines) are also shown.} \label{f_UV_MIR_bol} \end{figure} Figure~\ref{f_bol_frac} shows the fractional luminosity in the UV (1900-3300 \AA), optical (3300-10000 \AA), NIR (10000-24000 \AA) and MIR (24000-50000 \AA) regions respectively. The optical flux dominates, varying between $\sim$75 and $\sim$60 percent, whereas the NIR flux varies between $\sim$15 and $\sim$30 percent. The UV flux initially amounts to $\sim$10 percent, decreasing to $\sim$1 percent at the beginning of the tail and onwards. The MIR flux initially amounts to $\sim$1 percent, increasing to $\sim$5 percent at the beginning of the tail and onwards. The evolution of the fractional luminosities mainly reflects the evolution of the temperature (Fig. \ref{f_bb_T_evo}), although we expect the UV to be quite sensitive to the evolution of the line opacity (Sect. \ref{s_analysis_spec}). \begin{figure}[tb] \includegraphics[width=0.48\textwidth,angle=0]{figs/sn2011dh-bol-fraction.pdf} \caption{Fractional UV (black dots), optical (blue dots), NIR (green dots) and MIR (red dots) luminosity for SN 2011dh. The upper and lower error bars for the systematic error arising from extinction (dashed lines) and the fractional Rayleigh-Jeans luminosity redwards of 4.5 $\mu$m (red solid line) are also shown.} \label{f_bol_frac} \end{figure} Figure~\ref{f_sed_evo} shows the evolution of the SED as calculated with the photometric method, overplotted with the blackbody fits discussed in Sect.~\ref{s_analysis_colour} as well as the observed spectra interpolated as described in Sect.~\ref{s_obs_spec_results}. The strong blueward slope in the UV region (except for the first few days) suggests that the flux bluewards of the $UVM2$ band is negligible. The flux redwards of 4.5 $\mu$m could be approximated with a Rayleigh-Jeans tail or a model spectrum. As shown in Fig.
\ref{f_bol_frac} the fractional Rayleigh-Jeans luminosity redwards of 4.5 $\mu$m is at the percent level. Note again the excess at 4.5 $\mu$m that develops between 50 and 100 days. Whereas the other bands redward of $V$ are well approximated by the blackbody fits, the flux at 4.5 $\mu$m is a factor of $\sim$5 in excess at 100 days. Note also the strong reduction of the flux as compared to the fitted blackbodies in bands blueward of $V$ between 10 and 30 days (Sect.~\ref{s_analysis_spec}). \begin{figure}[tb] \includegraphics[width=0.48\textwidth,angle=0]{figs/sn2011dh-sed-evo.pdf} \caption{The evolution of the SED as calculated with the photometric method (black dots and dashed lines) overplotted with the blackbody fits discussed in Sect.~\ref{s_analysis_colour} (black dotted lines) as well as the observed spectra interpolated as described in Sect. \ref{s_obs_spec_results} (red solid lines).} \label{f_sed_evo} \end{figure} \subsection{Spectroscopic evolution} \label{s_analysis_spec} We have used a SN atmosphere code implementing the method presented by \citet{Maz93} and \citet{Abb85} and the \citetalias{Ber12} He4R270 ejecta model, with all elements except hydrogen and helium replaced with solar abundances, to aid in the identification of lines and in some qualitative analysis of the spectra. The factor $\xi$ in eq. 15 of \citet{Abb85} has been set to one, which might lead to overestimates of the line absorption in the optically thick limit. The Monte-Carlo based method treats line and electron scattering in the nebular approximation, where the ionization fractions and level populations of bound states are determined by the radiation field approximated as a diluted blackbody parametrized by a radiation temperature. Line emission will be underestimated as the contribution from recombination is not included, whereas line absorption is better reproduced. Following \citet{Maz93}, for each epoch we have determined the temperature for the blackbody emitting surface from fits to the $V$, $I$, $J$, $H$ and $K$ bands and iterated the radius until the observed luminosity was achieved. Note that, except for the temperature peak between $\sim$10 and $\sim$20 days, the \ion{He}{i} lines cannot be reproduced by the model as non-thermal excitation from the ground state is needed to populate the higher levels \citep{Luc91}. For a quantitative analysis an NLTE treatment solving the rate equations is necessary, in particular with respect to non-thermal excitations and ionizations. Figure~\ref{f_spec_model_comp} shows a comparison between model and observed spectra at 15 days, where we also have marked the rest wavelengths of lines identified by their optical depth being $\gtrsim$1. The atmosphere model is appropriate at early times when the approximation of a blackbody emitting surface is justified and we do not use it for phases later than $\sim$30 days. To aid in line identifications at later times we use preliminary results from NLTE spectral modelling of the SN spectrum at 100 days, to be presented in Jerkstrand et al. 2013 (in preparation). The details of this code have been presented in \citet{Jer11,Jer12}. Both the atmosphere and NLTE codes use the same atomic data, as described in these papers. The lines identified by the atmosphere modelling, the NLTE modelling or both are discussed below and have been marked in Fig. \ref{f_spec_evo_opt_NIR}. \begin{figure}[tb] \includegraphics[width=0.48\textwidth,angle=0]{figs/sn2011dh-spec-model-comp.pdf} \caption{Modelled and observed optical spectrum at 15 days.
Lines identified by their optical depth being $\gtrsim$1 have been marked at their rest wavelengths. We also show model spectra for two higher extinction scenarios, $E$($B$$-$$V$)$_\mathrm{T}$=0.2 mag and $E$($B$$-$$V$)$_\mathrm{T}$=0.3 mag, discussed in Sect. \ref{s_extinction_rev}.} \label{f_spec_model_comp} \end{figure} The transition of the spectra from hydrogen (Type II) to helium (Type Ib) dominated starts at $\sim$10 days with the appearance of the \ion{He}{i} 5876 and 10830 \AA~lines and ends at $\sim$80 days with the disappearance of the H$\alpha$ line. This transition is likely determined by the photosphere reaching the helium core, the ejecta gradually becoming optically thin to the $\gamma$-rays and eventually to the hydrogen lines. At 3 days the hydrogen signature in the spectrum is strong and we identify the Balmer series $\alpha$$-$$\gamma$, the Paschen series $\alpha$$-$$\gamma$ as well as Brackett $\gamma$ using the atmosphere modelling. H$\alpha$ shows a strong P-Cygni profile, extending in absorption to at least $\sim$25000 km s$^{-1}$, which gradually disappears in emission but stays strong in absorption until $\sim$50 days. Most other hydrogen lines fade rather quickly and have disappeared at $\sim$30 days. Weak absorption in H$\alpha$ and H$\beta$ remains until $\sim$80 days. Figure~\ref{f_spec_evo_H} shows closeups of the evolution centred on the hydrogen Balmer lines. Note that the absorption minimum for H$\alpha$ as well as H$\beta$ is never seen below $\sim$11000 km s$^{-1}$ but approaches this value as the lines get weaker (see also Fig.~\ref{f_vel_evo_p_cygni}). This suggests that a transition in the ejecta from helium core to hydrogen rich envelope material occurs at this velocity. Atmosphere modelling of the hydrogen lines using the \citetalias{Ber12} He4R270 ejecta model, with all elements except hydrogen and helium replaced with solar abundances, reproduces the observed evolution of the absorption minima well, and the minimum velocity coincides with the model interface between the helium core and hydrogen rich envelope at $\sim$11500 km s$^{-1}$. The good agreement with the observed minimum velocity gives further support to the \citetalias{Ber12} ejecta model. \citetalias{Mar13} estimated hydrogen to be absent below $\sim$12000 km s$^{-1}$ by fitting a {\sc synow} \citep{Bra03} model spectrum to the observed spectrum at 11 days. We find the behaviour of the hydrogen lines in the weak limit to provide a better constraint and conclude that the interface between the helium core and hydrogen rich envelope is likely to be located at $\sim$11000~km~s$^{-1}$. By varying the fraction of hydrogen in the envelope we find a hydrogen mass of 0.01-0.04 M$_{\odot}$, in agreement with the 0.02 M$_{\odot}$ in the original model, to be consistent with the observed evolution of the hydrogen lines. \citetalias{Arc11} used spectral modelling similar to the one in this paper, but with an NLTE treatment of hydrogen and helium, to estimate the hydrogen mass to be 0.024 M$_{\odot}$. \begin{figure}[tb] \includegraphics[width=0.48\textwidth,angle=0]{figs/sn2011dh-spec-evo-H.pdf} \caption{Closeup of the (interpolated) spectral evolution centred on the H$\alpha$ (left panel), H$\beta$ (middle panel) and H$\gamma$ (right panel) lines.
All panels in this and the following figure show the minimum velocity for the H$\alpha$ absorption minimum (marked H$_{\mathrm{MIN}}$), interpreted as the interface between the helium core and hydrogen envelope.} \label{f_spec_evo_H} \end{figure} The \ion{He}{i} lines appear in the spectra between $\sim$10 (\ion{He}{i} 10830 and 5876 \AA) and $\sim$15 (\ion{He}{i} 6678, 7065 and 20581 \AA) days. Later on we see the 5016 and 17002 \AA~lines emerge as well. As mentioned, the atmosphere modelling does not reproduce the \ion{He}{i} lines well, but those identified here are present in the model spectrum with optical depths of 0.1$-$5 during the temperature peak between $\sim$10 and $\sim$20 days. Increasing the \ion{He}{i} excitation fraction to mimic the non-thermal excitation reproduces the \ion{He}{i} lines and their relative strengths reasonably well. At 100 days, all \ion{He}{i} lines, except \ion{He}{i} 17002 \AA, are present and identified by the NLTE modelling. Given the low ionization potential of \ion{Na}{i} and the high temperatures we find it unlikely that \ion{He}{i} 5876 is blended with \ion{Na}{i} 5890/5896 at early times. Using the atmosphere modelling we find a very low ion fraction of \ion{Na}{i} ($<$10$^{-7}$) and the optical depth for \ion{Na}{i} 5890/5896 to be negligible during the first 30 days. Using the NLTE modelling at 100 days we find the emission to arise primarily from \ion{Na}{i} 5890/5896 and the absorption to be a blend. \ion{He}{i} 10830 is likely to be blended with Paschen $\gamma$ at early times and \ion{He}{i} 5016 \AA~is likely to be blended with \ion{Fe}{i} 5018 \AA. Figure~\ref{f_spec_evo_He} shows a closeup of the evolution centred on the \ion{He}{i} lines. Helium absorption is mainly seen below the $\sim$11000 km s$^{-1}$ attributed to the interface between the helium core and the hydrogen rich envelope, although \ion{He}{i} 10830 \AA~absorption extends beyond this velocity and also shows a narrow dip close to it between $\sim$30 and $\sim$60 days. We may speculate that this dip is caused by a denser shell of material close to the interface, as was produced in explosion modelling of SN 1993J \citep[e.g.][]{Woo94}. Whereas the fading and disappearance of the hydrogen lines are driven by the decreasing density and temperature of the envelope, the appearance and growth of the helium lines are likely to be more complex. \citetalias{Mar13} suggest that the helium lines appear because the photosphere reaches the helium core. However, Fig.~\ref{f_vel_evo_p_cygni} (see below) suggests that the photosphere reaches the helium core at 5-7 days whereas the helium lines appear later, at lower velocities, close to the region where we expect the continuum photosphere to be located, and then move outwards in velocity until $\sim$40 days. This rather suggests that the appearance and subsequent evolution are driven by increasing non-thermal excitation due to the decreasing optical depth for the $\gamma$-rays. For the line optical depth at a given velocity (using the \citet{Sob57} approximation) we have $\tau \propto t^{-2} x_{l}$, where $x_{l}$ is the fraction of \ion{He}{i} in the lower state. As the temperature decreases after $\sim$10 days and the ion fraction of \ion{He}{i} is high according to the modelling, we would expect the line optical depth at a given velocity to decrease if non-thermal excitation were not important. Detailed modelling including a treatment of non-thermal excitation of the helium lines is needed to better understand the behaviour of the \ion{He}{i} lines.
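For reference, the scaling quoted above follows directly from the Sobolev optical depth in homologously expanding ejecta; this is a standard result, written out here for clarity:
\begin{equation*}
\tau_{\mathrm{Sob}}=\frac{\pi e^{2}}{m_{\mathrm{e}}c}\,f\lambda_{0}\,n_{l}\,t\ ,\qquad n_{l}\propto x_{l}\,\rho\propto x_{l}\,t^{-3}\ \Rightarrow\ \tau\propto t^{-2}x_{l}\ ,
\end{equation*}
where $f$ is the oscillator strength, $\lambda_{0}$ the rest wavelength and $n_{l}$ the number density of \ion{He}{i} in the lower state at the given velocity.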
\begin{figure}[tb] \includegraphics[width=0.48\textwidth,angle=0]{figs/sn2011dh-spec-evo-He.pdf} \caption{Closeup of the (interpolated) spectral evolution centred on the \ion{He}{i} 10830 \AA~(left panel), \ion{He}{i} 20581 \AA~(middle panel) and \ion{He}{i} 5876 \AA~(right panel) lines.} \label{f_spec_evo_He} \end{figure} Besides \ion{H}{i} and \ion{He}{i} we also identify lines from \ion{Ca}{ii}, \ion{Fe}{ii}, \ion{O}{i}, \ion{Mg}{i} and \ion{Na}{i} in the spectra. The \ion{Ca}{ii} 3934/3968 \AA~and 8498/8542/8662 \AA~lines are present throughout the evolution, showing strong P-Cygni profiles, and are identified by both the atmosphere and NLTE modelling, whereas the [\ion{Ca}{ii}] 7291/7323 \AA~line is identified by the NLTE modelling at 100 days. The \ion{O}{i} 5577, 7774, 9263, 11300, and 13164 \AA~lines are all identified by the NLTE modelling at 100 days. The atmosphere modelling does not reproduce the \ion{O}{i} lines at early times, but the \ion{O}{i} 7774 \AA~line seems to appear already at $\sim$25 days and the other lines between $\sim$30 and $\sim$50 days. The NLTE modelling also identifies the emerging [\ion{O}{i}] 6300/6364 lines at 100 days. The \ion{Mg}{i} 15040 \AA~line is identified by the NLTE modelling at 100 days and seems to emerge at $\sim$40 days. As mentioned above, we identify the \ion{Na}{i} 5890/5896 \AA~lines in emission and blended in absorption with the \ion{He}{i} 5876 \AA~line at 100 days using the NLTE modelling. In the region 4000$-$5500 \AA, we identify numerous \ion{Fe}{ii} lines using the atmosphere modelling, the most prominent being \ion{Fe}{ii} 4233, 4549, 4584, 4924, 5018, 5169 and 5317 \AA. These lines are present already at $\sim$5 days and most of them persist to at least 50 days. As mentioned in Sect. \ref{s_bol_lightcurve}, and as can be seen in Fig.~\ref{f_sed_evo}, there is a strong reduction of the flux bluewards of 5000 \AA~between $\sim$10 and $\sim$30 days. This well-known behaviour, which is also reproduced by the modelling, is caused by an increased line opacity from a large number of metal ion (e.g. \ion{Fe}{ii} and \ion{Cr}{ii}) lines. This explains the initial redward trend in the $U$-$V$ and $B$-$V$ colours, contrary to the blueward trend in $V$-$I$ and $V$-$K$ caused by the increasing temperature (see Fig.~\ref{f_colour_evo}). Judging from Fig.~\ref{f_sed_evo}, the flux deficit is considerably reduced after $\sim$30 days. Figure~\ref{f_vel_evo_p_cygni} shows the evolution of the absorption minimum for a number of lines as determined from the spectral sequence. These were measured by a simple automatic centring algorithm where the spectra were first smoothed down to 500 km s$^{-1}$ and the absorption minimum then traced through the interpolated spectral sequence and evaluated at the dates of observation. We also show the velocity corresponding to the blackbody radius as determined from fits to the photometry and as iteratively determined by the atmosphere modelling. Because of backscattering, the model blackbody radius is larger than the fitted one. It is reasonable to expect that the photosphere is located somewhere between the blackbody surface and the region where the line with the lowest velocity is formed. This line is the \ion{Fe}{ii} 5169 \AA~line, which was used in \citetalias{Ber12} to estimate the photospheric velocities.
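The automatic centring algorithm can be sketched as follows. This is a minimal Python version, assuming a single spectrum, a line rest wavelength in \AA~and a previous velocity estimate to define the search window; it is not the actual pipeline code.
\begin{verbatim}
import numpy as np

C = 2.998e5                                  # speed of light [km/s]

def absorption_velocity(wave, flux, lam0, v_prev, window=3000.0):
    # Blueshift velocity [km/s] of the absorption minimum of a line
    # with rest wavelength lam0 [A]; positive for blueshifted gas
    v = C * (1.0 - wave / lam0)
    dv = abs(np.median(np.diff(v)))
    n = max(1, int(500.0 / dv))              # smooth to ~500 km/s
    sflux = np.convolve(flux, np.ones(n) / n, mode='same')
    sel = np.abs(v - v_prev) < window        # search near last value
    return v[sel][np.argmin(sflux[sel])]
\end{verbatim}
Tracing the minimum through the interpolated sequence then amounts to feeding each epoch the velocity found at the previous one.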
\citet{Des05} have used NLTE modelling to show that the absorption minimum of the \ion{Fe}{ii} 5169 \AA~line is a good estimator for the photospheric velocity in Type IIP SNe, but it is not clear that this also applies to Type IIb SNe. Thus we have to consider the possibility that the photospheric velocities could be overestimated by up to 50 percent, and in Sect.~\ref{s_error_b12} we will discuss how such an error would affect the results in \citetalias{Ber12}. \begin{figure}[tb] \includegraphics[width=0.48\textwidth,angle=0]{figs/sn2011dh-vel-evo-p-cygni.pdf} \caption{Velocity evolution of the absorption minimum of the \ion{Fe}{ii} 5169 \AA~(black circles), \ion{He}{i} 5876 \AA~(yellow upward triangles), \ion{He}{i} 6678 \AA~(red downward triangles), \ion{He}{i} 10830 \AA~(green rightward triangles), \ion{He}{i} 20581 \AA~(blue leftward triangles), H$\alpha$~(red squares) and H$\beta$~(yellow diamonds) lines as automatically measured from the spectral sequence. For comparison we also show the velocity corresponding to the blackbody radius as determined from fits to the photometry (black dashed line) and as iteratively determined by the atmosphere modelling (black dotted line).} \label{f_vel_evo_p_cygni} \end{figure} \section{Comparison to other SNe} \label{s_sn_comp} In this section we compare the observations of SN 2011dh to the well observed Type IIb SNe 1993J and 2008ax. In order to do this we need to estimate their distance and extinction. This will be done without assuming similarity among the SNe and, in analogy with SN 2011dh, we will use high-resolution spectroscopy of the \ion{Na}{i} D and \ion{K}{i} 7699 \AA~interstellar absorption lines to estimate the extinction. At the end of the section we will investigate what difference an assumption of similarity among the SNe would make. \subsection{SN 1993J} \label{s_sn_1993J} SN 1993J, which occurred in M81, is one of the best observed SNe ever and the nature of this SN and its progenitor star is quite well understood. \citet{Shi94} and \citet{Woo94} used hydrodynamical modelling to show that a progenitor star with an initial mass of 12$-$15 $M_{\odot}$ with an extended (not specified) but low mass (0.2$-$0.9 $M_{\odot}$) hydrogen envelope reproduces the observed bolometric lightcurve. This was confirmed by the more detailed modelling of \citet{Bli98}. Progenitor observations were presented in \citet{Mau04}, while \citet{Sta09} used stellar evolutionary models to show that a progenitor star with an initial mass of 15$-$17 $M_{\odot}$ with an extended but low mass hydrogen envelope, stripped through mass transfer to a companion star, reproduces the observed progenitor luminosity and effective temperature. Photometric and spectroscopic data for SN 1993J were taken from \citet{Lew94}, \citet{Ric96}, \citet{Mat02}, \citet{Wad97} and IAU circulars. The distance to M81 is well constrained by Cepheid measurements, the mean and standard deviation of all such measurements listed in the NASA/IPAC Extragalactic Database (NED) being 3.62$\pm{0.22}$ Mpc, which we will adopt. The extinction within the Milky Way as given by the \citetalias{Sch98} extinction maps recalibrated by \citetalias{Sch11} is $E$($B$-$V$)$_\mathrm{MW}$=0.07 mag. \citet{Ric94} discuss the extinction in some detail and suggest a total $E$($B$-$V$)$_\mathrm{T}$ between 0.08 and 0.32 mag. High-resolution spectroscopy of the \ion{Na}{i} D lines was presented in \citet{Bow94}.
Given the rough similarity between M81 and the Milky Way we will use the \citetalias{Mun97} and \citetalias{Poz12} relations to estimate the extinction within M81. \citet{Bow94} resolve a system of components near the M81 recession velocity and another one near zero velocity. There is also a third system, which the authors attribute to extragalactic dust in the M81/M82 interacting system. The individual components of all three systems are quite heavily blended. As it is not clear whether the third system belongs to the Milky Way or M81, we calculate the extinction for all three systems with the \citetalias{Mun97} and \citetalias{Poz12} relations and sum to get estimates of the total extinction. The \citetalias{Mun97} relation gives $E$($B$-$V$)$_\mathrm{T}$=0.28 mag and the \citetalias{Poz12} relations $E$($B$-$V$)$_\mathrm{T}$=0.17 mag (on average). Given that each system clearly consists of multiple components, the \citetalias{Mun97} relation rather provides an upper limit (see discussion in \citetalias{Mun97}) and we will adopt the lower value given by the \citetalias{Poz12} relations. Adopting the higher value given by the \citetalias{Mun97} relation and the extinction within the Milky Way as upper and lower error limits, we then get $E$($B$-$V$)$_\mathrm{T}$=0.17$^{+0.11}_{-0.10}$ mag. \subsection{SN 2008ax} \label{s_sn_2008ax} SN 2008ax is another well observed Type IIb SN, but the nature of this SN and its progenitor star is not as well understood as for SN 1993J. \citet{Tsv09} used the hydrodynamical code STELLA \citep{Bli98} to show that a progenitor star with an initial mass of 13 $M_{\odot}$ with an extended (600 $R_{\odot}$) and low mass (not specified) hydrogen envelope reproduces the $UBVRI$ lightcurves well, except for the first few days. Progenitor observations were presented in \citet{Cro08}, but the conclusions about the nature of the progenitor star were not clear. Photometric and spectroscopic data for SN 2008ax were taken from \citet{Pas08}, \citet{Rom09}, \citet{Tsv09}, \citet{Tau11} and \citet[hereafter \citetalias{Cho11}]{Cho11}. The distance to the host galaxy NGC 4490 is not very well known. We have found only three measurements in the literature \citep{Tul88,Ter02,The07}. Taking the median and standard deviation of these and the Virgo, Great Attractor and Shapley corrected kinematic distance as given by NED, we get 9.38$\pm{0.85}$ Mpc, which we will adopt. The extinction within the Milky Way as given by the \citetalias{Sch98} extinction maps recalibrated by \citetalias{Sch11} is $E$($B$-$V$)$_\mathrm{MW}$=0.02 mag. High-resolution spectroscopy of the \ion{Na}{i} D and \ion{K}{i} 7699 \AA~lines was presented in \citetalias{Cho11}. The host galaxy NGC 4490 is a quite irregular galaxy so it is not clear if relations calibrated to the Milky Way are applicable. However, as we have no alternative, we will use the \citetalias{Mun97} relations to estimate the extinction within NGC 4490. The \citetalias{Cho11} spectra show blended multiple components of the \ion{Na}{i} D$_2$ line, most of which are clearly saturated. We measure the total equivalent width to be 1.0 \AA, which using the linear (unsaturated) part of the \citetalias{Mun97} relation corresponds to a lower limit of $E$($B$-$V$)$_\mathrm{H}$$>$0.25 mag. As the \ion{Na}{i} D$_2$ lines are saturated we cannot use these to derive a useful upper limit.
\citetalias{Cho11} measure the total equivalent width of the \ion{K}{i} 7699 \AA~line components to be 0.142 \AA, which using the corresponding \citetalias{Mun97} relation gives $E$($B$-$V$)$_\mathrm{H}$=0.54 mag. Adding the extinction within the Milky Way and adopting the lower limit from the \citetalias{Mun97} \ion{Na}{i} D$_2$ relation and the extinction corresponding to the bluest SN colours allowed for a blackbody (Sect. \ref{s_extinction_rev}) as the lower and upper error limits, we then get $E$($B$-$V$)$_\mathrm{T}$=0.56$^{+0.14}_{-0.29}$ mag. \subsection{Comparison} \label{s_sn_comp_comp} The absolute magnitudes in each band were calculated using cubic spline fits as described in Sect. \ref{s_obs_image_results} and, when necessary, extrapolated assuming constant colour. The left panel of Fig.~\ref{f_UK_bol_comp_comb} shows the pseudo-bolometric $U$ to $K$ (3000-24000 \AA) lightcurves of SNe 2011dh, 1993J and 2008ax as calculated with the photometric method (Sect.~\ref{s_bol_lightcurve}). Except for the first few days the shape is similar and they all show the characteristics common to Type I and IIb SNe lightcurves (Sect.~\ref{s_physics_sn_IIb}). As shown in \citetalias{Ber12} the differences during the first few days could be explained by differences in the radius and mass of the hydrogen envelope. Given the adopted distances and extinctions, SN 2011dh is fainter than SN 1993J which, in turn, is fainter than SN 2008ax. The peak luminosity occurs at similar times but the peak-to-tail luminosity ratio for SN 2011dh is smaller than for SN 1993J which, in turn, is smaller than for SN 2008ax. \begin{figure}[tb] \includegraphics[width=0.48\textwidth,angle=0]{figs/sn2011dh-UK-bol-comp-comb.pdf} \caption{Pseudo-bolometric $U$ to $K$ lightcurve for SN 2011dh (black circles) as compared to SNe 1993J (red triangles) and 2008ax (blue squares) for the adopted extinctions (left panel) and for a revised scenario where we have set $E$($B$-$V$)$_\mathrm{T}$ to 0.14, 0.09 and 0.27 mag for SNe 2011dh, 1993J and 2008ax respectively (right panel). In the left panel we also show the systematic error arising from the distance and extinction (dashed lines).} \label{f_UK_bol_comp_comb} \end{figure} The left panel of Fig.~\ref{f_colour_evo_comp_comb} shows the colour evolution for the three SNe. As for the lightcurves, the shape is quite different for the first few days, which could again be explained by differences in the radius and mass of the hydrogen envelope. The shape of the subsequent evolution is quite similar, with a blueward trend in the $V$-$I$ and $V$-$K$ colours (corresponding to increasing temperature) during the rise to peak luminosity and then a redward trend in all colours (corresponding to decreasing temperature) to a colour maximum at 40$-$50 days and a subsequent slow blueward trend. Given the adopted extinctions, SN 2011dh is redder than SN 1993J which, in turn, is redder than SN 2008ax.
\begin{figure}[tb] \includegraphics[width=0.48\textwidth,angle=0]{figs/sn2011dh-colour-evo-comp-comb.pdf} \caption{Colour evolution of SN 2011dh (black circles) as compared to SNe 1993J (red triangles) and 2008ax (blue squares) for the adopted extinctions (left panel) and for a revised scenario where we have set $E$($B$-$V$)$_\mathrm{T}$ to 0.14, 0.09 and 0.27 mag for SNe 2011dh, 1993J and 2008ax respectively (right panel). In the left panel we also show the systematic error arising from the extinction (dashed lines).} \label{f_colour_evo_comp_comb} \end{figure} In Figures \ref{f_spec_evo_Ha_comp} and \ref{f_spec_evo_He_10833_comp} we show closeups of the spectral evolution centred on the H$\alpha$ and the \ion{He}{i} 10830 \AA~lines. The minimum velocity for the H$\alpha$ absorption minimum has been marked and occurs at $\sim$9000, $\sim$11000 and $\sim$13000 km s$^{-1}$ for SNe 1993J, 2011dh and 2008ax respectively. As discussed in Sect.~\ref{s_analysis_spec} this velocity likely corresponds to the interface between the helium core and the hydrogen envelope for SN 2011dh. The H$\alpha$ line disappears at $\sim$50 days for SN 2008ax, at $\sim$80 days for SN 2011dh and is still strong at 100 days for SN 1993J. Figure~\ref{f_vel_evo_p_cygni_comp} shows the evolution of the absorption minimum for the \ion{Fe}{ii} 5169 \AA, \ion{He}{i} 5876 and 6678 \AA~and H$\alpha$ lines measured as described in Sect. \ref{s_analysis_spec}. Interpreting the \ion{Fe}{ii} 5169 \AA~absorption minimum as the photosphere and the minimum velocity for the H$\alpha$ absorption minimum as the interface between the helium core and the hydrogen envelope the photosphere reaches the helium core at $\lesssim$10, $\sim$5 and $\lesssim$10 days for SNe 1993J, 2011dh and 2008ax respectively. The helium lines appear at $\sim$20, $\sim$10 and $\sim$5 days for SNe 1993J, 2011dh and 2008ax respectively, at lower velocities close to the region where we expect the continuum photosphere to be located. The initial evolution is different among the SNe but after $\sim$30 days the helium lines have increased in strength, moved outward as compared to the photosphere and show a quite similar evolution for all three SNe. The evolution of the \ion{Fe}{ii} 5169 \AA~line is very similar for SNe 1993J and 2011dh but a bit different for SN 2008ax. In general, lines originating closer to the photosphere seem to have similar velocities for the three SNe whereas lines originating further out in the ejecta seem to have progressively higher velocities for SNe 1993J, 2011dh and 2008ax respectively. \begin{figure}[tb] \includegraphics[width=0.48\textwidth,angle=0]{figs/sn2011dh-spec-evo-Ha-comp.pdf} \caption{The (interpolated) evolution of the H$\alpha$ line for SN 2011dh (left panel) as compared to SNe 2008ax (middle panel) and 1993J (right panel). All panels in this and the following figure show the minimum velocity for the H$\alpha$ absorption minimum (marked H$_{\mathrm{MIN}}$) interpreted as the interface between the helium core and hydrogen envelope.} \label{f_spec_evo_Ha_comp} \end{figure} \begin{figure}[tb] \includegraphics[width=0.48\textwidth,angle=0]{figs/sn2011dh-spec-evo-He-10833-comp.pdf} \caption{The (interpolated) evolution of the \ion{He}{i} 10830 \AA~line for SN 2011dh (left panel) as compared to SNe 2008ax (middle panel) and 1993J (right panel). 
Given the sparse data available for SNe 1993J and 2008ax we show observed spectra for these SNe.} \label{f_spec_evo_He_10833_comp} \end{figure} The differences in peak and tail luminosities suggest differences in the mass of ejected \element[ ][56]{Ni} (Sect.~\ref{s_physics_sn_IIb}). The differences in peak-to-tail luminosity ratios suggest differences in the ejecta mass, explosion energy and/or distribution of \element[ ][56]{Ni} (Sect.~\ref{s_physics_sn_IIb}). However, as seen in the left panels of Figures \ref{f_UK_bol_comp_comb} and \ref{f_colour_evo_comp_comb}, the systematic errors in the luminosity and colour arising from the distance and extinction are large, so similarity among the SNe cannot be excluded. \citetalias{Mar13} find both the luminosities and the colours to be similar, mainly due to differences in the adopted distances and extinctions. The similar velocities of lines originating closer to the photosphere and the times at which peak luminosity occurs, both of which are independent of the distance and extinction, suggest similar ejecta masses and explosion energies (Sect.~\ref{s_physics_sn_IIb}). Although the differences in the bolometric lightcurves could possibly be explained by differences in the mass and distribution of ejected \element[ ][56]{Ni}, this is not fully satisfactory as the mass of ejected \element[ ][56]{Ni} is known from observations to be correlated with initial mass and expansion velocity \citep{Fra11,Mag12}. All in all, the observed characteristics of the SNe do not seem entirely consistent, and we have to consider the possibility that the adopted distances and extinctions are in error. Interestingly enough, it is possible to revise the extinctions alone, within the adopted error bars, in such a way that it brings the colour evolution, the bolometric luminosities and the peak-to-tail luminosity ratios into good agreement. This is shown in the right panels of Figures \ref{f_UK_bol_comp_comb} and \ref{f_colour_evo_comp_comb}, where we have set $E$($B$-$V$)$_\mathrm{T}$ to 0.14, 0.09 and 0.27 mag for SNe 2011dh, 1993J and 2008ax respectively. Intrinsic differences among the SNe cannot be excluded and the arguments used are only suggestive, so we cannot draw a definite conclusion. It is clear, however, that a scenario where all three SNe have similar ejecta masses, explosion energies and ejected masses of \element[ ][56]{Ni} is possible. As shown in \citetalias{Ber12}, the differences in the early evolution and the velocities of lines originating further out in the ejecta could be explained by differences in the mass and radius of the hydrogen envelope. The progressively higher minimum velocities for the H$\alpha$ absorption minimum, if interpreted as the interface between the helium core and the hydrogen envelope, would naively suggest progressively lower masses of this envelope for SNe 1993J, 2011dh and 2008ax respectively. Such a conclusion is supported by the early photometric evolution, the strength and persistence of the H$\alpha$ line, the hydrodynamical modelling of SNe 1993J and 2011dh in \citetalias{Ber12}, the spectral modelling of SN 2011dh in this paper and by \citetalias{Arc11}, and the spectral modelling of SN 2008ax by \citet{Mau10}. \citetalias{Mar13} reach a similar conclusion based on the progressively later times at which the helium lines appear, although we do not find their physical argument convincing (Sect.~\ref{s_analysis_spec}).
\begin{figure}[tb] \includegraphics[width=0.48\textwidth,angle=0]{figs/sn2011dh-vel-evo-p-cygni-comp.pdf} \caption{Velocity evolution of the absorption minimum for the H$\alpha$ (upper panel), \ion{He}{i} 5876 \AA~(upper middle panel), \ion{He}{i} 6678 \AA~(lower middle panel) and \ion{Fe}{ii} 5169 \AA~(lower panel) lines for SNe 2011dh (black circles), 2008ax (blue squares) and 1993J (red triangles) measured as described in Sect. \ref{s_analysis_spec}.} \label{f_vel_evo_p_cygni_comp} \end{figure} \section{Discussion} \label{s_discussion} In Sect. \ref{s_extinction_rev} we revisit the issue of extinction and discuss constraints from the SN itself and the comparisons to SNe 1993J and 2008ax made in Sect.~\ref{s_sn_comp_comp}. In Sect.~\ref{s_physics_sn_IIb} we discuss the physics of Type IIb lightcurves as understood from approximate models, in particular in relation to the hydrodynamical modelling made in \citetalias{Ber12}. In Sect.~\ref{s_error_b12} we discuss the sensitivity of the SN and progenitor parameters derived in \citetalias{Ber12} to errors in the distance, extinction and photospheric velocity and also revise these parameters to agree with the distance and extinction adopted in this paper. In Sect.~\ref{s_prog_dis} we discuss the results on the disappearance of the progenitor star, what consequences this has for the results in \citetalias{Mau11} and \citetalias{Ber12}, and our understanding of this star and Type IIb progenitors in general. Finally, in Sect.~\ref{s_45_micron_excess}, we discuss the excess in the Spitzer 4.5 $\mu$m band and possible explanations. \subsection{Extinction revisited} \label{s_extinction_rev} In Sect.~\ref{s_extinction} we discussed different estimates of the extinction for SN 2011dh. Most estimates suggested a low extinction and we adopted $E$($B$-$V$)$_\mathrm{T}$=0.07$^{+0.07}_{-0.04}$ mag as estimated from the equivalent widths of the \ion{Na}{i} D lines. The near simultaneous $V$ and $R$ band observations from day 1 presented in \citetalias{Arc11} and \citetalias{Tsv12} correspond to an intrinsic $V$-$R$ colour of about $-$0.2 mag for the adopted extinction. The bluest $V$-$R$ colour allowed for a blackbody, which can be calculated from the Rayleigh-Jeans law, is $-$0.16 mag, so this suggests a very high temperature. In higher extinction scenarios this colour would be even bluer and, even taking measurement and calibration errors into account, in conflict with the bluest $V$-$R$ colour allowed for a blackbody. Figure~\ref{f_bb_T_evo} shows the evolution of the blackbody temperature for two higher extinction scenarios where we have increased $E$($B$-$V$)$_\mathrm{T}$ in $\sim$0.1 steps to 0.2 and 0.3 mag. As seen, the blackbody temperature would become quite high between 10 and 20 days and we would expect lines from low ionization potential ions such as \ion{Ca}{ii} and \ion{Fe}{ii} to be quite sensitive to this. As shown in Fig.~\ref{f_spec_model_comp}, the SN atmosphere code described in Sect.~\ref{s_analysis_spec} can reproduce neither the \ion{Ca}{ii} 8498/8542/8662 \AA~lines nor the \ion{Fe}{ii} lines between 10 and 20 days for these higher extinction scenarios. Even though NLTE effects may change the ion fractions, this again suggests a low extinction scenario for SN 2011dh. Comparisons to SNe 2008ax and 1993J provide another source of information.
As discussed in Sect.~\ref{s_sn_comp} an assumption of similarity in luminosity and colour among the SNe requires a revision of the extinctions adopted in this paper and suggests a revision of the extinction for SN 2011dh towards the upper error bar. However, as pointed out, intrinsic differences among the SNe cannot be excluded, and as such a revision would be within our error bars we do not find this argument sufficient to revise our adopted value $E$($B$-$V$)=0.07$^{+0.07}_{-0.04}$ mag. \subsection{Physics of Type IIb SNe lightcurves} \label{s_physics_sn_IIb} The bolometric lightcurves of SN 2011dh and other Type IIb SNe can be divided into two distinct phases depending on the energy source powering the lightcurve. The first phase is powered by the thermal energy deposited in the ejecta by the explosion. The second phase is powered by the energy deposited in the ejecta by the $\gamma$-rays emitted in the radioactive decay chain of \element[ ][56]{Ni}. In \citetalias{Ber12} we used the \element[ ][56]{Ni} powered phase to estimate the ejecta mass, explosion energy and ejected mass of \element[ ][56]{Ni}, whereas the explosion energy powered phase was used to estimate the radius of the progenitor star. The explosion energy powered phase ends at $\sim$3 days, when our observations begin, but $V$, $R$ and $g$ band data have been published in \citetalias{Arc11} and \citetalias{Tsv12}. These data are insufficient to construct a bolometric lightcurve but it is clear that this phase corresponds to a strong decline of the bolometric luminosity. In the \citetalias{Ber12} modelling roughly half the explosion energy is deposited as thermal energy in the core but most of this is lost before shock breakout due to expansion. Only a small fraction of the explosion energy is deposited as thermal energy in the envelope and it is the cooling of this, both by expansion and radiative diffusion, that gives rise to the strong decline of the bolometric luminosity. The shape and extent of the bolometric lightcurve in the explosion energy powered phase depend on the mass, radius, density profile and composition of the envelope and, as discussed in \citetalias{Ber12}, require detailed hydrodynamical modelling. The subsequent \element[ ][56]{Ni} powered phase is well covered by our data and the bolometric lightcurve (Fig.~\ref{f_UV_MIR_bol}) shows the characteristics common to all Type I and IIb SNe: a rise to peak luminosity, followed by a decline phase and a subsequent tail phase with a roughly linear decline rate. These characteristics can be qualitatively understood from approximate models such as the ones by \citet{Arn82} or \citet{Ims92}. The rising phase is caused by radiative diffusion of the energy deposited in the ejecta by the $\gamma$-rays. The radioactive heating decreases with time, and so does the diffusion time because the ejecta are expanding. As shown by \citet{Arn82} the luminosity peak is reached when the radioactive heating equals the cooling by radiative diffusion. During the subsequent decline phase the diffusion time continues to decrease until the SN reaches the tail phase, where the diffusion time is negligible and the luminosity equals the radioactive heating (instant diffusion). The shape of the tail is not exactly linear but is modulated by a term determined by the decreasing optical depth for $\gamma$-rays as the ejecta continue to expand.
From approximate models the qualitative dependence of the bolometric lightcurve in the \element[ ][56]{Ni} powered phase on basic parameters, such as the explosion energy, ejecta mass and mass of ejected \element[ ][56]{Ni}, can be understood. Increasing the explosion energy will increase the expansion velocities, which will decrease the diffusion time for thermal radiation and the optical depth for $\gamma$-rays. Increasing the ejecta mass will have the opposite effect but, as the optical depth $\tau \propto (M^{2}/E)$ and the diffusion time $t_{\mathrm{d}} \propto (M^{3}/E)^{1/4}$ \citep{Arn82}, the bolometric lightcurve depends more strongly on the ejecta mass than on the explosion energy. Either an increase of the explosion energy or a decrease of the ejecta mass will result in an earlier and more luminous peak of the bolometric lightcurve, whereas the tail luminosity will be decreased. Increasing the mass of \element[ ][56]{Ni} will increase the radioactive heating and thus result in an overall increase of the luminosity; in fact it corresponds to a pure scaling in the approximate models. The distribution of \element[ ][56]{Ni} also affects the lightcurve: if the \element[ ][56]{Ni} is distributed further out in the ejecta, the lightcurve will rise faster to the peak because of the decreased diffusion time for thermal radiation, and have a lower luminosity on the tail because of the decreased optical depth for $\gamma$-rays. As shown in figures 2, 4, 5 and 6 in \citetalias{Ber12} all these qualitative dependencies are well followed by the hydrodynamical models. If the optical depth for $\gamma$-rays in the tail phase is high, the shape of the bolometric lightcurve in the \element[][56]{Ni} powered phase depends exclusively on the diffusion time for thermal radiation, which determines the quantity $(M^{3}/E)$, and the ejecta mass and explosion energy become degenerate. In this case knowledge of the expansion velocity, which determines the quantity $(M/E)$, is needed to determine the SN parameters. However, as seen in Fig.~\ref{f_UV_MIR_bol_model_comp}, the optical depth for $\gamma$-rays becomes $\lesssim$1 at $\sim$40 days for SN 2011dh. The bolometric lightcurve in the tail phase then depends on the optical depth for $\gamma$-rays, which determines the quantity $(M^{2}/E)$, and provides the constraint needed to break the degeneracy. However, as the bolometric lightcurve also depends on the distribution of \element[ ][56]{Ni}, the problem is not necessarily well-conditioned. In our experience, knowledge of the expansion velocity, which corresponds to the fitting of photospheric velocities in \citetalias{Ber12}, is needed to robustly determine the SN parameters. \subsection{Error sensitivity and revisions of the \citetalias{Ber12} modelling} \label{s_error_b12} One issue not discussed in \citetalias{Ber12} is the sensitivity of the results to errors in the adopted distance and extinction. A change in the distance corresponds to a scaling of the bolometric lightcurve, whereas a change in the extinction is more complicated, as the change in luminosity also depends on the colour. However, as seen in Fig.~\ref{f_UV_MIR_bol}, the change in luminosity for SN 2011dh due to the combined errors in distance and extinction does not differ significantly from a scaling. As the adopted distance and extinction have been revised as compared to \citetalias{Ber12}, we also need to investigate the effect of this change on the derived quantities.
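As a rough consistency check (our own arithmetic, not from \citetalias{Ber12}), using the distance uncertainty alone, the adopted distance of 7.8$^{+1.1}_{-0.9}$ Mpc rescales the luminosity, and hence the derived mass of ejected \element[ ][56]{Ni} (see below), by \[ \left(\frac{8.9}{7.8}\right)^{2} \simeq 1.30 \quad \mathrm{and} \quad \left(\frac{6.9}{7.8}\right)^{2} \simeq 0.78 , \] so a central value of 0.075 M$_{\odot}$ maps to roughly 0.06-0.10 M$_{\odot}$ even before the extinction uncertainty is added, consistent with the range quoted below.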
In the \element[ ][56]{Ni} powered phase, according to approximate models, the luminosity is proportional to the mass of ejected \element[ ][56]{Ni} (Sect.~\ref{s_physics_sn_IIb}). Therefore, ignoring possible degeneracy among the parameters, we expect the derived ejecta mass and explosion energy to be insensitive to the errors in the distance and extinction, and the error in the derived mass of ejected \element[ ][56]{Ni} to be similar to the error in the luminosity. We have re-run the He4 model with the \element[ ][56]{Ni} mass increased to 0.075 M$_\odot$ to account for the revisions in the adopted distance and extinction. The model bolometric lightcurve is shown in Fig.~\ref{f_UV_MIR_bol_model_comp} and reproduces well the bolometric lightcurve presented in this paper. Estimating the errors arising from the distance and extinction as described above, the revised mass of ejected \element[ ][56]{Ni} becomes 0.05-0.10 M$_{\odot}$, whereas the ejecta mass and explosion energy remain the same as in \citetalias{Ber12}, 1.8-2.5 M$_{\odot}$ and 0.6-1.0$\times$10$^{51}$ erg respectively. \begin{figure}[tb] \includegraphics[width=0.48\textwidth,angle=0]{figs/sn2011dh-UV-MIR-bol-model-comp.pdf} \caption{Revised \citetalias{Ber12} He4 model bolometric lightcurve with the \element[ ][56]{Ni} mass increased to 0.075 M$_\odot$ (blue solid line) compared to the pseudo-bolometric UV to MIR lightcurve for SN 2011dh calculated with the spectroscopic method (black dots). For comparison we also show the total $\gamma$-ray luminosity corresponding to this amount of \element[ ][56]{Ni} (black dashed line).} \label{f_UV_MIR_bol_model_comp} \end{figure} In the explosion energy powered phase, according to approximate models, the luminosity is proportional to the radius. However, the lightcurve in this phase is not well described by approximate models and detailed hydrodynamical modelling is needed (Sect.~\ref{s_physics_sn_IIb}). As discussed in \citetalias{Ber12} the sensitivity of the estimated radius to changes in the luminosity is modest. Examining the range of envelope models consistent with the \citetalias{Arc11} $g$ band observations, the distance and extinction adopted in this paper and the systematic errors arising from these, we find a revised progenitor radius of 200-300 R$_{\odot}$. As discussed in Sect.~\ref{s_analysis_spec} the region where the \ion{Fe}{ii} 5169 \AA \ line is formed rather provides an upper limit for the location of the photosphere, and the photospheric velocities used in the hydrodynamical modelling might therefore be overestimated. The dependence of the derived quantities on the photospheric velocity is complicated and a full scan of the model parameter space is probably needed to make a quantitative estimate. This is a potential problem and further work is needed to constrain the photospheric velocities well. \subsection{Disappearance of the proposed progenitor star} \label{s_prog_dis} In \citet{Erg13} we presented $B$, $V$ and $r$ observations of the SN site obtained on Jan 20 2013 ($V$ and $r$) and Mar 19 2013 ($B$), 601 and 659 days past explosion respectively. An additional set of $B$, $V$ and $r$ band observations was obtained on Apr 14 2013, May 15 2013 and Jun 1 2013, 685, 715 and 732 days past explosion. In Appendix \ref{a_prog_obs} we give the details of these observations and the photometric measurements and calibration.
Subtraction of pre-explosion images shows that the flux from the yellow supergiant proposed as the progenitor by \citetalias{Mau11} has been reduced by at least 74$\pm{9}$, 73$\pm{4}$ and 77$\pm{4}$ percent in the $B$, $V$ and $r$ bands respectively. The HST observations obtained on Mar 2 2013 and presented by \citetalias{Dyk13b} correspond to flux reductions of 71$\pm{1}$ and 70$\pm{1}$ percent in the $F555W$ and $F814W$ bands respectively. \citet{Szc12} find the progenitor to be variable at the five percent level, so variability of the star is unlikely to explain the flux reduction. We find declines of 0.57, 0.76 and 0.64 mag, corresponding to decline rates of 0.0073, 0.0090 and 0.0053 mag day$^{-1}$ in the $B$, $V$ and $r$ bands respectively, between the first and the second set of observations, which is consistent with the remaining flux being emitted by the SN. As can be derived from the approximate models discussed in Sect. \ref{s_physics_sn_IIb}, in the limit of low optical depth for the $\gamma$-rays and if all positrons are trapped, the decline rate is $(1 + (1-f_{\mathrm{e^{+}}}(t)) \, 223/t) \, 0.0098$ mag day$^{-1}$, where $f_{\mathrm{e^{+}}}(t)$ is the fractional positron contribution to the luminosity and $t$ is given in days. This gives an expected decline rate of 0.0098-0.0132 mag day$^{-1}$ at 650 days (the two limiting values are evaluated explicitly at the end of this subsection), in rough agreement with the observed decline rates if the contribution from positrons is assumed to dominate. Given all this, although we cannot exclude a minor contribution to the pre-explosion flux from other sources, we find that the yellow supergiant has disappeared. The only reasonable explanation is that the star was the progenitor of SN 2011dh, as originally proposed in \citetalias{Mau11}, which is also the conclusion reached by \citetalias{Dyk13b}. The disappearance of the yellow supergiant confirms the results in \citetalias{Ber12}, in which we showed that an extended progenitor with the observed properties of the yellow supergiant could well reproduce the early optical evolution. This shows that the duration of the initial cooling phase for a SN with an extended progenitor can be significantly shorter than commonly thought, and that approximate models such as the one by \citet{Rab11} used in \citetalias{Arc11} do not necessarily apply. It also indicates that the proposed division of Type IIb SNe progenitors into compact and extended ones and the relation between the speed of the shock and the type of progenitor \citep{Che10} need to be revised. It is interesting to note that the two progenitors of Type IIb SNe (1993J and 2011dh) whose nature has been revealed were in both cases extended supergiants. The disappearance of the yellow supergiant in M51 was a major step in achieving one of our main goals, to determine the initial mass of the progenitor star. This mass has now been estimated to be $\sim$13 M$_{\odot}$ by two different methods, the hydrodynamical modelling in \citetalias{Ber12} and the progenitor analysis in \citetalias{Mau11}, respectively. Both methods use results from stellar evolutionary modelling to relate the He core mass and the progenitor luminosity respectively to the initial mass, but are otherwise independent. We note that, contrary to most other types, the initial mass of Type IIb SNe progenitors might be derived from hydrodynamical modelling without any assumptions about uncertain mass-loss rates, as the star is essentially a bare He core (although with a thin and extended envelope).
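For concreteness (our own arithmetic, using only the numbers quoted above), evaluating the decline rate expression at $t=650$ days in the two limiting cases of the positron contribution gives \[ \left(1+\frac{223}{650}\right) 0.0098 \simeq 0.0132 \;\; (f_{\mathrm{e^{+}}}=0) \quad \mathrm{and} \quad 0.0098 \;\; (f_{\mathrm{e^{+}}}=1) \;\; \mathrm{mag\,day^{-1}} , \] and the observed decline rates lie closest to the limit where the positron contribution dominates.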
\subsection{4.5 $\mu$m excess} \label{s_45_micron_excess} As mentioned in Sections \ref{s_analysis_phot} and \ref{s_bol_lightcurve} there is a flux excess in the Spitzer 4.5 $\mu$m band, as compared to the 2MASS $JHK$ and Spitzer 3.6 $\mu$m bands, developing during the first 100 days. This is most clearly seen in Fig. \ref{f_sed_evo}. Whereas other bands redward of $V$ are well approximated by the blackbody fits, the flux at 4.5 $\mu$m is a factor of $\sim$5 in excess at 100 days. Warm dust or CO fundamental band emission are two possible explanations. For day 50-100 the excess (relative to the blackbody fits discussed in Sect.~\ref{s_analysis_colour}) is well fitted by a blackbody with T$\simeq$400 K, R$\simeq$$5 \times 10^{16}$ cm and L$\simeq$$6 \times 10^{40}$ erg s$^{-1}$. \citet{Hel13} find a blackbody luminosity declining from $\sim$$7 \times 10^{40}$ to $\sim$$3 \times 10^{40}$ erg s$^{-1}$ and cooling from $\sim$1600 to $\sim$600 K between $\sim$20 and $\sim$90 days by fitting the MIR bands alone. Using a simple model for heated CSM dust they also find a thermal dust echo to be consistent with the observed MIR fluxes. Given the lack of MIR spectra we can neither confirm nor exclude CO fundamental band emission as the explanation of the excess. There is, though, a possible excess (as compared to the continuum) developing near the location of the first overtone band at $\sim$23000 \AA. NIR spectra from later epochs may help to resolve this issue, as we expect first overtone emission to grow stronger as compared to the continuum. \section{Conclusions} \label{s_conclusions} We present extensive photometric and spectroscopic optical and NIR observations of SN 2011dh obtained during the first 100 days. The calibration of the photometry is discussed in some detail and we find it to be accurate to the five percent level in all bands. Using our observations, as well as SWIFT UV and Spitzer MIR observations, we calculate the bolometric UV to MIR lightcurve using both photometric and spectroscopic data. This bolometric lightcurve, together with the photospheric velocity as estimated from the absorption minimum of the \ion{Fe}{ii} 5169 \AA~line, provides the observational basis for the hydrodynamical modelling done in \citetalias{Ber12}. We adopt a distance of 7.8$^{+1.1}_{-0.9}$ Mpc based on all estimates in the literature and find an extinction of $E$($B$-$V$)$_\mathrm{T}$=0.07$^{+0.07}_{-0.04}$ mag to be consistent with estimates and constraints presented in the literature and in this paper. The sensitivity of the results in \citetalias{Ber12} to these uncertainties is discussed, and we find that only the derived mass of ejected \element[ ][56]{Ni} and the radius are likely to be affected. We also revise the modelling made in \citetalias{Ber12} to agree with the values of the distance and extinction adopted in this paper and find that only the derived mass of ejected \element[ ][56]{Ni} and the radius need to be revised. The uncertainty in the photospheric velocity as estimated from the absorption minimum of the \ion{Fe}{ii} 5169 \AA~line is discussed, and we find that we cannot constrain this velocity very well. This is a potential problem, as we are unable to quantify the sensitivity of the results in \citetalias{Ber12} to this uncertainty. We present and discuss pre- and post-explosion observations which show that the yellow supergiant coincident with SN 2011dh has disappeared and indeed was the progenitor, as proposed in \citetalias{Mau11}.
Furthermore, the results from the progenitor analysis in \citetalias{Mau11} are consistent with those from the hydrodynamical modelling in \citetalias{Ber12}. Given the revisions in this paper, we find that an almost bare helium core with a mass of 3.3-4.0 M$_{\odot}$, surrounded by a thin hydrogen rich envelope extending to 200-300 R$_{\odot}$, exploded with an energy of 0.6$-$1.0 $\times10^{51}$ erg, ejecting a mass of 1.8-2.5 M$_{\odot}$, of which 0.05-0.10 M$_{\odot}$ consisted of synthesised \element[ ][56]{Ni}. The absorption minimum of the hydrogen lines is never seen below $\sim$11000 km s$^{-1}$ but approaches this value when the lines get weaker. Spectral modelling of the hydrogen lines using the \citetalias{Ber12} He4R270 ejecta model reproduces this behaviour well, and the minimum velocity of the absorption minima coincides with the model interface between the helium core and the hydrogen rich envelope. The good agreement between the modelled and observed minimum velocities gives support to the \citetalias{Ber12} He4R270 ejecta model, and we find it most likely that the observed minimum velocity of $\sim$11000 km s$^{-1}$ corresponds to the interface between the helium core and the hydrogen rich envelope. We note that the minimum velocity of the H$\alpha$ absorption minimum for SNe 1993J, 2011dh and 2008ax is $\sim$9000, $\sim$11000 and $\sim$13000 km s$^{-1}$ respectively, which suggests that the interface between the helium core and the hydrogen rich envelope is located near these progressively higher velocities. By varying the fraction of hydrogen in the envelope, we find a hydrogen mass of 0.01-0.04 M$_{\odot}$ to be consistent with the observed evolution of the hydrogen lines. This is in reasonable agreement with the 0.02 M$_{\odot}$ in the original model and the 0.024 M$_{\odot}$ estimated by \citetalias{Arc11} using spectral modelling similar to the one in this paper. We estimate that the photosphere reaches the interface between the helium core and the hydrogen rich envelope at 5-7 days. The helium lines appear between $\sim$10 days (\ion{He}{i} 10830 and 5876 \AA) and $\sim$15 days (\ion{He}{i} 6678, 7065 and 20581 \AA), close to the region where we expect the photosphere to be located, and then move outward in velocity until $\sim$40 days. This suggests that the early evolution of these lines is driven by increasing non-thermal excitation due to decreasing optical depth for the $\gamma$-rays. The photometric and spectral characteristics of SNe 2011dh, 1993J and 2008ax are compared, and we find the colours and luminosities to differ significantly for the distances and extinctions adopted in this paper. However, the errors arising from the distance and extinction are large, and a revision of the extinctions, just within the error bars, would bring the colours and luminosities into good agreement. Although a definite conclusion cannot be made, it is clear that a scenario where all three SNe have similar ejecta masses, explosion energies and ejected masses of $^{56}$Ni is possible. As shown in \citetalias{Ber12}, the differences in the early evolution could be explained by differences in the mass and radius of the hydrogen envelope. Progressively higher velocities of the interface between the helium core and the hydrogen rich envelope, as proposed above, would naively correspond to progressively lower masses of this envelope for SNe 1993J, 2011dh and 2008ax respectively.
Such a conclusion is supported by the early photometric evolution, the strength and persistence of the H$\alpha$ line, and hydrodynamical as well as spectral modelling of these SNe. We detect a flux excess in the 4.5 $\mu$m Spitzer band as compared to the NIR and the 3.6 $\mu$m Spitzer band. A thermal dust echo in the CSM, as proposed by \citet{Hel13}, or CO fundamental band emission are possible explanations, but further work using late time observations is needed to resolve this issue. The high quality dataset presented in this paper provides an ideal base for further modelling of the SN. One of the most interesting issues which remains unsolved is the possible existence of a bluer and more compact companion star, as predicted by the binary evolutionary modelling in \citet{Ben12}. It is still not clear which of the single or binary star channels dominates the production of Type IIb SNe. HST imaging, preferably in the UV, would have a good chance of detecting such a companion. \section{Acknowledgements} This work is partially based on observations of the European supernova collaboration involved in the ESO-NTT and TNG large programmes led by Stefano Benetti. This work is partially based on observations made with the Nordic Optical Telescope, operated by the Nordic Optical Telescope Scientific Association at the Observatorio del Roque de los Muchachos, La Palma, Spain, of the Instituto de Astrofisica de Canarias. We acknowledge the exceptional support we got from the NOT staff throughout this campaign. This work is partially based on observations made with the Italian Telescopio Nazionale Galileo (TNG) operated by the Fundaci\'on Galileo Galilei of the INAF (Istituto Nazionale di Astrofisica) at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias; the 1.82m Copernico and Schmidt 67/92 telescopes of INAF-Asiago Observatory; the 1.22m Galileo telescope of Dipartimento di Fisica e Astronomia (Universita' di Padova); the LBT, which is an international collaboration among institutions in the United States, Italy, and Germany. LBT Corporation partners are The Ohio State University, and The Research Corporation, on behalf of the University of Notre Dame, University of Minnesota and University of Virginia; the University of Arizona on behalf of the Arizona university system; INAF, Italy. We are indebted to S. Ciroi, A. Siviero and L. Aramyan for help with the Galileo 1.22m observations. This work is partially based on observations made with the William Herschel Telescope, operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias, and the Liverpool Telescope, operated on the island of La Palma by Liverpool John Moores University in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias with financial support from the UK Science and Technology Facilities Council. This work is partially based on observations made with the Carlos S\'anchez Telescope operated on the island of Tenerife by the Instituto de Astrof\'{i}sica de Canarias in the Spanish Observatorio del Teide, and the Joan Or\'o Telescope of the Montsec Astronomical Observatory, which is owned by the Generalitat de Catalunya and operated by the Institute for Space Studies of Catalonia (IEEC). L.T., A.P., S.B., E.C., and M.T. are partially supported by the PRIN-INAF 2011 with the project Transient Universe: from ESO Large to PESSTO. N.E.R.
acknowledges financial support by the MICINN grants AYA08-1839/ESP and AYA2011-24704/ESP, and by the ESF EUROCORES Program EuroGENESIS (MINECO grant EUI2009-04170). S.T. acknowledges support by TRR 33 “The Dark Universe” of the German Research Foundation. J.S. and the OKC are supported by The Swedish Research Council. R.K. and M.T. gratefully acknowledge the allocation of Liverpool Telescope time under the programmes ITP10-04 and PL11A-03 on which this study is partially based. F.B. acknowledges support from FONDECYT through grant 3120227 and by the Millennium Center for Supernova Science through grant P10-064-F (funded by "Programa Bicentenario de Ciencia y Tecnolog\'{i}a de CONICYT" and "Programa Iniciativa Cient\'{i}fica Milenio de MIDEPLAN"). Finally, we thank the referee Jozsef Vinko for his useful suggestions. \begin{table*}[p] \caption{Optical colour-corrected JC $U$ and S-corrected JC $BVRI$ magnitudes for SN 2011dh. Errors are given in parentheses.} \begin{center} \include{sn2011dh-jc-table-1} \end{center} \label{t_jc} \end{table*} \setcounter{table}{2} \begin{table*}[p] \caption{Continued.} \begin{center} \include{sn2011dh-jc-table-2} \end{center} \end{table*} \begin{table*} \caption{Optical S-corrected SWIFT JC $UBV$ magnitudes for SN 2011dh. Errors are given in parentheses.} \begin{center} \include{sn2011dh-jc-swift-table} \end{center} \label{t_jc_swift} \end{table*} \begin{table*}[p] \caption{Optical colour-corrected SDSS $u$ and S-corrected SDSS $griz$ magnitudes for SN 2011dh. Errors are given in parentheses.} \begin{center} \include{sn2011dh-sloan-table} \end{center} \label{t_sloan} \end{table*} \begin{table*}[p] \caption{NIR S-corrected 2MASS $JHK$ magnitudes for SN 2011dh. Errors are given in parentheses.} \begin{center} \include{sn2011dh-2mass-table} \end{center} \label{t_nir} \end{table*} \begin{table*}[p] \caption{MIR Spitzer 3.6 $\mu$m and 4.5 $\mu$m magnitudes for SN 2011dh. Errors are given in parentheses.} \begin{center} \include{sn2011dh-spitzer-table} \end{center} \label{t_mir} \end{table*} \begin{table*}[p] \caption{UV SWIFT $UVW1$, $UVM2$ and $UVW2$ magnitudes for SN 2011dh. Errors are given in parentheses.} \begin{center} \include{sn2011dh-swift-table} \end{center} \label{t_uv} \end{table*} \begin{table*}[p] \caption{List of optical and NIR spectroscopic observations.} \begin{center} \scalebox{1.0}{ \begin{tabular}{l l l l l l l} \hline\hline \\ [-1.5ex] JD (+2400000) & Phase & Grism & Range & Resolving power & Resolution & Telescope (Instrument)\\ [0.5ex] (d) & (d) & & (\AA) & & (\AA) &\\ [0.5ex] \hline \\ [-1.5ex] 55716.41 & 3.41 & LRB & 3300-8000 & 585 & 10.0 & TNG (LRS) \\ 55716.41 & 3.41 & LRR & 5300-9200 & 714 & 10.4 & TNG (LRS) \\ 55716.47 & 3.47 & IJ & 9000-14500 & 333 & ... & TNG (NICS) \\ 55716.49 & 3.49 & HK & 14000-25000 & 333 & ... & TNG (NICS) \\ 55717.37 & 4.37 & b200 & 3300-8700 & ... & 12.0 & CA-2.2m (CAFOS) \\ 55717.37 & 4.37 & r200 & 6300-10500 & ... & 12.0 & CA-2.2m (CAFOS) \\ 55717.49 & 4.49 & Grism 4 & 3500-8450 & 613 & ... & AS-1.82m (AFOSC) \\ 55718.42 & 5.42 & Grism 4 & 3200-9100 & 355 & 16.2 & NOT (ALFOSC) \\ 55718.44 & 5.44 & Grism 5 & 5000-10250 & 415 & 16.8 & NOT (ALFOSC) \\ 55719.40 & 6.40 & Grism 4 & 3200-9100 & 355 & 16.2 & NOT (ALFOSC) \\ 55719.42 & 6.42 & Grism 5 & 5000-10250 & 415 & 16.8 & NOT (ALFOSC) \\ 55719.47 & 6.47 & VPH4 & 6350-7090 & ...
& 3.7 & AS-1.82m (AFOSC) \\ 55721.39 & 8.39 & Grism 4 & 3200-9100 & 355 & 16.2 & NOT (ALFOSC) \\ 55721.40 & 8.40 & Grism 5 & 5000-10250 & 415 & 16.8 & NOT (ALFOSC) \\ 55721.45 & 8.45 & R300B & 3200-5300 & ... & 4.1 & WHT (ISIS) \\ 55721.45 & 8.45 & R158R & 5300-10000 & ... & 7.7 & WHT (ISIS) \\ 55722.57 & 9.57 & R300B & 3200-5300 & ... & 4.1 & WHT (ISIS) \\ 55722.57 & 9.57 & R158R & 5300-10000 & ... & 7.7 & WHT (ISIS) \\ 55722.42 & 9.42 & IJ & 9000-14500 & 333 & ... & TNG (NICS) \\ 55722.46 & 9.48 & HK & 14000-25000 & 333 & ... & TNG (NICS) \\ 55723.61 & 10.61 & VHRV & 4752-6698 & 2181 & 2.6 & TNG (LRS) \\ 55725.38 & 12.38 & R300B & 3200-5300 & ... & 4.1 & WHT (ISIS) \\ 55725.38 & 12.38 & R158R & 5300-10000 & ... & 7.7 & WHT (ISIS) \\ 55730.45 & 17.45 & Grism 4 & 3200-9100 & 355 & 16.2 & NOT (ALFOSC) \\ 55730.46 & 17.46 & Grism 5 & 5000-10250 & 415 & 16.8 & NOT (ALFOSC) \\ 55730.52 & 17.52 & IJ & 9000-14500 & 333 & ... & TNG (NICS) \\ 55730.57 & 17.57 & HK & 14000-25000 & 333 & ... & TNG (NICS) \\ 55733.37 & 20.37 & b200 & 3300-8700 & ... & 12.0 & CA-2.2m (CAFOS) \\ 55733.37 & 20.37 & r200 & 6300-10500 & ... & 12.0 & CA-2.2m (CAFOS) \\ 55733.42 & 20.42 & Grism 4 & 3200-9100 & 355 & 16.2 & NOT (ALFOSC) \\ 55733.43 & 20.43 & Grism 5 & 5000-10250 & 415 & 16.8 & NOT (ALFOSC) \\ 55737.68 & 24.68 & 200 H+K & 14900-24000 & 1881(H)/2573(K) & ... & LBT (LUCIFER) \\ 55738.49 & 25.49 & Grism 4 & 3200-9100 & 355 & 16.2 & NOT (ALFOSC) \\ 55738.50 & 25.50 & Grism 5 & 5000-10250 & 415 & 16.8 & NOT (ALFOSC) \\ 55738.41 & 25.41 & IJ & 9000-14500 & 333 & ... & TNG (NICS) \\ 55743.40 & 30.40 & Grism 4 & 3200-9100 & 355 & 16.2 & NOT (ALFOSC) \\ 55743.44 & 30.44 & Grism 5 & 5000-10250 & 415 & 16.8 & NOT (ALFOSC) \\ 55743.43 & 30.43 & b200 & 3300-8700 & ... & 12.0 & CA-2.2m (CAFOS) \\ 55743.43 & 30.43 & r200 & 6300-10500 & ... & 12.0 & CA-2.2m (CAFOS) \\ 55748.40 & 35.40 & Grism 4 & 3200-9100 & 355 & 16.2 & NOT (ALFOSC) \\ 55748.41 & 35.41 & Grism 5 & 5000-10250 & 415 & 16.8 & NOT (ALFOSC) \\ 55748.39 & 35.39 & IJ & 9000-14500 & 333 & ... & TNG (NICS) \\ 55748.42 & 35.42 & HK & 14000-25000 & 333 & ... & TNG (NICS) \\ 55753.41 & 40.41 & Grism 4 & 3200-9100 & 355 & 16.2 & NOT (ALFOSC) \\ 55753.43 & 40.43 & Grism 5 & 5000-10250 & 415 & 16.8 & NOT (ALFOSC) \\ 55757.39 & 44.39 & Grism 4 & 3200-9100 & 355 & 16.2 & NOT (ALFOSC) \\ 55757.41 & 44.41 & Grism 5 & 5000-10250 & 415 & 16.8 & NOT (ALFOSC) \\ 55757.41 & 44.41 & gt300 & 3200-7700 & 555 & 9.0 & AS-1.22m (DU440) \\ 55758.39 & 45.39 & IJ & 9000-14500 & 333 & ... & TNG (NICS) \\ 55758.42 & 45.42 & HK & 14000-25000 & 333 & ... & TNG (NICS) \\ 55760.38 & 47.38 & b200 & 3300-8700 & ... & 12.0 & CA-2.2m (CAFOS) \\ 55762.39 & 49.39 & Grism 4 & 3200-9100 & 355 & 16.2 & NOT (ALFOSC) \\ 55762.40 & 49.40 & Grism 5 & 5000-10250 & 415 & 16.8 & NOT (ALFOSC) \\ 55765.40 & 52.40 & Grism 4 & 3200-9100 & 355 & 16.2 & NOT (ALFOSC) \\ 55765.42 & 52.42 & Grism 5 & 5000-10250 & 415 & 16.8 & NOT (ALFOSC) \\ \hline \end{tabular}} \end{center} \label{t_speclog} \end{table*} \setcounter{table}{8} \begin{table*}[p] \caption{Continued.} \begin{center} \scalebox{1.0}{ \begin{tabular}{l l l l l l l} \hline\hline \\ [-1.5ex] JD (+2400000) & Phase & Grism & Range & Resolving power & Resolution & Telescope (Instrument)\\ [0.5ex] (d) & (d) & & (\AA) & & (\AA) &\\ [0.5ex] \hline \\ [-1.5ex] 55765.39 & 52.39 & IJ & 9000-14500 & 333 & ... & TNG (NICS) \\ 55765.42 & 52.42 & HK & 14000-25000 & 333 & ... & TNG (NICS) \\ 55771.41 & 58.41 & b200 & 3300-8700 & ...
& 12.0 & CA-2.2m (CAFOS) \\ 55771.41 & 58.41 & r200 & 6300-10500 & ... & 12.0 & CA-2.2m (CAFOS) \\ 55780.39 & 67.39 & Grism 4 & 3200-9100 & 355 & 16.2 & NOT (ALFOSC) \\ 55780.43 & 67.43 & zJ & 8900-15100 & 700 & ... & WHT (LIRIS) \\ 55780.40 & 67.40 & HK & 14000-23800 & 700 & ... & WHT (LIRIS) \\ 55784.40 & 71.40 & b200 & 3300-8700 & ... & 12.0 & CA-2.2m (CAFOS) \\ 55784.40 & 71.40 & r200 & 6300-10500 & ... & 12.0 & CA-2.2m (CAFOS) \\ 55795.39 & 82.39 & b200 & 3300-8700 & ... & 12.0 & CA-2.2m (CAFOS) \\ 55795.39 & 82.39 & r200 & 6300-10500 & ... & 12.0 & CA-2.2m (CAFOS) \\ 55801.37 & 88.37 & IJ & 9000-14500 & 333 & ... & TNG (NICS) \\ 55801.40 & 88.40 & HK & 14000-25000 & 333 & ... & TNG (NICS) \\ 55802.37 & 89.37 & gt300 & 3200-7700 & 396 & 12.6 & AS-1.22m (DU440) \\ 55804.36 & 91.36 & R300B & 3200-5300 & ... & 4.1 & WHT (ISIS) \\ 55804.36 & 91.36 & R158R & 5300-10000 & ... & 7.7 & WHT (ISIS) \\ 55812.36 & 99.36 & b200 & 3300-8700 & ... & 12.0 & CA-2.2m (CAFOS) \\ 55812.36 & 99.36 & r200 & 6300-10500 & ... & 12.0 & CA-2.2m (CAFOS) \\ \hline \end{tabular}} \end{center} \end{table*} \begin{table*}[p] \caption{Pseudo-bolometric UV to MIR lightcurve for SN 2011dh calculated from spectroscopic and photometric data. Random errors are given in the first parentheses, and the systematic lower and upper errors (arising from the distance and extinction) in the second parentheses.} \begin{center} \include{sn2011dh-UV-MIR-bol-table} \end{center} \label{t_UV_MIR_bol} \end{table*}
\section{Introduction} The recent observations of gravitational wave (GW) signals from compact binary mergers by the LIGO-Virgo collaborations~\cite{LIGOScientific:2016aoc, LIGOScientific:2017bnn,LIGOScientific:2018mvr,TheLIGOScientific:2017qsa,Abbott:2018wiz,Abbott:2020uma,Abbott:2020khf} have greatly advanced our understanding of black holes and compact stars. The detected binary black hole merger events inspired many studies of black hole mimickers termed exotic compact objects (ECOs), whose defining feature is their large compactness: their radius is very close to that of a black hole with the same mass, while they lack an event horizon. While some non-GW probes of ECOs have been studied~\cite{Holdom:2020uhf}, most studies concern the distinctive signatures of gravitational wave echoes in the postmerger signals~\cite{Ignatev:1978ax,Abedi:2018npz, Ferrari:2000sr,Cardoso:2016rao,Cardoso:2016oxy,Cardoso:2017njb,Abedi:2016hgu, Mark:2017dnq,Conklin:2017lwb, Conklin:2019fcs,Conklin:2019smy, Cardoso:2019rvt,Holdom:2016nek, Holdom:2019ouz,Ren:2019afg,Holdom:2019bdv,Abedi:2020sgg}, in which a wave that falls inside the gravitational potential barrier travels to a reflecting boundary before returning to the barrier at the photon sphere after some time delay. Considering the detected binary neutron star merger events, we want to explore the possibility of GW echoes also being a signature of realistic compact stars. Generating GW echoes requires the stellar object to feature a photon sphere at $R_P=3M$, where $M$ is the object's mass. For compact stars, the minimum radius should be above the Buchdahl limit $R_B=(9/4)M$~\cite{Buchdahl:1959zz}. Therefore, GW echo signals are possible if $R_B<R<R_P$. This compactness criterion excludes realistic neutron stars~\cite{Chandrasekhar, Pani:2018flj}. This motivates the exploration of other, more compact, stellar objects such as quark stars composed of quark matter. Bodmer~\cite{Bodmer:1971we}, Witten~\cite{Witten} and Terazawa~\cite{Terazawa:1979hq} proposed that quark matter with comparable numbers of $u, \,d, \,s$ quarks, also called strange quark matter (SQM), might be the ground state of baryonic matter at zero temperature and pressure. A recent study~\cite{Holdom:2017gdc} demonstrated that $u, d$ quark matter ($ud$QM) is, in general, more stable than SQM, and that it can be more stable than ordinary nuclear matter at a sufficiently large baryon number beyond the periodic table. The SQM hypothesis and the $ud$QM hypothesis, as mentioned above, allow the possibility of bare quark stars, such as strange quark stars (SQSs)~\cite{Haensel:1986qb,Alcock:1986hz} that consist of SQM, or up-down quark stars ($ud$QSs)~\cite{Zhang:2019mqb,Wang:2019jze} that consist of $ud$QM. In the context of the recent LIGO events, there have been many studies of the related astrophysical implications of SQSs~\cite{Zhou:2017pha,Burgio:2018yix, Roupas:2020nua, Horvath:2020cjz,Kanakis-Pegios:2020kzp,Harko:2009ysn} and $ud$QSs~\cite{Zhang:2019mqb,Ren:2020tll,Zhang:2020jmb,Zhao:2019xqy,Cao:2020zxi}, many of which involve interacting quark matter (IQM) that includes interquark effects such as the perturbative QCD (pQCD) corrections and color superconductivity. The pQCD corrections are due to the gluon-mediated interaction~\cite{Farhi:1984qu,Fraga:2001id,Fraga:2013qra}. Color superconductivity is superconductivity in quark matter, arising from the condensation of spin-0 Cooper pairs antisymmetric in color-flavor space~\cite{Alford:1998mk,Rajagopal:2000ff,Lugones:2002va}.
This can result in two-flavor color superconductivity, where $u$ quarks pair with $d$ quarks [conventionally termed ``2SC" (``2SC+s") without (with) strange quarks], or in a color-flavor locking (CFL) phase, where $u,d,s$ quarks pair with each other antisymmetrically. In order to achieve a compactness large enough for stars to generate GW echoes, previous studies commonly assumed ad hoc exotic equations of state (EOSs)~\cite{Pani:2018flj,Mannarelli:2018pjb, Bora:2020cly} or a special semiclassical treatment of gravity~\cite{Volkmer:2021zjx}. Here we demonstrate that physically motivated interacting quark stars (IQSs) composed of IQM can have GW echo signatures within the classical Einstein gravity framework. It has been shown~\cite{Zhang:2020jmb} that IQSs can meet the various constraints from observed large pulsar masses~\cite{Demorest:2010bx,Antoniadis:2013pzd, Cromartie:2019kug}, the analysis of the NICER X-ray spectral-timing event data~\cite{Riley:2019yda,Miller:2019cac}, and the recent LIGO events~\cite{TheLIGOScientific:2017qsa,Abbott:2018wiz,Abbott:2020uma,Abbott:2020khf}. Following our previous paper~\cite{Zhang:2020jmb}, we first rewrite the free energy $\Omega$ of the superconducting quark matter~\cite{Alford:2002kj} into a general form with the pQCD correction included: \begin{equation}\begin{aligned} \Omega=&-\frac{\xi_4}{4\pi^2}\mu^4+\frac{\xi_4(1-a_4)}{4\pi^2}\mu^4- \frac{ \xi_{2a} \Delta^2-\xi_{2b} m_s^2}{\pi^2} \mu^2 \\ &-\frac{\mu_{e}^4}{12 \pi^2}+B_{\rm eff} , \label{omega_mu} \end{aligned}\end{equation} where $\mu$ and $\mu_e$ are the respective average quark and electron chemical potentials. The first term represents the unpaired free quark gas contribution. The second term, with $(1-a_4)$, represents the pQCD contribution from one-gluon exchange, to $O(\alpha_s^2)$ order. To phenomenologically account for higher-order contributions, we can vary $a_4$ from $a_4=1$, corresponding to a vanishing pQCD correction, to very small values where these corrections become large~\cite{Fraga:2001id,Alford:2004pf,Weissenborn:2011qu}. The term with $m_s$ accounts for the correction from the finite strange quark mass, if applicable, while the term with the gap parameter $\Delta$ represents the contribution from color superconductivity. The coefficients take the values \begin{align} (\xi_4,\xi_{2a}, \xi_{2b}) = \left\{ \begin{array} {ll} \left(\left[ \left(\frac{1}{3}\right)^{\frac{4}{3}}+ \left(\frac{2}{3}\right)^{\frac{4}{3}}\right]^{-3},1,0\right) & \textrm{2SC phase}\\ (3,1,3/4) & \textrm{2SC+s phase}\\ (3,3,3/4)& \textrm{CFL phase} \\ \end{array} \nonumber \right. \end{align} The corresponding equation of state is~\cite{Zhang:2020jmb}: \begin{equation} p=\frac{1}{3}(\rho-4B_{\rm eff})+ \frac{4\lambda^2}{9\pi^2}\left(-1+\rm sgn(\lambda)\sqrt{1+3\pi^2 \frac{(\rho-B_{\rm eff})}{\lambda^2}}\right), \label{eos_tot} \end{equation} where \begin{equation} \lambda=\frac{\xi_{2a} \Delta^2-\xi_{2b} m_s^2}{\sqrt{\xi_4 a_4}}. \label{lam} \end{equation} Note that $\rm sgn(\lambda)$ denotes the sign of $\lambda$. One can easily see that a larger $\lambda$ (i.e., smaller $a_4$ or $m_s$, or larger $\Delta$), or a smaller $B_{\rm eff}$, leads to a stiffer EOS, which results in a more compact stellar structure that is more likely to have GW echoes. Thus, for this study, we only need to explore the positive $\lambda$ space.
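For completeness, we sketch the intermediate step between Eq.~(\ref{omega_mu}) and Eq.~(\ref{eos_tot}) (standard zero-temperature thermodynamics; the small electron term is neglected here). With $p=-\Omega$, $n=-\partial\Omega/\partial\mu$ and $\rho=\Omega+\mu n$, one finds \begin{equation*} \begin{aligned} p &= \frac{\xi_4 a_4}{4\pi^2}\mu^4+\frac{\xi_{2a} \Delta^2-\xi_{2b} m_s^2}{\pi^2}\mu^2-B_{\rm eff} , \\ \rho &= \frac{3\xi_4 a_4}{4\pi^2}\mu^4+\frac{\xi_{2a} \Delta^2-\xi_{2b} m_s^2}{\pi^2}\mu^2+B_{\rm eff} . \end{aligned} \end{equation*} Eliminating $\mu$ between these two expressions (a quadratic equation in $\sqrt{\xi_4 a_4}\,\mu^2$) then yields Eq.~(\ref{eos_tot}) with $\lambda$ as defined in Eq.~(\ref{lam}).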
As shown in Ref.~\cite{Zhang:2020jmb}, one can further remove the $B_{\rm eff}$ parameter by performing the dimensionless rescaling: \begin{equation} \bar{\rho}=\frac{\rho}{4\,B_{\rm eff}}, \,\, \bar{p}=\frac{p}{4\,B_{\rm eff}}, \,\, \label{scaling_prho} \end{equation} and \begin{equation} \bar{\lambda}=\frac{\lambda^2}{4B_{\rm eff}}= \frac{(\xi_{2a} \Delta^2-\xi_{2b} m_s^2)^2}{4\,B_{\rm eff}\xi_4 a_4}, \label{scaling_lam} \end{equation} so that the EOS Eq.~(\ref{eos_tot}) reduces to the dimensionless form \begin{equation} \bar{p}=\frac{1}{3}(\bar{\rho}-1)+ \frac{4}{9\pi^2}\bar{\lambda} \left(-1+\sqrt{1+\frac{3\pi^2}{\bar{\lambda}} {(\bar{\rho}-\frac{1}{4})}}\right). \label{eos_p} \end{equation} As $\bar{\lambda}\to0$, Eq.~(\ref{eos_p}) reduces to the conventional non-interacting rescaled quark matter EOS $\bar{p}=(\bar{\rho}-1)/3$. When $\bar{\lambda}$ becomes extremely large, Eq.~(\ref{eos_p}) approaches the special form \begin{equation} \bar{p}\vert_{\bar{\lambda}\to \infty}=\bar{\rho}-\frac{1}{2}, \label{eos_infty} \end{equation} or equivalently $p={\rho}-2B_{\rm eff}$ using Eq.~(\ref{scaling_prho}). We see that strong interaction effects can reduce the surface mass density of a quark star from $\rho_0= 4B_{\rm eff}$ down to $\rho_0=2B_{\rm eff}$, and increase the quark matter sound speed $c_s^2=\partial p/\partial \rho$ from $1/3$ up to at most $1$ (the speed of light). \section{GW Echoes from IQS} To study the stellar structure of IQSs, we first rescale the mass and radius into dimensionless form in geometric units ($G=c=1$)\footnote{Note that $B_{\rm eff}$, which is in units of $\rm MeV^4$ or $\rm MeV/fm^3$ in natural units, has dimension $[L^{-2}]$ in geometric units here.} \begin{equation} \bar{m}=m{\sqrt{4\,B_{\rm eff}}}, \quad \bar{r}={r}{\sqrt{4\,B_{\rm eff}}}, \label{scaling_mr} \end{equation} so that the Tolman-Oppenheimer-Volkov (TOV) equation~\cite{Oppenheimer:1939ne,Tolman:1939jz} \begin{eqnarray} \begin{aligned} \frac{d m}{d r}& = 4 \pi \rho r^2\,,\label{eq:dm}\\ \frac{d p}{d r} &= (\rho+p) \frac{m + 4 \pi p r^3}{2 m r -r^2},\,\\ \end{aligned} \label{tov} \end{eqnarray} can be converted into dimensionless form (simply replacing nonbarred symbols with barred ones). Solving the dimensionless TOV equation, we obtain the rescaled $\bar{M}-\bar{R}$ relations shown in Fig.~\ref{rescaledMR}. Note that beyond the maximum mass point (solid dots in Fig.~\ref{rescaledMR}), the object becomes unstable against radial perturbations. It turns out that all $\bar{M}-\bar{R}$ configurations satisfy the Buchdahl limit, and those with $\bar{\lambda}\gtrsim10$ can cross the photon sphere line, satisfying the necessary condition to generate GW echoes. Interestingly, referring to Fig.~3 of Ref.~\cite{Zhang:2020jmb}, this $\bar{\lambda}\gtrsim10$ range is well compatible with the joint constraints set by the GW170817 and GW190814 analyses, assuming the related objects are IQSs. \begin{figure}[h] \centering \includegraphics[width=8cm]{IQS_rescaledMR.pdf} \caption{$\bar{M}$-$\bar{R}$ of IQSs for given $\bar{\lambda}$, sampling $\bar{\lambda}=(0, 0.1, 5, 10, 20, 50, 100)$ from lighter to darker black lines. The red line corresponds to $\bar{\lambda}\to \infty$, with the corresponding EOS Eq.~(\ref{eos_infty}). The solid dots denote the maximum mass configurations for given $\bar{\lambda}$. GW echoes require the star to have $\bar{M}$-$\bar{R}$ configurations above the photon sphere line.
} \label{rescaledMR} \end{figure} From Eq.~(\ref{scaling_lam}), we see that the echo criterion $\bar{\lambda}\gtrsim10$ maps to the constraint on dimensional parameters \begin{equation} (\xi_{2a} \Delta^2-\xi_{2b} m_s^2)^2\gtrsim 40\xi_4 a_4 \,B_{\rm eff}, \label{criteria} \end{equation} which can be satisfied for a large strong interaction effect (i.e., large $\Delta$ or small $a_4$) or a small effective bag constant. Considering that $m_s$ has been constrained to $95\pm 5 \rm \, MeV$~\citep{PDG}, we fix $m_s=(90, 100)\rm \, MeV$ and obtain Fig.~\ref{paraspace} for the CFL phase from Eq.~(\ref{criteria}) for illustration. We can see that the strange quark mass variation over $90-100$ MeV has a negligible effect on the saturation of Eq.~(\ref{criteria}), while a larger pQCD correction (smaller $a_4$) results in a smaller $\Delta$ for a given bag constant to meet Eq.~(\ref{criteria}). \begin{figure}[h] \centering \includegraphics[width=8cm]{paraspace.pdf} \caption{ $B^{1/4}$-$\Delta$ of IQSs in the CFL phase that give $\bar{\lambda}=10$ for $m_s=90\, \rm MeV$ (red solid) and $m_s=100\, \rm MeV$ (black dashed), with $a_4=(1, 0.75, 0.5, 0.25)$ from lighter to darker lines. The region below each line represents the corresponding parameter space that has IQSs compact enough to generate GW echoes.} \label{paraspace} \end{figure} The characteristic echo time is the light travel time from the star center to the photon sphere~\cite{Cardoso:2017njb,Cardoso:2016rao,Cardoso:2016oxy}, \begin{equation} \tau_\text{echo} = \int_0^{3M} \hspace{-.5cm}\frac{dr}{\sqrt{e^{2\Phi (r)}\left(1- \frac{2 m(r)}{r}\right)}}\,, \label{tau_echo} \end{equation} where \begin{equation} \frac{ d \Phi}{dr} =-\frac{1}{\rho+p} \frac{d p}{d r}. \label{Phi} \end{equation} We can also perform the dimensionless rescaling \begin{equation} \quad \bar{\tau}_\text{echo} ={\tau}_\text{echo} {\sqrt{4\,B_{\rm eff}}}, \, \end{equation} so that Eq.~(\ref{Phi}) can also be solved in dimensionless form. After obtaining the echo time, we directly obtain the GW echo frequency from the relation~\cite{Cardoso:2017njb,Cardoso:2016rao,Cardoso:2016oxy} \begin{equation} f_\text{echo}=\frac{\pi}{\tau_\text{echo}}, \label{ftau} \end{equation} and similarly we rescale it into the dimensionless form $\bar{f}_\text{echo}$ via the relation \begin{equation} \bar{f}_\text{echo} =\frac{{f}_\text{echo}}{\sqrt{4\,B_{\rm eff}}}. \label{scaling_f} \end{equation} In Fig.~\ref{rescaled_fpc}, we show the rescaled GW echo frequencies $\bar{f}_\text{echo}$ versus the rescaled central pressure $\bar{p}_c$ for the stellar configurations of Fig.~\ref{rescaledMR} that can generate echoes (i.e., $\bar{\lambda}\gtrsim10$). Note that each curve's left and right ends are truncated at the point where $\bar{R}=3\bar{M}$ and at the point of maximum mass, respectively. The gray dot at $(\bar{f}_\text{echo},\bar{p}_c)\approx(3.05, 0.90)$ denotes the configuration with $\bar{\lambda}=10$, for which the $\bar{R}=3\bar{M}$ point coincides with the maximum mass point. For the lines of different $\bar{\lambda}$, the left end of each curve maps to a similar value $\bar{f}_\text{echo}\sim3.0$ due to the same compactness there ($\bar{R}=3\bar{M}$). As the central pressure increases, $\bar{f}_\text{echo}$ decreases due to the increasing compactness until the maximum mass point is reached. The $\bar{\lambda}\to \infty$ case has the smallest $\bar{f}_\text{echo}$ at its maximum mass configuration since it maps to the largest compactness.
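The following minimal script sketches this procedure numerically (it is not the code used to produce the figures, and the chosen values of $\bar{\lambda}$ and $\bar{p}_c$ are merely illustrative): it integrates the dimensionless TOV equations, Eq.~(\ref{tov}), with the rescaled EOS, Eq.~(\ref{eos_p}), together with Eq.~(\ref{Phi}), and then evaluates Eqs.~(\ref{tau_echo}) and (\ref{ftau}) in rescaled units.
\begin{verbatim}
# Minimal sketch (not the authors' production code): dimensionless TOV
# integration with the rescaled interacting-quark-matter EOS, followed by
# the echo-time integral.  Units: G = c = 1 and 4*B_eff = 1, so all
# quantities are the "barred" ones.  LAM and PC are illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

LAM = 20.0   # lambda-bar, dimensionless interaction strength
PC = 1.0     # central pressure p-bar_c

def p_of_rho(rho):
    """Rescaled EOS p-bar(rho-bar)."""
    return (rho - 1.0) / 3.0 + 4.0 * LAM / (9.0 * np.pi**2) * (
        -1.0 + np.sqrt(1.0 + 3.0 * np.pi**2 * (rho - 0.25) / LAM))

def rho_of_p(p):
    """Invert the EOS numerically (p_of_rho is monotonic in rho)."""
    return brentq(lambda r: p_of_rho(r) - p, 0.25, 1.0e4)

def rhs(r, y):
    """Dimensionless TOV system plus the metric potential Phi."""
    m, p, phi = y
    rho = rho_of_p(p)
    dp = -(rho + p) * (m + 4.0 * np.pi * p * r**3) / (r * (r - 2.0 * m))
    return [4.0 * np.pi * rho * r**2, dp, -dp / (rho + p)]

def hit_surface(r, y):               # stop where the pressure vanishes
    return y[1] - 1.0e-10
hit_surface.terminal = True

r0 = 1.0e-6
y0 = [4.0 / 3.0 * np.pi * rho_of_p(PC) * r0**3, PC, 0.0]
sol = solve_ivp(rhs, [r0, 2.0], y0, events=hit_surface,
                dense_output=True, rtol=1e-8, atol=1e-12)
R = sol.t_events[0][0]
M, _, phi_R = sol.y_events[0][0]
# fix the additive constant of Phi by matching Schwarzschild at r = R
off = 0.5 * np.log(1.0 - 2.0 * M / R) - phi_R

if R < 3.0 * M:                      # photon sphere outside the star
    r_in = np.linspace(r0, R, 4000)
    m_in, _, phi_in = sol.sol(r_in)
    tau = np.trapz(1.0 / np.sqrt(np.exp(2.0 * (phi_in + off))
                                 * (1.0 - 2.0 * m_in / r_in)), r_in)
    r_out = np.linspace(R, 3.0 * M, 4000)   # vacuum: e^{2 Phi} = 1 - 2M/r
    tau += np.trapz(1.0 / (1.0 - 2.0 * M / r_out), r_out)
    print(f"R={R:.3f}, M={M:.3f}, tau={tau:.3f}, f_echo={np.pi/tau:.3f}")
else:
    print(f"R={R:.3f} >= 3M={3.0 * M:.3f}: no photon sphere, no echoes")
\end{verbatim}
Scanning $\bar{p}_c$ in such a script traces out one of the curves of Fig.~\ref{rescaled_fpc}; multiplying the resulting $\bar{f}_\text{echo}$ by $\sqrt{4B_{\rm eff}}$ restores physical units.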
After rescaling back with Eq.~(\ref{scaling_f}), we obtain a simple relation between the minimal echo frequency and the effective bag constant\footnote{Note that for the ad hoc EOS $p=\rho-4B$ used in Ref.~\cite{Mannarelli:2018pjb}, we derived the corresponding scaling relation of $f_\text{echo}$ by simply multiplying the right-hand side of Eq.~(\ref{fB1}) or (\ref{fB2}) by a factor of $\sqrt{2}$. This successfully reproduces their results for their bag constant choices.} \begin{equation} f_\text{echo}^{\rm min}\approx 3.16 {\sqrt{B_{\rm eff}/(\rm 10\, MeV/ fm^{3})}} \,\,\, \rm kHz, \label{fB1} \end{equation} where $B_{\rm eff}$ is in units of $\rm MeV/fm^{3}$, or equivalently \begin{equation} f_\text{echo}^{\rm min}\approx 5.76 {\sqrt{B_{\rm eff}/\text{(100 MeV)}^4}} \,\,\, \rm kHz, \label{fB2} \end{equation} where $B_{\rm eff}$ is in units of $\rm MeV^4$. Thus, we see that the minimal echo frequency is on the order of a few kHz when the effective bag constant is of its conventional order of magnitude $B_{\rm eff} \sim (100 \rm \,MeV)^4$. \begin{figure}[h] \centering \includegraphics[width=8.2cm]{IQS_fpc.pdf} \caption{$\bar{f}_\text{echo}$-$\bar{p}_c$ of IQSs for different $\bar{\lambda}$, sampling $\bar{\lambda}=10$ (gray dot) and $\bar{\lambda}=(20, 50, 100)$ from lighter to darker black lines. The red line denotes the $\bar{\lambda}\to \infty$ case, with the corresponding EOS Eq.~(\ref{eos_infty}). The left and right ends of each line are truncated at the point where $\bar{R}=3\bar{M}$ and at the point of maximum star mass, respectively.} \label{rescaled_fpc} \end{figure} \section{Summary} Interacting quark stars composed of interacting quark matter, which includes interquark effects such as the pQCD corrections and color superconductivity, can have large compactness for large $\bar{\lambda}$, the parameter that characterizes the size of the strong interaction effects in a dimensionless rescaling approach that maximally reduces the number of degrees of freedom. We have shown that interacting quark stars with $\bar{\lambda}\gtrsim 10$ can meet the compactness condition for generating GW echoes, i.e., featuring a photon sphere while respecting the Buchdahl limit. Taking the CFL phase for illustration, after rescaling the results back into dimensional form, we explicitly constructed the corresponding dimensional parameter space of $B_{\rm eff}$ and $\Delta$ with variations of $a_4$ and $m_s$ in their empirical ranges. Furthermore, we showed that a smaller echo frequency is achieved for a larger central pressure and a larger $\bar{\lambda}$, from which we obtained a general scaling relation for the minimal echo frequency, $f_\text{echo}^{\rm min}\approx 5.76 {\sqrt{B_{\rm eff}/\text{(100 MeV)}^4}} \,\,\, \rm kHz$. Therefore, the echo frequencies for IQSs are on the order of a few kilohertz when the effective bag constant is of its conventional order $B_{\rm eff} \sim (100 \rm \,MeV)^4$. This study opens up the possibility of gravitational wave echoes being generated by physical compact stars within the conventional Einstein gravity framework. \begin{acknowledgments} We thank Bob Holdom for helpful discussions. This research is supported in part by the Natural Sciences and Engineering Research Council of Canada. \end{acknowledgments}
\section{Introduction\label{intro}} The measurement of the Galactic rotation curve provides a powerful tool for constraining the mass distribution in the Milky Way and enters various branches of Galactic kinematics as an essential ingredient. The measurement of the rotation curve inside the solar orbit at $R_0$ can be done without even knowing the distances to the tracers. Spectroscopic observations of HI regions and molecular clouds emitting in the radio range yield their line-of-sight velocities, which under the assumption of the circularity of orbits can be converted to circular velocities purely geometrically. Though this technique, known as the tangent point method (TPM, \mbox{\citealp{binney}}), provides a quite accurate measurement of the inner rotation curve, it loses its applicability (1) at small Galactocentric distances $R \, < \, 5$ kpc, where the bar starts to dominate (however, see \mbox{\citealp{wegg15}} for a recent long bar model) and therefore the orbits can significantly deviate from circular ones (\citealp{sofue}), and (2) in the outer disc for $R \, > \, R_0$, where the distances to the tracers cannot be derived from simple geometry. And even in the 'good' range of Galactocentric distances $5\,\mathrm{kpc}<R<R_0$, the reliability of the TPM may be questionable, as was shown by recent studies which consider the spiral structure in galactic discs \mbox{\citep{chemin15,chemin16}}. In order to probe the outer rotation curve one is obliged to determine distances to the tracers. The distance uncertainties, together with the decrease of the tracers' density with increasing $R$, explain why the outer rotation curve is known less confidently than its inner part. However, very long baseline interferometry (VLBI) provides high-accuracy measurements of parallaxes and proper motions of young star-forming regions and covers a broad range of Galactocentric distances, giving a strong constraint on the shape of the rotation curve \citep{reid14}. Still, due to the variety of techniques, the difficulties in obtaining accurate 6D dynamical information on coordinates and velocities, and the change in methodology at $R=R_0$ if one relies on the TPM inside the solar radius, the rotation curves determined by different authors are not in perfect agreement with each other \citep{bhawthorn}. \mbox{\citet{sofue}} presented a comprehensive analysis of previous measurements of the outer rotation curve, including data for HII regions and C stars, as well as points obtained by the HI-disc thickness method and VLBI observations. The authors claimed a dip in the rotation curve between 7 and 11 kpc from the Galactic centre, which they attribute to the presence of a ring of stellar overdensity in the Galactic disc influencing the gravitational potential. The dip is centred at 9 kpc, where the circular velocity drops by \mbox{$\sim$15 km s$^{-1}$}. \mbox{\citet{huang16}} also found a similar depression in the rotation curve obtained from stars of the Sloan Digital Sky Survey III's Apache Point Observatory Galactic Evolution Experiment (SDSS/APOGEE, \mbox{\citealp{eisenstein11}}) and the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) Spectroscopic Survey of the Galactic Anti-centre (LSS-GAC, \mbox{\citealp{liu14}}). \mbox{\citet{kafle12}} used blue horizontal branch stars to construct a rotation curve, which also has a dip, although at larger radii, about 11 kpc.
In contrast to these results, \mbox{\citet{lopez14}}, who studied proper motions of disc red clump giants, obtained a flat rotation curve without any dip, although the errors still remain substantial. Having analysed a sample of APOGEE stars covering the range of Galactocentric distances 4-14 kpc, \mbox{\citet{bovy12a}} arrived at the same conclusion: the authors found that the rotation curve is approximately flat. Another flat rotation curve was obtained by \mbox{\citet{reid14}} by studying high-mass star-forming regions. These apparently contradictory results show that a more detailed study of the local Galactic rotation curve is strongly warranted. Now, with the abundant data for distances and velocities of millions of stars from photometric and spectroscopic surveys of the last decade, the situation is more encouraging. However, as we show in this paper, the solution of this task is not straightforward: while deriving the mean rotation velocity from the kinematic data, one has to carefully account for the asymmetric drift correction, which itself requires some knowledge or plausible assumptions about the Galactic potential. Because of this inter-dependence between the input and output we have to approach the problem of the rotation curve reconstruction in an iterative and consistent way. This paper is organized as follows. Section \ref{samples} describes the data samples and the selection criteria. Section \ref{Jeans} contains the basics of our analytic approach in its most general form. Then we apply it in two consecutive steps. First, we investigate the peculiar motion of the Sun with the local data from the RAdial Velocity Experiment (RAVE, \mbox{\citealp{steinmetz06}}). This analysis is presented in \mbox{Section \ref{RE}}. In the second step, in \mbox{Section \ref{RC}}, we construct the rotation curve of the extended solar neighbourhood using the sample of G-dwarfs from the Sloan Extension for Galactic Understanding and Exploration (SEGUE, \citealp{yanny09}). In Section \ref{conclusions} we summarize our findings, discuss dependencies on the assumed constants and parameters, and conclude. Our treatment of the tilt term of the velocity ellipsoid is discussed in Appendix \ref{tracer}. \section{Data samples\label{samples}} \begin{figure*} \centerline{\resizebox{\hsize}{!}{\includegraphics{cmds_h.pdf}}} \caption{The colour-magnitude diagrams of the whole RAVE and SEGUE G-dwarf data from 2MASS and SDSS photometry, respectively. The applied cuts are shown as black lines.} \label{cmds} \end{figure*} \begin{figure} \centerline{\resizebox{\hsize}{!}{\includegraphics{Rz_map.pdf}}} \centerline{\resizebox{\hsize}{!}{\includegraphics{Vphi_rVphi_pm.pdf}}} \caption{The spatial distribution of the RAVE and SEGUE samples. The top panel shows Galactic cylindrical coordinates $R$ and $z$ of the final SEGUE (both thin and thick disc stars) and RAVE data samples (only thin disc stars). The position of the Sun at Galactocentric distance \mbox{$R_0=8$ kpc} and height \mbox{$z=0$ kpc} is marked with a yellow circle. The bottom panel shows the whole SEGUE sample projected on the Galactic plane. The value of $\phi=0^\circ$ corresponds to the Sun-Galactic centre axis. } \label{samples-plot} \end{figure} Our analysis is performed in two steps.
Firstly, we re-analyse the determination of the peculiar motion of the Sun and the radial scalelengths of the three thin disc populations in the framework of the approach of \mbox{\citet{golubov13}}, but based on the most recent data release of RAVE (DR5, \mbox{\citealp{kunder17}}) and a more careful treatment of the asymmetric drift correction (see \mbox{Section \ref{RE}}). RAVE is a kinematically unbiased spectroscopic survey with medium resolution (R$\sim$7500), which provides line-of-sight velocities, stellar parameters, and element abundances for more than 520~000 stars. In RAVE DR5 improved stellar parameters and abundances were published, and \mbox{\citet{McMillan17}} derived new distances taking into account the parallaxes of the Tycho-Gaia Astrometric Solution (TGAS) from the first Gaia data release (Gaia DR1, \mbox{\citealp{gaia16}}). Together with the proper motions from UCAC5 (The fifth US Naval Observatory CCD Astrograph Catalog, \mbox{\citealp{zacharias17}}) provided for most of the RAVE DR5 stars, this sample constitutes the basis of the most reliable local kinematic dataset to date. From this sample we select stars at Galactic latitudes $|b|>20^{\circ}$ (to avoid the necessity to consider extinction), belonging to the stripe with $0.75<K-4(J-K)<2.75$ on the colour-magnitude diagram (CMD) constructed with 2 Micron All Sky Survey photometry (2MASS, \citealp{skrutskie06}) (to select the main sequence and have a more uniform population of stars; \mbox{Figure \ref{cmds}}, left panel), with a signal-to-noise ratio S/N$\geq$30, relative distance errors $\delta d/d<0.5$, errors in proper motions $\delta\mu<10$ mas yr$^{-1}$ and errors in line-of-sight velocities $\delta V_{los}<3$ km s$^{-1}$. To select a cleaner thin disc sample we also use RAVE abundances and take only stars with [Mg/Fe]$<$0.2 \mbox{\citep{wojno16}}. Our final RAVE sample contains 23~478 stars. Being very local (Figure~\ref{samples-plot}, top panel), RAVE data cannot be directly employed for the reconstruction of the rotation curve, but rather are used for the purpose of investigating the solar peculiar motion. In the second step we use a sample of G-dwarfs from SEGUE, a low-resolution (R$\sim$2000) spectroscopic sub-survey of SDSS. The sample contains 40~496 G-dwarfs with photometric parallaxes measured by \mbox{\citet{lee11}} with better than 10\% accuracy for individual stars. For our analysis we select stars with signal-to-noise ratio S/N$\geq$30 (to ensure good accuracy of the spectroscopic data), colour index $0.48\leq g-r\leq 0.55$ (to have a more uniform sample; most stars from the initial sample belong to this colour range anyway), and absolute magnitude $M_g>4.5$\,mag (which, when combined with the colour cut, neatly selects the main sequence; Figure \ref{cmds}, right panel). We separate the sample into the thin disc with [$\alpha$/Fe]$<$0.25, $|z|<1.5$ kpc and the thick disc with [$\alpha$/Fe]$>$0.25, $|z|<2$ kpc and [Fe/H]$>$-1.2 (to decrease contamination by halo stars). After applying all the selection criteria, 10~700 stars remain in the thin disc sample, and 7~040 stars in the thick disc sample. SEGUE data extend over the range of Galactocentric distances of 7-10 kpc (Figure~\ref{samples-plot}), which makes them suitable for the local rotation curve analysis. The tangential velocities $\upsilon_{\phi}$ of the SEGUE stars rely on measurements of line-of-sight velocities and proper motions with distances to individual stars.
The tangential velocities $\upsilon_{\phi}$ of the SEGUE stars are derived from measurements of line-of-sight velocities and proper motions, combined with distances to individual stars. Radial velocities and proper motions are obtained by different observational techniques, so they have different accuracies (about 3 km s$^{-1}$ for line-of-sight velocities and 3--4 mas yr$^{-1}$ for proper motions, which, using $v_t = 4.74\,\mu\,d$ km s$^{-1}$ for $\mu$ in mas yr$^{-1}$ and $d$ in kpc, converts at a distance of 2 kpc from the Sun into a $\sim$\mbox{30--40 km s$^{-1}$} error) and might have essentially different systematics. The relative contributions of the line-of-sight velocity $\upsilon_{\phi,los}$ and proper motion $\upsilon_{\phi,pm}$ terms to the resulting tangential velocity $\upsilon_{\phi}$ change with Galactocentric azimuth, so it is important to check that our data, when binned in Galactocentric distance, are not entirely dominated by one of the terms. The bottom panel of Figure \ref{samples-plot} shows the SEGUE sample (both thin and thick disc stars selected with our criteria) projected onto the Galactic plane. We calculate and compare the contributions $\upsilon_{\phi,los}$ and $\upsilon_{\phi,pm}$ measured relative to the solar Galactocentric velocity $v_\odot$. The contribution of the $\upsilon_{\phi,los}$ term is negligible on the Sun--Galactic centre axis, and in other regions the values of the ratio reflect an interplay between the direction and the speed of stellar motions. Importantly, all azimuths are represented equally well in our sample, so in the useful range of Galactocentric distances of 7--10 kpc we expect no bias in $\upsilon_{\phi}$ with respect to the observational techniques. It is also important to note here that the RAVE data are expected to be kinematically unbiased by construction (see \mbox{\citealp{wojno17}}), so we do not need to worry about selection effects while working with them. As for the SEGUE \mbox{G-dwarfs}, simple selection criteria were used to construct this survey \mbox{\citep{yanny09}}, so we expect these data to be relatively free of kinematic bias as well, and therefore we do not include a correction for selection effects for SEGUE either. As a precaution, however, before using the SEGUE data for the rotation curve reconstruction, we check our RAVE and SEGUE samples for consistency in terms of kinematics in order to justify our approach (see Section \ref{RE}). \section{Jeans analysis\label{Jeans}} In this section we discuss the properties of the asymmetric drift when applied to a large volume in Galactocentric distance $R$ and height above the Galactic midplane $z$. The equation that we formulate here is the basis of our work. The asymmetric drift is defined as the lag in tangential speed of a tracer population with respect to the rotation curve, $V_a = v_c - \overline{v_{\phi}}$. The exact value of this quantity depends on the properties of the stellar population and varies with position in the Galaxy. This implies that in order to convert mean tangential velocities $\overline{v_{\phi}}$ into the rotation curve, we need to correct them for $V_a$. To quantify the asymmetric drift we use the same notation as in \mbox{\cite{golubov13}} and start with the radial Jeans equation for a stationary and axisymmetric system (in cylindrical coordinates and with negligible mean radial and vertical motion): \begin{eqnarray} v_c^2 - \overline{v_{\phi}}^2 &=& -R \left ( \sigma_{R}^2 \frac{\partial \ln(\nu \sigma_R^2)}{\partial R} + \frac{\sigma_R^2 -\sigma_{\phi}^2}{R} \right.\nonumber\\ &&\qquad\left. + \sigma_{Rz}^2 \frac{\partial \ln(\nu \sigma_{Rz}^2)}{\partial z} + F(R,z) \right ).
\label{JE1} \end{eqnarray} Here $\nu$ and $\sigma^2_{R,z,\phi,Rz}$ are the density and the velocity ellipsoid components of the tracer population, $\overline{v_{\phi}}$ is its mean tangential speed, $v_c$ is the circular speed defined in the midplane, and $F(R,z)$ measures the vertical variation of the radial force, i.e., \begin{equation} F(R,z) \equiv \left.\frac{\partial \Phi}{\partial R} \right|_z - \left.\frac{\partial \Phi}{\partial R} \right|_0 \quad;\quad v_c^2 = R \left.\frac{\partial \Phi}{\partial R} \right|_{0}, \label{ver_def} \end{equation} where $\Phi$ is the total gravitational potential. The right-hand side of Eq. \ref{JE1} is the measure of the asymmetric drift we are interested in. The last term vanishes at $z=0$, but it should not be neglected at $z\neq0$. Indeed, as the radial gradient of the Galactic gravitational potential decreases with increasing height above the midplane, so does the measured tangential speed. This effect, if not taken into account, can result in two biases. Firstly, the derived circular velocity could be underestimated by a few kilometres per second, causing a shift of the rotation curve as a whole. Secondly, the typical distance of the stars from the midplane varies with Galactocentric radius due to the sample geometry (see the SEGUE sample in Figure \ref{samples-plot}). This can cause a distortion of the rotation curve, which is unacceptable, as the robust reconstruction of the shape of the local rotation curve is the very aim of this work. The second-to-last term is the well-known tilt term, which in general does not vanish even in the midplane (see Eq. \ref{tilt-term} below). Since these vertical correction terms lead to a systematic variation of the asymmetric drift increasing with $|z|$, we take them into account throughout this work, even for the relatively local RAVE sample with its range of useful distances limited to $\pm$0.5 kpc from the Sun (\mbox{Figure \ref{samples-plot}}, top panel). The details of the derivation and the assumptions made are provided in \mbox{Appendix \ref{tracer}}. Here we explain only the essential part of our treatment of \mbox{Eq. \ref{JE1}}. \begin{table}[b] \centering \begin{threeparttable} \caption{The properties of the Galaxy model used in the calculations.} \begin{tabular}{l|l|l} \hline Variable & Value & Source \\ \hline $\rho_{h\odot}$ ($M_\odot$ pc$^{-3}$) & 0.014 & (1) \\ $a_{h}$ (kpc) & 25 & \ -- \\ \hline $M_b$ ($M_\odot$) & $1.1 \times 10^{10}$ & \ -- \\ \hline $\Sigma_d$ ($M_\odot$ pc$^{-2}$) & 30 & (1) \\ $h_d$ (pc) & 300 & (1) \\ $R_d$ (kpc)& 2.5 & (2) \\ \hline $\Sigma_t$ ($M_\odot$ pc$^{-2}$) & 6 & (3) \\ $h_t$ (pc) & 800 & (3) \\ $R_t$ (kpc) & 1.8 & (4) \\ \hline $\Sigma_g$ ($M_\odot$ pc$^{-2}$) & 10 & (1) \\ $h_g$ (pc) & 100 & (1) \\ $R_g$ (kpc) & 4.5 & (5) \\ $R_{cut,g}$ (kpc) & 4.0 & \ -- \\ \hline \end{tabular} \begin{tablenotes} \item References. (1) \citealp{just10}; (2) \citealp{golubov13}; (3) \citealp{just11}; (4) \citealp{cheng12}; (5) \citealp{robin3}. \end{tablenotes} \end{threeparttable} \label{tab-var} \end{table} The rotation curve, which we wish to determine, depends on the same Galactic potential. Therefore, we need to check carefully that we are not implicitly biasing the result by adopting a specific model for the potential, which enters \mbox{Eq. \ref{JE1}} through the vertical gradients of the tracer density and the radial force.
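To make the behaviour of $F(R,z)$ concrete, the following minimal sketch evaluates Eq. \ref{ver_def} by central finite differences for a toy Miyamoto--Nagai disc plus NFW halo potential. The toy model and all its parameter values are purely illustrative stand-ins for the five-component model specified below:
\begin{verbatim}
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def phi_toy(R, z):
    # Toy potential: Miyamoto-Nagai disc + NFW halo (illustrative values).
    M_d, a, b = 6.0e10, 3.0, 0.3                  # Msun, kpc, kpc
    phi_d = -G * M_d / np.sqrt(R**2 + (a + np.sqrt(z**2 + b**2))**2)
    rho_s, r_s = 1.0e7, 16.0                      # Msun/kpc^3, kpc
    r = np.sqrt(R**2 + z**2)
    phi_h = -4.0 * np.pi * G * rho_s * r_s**3 * np.log(1.0 + r / r_s) / r
    return phi_d + phi_h

def dphi_dR(R, z, h=1.0e-4):
    # Radial derivative of the potential via central differences.
    return (phi_toy(R + h, z) - phi_toy(R - h, z)) / (2.0 * h)

def F(R, z):
    # Vertical variation of the radial force, Eq. (ver_def).
    return dphi_dR(R, z) - dphi_dR(R, 0.0)

# F vanishes in the midplane and is negative away from it:
print(F(8.0, 0.0), F(8.0, 1.0))
\end{verbatim}
In the actual analysis this quantity is computed from the full model of Table \ref{tab-var} with the $GalPot$ code introduced below.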
We use a five-component model of the Galaxy with a spherical Navarro-Frenk-White (NFW) Dark Matter (DM) halo \mbox{\citep{nfw}} with local density $\rho_{h\odot}$ and power-law slope $\gamma_h=-1$, a bulge component with mass $M_b$, and three exponential discs -- the gas, the thin disc and the thick disc -- with local surface densities $\Sigma_i$, radial scalelengths $R_i$ and vertical scaleheights $h_i$. The gaseous disc has an inner hole with a radius of $R_{cut,g}=4$ kpc. The values of all adopted parameters are given in \mbox{Table \ref{tab-var}}. With this Galactic model we calculate the exact value of the vertical variation of the radial force using the $GalPot$ code\footnote{Developed by P. McMillan and available at \newline \url{https://github.com/PaulMcMillan-Astro/GalPot}} (a stand-alone version of Walter Dehnen's GalaxyPotential C++ code, \mbox{\citealp{dehnen98a}}). In the tilt term in Eq. \ref{JE1} we parametrize $\sigma_{Rz}^2$ as \begin{equation} \sigma_{Rz}^2 = \eta (\sigma_R^2 - \sigma_z^2) z/R, \label{sig_rz} \end{equation} where the parameter $\eta$ describes the orientation of the velocity ellipsoid relative to the direction of the Galactic centre. The logarithmic derivative of $\eta\nu (\sigma_R^2-\sigma_z^2)$ can be further parametrized by a characteristic scaleheight $h_{\nu \sigma}$ (which describes the vertical variation of the tracer density and velocity ellipsoid orientation and is in general a function of $R$ and $z$), leading to \begin{equation} \sigma_{Rz}^2 \frac{\partial \ln (\nu \sigma_{Rz}^2)}{\partial z} = \eta \frac{\sigma_R^2 - \sigma_z^2}{R}\left[ 1 - \frac{z}{h_{\nu \sigma}}\right]. \label{tilt-term} \end{equation} The first term in Eq. \ref{JE1} can be characterized by a radial scalelength $R_E$ (which again may depend on $R$ and $z$), so it reads \begin{equation} \sigma_R^2 \frac{\partial \ln(\nu \sigma_R^2)}{\partial R} = - \frac{\sigma_R^2}{R_E}. \label{term5} \end{equation} At this point we remark that, under the assumption of a constant disc thickness and a constant shape of the velocity ellipsoid $\sigma_z^2/\sigma_R^2$, which implies $\nu \propto \sigma_z^2 \propto \sigma_R^2$, the scalelength $R_E$ is related to the radial scalelength of the tracer density $\nu$ through \mbox{$R_\nu = 2\,R_E = const.$} We will use this assumption later in order to convert the measured $R_E$ into the radial scalelengths of the subpopulations (see Section \ref{RE}). Finally, we can write the Jeans equation (Eq. \ref{JE1}) as \begin{eqnarray} v_c^2 - \overline{v_{\phi}}^2 &=& \sigma_R^2\left( \frac{R}{R_E}-1\right) +\sigma_{\phi}^2 \nonumber\\ && -R\, F(R,z) - \eta \left(\sigma_R^2 - \sigma_z^2\right)\left( 1 - \frac{z}{h_{\nu \sigma}}\right). \label{JE2} \end{eqnarray} This equation is still equivalent to Eq. \ref{JE1}; generality is lost only once we specify the parameters/functions $F(R,z)$, $\eta$, $h_{\nu \sigma}$ and $R_E$ (\mbox{e.g. $R_E$} independent of $R,z$). \section{Solar motion and radial scalelengths\label{RE}} Before using the mean measured relative velocities of the tracer stars to find the mean tangential velocities $\overline{v_{\phi}}$ and correct them for the asymmetric drift, we need to correct for the peculiar motion of the Sun $(U,V,W)_\odot$, i.e., to convert the velocities to the local standard of rest. The determination of $U_{\odot}$ and $W_{\odot}$ from the stellar kinematics in the solar neighbourhood is straightforward.
For the radial and vertical components of the solar motion we adopt the values from \mbox{\citet{schoenrich10}}, \mbox{$U_\odot=11.1$ km s$^{-1}$} and \mbox{$W_\odot=7.25$ km s$^{-1}$}, which are also consistent with our data sets. In contrast, the determination of the tangential component $V_{\odot}$ is challenging. Because the observed tangential motion of stellar populations is affected both by the solar peculiar velocity and by the asymmetric drift, disentangling the two poses a problem. The classical value based on \textit{Hipparcos} data is $V_\odot=5.2$ km s$^{-1}$ \mbox{\citep{dehnen98}}. More recent and widely used values lie approximately between 10 and 12 km s$^{-1}$ (e.g., $12.24 \pm 0.47$ km s$^{-1}$ in \mbox{\citealp{schoenrich10}}). However, among recent estimates there are also values as high as $\sim$24 km s$^{-1}$ \mbox{\citep{bovy12}}, indicating that the local stellar motions could also be influenced by non-axisymmetric Galactic features like spiral arms \citep{siebert12,monari16} and a bar \citep{dehnen00,monari17,perez-villegas17}. At the lower limit there is $V_{\odot}=3.06 \pm 0.68$ km s$^{-1}$, found by us previously in \mbox{\citet{golubov13}}. Rather than taking an old value of $V_{\odot}$ together with the radial scalelengths for the three metallicity populations, we re-determine them here in order to quantify the impact of our improved analysis in combination with the new distances and improved proper motions. To find the tangential component $V_{\odot}$ we apply to the local RAVE data the new Str\"omberg relation as in \mbox{\citet{golubov13}}, but now including the vertical correction terms discussed in the previous section. We assume a Galactocentric distance of the Sun of \mbox{$R_0=8$ kpc}, which is consistent with \mbox{\citet{reid93}} as well as with the more recent study by \mbox{\citet{gillessen09}}. Then, adopting the proper motion of Sgr A* from \mbox{\citet{reid05}}, we get the solar Galactocentric velocity \mbox{$v_\odot=241.6 \pm 15$ km s$^{-1}$}. Applying Eq. \ref{JE2} at $R=R_0$ and using \mbox{$v_0 := v_c(R_0) = v_{\odot} - V_{\odot} $} and \mbox{$\Delta V = v_\odot - \overline{v_{\phi}}$}, we write for the left-hand side: \begin{eqnarray} v_0^2 - \overline{v_{\phi}}^2 = (v_{\odot} - V_{\odot})^2 - (v_{\odot}- \Delta V)^2 \nonumber \\ = - \Delta V^2 + 2 v_{\odot}\Delta V - 2 v_{\odot}V_{\odot} + V_{\odot}^2. \label{lhs2} \end{eqnarray} This leads to the new version of the Str\"{o}mberg relation: \begin{equation} V^{\prime} = V_{\odot} - \frac{V_{\odot}^2}{2 v_{\odot}} + \frac{\sigma_R^2}{k^{\prime}} \quad\mbox{with}\quad k^{\prime} = \frac{2 R_E}{R_0}v_{\odot} \label{stromberg} \end{equation} and the generalized version of $V^{\prime}$: \begin{eqnarray} && V^{\prime}:=\Delta V + \label{Vprime}\\ && \frac{\sigma_R^2 -\sigma_{\phi}^2 +R_0 F(R_0,z) + \eta (\sigma_R^2 - \sigma_z^2)\left[ 1 - \frac{z}{h_{\nu\sigma}}\right] -\Delta V^2}{2 v_{\odot}}.\nonumber \end{eqnarray} \begin{figure} \centerline{\resizebox{1.\hsize}{!}{\includegraphics{vel_J-K.pdf}}} \centerline{\resizebox{1.\hsize}{!}{\includegraphics{AD.pdf}}} \caption{The recalculation and consistency check of the asymmetric drift correction for the RAVE and local SEGUE data samples. The RAVE data are recalculated using the improved distances from \mbox{\citet{McMillan17}} and UCAC5 proper motions. The top panel shows $\sigma_\mathrm{R}$ and $\Delta V$ as functions of the (J-K) 2MASS colour for different metallicity bins (compare to Figure 2 in \mbox{\citealp{golubov13}}).
The bottom panel shows $V'$ as a function of $\sigma_\mathrm{R}^2$ according to the new Str\"omberg relation (Eqs. \ref{stromberg} and \ref{Vprime}) for each metallicity bin. The data points for the local G-dwarfs of SEGUE are added as full circles. The metallicity binning for the RAVE and SEGUE data is identical and has the same colour coding. Only stars with $7.5\,\mathrm{kpc}<R<8.5\,\mathrm{kpc}$, and in the case of SEGUE with $|z|<1.5$ kpc, are selected for the plot. Dashed colour-coded lines are the linear least-squares fits to the RAVE data. Black solid lines are added to read out the radial scalelengths corresponding to the positions of the SEGUE points and the value of $V_\odot$ determined from RAVE. Here and in what follows the error bars are calculated from the observational errors. } \label{stromberg-plot} \end{figure} To make practical use of Eqs. \ref{stromberg} and \ref{Vprime}, i.e., to determine the values of $V_\odot$ and $R_E$, we need to bin our data sample into sub-samples with different kinematics. For this purpose we separate the RAVE sample into three subpopulations with different metallicities and then bin each subpopulation in (J-K) colour. The top panel of Figure \ref{stromberg-plot} shows such a binning of the squared radial velocity dispersion $\sigma_R^2$ and of the measured velocity $\Delta V$, both of which can be calculated straightforwardly, without knowledge of $V_\odot$. One can clearly see that the kinematic properties change systematically with both metallicity and colour. Using the same binning we plot $V^{\prime}$ versus $\sigma_R^2$ (Figure \ref{stromberg-plot}, bottom panel). For simplicity, we assume here that $\eta=const.=0.8$, as derived in \mbox{\citet{binney14}} from the RAVE data. We also treat $h_{\nu\sigma}$ as independent of colour, though differing between metallicity bins, and select appropriate values from the disc model of \mbox{\citet{just10}} (see Appendix \ref{tracer}). The height above the midplane $z$ in Eq. \ref{Vprime} is calculated for each metallicity-colour bin as the mean of the absolute $z$ of the individual stars. As in \mbox{\citet{golubov13}} we still see the systematic difference of $V'(\sigma_{R}^2)$ between metallicity bins, but the linear dependences show a larger scatter. The assumption that $R_E$, like $h_{\nu\sigma}$, is approximately the same for all colours inside a given metallicity bin, but differs with metallicity, allows us to derive $R_E$ values for the three metallicity subpopulations as well as $V_{\odot}$. To do so we perform a simultaneous linear fit to the metallicity-colour sequences (shown with colour-coded dashed lines in Figure \ref{stromberg-plot}, bottom panel); a schematic implementation of this fit is sketched below. The inverse slopes of the fitting lines $k^\prime_i$ can be directly converted into $R_{E,i}$, which, under the assumption of constancy of the disc thickness and of the shape of the velocity ellipsoid (see Section \ref{Jeans}), gives us the radial scalelengths of the selected populations. The solar peculiar motion can be read out from $V^{\prime}$ extrapolated to zero radial velocity dispersion: \begin{eqnarray} R_{d,i} &=& \frac{R_0 k^\prime_i}{v_{\odot}}\\ \nonumber V_{\odot} &=& v_{\odot} - \sqrt{v_{\odot}^2 - 2 v_{\odot} \left. V^{\prime}\right|_{\sigma_R^2=0}} \, . \end{eqnarray} The updated value of the tangential component of the solar peculiar motion is found to be $V_\odot=4.47\pm 0.8$ km s$^{-1}$, which translates into the local circular velocity $v_0 \approx 237 \pm 16$ \mbox{km s$^{-1}$}.
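The simultaneous fit behind these numbers is a small linear least-squares problem: straight lines in the $(\sigma_R^2, V')$ plane sharing a common intercept $\left. V'\right|_{\sigma_R^2=0}$, with one slope $1/k'_i$ per metallicity bin. A minimal sketch, with synthetic numbers standing in for the binned RAVE measurements, is:
\begin{verbatim}
import numpy as np

# Synthetic (sigma_R^2, V') points, one pair of arrays per metallicity
# bin (illustrative values only):
sig2 = [np.array([300., 600., 900.]),
        np.array([400., 800., 1200.]),
        np.array([500., 1000., 1500.])]
Vp   = [np.array([9.0, 13.5, 18.0]),
        np.array([9.5, 14.0, 18.5]),
        np.array([10.0, 15.0, 20.0])]

# Design matrix: shared intercept c = V'(sigma_R^2 = 0), slopes 1/k'_i.
nbins = len(sig2)
rows = []
for i, s2 in enumerate(sig2):
    block = np.zeros((len(s2), 1 + nbins))
    block[:, 0] = 1.0            # common intercept column
    block[:, 1 + i] = s2         # slope column active only for bin i
    rows.append(block)
A = np.vstack(rows)
y = np.concatenate(Vp)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

v_sun, R0 = 241.6, 8.0           # adopted solar values [km/s, kpc]
c, slopes = coef[0], coef[1:]
kprime = 1.0 / slopes
R_d = R0 * kprime / v_sun                            # R_d,i = R_0 k'_i / v_sun
V_pec = v_sun - np.sqrt(v_sun**2 - 2.0 * v_sun * c)  # solar V component
print(R_d, V_pec)
\end{verbatim}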
The corresponding radial scalelengths for the metallicity bins are $R_d$(0$<$[Fe/H]$<$0.2$)=2.07\pm0.2$ kpc, $R_d$(-0.2$<$[Fe/H]$<$0$)=2.28\pm0.26$ kpc and $R_d$(-0.5$<$[Fe/H]$<$-0.2$)=3.05\pm0.43$ kpc, which, together with $V_\odot$, is in agreement with our old values from \mbox{\citet{golubov13}}. Now we must check whether the SEGUE stars follow the same trend. For this purpose we split the thin disc SEGUE sample into the same metallicity bins and consider only stars with Galactocentric distances $7.5<R<8.5$ kpc. A subdivision in colour is not possible for this data set because of its narrow colour range. Another characteristic of this sample is that the G-dwarfs are distributed over a significantly larger range of $|z|$ (Figure \ref{samples-plot}, top). And though we do not reconstruct here the shape and orientation of the velocity ellipsoid as functions of $R$ and $z$, we may roughly account for the vertical gradients of $\sigma_R^2$ and $\sigma_z^2$. To do so we apply Eq. \ref{Vprime} for the calculation of $V'$ not to the whole metallicity bin, but separately in vertical sub-bins ($|z| = 0\ldots1.5$ kpc with a step of 0.5 kpc), and calculate the resulting $V'$ as a weighted mean of the values obtained at different $|z|$. The SEGUE points (filled circles in Figure \ref{stromberg-plot}, bottom panel) demonstrate good consistency with the RAVE data, which means that for the further analysis of the SEGUE thin disc sample we can safely use the values of the scalelengths derived from RAVE. This is an important result, given all the uncertainties of the metallicity calibration in RAVE and possible velocity biases in SEGUE. To be even more precise, we can invert the problem and read out the scalelengths for the individual points, adopting the solar peculiar velocity derived with the RAVE data. The values of $R_d$ calculated for the SEGUE data in this way (see the three solid black lines in Figure \ref{stromberg-plot}, bottom panel) are: $R_d$(0$<$[Fe/H]$<$0.2$)=1.91\pm0.23$ kpc, $R_d$(-0.2$<$[Fe/H]$<$0$)=2.51\pm0.25$ kpc and $R_d$(-0.5$<$[Fe/H]$<$-0.2$)=3.55\pm0.42$ kpc. We use these new values and $V_\odot=4.47\pm 0.8$ km s$^{-1}$ in our further analysis. The linearity of the asymmetric drift correction, i.e., the constancy of $R_d$ versus $\sigma_R^2$, is still under debate. Nevertheless, even if the dependence of the asymmetric drift on $\sigma_R^2$ in fact turns out to be nonlinear for small $\sigma_R^2$ (as assumed by \mbox{\citealp{schoenrich10}}), it would correspond to some shift in the measured circular velocity, and we do not expect this shift to change drastically between different Galactocentric radii. To first approximation this would produce only a parallel displacement of the measured rotation curve. Since we are interested in the general shape of the rotation curve, not in the exact value of the rotation velocity, we apply the solar velocity and the radial scalelengths derived via Eq. \ref{stromberg} at all Galactocentric radii.
\section{Asymmetric drift and rotation curve\label{RC}} Now, with updated values for the solar peculiar velocity and for the radial scalelengths of the three populations of the selected metallicities, we proceed to the determination of the rotation curve in the extended solar neighbourhood. We go back to the Jeans equation formulated for arbitrary $(R,z)$ (see Eq. \ref{JE2}) and apply it to the thin disc SEGUE stars. In principle, Eq. \ref{JE2} can be used directly for the determination of the rotation velocity, as all terms on the right-hand side are now known and the mean tangential speed can be expressed as $\overline{v_{\phi}} = v_\odot - \Delta V$, where $v_\odot$ is the tangential speed of the Sun and $(-\Delta V)$ is the mean observed tangential velocity with respect to the Sun. However, we prefer to formulate the expression for the rotation velocity in terms of the asymmetric drift $V_a$, so we recall its definition: \begin{equation} V_a = v_c - \overline{v_{\phi}} = v_c(R) - v_\odot + \Delta V. \label{AD_def} \end{equation} Inserting $\overline{v_{\phi}} = v_c - V_a$ into the left-hand side of Eq. \ref{JE2}, we arrive at a quadratic equation for the asymmetric drift: \begin{eqnarray} V_a^2 &-&2 v_c V_a - R \Bigg( F(R,z) + \Bigg. \label{JE_general}\\ && \Bigg. \eta \frac{\sigma_R^2 - \sigma_z^2}{R}\left[ 1 - \frac{z}{h_{\nu \sigma}}\right] - \frac{\sigma_R^2}{R_E} + \frac{\sigma_R^2 -\sigma_{\phi}^2}{R} \Bigg ) = 0. \nonumber \end{eqnarray} We solve Eq. \ref{JE_general} with respect to $V_a$ and get: \begin{eqnarray} V_a &=& v_c(R) - \Bigg\{ v_c^2(R) + R \bigg( F(R,z) + \phantom{\bigg(} \Bigg.\label{Va_root}\\ \Bigg. \phantom{\bigg(} && \eta \frac{\sigma_R^2 - \sigma_z^2}{R}\left[ 1 - \frac{z}{h_{\nu \sigma}}\right] - \frac{\sigma_R^2}{R_E} + \frac{\sigma_R^2 -\sigma_{\phi}^2}{R} \bigg) \Bigg\}^{1/2} \approx \nonumber \end{eqnarray} $$ \frac{-R F(R,z) - \eta (\sigma_R^2 - \sigma_z^2)\left[ 1 - \frac{z}{h_{\nu \sigma}}\right] + \sigma_R^2\frac{R}{R_E} - (\sigma_R^2 -\sigma_{\phi}^2) }{2v_c(R)}, $$ where the last line corresponds to the linear approximation that ignores the $V_a^2$ term. The difference between the non-linear and linear values of $V_a$ is of the order of 5\%, or 1\,km s$^{-1}$. The final formula for calculating the rotation velocity at radius $R$ for each bin $(R,z)$ follows from Eq. \ref{AD_def}: \begin{equation} v_c(R) = \overline{v_{\phi}} + V_a = v_\odot - \Delta V + V_a \label{Vc} \end{equation} with the asymmetric drift correction $V_a$ given by Eq. \ref{Va_root}. As $V_a$ is itself a function of $v_c(R)$, the determination of the rotation velocity is an iterative procedure, during which we assume $v_c \propto R^{\alpha}$. We start with some small $\alpha$ as an initial value; at each step of the iteration the reconstructed rotation curve is fitted, and the new value of $\alpha$ is plugged back into Eq. \ref{Va_root} via $v_c(R)$ (a schematic implementation of this loop is sketched below). The iteration procedure converges very quickly, after two or three cycles. \begin{figure} \centerline{\resizebox{1\hsize}{!}{\includegraphics{RC.pdf}}} \caption{The rotation curve in the extended solar neighbourhood traced via SEGUE stars. The thin disc stars are split into the same three metallicity bins as before. For each distance bin the mean rotation velocity $\overline{v_{\phi}}=v_{\odot}-\Delta V$ is measured (dashed curves; the additional binning in $|z|$ applied in the range $7\,\mathrm{kpc}<R<9\,\mathrm{kpc}$ is not shown here). The circular velocity (solid curves) is calculated for the three metallicity bins.
The best power-law fit of the form $v_c=v_0(R/R_0)^{\alpha}$ is shown with a black dashed line. The \mbox{1, 2 and 3$\sigma$-deviation} areas are shown with increasingly lighter shades of grey. The solar tangential velocity $v_\odot$ is marked with a yellow circle.} \label{RC-drift-plot} \end{figure} \begin{figure} \centerline{\resizebox{1\hsize}{!}{\includegraphics{Rd_thick.pdf}}} \caption{The radial scalelength $R_\mathrm{d}$ of the thick disc versus metallicity. Considered are SEGUE thick disc stars with $7.5\,\mathrm{kpc}<R<8.5\,\mathrm{kpc}$, 4195 in total. Red points show the radial scalelengths calculated for each metallicity bin with the 'effective' quantities in Eq. \ref{Vprime}. The points are fitted with a constant and also with $R_d$ depending linearly on [Fe/H] (solid and dashed lines). One-sigma areas are shown in grey. The blue point corresponds to the value of the scalelength derived when binning the same sample in four $|z|$-bins. } \label{Rd-plot} \end{figure} We bin the thin disc SEGUE stars, again separated into the three metallicity bins as before, in Galactocentric distance with an equal step of 0.4 kpc. The further data analysis is not the same for all distance bins. As mentioned before, the SEGUE stars are in general distributed over a large range in $|z|$, so we would like to take into account the vertical gradients of $\sigma_R^2$ and $\sigma_z^2$ by applying our equations separately at different heights for each distance. As one can see from the top panel of Figure \mbox{\ref{samples-plot}}, such $|z|$-binning of the thin disc sample is justified only close to $R_0$, approximately for 7\,kpc $<R<$ 9\,kpc, because at larger Galactocentric distances the majority of stars are located at approximately the same height, such that low-$|z|$ bins would be essentially empty and would suffer from high Poisson noise. For this reason we do not bin in $|z|$ outside this distance range. Taking this into account, at $R<7$ kpc and $R>9$ kpc for each $R$-bin we find the mean tangential velocity $\overline{v_\phi}$ (dashed colour-coded lines in Figure~\ref{RC-drift-plot}), as well as the three velocity dispersions and the mean values of $R$ and $|z|$. Then we apply the correction for the asymmetric drift from Eq. \ref{Va_root}, which is typically $\sim$20 \mbox{km s$^{-1}$}. For $7\,\mathrm{kpc}<R<9\,\mathrm{kpc}$ we do the same, but separately for the three vertical bins (as before, $|z| = 0\ldots1.5$ kpc with a step of 0.5 kpc), and then calculate the weighted mean $v_c$ at each $R$. The obtained rotation curves representing the different metallicity bins are shown as solid lines in Figure~\ref{RC-drift-plot} and are broadly consistent with each other within the error bars. Between 8 and 10 kpc the rotation curve is flat to an accuracy of a few kilometres per second, and definitely does not show the 10\,km s$^{-1}$ dip described by \mbox{\citet{sofue,sofue10}}. On the other hand, the inner part of our rotation curve demonstrates a metallicity-dependent rise with an amplitude of up to \mbox{10 km s$^{-1}$}, similar to the one reported by \mbox{\citet{sofue,sofue10}}. We perform a power-law fit $v_c\propto R^\alpha$ simultaneously to all three rotation curves and find a small power-law index $\alpha=0.033\pm 0.034$ (fit shown in Figure~\ref{RC-drift-plot} with a dashed line). This translates into the local slope of the rotation curve $dV_c/dR = \alpha v_0/R_0 = 0.98\pm 1$ \mbox{km s$^{-1}$ kpc$^{-1}$}.
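For reference, the iterative loop just described can be condensed into a few lines. In the sketch below, synthetic bin values stand in for the measured SEGUE quantities, and the constants $R_E$ and $h_{\nu\sigma}$ are single illustrative numbers rather than the per-population values used in the actual analysis:
\begin{verbatim}
import numpy as np

v_sun, R0 = 241.6, 8.0           # adopted solar values [km/s, kpc]
RE, eta, h_ns = 1.1, 0.8, 0.4    # illustrative: R_E = R_d/2, tilt, scaleheight

# Synthetic per-bin inputs standing in for the SEGUE measurements:
R   = np.array([7.0, 7.4, 7.8, 8.2, 8.6, 9.0, 9.4, 9.8])   # kpc
z   = np.array([0.6, 0.6, 0.5, 0.5, 0.5, 0.6, 0.7, 0.8])   # mean |z|, kpc
dV  = np.full_like(R, 22.0)       # measured v_sun - <v_phi> [km/s]
sR2 = np.full_like(R, 1300.0)     # sigma_R^2 [(km/s)^2]
sz2 = np.full_like(R, 600.0)      # sigma_z^2
sp2 = np.full_like(R, 900.0)      # sigma_phi^2
Fz  = -30.0 * np.abs(z)           # toy F(R,z) [(km/s)^2 / kpc]

alpha, v0 = 0.0, 237.0            # initial guess for v_c = v0 (R/R0)^alpha
for _ in range(5):                # converges after 2-3 cycles
    vc = v0 * (R / R0)**alpha
    bracket = (R * Fz + eta * (sR2 - sz2) * (1.0 - z / h_ns)
               - sR2 * R / RE + (sR2 - sp2))
    Va = vc - np.sqrt(vc**2 + bracket)     # exact root, Eq. (Va_root)
    vc_meas = v_sun - dV + Va              # Eq. (Vc)
    # Refit the power law: ln v_c = ln v0 + alpha ln(R/R0)
    alpha, ln_v0 = np.polyfit(np.log(R / R0), np.log(vc_meas), 1)
    v0 = np.exp(ln_v0)

print(alpha, v0)
\end{verbatim}
In the actual analysis this loop operates on the measured quantities bin by bin, with the metallicity-dependent scalelengths of Section \ref{RE}.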
The existing measurements of the Oort constants, however, point at a moderately negative slope: the classical value from \mbox{\citet{binney}} is $dV_c/dR=-2.4 \pm 1$ \mbox{km s$^{-1}$ kpc$^{-1}$}, while the more recent study by \mbox{\citet{bovy17}} based on TGAS data suggests an even steeper slope of $-3.4 \pm 0.6$ \mbox{km s$^{-1}$ kpc$^{-1}$}. Still, we must keep in mind that the Oort constants measure only the very local slope of the rotation curve, which may be more strongly perturbed by the local spiral arm structure and the bar, while our analysis goes all the way to 1--2 kpc away from the Sun. The thick disc SEGUE stars are not very useful for reconstructing the rotation curve, as the radial scalelength of the thick disc is poorly constrained. We can instead solve the inverse problem: use the data to reconstruct the radial scalelength of the thick disc, assuming the rotation curve to be known. We express $R_\mathrm{d}$ from Eq. \ref{stromberg} and calculate it for ten metallicity bins of the thick disc. For the parameter $h_{\nu\sigma}$ of the thick disc we use a value of 800 pc, similar to its scaleheight \mbox{\citep{just11}}. Furthermore, we take only the local thick disc sample with $7.5\,\mathrm{kpc}<R<8.5\,\mathrm{kpc}$ in order to avoid regions where our results could be biased by the uncertainty in the vertical correction term (this effect is not as important for the thin disc sample, as its stars are in general located at smaller $|z|$ than those of the thick disc). The resulting $R_\mathrm{d}$ is shown in Figure~\ref{Rd-plot} with red points. Though the data points show some small variation of $R_d$ with [Fe/H], the constant value of the scalelength, \mbox{$2.1 \pm 0.05$ kpc}, is more robust (see the darker one-sigma area in Figure \ref{Rd-plot}). In other words, the chemically defined thick disc behaves as a single kinematically homogeneous population. The value of the thick disc scalelength found here is consistent with the simulations by \mbox{\citet{minchev15}} and with the data analyses by \mbox{\citet{bovy12}} and \mbox{\citet{kordopatis17}}. When binning the local SEGUE thick disc sample in metallicity, we calculate $R_d$ from the 'effective' quantities in the bin: the velocity dispersions determined for all stars, and the mean $|z|$ and $R$. Such 'effective' values are quite representative in the case of $R$ and the velocity dispersions, because our data are very local in terms of Galactocentric distance and are expected to be kinematically homogeneous, so the velocity dispersions should not have a strong gradient in the vertical direction. On the other hand, the mean value of $|z|$ might be misleading, as the vertical distribution of the stars is quite inhomogeneous. The vertical correction and the tilt term in $V'$ (Eq. \ref{Vprime}) are quite sensitive to $z$, so to cross-check our results we apply Eq. \ref{stromberg} in four $|z|$-bins ($|z| = 0\ldots2$ kpc with a 0.5 kpc step) with no binning in metallicity, as the data are not abundant enough to allow a simultaneous separation in both $|z|$ and [Fe/H]. The resulting scalelength, calculated again as a weighted mean of the values found at different $|z|$, deviates only slightly from the value found in the previous step: \mbox{$R_d = 2.05 \pm 0.22$ kpc}. We overplot it in Figure \ref{Rd-plot} with a blue dot at the mean metallicity of the local thick disc sample. The good agreement between the scalelength values calculated with the different binnings of our sample in parameter space reassures us of the robustness of the derived result.
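The inversion used above is a one-line rearrangement of Eq. \ref{stromberg}: given $V'$ from Eq. \ref{Vprime}, the measured $\sigma_R^2$ and the adopted solar values, one solves for $k'$ and hence for $R_d = R_0 k'/v_\odot$. A minimal sketch with illustrative thick-disc numbers:
\begin{verbatim}
v_sun, R0, V_pec = 241.6, 8.0, 4.47   # adopted values [km/s, kpc, km/s]

def scalelength(Vprime, sigR2):
    # Invert the Stroemberg relation (Eq. stromberg) for R_d = R_0 k'/v_sun.
    kprime = sigR2 / (Vprime - V_pec + V_pec**2 / (2.0 * v_sun))
    return R0 * kprime / v_sun

# Illustrative thick-disc bin: sigma_R ~ 50 km/s, V' ~ 50 km/s
print(scalelength(Vprime=50.0, sigR2=50.0**2))   # ~2 kpc
\end{verbatim}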
\section{Discussion and conclusions\label{conclusions}} In this paper we revised and improved the methods developed previously in \mbox{\citet{golubov13}}. Starting from the classical Jeans analysis, we arrived at a new Str\"omberg relation for the asymmetric drift and applied it locally to the most recent RAVE data. This enabled us to update the value of the solar peculiar motion to $V_\odot=4.47\pm 0.8$ \mbox{km s$^{-1}$}. This is lower than the typical values reported by other authors, which are around 10--12 \mbox{km s$^{-1}$}. However, in the study of \mbox{\cite{sharma14}}, also based on RAVE data though in the framework of a different Galaxy model, the solar peculiar velocity is smaller as well: $V_\odot=7.62^{+0.13}_{-0.16}$ km s$^{-1}$. We also found the radial scalelengths for the three metallicity populations, which are $R_d$(0$<$[Fe/H]$<$0.2$)=2.07\pm0.2$ kpc, $R_d$(-0.2$<$[Fe/H]$<$0$)=2.28\pm0.26$ kpc and $R_d$(-0.5$<$[Fe/H]$<$-0.2$)=3.05\pm0.43$ kpc. Our analysis demonstrates good consistency between the SEGUE and the RAVE data in terms of kinematics. With the peculiar velocity of the Sun derived from the RAVE sample, the SEGUE data give similar values for the scalelengths: $R_d$(0$<$[Fe/H]$<$0.2$)=1.91\pm0.23$ kpc, $R_d$(-0.2$<$[Fe/H]$<$0$)=2.51\pm0.25$ kpc and $R_d$(-0.5$<$[Fe/H]$<$-0.2$)=3.55\pm0.42$ kpc. We then used the SEGUE sample of thin disc G-dwarfs to reconstruct the rotation curve of the Milky Way, ranging from 7 to 10 kpc in Galactocentric radius. We took into account the asymmetric drift correction (Eq. \ref{Va_root}) and showed that the resulting rotation curve is essentially flat (Figure~\ref{RC-drift-plot}). Thus, within the framework of our analysis, the existence of any features in the rotation curve just outside the solar radius is disfavoured. The formal power-law fit to the rotation curve implies a positive slope $\alpha=0.033\pm 0.034$, consistent with a flat rotation curve, although we see that its local value is probably smaller. The corresponding radial gradient of the circular speed is $dV_c/dR=0.98\pm 1$ \mbox{km s$^{-1}$ kpc$^{-1}$}, in agreement with the findings of \mbox{\cite{sharma14}}, who derived a similar value from the RAVE data: $dV_c/dR=0.67^{+0.25}_{-0.26}$ \mbox{km s$^{-1}$ kpc$^{-1}$}. Using the SEGUE data and relying on the determined slope of the rotation curve, we also calculated the radial scalelength of the thick disc. It is $2.05\pm0.22$ kpc, and no strong dependence on metallicity was observed. The values of the quantities derived in this paper are summarized in Table \ref{tab-rez}. Finally, we have to discuss the dependence of our results on the assumed constants and parameters. The pair ($R_0$,$v_\odot$), on the one hand, influences the stellar spatial distribution and velocities derived from the observables. On the other hand, it enters the equation for the asymmetric drift correction (see Eq. \ref{Va_root}). For the recommended values from \mbox{\citet{bhawthorn}}, ($R_0$,$v_\odot$)$=$\mbox{(8.2 kpc, 248 km s$^{-1}$)}, the changes in the re-calculated solar peculiar velocity and in the scalelengths for the three metallicity bins lie well within one sigma, and the changes in the rotation curve can mostly be described in terms of a vertical shift to higher velocities and a horizontal translation to larger Galactocentric distances. Its slope is in this case $\alpha=0.024\pm 0.031$, which is again consistent with a flat rotation curve.
\begin{table} \caption{The summary of the results.} \begin{tabular}{l|l|l} \hline Quantity & from RAVE & from SEGUE \\ \hline $V_\odot$ (km s$^{-1}$) & $4.47\pm 0.8$ & \\ $R_d^\mathrm{thin}$(0$<$[Fe/H]$<$0.2$)$ (kpc) & $2.07\pm0.2$ & $1.91\pm0.23$ \\ $R_d^\mathrm{thin}$(-0.2$<$[Fe/H]$<$0$)$ (kpc) & $2.28\pm0.26$ & $2.51\pm0.25$ \\ $R_d^\mathrm{thin}$(-0.5$<$[Fe/H]$<$-0.2$)$ (kpc) & $3.05\pm0.43$ & $3.55\pm0.42$ \\ $\alpha$ & & $0.033\pm 0.034$ \\ $R_d^\mathrm{thick}$ (kpc) & & $2.05 \pm 0.22$\\ \hline \end{tabular} \label{tab-rez} \end{table} What is the impact of the solar peculiar motion? Changes $(dU,dV,dW)_\odot$ of the solar velocity components add quadratically to the corresponding velocity dispersions. The vertical velocity has no other impact on the result, but we should check that the vertical component of the mean measured relative velocity of the stars in the sample is approximately $-W_\odot$, which is indeed the case. $U_\odot$ enters the velocity transformations to cylindrical coordinates, so every time we change $V_{\odot}$, $R_0$ or $v_\odot$, we have to adapt it to ensure $\overline{v_R} \approx 0$, as assumed in Section \ref{Jeans}. However, this correction is small and can be neglected, as it is surely beyond the accuracy we can hope to achieve in the framework of our approach. $V_{\odot}$ has a larger impact on the results, as the asymmetric drift correction depends on it, mainly via $v_c$ and the scalelengths (see Eq. \ref{Va_root}). We test two values of $V_{\odot}$, $\sim$3 km s$^{-1}$ \mbox{\citep{golubov13}} and $\sim$7.6 \mbox{km s$^{-1}$} \mbox{\citep{sharma14}}. The rotation curve slope is then $\alpha=0.039\pm 0.034$ and $\alpha=0.014\pm 0.028$, respectively. We also test the sensitivity of our results to the vertical scaleheights $h_{\nu\sigma}$, as they are not tightly constrained. Changing $h_{\nu\sigma}$ by $\pm20$\% leads to slopes of $\alpha=0.022\pm 0.03$ and $\alpha=0.049\pm 0.038$, which deviate from our standard value by less than 0.5$\sigma$. To quantify the vertical gradient of the radial force term we assume a Galaxy model, i.e., we use some form of the potential as an input. However, we believe that the rotation curve obtained in Section \ref{RC} is not strongly predetermined by this choice. The $RF(R,z)$ term is not a dominant one in the asymmetric drift correction, so with respect to the rotation velocity it is a first-order correction. A modification of the potential will enter $v_c$ only as a second-order correction, and this already meets the limit of our accuracy. As we inferred by running tests with the $GalPot$ code, the main contribution to the vertical correction of the radial force comes from the thin disc, so its scalelength and scaleheight are the main sources of uncertainty in this term. Taking an $R_{d}$ value of 2 or 3 kpc, we arrive at a rotation curve that is correspondingly slightly steeper ($\alpha = 0.041 \pm 0.041$) or flatter ($\alpha = 0.027 \pm 0.029$) than the one presented in Figure \ref{RC-drift-plot}. Testing $h_z$ values of 200 and 400 pc results in similar changes: $\alpha = 0.041 \pm 0.036$ and $\alpha = 0.026 \pm 0.03$. The impact of varying $R_d$ and $h_z$ of the other discs is negligible. None of these deviations from our main result produces essential changes in the derived shape of the rotation curve. We can therefore conclude that, within the framework of the developed analysis, our outcome is robust with respect to small changes of the adopted constants and to the choice of the Galactic potential.
Our analysis of the local rotation curve does not support the existence of any special features in its shape, such as a significant dip at $R = 9$ kpc. \section*{Acknowledgements} This work was supported by Sonderforschungsbereich SFB 881 'The Milky Way System' (subprojects A6 and A5) of the German Research Foundation (DFG). Funding for RAVE has been provided by: the Australian Astronomical Observatory; the Leibniz-Institut f\"ur Astrophysik Potsdam (AIP); the Australian National University; the Australian Research Council; the French National Research Agency; the German Research Foundation (SPP 1177 and SFB 881); the European Research Council (ERC-StG 240271 Galactica); the Istituto Nazionale di Astrofisica at Padova; The Johns Hopkins University; the National Science Foundation of the USA (AST-0908326); the W. M. Keck Foundation; the Macquarie University; the Netherlands Research School for Astronomy; the Natural Sciences and Engineering Research Council of Canada; the Slovenian Research Agency (P1-0188); the Swiss National Science Foundation; the Science \& Technology Facilities Council of the UK; Opticon; Strasbourg Observatory; and the Universities of Groningen, Heidelberg and Sydney. The RAVE web site is at \url{https://www.rave-survey.org}. Funding for SDSS-I and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is \url{http://www.sdss.org}. UCAC5 proper motions and the distances from \mbox{\cite{McMillan17}} were obtained with the use of the European Space Agency (ESA) mission Gaia (\url{https://www.cosmos.esa.int/gaia}), processed by the Gaia Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. The authors are very grateful to Young-Sun Lee and Timothy C. Beers for providing their SEGUE data sample for our analysis and for fruitful discussions. We also thank the anonymous referee for the detailed suggestions, which improved the paper significantly. \bibliographystyle{aa}
\section{Introduction} Experiments at the Large Hadron Collider (LHC) have already started testing many models of particle physics beyond the Standard Model (SM), and particular attention is being paid to the Minimal Supersymmetric SM (MSSM) and to other scenarios involving softly-broken supersymmetry (SUSY). In the last few years, parameter inference methodologies have been developed, applying both Frequentist and Bayesian statistics (see e.g.,~\cite{Baltz:2004aw,Allanach:2005kz,deAustri:2006pe,Allanach:2007qk,Roszkowski:2007fd, Buchmueller:2009fn}). While the efficiency of Markov Chain Monte Carlo (MCMC) techniques has allowed for a full exploration of multidimensional models, the likelihood function from present data is multimodal with many narrow features, making the exploration task with conventional MCMC methods challenging. A powerful alternative to classical MCMC has emerged in the form of Nested Sampling~\cite{skilling04}, a Monte Carlo method whose primary aim is the efficient calculation of the Bayesian evidence, or model likelihood. As a by-product, the algorithm also produces samples from the posterior distribution. Those same samples can also be used to estimate the profile likelihood. {\sc MultiNest} \cite{multinest}, a publicly available implementation of the nested sampling algorithm, has been shown to reduce the computational cost of performing Bayesian analysis typically by two orders of magnitude as compared with basic MCMC techniques. {\sc MultiNest} has been integrated in the \texttt{SuperBayeS} code\footnote{Available from: \texttt{www.superbayes.org}} for fast and efficient exploration of SUSY models. Having implemented sophisticated statistical and scanning methods, several groups have turned their attention to evaluating the sensitivity to the choice of priors \cite{Allanach:2007qk,Lafaye:2007vs,Trotta:2008bp} and of scanning algorithms \cite{Akrami:2009hp}. Those analyses indicate that current constraints are not strong enough to dominate the Bayesian posterior and that the choice of prior does influence the resulting inference. While confidence intervals derived from the profile likelihood or a chi-square have no formal dependence on a prior, there is a sampling artifact when the contours are extracted from samples produced by Bayesian sampling schemes, such as MCMC or {\sc MultiNest}~\cite{Trotta:2008bp}. Given the sensitivity to priors and the differences between the intervals obtained from different methods, it is natural to seek a quantitative assessment of their performance, namely their \textit{coverage}: the probability that an interval will contain (cover) the true value of a parameter. The defining property of a 95\% confidence interval is that the procedure adopted for its estimation should produce intervals that cover the true value 95\% of the time; thus, it is reasonable to check whether the procedures have the properties they claim. While Bayesian techniques are not designed with coverage as a goal, it is still meaningful to investigate their coverage properties. Moreover, the frequentist intervals obtained from the profile likelihood or chi-square functions are based on asymptotic approximations and are not guaranteed to have the claimed coverage properties. Here we report on recent studies investigating the coverage properties of both Bayesian and Frequentist procedures commonly used in the literature.
We also highlight the numerical and sampling challenges that have to be met in order to obtain a sufficiently high-resolution mapping of the profile likelihood when adopting Bayesian algorithms (which are typically designed to map out the posterior mass instead). For the sake of example, we consider in the following the so-called mSUGRA or Constrained Minimal Supersymmetric Standard Model (CMSSM), a model with fairly strong universality assumptions regarding the SUSY breaking parameters, which reduce the number of free parameters to be estimated to just five, denoted by the symbol ${\bf \Theta}$: common scalar ($m_0$), gaugino ($m_{1/2}$) and trilinear ($A_0$) mass parameters (all specified at the GUT scale), plus the ratio of Higgs vacuum expectation values $\tan\beta$ and $\text{sign}(\mu)$, where $\mu$ is the Higgs/higgsino mass parameter whose square is computed from the conditions of radiative electroweak symmetry breaking (EWSB). \section{Coverage study of the CMSSM} \subsection{Accelerated inference from neural networks} Coverage studies require an extensive computational expenditure, which would be unfeasible with standard analysis techniques. Therefore, in Ref.~\cite{Bridges:2010de} a class of machine learning devices called Artificial Neural Networks (ANNs) was used to approximate the most computationally intensive sections of the analysis pipeline. Inference on the parameters of interest ${\bf \Theta}$ requires relating them to observable quantities, such as the sparticle mass spectrum at the LHC, denoted by $\bf{m}$, over which the likelihood is defined. This is achieved by evolving numerically the Renormalization Group Equations (RGEs) using publicly available codes, which is, however, a computationally intensive procedure. One can view the RGEs simply as a mapping ${\bf \Theta} \to \bf{m}$, and attempt to engineer a computationally efficient representation of this function. In~\cite{Bridges:2010de}, an adequate solution was provided by a three-layer perceptron, a type of feed-forward neural network consisting of an input layer (identified with ${\bf \Theta}$), a hidden layer and an output layer (identified with the value of ${\bf m}({\bf \Theta})$ that we are trying to approximate). The weights and biases defining the network were determined via an appropriate training procedure, involving the minimization of a loss function (here, the discrepancy between the value of ${\bf m}({\bf \Theta})$ predicted by the network and its correct value obtained by solving the RGEs) defined over a set of 4000 training samples. A number of tests of the accuracy and noise of the network were performed, showing a correlation in excess of 0.9999 between the approximated value of ${\bf m}({\bf \Theta})$ and the value obtained by solving the RGEs for an independent sample. A second, classification network was employed to distinguish between physical and un-physical points in parameter space (i.e., values of ${\bf \Theta}$ that do not lead to physically viable solutions of the RGEs). The final result of replacing the computationally expensive RGEs with the ANNs is presented in Fig.~\ref{fig:nn_comparison}, which shows that the agreement between the two methods is excellent, within numerical noise. By using the neural network, a speed-up factor of about $3 \times 10^4$ compared with scans using the explicit spectrum calculator was observed.
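As an illustration of the approach (not the actual network or training scheme of Ref.~\cite{Bridges:2010de}), a three-layer perceptron regressor of this kind can be set up in a few lines, e.g. with \texttt{scikit-learn}; here a cheap analytic toy function stands in for the RGE mapping ${\bf \Theta}\to{\bf m}$, and all functional forms and parameter ranges are illustrative only:
\begin{verbatim}
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def toy_rge(theta):
    # Cheap analytic stand-in for the RGE mapping Theta -> m
    # (coefficients are illustrative, not real spectrum calculations).
    m0, m12, A0, tanb = theta.T
    gluino = 2.4 * m12 + 0.02 * m0
    squark = np.sqrt(m0**2 + 5.5 * m12**2) + 0.01 * np.abs(A0)
    return np.column_stack([gluino, squark])

# 4000 training samples over a toy parameter box (m0, m12, A0, tan beta)
lo = np.array([0.0, 0.0, -1000.0, 2.0])
hi = np.array([2000.0, 1000.0, 1000.0, 60.0])
theta = rng.uniform(lo, hi, size=(4000, 4))
X = 2.0 * (theta - lo) / (hi - lo) - 1.0    # rescale inputs to [-1, 1]
y = toy_rge(theta)

net = MLPRegressor(hidden_layer_sizes=(50,),   # single hidden layer
                   activation='tanh', max_iter=5000)
net.fit(X, y)

# Accuracy check on an independent sample, as in the text
theta_t = rng.uniform(lo, hi, size=(500, 4))
X_t = 2.0 * (theta_t - lo) / (hi - lo) - 1.0
y_true, y_pred = toy_rge(theta_t), net.predict(X_t)
print(np.corrcoef(y_true[:, 0], y_pred[:, 0])[0, 1])
\end{verbatim}
A production network would also rescale the outputs and tune the architecture; the sketch only conveys the regression set-up.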
\begin{figure} \begin{center} \includegraphics[width=0.48\linewidth]{m0m12_NN_compare.eps} \includegraphics[width=0.48\linewidth]{A0tanb_NN_compare.eps} \end{center} \caption{\label{fig:nn_comparison} Comparison of Bayesian posteriors obtained by solving the RGEs fully numerically (black lines, giving 68\% and 95\% regions) and with neural networks (blue lines and corresponding filled regions), from simulated ATLAS data. The red diamond gives the true value for the adopted benchmark point. From~\cite{Bridges:2010de}.} \end{figure} \subsection{Coverage results for the ATLAS benchmark} \label{sec:coverage} We studied the coverage properties of intervals obtained for the so-called ``SU3'' benchmark point. To this end, we need the ability to generate pseudo-experiments with ${\bf \Theta}$ fixed at the value of the benchmark. We adopted a parabolic approximation of the log-likelihood function (as reported in Ref.~\cite{atlas09}), based on the measurement of edges and thresholds in the invariant mass distributions for various combinations of leptons and jets in the final state of the selected candidate SUSY events, assuming an integrated luminosity of 1 $\text{fb}^{-1}$ for ATLAS. Note that the relationship between the sparticle masses and the directly observable mass edges is highly non-linear, so a Gaussian is likely to be a poor approximation to the actual likelihood function. Furthermore, these edges share several sources of systematic uncertainty, such as jet and lepton energy scale uncertainties, which are only approximately communicated in Ref.~\cite{atlas09}. Finally, we introduce the additional simplification that the likelihood is a multivariate Gaussian with the same covariance structure. We constructed $10^4$ pseudo-experiments and analyzed them with both MCMC (using a Metropolis-Hastings algorithm) and {\sc MultiNest}. Altogether, our neural network MCMC runs have performed a total of $4 \times 10^{10}$ likelihood evaluations, for a total computational effort of approximately $2\times 10^4$ CPU-minutes. We estimate that solving the RGEs fully numerically would have taken about 1100 CPU-years, which is at the boundary of what is feasible today, even with a massive parallel computing effort. The results are shown in Fig.~\ref{fig:mcmc_coverage}, where it can be seen that the methods have substantial over-coverage for 1-d intervals, which means that the resulting inferences are conservative. While it is difficult to unambiguously attribute the over-coverage to a specific cause, the most likely one is the effect of the boundary conditions imposed by the CMSSM. When ${\bf \Theta}$ is composed of parameters of interest, $\theta$, and nuisance parameters, $\mbox{$\psi$}$, the profile likelihood ratio is defined as \begin{equation} \lambda(\theta) \equiv \frac{\mathcal{L}(\theta, \hat{\hat{\mbox{$\psi$}}})}{\mathcal{L}(\hat{\theta}, \hat{\mbox{$\psi$}})}, \label{eq:profile_like} \end{equation} where $\hat{\hat{\mbox{$\psi$}}}$ is the conditional maximum likelihood estimate (MLE) of $\mbox{$\psi$}$ with $\theta$ fixed, and $\hat{\theta}, \hat{\mbox{$\psi$}}$ are the unconditional MLEs. When the fit is performed directly in the space of the weak-scale masses (i.e., without invoking a specific SUSY model and hence bypassing the mapping ${\bf \Theta} \to {\bf m}$), there are no boundary effects, and $-2\ln \lambda(\bf{m})$ (when $\bf{m}$ is true) is distributed as a chi-square with a number of degrees of freedom given by the dimensionality of $\bf{m}$.
Since the likelihood is invariant under reparametrizations, we expect $-2\ln \lambda(\theta)$ to also be distributed as a chi-square. If the boundary is such that $\bf{m}(\hat{\theta},\hat{\psi}) \ne \hat{\bf{m}}$ or $\bf{m}(\theta,\hat{\hat{\psi}}) \ne \hat{\hat{\bf{m}}}$, then the resulting interval will be modified. More importantly, one expects the denominator ${\mathcal L}(\hat\theta, \hat{\psi}) < {\mathcal L}(\hat{\bf{m}})$, since $\bf{m}$ is unconstrained, which will lead to $-2\ln \lambda(\theta) < -2\ln \lambda(\bf{m})$. In turn, this means that more parameter points are included in any given contour, which leads to over-coverage. The impact of the boundary on the distribution of the profile likelihood ratio is not insurmountable. It is not fundamentally different from several common examples in high-energy physics where an unconstrained MLE would lie outside of the physical parameter space. Examples include downward fluctuations in event-counting experiments when the signal rate is bounded to be non-negative. Another common example is the measurement of sines and cosines of mixing angles that are physically bounded to the interval $[-1,1]$, though an unphysical MLE may lie outside this region. The size of this effect is related to the probability that the MLE is pushed to a physical boundary. If this probability can be estimated, it is possible to estimate a corrected threshold on $-2\ln\lambda$. For a precise threshold with guaranteed coverage, one must resort to a fully frequentist Neyman construction. A similar coverage study (but without the computational advantage provided by ANNs) has been carried out for a few CMSSM benchmark points for simulated data from future direct detection experiments~\cite{YasharCoverage}. Their findings indicate substantial under-coverage for the resulting intervals, especially for certain choices of Bayesian priors. Both works clearly show the timeliness and importance of evaluating the coverage properties of the reconstructed intervals for future data sets. \begin{figure} \begin{center} \includegraphics[width=0.32\linewidth]{coverage_marg_mcmc} \includegraphics[width=0.32\linewidth]{coverage_short_mcmc} \includegraphics[width=0.32\linewidth]{coverage_pl_mcmc} \end{center} \caption{\label{fig:mcmc_coverage} Coverage of various types of intervals for the CMSSM parameters, from $10^4$ realizations, employing MCMC for the reconstruction (each pseudo-experiment is reconstructed with $10^6$ samples). Green/circles (red/squares) are for the 68\% (95\%) error. From~\cite{Bridges:2010de}.} \end{figure} \section{Challenges of profile likelihood evaluation} For highly non-Gaussian problems like supersymmetric parameter determination, inference can depend strongly on whether one chooses to work with the posterior distribution (Bayesian) or the profile likelihood (frequentist)~\cite{Allanach:2007qk,Trotta:2008bp,2010JCAP...01..031S}. There is a growing consensus that both the posterior and the profile likelihood ought to be explored in order to obtain a fuller picture of the statistical constraints from present-day and future data. This raises the question of which algorithmic solutions are available to reliably explore both the posterior and the profile likelihood in the context of SUSY phenomenology.
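Operationally, turning a set of likelihood-tagged samples into a profile likelihood is simple to state: bin the parameter of interest and keep the highest likelihood value found in each bin. A minimal, sampler-agnostic sketch of this estimator (with a toy correlated Gaussian standing in for the CMSSM likelihood) is given below; the hard part, discussed in the following, is ensuring that the sampler actually visits the high-likelihood regions finely enough:
\begin{verbatim}
import numpy as np

def profile_1d(theta, loglike, bins=60):
    # 1-D profile log-likelihood from samples: maximum log-likelihood
    # among the samples falling into each bin of theta.
    edges = np.linspace(theta.min(), theta.max(), bins + 1)
    idx = np.clip(np.digitize(theta, edges) - 1, 0, bins - 1)
    prof = np.full(bins, -np.inf)
    np.maximum.at(prof, idx, loglike)      # in-place binned maximum
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, prof - prof.max()      # normalize to the best fit

# Toy example: samples from a correlated 2-d Gaussian "posterior"
rng = np.random.default_rng(1)
samples = rng.normal(size=(200000, 2)) @ np.array([[1.0, 0.0], [0.8, 0.6]])
cov = np.cov(samples.T)
loglike = -0.5 * np.sum(samples * np.linalg.solve(cov, samples.T).T, axis=1)

x, prof = profile_1d(samples[:, 0], loglike)
# Approximate 1-sigma interval: Delta chi^2 = -2 * prof <= 1
inside = x[(-2.0 * prof) <= 1.0]
print(inside[0], inside[-1])
\end{verbatim}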
The profile likelihood ratio defined in Eq.~\eqref{eq:profile_like} is an attractive choice as a test statistic, for under certain regularity conditions, Wilks~\cite{Wilks} showed that the distribution of $-2\ln\lambda(\theta)$ converges to a chi-square distribution with a number of degrees of freedom given by the dimensionality of $\theta$. Clearly, for any given value of $\theta$, evaluation of the profile likelihood requires solving a maximisation problem in many dimensions to determine the conditional MLE $\hat{\hat{\mbox{$\psi$}}}$. While posterior samples obtained with {\sc MultiNest} have been used to estimate the profile likelihood, the accuracy of such an estimate has been questioned~\cite{Akrami:2009hp}. As mentioned above, evaluating profile likelihoods is much more challenging than evaluating posterior distributions. Therefore, one should not expect that a vanilla setup for {\sc MultiNest} (which is adequate for an accurate exploration of the posterior distribution) will automatically be optimal for profile likelihood evaluation. In Ref.~\cite{Feroz:2010} the question of the accuracy of the profile likelihood evaluation from {\sc MultiNest} was investigated in detail. We report the main results below. The two most important parameters that control the parameter space exploration in {\sc MultiNest} are the number of live points $n_{\rm live}$ -- which determines the resolution at which the parameter space is explored -- and a tolerance parameter $\mathrm{tol}$, which defines the termination criterion based on the accuracy of the evidence. Generally, a larger number of live points is necessary to explore profile likelihoods accurately. Moreover, setting $\mathrm{tol}$ to a smaller value results in {\sc MultiNest} gathering a larger number of samples in the high-likelihood regions (as termination is delayed). This is usually not necessary for posterior distributions, as the prior volume occupied by high-likelihood regions is usually very small, and therefore these regions have relatively little probability mass. For profile likelihoods, however, getting as close as possible to the true global maximum is crucial, and therefore one should set $\mathrm{tol}$ to a relatively small value. In Ref.~\cite{Feroz:2010} it was found that $n_{\rm live} = 20,000$ and $\mathrm{tol} = 1 \times 10^{-4}$ produce a sufficiently accurate exploration of the profile likelihood in toy models that reproduce the most important features of the CMSSM parameter space. In principle, the profile likelihood does not depend on the choice of priors. However, in order to explore the parameter space using any Monte Carlo technique, a set of priors needs to be defined. Different choices of priors will generally lead to different regions of the parameter space being explored in greater or lesser detail, according to their posterior density. As a consequence, the resulting profile likelihoods might differ slightly, purely on numerical grounds. We can obtain more robust profile likelihoods by simply merging the samples obtained from scans with different choices of Bayesian priors. This does not come at a greater computational cost, given that a responsible Bayesian analysis would estimate the sensitivity to the choice of prior as well.
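With the \texttt{pymultinest} wrapper, such a profile-likelihood-oriented configuration amounts to raising the number of live points and tightening the evidence tolerance relative to their defaults. The following schematic set-up uses a toy bimodal likelihood in place of the actual CMSSM fit; the keyword names follow \texttt{pymultinest}, though defaults and details may differ between versions:
\begin{verbatim}
import numpy as np
import pymultinest   # assumes MultiNest and its Python wrapper are installed

def prior(cube, ndim, nparams):
    # Map the unit hypercube onto flat priors, e.g. masses in [0, 4000] GeV.
    for i in range(ndim):
        cube[i] = cube[i] * 4000.0

def loglike(cube, ndim, nparams):
    # Toy bimodal likelihood; the offset -1.06 mimics the Delta chi^2 ~ 2.1
    # between the two CMSSM best-fit regions quoted in the text.
    x, y = cube[0], cube[1]
    peak1 = -0.5 * (((x - 800.0) / 50.0)**2 + ((y - 300.0) / 50.0)**2)
    peak2 = -0.5 * (((x - 2500.0) / 200.0)**2
                    + ((y - 900.0) / 200.0)**2) - 1.06
    return np.logaddexp(peak1, peak2)

# Profile-likelihood oriented settings (cf. n_live = 20,000, tol = 1e-4):
pymultinest.run(loglike, prior, 2,
                n_live_points=20000,
                evidence_tolerance=1.0e-4,
                sampling_efficiency=0.8,
                outputfiles_basename='chains/toy_',   # 'chains/' must exist
                resume=False, verbose=True)
\end{verbatim}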
The results of such a scan are shown in Fig.~\ref{fig:cmssm_profile_1D}, which was obtained by tuning {\sc MultiNest} with the above configuration, appropriate for an accurate profile likelihood exploration, and by merging the posterior samples from two different choices of priors (see~\cite{Feroz:2010} for details). This high-resolution profile likelihood scan using {\sc MultiNest} compares favourably with the results obtained by adopting a dedicated Genetic Algorithm technique~\cite{Akrami:2009hp}, although at a slightly higher computational cost (a factor of $\sim 4$). In general, an accurate profile likelihood evaluation was about an order of magnitude more computationally expensive than mapping out the Bayesian posterior. \begin{figure} \begin{center} \includegraphics[width=1\columnwidth]{cmssm_profile_merged_1D.eps} \caption{1-D profile likelihoods from present-day data for the CMSSM parameters normalized to the global best-fit point. The red solid and blue dotted vertical lines represent the global best-fit point ($\chi^2 = 9.26$, located in the focus point region) and the best-fit point found in the stau co-annihilation region ($\chi^2 = 11.38$) respectively. The upper and lower panel show the profile likelihood and $\Delta\chi^2$ values, respectively. Green (magenta) horizontal lines represent the $1\sigma$ ($2\sigma$) approximate confidence intervals. {\sc MultiNest} was run with 20,000 live points and $\mathrm{tol}=1 \times 10^{-4}$ (a configuration deemed appropriate for profile likelihood estimation), requiring approximately 11 million likelihood evaluations. From \cite{Feroz:2010}. } \label{fig:cmssm_profile_1D} \end{center} \end{figure} \section{Conclusions} As the LHC impinges on the most anticipated regions of SUSY parameter space, the need for statistical techniques that will be able to cope with the complexity of SUSY phenomenology is greater than ever. An intense effort is underway to test the accuracy of parameter inference methods, both in the Frequentist and the Bayesian framework. Coverage studies such as the one presented here require highly-accelerated inference techniques, and neural networks have been demonstrated to provide a speed-up factor of up to $30,000$ with respect to conventional methods. A crucial improvement required for future coverage investigations is the ability to generate pseudo-experiments from an accurate description of the likelihood. Both the representation of the likelihood function and the ability to generate pseudo-experiments are now possible with the workspace technology in RooFit/RooStats~\cite{RooStats}. We encourage future experiments to publish their likelihoods using this technology. Finally, an accurate evaluation of the profile likelihood remains a numerically challenging task, much more so than the mapping out of the Bayesian posterior. Particular care needs to be taken in appropriately tuning Bayesian algorithms, which target the exploration of posterior mass (rather than likelihood maximisation). We have demonstrated that the {\sc MultiNest} algorithm can be successfully employed for approximating the profile likelihood functions, even though it was primarily designed for Bayesian analyses. In particular, it is important to use a termination criterion that allows {\sc MultiNest} to explore high-likelihood regions to sufficient resolution. {\em Acknowledgements:} We would like to thank the organizers of PHYSTAT11 for a very interesting workshop. 
We are grateful to Yashar Akrami, Jan Conrad, Joakim Edsj\"o, Louis Lyons and Pat Scott for many useful discussions.
\section*{Methods} \label{sec:Methods} Samples for these measurements were fabricated by the dry-peel methodology described elsewhere~\cite{woods2014commensurate, kretinin2014}. Control of the alignment is achieved by positioning long, straight edges of the crystals (which tend to follow one crystallographic axis, zigzag or armchair) parallel to each other. We use single- and multilayer graphene flakes placed on top of thick hBN ($\sim150$\,nm). Raman spectroscopy measurements were performed using a Horiba Raman spectrometer (grating 1200 GPI) operating with an incident laser at a wavelength of 532\,nm and $\sim0.5$\,mW power. A confocal microscope was used to focus on the sample through a $\times100$ objective. For more details please see the SI~\cite{SI}. For nonlinear characterization of the samples, a WITec alpha300 S confocal microscope was used in reflection geometry. Samples were irradiated by a Ti:sapphire oscillator at 800\,nm with $\sim100$\,fs pulse width. Typical laser power was $\sim220$\,mW before the microscope, of which about 70\% reached the sample and was focused with a $\times20$ Zeiss objective. The detected nonlinear response was separated from the fundamental wavelength by use of two types of filters. A SCHOTT BG39 filter (390--650\,nm transmission) was used for experiments in which the combined SHG/TPL response was detected. A Thorlabs FB400-40 filter was used in experiments where only the SHG signal was of interest. \begin{acknowledgments} Authors thank Chris Berkhout for technical support. Authors also thank Clement Dutreix and Vladimir Kukushkin for fruitful discussions and useful comments. The work of E.A.S. was supported by the Russian Science Foundation, Grant No. 17-72-20041. K.S.N acknowledges support from EU Graphene Flagship Program (contract CNECTICT-604391), European Research Council Synergy Grant Hetero2D, the Royal Society, EPSRC grant EP/N010345/1. The work of M.I.K. was supported by European Research Council via Synergy Grant 854843 - FASTCORR. \end{acknowledgments}
\section{Introduction} Demodulation of interference fringes is of importance to a wide range of applications in optics \cite{Hariharan} including optical metrology, digital holography for live cell imaging, surface inspection, particle-field holography, astronomical imaging, etc. Currently the interference fringes in all these applications are predominantly recorded on CCD or CMOS array sensors that are readily available. The digitally recorded fringe pattern is then numerically processed for recovering the complex-valued object wave of interest. Traditionally there are two main methods that are used for interferogram analysis. For an off-axis interference pattern the dc and the cross terms in the interference pattern can be nominally separated in Fourier space and thus the cross term can be obtained by Fourier space filtering \cite{Takeda, Kreis}. This method requires a single interference pattern but, as explained later, the Fourier space filtering operation inherently causes loss of resolution. Phase shifting interferometry \cite{Creath,Yamaguchi}, on the other hand, can achieve full pixel resolution using a multi-shot interferogram recording approach and requires stringent vibration isolation. The CCD/CMOS arrays available today have good sensitivity so that just a millisecond of exposure (during which the recording is not affected much by ambient vibrations) is often sufficient to record good quality interferograms with nominal tabletop interferometer setups. In view of this it is highly desirable to have a practical algorithm that can demodulate a single-shot interferogram without compromising on resolution and accuracy. In this work we are primarily interested in complex-valued object recovery from a single-shot image plane digital hologram/interferogram represented as: \begin{equation}\label{hologram} H = |R|^2 + |O|^2 + RO^{*} + R^{*}O. \end{equation} Here the hologram $H$, the reference beam $R = |R|e^{i\phi_R}$ and the object beam $O = |O|e^{i\phi_O}$ are two-dimensional functions of the pixel coordinates $(x,y)$ at the sensor plane. Further we will assume that the reference beam $R$ (both its magnitude $|R|$ and phase $\phi_R$) is known and the problem is to recover the object wave function $O$ from the single-shot hologram. We observe that since we are trying to fit two functions $|O(x,y)|$ and $\phi_O (x,y)$ to a single hologram frame $H(x,y)$, the problem does not have a unique solution and an additional constraint is required in order to obtain a desirable solution. For example in the present case of image plane holography, it is expected that the object of interest to be imaged will typically have sharp edges and hence constraints like minimal Total Variation (TV) may be applied to the object field $O(x,y)$. Such constraints can be included effectively if the object wave recovery is modeled as an optimization problem. The first optimization framework for interferogram analysis may be considered to be the regularized phase tracking (RPT) method \cite{Servin, Servin2014}. In this approach the amplitude and phase of an unknown object are separately recovered pixel by pixel by fitting a local polynomial model for the unknown object wave after the removal of low frequency intensity terms from the interferogram data. In another work on a complex wave retrieval algorithm \cite{Liebling}, the phase and amplitude of the object are separately recovered by local least-square fitting with a variable sized window after converting the nonlinear hologram equation into a set of linear equations. 
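To make the forward model of Eq. (\ref{hologram}) concrete, the following minimal Python sketch simulates an image plane hologram of a step-phase object with an off-axis plane reference beam. The grid size, phase step, carrier frequencies and light level anticipate the numerical example of Section 2; the normalisation to photon counts before Poisson sampling is our own assumption, and the sketch is an illustration rather than the code used for the results reported here.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 500                                  # grid size in pixels
y, x = np.mgrid[0:N, 0:N]

# Step-phase object: unit amplitude, phase 2*pi/3 on a central square
phi_O = np.zeros((N, N))
phi_O[125:375, 125:375] = 2 * np.pi / 3
O = np.exp(1j * phi_O)

# Off-axis plane reference beam R = exp[i 2 pi (f1 x + f2 y)]
f1 = f2 = 0.04                           # carrier frequency (1/pixel)
R = np.exp(1j * 2 * np.pi * (f1 * x + f2 * y))

# Hologram H = |R + O|^2, Poisson-sampled at ~1e4 photons/pixel
# (the scaling before sampling is an assumption of this sketch)
H = np.abs(R + O) ** 2
H = rng.poisson(1e4 * H / H.mean()).astype(float)
\end{verbatim}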
A nominal constrained optimization approach using the complex (or Wirtinger) derivatives \cite{Brandwood} for functional gradients was proposed in \cite{Kedar}. This approach demonstrated recovery of the object wave from a single-shot off-axis hologram even when the dc and the cross terms showed significant overlap in Fourier space. This method was further used to demonstrate highly accurate phase recovery from low light level interference data \cite{Mandeep2015}. An alternating amplitude and phase optimization strategy was developed in \cite{Bouman}, an interesting approach that again demonstrated resolution improvement over the Fourier transform method. The optimization methodology in \cite{Kedar, Mandeep2015} was modified further in \cite{Mandeep2017} where an adaptive optimization approach was proposed. In the optimization framework the problem of reconstruction of the object wave $O$ is typically modelled as minimization of a two-term cost function: \begin{align}\label{Cost} C(O,O^*) &= ||H - |O+R|^2||^{2}_{2} + \alpha \: TV(O,O^*) \\ &= C_{err}(O,O^*) + \alpha \: C_{TV}(O,O^*). \end{align} Here $C, C_{err}, C_{TV}$ are functions of the hologram $H$, the reference beam $R$ and the unknown complex object wave $O$. The first term $C_{err}$ refers to the L2-norm squared error or data-consistency term and $C_{TV}$ refers to the TV penalty. The positive number $\alpha$ determines the relative weight of the two terms of the cost function. The definition of TV we use for the present work is \cite{Shi}: \begin{equation} TV(O,O^*) = \sum_{i\,\in\, \text{all pixels}} [\;\; |\nabla_x O_i| \: + |\nabla_y O_i| \;\;] \:, \end{equation} where $\nabla_x$ and $\nabla_y$ are the $x$ and $y$ partial derivative operators respectively. In \cite{Mandeep2017}, it was shown that starting with any initial guess $O^{(0)}$ for the solution, if the cost function is iteratively reduced via a gradient descent scheme, the quality of the resultant solution depends on the numerical value of $\alpha$. The parameter $\alpha$ therefore needs to be determined empirically, which is often not desirable. In the present work we propose a novel approach - Mean Gradient Descent (MGD) - that does not require any weight parameter $\alpha$. The aim of MGD is not to achieve the minimum of any cost function as in Eq. (\ref{Cost}) but to instead achieve a balance between the two terms $C_{err}$ and $C_{TV}$. Our methodology is inspired by ASD-POCS \cite{Pan}, a successful image reconstruction algorithm for X-ray computed tomography. In an earlier work \cite{Mandeep2017}, a methodology very similar to ASD-POCS was employed for single-shot interferogram analysis. The main idea in ASD-POCS is to reach a solution point where the directions $- \nabla_{O^{\ast}} C_{err}$ and $- \nabla_{O^{\ast}} C_{TV}$ corresponding to descent directions for the reduction in $C_{err}$ and $C_{TV}$ make an obtuse angle. The descent directions corresponding to the two terms of the cost function can thus be thought to achieve an equilibrium. ASD-POCS achieves this balance by alternating minimization of $C_{err}$ and $C_{TV}$. Starting with $O = O^{(n)}$ the error term $C_{err}$ is reduced, leading to an intermediate solution $O^{(n)}_{int}$. The $C_{TV}$ associated with this intermediate solution is then recursively reduced to obtain the next guess $O^{(n+1)}$ such that the distances $d_1 = ||O^{(n)} - O^{(n)}_{int}||_2$ and $d_2 = ||O^{(n+1)} - O^{(n)}_{int}||_2$ are approximately matched in every iteration. 
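For reference, the anisotropic TV penalty above can be transcribed directly into NumPy. In the following sketch the use of forward differences with periodic wrap-around (via {\tt np.roll}) is our own choice, since the discretisation of $\nabla_x, \nabla_y$ is not fixed by the text:
\begin{verbatim}
import numpy as np

def tv(O):
    # Anisotropic total variation: sum over pixels of
    # |grad_x O| + |grad_y O|, using forward differences
    # with periodic boundaries.
    gx = np.roll(O, -1, axis=1) - O
    gy = np.roll(O, -1, axis=0) - O
    return np.abs(gx).sum() + np.abs(gy).sum()
\end{verbatim}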
Once the solution reaches the vicinity of the balance point, ensuring that $d_1 \approx d_2$ in further iterations causes negligible change in the solution as the two descent directions oppose each other. We show here that reaching the balance point as in ASD-POCS is possible without employing an alternating minimization scheme but by iteratively progressing the solution in a direction that bisects that of the two functional gradients. The MGD iteration is computationally much simpler as compared to the alternating minimization approach. While MGD is discussed here in the context of single-shot interferogram analysis, we believe that it may be useful for two-term optimization problems in general. In the context of interferogram analysis we show that the procedure is uniformly applicable to various interferometric configurations (off-axis as well as on-axis) and thus amenable to use with multiple digital holography/interferometry system configurations. The paper is organized as follows. In section 2 we provide a detailed description of the MGD algorithm with the help of numerical illustrations for different noise levels present in the interferogram data corresponding to an off-axis plane reference beam and a step phase object. In section 3, we demonstrate the performance of the MGD iteration for on-axis and near on-axis spherical reference beam configurations of the interferogram setup. Finally, in section 4, we summarize our observations and provide our insights on the results obtained by the MGD algorithm. \section{Description of MGD iteration} For simplicity we describe the MGD iteration with an illustration of a single-shot off-axis interferogram. For the object wave in the interferogram plane, we use a square-shaped binary phase object on a $500 \times 500$ pixel grid. We have taken unit amplitude across the entire object and a step phase of $2\pi/3 $ radians inside a square area of $250 \times 250$ pixels as shown in Fig. 1(a). A step phase object is used here as we are interested in image plane holography where the object wave $O$ may contain sharp edges. Although in a realistic case the edge sharpness will be limited by the numerical aperture of the imaging system, in this case we have assumed a phase object with an ideal phase step without any band limit, which is generally considered to be a hard problem for the traditional Fourier filtering approach. In the methods based on the optimization approach the step phase reconstruction is still a hard problem due to the involvement of empirical parameters \cite{Legarda, Galvan}. The reference beam is an off-axis plane wave given by $R = \exp[i 2\pi (f_1 x + f_2 y)]$ with $f_1 = f_2 = 0.04$ /pixel. The corresponding interferogram is shown in Fig. 1(b). The interferogram has been simulated with Poisson noise equivalent to an average light level of $10^4$ photons/pixel. Since the dc and cross-term peaks of this interferogram are separated in Fourier space, the cross term may be filtered as shown in Fig. 1(c) where a filter of radius $0.5$ times the distance between the dc and cross-term peaks in Fourier space has been used for illustration. A Hamming window is also applied to the filtered cross term prior to computing the inverse Fourier transform in order to mitigate ringing artifacts. The resultant phase map shown in Fig. 1(d) has poor resolution compared to the step phase object in Fig. 1(a). Recently a Hilbert transform processing methodology \cite{Baek} has been demonstrated which provides superior resolution to what is obtained in Fig. 
1(d); however, this procedure still requires a band-limited object wave. \begin{figure}[htbp] \centering \includegraphics[width=0.47\textwidth]{figure1.pdf} \caption{(a) Phase map of a square shaped object with a step phase of $2\pi/3$ radians over $250 \times 250$ pixels defined on a $500 \times 500$ pixel grid. (b) Hologram of the object in (a) with a tilted plane reference beam simulated with Poisson noise for an average light level of $10^4$ photons/pixel. (c) Zoomed-in version of the Fourier magnitude of the hologram showing the circular filter of radius $0.5$ times the distance between the dc and cross term peaks. For display purposes the Fourier magnitude of the hologram is shown as $|H'(f_x,f_y)|^{0.5}$. (d) Phase image reconstructed by the Fourier Transform method (FTM). } \label{fig:my_label} \end{figure} We will now proceed with a description of the MGD iteration for recovering the complex object wave $O$ from a single-shot interferogram $H$. In general the goal of MGD is to find a solution $O$ such that the costs associated with both the data consistency $C_{err}$ and the Total Variation $C_{TV}$ as in Eq. (\ref{Cost}) simultaneously achieve minimal numerical values and that additionally the two functional gradients associated with these costs balance each other. For a given guess solution $O$, we start by defining two unit vectors: \begin{equation}\label{erred_uv} \hat{\bf u}_1 =\frac{\nabla_{O^*} C_{err}(O,O^*)}{|| \nabla_{O^*}C_{err}(O,O^*)||_2} \end{equation} and \begin{equation}\label{tvred_uv} \hat{\bf u}_2 =\frac{\nabla_{O^*} C_{TV}(O,O^*)}{|| \nabla_{O^*}C_{TV}(O,O^*)||_2}. \end{equation} Here, the two functional gradients (or Wirtinger derivatives) are defined as: \begin{equation} \nabla_{O^*} C_{err}(O,O^*) = -2(H-|O + R|^2)(O+R), \end{equation} and \begin{equation} \nabla_{O^*} C_{TV}(O,O^*) = - \nabla.[\frac{\nabla_x O}{|\nabla_x O|}\hat{x}+\frac{\nabla_y O}{|\nabla_y O|}\hat{y}]. \end{equation} Next we introduce a vector ${\bf u}$ that is along the mean direction that bisects $\hat{\bf u}_1$ and $\hat{\bf u}_2$: \begin{equation}\label{mean_dir} {\bf u} = \frac{\hat{\bf u}_1\:+ \hat{\bf u}_2}{2}, \end{equation} and consider an iteration of the form: \begin{equation}\label{mgd} O^{(n+1)} = O^{(n)} \: - t\:||O^{(n)}||_2\: [{\bf u}]_{O=O^{(n)}}. \end{equation} Here the parameter $t$ denotes the step size in units of the norm $||O^{(n)}||_2$ of the guess solution after $n$ iterations. Note that since $\hat{\bf u}_1$ and $\hat{\bf u}_2$ are unit vectors, for any arbitrary value of $t$, the changes in the solution \begin{align} D_{1,n} &= ||\;\;\frac{t}{2}\:||O^{(n)}||_2\: [\hat{\bf u}_1]_{O=O^{(n)}} \;\;||_2, \nonumber \\ D_{2,n} &= ||\;\;\frac{t}{2}\:||O^{(n)}||_2\: [\hat{\bf u}_2]_{O=O^{(n)}} \;\;||_2 \end{align} due to the progression in the error and TV reducing directions are always guaranteed to be equal. The iteration is computationally much simpler as compared to the alternating minimization scheme required for ASD-POCS type algorithms. The scheme for selecting $t$ will be explained later. In order to understand the progression of the solution by MGD, we examine the behaviour of $C_{err}$, $C_{TV}$ and the angle $\theta$ between the directions of $\hat{\bf u}_1$ and $\hat{\bf u}_2$ as the iterations progress. 
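A minimal sketch of one MGD update, combining the two Wirtinger gradients above, is given below. The small constant {\tt eps} regularising $|\nabla O|$ in the TV gradient and the periodic boundary handling are our implementation choices, not specified in the text:
\begin{verbatim}
import numpy as np

def grad_err(O, R, H):
    # Wirtinger gradient of C_err = ||H - |O+R|^2||_2^2
    return -2.0 * (H - np.abs(O + R) ** 2) * (O + R)

def grad_tv(O, eps=1e-8):
    # Wirtinger gradient of the anisotropic TV penalty:
    # -div[ grad_x O/|grad_x O| , grad_y O/|grad_y O| ]
    gx = np.roll(O, -1, axis=1) - O
    gy = np.roll(O, -1, axis=0) - O
    nx = gx / (np.abs(gx) + eps)
    ny = gy / (np.abs(gy) + eps)
    # divergence via backward differences (adjoint of forward differences)
    div = (nx - np.roll(nx, 1, axis=1)) + (ny - np.roll(ny, 1, axis=0))
    return -div

def mgd_step(O, R, H, t):
    # One iteration of Eq. (mgd): move along the mean unit direction.
    g1 = grad_err(O, R, H)
    g2 = grad_tv(O)
    u1 = g1 / np.linalg.norm(g1)
    u2 = g2 / np.linalg.norm(g2)
    u = 0.5 * (u1 + u2)
    return O - t * np.linalg.norm(O) * u
\end{verbatim}
A full reconstruction would simply iterate {\tt mgd\_step} from a random initial guess, holding $t$ fixed initially and shrinking it according to the schedule discussed below.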
Since the two vectors $\hat{\bf u}_1$ and $\hat{\bf u}_2$ are complex-valued, for the purpose of calculating the angle between them, we form new vectors by concatenating their real and imaginary parts: \begin{align}\label{v1v2} {\bf v}_1 = [\textrm{real}(\hat{\bf u}_{1j}) , \textrm{imag}(\hat{\bf u}_{1j})] \nonumber \\ {\bf v}_2 = [\textrm{real}(\hat{\bf u}_{2j}) , \textrm{imag}(\hat{\bf u}_{2j})]. \end{align} Here the index $j$ runs over the computational window ($j = 1, 2, \ldots, 500^2$). The angle between $\hat{\bf u}_1$ and $\hat{\bf u}_2$ is then defined as: \begin{equation}\label{theta} \theta = \arccos[\frac{ {\bf v}_1 \cdot {\bf v}_2 } {||{\bf v}_1 ||_2 ||{\bf v}_2 ||_2}]. \end{equation} As seen in the illustrations below, we observe that following this scheme leads to successive solutions where both $C_{err}$ and $C_{TV}$ nominally decrease and eventually the angle between $\hat{\bf u}_1$ and $\hat{\bf u}_2$ becomes obtuse, implying that the resultant solution balances the two terms $C_{err}$ and $C_{TV}$. We term this scheme ``Mean Gradient Descent'' in view of the definition Eq. (\ref{mean_dir}) of the vector ${\bf u}$ and the fact that the two costs are seen to nominally progress to their minimal possible values. \begin{figure}[htbp] \centering \includegraphics[width=0.47\textwidth]{figure2.pdf} \caption{(a) Phase and (b) amplitude of the solution for the object wave after $200$ MGD iterations with the step size $t$ kept fixed.} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.43\textwidth]{figure3.pdf} \caption{Behaviour of (a) logarithm of $C_{err}$ with iteration number, and (b) logarithm of $C_{TV}$ with iteration number for three light levels of $10^{3}$, $10^4$ and $10^5$ photons/pixel. (c) Variation of the angle between $\hat{\bf u}_1$ and $\hat{\bf u}_2$ with the number of iterations corresponding to the average light levels of $10^{3}$, $10^4$ and $10^5$ photons/pixel. } \end{figure} For a typical illustration with Poisson noise corresponding to the average light level of $10^4$ photons/pixel in the hologram data frame, we initiate the MGD iteration with a random complex valued function with amplitude and phase distributed uniformly in $[0,1]$ and $[0,2\pi]$ respectively. As per Eq. (\ref{mgd}), in each iteration the solution is simply pushed in the direction $-{\bf u} = -(\hat{\bf u}_1\:+ \hat{\bf u}_2)/{2}$. In the following discussion we provide our thought process for selecting the step size $t$, which is kept constant for the initial iterations and then reduced slowly. As already explained before, the problem of determining the amplitude $|O(x,y)|$ and phase $\phi_O (x,y)$ from a single hologram data frame $H(x,y)$ does not have a unique solution even when the reference beam $R(x,y)$ is known exactly. There can be multiple combinations of the amplitude and phase functions that may satisfy the hologram data. Starting with any random guess solution, if we reduce $C_{err}$ alone by a gradient descent scheme, we observe that we reach a solution representing a local minimum of $C_{err}$ that does not simultaneously have a low value of $C_{TV}$. Progression in the mean direction, however, leads the iteration away from such undesirable solutions. We note that since $\hat{\bf u}_1$ and $\hat{\bf u}_2$ are unit vectors, the maximum magnitude of ${\bf u}$ is equal to $1$. We therefore initiate $t$ with a nominal trial value of $0.1$, suggesting that the solution can maximally change in norm by $10 \%$ in a single iteration. 
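The angle diagnostic of Eqs. (\ref{v1v2})--(\ref{theta}) can be computed directly; a short sketch (clipping the normalised dot product is our addition, for numerical safety):
\begin{verbatim}
import numpy as np

def angle(u1, u2):
    # Angle between complex direction fields: stack real and
    # imaginary parts, then take the arccos of the normalised
    # dot product, as in Eqs. (v1v2)-(theta).
    v1 = np.concatenate([u1.real.ravel(), u1.imag.ravel()])
    v2 = np.concatenate([u2.real.ravel(), u2.imag.ravel()])
    c = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(c, -1.0, 1.0))
\end{verbatim}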
When this initial value of $t$ is held constant for the first few hundred iterations, we find that the solution reaches close to the desirable solution in the solution landscape. The phase and amplitude maps corresponding to the resultant solution after $200$ iterations with a fixed $t$ value are shown in Fig. 2 (a), (b) respectively. We note that the solution has the expected features of a step phase object but additionally contains undesirable fringe-like artifacts. The blue curves in Fig. 3(a), (b), (c) show the plots for $C_{err}$, $C_{TV}$ and $\theta$ as a function of iteration number respectively. Fig. 3 shows plots of these quantities for three different noise levels as we will discuss later. For now we will concentrate on the blue curves in these plots, which correspond to Poisson noise at an average light level of $10^4$ photons/pixel. At the end of $200$ iterations we observe that the three curves for $C_{err}$, $C_{TV}$ and $\theta$ have nearly flattened. (The blue, red and green curves in Fig. 3 representing different noise levels are nearly overlapping in this region.) This behaviour suggests that the solution has essentially stagnated. While the solution appears to be close to what we want (with some additional artifacts), the fixed value of $t$ is too large at this point and the solution is unable to approach the desired minimum in $C_{err}$ or $C_{TV}$. In the further iterations, we check the value of the error term $C_{err}$. If the numerical value of $C_{err}$ has increased compared to that in the previous iteration, the step size $t$ is decreased slowly by a constant factor of $0.99$. The plots in Fig. 3 (a), (b), (c) start showing interesting behaviour at this point - the numerical values of $C_{err}$ and $C_{TV}$ are seen to start nominally decreasing while the angle $\theta$ starts increasing as we reduce $t$. The oscillations in these curves right after $t$ starts reducing reflect the fact that $t$ is still larger than what is desired and is being slowly adjusted to a lower value. It is important to note that every iteration simply involves a fixed straightforward computation of $\hat{\bf u}_1$ and $\hat{\bf u}_2$ followed by progressing the solution in the mean direction; thus the computational cost per iteration is minimal. In Fig. 4 (a),(c) and Fig. 4 (b),(d) we show the amplitude and phase maps of the resultant solutions after $500$ and $2000$ MGD iterations for the data with Poisson noise corresponding to the average light level of $10^4$ photons/pixel. The computational time for a MATLAB implementation on a $3.5$ GHz processor and $16$ GB RAM was observed to be approximately $0.07$ seconds per iteration. The profile of the phase function after $2000$ iterations along the dotted line in Fig. 4 (d) is plotted in Fig. 4 (e) and shows excellent recovery of the sharp edge in the phase function. \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{figure4.pdf} \caption{Progression of the solution with the MGD algorithm for a Poisson noise realization with light level of $10^4$ photons/pixel. Amplitude of the solution after (a) $500$ and (b) $2000$ iterations. Phase reconstruction after (c) $500$ and (d) $2000$ iterations. (e) Phase profile of the resultant solution in (d) along the dotted line. Note that the solution contains sharp edges as compared to the FTM solution shown in Fig. 1(d). } \end{figure} In order to understand the sensitivity of the MGD approach to noise we considered two additional Poisson noise realizations of the interferogram in Fig. 
1(b) corresponding to average light levels of $10^3$ and $10^5$ photons/pixel. The behaviour of $C_{err}$, $C_{TV}$ and $\theta$ for these cases as a function of iteration number is also shown in Fig. 3 (a), (b), (c) respectively (red and green curves). We observe that with increasing light level, the resultant numerical values of $C_{err}$ and $C_{TV}$ are consistently lower and the numerical value of $C_{TV}$ increasingly approaches the ground truth value of the TV of the simulated step-phase object (shown in magenta). We note that the rise in the angle $\theta$ after $200$ iterations, when we start reducing the step size $t$, is fastest for the data with the highest relative noise ($10^3$ photons/pixel). This is expected since the balancing of the two terms of the cost function should start happening at a higher value of $C_{err}$ for data with higher noise. Table 1 shows the RMS phase error between the ground truth phase map and the reconstructed phase map for the three noise realizations; the RMS error performance is best for the data with the lowest relative noise, as expected. The RMS error performance for all three cases is excellent and in fact superior to the expected performance purely based on shot noise considerations \cite{Walls}. Performance superior to the single-pixel based shot noise level is expected due to the sparsity of the object wave, as already demonstrated in \cite{Mandeep2015}. A more detailed analysis of the performance of MGD with respect to the light level and the sparsity of the object wave will be taken up in future. For a random initial guess, the initial angle between $\hat{{\bf u}}_1$ and $\hat{{\bf u}}_2$ was observed to be close to $90^\circ$, suggesting that the two directions were independent. As the iterations progressed, the angle $\theta$ was initially acute but eventually became obtuse and close to $180^\circ$ as the number of iterations was made very large. This behaviour of the angle $\theta$ confirms our main motivation for using the MGD approach. \begin{table}[htbp] \centering \caption{\bf Phase rms error values after $2000$ iterations of the MGD algorithm corresponding to three different noise levels added to the hologram data. } \begin{tabular}{ |p{3cm}|p{1cm}|p{1cm}|p{1cm}| } \hline Light level (N photons/pixel) & $10^3$ & $10^4$ & $10^5$ \\ \hline RMS error (rad) & $0.0125$ &$0.0042$ &$0.0021$ \\ \hline Shot Noise level $1/\sqrt{N}$ (rad) & $0.0316$ & $0.010$ & $0.0032$ \\ \hline \end{tabular} \end{table} \section{Performance of MGD for on-axis and near on-axis interferograms with spherical reference beam} From the previous section it is clear that the working of MGD is robust to the various noise levels in the hologram data. It should also be noted that the MGD approach never utilized the fact that we were analyzing an off-axis interferogram. As long as the form of the reference beam is known, MGD should be able to handle the hologram data in the same manner as in the off-axis case, as we illustrate in this section. We now test the evolution of the MGD solution for two interferogram recording configurations shown in Fig. 5 (a), (b) where the reference beam is in the form of on-axis and near on-axis spherical beams. The on-axis spherical wave is taken to be of the form $R = \exp(i 2 \pi p (x^2 +y^2))$ and the near on-axis spherical wave has the form $R = \exp(i 2 \pi q [(x-51)^2 +(y-51)^2])$ with $p = q = 0.0004/\textrm{pixel}^{2}$. The interferograms in Fig. 
5(a), (b) are generated with Poisson random noise corresponding to an average light level of $10^4$ photons/pixel. Note that for both the on-axis and the near on-axis spherical reference beam configurations, the dc and the cross terms in the interferograms substantially overlap in the Fourier domain, as clearly visible in Fig. 5(c),(d). As a result there is no effective Fourier filtering strategy (as in Fig. 1) that can separate out the object wave even in an approximate sense. The MGD iteration is independent of such considerations and high quality object wave reconstructions are obtained as shown in Fig. 5 (e), (f). The edge profiles corresponding to the resultant solutions in Fig. 5 (e), (f) are plotted in Fig. 5(g),(h) along the dotted lines respectively. The phase profiles clearly show the excellent step phase recovery for both cases. The behaviour of $C_{err}$, $C_{TV}$ and the angle $\theta$ for the two configurations as the iterations progress is observed to be similar to that of the previously illustrated off-axis case in Fig. 3(a)-(c) respectively. The RMS phase error, with respect to the ground truth solution, calculated after $2500$ MGD iterations for the reconstructed phase solutions in Fig. 5(e), (f) is $0.0040$ rad and $0.0048$ rad respectively, which is similar to the numerical values in Table 1. The computational time per iteration for both cases illustrated above is identical to the off-axis case as the steps involved in MGD are independent of the nature of the interference pattern. From these results, MGD appears to be a robust methodology that works uniformly with multiple interferometric configurations and noise levels. The algorithm is therefore expected to have applicability for interferometric systems in various geometrical configurations. To the best of our knowledge, the MGD iteration as presented here has not been explored in the prior literature. At present we are reporting results of MGD with illustrative examples in the absence of a formal proof of its convergence properties. Such proofs, if possible, will have to be worked out in future. Our numerical trials in this and the previous section, however, suggest that MGD works uniformly in a robust manner and provides excellent complex object wave recoveries from single-shot interferogram data. We believe that MGD as a concept may be useful beyond interferometry in a multitude of optimization problems that are similar in nature to the present problem as described in Eq. (\ref{Cost}). \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{figure5.pdf} \caption{ Hologram of the step phase object with (a) on-axis and (b) near on-axis spherical reference wave simulated with Poisson noise with an average light level of $10^4$ photons/pixel. (c),(d) Fourier magnitudes of the holograms in (a) and (b) respectively showing overlap between the dc and cross terms. (e),(g) and (f),(h) are the reconstructed phase maps and their corresponding phase profiles along the dotted lines respectively after $2500$ MGD iterations.} \end{figure} \section{Discussion and future outlook} In summary we have presented a new optimization approach that we call Mean Gradient Descent (MGD) for single-shot interferogram analysis. Unlike the usual optimization approaches which aim to minimize a cost function, we aim to reach a solution point where the data consistency and constraint penalty functions balance each other. 
This is achieved by iteratively progressing the solution in a direction that bisects the descent directions for the data consistency (or error) and the penalty terms. This approach does not require any free parameters. The MGD scheme works uniformly for varying noise levels in the data as well as for data representing different interferometry configurations. MGD involves a straightforward computation per iteration which is very simple to implement compared to alternating minimization schemes. As illustrated in our work, MGD showed excellent object wave recoveries when interferograms in different configurations (on-axis or off-axis) were used. For the step phase object used, MGD showed excellent phase step recovery indicating full pixel resolution performance. Since MGD effectively utilized the expected object wave sparsity for phase reconstruction, an rms phase error performance better than the usual definition of the single-pixel based shot-noise level was observed. A detailed study of this aspect will be carried out in future. We believe that a robust approach like MGD can lead to widespread employment of optimization based methodologies in interferometry and digital holographic imaging applications. The associated devices can thus operate in single-shot mode with full pixel resolution performance as well as superior accuracy. MGD as a concept can potentially work for a number of optimization problems as we will explore in future.
\section{Introduction}\label{section: Introduction} By $\mathcal H$ we will always denote an undirected graph without multiple edges (and by abuse of notation we will also denote its set of vertices by $\mathcal H$). In this paper we focus on $\mathcal H$ which is four-cycle free, that is, it is finite, it has no self-loops and the four-cycle, denoted by $C_4$, is not a subgraph of $\mathcal H$. Fix $d\geq 2$. The basic object of study is $X_\mathcal H$, the space of graph homomorphisms from $\mathbb{Z}^d$ to $\mathcal H$. Here by $\mathbb{Z}^d$ we will mean both the group and its standard Cayley graph. Such a space of configurations $X_\mathcal H$, referred to as a hom-shift, can be obtained by forbidding certain patterns on edges of $\mathcal H^{\mathbb{Z}^d}$. If $\mathcal H$ is a finite graph then $X_\mathcal H$ is a nearest neighbour shift of finite type. In addition it is also `isotropic' and `symmetric', that is, given vertices $a, b \in \mathcal H$, if $a$ is not allowed to sit next to $b$ in $X_\mathcal H$ in some coordinate direction, then $a$ is not allowed to sit next to $b$ in any coordinate direction. Most of the concepts related to shift spaces are introduced in Section \ref{section: Hom-shifts}. Related to a shift space $X$ is its topological entropy denoted by $h_{top}(X)$ which measures the growth rate of the number of patterns allowed in $X$ with the size of the underlying shape (usually rectangular). For a given shift space, its computation is a very difficult task (look for instance in \cite{pavlovhardsquare2012} and the references within). We will focus on a different aspect: as in \cite{covensmital}, a shift space $X$ is called entropy minimal if for all shift spaces $Y\subsetneq X$, $h_{top}(Y)<h_{top}(X)$. Thus if a shift space has zero entropy and is entropy minimal then it is a topologically minimal system (any proper subshift would have to have negative entropy, which is impossible). For $d=1$, it is well known that all irreducible nearest neighbour shifts of finite type are entropy minimal \cite{LM}. However not much is known in higher dimensions: shift spaces with a strong mixing property called uniform filling are entropy minimal \cite{Schraudner2010minimal}; however, nearest neighbour shifts of finite type with weaker mixing properties like block-gluing may not be entropy minimal \cite{boyle2010multidimensional}. If $\mathcal H$ is a four-cycle free graph then $X_\mathcal H$ is not even block-gluing (we do not prove this but it is implied by our results). There has been some recent work \cite{lightwoodschraudnerentropy} which describes some conditions which are equivalent to entropy minimality for shifts of finite type. Our first main result is Theorem \ref{theorem:four cycle free entropy minimal} which states that $X_\mathcal H$ is entropy minimal for all connected four-cycle free graphs $\mathcal H$. Our approach to this seemingly combinatorial question will be via thermodynamic formalism, using measures of maximal entropy, more generally `adapted' Markov random fields. They are defined and described in Section \ref{section:thermodynamic formalism}. Our second main result is with regard to the pivot property (Section \ref{section: the pivot property}). A shift space $X$ is said to have the pivot property if for all distinct configurations $x,y\in X$ which differ at finitely many sites, there exists a sequence $x=x^1, x^2, \ldots, x^n=y\in X$ such that successive configurations $x^i, x^{i+1}$ differ exactly at a single site. 
We will prove that for four-cycle free graphs $\mathcal H$, $X_{\mathcal H}$ has the pivot property (Theorem \ref{theorem: pivot property for four cycle free}). Many properties similar to the pivot property have appeared in the literature (often by the name local-move connectedness) \cite{brightwell2000gibbs,dominolocal2010,ordentlich2014data,Shiffieldribbon2002}. For instance consider the following problem: Let $G$ be a finite undirected graph without multiple edges and self-loops. Given two graph homomorphisms $x,y$ from $G$ to $\mathcal H$, can we find graph homomorphisms $x=x^1, x^2, \ldots, x^n=y$ from $G$ to $\mathcal H$ such that successive configurations $x^i, x^{i+1}$ differ exactly at a single site? Such a problem is called a homomorphism reconfiguration problem. If $\mathcal H$ is four-cycle free it was recently shown in \cite{Marcinfourcyclefree2014} that the homomorphism reconfiguration problem is solvable in time polynomial in the size of the graph $\mathcal H$. Our interest in such problems comes from the connections between the pivot property and the study of Markov and Gibbs cocycles \cite{chandgotia2013Markov}. A critical part of our proofs of both Theorems \ref{theorem:four cycle free entropy minimal} and \ref{theorem: pivot property for four cycle free} depends on the identification of the associated height functions (defined in Section \ref{Section:heights}). Some of these ideas come from \cite{chandgotia2013Markov}: Let $C_n$ denote the cycle with vertices $0, 1, \ldots, n-1$. If $n \neq 1, 2, 4$ then it was proven that $X_{C_n}$ has the pivot property. Further, Lemma 6.7 in \cite{chandgotia2013Markov} implied that it is entropy minimal as well. We give a brief description of the latter: Given a configuration $x\in X_{C_n}$ it was proved that there exists a corresponding height function $h_x \in X_{\mathbb{Z}}$ such that $h_x\mod n= x$ (for $n=3$ also look at \cite{schmidt_cohomology_SFT_1995}). Further, given any ergodic measure $\mu$ (assuming `adaptedness', cf. Section \ref{section:thermodynamic formalism}) on $X_{C_n}$ it was shown that a well-defined notion of slope (bounded between $-1$ and $1$) exists which measures the average rate of increase of the height in every direction. If the slope is maximal in some direction it was proven that $\mu$ is frozen and otherwise it was shown that it is fully supported; frozen meaning that if $x, y\in supp(\mu)$ differ only at finitely many sites then $x=y$. From standard results in thermodynamic formalism it follows that $X_{C_n}$ is entropy minimal. These ideas do not immediately translate to four-cycle free graphs. For this we will develop a different notion of `heights': Fix a connected four-cycle free graph $\mathcal H$. For all $u \in \mathcal H$ let $D_\mathcal H(u)$ denote the ball (for the graph distance) of radius $1$ around $u$. As in algebraic topology, $C$ is a covering space of $\mathcal H$ if there is a graph homomorphism (called the covering map) $f: C\longrightarrow \mathcal H$ such that for all $u \in \mathcal H$, $f^{-1}(D_{\mathcal H}(u))$ is a disjoint union of a constant number of subgraphs of $C$ isomorphic to $D_{\mathcal H}(u)$ via $f$. Given a graph $\mathcal H$, its universal cover (denoted by $E_\mathcal H$) is the unique covering space of $\mathcal H$ which is a tree (Section \ref{section:universal covers} and \cite{Angluin80}). Denote by $\pi:E_\mathcal H\longrightarrow \mathcal H$ the corresponding covering map. $\pi$ induces a map from $X_{E_\mathcal H}$ to $X_\mathcal H$ (also denoted by $\pi$). 
It is not difficult to see that $\pi(X_{E_\mathcal H})\subset X_\mathcal H$; we will prove that the map is surjective and that the preimage of a configuration in $X_\mathcal H$ is unique once we fix the `lift' in $X_{E_\mathcal H}$ at any vertex of $\mathbb{Z}^d$. Thus to every $x\in X_\mathcal H$ we can associate $\tilde x \in X_{E_\mathcal H}$ such that $\pi(\tilde x)= x$ and a `height' function $h_x: \mathbb{Z}^d\times \mathbb{Z}^d \longrightarrow \mathbb{Z}$ such that $h_x(\vec i , \vec j)$ is the graph distance between $\tilde x_{\vec{i}}$ and $\tilde x_{\vec{j}}$. From here on the steps for the proofs of Theorems \ref{theorem:four cycle free entropy minimal} and \ref{theorem: pivot property for four cycle free} are similar to the steps for the corresponding proofs in \cite{chandgotia2013Markov}, but the proofs of the individual steps are quite different because, unlike in \cite{chandgotia2013Markov}, the height functions are not additive but subadditive, meaning $$h_x({\vec{i}},{\vec{j}})\leq h_x({\vec{i}}, \vec k)+h_x(\vec k, {\vec{j}}).$$ To streamline the proofs, we use graph folding \cite{nowwinkler}; much of this is discussed in Section \ref{section:Folding, Entropy Minimality and the Pivot Property}. \section{Shifts of Finite Type and Hom-Shifts} \label{section: Hom-shifts} In this paper $\mathcal H$ will always denote an undirected graph without multiple edges and without isolated vertices. For such a graph we will denote the adjacency relation by $\sim_{\mathcal H}$ and the set of vertices of $\mathcal H$ by $\mathcal H$ (abusing notation). We identify $\mathbb{Z}^d$ with the set of vertices of the Cayley graph with respect to the standard generators $\vec e_1,\vec e_2, \ldots, \vec e_d$, that is, $\vec{i}\sim_{\mathbb{Z}^d}\vec{j}$ if and only if $\|\vec{i}-\vec{j}\|_1=1$ where $\|\cdot\|_1$ is the $l^1$ norm. We drop the subscript in $\sim_{\mathcal H}$ when $\mathcal H=\mathbb{Z}^d$. Let $D_n$ and $B_n$ denote the $\mathbb{Z}^d$-balls of radius $n$ around $\vec{0}$ in the $l^1$ and the $l^\infty$ norm respectively. The graph $C_n$ will denote the $n$-cycle where the set of vertices is $\{0, 1, 2, \ldots, n-1\}$ and $i\sim_{C_n} j$ if and only if $i \equiv j \pm 1\!\!\mod n$. The graph $K_n$ will denote the complete graph with $n$ vertices where the set of vertices is $\{1, 2, \ldots, n\}$ and $i \sim_{K_n} j$ if and only if $i \neq j$. Let ${\mathcal A}$ be a finite \emph{alphabet} (with the discrete topology) and ${\mathcal A}^{\mathbb{Z}^d}$ be given the product topology, making it compact. We will refer to elements $x\in {\mathcal A}^{\mathbb{Z}^d}$ as \emph{configurations}, denoting them by $(x_{\vec{i}})_{\vec{i}\in \mathbb{Z}^d}$ where $x_{\vec{i}}$ is the value of $x$ at $\vec{i}$. For all $\vec{i}\in \mathbb{Z}^d$ the map $\sigma^{\vec{i}}:{\mathcal A}^{\mathbb{Z}^d}\longrightarrow {\mathcal A}^{\mathbb{Z}^d}$ given by $$(\sigma^{\vec{i}}(x))_{\vec{j}}:=x_{\vec{i}+\vec{j}}$$ is called the \emph{shift map} and defines a $\mathbb{Z}^d$-action on ${\mathcal A}^{\mathbb{Z}^d}$. Closed subsets of ${\mathcal A}^{\mathbb{Z}^d}$ which are invariant under the shift maps are called \emph{shift spaces}. A \emph{sliding block code} from a shift space $X$ to a shift space $Y$ is a continuous map $f: X\longrightarrow Y$ which commutes with the shifts, that is, $\sigma^{{\vec{i}}}\circ f= f\circ \sigma^{\vec{i}}$ for all ${\vec{i}}\in \mathbb{Z}^d$. A surjective sliding block code is called a \emph{factor map} and a bijective sliding block code is called a \emph{conjugacy}. 
We note that conjugacy defines an equivalence relation between shift spaces; indeed a conjugacy has a continuous inverse since it is a continuous bijection between compact sets. There is an alternate description of shift spaces using forbidden patterns: A \emph{pattern} is an element of ${\mathcal A}^F$ for some finite set $F\subset \mathbb{Z}^d$. Given a pattern $a\in {\mathcal A}^F$, we will often denote both the pattern and its corresponding cylinder set by $[a]_F$ or by $[x]_F$ when $x|_F=a$. For any set $F\subset \mathbb{Z}^d$ and $v\in {\mathcal A}$, $x|_F=v$ will mean that $x_{\vec{i}}=v$ for all ${\vec{i}}\in F$. For a set of (forbidden) patterns ${\mathcal F}$ define $$X_{\mathcal F}:=\{x\in {\mathcal A}^{\mathbb{Z}^d}\:\Big{|}\: \sigma^{\vec{i}}(x)|_F\notin {\mathcal F} \text{ for all }F\subset \mathbb{Z}^d\text{ and }{\vec{i}} \in \mathbb{Z}^d\}.$$ It can be proved that a subset $X\subset {\mathcal A}^{\mathbb{Z}^d}$ is a shift space if and only if there exists a set of forbidden patterns ${\mathcal F}$ such that $X_{\mathcal F}= X$. Note that given two distinct sets of patterns ${\mathcal F}_1, {\mathcal F}_2$ it is possible that $X_{{\mathcal F}_1}=X_{{\mathcal F}_2}$. A \emph{shift of finite type} is a shift space $X$ such that $X=X_{{\mathcal F}}$ for some finite set ${\mathcal F}$. A \emph{nearest neighbour shift of finite type} is a shift of finite type $X$ such that $X=X_{{\mathcal F}}$ for some set ${\mathcal F}$ consisting of patterns on single edges and vertices of $\mathbb{Z}^d$. It follows from a simple recoding argument that any shift of finite type is conjugate to a nearest neighbour shift of finite type \cite{schimdt_fund_cocycle_98}. In this paper, we will focus on a special class of nearest neighbour shifts of finite type where the forbidden patterns are the same in every `direction': Given a graph $\mathcal H$ let $$X_{\mathcal H}:=\{x\in\mathcal H^{\mathbb{Z}^d}\:|\: x_{\vec {i}}\sim_\mathcal H x_{\vec{j}}\text{ for all }\vec{i}\sim \vec{j}\}.$$ Such spaces will be called \emph{hom-shifts}. Note that $X_\mathcal H$ is the space of graph homomorphisms from $\mathbb{Z}^d$ to $\mathcal H$. If $\mathcal H$ is finite and $${\mathcal F}_\mathcal H:=\{[v,w]_{\vec{0},\vec{e}_j}\:|\:v\nsim_\mathcal H w, 1\leq j\leq d\}$$ then $X_{\mathcal H}= X_{{\mathcal F}_{\mathcal H}}$. These are exactly the nearest neighbour shifts of finite type with symmetric and isotropic constraints. For example if the graph $\mathcal H$ is given by Figure \ref{Figure: Hard Square} then $X_\mathcal H$ is the hard square shift, that is, the space of configurations with alphabet $\{0,1\}$ such that adjacent symbols cannot both be $1$. $X_{K_n}$ is the space of $n$-colourings of the graph, that is, configurations with alphabet $\{1,2, \ldots, n\}$ where all adjacent colours are distinct. We note that symmetry and isotropy are not invariant under conjugacy. ${\mathcal F}$ will always denote a set of patterns and $\mathcal H$ will always denote a graph, so there will not be any ambiguity in the notations $X_{\mathcal F}, X_\mathcal H$. A graph $\mathcal H$ is called \emph{four-cycle free} if it is finite, it has no self-loops and $C_4$ is not a subgraph of $\mathcal H$. For instance $K_4$ is not a four-cycle free graph. 
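Since $\mathcal H$ is finite, four-cycle freeness is easy to check algorithmically: a finite simple graph without self-loops contains $C_4$ as a subgraph if and only if some pair of distinct vertices has at least two common neighbours. The following Python sketch is our illustration of this test (the adjacency-dictionary representation is an assumption):
\begin{verbatim}
from itertools import combinations

def is_four_cycle_free(adj):
    # adj maps each vertex to its set of neighbours (no self-loops).
    # A C4 subgraph exists iff two distinct vertices share >= 2 neighbours.
    return all(len(adj[u] & adj[v]) < 2 for u, v in combinations(adj, 2))

# K4 contains a four-cycle, so the test fails for it:
K4 = {i: {j for j in range(4) if j != i} for i in range(4)}
assert not is_four_cycle_free(K4)
\end{verbatim}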
\begin{figure}[h] \centering \includegraphics[angle=0, width=.1\textwidth]{hardsquare.pdf}\caption{Graph for the Hard Square Shift} \label{Figure: Hard Square} \end{figure} The set of \emph{globally allowed patterns} of a shift space $X$ on a set $A\subset \mathbb{Z}^d$ is $$\mathcal L_A(X):=\{a\in {\mathcal A}^{A}\:\big{|}\:\text{ there exists }x\in X\text{ such that }x|_A=a\}.$$ Its \emph{language} is the set of all finite patterns appearing in $X$, that is, $$\mathcal L(X):=\bigcup_{A\subset \mathbb{Z}^d \text{ finite}}\mathcal L_A(X).$$ We comment that this is different from the set of locally allowed patterns: Let $X$ be a shift space with a forbidden list ${\mathcal F}$. Given a finite set $A$, a pattern $a\in {\mathcal A}^A$ is said to be \emph{locally allowed} if no pattern from ${\mathcal F}$ appears in $a$. In general it is undecidable for shifts of finite type whether a locally allowed pattern belongs to $\mathcal L(X)$ \cite{Robinson1971}; however it is decidable when $X$ is a hom-shift, where it is sufficient to check whether the pattern extends to a locally allowed pattern on $B_n$ for some $n$. The topological entropy of the shift space $X$ is the log growth rate of the number of allowed patterns in $X$, that is, $$h_{top}(X):=\lim_{n\longrightarrow \infty}\frac{\log|\mathcal L_{B_n}(X)|}{|B_n|}.$$ The existence of the limit follows from subadditivity arguments via the well-known multivariate version of Fekete's Lemma \cite{silviofekete}. Moreover the topological entropy is an invariant under conjugacy (for $d=1$ look at Proposition 4.1.9 in \cite{LM}; the proof extends to higher dimensions). We remark that the computation of this invariant for shifts of finite type in $d>1$ is a hard problem and very little is known \cite{pavlovhardsquare2012}; however there are algorithms to compute approximating upper and lower bounds of the topological entropy of hom-shifts \cite{symmtricfriedlan1997,louidor2010improved}. Further if $\mathcal H$ is a finite connected graph with at least two edges, then $h_{top}(X_\mathcal H)>0$: \begin{prop}\label{proposition: hom-space positive entropy} Let $\mathcal H$ be a finite graph with distinct vertices $a, b$ and $c$ such that $a\sim_\mathcal H b$ and $b\sim_\mathcal H c$. Then $h_{top}(X_{\mathcal H})\geq\frac{\log{2}}{2}$. \end{prop} \begin{proof} It is sufficient to see this for a graph $\mathcal H$ with exactly three vertices $a$, $b$ and $c$ such that $a\sim_\mathcal H b$ and $b\sim_\mathcal H c$. For such a graph any configuration in $X_\mathcal H$ is composed of $b$ on one partite class of $\mathbb{Z}^d$ and a free choice between $a$ and $c$ for vertices on the other partite class. Then $$|\mathcal L_{B_n}(X_\mathcal H)|=2^{{\lfloor\frac{(2n+1)^d}{2}\rfloor}}+2^{{\lceil\frac{(2n+1)^d}{2}\rceil}}$$ proving that $h_{top}(X_{\mathcal H})=\frac{\log{2}}{2}$. \end{proof} A shift space $X$ is called \emph{entropy minimal} if for all shift spaces $Y\subsetneq X$, $h_{top}(X)>h_{top}(Y)$. In other words, a shift space $X$ is entropy minimal if forbidding any word causes a drop in entropy. From \cite{quastrow2000} we know that every shift space contains an entropy minimal shift space with the same entropy; that paper also provides a characterisation of same-entropy factor maps on entropy minimal shifts of finite type. One of the main results of this paper is the following: \begin{thm}\label{theorem:four cycle free entropy minimal} Let $\mathcal H$ be a connected four-cycle free graph. Then $X_\mathcal H$ is entropy minimal. 
\end{thm} For $d=1$ all irreducible shifts of finite type are entropy minimal \cite{LM}. A necessary condition for the entropy minimality of $X_\mathcal H$ is that $\mathcal H$ is connected. \begin{prop}\label{proposition:entropy requires connectivity} Suppose $\mathcal H$ is a finite graph with connected components $\mathcal H_1, \mathcal H_2, \ldots \mathcal H_r$. Then $h_{top}(X_\mathcal H)=\max_{1\leq i \leq r}h_{top}(X_{\mathcal H_i})$. \end{prop} This follows from the observation that $$\max_{1\leq i \leq r}|\mathcal L_{B_n}({X_{\mathcal H_i}})|\leq |\mathcal L_{B_n}({X_\mathcal H})|= \sum_{i=1}^r|\mathcal L_{B_n}({X_{\mathcal H_i}})|\leq r\max_{1\leq i \leq r}|\mathcal L_{B_n}({X_{\mathcal H_i}})|.$$ \section{Thermodynamic Formalism}\label{section:thermodynamic formalism} Here we give a brief introduction to thermodynamic formalism. For more details one can refer to \cite{Rue,walters-book}. By $\mu$ we will always mean a shift-invariant Borel probability measure on a shift space $X$. The \emph{support} of $\mu$, denoted by $supp(\mu)$, is the intersection of all closed sets $Y \subset X$ for which $\mu(Y)= 1$. Note that $supp(\mu)$ is a shift space as well. The \emph{measure theoretic entropy} is \begin{equation*} h_\mu:=\lim_{i \rightarrow \infty}\frac{1}{|D_i|}H^{D_i}_{\mu}, \end{equation*} \noindent where $H^{D_i}_{\mu}$ is the Shannon entropy of $\mu$ with respect to the partition of $X$ generated by the cylinder sets on $D_i$, which is given by: \begin{equation*} H^{D_i}_{\mu}:=\sum_{a\in \mathcal L_{D_i}(X)}-\mu([a]_{D_i})\log{\mu([a]_{D_i})}, \end{equation*} with the understanding that $0\log 0=0$. A shift-invariant probability measure $\mu$ is a \emph{measure of maximal entropy} of $X$ if the maximum of $\nu \mapsto h_\nu$ over all shift-invariant probability measures on $X$ is attained at $\mu$. The existence of measures of maximal entropy follows from upper semi-continuity of the function $\nu \mapsto h_\nu$ with respect to the weak-$*$ topology. Further, the well-known \emph{variational principle} for topological entropy of $\mathbb{Z}^d$-actions asserts that if $\mu$ is a measure of maximal entropy then $h_{top}(X)=h_\mu$ whenever $X$ is a $\mathbb{Z}^d$-shift space. The following is a well-known characterisation of entropy minimality (it is used for instance in the proof of Theorem 4.1 in \cite{meestersteif2001}): \begin{prop} \label{proposition:entropyviamme} A shift space $X$ is entropy minimal if and only if every measure of maximal entropy for $X$ is fully supported. \end{prop} We understand this by the following: Suppose $X$ is entropy minimal and $\mu$ is a measure of maximal entropy for $X$. Then by the variational principle for $X$ and $supp(\mu)$ we get $$h_{top}(X)=h_\mu\leq h_{top}(supp(\mu))\leq h_{top}(X)$$ proving that $supp(\mu)=X$. To prove the converse, suppose for contradiction that $X$ is not entropy minimal and consider $Y\subsetneq X$ such that $h_{top}(X)= h_{top}(Y)$. Then by the variational principle there exists a measure $\mu$ on $Y$ such that $h_\mu= h_{top}(X)$. Thus $\mu$ is a measure of maximal entropy for $X$ which is not fully supported. More is known if $X$ is a nearest neighbour shift of finite type; this brings us to Markov random fields, which we introduce next. 
Given a set $A\subset \mathbb{Z}^d$ we denote the \emph{$r$-boundary} of $A$ by $\partial_r A$, that is, $$\partial_r A=\{w\in \mathbb{Z}^d\setminus A\:\Big \vert\: \|w-v\|_1\leq r \text{ for some }v\in A\}.$$ The \emph{1-boundary} will be referred to as the \emph{boundary} and denoted by $\partial A$. A \emph{Markov random field} on $\mathcal{A}^{\mathbb{Z}^d}$ is a Borel probability measure $\mu$ with the property that for all finite $A, B \subset \mathbb{Z}^d$ such that $\partial A \subset B \subset A^{c}$ and $a \in {\mathcal A}^A, b \in {\mathcal A}^B$ satisfying $\mu([b]_B)>0$ \begin{equation*} \mu([a]_A\;\Big\vert\;[b]_B)= \mu([a]_A\;\Big\vert\;[b]_{ \partial A}). \end{equation*} In general Markov random fields are defined over graphs much more general than $\mathbb{Z}^d$; however, we restrict to the $\mathbb{Z}^d$ setting in this paper. A \emph{uniform Markov random field} is a Markov random field $\mu$ such that further \begin{equation*} \mu([a]_A\;\Big\vert\;[b]_{ \partial A})=\frac{1}{n_{A,b|_{\partial A}}} \end{equation*} where $n_{A,b|_{\partial A}}=|\{a\in {\mathcal A}^A\:|\: \mu([a]_A\cap [b]_{\partial A})>0\}|$. Following \cite{petersen_schmidt1997, schmidt_invaraint_cocycles_1997}, we denote by $\Delta_X$ the \emph{homoclinic equivalence relation} of a shift space $X$, which is given by \begin{equation*} \Delta_X := \{(x,y)\in X\times X\;|\; x_{\vec i}=y_{\vec i} \text{ for all but finitely many } \vec i\in \mathbb{Z}^d\}. \end{equation*} We say that a measure $\mu$ is \emph{adapted} with respect to a shift space $X$ if $supp(\mu)\subset X$ and \begin{equation*} x\in supp(\mu) \Longrightarrow \{y\in X\:|\: (x,y)\in \Delta_X\}\subset supp(\mu). \end{equation*} To illustrate this definition, let $X\subset \{0,1\}^{\mathbb{Z}}$ consist of configurations in which at most a single $1$ appears. $X$ is uniquely ergodic; the delta-measure $\delta_{0^{\infty}}$ is the only shift-invariant measure on $X$. But $$supp(\delta_{0^\infty})= \{0^\infty\}\subsetneq \{y\in X\;|\; 0^\infty_i= y_i \text{ for all but finitely many } i\in \mathbb{Z}\}=X,$$ proving that it is not adapted. On the other hand, since the homoclinic relation of ${\mathcal A}^{\mathbb{Z}^d}$ is minimal, meaning that for all $x\in {\mathcal A}^{\mathbb{Z}^d}$ $$\overline{\{y\in {\mathcal A}^{\mathbb{Z}^d}\;|\; y_{\vec i}=x_{\vec i} \text{ for all but finitely many } {\vec{i}}\in \mathbb{Z}^d\}}= {\mathcal A}^{\mathbb{Z}^d},$$ it follows that a probability measure on ${\mathcal A}^{\mathbb{Z}^d}$ is adapted if and only if it is fully supported. The relationship between measures of maximal entropy and Markov random fields is established by the following theorem, a special case of the Lanford-Ruelle theorem \cite{lanfruell,Rue}. \begin{thm} All measures of maximal entropy on a nearest neighbour shift of finite type $X$ are shift-invariant uniform Markov random fields $\mu$ adapted to $X$.\label{thm:equiGibbs} \end{thm} The converse is also true under further mixing assumptions on the shift space $X$ (called the D-condition). The full strength of these statements is obtained by looking at \emph{equilibrium states} instead of measures of maximal entropy. The measures obtained there are not uniform Markov random fields, but rather Markov random fields in which the conditional probabilities are weighted via an \emph{interaction}, giving rise to \emph{Gibbs states}. Uniform Markov random fields are Gibbs states with interaction zero. We will often restrict our proofs to the ergodic case. 
We will often restrict our proofs to the ergodic case. We can do so via the following standard facts implied by Theorem 14.15 in \cite{Georgii} and Theorem 4.3.7 in \cite{kellerequ1998}: \begin{thm}\label{theorem: ergodic decomposition of markov random fields} Let $\mu$ be a shift-invariant uniform Markov random field adapted to a shift space $X$. Let its ergodic decomposition be given by a measurable map $x\longmapsto \mu_x$ on $X$, that is, $\mu= \int_X \mu_x d\mu$. Then $\mu$-almost everywhere the measures $\mu_x$ are shift-invariant uniform Markov random fields adapted to $X$ such that $supp(\mu_x)\subset supp(\mu)$. Moreover $\int h_{\mu_x} d\mu(x)= h_\mu$. \end{thm} We will prove the following: \begin{thm}\label{theorem: MRF fully supported } Let $\mathcal H$ be a connected four-cycle free graph. Then every ergodic probability measure adapted to $X_\mathcal H$ with positive entropy is fully supported. \end{thm} This implies Theorem \ref{theorem:four cycle free entropy minimal} as follows: The Lanford-Ruelle theorem implies that every measure of maximal entropy on $X_\mathcal H$ is a uniform shift-invariant Markov random field adapted to $X_\mathcal H$. By Proposition \ref{proposition: hom-space positive entropy} and the variational principle we know that these measures have positive entropy. By Theorems \ref{theorem: ergodic decomposition of markov random fields} and \ref{theorem: MRF fully supported } they are fully supported. Finally by Proposition \ref{proposition:entropyviamme}, $X_\mathcal H$ is entropy minimal. Alternatively, the conclusion of Theorem \ref{theorem: MRF fully supported } can be obtained via some strong mixing conditions on the shift space; we will describe one such assumption. A shift space $X$ is called \emph{strongly irreducible} if there exists $g>0$ such that for all $x, y \in X$ and $A, B\subset \mathbb{Z}^d$ satisfying $\min_{\vec i \in A, \vec j \in B}\|\vec i - \vec j\|_1\geq g$, there exists $z\in X$ such that $z|_{A}= x|_A$ and $z|_B= y|_B$. For such a space the homoclinic relation is minimal, implying the conclusion of Theorem \ref{theorem: MRF fully supported } and, further, that every probability measure adapted to $X$ is fully supported. Note that this does not prove that $X$ is entropy minimal unless we assume that $X$ is a nearest neighbour shift of finite type. Such an argument is used in the proof of Lemma 4.1 in \cite{meestersteif2001}, which implies that every strongly irreducible shift of finite type is entropy minimal. A more combinatorial approach was used in \cite{Schraudner2010minimal} to show that general shift spaces with a weaker mixing property called uniform filling are entropy minimal. \section{The Pivot Property}\label{section: the pivot property} A \emph{pivot} in a shift space $X$ is a pair of configurations $(x,y)\in X\times X$ such that $x$ and $y$ differ exactly at a single site. A subshift $X$ is said to have \emph{the pivot property} if for all $(x,y)\in \Delta_X$ with $x\neq y$ there exists a finite sequence of configurations $x^{(1)}=x, x^{(2)},\ldots, x^{(k)}=y \in X$ such that each $(x^{(i)}, x^{(i+1)})$ is a pivot. In this case we say $x^{(1)}=x, x^{(2)},\ldots, x^{(k)}=y$ is a \emph{chain of pivots} from $x$ to $y$. Here are some examples of subshifts which have the pivot property: \begin{enumerate} \item Any subshift with a trivial homoclinic relation, that is, one whose homoclinic classes are singletons.
\item Any subshift with a safe symbol\footnote{A shift space $X\subset {\mathcal A}^{\mathbb{Z}^d}$ has a \emph{safe symbol} $\star$ if for all $x\in X$ and $A\subset \mathbb{Z}^d$ the configuration $z\in {\mathcal A}^{\mathbb{Z}^d}$ given by \begin{equation*} z_{\vec i}:=\begin{cases} x_{\vec i} &\text{ if } \vec i \in A\\ \star &\text{ if } \vec i \in A^c \end{cases} \end{equation*} is also an element of $X$.}. \item The hom-shifts $X_{C_r}$. This was proved for $r\neq 4$ in \cite{chandgotia2013Markov}; the result for $r=4$ is a special case of Proposition \ref{proposition: frozenfoldpivot}. \item $r$-colorings of $\mathbb{Z}^d$ with $r\geq 2d+2$. (This is well known; see for instance Subsection 3.2 of \cite{chandgotia2013Markov}.) \item\label{item: pivot property list number 5} $X_\mathcal H$ when $\mathcal H$ is dismantlable \cite{brightwell2000gibbs}. \end{enumerate} We generalise the class of examples given by (\ref{item: pivot property list number 5}) in Proposition \ref{proposition: frozenfoldpivot}. It is not true that all hom-shifts have the pivot property. \begin{figure}[h] \centering \includegraphics[angle=0, width=.1\textwidth]{fivecolouring.pdf}\caption{Frozen Pattern} \label{Figure: Five colour} \end{figure} The following was observed by Brian Marcus: Recall that $K_n$ denotes the complete graph with $n$ vertices. $X_{K_4}$ and $X_{K_5}$ do not possess the pivot property in dimension two. For instance consider a configuration in $X_{K_5}$ which is obtained by tiling the plane with the pattern given in Figure \ref{Figure: Five colour}. It is clear that the symbols in the box can be interchanged but no individual symbol can be changed on its own. Therefore $X_{K_5}$ does not have the pivot property. However both $X_{K_4}$ and $X_{K_5}$ satisfy a more general property as discussed in Subsection \ref{subsection: Hom-shifts and the pivot property}. The following theorem is another main result in this paper. \begin{thm}\label{theorem: pivot property for four cycle free} For all four-cycle free graphs $\mathcal H$, $X_\mathcal H$ has the pivot property. \end{thm} It is sufficient to prove this theorem for four-cycle free graphs $\mathcal H$ which are connected because of the following proposition: \begin{prop}\label{proposition: pivot for disconnected} Let $X_1, X_2, \ldots, X_n$ be shift spaces on disjoint alphabets such that each of them has the pivot property. Then $\cup_{i=1}^n X_i$ also has the pivot property. \end{prop} This is true since $(x, y)\in \Delta_{\cup_{i=1}^n X_i}$ implies $(x, y)\in \Delta_{X_i}$ for some $1\leq i \leq n$. \section{Folding, Entropy Minimality and the Pivot Property}\label{section:Folding, Entropy Minimality and the Pivot Property} Given a graph $\mathcal H$ we say that a vertex $v$ \emph{folds} into a vertex $w$ if and only if $u \sim_\mathcal H v$ implies $u \sim_\mathcal H w$. In this case the graph $\mathcal H\setminus \{v\}$ is called a \emph{fold} of $\mathcal H$. The folding gives rise to a `retract' from $\mathcal H$ to $\mathcal H\setminus\{v\}$, namely the graph homomorphism from $\mathcal H$ to $\mathcal H\setminus \{v\}$ which is the identity on $\mathcal H\setminus \{v\}$ and sends $v$ to $w$. This was introduced in \cite{nowwinkler} to help characterise cop-win graphs and was used in \cite{brightwell2000gibbs} to establish many properties which are preserved under `folding' and `unfolding'.
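The fold relation is easy to test mechanically. The following Python sketch (function names are ours) finds a vertex $v$ whose neighbourhood is contained in that of another vertex $w$ and folds repeatedly until no fold remains; for a finite tree the procedure terminates in a single edge, as the next paragraph explains.

\begin{verbatim}
def find_fold(adj):
    # Return (v, w) with v != w and N(v) a subset of N(w), if any;
    # `adj` is a dict vertex -> set of neighbours.
    for v in adj:
        for w in adj:
            if v != w and adj[v] <= adj[w]:
                return v, w
    return None

def fold_to_stiff(adj):
    # Repeatedly delete a foldable vertex until no fold remains.
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    fold = find_fold(adj)
    while fold is not None:
        v, _ = fold
        del adj[v]
        for nbrs in adj.values():
            nbrs.discard(v)
        fold = find_fold(adj)
    return adj

# A path on four vertices (a tree) folds down to a single edge:
path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(fold_to_stiff(path))  # a single edge, e.g. {3: {4}, 4: {3}}
\end{verbatim}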
Given a finite tree $\mathcal H$ with more than two vertices, note that a leaf vertex (vertex of degree $1$) can always be folded into some other vertex of the tree. Thus starting with $\mathcal H$, there exists a sequence of folds resulting in a single edge. In fact, using a similar argument we can prove the following proposition. \begin{prop}\label{proposition:folding trees into other trees} Let $\mathcal H\subset \mathcal H^\prime$ be trees. Then there is a graph homomorphism $f: \mathcal H^\prime \longrightarrow \mathcal H$ such that $f|_{\mathcal H}$ is the identity map. \end{prop} To show this, first note that if $\mathcal H\subsetneq\mathcal H^\prime$ then there is a leaf vertex in $\mathcal H^\prime$ which is not in $\mathcal H$. This leaf vertex can be folded into some other vertex in $\mathcal H^\prime$. Thus by induction on $|\mathcal H^\prime \setminus \mathcal H|$ we can prove that there is a sequence of folds from $\mathcal H^\prime$ to $\mathcal H$. Corresponding to this sequence of folds we obtain a graph homomorphism from $\mathcal H^\prime$ to $\mathcal H$ which is the identity on $\mathcal H$. Here we consider a related notion for shift spaces. Given a nearest neighbour shift of finite type $X\subset {\mathcal A}^{\mathbb{Z}^d}$, \emph{the neighbourhood} of a symbol $v\in {\mathcal A}$ is given by $$N_X(v):=\{a \in {\mathcal A}^{\partial \vec 0}\:|\: [v]_{\vec 0}\cap [a]_{\partial \vec 0}\in \mathcal L_{D_1}(X)\},$$ that is, the collection of all patterns which can `surround' $v$ in $X$. We will say that $v$ \emph{config-folds} into $w$ in $X$ if $N_X(v)\subset N_X(w)$. In such a case we say that $X$ \emph{config-folds} to $X\cap({\mathcal A}\setminus \{v\})^{\mathbb{Z}^d}$. Note that $X\cap({\mathcal A}\setminus \{v\})^{\mathbb{Z}^d}$ is obtained by forbidding $v$ from $X$ and hence it is also a nearest neighbour shift of finite type. Also if $X=X_\mathcal H$ for some graph $\mathcal H$ then $v$ config-folds into $w$ in $X_\mathcal H$ if and only if $v$ folds into $w$ in $\mathcal H$. Thus if $\mathcal H$ is a tree then there is a sequence of config-folds starting at $X_\mathcal H$ resulting in the two checkerboard configurations on two symbols (the vertices of the edge into which $\mathcal H$ folds). This property is weaker than the notion of folding introduced in \cite{chandgotiahammcliff2014}. The main thrust of this property in our context is: if $v$ config-folds into $w$ in $X$ then given any $x\in X$, every appearance of $v$ in $x$ can be replaced by $w$ to obtain another configuration in $X$. This replacement defines a factor (surjective, continuous and shift-invariant) map $f: X\longrightarrow X\cap({\mathcal A}\setminus \{v\})^{\mathbb{Z}^d}$ given by \begin{equation*} (f(x))_{\vec i}:=\begin{cases} x_{\vec i}&\text{ if } x_{\vec i}\neq v\\ w&\text{ if } x_{\vec i}= v. \end{cases} \end{equation*} Note that the map $f$ defines a `retract' from $X$ to $X\cap({\mathcal A}\setminus \{v\})^{\mathbb{Z}^d}$. Frequently we will config-fold more than one symbol at once (especially in Section \ref{section: Proof of the main theorems}): Distinct symbols $v_1, v_2, \ldots, v_n$ \emph{config-fold disjointly} into $w_1, w_2, \ldots, w_n$ in $X$ if $v_i$ config-folds into $w_i$ and $v_i\neq w_j$ for all $1\leq i, j \leq n$. In this case the symbols $v_1, v_2, \ldots, v_n$ can be replaced by $w_1, w_2, \ldots, w_n$ simultaneously for all $x \in X$. Suppose $v_1, v_2,\ldots, v_n$ is a maximal set of symbols which can be config-folded disjointly in $X$.
Then $X\cap({\mathcal A}\setminus \{v_1, v_2, \ldots, v_n\})^{\mathbb{Z}^d}$ is called a \emph{full config-fold} of $X$. For example consider a tree $\mathcal H:=(\mathcal V,\mathcal E)$ where $\mathcal V:=\{v_1, v_2, v_3, \ldots, v_{n+1}\}$ and $\mathcal E:=\{(v_i, v_{n+1})\:|\: 1\leq i \leq n\}$. For all $1\leq i \leq n$, $\mathcal V\setminus \{v_i, v_{n+1}\}$ is a maximal set of symbols which config-folds disjointly in $X_\mathcal H$, resulting in the checkerboard patterns with the symbols $v_i$ and $v_{n+1}$. Thus the full config-fold of a shift space is not necessarily unique. However it is unique up to conjugacy: \begin{prop}\label{Proposition: Uniqueness of full config-fold} The full config-fold of a nearest neighbour shift of finite type is unique up to conjugacy via a change of the alphabet. \end{prop} The ideas for the following proof come essentially from the proof of Theorem 4.4 in \cite{brightwell2000gibbs} and discussions with Prof.\ Brian Marcus. \begin{proof} Let $X\subset {\mathcal A}^{\mathbb{Z}^d}$ be a nearest neighbour shift of finite type and $$M:=\{v\in {\mathcal A} \:|\: \text{ for all }w\in {\mathcal A},\ v \text{ config-folds into }w \Longrightarrow w \text{ config-folds into }v\}.$$ There is a natural equivalence relation $\equiv$ on $M$ given by $v\equiv w$ if $v$ and $w$ config-fold into each other. Let $A_1, A_2, A_3, \ldots, A_r\subset M$ be the corresponding partition of $M$. Clearly for all distinct $v, w\in M$, $v$ can be config-folded into $w$ if and only if $v, w\in A_i$ for some $i$. It follows that the symbols of a subset $A\subset A_i$ can be config-folded disjointly if and only if $\emptyset\neq A\neq A_i$. Let $v\in {\mathcal A}\setminus M$. We will prove that $v$ config-folds into a symbol in $M$. By the definition of $M$ there exists $v_1\in {\mathcal A}$ such that $N_X(v)\subsetneq N_X(v_1)$. If $v_1\in M$ then we are done, otherwise choose $v_2\in {\mathcal A}$ such that $N_X(v_1)\subsetneq N_X(v_2)$. Continuing this process recursively we can find a sequence $v= v_0, v_1, v_2, \ldots, v_n$ such that $N_X(v_{i-1})\subsetneq N_X(v_i)$ for all $1\leq i \leq n$ and $v_n\in M$. Thus $v$ config-folds into $v_n$, a symbol in $M$. Further if $v$ config-folds into a symbol in $A_i$ then it can config-fold into all the symbols in $A_i$. Therefore a set $B$ is a maximal subset of symbols in ${\mathcal A}$ which can be config-folded disjointly if and only if $B=\cup_{i=1}^rB_i\cup ({\mathcal A}\setminus M)$ where $B_i\subset A_i$ and $|A_i\setminus B_i|=1$. Let $B'\subset {\mathcal A}$ be another such maximal subset, ${\mathcal A}\setminus B:=\{b_1, b_2, \ldots, b_r\}$ and ${\mathcal A}\setminus B':=\{b'_1, b'_2, \ldots, b'_r\}$ where $b_i, b'_i\in A_i$. Then the map $$f: X\cap ({\mathcal A}\setminus B)^{\mathbb{Z}^d} \longrightarrow X\cap ({\mathcal A}\setminus B')^{\mathbb{Z}^d} \text{ given by } f(x) := y\text{ where } y_{\vec i}=b'_j \text{ whenever }x_{\vec i}=b_j$$ is the required change of alphabet between the two full config-folds of $X$. \end{proof} Let $X\cap({\mathcal A}\setminus \{v_1, v_2, \ldots, v_n\})^{\mathbb{Z}^d}$ be a \emph{full config-fold} of $X$ where $v_i$ config-folds into $w_i$ for all $1\leq i\leq n$. Consider $f_X: {\mathcal A} \longrightarrow{\mathcal A}\setminus \{v_1, v_2, \ldots, v_n\}$ given by \begin{equation*} f_X(v):=\begin{cases} v&\text{ if } v\neq v_j \text{ for all }1\leq j\leq n\\ w_j&\text{ if } v= v_j\text{ for some }1\leq j \leq n.
\end{cases} \end{equation*} \noindent This defines a factor map $f_X: X\longrightarrow X\cap({\mathcal A}\setminus \{v_1, v_2, \ldots, v_n\})^{\mathbb{Z}^d}$ given by $(f_X(x))_{{\vec{i}}}:= f_X(x_{\vec{i}})$ for all ${\vec{i}} \in \mathbb{Z}^d$. $f_X$ denotes both the factor map and the map on the alphabet; it should be clear from the context which function is being used. In many cases we will fix a configuration on a set $A\subset \mathbb{Z}^d$ and apply a config-fold on the rest. Hence we define the map $f_{X,A}: X\longrightarrow X$ given by \begin{equation*} (f_{X,A}(x))_{\vec i}:=\begin{cases} x_{\vec i}&\text{ if } \vec i \in A\\ f_X(x_{\vec i})&\text{ otherwise.} \end{cases} \end{equation*} The map $f_{X,A}$ can be extended beyond $X$: \begin{prop}\label{prop: folding_fixing_a_set} Let $X\subset Y$ be nearest neighbour shifts of finite type, $Z$ be a full config-fold of $X$ and $y\in Y$ such that for some $A\subset \mathbb{Z}^d$, $y|_{A^c\cup\partial (A^c)}\in \mathcal L_{A^c\cup\partial (A^c)}(X)$. Then the configuration $z$ given by \begin{equation*} z_{\vec i}:=\begin{cases} y_{\vec i}&\text{ if } \vec i \in A\\ f_X(y_{\vec i})&\text{ otherwise} \end{cases} \end{equation*} is an element of $Y$. Moreover $z|_{A^c}\in \mathcal L_{A^c}(Z)$. \end{prop} Abusing notation, in such cases we shall denote the configuration $z$ by $f_{X, A}(y)$. If $A^c$ is finite, then $f_{X,A}$ changes only finitely many coordinates. These changes can be applied one by one, that is, there is a chain of pivots in $Y$ from $y$ to $f_{X,A}(y)$. A nearest neighbour shift of finite type which cannot be config-folded is called a \emph{stiff shift}. We know from Theorem 4.4 in \cite{brightwell2000gibbs} that all the stiff graphs obtained by a sequence of folds of a given graph are isomorphic. By Proposition \ref{Proposition: Uniqueness of full config-fold} the corresponding result for nearest neighbour shifts of finite type immediately follows: \begin{prop}\label{proposition:uniqueness of stiff shifts} The stiff shift obtained by a sequence of config-folds starting with a nearest neighbour shift of finite type is unique up to conjugacy via a change of the alphabet. \end{prop} Given a nearest neighbour shift of finite type $X$, the \emph{fold-radius} of $X$ is the smallest number of full config-folds required to obtain a stiff shift. If $\mathcal H$ is a tree then the fold-radius of $X_\mathcal H$ is equal to $$\left\lfloor\frac{diameter(\mathcal H)}{2}\right\rfloor.$$ Thus for every nearest neighbour shift of finite type $X$ there is a sequence of full config-folds (not necessarily unique) which starts at $X$ and ends at a stiff shift of finite type. Let the fold-radius of $X$ be $r$ and $X= X_0, X_1, X_2, \ldots, X_r$ be a sequence of full config-folds where $X_r$ is stiff. This generates a sequence of maps $f_{X_i}:X_{i}\longrightarrow X_{i+1}$ for all $0\leq i \leq r-1$. In many cases we will fix a pattern on $D_n$ or $D_n^c$ and apply these maps on the rest of the configuration. Consider the maps $I_{X,n}:X\longrightarrow X$ and $O_{X,n}:X\longrightarrow X$ (for $n>r$) given by \begin{equation*} I_{X,n}(x):=f_{X_{r-1},D_{n+r-1} }\left(f_{X_{r-2}, D_{n+r-2}}\left(\ldots\left(f_{X_{0}, D_n}(x)\right)\ldots\right)\right)\text{ (Inward Fixing Map)} \end{equation*} and \begin{eqnarray*} O_{X,n}(x):=f_{X_{r-1}, D_{n-r+1}^c }\left(f_{X_{r-2}, D_{n-r+2}^c}\left(\ldots\left(f_{X_{0}, D_n^c}(x)\right)\ldots\right)\right)\text{ (Outward Fixing Map)}.
\end{eqnarray*} Similarly we consider a map which does not fix anything, $F_X: X\longrightarrow X_r$, given by \begin{eqnarray*} F_X(x):= f_{X_{r-1}}\left(f_{X_{r-2}}\left(\ldots\left(f_{X_{0}}(x)\right)\ldots\right)\right). \end{eqnarray*} Note that $D_k\cup \partial D_k= D_{k+1}$ and $D_k^c\cup \partial (D_k^c)=D_{k-1}^c$. This along with repeated application of Proposition \ref{prop: folding_fixing_a_set} implies that the images of $I_{X,n}$ and $O_{X,n}$ lie in $X$. This also implies the following proposition: \begin{prop}[The Onion Peeling Proposition]\label{prop: folding_ to _ stiffness_fixing_a_set} Let $X\subset Y$ be nearest neighbour shifts of finite type, $r$ be the fold-radius of $X$, $Z$ be a stiff shift obtained by a sequence of config-folds starting with $X$ and $y^1, y^2\in Y$ such that $y^1|_{D_{n-1}^c}\in \mathcal L_{D_{n-1}^c}(X)$ and $y^2|_{D_{n+1}}\in \mathcal L_{D_{n+1}}(X)$. Let $z^1, z^2\in Y$ be given by \begin{eqnarray*} z^1&:=&f_{X_{r-1},D_{n+r-1} }\left(f_{X_{r-2}, D_{n+r-2}}\left(\ldots\left(f_{X_{0}, D_n}(y^1)\right)\ldots\right)\right)\\ z^2&:=&f_{X_{r-1}, D_{n-r+1}^c }\left(f_{X_{r-2},D_{n-r+2}^c}\left(\ldots\left(f_{X_{0}, D_n^c}(y^2)\right)\ldots\right)\right)\text{ for }n>r. \end{eqnarray*} The patterns $z^1|_{D_{n+r-1}^c}\in \mathcal L_{D_{n+r-1}^c}(Z)$ and $z^2|_{D_{n-r+1}}\in \mathcal L_{D_{n-r+1}}(Z)$. If $y^1, y^2\in X$ then in addition \begin{eqnarray*} z^1|_{D_{n+r-1}^c}&=&F_X(y^1)|_{D_{n+r-1}^c}\text{ and}\\ z^2|_{D_{n-r+1}}&=&F_X(y^2)|_{D_{n-r+1}}. \end{eqnarray*} \end{prop} Abusing notation, in such cases we shall denote the configurations $z^1$ and $z^2$ by $I_{X, n}(y^1)$ and $O_{X,n}(y^2)$ respectively. Note that $I_{X, n}(y^1)|_{D_n}= y^1|_{D_n}$ and $O_{X,n}(y^2)|_{D_n^c}= y^2|_{D_n^c}$. Also, $O_{X,n}$ is a composition of maps of the form $f_{X,A}$ where $A^c$ is finite; hence there is a chain of pivots in $Y$ from any $y$ to $O_{X,n}(y)$. There are two kinds of stiff shifts which will be of interest to us: A configuration $x\in {\mathcal A}^{\mathbb{Z}^d}$ is called \emph{periodic} if there exists $n \in \mathbb N$ such that $\sigma^{n \vec e_{i}}(x)=x$ for all $1\leq i \leq d$. A configuration $x\in X$ is called \emph{frozen} if its homoclinic class is a singleton. This notion coincides with the notion of frozen coloring in \cite{brightwell2000gibbs}. A subshift $X$ will be called \emph{frozen} if it consists of frozen configurations, equivalently, if $\Delta_X$ is the diagonal. A measure on $X$ will be called \emph{frozen} if its support is frozen. Note that any shift space consisting just of periodic configurations is frozen. All frozen nearest neighbour shifts of finite type are stiff: Suppose $X$ is a nearest neighbour shift of finite type which is not stiff. Then there is a symbol $v$ which can be config-folded to a symbol $w$. This means that any appearance of $v$ in a configuration $x\in X$ can be replaced by $w$. Hence the homoclinic class of a configuration in which $v$ appears is not a singleton. Therefore $X$ is not frozen. \begin{prop}\label{proposition: periodicfoldentropy} Let $X$ be a nearest neighbour shift of finite type such that a sequence of config-folds starting from $X$ results in the orbit of a periodic configuration. Then every shift-invariant probability measure adapted to $X$ is fully supported. \end{prop} \begin{prop}\label{proposition: frozenfoldpivot} Let $X$ be a nearest neighbour shift of finite type such that a sequence of config-folds starting from $X$ results in a frozen shift. Then $X$ has the pivot property.
\end{prop} \noindent\textbf{Examples:} \begin{enumerate} \item $X:=\{0\}^{\mathbb{Z}^d}\cup \{1\}^{\mathbb{Z}^d}$ is a frozen shift space but not the orbit of a periodic configuration. Clearly the delta measure $\delta_{\{0\}^{\mathbb{Z}^d}}$ is a shift-invariant probability measure adapted to $X$ but not fully supported. A more substantial example of a nearest neighbour shift of finite type which is frozen but not the orbit of a periodic configuration is the set of Robinson tilings $Y$ \cite{Robinson1971}. There are configurations in $Y$ which have so-called ``fault lines''; a fault line can occur at most once in a given configuration. Consequently for all shift-invariant probability measures on $Y$ the probability of seeing a fault line is zero (an event which can occur at most once in a configuration must have probability zero under a shift-invariant measure). Thus no shift-invariant probability measure (and hence no adapted shift-invariant probability measure) on $Y$ is fully supported. \item\label{Example: Safe Symbol} Let $X$ be a shift space with a safe symbol $\star$. Then any symbol in $X$ can be config-folded into the safe symbol. By config-folding the symbols one by one, we obtain a fixed point $\{\star\}^{\mathbb{Z}^d}$. Thus any nearest neighbour shift of finite type with a safe symbol satisfies the hypothesis of both propositions. \item \label{Example: Folds to an edge}Suppose $\mathcal H$ is a graph which folds into a single edge (denoted by $Edge$) or a single vertex $v$ with a loop. Then the shift space $X_\mathcal H$ can be config-folded to $X_{Edge}$ (which consists of two periodic configurations) or the fixed point $\{v\}^{\mathbb{Z}^d}$ respectively. In the latter case, the graph $\mathcal H$ is called \emph{dismantlable} \cite{nowwinkler}. Note that finite non-trivial trees and the graph $C_4$ fold into an edge. For dismantlable graphs $\mathcal H$, Theorem 4.1 in \cite{brightwell2000gibbs} implies the conclusions of Propositions \ref{proposition: periodicfoldentropy} and \ref{proposition: frozenfoldpivot} for $X_\mathcal H$ as well. \end{enumerate} \begin{proof}[Proof of Proposition \ref{proposition: periodicfoldentropy}] Let $\mu$ be a shift-invariant probability measure adapted to $X$. To prove that $supp(\mu)= X$ it is sufficient to prove that $\mu([x]_{D_n})>0$ for all $n\in \mathbb N$ and $x\in X$. Let $X_0=X, X_1, X_2, \ldots, X_r$ be a sequence of full config-folds where $X_r:=\{ \sigma^{\vec i_1}(p), \sigma^{\vec i_2}(p),\ldots, \sigma^{\vec i_{k-1}}(p) \}$ is the orbit of a periodic point. For any two configurations $z,w\in X$ there exists $\vec i\in \mathbb{Z}^d$ such that $F_X(z)= F_X(\sigma^{\vec i}(w))$. Since $\mu$ is shift-invariant we can choose $y \in supp(\mu)$ such that $F_X(x)= F_X(y)$. Consider the configurations $I_{X,n}(x)$ and $O_{X,n+2r-1}(y)$. By Proposition \ref{prop: folding_ to _ stiffness_fixing_a_set} they satisfy the equations \begin{eqnarray*} I_{X,n}(x)|_{D_{n+r-1}^c}&=&F_X(x)|_{D_{n+r-1}^c}\text{ and }\\ O_{X,n+2r-1}(y)|_{D_{n+r}}&=&F_X(y)|_{D_{n+r}}. \end{eqnarray*} \noindent Then $I_{X,n}(x)|_{\partial D_{n+r-1}}= O_{X,n+2r-1}(y)|_{\partial D_{n+r-1}}$. Since $X$ is a nearest neighbour shift of finite type, the configuration $z$ given by \begin{eqnarray*} z|_{D_{n+r}}&:=&I_{X,n}(x)|_{D_{n+r}}\\ z|_{D_{n+r-1}^c}&:=&O_{X,n+2r-1}(y)|_{D_{n+r-1}^c} \end{eqnarray*} \noindent is an element of $X$. Moreover \begin{eqnarray*} z|_{D_{n}}&=&I_{X,n}(x)|_{D_{n}}=x|_{D_n}\\ z|_{D_{n+2r-1}^c}&=&O_{X,n+2r-1}(y)|_{D_{n+2r-1}^c}=y|_{D^c_{n+2r-1}}. \end{eqnarray*} Thus $(y, z)\in \Delta_X$. Since $\mu$ is adapted we get that $z\in supp(\mu)$.
Finally, since $z|_{D_n}= x|_{D_n}$ and $z\in supp(\mu)$, $$\mu([x]_{D_n})=\mu([z]_{D_n})>0.$$ \end{proof} Note that all the maps being discussed here, $f_X$, $f_{X,A}$, $F_X$, $I_{X,n}$ and $O_{X,n}$, are (not necessarily shift-invariant) single block maps, that is, maps $f$ where $\left(f(x)\right)_{\vec i}$ depends only on $x_{\vec i}$. Thus if $f$ is one such map and $x|_A= y|_A$ for some set $A\subset \mathbb{Z}^d$ then $f(x)|_A=f(y)|_A$; such maps send homoclinic pairs to homoclinic pairs. \begin{proof}[Proof of Proposition \ref{proposition: frozenfoldpivot}] Let $X_0=X, X_1, X_2, \ldots, X_r$ be a sequence of full config-folds where $X_r$ is frozen. Let $(x, y) \in \Delta_X$. Since $X_r$ is frozen, $F_{X}(x)= F_X(y)$. Suppose $x|_{D_n^c}= y|_{D_n^c}$ for some $n\in \mathbb N$. Then $O_{X,n+r-1}(x)|_{D_n^c}=O_{X,n+r-1}(y)|_{D_n^c}$. Also by Proposition \ref{prop: folding_ to _ stiffness_fixing_a_set}, $$O_{X, n+r-1}(x)|_{D_n}=F_X(x)|_{D_n}=F_X(y)|_{D_n}= O_{X, n+r-1}(y)|_{D_n}.$$ This proves that $O_{X,n+r-1}(x)=O_{X,n+r-1}(y)$. In fact this completes the proof, since for all $z\in X$ there exists a chain of pivots in $X$ from $z$ to $O_{X,n+r-1}(z)$. \end{proof} \section{Universal Covers}\label{section:universal covers} Most cases will not be as simple as in the proofs of Propositions \ref{proposition: periodicfoldentropy} and \ref{proposition: frozenfoldpivot}. We wish to prove the conclusions of these propositions for hom-shifts $X_\mathcal H$ when $\mathcal H$ is a connected four-cycle free graph. Many ideas carry over from the proofs of these results because of the relationship of such graphs with their universal covers; we describe this relationship next. The results in this section are not original; see for instance \cite{Stallingsgraph1983}. We mention them for completeness. Let $\mathcal H$ be a finite connected graph with no self-loops. We denote by $d_\mathcal H$ the ordinary graph distance on $\mathcal H$ and by $D_\mathcal H(u)$ the \emph{ball of radius 1} around $u$. A graph homomorphism $\pi:\mathcal C\longrightarrow \mathcal H$ is called a \emph{covering map} if for some $n \in \mathbb N \cup \{\infty\}$ and all $u \in \mathcal H$, there exist disjoint sets $\{C_i\}_{i=1}^n\subset \mathcal C$ such that $\pi^{-1}\left(D_\mathcal H(u)\right)= \cup_{i=1}^n C_i $ and $\pi|_{C_i}: C_i\longrightarrow D_\mathcal H(u)$ is an isomorphism of the induced subgraphs for $1\leq i\leq n$. A \emph{covering space} of a graph $\mathcal H$ is a graph $\mathcal C$ such that there exists a covering map $\pi: \mathcal C\longrightarrow \mathcal H$. A \emph{universal covering space} of $\mathcal H$ is a covering space of $\mathcal H$ which is a tree. Unique up to graph isomorphism \cite{Stallingsgraph1983}, these covers can be described in multiple ways. Their standard construction uses non-backtracking walks \cite{Angluin80}: A \emph{walk} on $\mathcal H$ is a sequence of vertices $(v_1, v_2, \ldots, v_n)$ such that $v_i\sim_\mathcal H v_{i+1}$ for all $1\leq i \leq n-1$. The \emph{length} of a walk $p=(v_1, v_2, \ldots, v_n)$ is $|p|=n-1$, the number of edges traversed on that walk. It is called \emph{non-backtracking} if $v_{i-1}\neq v_{i+1}$ for all $2\leq i \leq n-1$, that is, successive steps do not traverse the same edge. Choose a vertex $u \in \mathcal H$. The vertex set of the universal cover is the set of all non-backtracking walks on $\mathcal H$ starting from $u$; there is an edge between two such walks if one extends the other by a single step.
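To illustrate the construction, the following Python sketch (names ours) generates the ball of radius $n$ around the trivial walk $(u)$ in the universal cover by extending non-backtracking walks one step at a time; the covering map simply reads off the terminal vertex of a walk. For the four-cycle $C_4$ the ball sizes grow linearly, consistent with the fact, noted in the next paragraph, that the universal cover of $C_4$ is $\mathbb{Z}$.

\begin{verbatim}
def cover_ball(adj, u, n):
    # Vertices of the universal cover within distance n of the
    # trivial walk (u,): all non-backtracking walks of length <= n
    # starting at u; `adj` is a dict vertex -> set of neighbours.
    level, walks = [(u,)], [(u,)]
    for _ in range(n):
        nxt = []
        for p in level:
            for v in adj[p[-1]]:
                if len(p) == 1 or v != p[-2]:  # forbid backtracking
                    nxt.append(p + (v,))
        walks += nxt
        level = nxt
    return walks

pi = lambda p: p[-1]  # the covering map: terminal vertex of a walk

C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print([len(cover_ball(C4, 0, n)) for n in range(5)])
# [1, 3, 5, 7, 9]: linear growth, as expected when E_H = Z
\end{verbatim}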
The choice of the starting vertex $u$ is arbitrary; choosing a different vertex gives rise to an isomorphic graph. We denote the universal cover by $E_\mathcal H$. The covering map $\pi: E_\mathcal H\longrightarrow \mathcal H$ maps a walk to its terminal vertex. Usually, we will denote by $\tilde u, \tilde v$ and $\tilde w$ vertices of $E_\mathcal H$ such that $\pi(\tilde u)= u$, $\pi(\tilde v)= v$ and $\pi(\tilde w)= w$. This construction shows that the universal cover of a graph is finite if and only if the graph is a finite tree. To see this, note that if the graph has a cycle then the finite segments of the walks looping around the cycle give infinitely many vertices of the universal cover, while if the graph is a finite tree then all non-backtracking walks must terminate at the leaves and their length is bounded by the diameter of the tree. In fact, the universal cover of a tree is the tree itself, while the universal cover of a cycle (for instance $C_4$) is $\mathbb{Z}$, obtained from the finite segments of the walks $(1, 2, 3, 0, 1, 2, 3, 0, \ldots )$ and $(1, 0, 3, 2, 1, 0, 3, 2, \ldots )$. Following the ideas of homotopies in algebraic topology, there is a natural operation on the set of walks: two walks can be joined together if one begins where the other one ends. More formally, given two walks $p=(v_1, v_2, \ldots, v_n)$ and $q=(w_1, w_2, \ldots, w_m)$ where $v_n=w_1$, consider $p\star q=(v_1, v_2, \ldots, v_n, w_2, w_3, \ldots, w_m)$. However even when $p$ and $q$ are non-backtracking, $p\star q$ need not be non-backtracking. So we consider instead the walk $[p\star q]$ which erases the backtracking segment of $p \star q$, that is, if for some $i>1$, $v_{n-i+1}\neq w_{i}$ and $v_{n-j+1}=w_j$ for all $1\leq j \leq i-1$ then $$[p\star q]:=(v_1, v_2, \ldots, v_{n-i+1}, w_{i-1}, w_{i}, \ldots, w_m).$$ This operation of erasing the backtracking segments is called \emph{reduction}; see for instance \cite{Stallingsgraph1983}. The following proposition is well-known (Section 4 of \cite{Stallingsgraph1983}) and shall be useful in our context as well: \begin{prop}\label{proposition:isomorphism_of_universal_covering_space} Let $\mathcal H$ be a finite connected graph without any self-loops. Then for all $\tilde{v}, \tilde w \in E_\mathcal H$ satisfying $\pi(\tilde v)= \pi(\tilde w)$ there exists a graph isomorphism $\phi: E_\mathcal H\longrightarrow E_\mathcal H$ such that $\phi(\tilde v)= \tilde w$ and $\pi \circ \phi = \pi$. \end{prop} To see how to construct this isomorphism, consider as an example the trivial walk $(u)$ on $\mathcal H$ and some non-backtracking walk $(v_1, v_2, \ldots, v_n)$ such that $v_1=v_n=u$. Then the map $\phi: E_\mathcal H\longrightarrow E_\mathcal H$ given by $$\phi(\tilde w):= [(v_1, v_2, \ldots, v_n) \star \tilde w]$$ is a graph isomorphism which maps $(u)$ to $(v_1, v_2, \ldots, v_n)$; its inverse is $\psi: E_\mathcal H\longrightarrow E_\mathcal H$ given by $$\psi(\tilde w):= [(v_n, v_{n-1}, \ldots, v_1)\star \tilde w].$$ The maps $\phi, \pi$ described above give rise to natural maps, also denoted by $\phi$ and $\pi$, where $$\phi:X_{E_\mathcal H}\longrightarrow X_{E_\mathcal H}$$ is given by $\phi(\tilde x)_{\vec{i}} := \phi(\tilde x_{\vec{i}})$ and $$\pi: X_{E_\mathcal H} \longrightarrow X_{\mathcal H}$$ is given by $\pi(\tilde x)_{\vec{i}}:=\pi(\tilde x_{\vec{i}})$ for all ${\vec{i}} \in \mathbb{Z}^d$ respectively. A \emph{lift} of a configuration $x\in X_\mathcal H$ is a configuration $\tilde{x}\in X_{E_\mathcal H}$ such that $\pi \circ \tilde{x}= x$.
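Before moving on, here is the reduction operation in code (a Python sketch, function names ours): it joins two walks and cancels the backtracking at the junction, and precomposing with a fixed closed walk at $u$, as in Proposition \ref{proposition:isomorphism_of_universal_covering_space}, gives the isomorphism $\phi$.

\begin{verbatim}
def reduce_concat(p, q):
    # [p * q]: join the walks p and q (tuples with p[-1] == q[0])
    # and erase the backtracking segment at the junction.
    assert p[-1] == q[0]
    p, q = list(p), list(q[1:])
    while len(p) >= 2 and q and p[-2] == q[0]:
        p.pop()
        q.pop(0)
    return tuple(p + q)

# The junction of (a,b,c) and (c,b,d) backtracks for one step:
print(reduce_concat(('a', 'b', 'c'), ('c', 'b', 'd')))  # ('a','b','d')

# The isomorphism phi of the proposition, for a closed walk at u:
loop = ('u', 'v', 'w', 'u')  # a hypothetical closed walk in H
phi = lambda walk: reduce_concat(loop, walk)
print(phi(('u',)))  # ('u','v','w','u'): the trivial walk maps to loop
\end{verbatim}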
Now we shall analyse some consequences of this formalism in our context. More general statements (where $\mathbb{Z}^d$ is replaced by a different graph) are true (under a different hypothesis on $\mathcal H$), but we restrict ourselves to the four-cycle free condition. We noticed in Section \ref{section:Folding, Entropy Minimality and the Pivot Property} that if $\mathcal H$ is a tree then $X_{\mathcal H}$ satisfies the conclusions of Theorems \ref{theorem: MRF fully supported } and \ref{theorem: pivot property for four cycle free}. Now we will draw a connection between the four-cycle free condition on $\mathcal H$ and the formalism in Section \ref{section:Folding, Entropy Minimality and the Pivot Property}. \begin{prop}[Existence of Lifts]\label{proposition:covering_space_lifting} Let $\mathcal H$ be a connected four-cycle free graph. For all $x\in X_\mathcal H$ there exists $\tilde{x}\in X_{E_\mathcal H}$ such that $\pi(\tilde{x})=x$. Moreover the lift $\tilde{x}$ is unique up to a choice of $\tilde{x}_{\vec 0}$. \end{prop} \begin{proof} We will begin by constructing a sequence of graph homomorphisms $\tilde{x}^n:D_n \longrightarrow E_\mathcal H$ such that $\pi \circ\tilde{x}^n =x|_{D_n}$ and $\tilde{x}^m|_{D_n}= \tilde{x}^n$ for all $m>n$. Then by taking the limit of these graph homomorphisms we obtain a graph homomorphism $\tilde{x}\in X_{E_\mathcal H}$ such that $\pi \circ\tilde{x}=x$. It will follow that given $\tilde{x}^0$ the sequence $\tilde{x}^n$ is completely determined, proving that the lift is unique up to a choice of $\tilde{x}_{\vec {0}}$. The recursion is the following: Let $\tilde{x}^n: D_n \longrightarrow E_\mathcal H$ be a given graph homomorphism for some $n\in \mathbb N\cup \{0\}$ such that $\pi \circ\tilde{x}^n=x|_{D_n}$. For any ${\vec{ i}}\in D_{n+1}\setminus D_n$, choose a vertex ${\vec{j}}\in D_n$ such that $\vec{j}\sim \vec{i}$. Then $\pi(\tilde{x}^n_{\vec{j}})=x_{\vec{j}}\sim x_{\vec{i}}$. Since $\pi$ defines a local isomorphism between $E_\mathcal H$ and $\mathcal H$, there exists a unique vertex $\tilde v_{\vec{i}}\in E_\mathcal H$ adjacent to $\tilde{x}^n_{\vec{j}}$ such that $\pi(\tilde v_{\vec{i}})= x_{\vec{i}}$. Define $\tilde{x}^{n+1}: D_{n+1}\longrightarrow E_\mathcal H$ by \begin{equation*} \tilde{x}^{n+1}_{\vec{i}}:=\begin{cases}\tilde{x}^{n}_{\vec{i}} &\text{if } \vec{i}\in D_n\\\tilde v_{\vec{i}} & \text{if } \vec{i}\in D_{n+1}\setminus D_n.\end{cases} \end{equation*} Then clearly $\pi \circ \tilde{x}^{n+1}= x|_{D_{n+1}}$ and $\tilde{x}^{n+1}|_{D_n}=\tilde{x}^n$. Note that the extension $\tilde{x}^{n+1}$ is uniquely defined given $\tilde{x}^n$. We need to prove that this defines a valid graph homomorphism from $D_{n+1}$ to $E_\mathcal H$. Let $\vec{i}\in D_{n+1}\setminus D_n$ and $\vec{j}\in D_n$ be chosen as described above. Consider, if possible, any $\vec{j}^\prime \in D_n$ with $\vec{j}^\prime\neq \vec{j}$ such that $\vec{j}^\prime \sim \vec{i}$. To prove that $\tilde{x}^{n+1}$ is a graph homomorphism we need to verify that $\tilde{x}^{n+1}_{\vec{j}^\prime}\sim \tilde{x}^{n+1}_{\vec{i}}$. Consider $\vec{i}^\prime\in D_n$ such that $\vec{i}^\prime\sim \vec{j}$ and $\vec{i}^\prime\sim \vec{j}^\prime$. Then $\vec{i}^\prime, \vec{j}, \vec{i}$ and $\vec{j}^\prime$ form a four-cycle. Since $\mathcal H$ is four-cycle free, either $x_{\vec{i}^\prime}=x_{\vec{i}}$ or $x_{\vec{j}^\prime}= x_{\vec{j}}$. Suppose $x_{\vec{i}^\prime}=x_{\vec{i}}$; the other case is similar.
Since $\pi$ is a local isomorphism and $\tilde{x}^{n+1}_{\vec{i}},\tilde{x}^{n+1}_{\vec{i^\prime}} \sim \tilde{x}^{n+1}_{\vec{j}}$, we get that $\tilde{x}^{n+1}_{\vec{i}}=\tilde{x}^{n+1}_{\vec{i}^\prime}$. But ${\vec{i}}', {\vec{j}}' \in D_n$ and $\tilde x^{n+1}|_{D_n}= \tilde x^{n}$ is a graph homomorphism; therefore $\tilde{x}^{n+1}_{\vec{i}}=\tilde{x}^{n+1}_{\vec{i}^\prime}\sim \tilde{x}^{n+1}_{\vec{j}^\prime}$. \end{proof} \begin{corollary}\label{corollary:covering_space_lifting_homoclinic} Let $\mathcal H$ be a connected four-cycle free graph and $x, y\in X_\mathcal H$. Consider some lifts $\tilde{x}, \tilde{y} \in X_{E_\mathcal H}$ such that $\pi(\tilde{x})= x $ and $\pi(\tilde{y})=y$. If for some $\vec{i}_0 \in \mathbb{Z}^d$, $\tilde{x}_{\vec{i}_0}= \tilde{y}_{\vec{i}_0}$ then $\tilde{x}= \tilde{y}$ on the connected component of $$\{\vec{j} \in \mathbb{Z}^d\:|\: x_{\vec{j}}= y_{\vec{j}}\}$$ which contains $\vec{i}_0$. \end{corollary} \begin{proof} Let $D$ be the connected component of $\{\vec{i} \in \mathbb{Z}^d \:|\: x_{\vec{i}} = y_{\vec{i}}\}$ and $\tilde D$ be the connected component of $\{\vec{i} \in \mathbb{Z}^d \:|\: \tilde x_{\vec{i}} = \tilde y_{\vec{i}}\}$ containing $\vec{i}_0$. Clearly $\tilde D \subset D$. Suppose $\tilde D\neq D$. Since $D$ is connected and $\tilde D$ is a non-empty proper subset of $D$, there exist $\vec{i} \in D \setminus \tilde D$ and $\vec{j} \in \tilde D$ such that $\vec{i} \sim \vec{j}$. Then $x_{\vec{i}}= y_{\vec{i}}$, $x_{\vec{j}}= y_{\vec{j}}$ and $\tilde x_{\vec{j}}= \tilde y_{\vec{j}}$. Since $\pi$ is a local isomorphism, the lifts must satisfy $\tilde x_{\vec{i}} = \tilde y_{\vec{i}}$, implying $\vec{i} \in \tilde D$, a contradiction. This proves that $D= \tilde D$. \end{proof} The following corollary says that any two lifts of the same configuration are `identical'. \begin{corollary}\label{corollary:lift_are_isomorphic} Let $\mathcal H$ be a connected four-cycle free graph. Then for all $\tilde x^1,\tilde x^2 \in X_{E_\mathcal H}$ satisfying $\pi(\tilde x^1) = \pi(\tilde x^2)= x$ there exists an isomorphism $\phi: E_\mathcal H \longrightarrow E_\mathcal H$ such that $\phi\circ \tilde x^1= \tilde x^2$. \end{corollary} \begin{proof} By Proposition \ref{proposition:isomorphism_of_universal_covering_space} there exists an isomorphism $\phi: E_\mathcal H\longrightarrow E_\mathcal H$ such that $\phi(\tilde x^1_{\vec{0}})= \tilde x^2_{\vec{0}}$ and $\pi \circ \phi = \pi$. Then $(\phi\circ\tilde x^1)_{\vec{0}} = \tilde x^2_{\vec{0}}$ and $\pi \circ(\phi\circ\tilde x^1)= (\pi \circ \phi)\circ\tilde x^1= \pi \circ\tilde x^1= x$. By Proposition \ref{proposition:covering_space_lifting}, $\phi\circ\tilde x^1= \tilde x^2$. \end{proof} It is worth noting at this point the relationship of the universal cover described here with the universal cover in algebraic topology. Undirected graphs can be identified with $1$-dimensional CW-complexes where the set of vertices corresponds to the $0$-cells, the edges to the $1$-cells of the complex and the attaching map sends the end-points of the edges to their respective vertices. With this correspondence in mind, the (topological) universal covering space coincides with the (combinatorial) universal covering space described above; indeed a $1$-dimensional CW-complex is simply connected if and only if the corresponding graph does not have any cycles, that is, if and only if it is a tree.
The existence, uniqueness and many such facts about the universal covering space follow from purely topological arguments; see for instance Chapter 13 of \cite{MunkresTopology75} or Chapters 5 and 6 of \cite{Masseyanintroduction1977}. \section{Height Functions and Sub-Cocycles}\label{Section:heights} Existence of lifts as described in the previous section enables us to measure the `rigidity' of configurations. In this section we define height functions and subsequently the slope of configurations, where steepness corresponds to this `rigidity'. The general method of using height functions is usually attributed to J.~H.~Conway \cite{ThurstontilinggroupAMM}. Fix a connected four-cycle free graph $\mathcal H$. Given $x\in X_\mathcal H$ we can define the corresponding \emph{height function} $h_x:\mathbb{Z}^d\times \mathbb{Z}^d\longrightarrow \mathbb{Z}$ given by $h_x({\vec{i}},{\vec{j}}):=d_{E_\mathcal H}(\tilde{x}_{\vec{i}},\tilde{x}_{\vec{j}} )$ where $\tilde{x}$ is a lift of $x$. It follows from Corollary \ref{corollary:lift_are_isomorphic} that $h_x$ is independent of the lift $\tilde{x}$. Given a finite subset $A\subset \mathbb{Z}^d$ and $x\in X_\mathcal H$ we define the \emph{range of $x$ on $A$} as \begin{equation*} Range_A(x):=\max_{{\vec{j}}_1, {\vec{j}}_2\in A} h_x({\vec{j}}_1, {\vec{j}}_2). \end{equation*} For all $x\in X_\mathcal H$ \begin{equation*} Range_A(x)\leq Diameter(A) \end{equation*} and more specifically \begin{equation} \label{equation:diameter_bounds_height} Range_{D_n}(x)\leq 2n \end{equation} for all $n \in \mathbb N$. Since $\tilde x\in X_{E_\mathcal H}$ is a graph homomorphism between bipartite graphs, it preserves the parity of the distance function, that is, if $\vec i, \vec j \in \mathbb{Z}^d$ and $x\in X_\mathcal H$ then the parity of $\|\vec i - \vec j\|_1$ is the same as that of $h_x(\vec i, \vec j)$. As a consequence it follows that $Range_{\partial D_n}(x)$ is even for all $x\in X_{\mathcal H}$ and $n \in \mathbb N$. We note that $$Range_{A}(x)= Diameter(Image(\tilde x|_{A})).$$ The height function $h_x$ is subadditive, that is, $$h_x(\vec i,\vec j)\leq h_x(\vec i, \vec k)+ h_x(\vec k, \vec j)$$ for all $x\in X_\mathcal H$ and $\vec i ,\vec j$ and $\vec k \in \mathbb{Z}^d$. This is in contrast with the usual height function (as in \cite{chandgotia2013Markov} and \cite{peled2010high}) where there is an equality instead of the inequality. This raises some technical difficulties which are partly handled by the subadditive ergodic theorem. The following terminology is not completely standard: given a shift space $X$, a \emph{sub-cocycle} is a measurable map $c: X\times \mathbb{Z}^d \longrightarrow \mathbb N\cup \{0\}$ such that for all $\vec i, \vec j \in \mathbb{Z}^d$ $$c(x, \vec i +\vec j)\leq c(x, \vec i)+ c(\sigma^{\vec i}(x), \vec j).$$ Sub-cocycles arise in a variety of situations; see for instance \cite{Hammersleyfirst1965}. We are interested in the case $c(x, \vec i)= h_x(\vec 0, \vec i)$ for all $x\in X_\mathcal H$ and $\vec i \in \mathbb{Z}^d$. The measure of `rigidity' lies in the asymptotics of this sub-cocycle, the existence of which is provided by the subadditive ergodic theorem.
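Since vertices of $E_\mathcal H$ are non-backtracking walks from a fixed base vertex, distances in $E_\mathcal H$, and hence the height function, are straightforward to compute: the geodesic between two walks passes through their longest common prefix. The following Python sketch (names ours) makes this explicit; note the parity phenomenon mentioned above, e.g.\ for $\mathcal H = C_4$ the walks $(0,1,2)$ and $(0,3,2)$ have the same terminal vertex but are at distance $4$ in $E_{C_4}$.

\begin{verbatim}
def tree_distance(p, q):
    # Distance in E_H between vertices given as non-backtracking
    # walks from the same base vertex: descend to the longest
    # common prefix and count the remaining steps on both sides.
    c = 0
    for a, b in zip(p, q):
        if a != b:
            break
        c += 1
    return (len(p) - c) + (len(q) - c)

# In E_{C_4}: same terminal vertex, distance 4 (parity preserved).
print(tree_distance((0, 1, 2), (0, 3, 2)))  # 4
\end{verbatim}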
Given a set $X$ and a function $f: X\longrightarrow \mathbb R$, let $f^+:=\max(0,f)$. \begin{thm}[Subadditive Ergodic Theorem]\label{theorem:Subadditive_ergodic_theorem}\cite{walters-book} Let $(X, \mathcal B, \mu)$ be a probability space and let $T: X\longrightarrow X$ be measure preserving. Let $\{f_n\}_{n=1}^\infty$ be a sequence of measurable functions $f_n: X\longrightarrow \mathbb R\cup \{-\infty\}$ satisfying the conditions: \begin{enumerate}[(a)] \item $f_1^+ \in L^1(\mu)$ \item for each $m$, $n \geq 1$, $f_{n+m }\leq f_n + f_m \circ T^n$ $\mu$-almost everywhere. \end{enumerate} Then there exists a measurable function $f: X\longrightarrow \mathbb R\cup \{-\infty \}$ such that $f^+\in L^1(\mu)$, $f\circ T=f$, $\lim_{n\rightarrow \infty} \frac{1}{n}f_n =f$, $\mu$-almost everywhere and $$\lim_{n \longrightarrow \infty}\frac{1}{n}\int f_n d\mu = \inf_{n}\frac{1}{n}\int f_n d \mu= \int f d\mu.$$ \end{thm} Given a direction $\vec{ i} =(i_1, i_2, \ldots, i_d)\in \mathbb R^d$ let $\lfloor\vec{ i}\rfloor=(\lfloor i_1\rfloor, \lfloor i_2\rfloor, \ldots, \lfloor i_d\rfloor)$. We define for all $x \in X_\mathcal H$ the \emph{slope of $x$ in the direction $\vec{ i}$} as $$sl_{\vec {i}}(x):= \lim_{n \longrightarrow \infty}\frac{1}{n} h_x(\vec 0, \lfloor n \vec{ i}\rfloor)$$ whenever it exists. If $\vec i\in \mathbb{Z}^d$ we note that the sequence of functions $f_n: X_\mathcal H\longrightarrow \mathbb N\cup \{0\}$ given by $$f_n(x)=h_x(\vec 0, n\vec i)$$ satisfies the hypothesis of this theorem for any shift-invariant probability measure on $X_\mathcal H$: $|f_1|\leq \|\vec i\|_1$ and the subadditivity condition in the theorem is just a restatement of the sub-cocycle condition described above, that is, if $T= \sigma^{\vec i}$ then $$f_{n+m }(x)= h_x(\vec 0, (n+m)\vec i)\leq h_x(\vec 0, n \vec i)+ h_{\sigma^{n \vec i}x}(\vec 0,m \vec i ) =f_n(x) + f_m(T^n(x)).$$ The asymptotics of the height functions (or more generally of the sub-cocycles) are a consequence of the subadditive ergodic theorem, as we will describe next. In the following, by an ergodic measure on $X_\mathcal H$ we mean a probability measure on $X_\mathcal H$ which is ergodic with respect to the $\mathbb{Z}^d$-shift action on $X_\mathcal H$. \begin{prop}[Existence of Slopes]\label{prop:existence_of_slopes} Let $\mathcal H$ be a connected four-cycle free graph and $\mu$ be an ergodic measure on $X_\mathcal H$. Then for all $\vec{ i}\in \mathbb{Z}^d$ $$sl_{\vec {i}}(x)=\lim_{n \longrightarrow \infty}\frac{1}{n} h_x({\vec{0}}, n \vec{ i})$$ exists almost everywhere and is independent of $x$. Moreover if $\vec {i}= (i_1, i_2, \ldots, i_d)$ then $$sl_{\vec {i}}(x)\leq \sum_{k=1}^d |i_k| sl_{\vec {e}_k}(x).$$ \end{prop} \begin{proof} Fix a direction $\vec{ i}\in \mathbb{Z}^d$. Consider the sequence of functions $\{f_n\}_{n=1}^\infty$ and the map $T: X_\mathcal H\longrightarrow X_\mathcal H$ as described above. By the subadditive ergodic theorem there exists a function $f: X_\mathcal H\longrightarrow \mathbb R\cup \{-\infty\}$ such that $\lim_{n \rightarrow \infty}\frac{1}{n}f_n =f$ almost everywhere. Note that $f= sl_{\vec{i}}$ almost everywhere. Since $0\leq f_n\leq n\|{\vec{i}}\|_1$ for all $x\in X_\mathcal H$ and $n \in\mathbb N$, we get $0\leq f\leq \|\vec i\|_1$. Fix any $\vec{j}\in \mathbb{Z}^d$.
Then \begin{eqnarray*} f_n(\sigma^{\vec{j}}(x))&=& h_{\sigma^{\vec{j}}(x)}({\vec{0}}, n \vec{i})= h_x(\vec{j}, n \vec{ i}+\vec{ j}) \end{eqnarray*} and hence \begin{eqnarray*} -h_x(\vec{j}, {\vec{0}}) + h_x({\vec{0}}, n \vec{i})- h_x(n \vec{i}, n \vec{i}+\vec{j})&\leq& f_n(\sigma^{\vec{j}}(x))\\ &\leq& h_x(\vec{j},{\vec{0}}) + h_x({\vec{0}}, n \vec{i})+ h_x(n \vec{i}, n \vec{i}+\vec{j}), \end{eqnarray*} implying \begin{eqnarray*} -2\|\vec{j}\|_1+ f_n(x)\leq & f_n(\sigma^{\vec{j}}(x))& \leq 2\|\vec{j}\|_1+ f_n(x), \end{eqnarray*} so that $$f(x)=\lim_{n \longrightarrow \infty} \frac{1}{n} f_n(x)= \lim_{n \longrightarrow \infty}\frac{1}{n} f_n(\sigma^{\vec{j}} x)= f(\sigma^{\vec{j}}(x))$$ almost everywhere. Since $\mu$ is ergodic, $sl_{{\vec{i}}}= f$ is constant almost everywhere. Let $\vec{i}^{(k)} = (i_1, i_2, \ldots, i_k, 0, \ldots, 0)\in \mathbb{Z}^d$. By the subadditive ergodic theorem, almost everywhere \begin{eqnarray*} sl_{\vec{i}}(x)= \int sl_{\vec{i}}\, d\mu&=& \lim_{n \longrightarrow \infty}\frac{1}{n}\int h_x({\vec{0}}, n \vec{i}) d\mu\\ &\leq&\sum_{k=1}^d \lim_{n \longrightarrow\infty}\frac{1}{n} \int h_{\sigma^{n \vec{i}^{(k-1)}}(x)}({\vec{0}}, ni_{k}\vec{e}_k ) d \mu\\ &=&\sum_{k=1}^d \lim_{n \longrightarrow\infty}\frac{1}{n} \int h_x({\vec{0}}, ni_{k}\vec{e}_k ) d \mu\\ &\leq&\sum_{k=1}^d|i_k|\lim_{n \longrightarrow\infty}\frac{1}{n} \int h_x({\vec{0}}, n\vec{e}_k ) d \mu\\ &=&\sum_{k=1}^d |i_k| sl_{\vec {e}_k}(x). \end{eqnarray*} \end{proof} \begin{corollary}\label{corollary: existence_of _slopes_in_reality} Let $\mathcal H$ be a connected four-cycle free graph. Suppose $\mu$ is an ergodic measure on $X_\mathcal H$. Then for all $\vec{i}\in \mathbb R^d$ $$sl_{\vec{i}}(x)=\lim_{n \longrightarrow \infty}\frac{1}{n} h_x({\vec{0}}, \lfloor n \vec{i}\rfloor)$$ exists almost everywhere and is independent of $x$. Moreover if $\vec{i}= (i_1, i_2,\ldots, i_d)$ then $$sl_{\vec{i}}(x)\leq \sum_{k=1}^d |i_k| sl_{\vec{e}_k}(x).$$ \end{corollary} \begin{proof} Let $\vec{i}\in \mathbb Q^d$ and $N\in \mathbb N$ such that $N \vec{i} \in \mathbb{Z}^d$. For all $n \in \mathbb N$ there exist $k \in \mathbb N\cup\{0\}$ and $0\leq m\leq N-1$ such that $n = kN+m$. Then for all $x\in X_\mathcal H$ $$h_x({\vec{0}}, k N\vec{i})- N\|\vec{i}\|_1-d\leq h_x({\vec{0}}, \lfloor n\vec{i} \rfloor)\leq h_x({\vec{0}}, k N\vec{i})+ N\|\vec{i}\|_1+d$$ proving $$sl_{\vec{i}}(x)=\lim_{n\longrightarrow\infty}\frac{1}{n}h_x({\vec{0}}, \lfloor n\vec{ i} \rfloor) = \frac{1}{N}\lim_{k \longrightarrow \infty} \frac{1}{k}h_x({\vec{0}}, k N\vec{i}) = \frac{1}{N}sl_{N\vec{i}}(x)$$ almost everywhere. Since $sl_{N\vec{i}}$ is constant almost everywhere, $sl_{\vec{i}}$ is constant almost everywhere as well; denote the constant by $c_{\vec{i}}$. Also $$sl_{\vec{i}}(x)\leq \frac{1}{N}\sum _{l=1}^d|N i_l|sl_{\vec{e}_l}(x)=\sum _{l=1}^d|i_l|sl_{\vec{e}_l}(x).$$ Let $X\subset X_\mathcal H$ be the set of configurations $x$ such that $$\lim_{n \longrightarrow \infty}\frac{1}{n} h_x({\vec{0}}, \lfloor n \vec{i}\rfloor)=c_{\vec{i}}$$ for all ${\vec{i}} \in \mathbb Q^d$. We have proved that $\mu(X)=1$. Fix $x\in X$. Let $\vec i, \vec{j}\in \mathbb R^d$ such that $\|\vec{i}- \vec{j}\|_1<\epsilon$.
Then $$\left|\frac{1}{n} h_x({\vec{0}}, \lfloor n \vec{i}\rfloor)-\frac{1}{n} h_x({\vec{0}}, \lfloor n \vec{j}\rfloor)\right|\leq\frac{1}{n}\|\lfloor n \vec{i}\rfloor-\lfloor n \vec{j}\rfloor\|_1\leq\epsilon+\frac{2d}{n}.$$ Thus we can approximate $\frac{1}{n} h_x({\vec{0}}, \lfloor n \vec{i}\rfloor)$ for ${\vec{i}} \in \mathbb R^d$ by $\frac{1}{n} h_x({\vec{0}}, \lfloor n \vec{j}\rfloor)$ for ${\vec{j}} \in \mathbb Q^d$ to prove that $\lim_{n \longrightarrow \infty}\frac{1}{n} h_x({\vec{0}}, \lfloor n \vec{i}\rfloor)$ exists for all $\vec i \in \mathbb R^d$, is independent of $x\in X$ and satisfies $$sl_{\vec{i}} (x)\leq \sum _{k=1}^d|i_k|sl_{\vec{e}_k}(x).$$ \end{proof} The existence of slopes can be generalised from height functions to continuous sub-cocycles; the same proofs work: \begin{prop}Let $c:X\times \mathbb{Z}^d \longrightarrow \mathbb R$ be a continuous sub-cocycle and $\mu$ be an ergodic measure on $X$. Then for all $\vec{i}\in \mathbb R^d$ $$sl^c_{\vec{i}}(x):=\lim_{n \longrightarrow \infty}\frac{1}{n} c(x, \lfloor n \vec{i}\rfloor)$$ exists almost everywhere and is independent of $x$. Moreover if $\vec{i}= (i_1, i_2, \ldots, i_d)$ then $$sl^c_{\vec{i}}(x)\leq \sum_{k=1}^d |i_k| sl^c_{\vec{e}_k}(x).$$ \end{prop} Let $C_X$ be the space of continuous sub-cocycles on a shift space $X$. $C_X$ has a natural vector space structure: given $c_1, c_2\in C_X$, $(c_1 +\alpha c_2)$ is also a continuous sub-cocycle on $X$ for all $\alpha\in \mathbb R$, where addition and scalar multiplication are defined pointwise. The following is not hard to prove and follows directly from the definition. \begin{prop}\label{proposition: sub-cocycles under conjugacy} Let $X, Y$ be conjugate shift spaces. Then every conjugacy $f: X \longrightarrow Y$ induces a vector-space isomorphism $ f^\star: C_Y\longrightarrow C_X$ given by $$f^\star(c)(x, \vec {i}):= c(f(x), \vec{i})$$ for all $c\in C_Y$, $x\in X$ and $\vec i \in \mathbb{Z}^d$. Moreover $sl^c_{\vec i}(y)=sl^{f^\star(c)}_{\vec i}(f^{-1}(y))$ for all $y\in Y$ and $\vec i \in \mathbb R^d$ for which the slope $sl^c_{\vec i}(y)$ exists. \end{prop} \section{Proofs of the Main Theorems} \label{section: Proof of the main theorems} \begin{proof}[Proof of Theorem \ref{theorem: MRF fully supported }] If $\mathcal H$ is a single edge, then $X_\mathcal H$ is the orbit of a periodic configuration; the result follows immediately. Suppose this is not the case. The proof loosely follows the proof of Proposition \ref{proposition: periodicfoldentropy} and, morally, the ideas from \cite{lightwoodschraudnerentropy}: We prove the existence of two kinds of configurations in $X_\mathcal H$, ones which are `poor' (Lemma \ref{lemma:slope 1 is frozen}), in the sense that they are frozen, and others which are `universal' (Lemma \ref{lemma:patching_various_parts}), for which the homoclinic class is dense. Ideas for the following proof were inspired by discussions with Anthony Quas. A similar result in a special case is contained in Lemma 6.7 of \cite{chandgotia2013Markov}. \begin{lemma}\label{lemma:slope 1 is frozen} Let $\mathcal H$ be a connected four-cycle free graph and $\mu$ be an ergodic probability measure on $X_\mathcal H$ such that $sl_{\vec e_k}(x)=1$ almost everywhere for some $1\leq k \leq d$. Then $\mu$ is frozen and $h_\mu=0$. \end{lemma} \begin{proof} Without loss of generality assume that $sl_{\vec e_1}(x)=1$ almost everywhere.
By the subadditivity of the height function, for all $k, n \in \mathbb N$ and $x\in X_\mathcal H$ we know that $$\frac{1}{kn}h_x(\vec 0, kn\vec{e}_1) \leq \frac{1}{kn}\sum_{m=0}^{n-1}h_x(km\vec{e}_1, k(m+1)\vec{e}_1)=\frac{1}{n}\sum_{m=0}^{n-1}\frac{1}{k}h_{\sigma^{km \vec e_1} (x)}(\vec 0, k\vec{e}_1) \leq 1.$$ Since $sl_{\vec{e}_1}(x)= 1$ almost everywhere, we get that $$\lim_{n\longrightarrow \infty} \frac{1}{n}\sum_{m=0}^{n-1}\frac{1}{k}h_{\sigma^{km \vec e_1} (x)}(\vec 0, k\vec{e}_1)=1$$ almost everywhere. By the ergodic theorem (applied to the transformation $\sigma^{k \vec e_1}$) $$\int \frac{1}{k}h_{x}(\vec 0, k\vec{e}_1) d \mu= 1.$$ Therefore $h_{x}(\vec 0, k\vec{e}_1)=k$ almost everywhere, which implies that \begin{equation} h_x(\vec i, \vec i+k\vec{e}_1)=k \label{eq:slopeoneheightconstantrise} \end{equation} for all $\vec i \in \mathbb{Z}^d$ and $k \in \mathbb N$ almost everywhere. Let $X\subset supp(\mu)$ denote the set of such configurations. For some $n \in \mathbb N$ consider two patterns $a,b \in \mathcal L_{B_n\cup \partial_2 B_n}(supp(\mu))$ such that $a|_{\partial_2 B_n}= b|_{\partial_2 B_n}$. We will prove that $a|_{B_n}= b|_{B_n}$. This will prove that $\mu$ is frozen, and that $|\mathcal L_{B_n}(supp(\mu))|\leq|\mathcal L_{\partial_2 B_n}(supp(\mu))|\leq |{\mathcal A}|^{|\partial_2 B_n|}$, implying that $h_{top}(supp(\mu))=0$. By the variational principle this implies that $h_\mu=0$. Consider $x, y \in X$ such that $x|_{B_n\cup \partial_2 B_n}= a$ and $y|_{B_n\cup \partial_2 B_n}= b$. Noting that $\partial_2 B_n$ is connected, by Corollary \ref{corollary:covering_space_lifting_homoclinic} we can choose lifts $\tilde x, \tilde y\in X_{E_\mathcal H}$ such that $\tilde x|_{\partial_2 B_n}= \tilde y|_{\partial_2 B_n}$. Consider any $\vec i \in B_n$ and choose a negative integer $k$ such that $\vec i + k \vec e_1, \vec i + (2n+2+k)\vec e_1 \in \partial B_n$. Then by Equation \ref{eq:slopeoneheightconstantrise}, $d_{E_\mathcal H}(\tilde x_{\vec i + k \vec e_1}, \tilde x_{\vec i + (2n+2+k) \vec e_1})= 2n+2$. But $$(\tilde x_{\vec i + k \vec e_1},\tilde x_{\vec i + (k+1)\vec e_1}, \ldots,\tilde x_{\vec i + (2n+2+k) \vec e_1} )\text{ and }$$ $$(\tilde y_{\vec i + k \vec e_1},\tilde y_{\vec i + (k+1)\vec e_1}, \ldots,\tilde y_{\vec i + (2n+2+k) \vec e_1} )$$ are walks of length $2n+2$ from $\tilde x_{\vec i + k \vec e_1}$ to $\tilde x_{\vec i + (2n+2+k) \vec e_1}$. Since $E_\mathcal H$ is a tree and the walks are of minimal length, they must be the same. Thus $\tilde x|_{B_n}=\tilde y|_{B_n}$. Taking the image under the map $\pi$ we derive that $$a|_{B_n}=x|_{B_n}=y|_{B_n}= b|_{B_n}.$$ \end{proof} This partially justifies the claim that steep slopes lead to greater `rigidity'. We are left to analyse the case where the slope is submaximal in every direction. As in the proof of Proposition 7.1 in \cite{chandgotia2013Markov}, we will now prove a certain mixing result for the shift space $X_\mathcal H$. \begin{lemma}\label{lemma:patching_various_parts} Let $\mathcal H$ be a connected four-cycle free graph and $|\mathcal H|= r$. Consider any $x\in X_\mathcal H$ and some $y \in X_\mathcal H$ satisfying $Range_{\partial D_{(d+1)n+3r+k}}(y)\leq 2k$ for some $n \in \mathbb N$. Then \begin{enumerate} \item\label{case:not_bipartite} If either $\mathcal H$ is not bipartite or $x_{\vec 0}, y_{\vec 0}$ are in the same partite class of $\mathcal H$ then there exists $z\in X_\mathcal H$ such that $$z_{\vec i}= \begin{cases} x_{\vec i}& \text{ if } \vec i \in D_n\\ y_{\vec i} &\text{ if } \vec i \in D_{(d+1)n+3r+k}^c.
\end{cases}$$ \item \label{case:bipartite} If $\mathcal H$ is bipartite and $x_{\vec 0}, y_{\vec 0}$ are in different partite classes of $\mathcal H$ then there exists $z\in X_\mathcal H$ such that $$z_{\vec i}= \begin{cases} x_{\vec i+\vec e_1} &\text{ if } \vec i \in D_n\\ y_{\vec i} & \text{ if } \vec i \in D_{(d+1)n+3r+k}^c. \end{cases}$$ \end{enumerate} \end{lemma} The separation $dn+3r+k$ between the induced patterns of $x$ and $y$ is not optimal, but it is sufficient for our purposes. \begin{proof} We will construct the configuration $z$ only in the case when $\mathcal H$ is not bipartite. The construction in the other cases is similar; the differences will be pointed out in the course of the proof. \begin{enumerate} \item \textbf{Boundary patterns with non-maximal range to monochromatic patterns inside.} Let $\tilde y$ be a lift of $y$ and $\mathcal T^\prime$ be the image of $\tilde y|_{ D_{(d+1)n+3r+k+1}}$. Let $\mathcal T$ be a minimal subtree of $E_\mathcal H$ such that $$Image(\tilde y|_{\partial D_{(d+1)n+3r+k}})\subset \mathcal T\subset \mathcal T^\prime.$$ Since $Range_{\partial D_{(d+1)n+3r+k}}(y)\leq 2k$, $diameter(\mathcal T)\leq 2k$. By Proposition \ref{proposition:folding trees into other trees} there exists a graph homomorphism $f:\mathcal T^\prime \longrightarrow \mathcal T$ such that $f|_\mathcal T$ is the identity. Consider the configuration $\tilde y^1$ given by $$\tilde y^1_{\vec i}= \begin{cases} f(\tilde y_{\vec i}) &\text{ if }\vec i \in D_{(d+1)n+3r+k+1}\\ \tilde y_{\vec i} &\text{ otherwise.} \end{cases}$$ The pattern $$\tilde y^1|_{D_{(d+1)n+3r+k+1}}\in \mathcal L_{D_{(d+1)n+3r+k+1}}(X_{\mathcal T})\subset \mathcal L_{D_{(d+1)n+3r+k+1}}(X_{E_\mathcal H}).$$ Moreover since $f|_\mathcal T$ is the identity map, $$\tilde y^1|_{D_{(d+1)n+3r+k}^c}=\tilde y|_{D_{(d+1)n+3r+k}^c}\in \mathcal L_{D_{(d+1)n+3r+k}^c}(X_{E_\mathcal H}).$$ Since $X_{E_\mathcal H}$ is given by nearest neighbour constraints, $\tilde y^1\in X_{E_\mathcal H}$. Recall that the fold-radius of a nearest neighbour shift of finite type (in our case $X_\mathcal T$) is the smallest number of full config-folds required to obtain a stiff shift. Since $diameter(\mathcal T)\leq 2k$, the fold-radius of $X_{\mathcal T}$ is at most $k$. Let a stiff shift obtained by a sequence of config-folds starting at $X_{\mathcal T}$ be denoted by $Z$. Since $\mathcal T$ folds into a graph consisting of a single edge, $Z$ consists of two checkerboard patterns in the vertices of an edge in $\mathcal T$, say $\tilde v_1$ and $\tilde v_2$. Corresponding to such a sequence of full config-folds, we defined in Section \ref{section:Folding, Entropy Minimality and the Pivot Property} the outward fixing map $O_{X_\mathcal T, (d+1)n+3r+k}$. By Proposition \ref{prop: folding_ to _ stiffness_fixing_a_set} the configuration $O_{X_\mathcal T,(d+1)n+3r+k}(\tilde y^1)\in X_{E_{\mathcal H}}$ satisfies \begin{eqnarray*} O_{X_\mathcal T,(d+1)n+3r+k}(\tilde y^1)|_{D_{(d+1)n+3r+1}}\in \mathcal L_{D_{(d+1)n+3r+1}}(Z) \\ O_{X_\mathcal T,(d+1)n+3r+k}(\tilde y^1)|_{D_{(d+1)n+3r+k}^c}=\tilde y^1|_{D_{(d+1)n+3r+k}^c}=\tilde y|_{D_{(d+1)n+3r+k}^c}. \end{eqnarray*} \noindent Note that the pattern $O_{X_\mathcal T,(d+1)n+3r+k}(\tilde y^1)|_{\partial D_{(d+1)n+3r}}$ uses a single symbol, say $\tilde v_1$. Let $\pi (\tilde v_1)= v_1$. Then the configuration $y^\prime= \pi(O_{X_\mathcal T,(d+1)n+3r+k}(\tilde y^1))\in X_\mathcal H$ satisfies \begin{eqnarray*} y^\prime|_{\partial D_{(d+1)n+3r}} &=& v_1\\ y^\prime|_{D_{(d+1)n+3r+k}^c}&=&y|_{D_{(d+1)n+3r+k}^c}.
\end{eqnarray*} \item \textbf{Constant extension of an admissible pattern.} Consider some lift $\tilde x$ of $x$. We begin by extending $\tilde x|_{B_n}$ to a periodic configuration $\tilde x^1\in X_{E_\mathcal H}$. Consider the map $f: [-n, 3n]\longrightarrow [-n, n]$ given by \begin{equation*} f(k)=\begin{cases} k &\text{ if } k \in [-n,n]\\ 2n-k &\text{ if }k \in [n,3n]. \end{cases} \end{equation*} \noindent Then we can construct the pattern $\tilde a\in \mathcal L_{[-n, 3n]^d}(X_{E_\mathcal H})$ given by $$\tilde a_{i_1, i_2, \ldots i_d}= \tilde x_{f(i_1), f(i_2), \ldots, f(i_d)}.$$ Given $k, l \in [-n, 3n]$, if $|k-l|=1$ then $|f(k)-f(l)|=1$. Thus $\tilde a$ is a locally allowed pattern in $X_{E_\mathcal H}$. Moreover since $f(-n)= f(3n)$ the pattern $\tilde a$ is `periodic', meaning, $$\tilde a_{i_1, i_2,\ldots, i_{k-1}, -n, i_{k+1}, \ldots, i_d }= \tilde a_{i_1, i_2, \ldots, i_{k-1}, 3n, i_{k+1}, \ldots, i_d }$$ for all $i_1, i_2, \ldots, i_d \in [-n,3n]$. Also $\tilde a|_{B_n}=\tilde x|_{B_n}$. Then the configuration $\tilde x^1$ obtained by tiling $\mathbb{Z}^d$ with $\tilde a|_{[-n,3n-1]^d}$, that is, $$\tilde x^1_{\vec i}= \tilde a_{(i_1\!\!\!\mod 4n,\ i_2\!\!\!\mod 4n,\ \ldots,\ i_d\!\!\!\mod 4n)-(n, n, \ldots, n)}\text{ for all }\vec i \in \mathbb{Z}^d$$ is an element of $X_{E_\mathcal H}$. Moreover $\tilde x^1|_{B_n}= \tilde a|_{B_n}= \tilde x|_{B_n}$ and $Image(\tilde x^1)= Image(\tilde x|_{B_n})$. Since $diameter(B_n)=2dn$, $diameter(Image(\tilde x^1))\leq 2dn$. Let $\tilde{\mathcal T}= Image(\tilde x^1)$. Then the fold-radius of $X_{\tilde{\mathcal T}}$ is less than or equal to $dn$. Let a stiff shift obtained by a sequence of config-folds starting at $X_{\tilde{\mathcal T}}$ be denoted by $Z'$. Since $\tilde{\mathcal T}$ folds into a graph consisting of a single edge, $Z^\prime$ consists of two checkerboard patterns in the vertices of an edge in $\tilde{\mathcal T}$, say $\tilde w_1$ and $\tilde w_2$. Then by Proposition \ref{prop: folding_ to _ stiffness_fixing_a_set} \begin{eqnarray*} I_{X_{\tilde{\mathcal T}},n}(\tilde x^1)|_{D_n}= \tilde x^1|_{D_n}= \tilde x|_{D_n}\\ I_{X_{\tilde{\mathcal T}},n}(\tilde x^1)|_{D_{(d+1)n-1}^c} \in \mathcal L_{D_{(d+1)n-1}^c}(Z^\prime). \end{eqnarray*} \noindent We note that $I_{X_{\tilde{\mathcal T}},n}(\tilde x^1)|_{\partial D_{(d+1)n-1}}$ consists of a single symbol, say $\tilde w_1$. Let $\pi(\tilde w_1)= w_1$. Then the configuration $x^\prime=\pi(I_{X_{\tilde{\mathcal T}},n}(\tilde x^1)) \in X_{\mathcal H}$ satisfies \begin{eqnarray*} x^\prime|_{D_n}= x|_{D_n}\text{ and}\\ x^\prime|_{\partial D_{(d+1)n-1}}=w_1. \end{eqnarray*} \item \textbf{Patching of an arbitrary pattern inside a configuration with non-maximal range.} We will first prove that there exists a walk on $\mathcal H$ from $w_1$ to $v_1$, $((w_1= u_1), u_2, \ldots, (u_{3r+2}= v_1))$. Since the graph is not bipartite, it has an odd cycle $p_1$ with $|p_1|\leq r$. Let $v^\prime$ be a vertex in $p_1$. Then there exist walks $p_2$ and $p_3$ from $w_1$ to $v^\prime$ and from $v^\prime$ to $v_1$ respectively such that $|p_2|, |p_3|\leq r-1$. Consider any vertex $w^\prime\sim_\mathcal H v_1$. If $3r+1-|p_2|- |p_3|$ is even then the walk $$p_2\star p_3 (\star (v_1, w^\prime, v_1))^{\frac{3r+1-|p_2|- |p_3|}{2}}$$ and if not, then the walk $$p_2\star p_1 \star p_3 (\star (v_1, w^\prime, v_1))^{\frac{3r+1-|p_1|-|p_2|- |p_3|}{2}}$$ is a walk of length $3r+1$ in $\mathcal H$ from $w_1$ to $v_1$. This is the only place where we use the fact that $\mathcal H$ is not bipartite.
If it were bipartite, then we would need $x^\prime_{\vec 0}$ and $y^\prime_{\vec 0}$ to be in the same partite class to construct such a walk. Given such a walk the configuration $z$ given by \begin{eqnarray*} z|_{D_{(d+1)n}}&=& x^\prime|_{D_{(d+1)n}}\\ z|_{D^c_{(d+1)n +3r}}&= &y^\prime|_{D^c_{(d+1)n +3r}}\\ z|_{\partial D_{(d+1)n+i-2}}&=& u_i\text{ for all } 1\leq i \leq 3r+2 \end{eqnarray*} \noindent is an element of $X_\mathcal H$ for which $z|_{D_n}=x^\prime|_{D_n}=x|_{D_n}$ and $z|_{D_{(d+1)n+3r+k}^c}=y^\prime|_{D_{(d+1)n+3r+k}^c}=y|_{D_{(d+1)n+3r+k}^c}.$ \end{enumerate} \end{proof} We now return to the proof of Theorem \ref{theorem: MRF fully supported }. Let $\mu$ be an ergodic probability measure adapted to $X_\mathcal H$ with positive entropy. Suppose $sl_{\vec e_i}(x)= \theta_i$ almost everywhere. By Lemma \ref{lemma:slope 1 is frozen}, $\theta_i<1$ for all $1\leq i \leq d$. Let $\theta= \max_i \theta_i$ and $0<\epsilon<\frac{1}{4}\left(1- \theta\right)$. Denote by $S^{d-1}$ the sphere of radius $1$ in $\mathbb R^d$ for the $l^1$ norm. By Corollary \ref{corollary: existence_of _slopes_in_reality} for all $\vec{v}\in S^{d-1}$ $$\lim_{n\longrightarrow \infty }\frac{1}{n}h_x({\vec{0}}, \lfloor n \vec{v}\rfloor) \leq \theta$$ almost everywhere. Since $S^{d-1}$ is compact in $\mathbb R^d$ we can choose a finite set $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_t\} \subset S^{d-1}$ such that for all $\vec{v}\in S^{d-1}$ there exists some $1\leq i\leq t$ satisfying $\|\vec{v}_i -\vec v\|_1<\epsilon$. By Egoroff's theorem \cite{Follandreal1999} given $\epsilon$ as above there exists $N_0\in \mathbb N$ such that for all $n\geq N_0$ \begin{equation} \mu(\{x\in X_\mathcal H\:|\:h_x({\vec{0}}, \lfloor n \vec{v}_i\rfloor)\leq n\theta + n\epsilon\ for\ all\ 1\leq i\leq t\}) >1-\epsilon.\label{equation:uniform_continuity_of_heights} \end{equation} Let $\vec{v} \in \partial D_{n-1}$ and choose $1\leq i_0\leq t$ such that $\|\frac{1}{n}\vec{v}-\vec{v}_{i_0}\|_1<\epsilon$. If for some $x\in X_\mathcal H$ and $n \in \mathbb N$ $$h_x({\vec{0}}, \lfloor n \vec{v}_{i_0}\rfloor)\leq n\theta + n\epsilon$$ then $$h_x({\vec{0}}, \lfloor \vec{v}\rfloor)\leq h_x({\vec{0}}, \lfloor n \vec{v}_{i_0}\rfloor) +\lceil n \epsilon \rceil \leq n\theta + 2n\epsilon+1.$$ By Inequality \ref{equation:uniform_continuity_of_heights} we get $$\mu\left(\{x\in X_\mathcal H\:|\: h_x\left({\vec{0}}, \lfloor \vec{v}\rfloor\right)\leq n\theta + 2n\epsilon+1\ for\ all\ \vec{v}\in \partial D_{n-1}\}\right) >1-\epsilon$$ for all $n\geq N_0$. Therefore for all $n\geq N_0$ there exists $x^{(n)}\in supp(\mu) $ such that $$Range_{\partial D_{n-1}}\left(x^{(n)}\right)\leq 2n\theta + 4 n \epsilon +2< 2n(1- \epsilon)+2.$$ Let $x \in X_\mathcal H$ and $n_0\in \mathbb N$. It is sufficient to prove that $\mu([x]_{D_{n_0-1}})>0$. Set $r:=|\mathcal H|$. Choose $k \in \mathbb N$ such that \begin{eqnarray*} n_0(d+1)+3r+k+1&\geq&N_0\\ 2\left(n_0(d+1)+3r+k+1\right)(1-\epsilon)+2&\leq&2k.
\end{eqnarray*} Then by Lemma \ref{lemma:patching_various_parts} there exists $z\in X_\mathcal H$ such that either \begin{equation*} z_{\vec{j}}= \begin{cases} x_{\vec{j}} \quad\quad\quad\quad \quad\quad\: &if \ {\vec{j}} \in D_{n_0}\\ x^{\left(n_0(d+1)+3r+k+1\right)}_{\vec{j}} \ &if \ {\vec{j}} \in D_{n_0(d+1)+3r+k}^c \end{cases} \end{equation*} or \begin{equation*} z_{\vec{j}}= \begin{cases} x_{{\vec{j}} +\vec e_1}\quad\quad\quad\quad \quad\quad \:& if \ {\vec{j}} \in D_{n_0}\\ x^{\left(n_0(d+1)+3r+k+1\right)}_{\vec{j}} \ &if \ {\vec{j}} \in D_{n_0(d+1)+3r+k}^c. \end{cases} \end{equation*} In either case $(z, x^{\left(n_0(d+1)+3r+k+1\right)})\in \Delta_{X_\mathcal H}$. Since $\mu$ is adapted to $X_\mathcal H$, $z\in supp(\mu)$. In the first case we get that $\mu([x]_{D_{n_0-1}})=\mu([z]_{D_{n_0-1}})>0$. In the second case we get that $$\mu([x]_{D_{n_0-1}})=\mu(\sigma^{\vec e_1}([x]_{D_{n_0-1}}))=\mu([z]_{D_{n_0-1}-\vec e_1})>0.$$ This completes the proof. \end{proof} Every shift space conjugate to an entropy minimal shift space is entropy minimal. However, a shift space $X$ which is conjugate to $X_\mathcal H$ for $\mathcal H$ which is connected and four-cycle free need not even be a hom-shift. By following the proof carefully it is possible to extract a condition for entropy minimality which is conjugacy-invariant: \begin{thm}\label{theorem:conjugacy_invariant_entropy minimality condition} Let $X$ be a shift of finite type and $c$ a continuous sub-cocycle on $X$ with the property that $c(\cdot, {\vec{i}})\leq \|{\vec{i}}\|_1$ for all ${\vec{i}} \in \mathbb{Z}^d$. Suppose every ergodic probability measure $\mu$ adapted to $X$ satisfies: \begin{enumerate} \item If $sl^c_{\vec e_i}(x)=1$ almost everywhere for some $1\leq i \leq d$ then $h_\mu< h_{top}(X)$. \item If $sl^c_{\vec e_i}(x)<1$ almost everywhere for all $1\leq i \leq d$ then $supp(\mu)=X$. \end{enumerate} Then $X$ is entropy minimal. \end{thm} Here is a sketch: By Proposition \ref{proposition:entropyviamme} and Theorems \ref{thm:equiGibbs}, \ref{theorem: ergodic decomposition of markov random fields} it is sufficient to prove that every ergodic measure of maximal entropy is fully supported. If $X$ is a shift of finite type satisfying the hypothesis of Theorem \ref{theorem:conjugacy_invariant_entropy minimality condition} then it is entropy minimal because every ergodic measure of maximal entropy of $X$ is an ergodic probability measure adapted to $X$; its entropy is either smaller than $h_{top}(X)$ or it is fully supported, and the former is impossible for a measure of maximal entropy. To see why the condition is conjugacy invariant, suppose that $f:X\longrightarrow Y$ is a conjugacy and $c\in C_Y$ satisfies the hypothesis of the theorem. Then by Proposition \ref{proposition: sub-cocycles under conjugacy} it follows that ${f^\star}(c)\in C_X$ satisfies the hypothesis as well. \begin{proof}[Proof of Theorem \ref{theorem: pivot property for four cycle free}] By Proposition \ref{proposition: pivot for disconnected} we can assume that $\mathcal H$ is connected. Consider some $(x, y)\in \Delta_{X_\mathcal H}$. By Corollary \ref{corollary:covering_space_lifting_homoclinic} there exists $(\tilde x, \tilde y)\in \Delta_{X_{E_\mathcal H}}$ such that $\pi(\tilde x)= x$ and $\pi(\tilde y)=y$. It is sufficient to prove that there is a chain of pivots from $\tilde x$ to $\tilde y$. We will proceed by induction on $\sum_{{\vec{i}} \in \mathbb{Z}^d} d_{E_\mathcal H}(\tilde x_{{\vec{i}}}, \tilde y_{\vec{i}})$.
The induction hypothesis (on $M$) is: If $\sum_{{\vec{i}} \in \mathbb{Z}^d} d_{E_\mathcal H}(\tilde x_{\vec{i}}, \tilde y_{\vec{i}})= 2M$ then there exists a chain of pivots from $\tilde x$ to $\tilde y$. We note that $d_{E_\mathcal H}(\tilde x_{\vec{i}}, \tilde y_{\vec{i}})$ is even for all ${\vec{i}} \in \mathbb{Z}^d$ since there exists ${\vec{i}}^\prime\in \mathbb{Z}^d$ such that $\tilde x_{{\vec{i}}^\prime}= \tilde y_{{\vec{i}}^{\prime}}$ and hence $\tilde x_{\vec{i}}$ and $\tilde y_{\vec{i}}$ are in the same partite class of $E_\mathcal H$ for all ${\vec{i}} \in \mathbb{Z}^d$. The base case $(M=1)$ occurs exactly when $\tilde x$ and $\tilde y$ differ at a single site; there is nothing to prove in this case. Assume the hypothesis for some $M\in \mathbb N$. Consider $(\tilde x, \tilde y)\in \Delta_{X_{E_\mathcal H}}$ such that $$\sum_{{\vec{i}} \in \mathbb{Z}^d} d_{E_\mathcal H}(\tilde x_{\vec{i}}, \tilde y_{\vec{i}})=2M+2.$$ Let $$B=\{{\vec{j}} \in \mathbb{Z}^d\:|\: \tilde x_{\vec{j}} \neq \tilde y_{\vec{j}}\}$$ and fix a vertex $\tilde v\in E_\mathcal H$. Without loss of generality we can assume that \begin{equation} \max_{{\vec{i}} \in B} d_{E_\mathcal H}(\tilde v, \tilde x_{\vec{i}})\geq \max_{{\vec{i}} \in B} d_{E_\mathcal H}(\tilde v, \tilde y_{\vec{i}}).\label{equation:assumption_for_pivot} \end{equation} Consider some ${\vec{i}}_0 \in B$ such that $$d_{E_\mathcal H}(\tilde v, \tilde x_{{\vec{i}}_0})= \max_{{\vec{i}} \in B}d_{E_\mathcal H}(\tilde v, \tilde x_{{\vec{i}}}).$$ Consider the shortest walks $(\tilde v= \tilde v_1, \tilde v_2, \ldots, \tilde v_n=\tilde x_{{\vec{i}}_0})$ from $\tilde v$ to $\tilde x_{{\vec{i}}_0}$ and $(\tilde v= \tilde v^\prime_1, \tilde v^\prime_2, \ldots, \tilde v^\prime_{n^\prime}=\tilde y_{{\vec{i}}_0})$ from $\tilde v$ to $\tilde y_{{\vec{i}}_0}$. By Assumption \ref{equation:assumption_for_pivot}, $n^\prime\leq n$. Since these are the shortest walks on a tree, if $\tilde v^\prime_k=\tilde v_{k^\prime}$ for some $1\leq k\leq n^\prime$ and $1\leq k^{\prime} \leq n$ then $k =k^{\prime}$ and $\tilde v_l = \tilde v_l^\prime$ for $1\leq l \leq k$. Let $$k_0= \max\{1\leq k\leq n^\prime\:|\: \tilde v_k^\prime= \tilde v_k\}.$$ Then the shortest walk from $\tilde x_{{\vec{i}}_0}$ to $\tilde y_{{\vec{i}}_0}$ is given by $\tilde x_{{\vec{i}}_0}=\tilde v_n, \tilde v_{n-1}, \tilde v_{n-2}, \ldots, \tilde v_{k_0}, \tilde v^\prime_{k_0+1}, \ldots, \tilde v^\prime_{n^\prime}= \tilde y_{{\vec{i}}_0}$. We will prove that $\tilde x_{\vec i}= \tilde v_{n-1}$ for all $\vec i \sim \vec i_{0}$. This is sufficient to complete the proof since then the configuration $$\tilde x^{(1)}_{{\vec{j}}}= \begin{cases} \tilde x_{\vec{j}} \quad\ \:if\ {\vec{j}} \neq {\vec{i}}_0\\ \tilde v_{n-2}\ \ if\ {\vec{j}} = {\vec{i}}_0, \end{cases}$$ \noindent is an element of $X_{E_\mathcal H}$, $(\tilde x,\tilde x^{(1)})$ is a pivot and $$n+n^\prime -2 k_0 -2=d_{E_\mathcal H}{(\tilde x^{(1)}_{{\vec{i}}_0}, \tilde y_{{\vec{i}}_0})}< d_{E_\mathcal H}(\tilde x_{{\vec{i}}_0}, \tilde y_{{\vec{i}}_0})=n+n^\prime -2 k_0$$ \noindent giving us a pair $(\tilde x^{(1)}, \tilde y)$ such that $$\sum_{{\vec{i}} \in \mathbb{Z}^d} d_{E_\mathcal H}(\tilde x^{(1)}_{\vec{i}}, \tilde y_{\vec{i}})=\sum_{{\vec{i}} \in \mathbb{Z}^d} d_{E_\mathcal H}(\tilde x_{\vec{i}}, \tilde y_{\vec{i}})-2= 2M$$ to which the induction hypothesis applies.
There are two possible cases: \begin{enumerate} \item ${\vec{i}} \in B$: Then $d_{E_\mathcal H}(\tilde v, \tilde x_{\vec i})=d_{E_\mathcal H}(\tilde v, \tilde x_{\vec i_0})-1$ and $\tilde x_{\vec{i}}\sim_{E_\mathcal H} \tilde x_{\vec i_0}$. Since $E_\mathcal H$ is a tree, $\tilde x_{\vec{i}}= \tilde v_{n-1}$. \item ${\vec{i}} \notin B$: Then $\tilde x_{\vec{i}}= \tilde y_{\vec{i}}$ and we get that $d_{E_\mathcal H}(\tilde x_{{\vec{i}}_0}, \tilde y_{{\vec{i}}_0})=2$. Since $\tilde x_{\vec{i}}\sim_{E_\mathcal H} \tilde x_{{\vec{i}}_0}$, the shortest walk joining $\tilde v$ and $\tilde x_{{\vec{i}}}$ must either be $\tilde v= \tilde v_1, \tilde v_2, \ldots, \tilde v_{n-1}= \tilde x_{{\vec{i}}}$ or $\tilde v= \tilde v_1, \tilde v_2,\ldots,\tilde v_{n}= \tilde x_{{\vec{i}}_0}, \tilde v_{n+1}= \tilde x_{{\vec{i}}}$. We want to prove that the former is true. Suppose not. Since $\tilde y_{{\vec{i}}_0}\sim_{E_\mathcal H} \tilde x_{{\vec{i}}}$ and ${\vec{i}}_0\in B$, the shortest walk from $\tilde v$ to $\tilde y_{{\vec{i}}_0}$ is $\tilde v= \tilde v_1, \tilde v_2,\ldots,\tilde v_{n}= \tilde x_{{\vec{i}}_0}, \tilde v_{n+1}= \tilde x_{{\vec{i}}}, \tilde v_{n+2}=\tilde y_{{\vec{i}}_0} $. This contradicts Assumption \ref{equation:assumption_for_pivot} and completes the proof. \end{enumerate} \end{proof} \section{Further Directions} \subsection{Getting Rid of the Four-Cycle Free Condition} In the context of the results in this paper, the four-cycle free condition seems a priori artificial; we feel that in many cases it is a mere artifact of the proof, and removing it is an important and interesting topic for future research. Here we will illustrate what goes wrong when we try to apply our proofs to the simplest possible example with four-cycles, that is, $C_4$. We have shown (Example \ref{Example: Folds to an edge}) that $X_{C_4}$ satisfies the hypothesis of Propositions \ref{proposition: periodicfoldentropy} and \ref{proposition: frozenfoldpivot} and thus it also satisfies the conclusions of Theorems \ref{theorem: MRF fully supported } and \ref{theorem: pivot property for four cycle free}. The proofs of Theorems \ref{theorem: MRF fully supported } and \ref{theorem: pivot property for four cycle free} however rely critically on the existence of lifts to the universal cover, that is, Proposition \ref{proposition:covering_space_lifting}. However the conclusion of this proposition does not hold for $X_{C_4}$: The universal cover of $C_4$ is $\mathbb{Z}$ and the corresponding covering map $\pi: \mathbb{Z} \longrightarrow C_4$ is given by $\pi(i)= i \mod 4$. By the second remark following Theorem 4.1 in \cite{chandgotia2013Markov} it follows that the induced map $\pi: X_{\mathbb{Z}}\longrightarrow X_{C_4}$ is not surjective, disproving the conclusion of Proposition \ref{proposition:covering_space_lifting} for $X_{C_4}$. \subsection{Identification of Hom-Shifts} \noindent\textbf{Question 1:} Given a shift space $X$, are there some nice decidable conditions which imply that $X$ is conjugate to a hom-shift? Being conjugate to a hom-shift places many restrictions on the shift space, for instance on its periodic configurations. Consider a conjugacy $f:X\longrightarrow X_\mathcal H$ where $\mathcal H$ is a finite undirected graph. Let $Z\subset X_\mathcal H$ be the set of configurations invariant under $\{\sigma^{2\vec e_i}\}_{i=1}^d$.
Then there is a bijection between $Z$ and $\mathcal L_A(X_\mathcal H)$ where $A$ is the rectangular shape $$A:=\{\sum_{i=1}^{d}\delta_i \vec e_i\:|\: \delta_i\in \{0,1\}\}$$ because every pattern in $\mathcal L_A(X_\mathcal H)$ extends to a unique configuration in $Z$. More generally, given a graph $\mathcal H$, it is not hard to compute the number of periodic configurations for a specific finite-index subgroup of $\mathbb{Z}^d$. Moreover, periodic points are dense in these shift spaces and there are algorithms to compute approximating upper and lower bounds on their entropy \cite{symmtricfriedlan1997,louidor2010improved}. Hence the same has to hold for the shift space $X$ as well. We are not familiar with nice decidable conditions which imply that a shift space is conjugate to a hom-shift. \subsection{Hom-Shifts and Strong Irreducibility}\label{subsection: homSI} \noindent\textbf{Question 2:} Which hom-shifts are strongly irreducible? We know two such conditions: \begin{enumerate} \item\cite{brightwell2000gibbs} If $\mathcal H$ is a finite graph which folds into $\mathcal H^\prime$ then $X_\mathcal H$ is strongly irreducible if and only if $X_{\mathcal H'}$ is strongly irreducible. This reduces the problem to graphs $\mathcal H$ which are stiff. For instance if $\mathcal H$ is dismantlable, then $X_\mathcal H$ is strongly irreducible. \item\cite{Raimundo2014} $X_\mathcal H$ is single site fillable. A shift space $X_{\mathcal F}\subset {\mathcal A}^{\mathbb{Z}^d}$ is said to be \emph{single site fillable} if for all patterns $a\in {\mathcal A}^{\partial\{\vec 0\}}$ there exists a locally allowed pattern in $X_{\mathcal F}$, $b\in {\mathcal A}^{D_1}$ such that $b|_{\partial\{\vec 0\}}=a$. In case $X_{\mathcal F}= X_\mathcal H$ for some graph $\mathcal H$ then it is single site fillable if and only if given vertices $v_1, v_2, \ldots, v_{2d}\in \mathcal H$ there exists a vertex $v\in \mathcal H$ adjacent to all of them. \end{enumerate} It follows that $X_{K_5}$ is single site fillable and hence strongly irreducible for $d=2$. In fact, strong irreducibility has been proved in \cite{Raimundo2014} for shifts of finite type with a weaker mixing condition called TSSM. This does not cover all possible examples. For instance it was proved in \cite{Raimundo2014} that $X_{K_4}$ is strongly irreducible for $d=2$ even though it is not TSSM and $K_4$ is stiff. We do not know if it is possible to verify whether a given hom-shift is TSSM. \subsection{Hom-Shifts and Entropy Minimality}\noindent\textbf{Question 3:} Given a finite connected graph $\mathcal H$ when is $X_\mathcal H$ entropy minimal? We have provided some examples in the paper: \begin{enumerate} \item $\mathcal H$ can be folded to a single vertex with a loop or a single edge. (Proposition \ref{proposition: periodicfoldentropy}; a sketch for checking this folding condition is given below) \item $\mathcal H$ is four-cycle free. (Theorem \ref{theorem:four cycle free entropy minimal}) \end{enumerate} Again, this does not provide the full picture. For instance $X_{K_4}$ is strongly irreducible when $d=2$ and hence entropy minimal even though $K_4$ is stiff and not four-cycle free. A possible approach might be via identifying the right sub-cocycle and Theorem \ref{theorem:conjugacy_invariant_entropy minimality condition}. \noindent\textbf{Conjecture:} Let $d=2$ and $\mathcal H$ be a finite connected graph. Then $X_\mathcal H$ is entropy minimal.
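To make the folding condition in item 1 above concrete, the following is a minimal sketch in Python, assuming the standard notion of a fold: a vertex $u$ may be folded onto a distinct vertex $v$ whenever every neighbour of $u$ is also a neighbour of $v$, the folded graph being the induced subgraph on the remaining vertices. The helper names are ours and purely illustrative; graphs are encoded as dictionaries mapping each vertex to its set of neighbours (a loop at $v$ makes $v$ its own neighbour).

\begin{verbatim}
def fold_once(adj):
    """Return the graph with one foldable vertex removed,
    or None if the graph is stiff. Vertex u folds onto v
    when N(u) is contained in N(v)."""
    for u in adj:
        for v in adj:
            if u != v and adj[u] <= adj[v]:
                return {w: nbrs - {u}
                        for w, nbrs in adj.items() if w != u}
    return None

def stiff_graph(adj):
    """Fold repeatedly until no fold is possible."""
    while True:
        folded = fold_once(adj)
        if folded is None:
            return adj
        adj = folded

# C_4 folds to a single edge, as in the example cited earlier:
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(stiff_graph(c4))         # e.g. {2: {3}, 3: {2}}, an edge

# K_4 admits no fold, consistent with K_4 being stiff:
k4 = {u: {v for v in range(4) if v != u} for u in range(4)}
print(stiff_graph(k4) == k4)   # True
\end{verbatim}

Checking item 1 then amounts to testing whether the returned stiff graph is a single vertex with a loop or a single edge.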
\subsection{Hom-Shifts and the Pivot Property}\label{subsection: Hom-shifts and the pivot property} We have given a list of examples of graphs $\mathcal H$ for which the shift space $X_\mathcal H$ has the pivot property in Section \ref{section: the pivot property}. In this paper we have provided two further sets of examples: \begin{enumerate} \item $\mathcal H$ can be folded to a single vertex with a loop or a single edge. (Proposition \ref{proposition: frozenfoldpivot}) \item $\mathcal H$ is four-cycle free. (Theorem \ref{theorem: pivot property for four cycle free}) \end{enumerate} We saw in Section \ref{section: the pivot property} that $X_{K_4}, X_{K_5}$ do not have the pivot property when $d=2$. However, they do satisfy a weaker property which we will describe next. A shift space $X$ is said to have the \emph{generalised pivot property} if there is an $r\in \mathbb N$ such that for all $(x,y)\in \Delta_X$ there exists a chain $x^1=x, x^2, x^3, \ldots, y=x^n\in X$ such that $x^i$ and $x^{i+1}$ differ at most on some translate of $D_r$. It can be shown that any nearest neighbour shift of finite type $X\subset {\mathcal A}^\mathbb{Z}$ has the generalised pivot property. In higher dimensions this is not true without any hypothesis; see for instance Section 9 in \cite{chandgotia2013Markov}. It is not hard to prove that any single site fillable nearest neighbour shift of finite type has the generalised pivot property. This can be generalised further: in \cite{Raimundo2014} it is proven that every shift space satisfying TSSM has the generalised pivot property. \noindent\textbf{Question 4:} For which graphs $\mathcal H$ does $X_\mathcal H$ satisfy the pivot property? What about the generalised pivot property? \section{Acknowledgments} I would like to thank my advisor, Prof. Brian Marcus, for dedicated reading of a million versions of this paper, numerous suggestions, insightful discussions and many other things. The line of thought in this paper was born in discussions with Prof. Tom Meyerovitch; his suggestions and remarks have been very valuable to me. I would also like to thank Prof. Ronnie Pavlov, Prof. Sam Lightwood, Prof. Michael Schraudner, Prof. Anthony Quas, Prof. Klaus Schmidt, Prof. Mahan Mj, Prof. Peter Winkler and Raimundo Brice\~no for giving a patient ear to my ideas and many useful suggestions. I would further like to thank Prof. Jishnu Biswas, who introduced me to universal covers and, more generally, to the wonderful world of algebraic topology. This research was partly funded by the Four-Year Fellowship at the University of British Columbia. Lastly, I would like to thank the anonymous referee for many helpful comments and corrections which greatly improved the quality of the paper. \bibliographystyle{abbrv}
\section{Introduction} In the \textit{person search} problem, a \textit{query} person image crop is used to localize co-occurrences in a set of scene images, known as a \textit{gallery}. The problem may be split into two parts: 1) \textit{person detection}, in which all person bounding boxes are localized within each gallery scene and 2) \textit{person re-identification} (re-id), in which detected gallery person crops are compared against a query person crop. Two-step person search methods \cite{zheng_person_2017, chen_person_2018, lan_person_2018, han_re-id_2019, dong_instance_2020, wang_tcts_2020} tackle each of these parts explicitly with separate models. In contrast, end-to-end person search methods \cite{xiao_joint_2017, xiao_ian_2019, liu_neural_2017, ferrari_rcaa_2018, yan_learning_2019, munjal_query-guided_2019, dong_bi-directional_2020, zhong_robust_2020, chen_norm-aware_2020, kim_prototype-guided_2021, li_sequential_2021, han_decoupled_2021, zhang_diverse_2021, yan_anchor-free_2021, yu_cascade_2022, cao_pstr_2022, chen_hierarchical_2020, han_end--end_2021, li_cross-scale_2021} use a single model, typically sharing backbone features for detection and re-identification. \begin{figure}[t]% \begin{center} \includegraphics[width=1\linewidth]{./figures/gfn_retrieval.pdf} \end{center} \vspace{-0.5cm} \caption{An illustration of our proposed two-phase retrieval inference pipeline. In the first phase, the Gallery Filter Network discards scenes unlikely to contain the query person. The second phase is the standard person retrieval process, in which persons are detected, corresponding embeddings extracted, and these embeddings are compared to the query to produce a ranking.} \label{fig:gfn_retrieval} \end{figure} For both model types, the same steps are needed: 1) computation of detector backbone features, 2) detection of person bounding boxes, and 3) computation of feature embeddings for each bounding box, to be used for retrieval. Improvement of person search model efficiency is typically focused on reducing the cost of one or more of these steps. We propose the second and third steps can be avoided altogether for some subset of gallery scenes by splitting the retrieval process into two phases: scene retrieval, followed by typical person retrieval. This two-phase process is visualized in Figure \ref{fig:gfn_retrieval}. We call the module implementing scene retrieval the Gallery Filter Network (GFN), since its function is to filter scenes from the gallery. By performing the cheaper query-scene comparison before detection is needed, the GFN allows for a modular computational pipeline for practical systems, in which one process can determine which scenes are of interest, and another can detect and extract person embeddings only for interesting scenes. This could serve as an efficient filter for video frames in a high frame rate context, or to cheaply reduce the search space when querying large image databases. The GFN also provides a mechanism to incorporate global context into the gallery ranking process. Instead of combining global context features with intermediate model features as in \cite{dong_instance_2020, li_cross-scale_2021}, we explicitly compare global scene embeddings to query embeddings. The resulting score can be used not only to filter out gallery scenes using a hard threshold, but also to weight predicted box scores for remaining scenes. 
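To make the two-phase retrieval pipeline concrete, the following is a minimal sketch in PyTorch-style Python. It is an illustration under stated assumptions rather than a definitive implementation: the method names (\texttt{person\_embedding}, \texttt{scene\_embedding}, \texttt{detect\_and\_embed}) and the default threshold and temperature values are hypothetical and do not correspond to our released code.

\begin{verbatim}
import torch

def two_phase_retrieval(query_crop, gallery_scenes, model,
                        gfn_thresh=0.0, alpha=0.1):
    # Phase 1: compare the query embedding against cheap whole-scene
    # embeddings and discard scenes unlikely to contain the query.
    q = model.person_embedding(query_crop)
    ranking = []
    for scene in gallery_scenes:
        s = model.scene_embedding(scene)          # no detection needed
        s_gfn = torch.cosine_similarity(q, s, dim=0)
        if s_gfn < gfn_thresh:
            continue                              # scene filtered out
        # Phase 2: detect persons and extract box embeddings only
        # for the scenes that survived the filter.
        for box, s_det, x in model.detect_and_embed(scene):
            s_reid = torch.cosine_similarity(q, x, dim=0)
            s_final = s_reid * s_det * torch.sigmoid(s_gfn / alpha)
            ranking.append((scene, box, s_final.item()))
    return sorted(ranking, key=lambda r: -r[2])
\end{verbatim}

The GFN score thus both gates detection (the hard threshold) and re-weights the similarity of any boxes that are eventually detected, mirroring the score-weighting defined in Section \ref{sec:gfn}.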
We show that both the hard-thresholding and score-weighting mechanisms are effective for the benchmark PRW and CUHK-SYSU datasets, resulting in state-of-the-art retrieval performance (+2.7\% top-1 accuracy on the PRW dataset over previous best model), with improved efficiency (over 50\% per-query cost savings on the CUHK-SYSU dataset vs. same model without the GFN). Additionally, we make contributions to the data processing and evaluation frameworks that are used by most person search methods with publicly available code. That work is described in Supplementary Material Section \ref{supp:data_proc}. \subsection{Contributions} Our contributions are as follows: \begin{itemize}[noitemsep, topsep=0pt] \item The Gallery Filter Network: A novel module for learning query-scene similarity scores which efficiently reduces retrieval gallery size via hard-thresholding, while improving detected embedding ranking with global scene information via score-weighting. \item Performance improvements and removal of unneeded elements in the SeqNet person search model \cite{li_sequential_2021}, dubbed SeqNeXt. \item Standardized tooling for the data pipeline and evaluation frameworks typically used for the PRW and CUHK-SYSU datasets, which is extensible to new datasets. \end{itemize} All of our code and model configurations are made publicly available\footnote{Project repository: \url{https://github.com/LukeJaffe/GFN}}. \section{Related Work} {\noindent {\bf Person Search.}} Beginning with the release of two benchmark person search datasets, PRW \cite{zheng_person_2017} and CUHK-SYSU \cite{xiao_joint_2017}, there has been continual development of new deep learning models for person search. Most methods utilize the Online Instance Matching (OIM) Loss from \cite{xiao_joint_2017} for the re-id feature learning objective. Several methods \cite{yan_anchor-free_2021, li_cross-scale_2021, zhang_diverse_2021} enhance this objective using variations of a triplet loss \cite{schroff_facenet_2015}. Many methods make modifications to the object detection sub-module. In \cite{li_cross-scale_2021, yan_anchor-free_2021, cao_pstr_2022}, a variation of the Feature Pyramid Network (FPN) \cite{lin_feature_2017} is used to produce multi-scale feature maps for detection and re-id. Models in \cite{yan_anchor-free_2021, cao_pstr_2022} are based on the Fully-Convolutional One-Stage (FCOS) detector \cite{tian_fcos_2019}. In COAT \cite{yu_cascade_2022}, a Cascade R-CNN-style \cite{cai_cascade_2018} transformer-augmented \cite{vaswani_attention_2017} detector is used to refine box predictions. We use a variation of the single-scale two-stage Faster R-CNN \cite{ren_faster_2015} approach from the SeqNet model \cite{li_sequential_2021}. {\noindent {\bf Query-Based Search Space Reduction.}} In \cite{ferrari_rcaa_2018, liu_neural_2017}, query information is used to iteratively refine the search space within a gallery scene until the query person is localized. In \cite{dong_instance_2020}, Region Proposal Network (RPN) proposals are filtered by similarity to the query, reducing the number of proposals for expensive RoI-Pooled feature computations. Our method uses query features to perform a coarser-grained but more efficient search space reduction by filtering out full scenes before expensive detector features are computed. {\noindent {\bf Query-Scene Prediction.}} In the Instance Guided Proposal Network (IGPN) \cite{dong_instance_2020}, a global relation branch is used for binary prediction of query presence in a scene image. 
This is similar in principle to the GFN prediction, but it is done using expensive intermediate query-scene features, in contrast to our cheaper modular approach to the task. {\noindent {\bf Backbone Variation.}} While the original ResNet50 \cite{he_deep_2016} backbone used in SeqNet and most other person search models has been effective to date, many newer architectures have since been introduced. With the recent advent of vision transformers (ViT) \cite{dosovitskiy_image_2021} and a cascade of improvements including the Swin Transformer \cite{liu_swin_2021} and the Pyramid Vision Transformer (v2) \cite{wang_pvt_2022}, used by the PSTR person search model \cite{cao_pstr_2022}, transformer-based feature extraction has increased in popularity. However, transformer-based models still lag CNNs in efficiency, and newer CNNs including ConvNeXt \cite{liu_convnet_2022} have closed the performance gap with ViT-based models, while retaining the inherent efficiency of convolutional layers. For this reason, we explore ConvNeXt, which is more efficient than ViT alternatives, as an improvement to the ResNet50 backbone. \section{Methods} \subsection{Base Model} Our base person search model is an end-to-end architecture based on SeqNet \cite{li_sequential_2021}. We make modifications to the model backbone, simplify the two-stage detection pipeline, and improve the training recipe, resulting in superior performance. Since the model inherits heavily from SeqNet, and uses a ConvNeXt base, we refer to it simply as SeqNeXt to distinguish it from the original model. Our model, combined with the GFN module, is shown in Figure \ref{fig:model}. \begin{figure*} \begin{center} \includegraphics[width=1\linewidth]{./figures/model.pdf} \end{center} \vspace{-0.5cm} \caption{Architecture of the SeqNeXt person search model augmented with the GFN. Modules modified from SeqNet are colored red, and new modules, related to the GFN, are colored green. The model follows the standard Faster R-CNN paradigm, with backbone features from \texttt{conv4} being used to generate proposals via the RPN. \texttt{conv4} features are pooled for RPN proposals and passed through the \texttt{conv5} head to generate refined proposals. This process is repeated with the refined proposals to generate the final boxes. \texttt{conv4} features are also used to generate both person embeddings and scene embeddings in the same way: the person box or scene passes through the pooling block and then a duplicated \texttt{conv5} head, and \texttt{conv4}, \texttt{conv5} features are concatenated and passed through an embedding (Emb) head. In the pooling block, RoI Align \cite{he_mask_2020} is used for person and proposal features, while adaptive max pooling is used for scene features. GFN scores are generated using person and scene embeddings from the same or different scenes. Person re-id scores are combined with the score output of the second R-CNN stage to produce detector-weighted scores.} \label{fig:model} \end{figure*} {\noindent {\bf Backbone Features.}} Following SeqNet's usage of the first four CNN blocks (\texttt{conv1-4}) from ResNet50 for backbone features, we use the analogous layers in terms of downsampling from ConvNeXt, also referred to as \texttt{conv1-4} for convenience. {\noindent {\bf Multi-Stage Refinement and Inference.}} We simplify the detection pipeline of SeqNet by duplicating the Faster R-CNN head \cite{ren_faster_2015} in place of the Norm-Aware Embedding (NAE) head from \cite{chen_norm-aware_2020}.
We still weight person similarity scores using the output of the detector, but use the second-stage class score instead of the first-stage as in SeqNet. This is depicted in Figure \ref{fig:model} as ``detector-weighted re-id scores''. Additionally during inference, we do not use the Context Bipartite Graph Matching (CBGM) algorithm from SeqNet, discussed in Supplementary Material Section \ref{supp:cbgm}. {\noindent {\bf Augmentation.}} Following resizing images to 900$\times$1500 (Window Resize) at training time, we employ one of two random cropping methods with equal probability: 1) Random Focused Crop (RFC): randomly take a 512$\times$512 crop in the original image resolution which contains at least one known person, 2) Random Safe Crop (RSC): randomly crop the image such that all persons are contained, then resize to 512$\times$512. This cropping strategy allowed us to train with larger batch sizes, while benefiting performance with improved regularization. At inference time, we resize to 900$\times$1500, as in other models. We also consider a variant of Random Focused Crop (RFC2), which resizes images so the ``focused'' person box is not clipped. {\noindent {\bf Objective.}} As in other person search models, we employ the Online Instance Matching (OIM) Loss \cite{xiao_joint_2017}, represented as $\mathcal{L}_\text{reid}$. This is visualized in Figure \ref{fig:gfn_objective}a. For all diagrams in Figure \ref{fig:gfn_objective}, we borrow from the spring analogy for metric learning used in \textit{DrLIM} \cite{hadsell_dimensionality_2006}, with the concept of \textit{attractions} and \textit{repulsions}. The detector loss is the sum of classification and box regression losses from the RPN, and the two Faster R-CNN stages, expressed as: \begin{equation} \mathcal{L}_\text{det} = \sum_{m \in M} \mathcal{L}^m_\text{cls} + \mathcal{L}^m_\text{reg}, \;\; M = \{\text{\scriptsize RPN}, \text{\scriptsize RCNN1}, \text{\scriptsize RCNN2}\} \label{eqn:loss_det} \end{equation} The full loss is the sum of the detector, re-id, and GFN losses: \begin{equation} \mathcal{L} = \mathcal{L}_\text{det} + \mathcal{L}_\text{reid} + \mathcal{L}_\text{gfn} \label{eqn:loss_full} \end{equation} \begin{figure*} \begin{center} \includegraphics[width=1\linewidth]{./figures/gfn_objective_graph.pdf} \end{center} \vspace{-0.5cm} \caption{Visual representation of the re-id and GFN optimization objectives. In a), b), c), e), circles represent scene images which contain one or more different person identities, labeled A and B. We show a system of three scenes with two unique person identities. Green connectors represent attraction, meaning two embeddings are pushed together by an objective, and red connectors represent repulsion, meaning two embeddings are pulled apart by an objective. In a) we show the standard re-id loss objective. In b) we show the scene-only GFN objective. In c) we show the baseline GFN objective, and in e) we show the combined query-scene GFN objective. In d) we show the graph form of the baseline GFN objective and re-id objective together, and in f) we show the graph form of the combined query-scene GFN objective and re-id objective together, with green ellipses surrounding independent sets in each multipartite component.} \label{fig:gfn_objective} \end{figure*} \subsection{Gallery Filter Network} \label{sec:gfn} Our goal is to design a module which removes low-scoring scenes, and reweights boxes from higher-scoring scenes. 
Let $s_\text{reid}$ be the cosine similarity of a predicted gallery box embedding with the query embedding, $s_\text{det}$ be the detector box score, $s_\text{gfn}$ be the cosine similarity for the corresponding gallery scene from the GFN, $\sigma(x) = \frac{1}{1+e^{-x}}$ be the logistic sigmoid, $\alpha$ be a temperature constant, and $\lambda_\text{gfn}$ be the GFN score threshold. At inference time, scenes scoring below $\lambda_\text{gfn}$ are removed, and detection is performed for remaining scenes, with the final score for detected boxes given by \newline $s_\text{final} = s_\text{reid} \cdot s_\text{det} \cdot \sigma(s_\text{gfn} / \alpha)$. The module should push as many negative scenes as possible below $\lambda_\text{gfn}$, while positively impacting the scores of boxes from any remaining scenes. To this end, we consider three variations of the standard contrastive objective \cite{chen_simple_2020, oord_representation_2019} in Sections \ref{sec:gfn_baseline_objective}-\ref{sec:gfn_scene_objective}, in addition to a number of architectural and optimization considerations in Section \ref{sec:gfn_arch_opt}. \subsubsection{Baseline Objective} \label{sec:gfn_baseline_objective} The goal of the baseline GFN optimization is to push person embeddings toward scene embeddings when a person is contained within a scene, and to pull them apart when the person is not in the scene, shown in Figure \ref{fig:gfn_objective}c. Let $x_i \in \mathbb{R}^d$ denote the embedding extracted from person $q_i$ located in some scene $s_j$. Let $y_j \in \mathbb{R}^d$ denote the embedding extracted from scene $s_j$. Let $X$ be the set of all person embeddings $x_i$, and $Y$ the set of all scene embeddings $y_j$, with $N=|X|, M=|Y|$. We define the query-scene indicator function to denote positive query-scene pairs as \begin{equation} \mathcal{I}^{Q}_{i,j} = \begin{cases} 1 & \text{if } q_i \text{ present in } s_j\\ 0 & \text{otherwise} \end{cases} \label{eqn:mem_q} \end{equation} We then define a set to denote indices for a specific positive pair and all negative pairs:\newline $K^{Q}_{i,j} = \{ k \in 1,\ldots,M \,|\, k = j \text{ or } \mathcal{I}^{Q}_{i,k} = 0 \}$. Define $\text{sim}(u, v) = u^\top v / \|u\|\|v\|$, the cosine similarity between $u,v \in \mathbb{R}^d$, and let $\tau$ be a temperature constant. Then the loss for a positive query-scene pair is the cross-entropy loss \begin{equation} \ell^Q_{i,j} = -\log\frac{ \exp{(\text{sim}(x_i, y_j)/\tau)} }{ \sum_{k \in K^{Q}_{i,j}}\exp{(\text{sim}(x_i, y_k)/\tau)} } \label{eqn:gfn_p_baseline} \end{equation} The baseline Gallery Filter Network loss sums positive pair losses over all query-scene pairs: \begin{equation} \mathcal{L}^Q_\text{gfn} = \sum_{i=1}^N \sum_{j=1}^M \mathcal{I}^{Q}_{i,j} \ell^Q_{i,j} \label{eqn:gfn_l_baseline} \end{equation} \subsubsection{Combined Query-Scene Objective} \label{sec:gfn_qs_objective} While it is possible to train the GFN directly with person and scene embeddings using the loss in Equation \ref{eqn:gfn_l_baseline}, we show that this objective is ill-posed without modification. The problem is that we have constructed a system of opposing attractions and repulsions. We can formalize this concept by interpreting the system as a graph $G(V, E)$, visualized in Figure \ref{fig:gfn_objective}d. Let the vertices $V$ correspond to person, scene, and/or combined person-scene embeddings, where an edge in $E$ (red arrow) connecting any two nodes in $V$ represents a negative pair used in the optimization objective.
Let any group of nodes connected by green dashed arrows (not edges in $G$) be an independent set, representing positive pairs in the optimization objective. Then, each connected component of $G$ must be multipartite, or the optimization problem will be ill-posed by design, as in the baseline objective. To learn whether a person is contained within a scene while preventing this conflict of attractions and repulsions, we need to apply a transformation which produces a distinct embedding for each query-scene pair before the optimization. One such option is to combine a query person embedding separately with the query scene and gallery scene embeddings to produce fused representations. This allows us to disentangle the web of interactions between query and scene embeddings, while still learning the desired relationship, visualized in Figure \ref{fig:gfn_objective}e. The person embedding used to fuse with each scene embedding in a pair is left colored, and the corresponding scenes are colored according to that person embedding. Person embeddings present in scenes which are not used are grayed out. In the graph-based presentation, shown in Figure \ref{fig:gfn_objective}f, this modified scheme using query-scene embeddings will always result in a graph comprising some number of star graph connected components. Since these star graph components are multipartite by design, the issue of conflicting attractions and repulsions is avoided. To combine a query and scene embedding into a single query-scene embedding, we define a function $f: \mathbb{R}^d, \mathbb{R}^d \rightarrow \mathbb{R}^d$, such that $z_{i,j} = f(x_i, y_j)$ and $w_i = f(x_i, y^{x_i})$, where $y^{x_i}$ is the embedding of the scene that person $i$ is present in. Borrowing from SENet \cite{hu_squeeze-and-excitation_2018} and QEEPS \cite{munjal_query-guided_2019}, we choose a sigmoid-activated elementwise excitation, with $\odot$ used for elementwise product. ``BN'' is a Batch Normalization layer, to mirror the architecture of the other embedding heads, and $\beta$ is a temperature constant. \begin{equation} f(x, y) = \text{BN} ( \sigma(x / \beta) \odot y ) \label{eqn:f_se} \end{equation} Other choices are possible for $f$, but the elementwise product is critical, because it excites the features most relevant to a given query within a scene, eliciting the relationship shown in Figure \ref{fig:gfn_objective}e. The loss for a positive query-scene pair is the cross-entropy loss \begin{equation} \ell^C_{i,j} = -\log\frac{ \exp{(\text{sim}(w_i, z_{i,j})/\tau)} }{ \sum_{k \in K^{Q}_{i,j}}\exp{(\text{sim}(w_i, z_{i,k})/\tau)} } \label{eqn:gfn_p_qs} \end{equation} The query-scene combined Gallery Filter Network loss sums positive pair losses over all query-scene pairs (a minimal code sketch of this fusion and loss is given below): \begin{equation} \mathcal{L}^C_\text{gfn} = \sum_{i=1}^N \sum_{j=1}^M \mathcal{I}^{Q}_{i,j} \ell^C_{i,j} \label{eqn:gfn_l_qs} \end{equation} \subsubsection{Scene-Only Objective} \label{sec:gfn_scene_objective} As a control for the query-scene objective, we also define a simpler objective which uses scene embeddings only, depicted in Figure \ref{fig:gfn_objective}b. This objective attempts to learn the less discriminative concept of whether two scenes share any persons in common, and has the same optimization issue of conflicting attractions and repulsions as the baseline objective. At inference time, it is used in the same way as the other GFN methods.
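Before formalizing the scene-only objective, we give the sketch of the combined fusion and loss promised above, in PyTorch-style Python. It is a sketch under stated assumptions: the class and function names, tensor shapes, and the default values of $\beta$ and $\tau$ are illustrative and do not correspond to our released code.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuerySceneFusion(nn.Module):
    """f(x, y) = BN(sigmoid(x / beta) * y): query features gate
    (excite) scene features elementwise; inputs are (B, d)."""
    def __init__(self, dim=2048, beta=0.2):
        super().__init__()
        self.bn = nn.BatchNorm1d(dim)
        self.beta = beta

    def forward(self, x, y):
        return self.bn(torch.sigmoid(x / self.beta) * y)

def combined_gfn_loss(w_i, z_i, pos_mask, tau=0.1):
    """Loss terms for a single query i.
    w_i: (d,) fused query/query-scene embedding.
    z_i: (M, d) fused query/gallery-scene embeddings z_{i,j}.
    pos_mask: (M,) bool, True where scene j contains query i."""
    sim = F.cosine_similarity(w_i.unsqueeze(0), z_i, dim=1) / tau
    neg_sim = sim[~pos_mask]
    loss = w_i.new_zeros(())
    for j in torch.where(pos_mask)[0]:
        # Denominator ranges over this positive and all negatives,
        # matching the index set K^Q_{i,j}.
        logits = torch.cat([sim[j].unsqueeze(0), neg_sim])
        loss = loss - F.log_softmax(logits, dim=0)[0]
    return loss
\end{verbatim}

Note that, exactly as in the definition of $K^{Q}_{i,j}$, other positive scenes for the same query are excluded from each denominator, which the boolean mask above makes explicit.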
We define the scene-scene indicator function to denote positive scene-scene pairs as \begin{equation} \mathcal{I}^{S}_{i,j} = \begin{cases} 1 & \text{if } s_i \text{ shares any $q$ in common with } s_j\\ 0 & \text{otherwise} \end{cases} \label{eqn:mem_s} \end{equation} Similar to Section \ref{sec:gfn_baseline_objective}, we define an index set: \newline $K^{S}_{i,j} = \{ k \in 1,\ldots,M \,|\, k = j \text{ or } \mathcal{I}^{S}_{i,k} = 0 \}$. Then the loss for a positive scene-scene pair is the cross-entropy loss \begin{equation} \ell^S_{i,j} = -\log\frac{ \exp{(\text{sim}(y_i, y_j)/\tau)} }{ \sum_{k \in K^{S}_{i,j}}\exp{(\text{sim}(y_i, y_k)/\tau)} } \label{eqn:gfn_p_image} \end{equation} The scene-only Gallery Filter Network loss sums positive pair losses over all scene-scene pairs: \begin{equation} \mathcal{L}^{S}_\text{gfn} = \sum_{i=1}^M \sum_{j=1}^M [i \neq j] \mathcal{I}^{S}_{i,j} \ell^S_{i,j} \label{eqn:gfn_l_image} \end{equation} where $[i \neq j]$ is $1$ if $i \neq j$ else $0$. \subsubsection{Architecture and Optimization} \label{sec:gfn_arch_opt} We consider a number of design choices for the architecture and optimization strategy of the GFN to improve its performance. {\noindent {\bf Architecture.}} Scene embeddings are extracted in the same way as person embeddings, except that a larger 56$\times$56 pooling size with adaptive max pooling is used vs. the person pooling size of 14$\times$14 with RoI Align. This larger scene pooling size is needed to adequately summarize scene information, since the scene extent is much larger than a typical person bounding box. In addition, the scene \texttt{conv5} head and Emb Head are duplicated from the corresponding person modules (no weight-sharing), shown in Figure \ref{fig:model}. {\noindent {\bf Lookup Table.}} Similar to the methodology used for the OIM objective \cite{xiao_joint_2017}, we use a lookup table (LUT) to store scene and person embeddings from previous batches, refreshing the LUT fully during each epoch. We compare the person and scene embeddings in each batch, which have gradients, with some subset of the embeddings in the LUT, which do not have gradients. Therefore only comparisons of embeddings within the batch, or between the batch and the LUT, have gradients. {\noindent {\bf Query Prototype Embeddings.}} Rather than using person embeddings directly from a given batch, we can use the identity prototype embeddings stored in the OIM LUT, similar to \cite{kim_prototype-guided_2021}. To do so, we look up the prototype embedding for a given batch person identity in the OIM LUT during training, and substitute that into the objective. In doing so, we discard gradients from batch person embeddings, meaning that we only pass gradients through scene embeddings, and therefore only update the scene embedding module. This choice is examined in an ablation in Section \ref{sec:ablation_studies}. \section{Experiments and Analysis} \subsection{Datasets and Evaluation} {\noindent {\bf Datasets.}} For our experiments, we use the two standard person search datasets, CUHK-SYSU \cite{xiao_joint_2017}, and \textit{Person Re-identification in the Wild} (PRW) \cite{zheng_person_2017}. CUHK-SYSU comprises a mixture of imagery from hand-held cameras, and shots from movies and TV shows, resulting in significant visual diversity. It contains 18,184 scene images annotated with 96,143 person bounding boxes from tracked (known) and untracked (unknown) persons, with 8,432 known identities.
PRW comprises video frames from six surveillance cameras at Tsinghua University. It contains 11,816 scene images annotated with 43,110 person bounding boxes from known and unknown persons, with 932 known identities. The standard test retrieval partition for the CUHK-SYSU dataset has 2,900 query persons, with a gallery size of 100 scenes per query. The standard test retrieval partition for the PRW dataset has 2,057 query persons, and uses all 6,112 test scenes in the gallery, excluding, for each query, the scene the query person crop is drawn from. For a more robust analysis, we additionally divide the given train set into separate train and validation sets, further discussed in Supplementary Material Section \ref{supp:data_proc}. {\noindent {\bf Evaluation Metrics.}} As in other works, we use the standard re-id metrics of mean average precision (mAP), and top-1 accuracy (top-1). For detection metrics, we use recall and average precision at 0.5 IoU (Recall, AP). In addition, we show GFN metrics mAP and top-1, which are computed as metrics of scene retrieval using GFN scores. To calculate these values, we compute the GFN score for each scene, and consider a gallery scene a match to the query if the query person is present in it. \subsection{Implementation Details} We use the SGD optimizer with momentum for ResNet models, with starting learning rate 3e-3, and Adam for ConvNeXt models, with starting learning rate 1e-4. We train all models for 30 epochs, reducing the learning rate by a factor of 10 at epochs 15 and 25. Gradients are clipped to norm 10 for all models. Models are trained on a single Quadro RTX 6000 GPU (24 GB VRAM), and 30-epoch training time using the final model configuration takes 11 hours for the PRW dataset, and 21 hours for the CUHK-SYSU dataset. Our baseline model used for ablation studies has a ConvNeXt Base backbone, embedding dimension 2,048, scene embedding pool size 56$\times$56, and is trained with 512$\times$512 image crops using the combined cropping strategy (RSC+RFC). It uses the combined prototype feature version of the GFN objective. The final model configuration, used for comparison to other state-of-the-art models, is trained with 640$\times$640 image crops using the altered combined cropping strategy (RSC+RFC2). It uses the combined batch feature version of the GFN objective. Additional implementation details are given in Supplementary Material Section \ref{supp:add_imp_det}. \subsection{Comparison to State-of-the-art} We show a comparison of state-of-the-art methods on the standard benchmarks in Table \ref{tab:sota}. The GFN benefits all metrics, especially top-1 accuracy for the PRW dataset, which improves by 4.6\% for the ResNet50 backbone, and 2.9\% for the ConvNeXt Base backbone. Our best model, SeqNeXt+GFN with ConvNeXt Base, improves mAP by 1.8\% on PRW and 1.2\% on CUHK-SYSU over the previous best PSTR model. This benefit extends to larger gallery sizes for CUHK-SYSU, shown in Figure \ref{fig:cuhk_gallery_size}. In fact, the GFN score-weighting helps more as gallery size increases. This is expected, since the benefit of down-weighting contextually-unlikely scenes, vs. discriminating between persons within a single scene, has a greater effect when there are more scenes compared against. \begin{figure} \centering \begin{minipage}[b]{.23\textwidth} \includegraphics[width=1\linewidth]{./figures/cuhk_gallery_size.pdf} \vspace{-0.5cm} \caption{Effect of gallery size on mAP for the CUHK-SYSU dataset. SNX-CNB = SeqNeXt ConvNeXt Base.
GFN helps more as gallery size increases.} \label{fig:cuhk_gallery_size} \end{minipage} \hfill \begin{minipage}[b]{.23\textwidth} \renewcommand{\arraystretch}{1.25} \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{|l|cc|} \hline \textbf{Occluded} & mAP & top-1 \\ \hline SeqNeXt & 91.1 & 89.8 \\ SeqNeXt+GFN & \textbf{92.0} & \textbf{90.9} \\ \midrule \midrule \textbf{Low-Resolution} & mAP & top-1 \\ \hline SeqNeXt & 91.4 & 92.4 \\ SeqNeXt+GFN & \textbf{92.0} & \textbf{93.1} \\ \hline \end{tabular} } \vspace{-0.2cm} \end{center} \captionof{table}{Performance metrics on two CUHK-SYSU retrieval partitions using either Occluded (top) or Low-Resolution (bottom) query persons.} \label{tab:occlusion_resolution} \end{minipage} \end{figure} \begin{table}[t!] \renewcommand{\arraystretch}{1.0} \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{|l|c|cc|cc|} \hline \multirow{2}*{\textbf{Method}} & \multirow{2}*{\textbf{Backbone}} & \multicolumn{2}{c|}{\textbf{CUHK-SYSU}} & \multicolumn{2}{c|}{\textbf{PRW}} \\\cline{3-6} & & mAP & top-1 & mAP & top-1 \\ \midrule \multicolumn{6}{|l|}{\textit{Two-step}}\\ IDE \cite{zheng_person_2017} & ResNet50 & - & - & 20.5 & 48.3 \\ MGTS \cite{chen_person_2018} & VGG16 & 83.0 & 83.7 & 32.6 & 72.1 \\ CLSA \cite{lan_person_2018} & ResNet50 & 87.2 & 88.5 & 38.7 & 65.0 \\ IGPN \cite{dong_instance_2020} & ResNet50 & 90.3 & 91.4 & {47.2} & 87.0 \\ RDLR \cite{han_re-id_2019} & ResNet50 & 93.0 & 94.2 & 42.9 & 70.2 \\ TCTS \cite{wang_tcts_2020} & ResNet50 & {93.9} & {95.1} & 46.8 & {87.5} \\ \midrule \midrule \multicolumn{6}{|l|}{\textit{End-to-end}} \\ OIM \cite{xiao_joint_2017} & ResNet50 & 75.5 & 78.7 & 21.3 & 49.4 \\ IAN \cite{xiao_ian_2019} & ResNet50 & 76.3 & 80.1 & 23.0 & 61.9 \\ NPSM \cite{liu_neural_2017} & ResNet50 & 77.9 & 81.2 & 24.2 & 53.1 \\ RCAA \cite{ferrari_rcaa_2018} & ResNet50 & 79.3 & 81.3 & - & - \\ CTXG \cite{yan_learning_2019} & ResNet50 & 84.1 & 86.5 & 33.4 & 73.6 \\ QEEPS \cite{munjal_query-guided_2019} & ResNet50 & 88.9 & 89.1 & 37.1 & 76.7 \\ APNet \cite{zhong_robust_2020} & ResNet50 & 88.9 & 89.3 & 41.9 & 81.4 \\ HOIM \cite{chen_hierarchical_2020} & ResNet50 & 89.7 & 90.8 & 39.8 & 80.4 \\ BINet \cite{dong_bi-directional_2020} & ResNet50 & 90.0 & 90.7 & 45.3 & 81.7 \\ NAE+ \cite{chen_norm-aware_2020} & ResNet50 & 92.1 & 92.9 & 44.0 & 81.1 \\ PGSFL \cite{kim_prototype-guided_2021} & ResNet50 & 92.3 & 94.7 & 44.2 & 85.2 \\ % DKD \cite{zhang_diverse_2021} & ResNet50 & 93.1 & 94.2 & 50.5 & 87.1 \\ DMRN \cite{han_decoupled_2021} & ResNet50 & 93.2 & 94.2 & 46.9 & 83.3 \\ AGWF \cite{han_end--end_2021} & ResNet50 & 93.3 & 94.2 & 53.3 & 87.7 \\ AlignPS \cite{yan_anchor-free_2021} & ResNet50 & 94.0 & 94.5 & 46.1 & 82.1 \\ % SeqNet \cite{li_sequential_2021} & ResNet50 & 93.8 & 94.6 & 46.7 & 83.4 \\ SeqNet+CBGM \cite{li_sequential_2021} & ResNet50 & 94.8 & 95.7 & 47.6 & 87.6 \\ COAT \cite{yu_cascade_2022} & ResNet50 & 94.2 & 94.7 & 53.3 & 87.4 \\ COAT+CBGM \cite{yu_cascade_2022} & ResNet50 & 94.8 & 95.2 & 54.0 & 89.1 \\ MHGAM \cite{li_cross-scale_2021} & ResNet50 & 94.9 & 95.9 & 47.9 & 88.0 \\ PSTR \cite{cao_pstr_2022} & ResNet50 & 94.2 & 95.2 &50.1 & 87.9 \\ % PSTR \cite{cao_pstr_2022} & PVTv2-B2 & 95.2 & 96.2 & 56.5 & 89.7 \\ \hline SeqNeXt (ours) & ResNet50 & 94.1 & 94.7 & 50.8 & 86.0 \\ SeqNeXt+GFN (ours) & ResNet50 & 94.7 & 95.3 & 51.3 & 90.6 \\ SeqNeXt (ours) & ConvNeXt & 96.1 & 96.5 & 57.6 & 89.5 \\ SeqNeXt+GFN (ours) & ConvNeXt & \textbf{96.4} & \textbf{97.0} & \textbf{58.3} & \textbf{92.4} \\ \hline \end{tabular}} \vspace{-0.3cm} 
\end{center} \caption{Standard performance metrics mAP and top-1 accuracy on the benchmark CUHK-SYSU and PRW datasets are compared for state-of-the-art \textit{two-step} and \textit{end-to-end} models. ConvNeXt backbone = ConvNeXt Base.} \vspace{-0.4cm} \label{tab:sota} \end{table} \begin{table}[t!] \renewcommand{\arraystretch}{1.0} \begin{center} \resizebox{0.8\linewidth}{!}{ \begin{tabular}{|l|cc|cc|} \hline \multirow{2}*{\textbf{Method}} & \multicolumn{2}{c|}{\textbf{Same Cam ID}} & \multicolumn{2}{c|}{\textbf{Cross Cam ID}} \\ \cline{2-5} & mAP & top-1 & mAP & top-1 \\ \midrule HOIM \cite{chen_hierarchical_2020} & - & - & 36.5 & 65.0 \\ NAE+ \cite{chen_norm-aware_2020} & - & -& 40.0 & 67.5 \\ SeqNet \cite{li_sequential_2021} & - & -& 43.6 & 68.5 \\ SeqNet+CBGM \cite{li_sequential_2021} & - & -& 44.3 & 70.6 \\ AGWF \cite{han_end--end_2021} & - & -& 48.0 & 73.2 \\ COAT \cite{yu_cascade_2022} & - & -& 50.9 & 75.1 \\ COAT+CBGM \cite{yu_cascade_2022} & - & -& 51.7 & 76.1 \\ \hline SeqNeXt (ours) & 82.9 & 98.5 & 55.3 & 80.5 \\ SeqNeXt+GFN (ours) & \textbf{85.1} & \textbf{98.6} & \textbf{56.4} & \textbf{82.1} \\ \hline \end{tabular} } \vspace{-0.2cm} \end{center} \caption{Performance on the PRW test set for query and gallery scenes from the same camera (Same Cam ID) or different cameras (Cross Cam ID).} \label{tab:cam_id} \end{table} The GFN benefits CUHK-SYSU retrieval scenarios with occluded or low-resolution query persons, as shown in Table \ref{tab:occlusion_resolution}. This shows that high quality query person views are not essential to the function of the GFN. The GFN also benefits both cross-camera and same-camera retrieval, as shown in Table \ref{tab:cam_id}. Strong cross-camera performance shows that the GFN can generalize to varying locations, and does not simply pick the scene which is the most visually similar. Strong same-camera performance shows that the GFN is able to use query information, even when all gallery scenes are contextually similar. To showcase these benefits, we provide some qualitative results in Supplementary Material Section \ref{supp:qualitative}. These examples show that the GFN uses local person information combined with global context to improve retrieval ranking, even in the presence of difficult confusers. \subsection{Ablation Studies} \label{sec:ablation_studies} We conduct a series of ablations using the PRW dataset to show how detection, re-id, and GFN performance are each impacted by variations in model architecture, data augmentation, and GFN design choices. In the corresponding metrics tables, we show re-id results by presenting the GFN-modified scores as mAP and top-1, and the difference between unmodified mAP and top-1 with $\Delta$mAP and $\Delta$top-1. This highlights the change in re-id performance specifically from the GFN score-weighting. To indicate the baseline configuration in a table, we use the $\dagger$ symbol, and the final model configuration is highlighted in gray. Results for most of the ablations are shown in Supplementary Material Section \ref{supp:add_ablations}, including model modifications, image augmentation, scene pooling size, embedding dimension, and GFN sampling. {\noindent {\bf GFN Objective.}} We analyze the impact of the various GFN objective choices discussed in Section \ref{sec:gfn}. Comparisons are shown in Table \ref{tab:gfn_objective}. Most importantly, the re-id mAP performance without the GFN is relatively high, but the re-id top-1 performance is much lower than the best GFN methods. 
Conversely, the Scene-Only method achieves competitive re-id top-1 performance, but reduced re-id mAP. The Base methods were found to be significantly worse than all other methods, with GFN score-weighting actually reducing re-id performance. The Combined methods were the most effective, better than the Base and Scene-Only methods on the re-id metrics and GFN top-1, showcasing the improvements discussed in Section \ref{sec:gfn_qs_objective}. In addition, the success of the Combined objective can be explained by two factors: 1) the similarity relationship between scene embeddings and 2) the query information carried by the query-scene embeddings. The Scene-Only objective, which uses only similarity between scene embeddings, is functional but not as effective as the Combined objective, which uses both scene similarity and query information. Since the Scene-Only objective incorporates background information, and does not use query information, we reason that the additional benefit provided by the Combined objective comes from the described mechanism of query excitation of scene features, and not from, \eg, simple matching of the query background with the gallery scene image. Finally, the Batch and Proto modifiers to the Combined and Base methods were found to be relatively similar in performance. Since the Proto method is simpler and more efficient, we use it for the baseline model configuration. \begin{table}[t!] \renewcommand{\arraystretch}{1.0} \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{|l|cc|cccc|cc|} \hline & \multicolumn{2}{|c|}{\textbf{Detection}} & \multicolumn{4}{c|}{\textbf{Re-id}} & \multicolumn{2}{c|}{\textbf{GFN}} \\ \midrule \textbf{GFN Objective} & Recall & AP & mAP & top-1 & $\Delta$ mAP & $\Delta$ top-1 & mAP & top-1 \\ \midrule None & 96.0 & \textbf{93.6} & 58.6 & 88.7 & - & - & - & - \\ Scene-Only & 96.0 & 93.4 & 56.5 & 91.9 & -0.9 & +2.8 & 16.1 & 73.3 \\ Base Batch & 95.7 & 93.1 & 53.9 & 86.6 & -2.6 & -2.0 & \textbf{23.8} & 58.4 \\ Base Proto & 96.0 & \textbf{93.6} & 55.0 & 86.2 & -3.0 & -2.7 & 22.9 & 57.8 \\ \rowcolor{gray!20} Comb. Batch & \textbf{96.2} & \textbf{93.6} & \textbf{59.5} & 92.2 & \textbf{+1.1} & +2.9 & 20.5 & \textbf{78.8} \\ Comb. Proto$\dagger$ & 96.0 & 93.4 & 58.8 & \textbf{92.3} & \textbf{+1.1} & \textbf{+3.5} & 20.4 & 78.5 \\ \hline \end{tabular} } \vspace{-0.2cm} \end{center} \caption{Comparison of different options for the GFN optimization objective. ``None'' does not use the GFN, Scene-Only uses the objective in Section \ref{sec:gfn_scene_objective}, Base uses the baseline objective in Section \ref{sec:gfn_baseline_objective}, Combined (Comb.) uses the query-scene objective in Section \ref{sec:gfn_qs_objective}, Batch indicates that batch query embeddings are used, Proto indicates that prototype query embeddings are used. Baseline model is marked with \dag, final model is highlighted gray.} \label{tab:gfn_objective} \end{table} \subsection{Filtering Analysis} {\noindent {\bf GFN Score Threshold.}} We consider selection of the GFN score threshold value to use for filtering out gallery scenes during retrieval. In Figure \ref{fig:gfn_histogram}, we show histograms of GFN scores for both CUHK-SYSU and PRW. We introduce another metric to help analyze computation savings from the filtering operation: the fraction of negative gallery scenes which can be filtered out (negative predictive value) when using a threshold which keeps 99\% of positive gallery scenes (recall). For the histograms shown, this value is 91.4\% for CUHK-SYSU, and only 11.5\% for PRW.
In short, this is because there is greater variation in scene appearance in CUHK-SYSU than in PRW. This results in most query-gallery comparisons for CUHK-SYSU evaluation occurring between scenes from clearly different environments (e.g., two different movies). While the GFN score-weighting improves performance for both same-camera and cross-camera retrieval, as shown in Table \ref{tab:cam_id}, query-scene scores used for hard thresholding may be less discriminative for the nearly-identical scenes of PRW than for CUHK-SYSU, as shown in Figure \ref{fig:gfn_histogram}. Still, the GFN top-1 score for the final PRW model was 78.4\%, meaning that 78.4\% of queries resulted in the correct gallery scene being ranked first using only the GFN score. \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{./figures/gfn_histogram.pdf} \end{center} \vspace{-0.5cm} \caption{GFN score histograms for the CUHK-SYSU and PRW test sets. Matches and non-matches (Diffs) are shown for queries in the gallery size 4,000 set for CUHK-SYSU, and the full gallery for PRW.} \label{fig:gfn_histogram} \end{figure} {\noindent {\bf Compute Cost.}} In Table \ref{tab:timing}, we show the breakdown of percent time spent on shared computation, GFN-only computation, and detector-only computation. Since most computation time ($\sim$60\%) is spent on detection, with only $\sim$5\% of time spent on GFN-related tasks, there is a large cost savings from using the GFN to avoid detection by filtering gallery scenes. Exactly how much time is saved in practice depends on the relative number of queries vs. the gallery size, and how densely populated the gallery scenes are with persons of interest. To give an understanding of compute savings for a single query, we show some example calculations using the conservative recall requirement of 99\%. For CUHK-SYSU, we have 99.9\% of gallery scenes negative, 91.4\% of negative gallery scenes filtered, and 61.0\% of time spent doing detection on gallery scenes, resulting in $0.999 \times 0.914 \times 0.610 \approx 55.7\%$ of computation saved using the GFN compared to the same model without the GFN. For PRW, the same calculation yields 6.6\% computation saved using the GFN. \begin{table}[t!] \renewcommand{\arraystretch}{1.2} \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|} \hline &\multicolumn{2}{|c|}{\textbf{Shared}} & \multicolumn{2}{c|}{\textbf{GFN}} & \multicolumn{2}{|c|}{\textbf{Detection}} \\ \cline{2-7} & Backbone & Query Emb. & Scene Emb. & GFN Scores & RPN & R-CNN($\times$2) \\ \hline \multirow{2}*{CUHK Time (\%)} & 33.7 & <0.1 & 5.3 & <0.1 & 19.2 & 41.8 \\ \cline{2-7} & \multicolumn{2}{|c|}{33.7} & \multicolumn{2}{c|}{5.3} & \multicolumn{2}{|c|}{61.0} \\ \midrule \midrule \multirow{2}*{PRW Time (\%)} & 36.9 & <0.1 & 5.3 & <0.1 & 16.1 & 41.7 \\ \cline{2-7} & \multicolumn{2}{|c|}{36.9} & \multicolumn{2}{c|}{5.3} & \multicolumn{2}{|c|}{57.8} \\ \hline \end{tabular} } \vspace{-0.2cm} \end{center} \caption{Percent computation time averaged per query of shared feature extraction, GFN, and detection on the CUHK-SYSU (gallery size 4,000) and PRW (gallery size full) test sets.} \label{tab:timing} \end{table} \section{Conclusion} We describe and demonstrate the Gallery Filter Network, a novel module for improving accuracy and efficiency of person search models. We show that the GFN can efficiently filter gallery scenes under certain conditions, and that it benefits scoring for detections in scenes which are not filtered.
We show that the GFN is robust under a range of different conditions by testing on different retrieval sets, including cross-camera, occluded, and low-resolution scenarios. In addition, we show that the benefit given by GFN score-weighting increases as gallery size increases. Separately, we develop the base SeqNeXt person search model, which has significant performance gains over the original SeqNet model. We offer a corresponding training recipe to train efficiently with improved regularization, using an aggressive cropping strategy. Taken together, the SeqNeXt+GFN combination yields a significant improvement over other state-of-the-art methods. Finally, we note that the GFN is not specific to SeqNeXt, and can be easily combined with other person search models. {\noindent {\bf Societal Impact.}} It is important to consider the potential negative impact of person search models, since they are ready-made for surveillance applications. This is highlighted by the PRW dataset being entirely composed of surveillance imagery, and the CUHK-SYSU dataset containing many street-view images of pedestrians. We consider two potential advantages of advancing person search research, and doing so in an open format. First, person search models can be used for beneficial applications, including aiding in finding missing persons, and for newly-emerging autonomous systems that interact with humans, \eg, automated vehicles. Second, it allows the research community to understand how the models work at a granular level, and therefore benefits the potential for counteracting negative uses when the technology is abused. \section*{Acknowledgements} The authors would like to thank Wesam Sakla and Michael Goldman for helpful discussions and feedback. \clearpage {\small \bibliographystyle{ieee_fullname} \section{Data Processing and Evaluation} \label{supp:data_proc} We make publicly available our codebase\footnote{\url{https://github.com/LukeJaffe/GFN}}, which includes instructions and config files needed to replicate all main experiments of the paper. For comparative purposes, we implicitly refer in the following subsections to the public codebases of OIM\footnote{\url{https://github.com/ShuangLI59/person_search}} \cite{xiao_joint_2017}, NAE\footnote{\url{https://github.com/dichen-cd/NAE4PS}} \cite{chen_norm-aware_2020}, SeqNet\footnote{\url{https://github.com/serend1p1ty/SeqNet}} \cite{li_sequential_2021}, COAT\footnote{\url{https://github.com/Kitware/COAT}} \cite{yu_cascade_2022}, AlignPS\footnote{\url{https://github.com/daodaofr/AlignPS}} \cite{yan_anchor-free_2021}, and PSTR\footnote{\url{https://github.com/JialeCao001/PSTR}} \cite{cao_pstr_2022}. \subsection{Standardized Data Format} We produce an intermediate COCO-style \cite{lin_microsoft_2014} format for all partitions of the CUHK-SYSU and PRW datasets. In addition to standard COCO object metadata, we include \texttt{person\_id} and \texttt{is\_known} fields for persons, and a \texttt{cam\_id} image field for performing cross-camera evaluation. This standardization process made it straightforward to prepare new partitions of the data. In particular, we split the standard training sets into separate training and validation sets, and created some additional smaller debugging sets. This allowed us to pick hyperparameters without fitting to the test data.
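For concreteness, the following is a minimal sketch of what one scene record and one person box might look like in this intermediate format. The \texttt{person\_id}, \texttt{is\_known}, and \texttt{cam\_id} fields are those described above; all values and the remaining field layout are illustrative assumptions rather than the literal format in our codebase.
\begin{verbatim}
# Illustrative sketch of one image record and one person annotation in
# the intermediate COCO-style format. Standard COCO fields (id,
# file_name, bbox, ...) are shown with made-up values; person_id,
# is_known, and cam_id are the fields added in this work.
image = {
    "id": 1234,
    "file_name": "s1234.jpg",
    "width": 1920,
    "height": 1080,
    "cam_id": 3,        # added: camera ID for cross-camera evaluation
}
annotation = {
    "id": 98765,
    "image_id": 1234,
    "category_id": 1,             # person
    "bbox": [410, 220, 64, 180],  # COCO convention: [x, y, w, h]
    "person_id": 482,             # added: person identity label
    "is_known": True,             # added: identity is labeled/known
}
\end{verbatim}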
We also standardize the format of retrieval partitions into three categories: 1) a fully-specified format, which encodes the exact gallery scenes to be used for each query; 2) a format which specifies queries only, and uses all scenes in the partition as the gallery; and 3) a format which uses all possible queries, and all possible scenes as the gallery. We create the second and third formats because it is otherwise inefficient to fully specify the ``all'' cases. \subsection{Training and Validation Sets} For both datasets, known identity sets between the train and test partitions are disjoint, making the standard evaluation an \textit{open-set} retrieval problem. To construct the training and validation sets to mirror the open-set retrieval problem of the standard train-test divide, we build a graph based on which scenes share common person identities. Two nodes (scenes) have an edge between them if they share at least one person identity in common. In this way, we can easily split the CUHK-SYSU dataset into a set of connected components, and divide those components into two groups for train ($\sim$80\%) and val ($\sim$20\%). Since the PRW dataset comprises video surveillance footage, this graph has the property that nearly every scene is connected to another scene via some common person identity. Therefore, we ignore the top 100 most common person identities when constructing the graph for PRW, resulting in a partition which is not quite open-set, but should exhibit similar generalization properties for the purpose of model development. For PRW, we also divide components into two groups for train ($\sim$80\%) and val ($\sim$20\%). We rename the original train set to ``trainval'', and all of our final experimental results in this paper are from models trained on the full trainval set, and tested on the full test set using the standard retrieval scenarios. \subsection{Partition Information} Metadata for the exact breakdown of known and unknown identities and boxes for each partition is given in Table \ref{tab:dataset_info}. \begin{table}[t!] \renewcommand{\arraystretch}{1.3} \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{|l|cccc|cccc|} \hline & \multicolumn{4}{c|}{\textbf{CUHK-SYSU}} & \multicolumn{4}{c|}{\textbf{PRW}} \\ \midrule \textbf{Metadata} & trainval & test & train & val & trainval & test & train & val \\ \midrule Scenes & 11,206 & 6,978 & 8,964 & 2,242 & 5,704 & 6,112 & 4,563 & 1,141 \\ Boxes & 55,272 & 40,871 & 44,244 & 11,028 & 18,048 & 25,062 & 14,897 & 3,151 \\ Known IDs & 5,532 & 2,900 & 4,296 & 1,236 & 483 & 544 & 424 & 158 \\ Known Boxes & 15,085 & 8,345 & 11,623 & 3,462 & 14,907 & 19,127 & 12,125 & 2,782 \\ Unknown Boxes & 40,187 & 32,526 & 32,621 & 7,566 & 3,141 & 5,935 & 2,772 & 369 \\ \hline \end{tabular} } \vspace{-0.2cm} \end{center} \caption{Dataset metadata showing how many scenes, boxes, and person IDs are in each partition.} \label{tab:dataset_info} \end{table} \subsection{Evaluation Functions} Using these standardized partitions, we are able to use just one function for detection evaluation and one for retrieval evaluation, as opposed to separate functions for each dataset. This also makes it easier to add in method-specific metrics that can be immediately tested for all partitions. We note that the current dataset releases for PRW and CUHK-SYSU have a small number (5 or fewer) of the following errors: duplicate bounding boxes in a single scene, repeated person IDs in a single scene, and repeated gallery scenes in a retrieval partition.
Although these issues are not handled correctly by the standard evaluation function, we exactly replicate the previous erroneous behavior in our new evaluation function to be certain the comparison against other methods is fair. We leave correction of the underlying data and evaluation function to future work. \subsection{Augmentation Code Structure} To make use of augmentation strategies in the albumentations library \cite{buslaev_albumentations_2020}, we refactor evaluation to occur on the augmented data instead of the original data. This allows for easy inclusion of the different resizing and cropping strategies we make use of, in addition to a wealth of other augmentations, which we leave to future work. \subsection{Config Format and Ray Tune} For running experiments with our code, we provide a YAML config format which is compatible with the Ray Tune library \cite{liaw_tune_2018}. We specifically support the \texttt{tune.grid\_search} functionality by parsing lists in the YAML file as inputs to this function. This makes it easy to run ablations with many variations using a single config file. \section{Additional Implementation Details} \label{supp:add_imp_det} {\noindent {\bf Model Details.}} We set the OIM scalar (inverse temperature) parameter to 30.0 as in \cite{li_sequential_2021}, with an OIM circular queue size of 5,000 for CUHK-SYSU and 500 for PRW. The OIM momentum parameter is also left at 0.5. For the GFN, the training temperature parameter is 0.1, and the GFN excitation function temperature parameter is 0.2. During training, we use a batch size of 12 for ResNet50 backbone models, and a batch size of 8 for ConvNeXt backbone models. For the ResNet50 backbone, we freeze all batch norm layers, and all weights through the \texttt{conv1} layer of the model. For ConvNeXt backbones, we freeze only the \texttt{conv1} layer of the model. All backbones are initialized using weights from pre-training on ImageNet1k \cite{russakovsky_imagenet_2015}. We use automatic mixed precision (AMP), which significantly reduces all training and inference times. To avoid \texttt{float16} overflow, we refactor all loss functions to divide before summation when computing mean reduction. This increases the likelihood of underflow, but results in more stable training overall (a minimal sketch of this refactoring is shown below). {\noindent {\bf GFN Sampling Strategies.}} Since we are unable to use the entire GFN LUT to form loss pairs in any given batch due to memory limitations, we have a choice about which LUT embeddings to select for the GFN optimization. By default, for each query person present in the current batch, we sample one matching scene embedding and the person embeddings for all persons in that scene. In addition, we consider sampling a ``hard negative'' scene, defined as a scene which shares at least one person identity in common with the query scene, but that does not contain the query person identity. An ablation for related choices is considered in Section \ref{supp:add_ablations}. \section{Qualitative Analysis} \label{supp:qualitative} Qualitative examples are shown for both CUHK-SYSU and PRW in Figure \ref{fig:qualitative}. All examples show cases where the baseline model top-1 match is incorrect, but the GFN-modified match for the same example is correct. We highlight examples where global scene context has an obvious vs. a more subtle impact, and where the query and scene camera ID are the same or different.
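The following is a minimal sketch of the divide-before-summation mean reduction referenced in Section \ref{supp:add_imp_det}; it illustrates the idea under AMP and is not the exact loss code from our repository.
\begin{verbatim}
import torch

def mean_reduce_naive(losses: torch.Tensor) -> torch.Tensor:
    # Sum first, divide last: under AMP, the intermediate float16 sum
    # of many loss terms can exceed the float16 maximum (~65504).
    return losses.sum() / losses.numel()

def mean_reduce_stable(losses: torch.Tensor) -> torch.Tensor:
    # Divide each term before summation: intermediate values stay
    # small, trading a higher chance of underflow for overflow safety.
    return (losses / losses.numel()).sum()
\end{verbatim}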
\begin{figure*} \begin{center} \includegraphics[width=1\linewidth]{./figures/retrieval_examples.pdf} \end{center} \vspace{-0.4cm} \caption{Retrieval examples (CUHK-SYSU left, PRW right) from the baseline model where application of the GFN score corrected the top-1 result. The query box is shown in yellow, a false positive gallery match in red, and a true positive gallery match in blue. In each scene, the white box in the lower right duplicates the person of interest for easier comparison between scenes. In the top-left and middle-left, subtle contextual clues (formal wear) help correct the predicted box. In the bottom-left, an obvious contextual clue (interior of same building) corrects the prediction, despite a $180^{\circ}$ change in viewpoint of the person. In the top-right, the false positive and correct match look nearly identical, and the correct box is from the same camera view. In the middle-right, the false positive and correct match have the same shirt and hairstyle, and the correct box is from a different camera view. In the lower-right, the false positive appears to be a mistake in the ground truth (should be true positive), but the GFN ``helped'' by up-weighting a more contextually similar scene.} \label{fig:qualitative} \end{figure*} \section{Additional Ablations} \label{supp:add_ablations} {\noindent {\bf Model Modifications.}} We consider how changes to the SeqNet architecture impact performance, including usage of a second Faster R-CNN head instead of the NAE head, and usage of the second detector stage score instead of the first stage score during inference. Results are shown in Table \ref{tab:model_mods}. Using the ConvNeXt Base backbone instead of ResNet50 does not improve detection performance, but it significantly improves re-id performance, especially mAP, by 7-8\%. Using the first stage score significantly helps detection performance, but it reduces re-id performance. \begin{table}[t!] \renewcommand{\arraystretch}{1.0} \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{|l|cc|cccc|cc|} \hline & \multicolumn{2}{|c|}{\textbf{Detection}} & \multicolumn{4}{c|}{\textbf{Re-id}} & \multicolumn{2}{c|}{\textbf{GFN}} \\ \midrule \textbf{Model} & Recall & AP & mAP & top-1 & $\Delta$ mAP & $\Delta$ top-1 & mAP & top-1 \\ \midrule RN50 NAE-FCS & 97.6 & 93.5 & 50.3 & 89.4 & 0.0 & +3.5 & 16.7 & 78.5 \\ RN50 RCNN-SCS & 96.0 & 93.1 & 51.1 & 90.6 & +0.1 & \textbf{+3.8} & 16.3 & 78.0 \\ CNB NAE-FCS & \textbf{97.9} & \textbf{94.9} & 58.7 & 91.4 & \textbf{+1.3} & +3.4 & \textbf{21.0} & \textbf{78.9} \\ \rowcolor{gray!20} CNB RCNN-SCS$\dagger$ & 96.0 & 93.4 & \textbf{58.8} & \textbf{92.3} & +1.1 & +3.5 & 20.4 & 78.5 \\ \hline \end{tabular} } \vspace{-0.2cm} \end{center} \caption{Comparison of model backbone (RN50=ResNet50, CNB=ConvNeXt Base), NAE vs. R-CNN head for the second detector stage, and first (stage) classifier score (FCS) or second (stage) classifier score (SCS) used at inference time. Baseline model is marked with \dag, final model is highlighted gray.} \label{tab:model_mods} \end{table} {\noindent {\bf Image Augmentation.}} Shown in Table \ref{tab:augmentation}, we compare the Window Resize augmentation to the two cropping methods used, and a strategy combining the two. We find that the Window Resize method achieves comparable re-id performance with other methods, but much lower detection performance. This may be attributed to the regularizing effect of random cropping for detector training. 
In addition, we find that Random Safe Cropping alone results in better detection performance than Random Focused Cropping alone, but worse re-id performance. This shows that the regularizing effect of random crops that may be in the wrong scale is more important for detection, while having features at the target scene scale is more important for re-id. Combining the two results in better performance than either alone for both detection and re-id. \begin{table}[t!] \renewcommand{\arraystretch}{1.0} \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{|l|cc|cccc|cc|} \hline & \multicolumn{2}{|c|}{\textbf{Detection}} & \multicolumn{4}{c|}{\textbf{Re-id}} & \multicolumn{2}{c|}{\textbf{GFN}} \\ \midrule \textbf{Method} & Recall & AP & mAP & top-1 & $\Delta$ mAP & $\Delta$ top-1 & mAP & top-1 \\ \midrule WRS & 89.3 & 87.7 & 57.3 & 91.1 & +0.9 & \textbf{+4.7} & 19.6 & 78.3 \\ RSC & 95.9 & 93.1 & 55.8 & 91.0 & +0.7 & +3.7 & 18.5 & 77.6 \\ RFC & 95.0 & 92.7 & 58.4 & 91.2 & \textbf{+1.4} & +3.4 & 20.8 & 77.8 \\ RFC2 & 95.4 & 93.1 & 58.2 & 91.1 & \textbf{+1.4} & +3.8 & \textbf{21.1} & 78.4 \\ RSC+RFC\dag & 96.0 & 93.4 & \textbf{58.8} & \textbf{92.3} & +1.1 & +3.5 & 20.4 & 78.5 \\ \rowcolor{gray!20} RSC+RFC2 & \textbf{96.1} & \textbf{93.8} & 58.7 & \textbf{92.3} & +1.3 & +3.3 & 20.8 & \textbf{78.9} \\ \midrule \midrule \textbf{Crop Size} & Recall & AP & mAP & top-1 & $\Delta$ mAP & $\Delta$ top-1 & mAP & top-1 \\ \midrule 256$\times$256 & 95.3 & 91.9 & 51.4 & 90.1 & 0.1 & 3.3 & 16.7 & 78.0 \\ 384$\times$384 & \textbf{96.3} & \textbf{93.6} & 56.7 & 92.0 & 0.6 & 3.0 & 19.6 & 79.2 \\ 512$\times$512\dag & 96.0 & 93.4 & 58.8 & \textbf{92.3} & 1.1 & \textbf{3.5} & 20.4 & 78.5 \\ \rowcolor{gray!20} 640$\times$640 & 95.3 & 92.9 & \textbf{59.6} & \textbf{92.3} & \textbf{1.4} & 3.4 & \textbf{21.8} & \textbf{79.6} \\ \hline \end{tabular} } \vspace{-0.2cm} \end{center} \caption{Comparison of image augmentation methods (top), and image crop sizes (bottom). Augmentation methods include WRS (Window Resize to 900$\times$1500), RSC (Random Safe Crop to square crop size), RFC (Random Focused Crop to square crop size), RFC2 (variant of RFC), and RSC+RFC(2) which performs either cropping method randomly with equal probability. Baseline model is marked with \dag, final model is highlighted gray.} \label{tab:augmentation} \end{table} {\noindent {\bf Scene Pooling Size and Embedding Dimension.}} We analyze choices for the RoI Align pooling size for the scene embedding head, and choices for the embedding dimension for both the query and scene embedding heads. Comparisons are shown in Table \ref{tab:feat_pool_size}. GFN performance increases nearly-monotonically with scene pooling size, with diminishing returns for GFN score-weighted re-id performance. We also note that larger scene pooling size results in a significant increase in memory consumption, so we use 56$\times$56 by default, which captures most of the performance gain, with some memory savings. It is clear that the scene pooling size should be larger than the query pooling size to ensure that all person information in a scene is adequately captured. The relationship between the distribution of person box sizes relative to scene size and the ratio of the respective pooling sizes could be further investigated. For the embedding dimension, performance also increases nearly-monotonically with size, for both re-id and the GFN-only stats.
Although there are diminishing returns in performance, like with the scene pooling size, we choose the relatively large value of 2,048 because it results in little additional memory consumption or compute time. \begin{table}[t!] \renewcommand{\arraystretch}{1.0} \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{|l|cc|cccc|cc|} \hline & \multicolumn{2}{|c|}{\textbf{Detection}} & \multicolumn{4}{c|}{\textbf{Re-id}} & \multicolumn{2}{c|}{\textbf{GFN}} \\ \midrule \textbf{Pool Size} & Recall & AP & mAP & top-1 & $\Delta$ mAP & $\Delta$ top-1 & mAP & top-1 \\ \midrule 14$\times$14 & \textbf{96.1} & 93.5 & 58.1 & 91.6 & +0.1 & +3.3 & 18.2 & 77.9 \\ 28$\times$28 & 95.9 & 93.4 & 58.5 & 92.3 & +0.7 & \textbf{+3.6} & 19.7 & 79.2 \\ \rowcolor{gray!20} 56$\times$56$\dagger$ & 96.0 & 93.4 & \textbf{58.8} & 92.3 & +1.1 & +3.5 & 20.4 & 78.5 \\ 112$\times$112 & \textbf{96.1} & \textbf{93.6} & \textbf{58.8} & \textbf{92.4} & \textbf{+1.2} & \textbf{+3.6} & \textbf{22.1} & \textbf{79.8} \\ \midrule \midrule \textbf{Emb Dim} & Recall & AP & mAP & top-1 & $\Delta$ mAP & $\Delta$ top-1 & mAP & top-1 \\ \midrule 128 & 96.1 & \textbf{93.6} & 58.0 & 91.6 & 0.7 & 3.8 & 19.6 & 77.9 \\ 256 & 95.9 & 93.4 & 58.2 & 92.0 & 1.0 & \textbf{4.3} & 20.1 & 78.3 \\ 512 & 96.1 & 93.5 & 58.7 & 91.8 & 1.0 & 4.0 & 20.0 & 77.6 \\ 1024 & \textbf{96.2} & \textbf{93.6} & \textbf{59.3} & 92.2 & \textbf{1.1} & 3.5 & 20.0 & 78.0 \\ \rowcolor{gray!20} 2048$\dagger$ & 96.0 & 93.4 & 58.8 & \textbf{92.3} & \textbf{1.1} & 3.5 & \textbf{20.4} & \textbf{78.5} \\ \hline \end{tabular} } \vspace{-0.2cm} \end{center} \caption{Comparison of pooling sizes for the RoI Align block used to compute scene embeddings (top) and comparison of the embedding dimension used for both query and scene embeddings (bottom). Baseline model is marked with \dag, final model is highlighted gray.} \label{tab:feat_pool_size} \end{table} {\noindent {\bf GFN Sampling.}} We analyze choices for the GFN sampling procedure, with comparisons shown in Table \ref{tab:gfn_sampling}. Critically, we find that all sampling options with the LUT are better than not using the LUT at all, as shown by both the large increase in GFN stats, and the contribution of GFN score-weighting to re-id stats. This is expected but important, because it shows that batch-only query-scene comparisons are insufficient (usually just comparing a query to the scene it is present in), and that LUT comparisons are needed despite no gradients flowing through the LUT. Among sampling mechanisms that use the LUT, results for GFN score-weighted re-id stats were relatively similar, and more trials with more samples per trial are likely needed to distinguish a standout method. \begin{table}[t!] 
\renewcommand{\arraystretch}{1.0} \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{|l|cc|cccc|cc|} \hline & \multicolumn{2}{|c|}{\textbf{Detection}} & \multicolumn{4}{c|}{\textbf{Re-id}} & \multicolumn{2}{c|}{\textbf{GFN}} \\ \midrule \textbf{Sampling} & Recall & AP & mAP & top-1 & $\Delta$ mAP & $\Delta$ top-1 & mAP & top-1 \\ \midrule No LUT & \textbf{96.2} & \textbf{93.7} & 57.5 & 90.8 & -0.3 & +2.1 & 13.3 & 72.8 \\ P1N0 & 96.1 & 93.6 & \textbf{59.5} & 91.9 & \textbf{+1.3} & +2.4 & 21.0 & 78.7 \\ \rowcolor{gray!20} P1N1$\dagger$ & 96.0 & 93.4 & 58.8 & \textbf{92.3} & +1.1 & +3.5 & 20.4 & 78.5 \\ P2N0 & \textbf{96.2} & \textbf{93.7} & 59.1 & 91.9 & +1.2 & \textbf{+3.6} & 20.9 & \textbf{79.5} \\ P2N1 & 96.0 & 93.6 & 59.1 & 91.7 & +1.2 & +3.4 & \textbf{21.1} & \textbf{79.5} \\ \hline \end{tabular} } \vspace{-0.2cm} \end{center} \caption{Comparison of different sampling options for optimization of the GFN. P$x$N$y$ indicates that $x$ positive scenes and $y$ hard negative scenes are sampled for each person in the batch. No LUT means we use only batch query and scene embeddings, and no LUT is used. Baseline model is marked with \dag, final model is highlighted gray.} \label{tab:gfn_sampling} \end{table} \section{Comparison with CBGM} \label{supp:cbgm} The GFN module is similar to the Context Bipartite Graph Matching (CBGM) method from \cite{li_sequential_2021} in that both methods use context from the query and gallery scenes to improve prediction ranking, although CBGM is used at inference time only, and does not need to be trained. CBGM is more explicit, in that it directly attempts to match detected person boxes in the query and gallery scenes, at the expense of requiring sensitive hyperparameters: the number of boxes to use from each scene for the matching. The authors found that very different values for these parameters were optimal for the CUHK-SYSU vs. PRW datasets, and did not provide a clear methodology for their selection besides test set performance. In contrast, we use the exact same GFN configuration for both datasets during training and inference, selected separately based on validation data, and found it to robustly improve performance for both.
\section{Introduction} The \footnote{Preprint of an article submitted for consideration in [Journal of Artificial Intelligence and Consciousness] © [2021] [copyright World Scientific Publishing Company] [https://www.worldscientific.com/worldscinet/jaic]} field of artificial intelligence has advanced considerably since its inception in 1956 at the Dartmouth Conference organized by Marvin Minsky, John McCarthy, Claude Shannon, and Nathaniel Rochester. The exponential growth in compute power and data, along with advances in machine learning and, in particular, deep learning, have resulted in remarkable pattern recognition capabilities. For example, real-time detection of pedestrians has recently achieved an average precision of over 55\% \citep{tan2019}. Natural language understanding systems are achieving superhuman performance on some tasks such as yes/no question answering \citep{wang2019a}. However, as LeCun \citep{lecun2020} wrote, ``trying to build intelligent machines by scaling up language models is like building high-altitude planes to go to the moon. You might beat altitude records, but going to the moon will require a completely different approach.'' Beyond the need to improve the accuracy of pattern recognition past current levels, current deep learning approaches suffer from susceptibility to adversarial attacks, a need for copious amounts of labeled training data, and an inability to meaningfully generalize. After over 30 years of intense effort, the AGI community has developed the theoretical underpinnings for AGI and affiliated working software systems \citep{wang2005,wang2006}. While achieving human-level AGI is arguably years to decades away, some of the currently available AGI subsystems are ready to be incorporated into non-profit and for-profit products and services. Some of the most promising AGI systems we have encountered are OpenNARS \citep{wang2006,wang2010}, OpenCog \citep{goertzel2009,goertzel2013}, and AERA \citep{thorisson2012}. We are collaborating with all three teams and have developed video analytics and Smart City applications that leverage both OpenCog and OpenNARS \citep{hammer2019}. After several years of applying these AGI technologies to complex, real-world problems in IoT, networking, and security at scale, we have encountered a few stumbling blocks largely related to real-time performance on large datasets and cumulative learning \citep{thorisson2019}. In order to progress from successful proofs of concept and demos to scalable products, we have developed the Deep Fusion Reasoning Engine (DFRE) metamodel and associated DFRE framework, which is the focus of this paper. We have used this metamodel and framework to bring together a wide array of technologies ranging from machine learning, deep learning, and probabilistic programming to reasoning engines operating under the assumption of insufficient knowledge and resources (AIKR) \citep{wang2005}. As discussed below, we believe the initial results are promising: the data show a dramatic increase in system accuracy, ability to generalize, resource utilization, and real-time performance when compared to state-of-the-art AI systems. The following sections will cover related theories and technologies, the metamodel itself, empirical results, and discussions as well as future work. Appendix A contains some background information that may help readers gain a deeper understanding of this material.
\section{Related Theories and Technologies} \subsection{Korzybski} After living through WWI, Korzybski was concerned about the future trajectory of mankind. He focused his research on the creation of a non-metaphysical definition of man that was both descriptive and predictive from a scientific and engineering perspective. He focused on what he termed the ``time-binding property'' that enabled human societies to advance exponentially from a technological perspective. As his objective was to discover the source of humanity's self-destructive tendencies, he created a model of the human nervous system he called ``the structural differential'', which is the primary inspiration for our metamodel. Korzybski developed a theory that explains the power of the human nervous system, the weaknesses that cause many of humanity's major problems such as world wars, and a path to optimal/correct functioning of the human nervous system. The Institute of General Semantics, which Korzybski founded, continues to train educators around the world. Korzybski focused on helping people better utilize the considerable power of the human nervous system in part because the combination of exponential advancement of technology and a primitive way of using it could lead to large-scale destruction. Given that we now have far more powerful compute capability and weapons, the sane operation of all autonomous learning systems, human or machine, is of even greater importance. \subsection{OpenNARS} OpenNARS (see \citep{hammer2019}) is a Java implementation of a Non-Axiomatic Reasoning System (NARS). NARS is a general-purpose reasoning system that works under the Assumption of Insufficient Knowledge and Resources (AIKR). As described in \cite{wang2009}, this means the system works in real time, is always open to new input, and operates with a constant information processing ability and storage space. An important part is the Non-Axiomatic Logic (see \cite{wang2010} and \cite{wang2006}), which allows the system to deal with uncertainty. To our knowledge, our solution is the first to apply NARS to a real-time visual reasoning task. \subsection{Embeddings} Graph embedding \citep{cui2018,hamilton2017} is a technique used to represent graph nodes, edges, and sub-graphs in a vector space that other machine learning algorithms can use. Graph neural networks use graph embeddings to aggregate information from graph structures in non-Euclidean ways. This allows the DFRE Framework to use the embeddings to learn from different data sources that are in the form of graphs, such as ConceptNet \citep{speer2017}. Despite their performance across different domains, graph neural networks suffer from scalability issues \citep{ying2018,zhou2018} because calculating the Laplacian matrix for all nodes in a large network may not be feasible. The levels of abstraction and the focus of attention mechanisms used by an Agent resolve these scalability issues in a systematic way. \section{The DFRE Metamodel} The DFRE metamodel and framework are based on the idea that knowledge is a hierarchical structure, where the levels in the hierarchy correspond to levels of abstraction. The \textit{DFRE metamodel} refers to the way that knowledge is hierarchically structured, while a \textit{model} refers to knowledge stored in a manner that complies with the DFRE metamodel. It is based on non-Aristotelian, non-elementalistic systems of thinking. The backbone of its hierarchical structure is based on \textit{difference}, a.k.a.
antisymmetric relations, while the offshoots of such relations are based on symmetric relations. As shown in Figure \ref{fig:amoeba}, even a simple amoeba has to distinguish between differences and similarities, because preserving symmetric and antisymmetric relations is vitally important to its survival. \begin{figure}[ht] \includegraphics[scale=0.5,width=\linewidth]{fig5.png} \caption{\label{fig:amoeba}Amoeba distinguishing between distinctions and similarities.} \end{figure} Korzybski \citep{korzybski1994} dedicated the majority of his professional life to analyzing and studying the nature of this hierarchical structure. While it is well beyond the scope of this paper to discuss the details of his analysis, our initial focus was to incorporate these fundamental principles: \begin{itemize} \item K1 – the core framework of knowledge is based on anti-symmetric relations \begin{itemize} \item Spatial understanding: right/left/top/bottom \item Temporal understanding: before/after \item Corporal understanding: pain/satiation \item Emotional understanding: happy/sad \item Social understanding: friend/foe \item Causal understanding: X causes Y \end{itemize} \item K2 – symmetric relations add further structure \begin{itemize} \item A and B are friends \item A is like B \end{itemize} \item K3 – knowledge is layered \begin{itemize} \item Sensor data is on a different layer than high-level symbolic information \item Symbolic information B, which expands or provides context to symbolic information A, is at a higher layer / level of abstraction \item In the symbolic space, there are theoretically an infinite number of layers, i.e., it is always possible to refer to a symbol and expand upon it, thus creating yet another level of abstraction \end{itemize} \item K4 – since knowledge is structure, any structure-destroying operations, such as confusing levels of abstraction or treating an anti-symmetric relation as symmetric, or vice-versa, can, if inadvertently applied, be knowledge-corrupting and/or knowledge-destroying.\footnote[1]{Korzybski argues that what is currently limiting humanity’s advancement is the general lack of understanding of how our own abstracting mechanisms work \citep{korzybski1949}. He considers mankind to currently be in the childhood of humanity and the day, if it should come, that humanity becomes generally aware of the metamodel, is the day humanity enters into the “manhood of humanity” \citep{korzybski1921}} However, it should be noted that creative problem solving and other adaptive behaviors may require mixing levels of abstraction. The key is to ensure that the long-term structure of the metamodel is meticulously maintained and that these operations occur by design and not by accident. \end{itemize} The DFRE Knowledge Graph (DFRE KG) groups information into four levels as shown in Figure \ref{fig:loa}. These are labeled L0, L1, L2, and L* and represent different levels of abstraction, with L0 being closest to the raw data collected from various sensors and external systems, and L2 representing the highest levels of abstraction, typically obtained via mathematical methods, i.e., statistical learning and reasoning. The layer L2 can theoretically have infinitely many sub-layers. L* represents the layer where the high-level goals and motivations, such as self-monitoring, self-adjusting and self-repair, are stored. There is no global absolute level for a concept and all sub-levels in L2 are relative. However, L0, L1, L2 and L* are global concepts themselves.
For example, an Agent, which is basically a computer program that performs various tasks autonomously, can be instantiated to troubleshoot a problem, such as one related to object recognition or computer networking. The framework promotes cognitive synergy and metalearning, which refer to the use of different computational techniques (e.g., probabilistic programming, machine learning/deep learning, and so on) to enrich its knowledge and address combinatorial explosion issues. \begin{figure}[h] \includegraphics[width=\linewidth]{fig1.png} \caption{\label{fig:loa}DFRE Framework with four levels of abstraction.} \end{figure} One advantage of the DFRE Framework is its integration of human domain expertise, ontologies, prior learnings by the current DFRE KG-based system and other similar systems, and additional sources of prior knowledge through the middleware services. It provides a set of services that an Agent can utilize, as shown in Figure \ref{fig:architecture}. \begin{figure}[h] \includegraphics[width=\linewidth]{fig4.png} \caption{\label{fig:architecture}DFRE Framework.} \end{figure} The Sensor Data Services are used to digitize any real-world data, such as video recordings. Similarly, the Data Structuring Services restructure data, e.g., by rectifying an image, if needed. These two services are the basis for the Image Processing Services, which provide a set of supervised and unsupervised algorithms to detect objects, colors, lines, and other visual criteria. The Sensor Data Analytic Services analyze objects and create object boundaries enriched with local properties, such as an object's size and coordinates, which create a 2D symbolic representation of the world. The Spatial Semantic Services then use this representation to construct the initial knowledge graph that captures the spatial relations of the objects as a relational graph. Any L2- or higher-level reasoning is performed on this knowledge graph. Graph-based knowledge representation provides a system with the ability to: \begin{itemize} \item Effectively capture the relations in the sub-symbolic world in a world of symbols, \item Keep a fluid data structure independent of programming language, in which Agents running on different platforms can share and contribute, \item Use algorithms based on graph neural networks to preserve the topological dependency of information \citep{scarselli2009} on nodes. \end{itemize} All processes are fully orchestrated by the Agent, which catalogues knowledge by strictly preserving the structure while evolving new structures and levels of abstraction in its knowledge graph because, for the DFRE KG, knowledge is structure. Multiple Agents can have not only individual knowledge graphs but also a single knowledge graph on which all can cooperate and contribute. In other words, multiple Agents can work toward the same goal by sharing the same knowledge graph synchronously or asynchronously. Different Agents can have partially or fully different knowledge graphs depending on their experience, and share those entire graphs or their fragments through the communication channel provided by the DFRE Framework. Note that although the framework can provide supervised machine learning algorithms if needed, the current IoT use case is based on a retail store, which requires unsupervised methods, as explained in the next section.
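To make the principles K1--K4 concrete, the following is a toy sketch, in Python, of a knowledge graph whose nodes are tagged with levels of abstraction and whose relations are explicitly marked as symmetric or antisymmetric, together with a simple check against structure-destroying operations. It is an illustration of the principles only, not the DFRE implementation.
\begin{verbatim}
# Toy sketch of a DFRE-style layered knowledge graph: nodes carry a
# level-of-abstraction tag (L0/L1/L2/L*), every relation is explicitly
# symmetric or antisymmetric (K1/K2), and a K4-style check rejects
# silently symmetrized antisymmetric relations. Illustrative only.
class KnowledgeGraph:
    def __init__(self):
        self.levels = {}   # node -> "L0" | "L1" | "L2" | "L*"  (K3)
        self.edges = []    # (src, relation, dst, symmetric)

    def add_node(self, node, level):
        self.levels[node] = level

    def relate(self, src, relation, dst, symmetric):
        # K1/K2: the symmetry of each relation is always explicit.
        self.edges.append((src, relation, dst, symmetric))
        if symmetric:
            self.edges.append((dst, relation, src, symmetric))

    def check_k4(self):
        # K4: an antisymmetric relation asserted in both directions
        # would destroy structure, so it is flagged here.
        directed = {(s, r, d) for s, r, d, sym in self.edges if not sym}
        return all((d, r, s) not in directed for (s, r, d) in directed)

kg = KnowledgeGraph()
kg.add_node("rect_17", "L1")
kg.add_node("shelf", "L2")
kg.relate("rect_17", "is_a", "shelf", symmetric=False)
kg.relate("rect_17", "aligned", "rect_18", symmetric=True)
assert kg.check_k4()
\end{verbatim}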
\section{Experimental Results} The DFRE Framework was previously tested in the Smart City domain \citep{hammer2019}, in which the system learns directly from experience with no initial training required (one-shot), based on a fusion of sub-symbolic (object tracker) and symbolic (ontology and reasoning) information. The current use case is based on object-class recognition in a retail store. Shelf space monitoring, inventory management and alerts for potential stock shortages are crucial tasks for retailers who want to maintain effective supply chain management. In order to expedite and automate these processes, and reduce both the need for human labor and the risk of human error, several machine learning and deep learning-based techniques have been utilized \citep{baz2016,franco2017,george2014,tonioni2017}. Despite the high success rates, the main problems for such systems are the requirement for a broad training set, including compiling images of the same product with different lighting and from different angles, and the need for retraining when a new product is introduced or an existing product is visually updated. The current use case does not rely on artificial neural network-based learning; the DFRE Framework takes an artificial general intelligence-based approach to these problems. \begin{figure*}[h] \includegraphics[width=\linewidth]{fig2.png} \caption{\label{fig:retail} Retail use case for DFRE Framework.} \end{figure*} Before a reasoning engine operates on symbolic data within the context of the DFRE Framework, several services must be run, as shown in Figure \ref{fig:retail}. The flow starts with a still image captured by the Sensor Data Services from a video camera that constantly records the retail shelves, as in Figure \ref{fig:retail}.a, which corresponds to L0 in Figure \ref{fig:loa}. Next, the image is rectified by the Data Structuring Services in Figure \ref{fig:retail}.b for better line detection by the Image Processing Services, as displayed in Figure \ref{fig:retail}.c. The Image Processing Services in the retail case are unsupervised algorithms used for color-based pixel clustering and line detection, such as the probabilistic Hough transform \citep{kiryati1991}. The Sensor Data Analytics Services in Figure \ref{fig:retail}.d create the bounding boxes which represent the input in a 2D world of rectangles, as shown in Figure \ref{fig:retail}.e. The sole aim of all these services is to provide the DFRE KG with the best symbolic representation of the sub-symbolic world in rectangles. Finally, the Spatial Semantics Services operate on the rectangles to construct a knowledge graph, which preserves not only the symbolic representation of the world, but also the structures within it in terms of relations, as shown in Figure \ref{fig:retail}.f. This constitutes the L1 level abstraction in the DFRE KG. The L1 knowledge graph representation also recognizes and preserves the attributes of each bounding box, such as the top-left \textit{x} and \textit{y} coordinates, the \textit{center}'s coordinates, and the \textit{height}, \textit{width}, \textit{area} and \textit{circumference}. The relations used for the current use case are \textit{inside}, \textit{aligned}, \textit{contains}, \textit{above}, \textit{below}, \textit{on left of}, \textit{on right of}, \textit{on top of}, \textit{under} and \textit{floating}.
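To illustrate how such relations can be derived from bounding-box geometry, the following is a minimal sketch in Python of a few of these predicates, using the rectangle attributes listed above; the exact predicate definitions and the alignment tolerance used by the Spatial Semantic Services are not specified here, so the versions below are illustrative assumptions.
\begin{verbatim}
# Illustrative sketch: deriving a few spatial relations from rectangle
# attributes (top-left corner, width, height), using image coordinates
# where y grows downward. The tolerance in aligned() is an assumed
# value, not the one used by the Spatial Semantic Services.
from dataclasses import dataclass

@dataclass
class Rect:
    x: float   # top-left x
    y: float   # top-left y
    w: float
    h: float

def inside(a: Rect, b: Rect) -> bool:
    # a is inside b (contains(b, a) is the inverse relation)
    return (a.x >= b.x and a.y >= b.y and
            a.x + a.w <= b.x + b.w and a.y + a.h <= b.y + b.h)

def on_left_of(a: Rect, b: Rect) -> bool:
    return a.x + a.w <= b.x

def above(a: Rect, b: Rect) -> bool:
    # a ends before b starts vertically (below(b, a) is the inverse)
    return a.y + a.h <= b.y

def aligned(a: Rect, b: Rect, tol: float = 5.0) -> bool:
    # horizontally aligned if bottom edges match within a tolerance,
    # as for products resting on the same shelf
    return abs((a.y + a.h) - (b.y + b.h)) <= tol
\end{verbatim}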
Since the relations in the DFRE KG are by default antisymmetrical, the system does not know that \textit{aligned(a,b)} implies \textit{aligned(b,a)}, or that \textit{on left of} and \textit{on right of} are inverse relations, unless such facts are input as expert knowledge or are learned by the system through experience or simulations. The only innate relations in the DFRE metamodel are \textit{distinctions}, which are \textit{anti-symmetric}, and \textit{similarity} relations; the rest is learned by experience. \begin{figure}[h] \includegraphics[width=\linewidth]{fig3.png} \caption{\label{fig:loaretail}LoA for retail use case.} \end{figure} The system's ultimate aim is to dynamically determine \textit{shelves}, \textit{products} and \textit{unknown/others}, as illustrated in Figure \ref{fig:loaretail}, and to monitor the results with timestamps. While L2 identifies only the concepts of \textit{shelf}, \textit{product} and \textit{unknown}, and the possible relations among them, the reasoning engine, NARS \citep{wang2006,wang2010,wang2018}, creates their L1 intensions as an evidence-based truth system in which there is no absolute knowledge. This is useful in the retail use case scenario because the noise in L0 data causes both overlapping regions and conflicting premises at L1. This noise results not only from the projection of the 3D world input data into a 2D framework, but also from the unsupervised algorithms used by the L1 services. The system has only four rules for L2 level reasoning: \begin{itemize} \item \textit{If a rectangle contains another rectangle that is not floating, the outer rectangle can be a shelf while the inner one can be a product.} \item \textit{If a rectangle is aligned with a shelf, it can be a shelf too.} \item \textit{If a rectangle is aligned with a product horizontally, it can be a product too.} \item \textit{If a floating rectangle is stacked on a product, it can be a product too.} \end{itemize} Note that applying levels of abstraction gives the DFRE Framework the power to perform reasoning based on the expert knowledge at the L2 level mostly independent of L1 level knowledge. In other words, the system does not need to be trained for different input; it is unsupervised in that sense. The system has a metalearning objective which continuously attempts to improve its knowledge representation. The current use case had 152 rectangles of various shapes and locations, of which 107 were products, 16 were shelves, and the remaining 29 were other objects. When the knowledge graph in L1 was converted into Narsese, 1,478 lines of premises representing both the relations and attributes were obtained and sent to the reasoner. Such a large amount of input with conflicting evidence caused the reasoning engine to perform poorly. Furthermore, the symmetry and transitivity properties associated with the reasoner resulted in the scrambling of the existing structure in the knowledge graph. Therefore, the DFRE Framework employed a Focus of Attention (FoA) mechanism. The FoA creates overlapping covers of the knowledge graph so that the reasoner can work on a limited context. Later, the framework combines the results from the covers to determine the final intensional category. For example, when the FoA utilizes the reasoner on one region, a rectangle can be recognized as a shelf. However, when the same rectangle is processed in another cover, it may be classified as a product. The result with higher frequency and confidence wins.
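The following sketch illustrates this cover-and-merge logic; the \texttt{reason\_over} callable is a hypothetical stand-in for invoking NARS on one cover, and the (frequency, confidence) tie-breaking shown is a simplification of how NARS truth values are compared.
\begin{verbatim}
# Illustrative sketch of the FoA cover-and-merge step: the reasoner is
# run on each overlapping cover (context), and for each rectangle the
# verdict with the highest (frequency, confidence) wins.
from collections import defaultdict

def classify_with_foa(covers, reason_over):
    # covers: iterable of overlapping knowledge-subgraph contexts.
    # reason_over(cover) -> [(rect_id, category, frequency, confidence)]
    # is a hypothetical stand-in for running NARS on one cover.
    verdicts = defaultdict(list)
    for cover in covers:
        for rect_id, category, freq, conf in reason_over(cover):
            verdicts[rect_id].append((freq, conf, category))
    # For each rectangle, keep the category with the strongest
    # evidence accumulated across all covers it appears in.
    return {rect_id: max(results)[2]
            for rect_id, results in verdicts.items()}
\end{verbatim}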
The FoA mechanism is inspired by the human visual attention system, which manages input flow and recollects evidence as needed in case of a conflicting reasoning result. A FoA mechanism can be based on objects' attributes, such as color or size, with awareness of proximity. For this use case, the FoA determined the contexts by picking the largest non-empty rectangle, and traversing its neighbors based on their sizes in decreasing order. The framework was tested in various settings with different camera angles and product placements, as shown in Figure \ref{fig:visual_experiments}. \begin{figure*}[h] \includegraphics[width=\linewidth]{experimental_6.png} \caption{\label{fig:visual_experiments} Qualitative results of the DFRE Framework in retail use cases with different environments.} \end{figure*} In Figure \ref{fig:visual_experiments}, each row represents samples from different settings: rectified frames, bounding boxes, and instantaneous output of the reasoner. We would like to emphasize that the system does not require any retraining or any change in order to adapt to a new setting. It requires only a camera to be pointed at the scene; then it automatically generalizes. The DFRE Framework was tested 10 times in 4 different settings, with and without the FoA. The precision, recall and F1 scores are shown in Table \ref{tab:result}. \begin{table*}[h] \centering \caption{\label{tab:result}DFRE Framework experimental results.} \resizebox{0.97\linewidth}{!}{% \begin{tabular}{cccccll} \cline{1-7} \multirow{2}{*}{\textbf{Category}} & \multicolumn{3}{c}{\textbf{without FoA }(\%)} & \multicolumn{3}{c}{\textbf{with FoA } (\%)}\\ & precision & recall & f1-score & precision & recall & f1-score \\ \hline \multicolumn{1}{c}{\textit{product}} & 80.70 & 29.32 & 52.88 & 96.36 & 99.07 & 97.70 \\ \multicolumn{1}{c}{\textit{shelf}} & 8.82 & 18.75 & 12.00 & 82.35 & 87.50 & 88.85 \\ \multicolumn{1}{c}{\textit{other}} & 36.61 & 89.66 & 52.00 & 96.00 & 82.76 & 88.89 \\ \hline \multicolumn{1}{c}{\textbf{overall accuracy}} & \multicolumn{3}{c}{\textbf{46.30 } \newline (min/max: 30.13/84.65)} & \multicolumn{3}{c}{\textbf{94.73 } \newline (min/max: 88.10/100.00)} \\ \hline \end{tabular}} \end{table*} The results indicate that the FoA mechanism improves the success of our AGI-based framework significantly by allowing the reasoner to utilize all of its computing resources in a limited but controlled context. The results are accumulated by the framework, and the reasoner makes a final decision. This approach not only allows us to perform reasoning on the intension sets of L1 knowledge, which are retrieved through unsupervised methods, but also resolves the combinatorial explosion problem, whose threshold depends on the limits of available resources. In addition, one can easily extend this retail use case to include prior knowledge of product types and other visual objects, such as tables, chairs, people and shelves, as allowed by the DFRE KG. \subsection{Graph Embedding for Link Predictions} Recall a graph $G(V,E)$, where $V$ is the set of all vertices, or nodes, in $G$, and $E$ is the set of paired nodes, called edges. $|V| \in \mathbb{Z}$ is the order of the graph, or the total number of nodes. As mentioned before, DFRE takes advantage of graph embedding, a transformation that maps a knowledge graph $G$ into a $d$-dimensional vector space $S \in \mathbb{R}^{ |V| \times d}$.
Among the many benefits, such as creating a Euclidean distance measurement for $G$, link predictions can be established between node vectors in $S$. Preliminary experimental results have given a great deal of insight into the relationships between nodes that might otherwise not be apparent from the graph space. The main algorithm used by DFRE to transform our knowledge graph $G$ to a 2-dimensional vector space is Node2Vec \citep{grover2016}. This framework is a representation learning-based approach that learns continuous feature representations for all nodes in a given knowledge graph $G$. The benefit of this algorithm, and the motivation for its use in DFRE, is that it constructs the graph embedding space $S$ in which link prediction, and other methods of measurement, can be used while preserving relevant network properties from the original knowledge graph. The functionality behind Node2Vec is similar to that of most other embedding processes, using the Skip-Gram model and a sampling strategy. Four arguments are input into the framework: the number of walks, the length of the walks, $p$, and $q$, where $p$ is referred to as the return hyper-parameter, and $q$ is the I/O hyper-parameter. Once a 2-dimensional vector representation has been assigned to every node $n \in V$, our embedding vector space $S \in \mathbb{R}^{ |V| \times 2}$ can provide additional metrics used for machine learning and prediction measures. One such measure is link prediction, used to understand the relationship between nodes in a graph that might not be obvious from the graph space. Consider the nodes $n_1 , n_2 \in G(V,E)$ such that $n_1$ and $n_2$ are not similar ideas in the graph (e.g., the probability that $(n_1 , n_2) \in E(G)$ is low). Once the nodes are represented in vector form $\hat{n_1} , \hat{n_2} \in \mathbb{R}^{2} \subset S$, we establish a linear relationship between the two such that a line $ y = ax + b $ is satisfied, where $a = \frac{\hat{n}_{22} - \hat{n}_{21}}{\hat{n}_{12} - \hat{n}_{11}}$ and $b = \hat{n}_{21} - a\hat{n}_{11}$, writing $\hat{n}_{1j}$ and $\hat{n}_{2j}$ for the first and second coordinates of $\hat{n_j}$. Let $\epsilon > 0$; then, for every $\hat{n_k}$ that lies within the range of $y \pm \epsilon$, we consider these node vectors to be associated hidden links between the two ideas $\hat{n_1}$ and $\hat{n_2}$. Additionally, the line $y$ is divided into four evenly distributed quadrants $\{y_1, \dots, y_4\}$ and grown by small perturbations, where $y^{\prime} = y \pm \epsilon^{\prime}$ with $\epsilon^{\prime} = \epsilon + \gamma$ and $\gamma \in (0,1]$, until there exists at least one $\hat{n_k}$ in every quadrant. We call this set of node vectors $S_{n}$. The set of node vectors $S_n$ gathered within range $y^{\prime}$ provides DFRE with relationships that might not be immediately obvious from the graph space alone. The distribution of the vectors along the quadrants is also revealing. For example, consider the two disjoint subsets $(\hat{n_k})_i$ and $(\hat{n_l})_j$ of $S_n$. Without loss of generality, if $(\hat{n_k})_i \in y^{\prime}_1$ and $(\hat{n_l})_j \in y^{\prime}_4$, where $i \ll j$, we see that the relationship skews towards the set of node vectors that lie within the range of $y_4^{\prime}$. Additionally, within the embedding space, consider a finite set of clusters $\{C_1 , C_2 , \dots \}$, each corresponding to its own central idea.
For any arbitrary cluster $C_i$, if a new node vector ${\hat{n^{\prime}}}$ is introduced in $S$ such that ${\hat{n^{\prime}}} \in C_i$, then we can easily leverage this proximity in the sub-symbolic space to identify additional related node vectors. We find the main benefit of graph embedding to be that we now have an unsupervised method for correlating the graph embedding space with additional embedding spaces that are generated using unsupervised machine learning techniques. \section{Philosophical Implications} The nativism-versus-empiricism debate, over whether knowledge is innate or learned through experience, dates back to the ancient Greek philosophers, including Plato and Epicurus. Today, Descartes is widely regarded as a pioneering philosopher of the mind, having furthered and reformulated the debate in the 17\textsuperscript{th} century with new arguments. Perception, memory, and reasoning are three fundamental cognitive faculties that inform this debate by explicating the building blocks of natural intelligence. We perceive the sub-symbolic world, abstract it in memory, and reason over this symbolic representation of the world. All three place concept learning and categorization at the center of the human mind. The process of concept learning and categorization continues to be an active research topic related to the human mind since it is essential to natural intelligence \citep{lakoff1984} and cognitively inspired robotics research \citep{chella2006,lieto2017}. It is widely accepted that this process is based more on interactional properties and relationships among Agents, as well as between an Agent and its environment, than on objective features such as color, shape and size \citep{johnson1987,lakoff1984}. This makes the distinction between anti-symmetric and symmetric relations crucial in the DFRE Framework, which assumes that the levels of abstraction are part of innate knowledge. In other words, an Agent has L0, L1 and pre-existing L2 by default. This constitutes a common a priori metamodel shared by all DFRE Agents. Each Agent instantiated from the framework has the abstraction skill based on interactional features and relationships. If the concepts in real life exist in interactional systems, natural intelligence needs to capture these systems of interactions with its own tools, such as abstraction. These tools should also be based on interactional features, strictly preserving the distinction between symmetry and anti-symmetry. The mind is a system as well. Modern cognitive psychologists agree that concepts and their relations in memory function as the fundamental data structures for higher-level system operations, such as problem solving, planning, reasoning and language. Concepts are abstractions that have evolved from a conceptual primitive. An ideal candidate for a conceptual primitive would be something that is a step away from a sensorimotor experience \citep{gardenfors2000}, but is still an abstraction of experience \citep{cohen1997}. For example, a dog fails the mirror test but exhibits intelligence when olfactory skills are needed to complete a task \citep{horowitz2017}. A baby’s mouthing behavior is not only a requisite for developing oral skills but also a means of discovering the surrounding environment through one of its expert sensorimotor skills related to its survival. The baby is probably abstracting many objects into edible versus inedible higher categories given its insufficient knowledge and resources.
What is astonishing about a natural intelligence system is that it does not need a plethora of training input and experiments to learn the abstraction. It quickly and automatically fits new information into an existing abstraction or evolves it into a new one, if needed. This is nature’s way of managing combinatorial explosion. Objects and their interconnected relationships within the world can be chaotic. Natural intelligence’s solution to this problem becomes its strength: context. The concept of ‘sand’ has different abstractions depending on whether it is on a beach, on a camera, or on leaves. An Agent in these three different contexts must abstract the sand in relation to its interaction with the world in its short-term memory. This cumulative set of experiences can later become part of long-term memory, more specifically, episodic memory. The DFRE Framework uses a Focus of Attention (FoA) mechanism that provides the context while addressing the combinatorial explosion problem. The DFRE metamodel's new way of representing practically all knowledge as temporally evolving (i.e., time series) can be viewed as the metamodel's conceptual space. For example, the retail use case given in Section 2 starts with a 3D world of pixels that is abstracted as lines and rectangles in 2D. The framework produces spatial semantics using the rectangles in the 2D world. Based on this situation, a few hundred rectangles produce thousands of semantic relations, which present a combinatorial explosion for most AGI reasoning engines; for instance, $300$ rectangles already admit $\binom{300}{2} = 44{,}850$ candidate pairwise relations before any relation types are considered. For each scene, the DFRE KG creates contexts, such as candidate shelves, runs reasoners for each context, and merges knowledge in an incremental way. This not only addresses the combinatorial explosion issue, but also increases the success rate of reasoning, provided that the levels of abstraction are computed properly \citep{gorban2018}. Abstracting concepts in relation to their contexts also allows a natural intelligence to perform mental experiments, which is a crucial part of planning and problem solving. The DFRE Framework can integrate with various simulators, re-run a previous example together with its context, and alter what is known for the purposes of experimentation to gain new knowledge, that is, new relationships and interactions of concepts. Having granular structures provides structured thinking, structured problem solving, and structured information processing \citep{yao2012}. The DFRE Framework has granular structures but emphasizes the preservation of structures in knowledge. When a genuine problem arises that cannot be solved with the current knowledge, solving it requires scrambling the structures and running simulations on the new structures in order to provide an Agent with creativity. Note that this knowledge scrambling is performed in a separate sandbox. The DFRE Framework ensures that the primary DFRE KG is not corrupted by these creative, synthetic, ``knowledge-scrambling" activities. \section{Discussion on Metamodel and Consciousness} For the purposes of this discussion, we will define consciousness as autonomous self-aware adaptation to the environment. This means that an abstraction of the self as well as the environment of the self is learned autonomously. Human consciousness builds on the prior capabilities of chemistry-binders (plants) and space-binders (animals) with the unique ability of infinite levels of abstraction \citep{korzybski1994}. For any concept, one can envision creating a higher-level meta-concept.
We are able to formulate symbolic representations that can be externalized and shared. The human model of the self evolves not only via direct interaction with the environment and cogitation, but also by watching other humans and modeling them. Human consciousness as an implementation of the metamodel appears to be dynamic in nature. The concept of self can grow to encompass family, friends, social/work groups, and beyond. Understanding our nature as time-binders that form a collective consciousness of ever-increasing power (cognitive and physical) over time, civilizations, and generations, appears to lead to higher cognitive functioning of the individual. Human societies consisting of billions of people networked together in real-time, with petabytes of shared storage and petaflops of compute, may see the evolution of exo-cortical consciousness. In fact, many argue that this exo-cortical consciousness already exists with the growing number of autonomous self-healing systems deployed and connected throughout the world. Since the metamodel hypothesizes the existence of an exo-cortical consciousness, it consequently points to the possibility of implementing artificial consciousness, e.g., in robots. Artificial consciousness, which is also known as machine consciousness, is a field designed to mimic the aspects of human cognition that are related to human consciousness \citep{aleksander2008, chella2009}. In the 1950s, consciousness was seen as a vague notion, inseparable from intelligence \citep{searle1992,chalmers1996}. Fortunately, improvements in technology and in the computational and cognitive sciences have created new interest in the field. \cite{chella2011} observes that the most important gaps between artificial and biological consciousness studies are engineering autonomy, semantic capabilities, intentionality, self-motivation, resilience and information integration. The Agents based on the metamodel have autonomy. They also have semantic capabilities and intentions to seek solutions or communicate with other Agents for knowledge sharing, which are set by self-motivation. \cite{chella2011} emphasizes that consciousness is a real physical phenomenon, that it can be artificially replicated, and that it is either a computational phenomenon or something more. We bring forth the metamodel as an enabler for the achievement of artificial consciousness. The abstraction mechanism constantly and automatically creates abstractions of the sensor data and of the system's own experience. The metamodel is based on the generation of new knowledge using self-perception and experience, and shares knowledge among Agents of a similar nature to support collective consciousness and resilience. The creation of self through experience gives the metamodel the ability of enhanced generalization and autonomy. Similarly, the focus-of-attention mechanism used to segment complex problems semantically is also related to consciousness, because attention and consciousness are interrelated \citep{taylor2007,taylor2009}. Implementation of attention is important because control theory is related to consciousness and plays a leading role in the intentional mechanism of an Agent. \section{Conclusion} Several mathematical models and formal semantics \citep{duntsch2002,belohlavek2004,wang2008,wille1982,ma2007} have been proposed to specify the meanings of real world objects as concept structures and lattices. However, they are computationally expensive \citep{jinhai2015}. One way to overcome this issue is with granular computing \citep{yao2012}.
The extension of a concept can be considered a granule, and the intension of the concept is the description of the granule. Assuming that concepts share granular common parts with varying derivational and compositional stages, categorization, abstraction and approximation occur at multiple levels of granularity, which plays an important role in human perception \citep{hobbs1985,yao2001,yao2009}. The DFRE Framework has granular structures but emphasizes the preservation of structures in knowledge. Being in the extension of a concept does not automatically grant the granule the relations and interactions of its intensional concept, even up to a certain degree or probability. Each level must preserve its inter-concept relationships and its symmetry or anti-symmetry in a hierarchically structured way. We have outlined the fundamental principles of the DFRE Framework. DFRE takes a neurosymbolic approach leveraging state-of-the-art subsymbolic algorithms (e.g. ML/DL/Matrix Profile) and state-of-the-art symbolic processing (e.g. reasoning, probabilistic programming, and graph analysis) in a synergistic way. The DFRE metamodel can be thought of as a knowledge graph with some additional structure, which includes both a formalized means of handling anti-symmetric and symmetric relations, as well as a model of abstraction. This additional structure enables DFRE-based systems to maintain the structure of knowledge and seamlessly support cumulative and distributed learning. Although this paper provides highlights of one experiment in the visual domain employing an unsupervised approach, we have also run similar experiments on time series and natural language data with similarly promising results. \section{Future work} Nowadays there is a rapid transition in the AI research field from single-modality tasks, such as image classification and machine translation, to more challenging tasks that involve multiple modalities of data and subtle reasoning, such as visual question answering (VQA) \citep{agrawal2015, anderson2017,zhu2020} and visual dialog \citep{vishvak2019}. A meaningful and informative conversation, whether human-to-computer or computer-to-computer, is an appropriate task to demonstrate such a reasoning process given the complex information exchange mechanism during the dialog. However, most existing research focuses on the dialog itself and involves only a single Agent. We plan to design a more reliable DFRE system with implicit information sources. To this end, we propose a novel, natural and challenging task with implicit information sources: describe an unseen video mainly based on the dialog between two cooperative Agents. The entire process can be described in three phases: In the preparation phase, the two Agents are provided with different information. Agent A1 is able to see the complete information from different modalities (i.e., video, audio, text), while Agent A2 is only given limited information. In the second phase, A2 has several opportunities to ask A1 relevant questions about the video, such as the people involved, the events that happened, \textit{etc.} A2 is encouraged to ask questions that help to accomplish the ultimate video description objective, and A1 is expected to give informative and constructive answers that not only provide the needed information but also motivate A2 to ask additional useful questions in the next conversation round.
After several rounds of question-answer interactions, A2 is asked to describe the unseen video based on the limited information and the dialog history with A1. In this task setup, our DFRE system accomplishes a multi-modal task even without direct access to the original information, but learns to filter and extract useful information from a less sensitive information source, \textit{i.e.}, the dialog. It is highly difficult for AI systems to identify people based on natural language descriptions alone. Therefore, such task settings and reasoning ability based on implicit information sources have great potential to be applied in a wide practical context, such as smart hospital systems, improving on current systems. The key aspect to consider in this future work is the effective knowledge transfer from A1 to A2. A1 plays the role of humans, with full access to all the information, while A2 has only an ambiguous understanding of the surrounding environment from two static video frames after the first phase. In order to describe the video with details that are not included in the initial input, A2 needs to extract useful information from the dialog interactions with A1. Therefore, we will propose a QA-Cooperative network that involves two agents with the ability to process multiple modalities of data. We further propose a cooperative learning method that enables us to jointly train the network with a dynamic dialog history update mechanism. The knowledge gap and the transfer process will both be demonstrated experimentally. The novelties of the proposed future work can be summarized as follows: (i) We propose a novel and challenging video description task via two multi-modal dialog agents, whose ultimate goal is for one Agent to describe an unseen video based on the interactive dialog history. This task establishes a more reliable setting by providing implicit information sources to the metamodel. (ii) We propose a QA-Cooperative network and a goal-driven cooperative learning method with a dynamic dialog history update mechanism, which helps to effectively transfer knowledge between the two agents. (iii) With the proposed network and cooperative learning method, our A2 Agent with limited information can be expected to achieve promising performance, comparable to the strong baseline situation where the full ground truth dialog is provided. The overall interaction protocol is sketched below.
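As a minimal sketch of the three-phase protocol just described (illustrative only: the stub agents, message strings, and fixed round count are placeholders of our own, not components of the proposed QA-Cooperative network):
\begin{verbatim}
from dataclasses import dataclass, field

@dataclass
class DialogTurn:
    question: str
    answer: str

@dataclass
class DialogHistory:
    turns: list = field(default_factory=list)

    def update(self, turn):
        # dynamic dialog history update after every QA round
        self.turns.append(turn)

class StubAgent:
    # Placeholder standing in for a learned multi-modal agent.
    def observe(self, data):
        self.context = data

    def ask(self, history):
        return f"Q{len(history.turns) + 1}: what happens next?"

    def answer(self, question, history):
        return "A person picks an item from the shelf."

    def describe(self, history):
        return " ".join(turn.answer for turn in history.turns)

def cooperative_description(a1, a2, video, frames, rounds=3):
    # Phase 1: A1 sees the full video; A2 sees two static frames.
    a1.observe(video)
    a2.observe(frames)
    history = DialogHistory()
    # Phase 2: A2 queries A1 for several rounds.
    for _ in range(rounds):
        q = a2.ask(history)
        ans = a1.answer(q, history)
        history.update(DialogTurn(q, ans))
    # Phase 3: A2 describes the unseen video from frames + dialog.
    return a2.describe(history)

print(cooperative_description(StubAgent(), StubAgent(),
                              video="full-video", frames=["f0", "f1"]))
\end{verbatim}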
\section{Introduction} \label{se:intro} This work concerns the action of Hecke algebras and Galois deformation rings on homology groups of certain arithmetic manifolds. Let $X_0(N)$ be a modular curve, $\mathcal{O}$ the ring of integers of a finite extension of $\mathbb{Q}_\ell$, and $\fm$ a non-Eisenstein maximal ideal of the Hecke algebra $\mathbb{T}$ acting on $\operatorname{H}_1(X_0(N),\mathcal{O})_{\fm}$. Using the modularity lifting results of Wiles~\cite{Wiles:1995}, and Taylor and Wiles~\cite{Taylor/Wiles:1995}, Diamond~\cite{Diamond:1997} proved that a certain deformation ring $R$ acts freely on $\operatorname{H}_1(X_0(N),\mathcal{O})_{\fm}$, so the map $R \twoheadrightarrow \mathbb{T}$, through which the $R$-action on the Hecke module factors, is an isomorphism. In particular, the $\mathbb{T}$-module $\operatorname{H}_1(X_0(N),\mathcal{O})_{\fm}$ is free. Freeness of this Hecke module was proved earlier by Mazur \cite{Mazur} using geometric arguments, and even without the assumption that $\fm$ is non-Eisenstein. Diamond uses patching to deduce freeness of the Hecke module over $R$ when $N$ is the minimal level for the residual representation $\overline{\rho}_{\fm}$ attached to $\fm$. For non-minimal levels $N$, he uses a numerical criterion, \cite[Theorem 2.4]{Diamond:1997}, for a finitely generated $R$-module to be free and for $R$ to be a complete intersection. The fact that the criterion is applicable is verified by level raising arguments \`a la Ribet, which had already been used in similar contexts in \cite{Wiles:1995}. The use of classical multiplicity one results for modular forms, which prove such freeness results after tensoring with the fraction field of $\mathcal{O}$, was another important ingredient in being able to apply the numerical criterion. In \cite{Iyengar/Khare/Manning:2022a} we developed a higher codimensional version of the Wiles--Lenstra--Diamond numerical criterion for finitely generated modules $M$ of sufficient depth over certain families of rings $R$ to have a free direct summand of positive rank, and for $R$ to be complete intersection. We applied this to the ring $R_\infty$ and module $M_\infty$ produced by the patching method to deduce that $M_\infty$ has a free direct summand of some positive rank. From this it follows that the lowest degree non-vanishing homology group $\operatorname{H}_d(Y,\mathcal{O})_{\fm}$ of a certain arithmetic manifold $Y$ associated to $\mathrm{PGL}_2$ over an arbitrary number field $F$ also has a free direct summand of positive rank as a module over the Hecke algebra $\mathbb{T}$. On the other hand our methods did not yield freeness of the Hecke module, in contrast to the result of \cite{Diamond:1997}. The reason for this is that patching gives no direct control over the generic rank of the module $M_\infty$. Typically the only way of determining this generic rank is to use classical multiplicity one theorems for automorphic forms to determine the rank of $M_\infty$ at all $\overline{\mathbb{Q}}_\ell$-points of $\operatorname{Spec} \mathbb{T}\subseteq \operatorname{Spec} R_\infty$. This approach only works if $\operatorname{Spec} \mathbb{T}$ has enough $\overline{\mathbb{Q}}_\ell$-points --- to directly deduce freeness from the main result of \cite{Iyengar/Khare/Manning:2022a} one would need to have a $\overline{\mathbb{Q}}_\ell$-point of $\operatorname{Spec} \mathbb{T}$ on each irreducible component of $\operatorname{Spec} R_\infty$. This issue also arises in \cite[Theorem 1.3]{Calegari/Geraghty:2018}.
In the modular curve case considered by Diamond, this is not an issue as the Hecke algebra is flat over $\mathcal{O}$ and so has enough $\overline{\mathbb{Q}}_\ell$-points. For arithmetic manifolds associated to arbitrary number fields, the Hecke algebra $\mathbb{T}$ is, in general, not flat over $\mathcal{O}$ and may not have enough $\overline{\mathbb{Q}}_\ell$-points to determine the generic rank of $M_\infty$. In fact, $\mathbb{T}$ may well have no $\overline{\mathbb{Q}}_\ell$-points at all! Thus the requirement that $\operatorname{Spec} \mathbb{T}$ have a $\overline{\mathbb{Q}}_\ell$-point on each irreducible component of $\operatorname{Spec} R_\infty$ is extremely restrictive. In this paper we develop new commutative algebra arguments, building on those of \cite{Iyengar/Khare/Manning:2022a}, to show freeness of patched modules under far less restrictive conditions. The main result is Theorem \ref{th:only} which allows us to use information about the rank of $M_\infty$ over one component of $\operatorname{Spec} R_\infty$ to determine its rank over the other components. When the number field $F$ is totally complex (considered in Theorem \ref{th:mult 1}), this allows us to deduce freeness assuming only the existence of a $\overline{\mathbb{Q}}_\ell$-point of $\operatorname{Spec} \mathbb{T}$ lying on one particular component of $\operatorname{Spec} R_\infty$ (the ``highest level'' component). This is equivalent to assuming the existence of a particular geometric characteristic 0 lift of $\overline{\rho}_{\fm}$ (ramified at a specified set of primes $\Sigma$, as well as some other auxiliary primes). One might expect that such a lift always exists (although this is far from known in general), and so our work represents a plausible approach for proving freeness of Hecke modules associated to totally complex fields. For an arbitrary number field $F$ (considered in Theorem \ref{th:mult 2^r1}) the situation is somewhat more complicated. In order to deduce freeness from our method, one needs to prove an appropriate lower bound on the generic rank of $M_\infty$ on the ``minimal level'' component in addition to having a characteristic $0$ lift of $\overline{\rho}_{\fm}$ corresponding to a point on the ``highest level'' component. In the case when $F$ is totally complex, the required lower bound is $1$ and so one gets this for free. However in the case of a general $F$, we are only able to prove this in the case when $\operatorname{Spec} \mathbb{T}$ has an additional $\overline{\mathbb{Q}}_\ell$-point on the ``minimal level'' component of $\operatorname{Spec} R_\infty$. \section{Congruence modules and Wiles defect}\label{se:congruence} In this section we recall the construction and basic properties of congruence modules and Wiles defect, introduced in \cite[Section 2]{Iyengar/Khare/Manning:2022a}, for modules over local rings. \begin{chunk} Let $\mathcal{O}$ be a discrete valuation ring, with valuation $\operatorname{ord}(-)$ and uniformizer $\varpi$. Throughout we fix a complete local $\mathcal{O}$-algebra $A$ and a finitely generated $A$-module $M$. Given a map $\lambda\colon A\to \mathcal{O}$ of $\mathcal{O}$-algebras, set $\mathfrak{p}_\lambda \colonequals \operatorname{Ker}(\lambda)$ and \[ \con{\lambda}(A)\colonequals \operatorname{tors}(\mathfrak{p}_\lambda/\mathfrak{p}_\lambda^2)\,, \] namely, the torsion part of the cotangent module $\mathfrak{p}_\lambda/\mathfrak{p}_\lambda^2$ of $\lambda$. This is a torsion $\mathcal{O}$-module. 
For any finitely generated $A$-module $M$, set \[ \operatorname{F}^i_{\lambda}(M)\colonequals \tfree{\operatorname{Ext}^i_A(\mathcal{O},M)}\,, \] namely, the torsion-free quotient of the $\mathcal{O}$-module $\operatorname{Ext}^i_A(\mathcal{O},M)$. Here $\mathcal{O}$ is viewed as an $A$-module via $\lambda$. The \emph{congruence module} of $M$ at $\lambda$ is the $\mathcal{O}$-module \[ \cmod{\lambda}(M)\colonequals \operatorname{Coker}\left(\operatorname{F}^c_{\lambda}(M) \xrightarrow{\ \operatorname{F}^c(\varepsilon)\ }\operatorname{F}^c_{\lambda}(M/\mathfrak{p}_\lambda M)\right) \] where $c\colonequals \operatorname{height}{\mathfrak{p}_{\lambda}}$ and $\varepsilon \colon M\to M/\mathfrak{p}_\lambda M$ is the natural surjection. \end{chunk} \begin{chunk} \label{ch:acat} Let $A$ be an $\mathcal{O}$-algebra and $\lambda\colon A\to\mathcal{O}$ a map of $\mathcal{O}$-algebras. The following conditions are equivalent: \begin{enumerate}[\quad\rm(1)] \item The local ring $A_{\mathfrak{p}_\lambda}$ is regular. \item The rank of the $\mathcal{O}$-module $\mathfrak{p}_\lambda/\mathfrak{p}^2_\lambda$ is $\operatorname{height} \mathfrak{p}_\lambda$. \item The $\mathcal{O}$-module $\cmod{\lambda}(A)$ is torsion. \item The $\mathcal{O}$-module $\cmod{\lambda}(M)$ is torsion for each finitely generated $A$-module $M$. \end{enumerate} Moreover, when these conditions hold the $\mathcal{O}$-module $\cmod{\lambda}(A)$ is cyclic. Condition (2) is that the embedding dimension of the ring $A_{\mathfrak{p}_\lambda}$ is equal to its Krull dimension, so the equivalence of (1) and (2) is one definition of regularity; see \cite[Definition~2.2.1]{Bruns/Herzog:1998}. For the other equivalences see \cite[Theorem 2.5 and Lemma~2.6]{Iyengar/Khare/Manning:2022a}. The pairs $(A,\lambda)$ satisfying the equivalent conditions above are the objects of a category $\operatorname{C}_{\mathcal{O}}$. A morphism $\varphi\colon (A,\lambda)\to (A',\lambda')$ in $\operatorname{C}_{\mathcal{O}}$ is a map of $\mathcal{O}$-algebras $\varphi\colon A\to A'$ over $\mathcal{O}$; that is to say, with $\lambda'\circ \varphi = \lambda$. For a natural number $c$, the subcategory $\operatorname{C}_{\mathcal{O}}(c)$ of $\operatorname{C}_{\mathcal{O}}$ consists of pairs $(A,\lambda)$ such that $\operatorname{height} \mathfrak{p}_\lambda=c$. \end{chunk} \begin{chunk} \label{ch:cmod-properties} Fix a pair $(A,\lambda)$ in $\operatorname{C}_{\mathcal{O}}$ and a finitely generated $A$-module $M$. Since the local ring $A_{\mathfrak{p}_\lambda}$ is regular, and in particular a domain, the $A_{\mathfrak{p}_\lambda}$-module $M_{\mathfrak{p}_\lambda}$ has a rank. The \emph{Wiles defect} of $M$ at $\lambda$ is the integer \[ \delta_\lambda(M)\colonequals \operatorname{rank}_{A_{\mathfrak{p}_\lambda}}(M_{\mathfrak{p}_\lambda}) \cdot \operatorname{length}_{\mathcal{O}}\con{\lambda}(A) - \operatorname{length}_{\mathcal{O}}\cmod{\lambda}(M)\,. \] In particular the Wiles defect of $A$ at $\lambda$ is \[ \operatorname{length}_{\mathcal{O}}\con{\lambda}(A) - \operatorname{length}_{\mathcal{O}}\cmod{\lambda}(A)\,. \] We refer to \cite[Introduction]{Iyengar/Khare/Manning:2022a} for a discussion on precedents to this definition. Here are some salient properties of the Wiles defect. \begin{enumerate}[\quad\rm(1)] \item \label{it:defect-ci} If $A$ is complete intersection, then $\delta_\lambda(A)=0$; the converse holds when in addition $\operatorname{depth} A\ge c+1$. \item \label{it:defect-positive} If $\operatorname{depth}_AM\ge c+1$, then $\delta_{\lambda}(M) \ge 0$.
\item \label{it:defect-invariance} If $\varphi\colon A'\to A$ is a surjective map in $\operatorname{C}_{\mathcal{O}}(c)$ and $\operatorname{depth}_AM\ge c$, then \[ \delta_{\lambda\varphi}(M) = \delta_{\lambda}(M)\,. \] We refer to this property as the invariance of domain for congruence modules. \item \label{it:defect-freeness} Assume $A$ is Gorenstein and $M$ is maximal Cohen--Macaulay. If $\delta_{\lambda}(M) = \mu \cdot \delta_{\lambda}(A)$, for $\mu\colonequals \operatorname{rank}_{A_{\mathfrak{p}_\lambda}}(M_{\mathfrak{p}_\lambda})$, then there is an isomorphism of $A$-modules \[ M\cong A^\mu \oplus W \quad\text{and $W_{\mathfrak{p}_\lambda}=0$.} \] \end{enumerate} These results are established in \cite{Iyengar/Khare/Manning:2022a}: For (1) and (2), see Theorem A; for (3), see Theorem E, and for (4) see Theorem B. \end{chunk} \section{The setup} \label{se:setup} Fix a finite set $T$. In the number theoretic applications, $T$ will be a finite set of primes in the ring of integers $\mathcal{O}_F$ of a number field $F$. However, for ease of notation, in this section we take $T\colonequals \{1,\dots,n\}$ for some integer $n\ge 1$. \begin{chunk} \label{ch:rings} Fix an integer $g\ge 1$ and the $\mathcal{O}$-algebra \[ A\colonequals \frac{\mathcal{O}\pos{x_1,\dots,x_n,y_1,\dots,y_n,t_1,\dots,t_g}}{(x_1y_1,\dots, x_ny_n)}\,. \] For each subset $\Sigma\subseteq T$, set \[ A_{\Sigma}\colonequals A/(x_i\mid i\notin\Sigma)\,. \] Evidently each $A_{\Sigma}$ is a reduced complete intersection, of dimension $n+g$. Moreover the ring $A_{\varnothing}\cong A/(\boldsymbol x)$ is regular. Given subsets $\Sigma'\subseteq \Sigma$ there is an induced surjection \[ A_{\Sigma}\twoheadrightarrow A_{\Sigma'}\,, \] of $\mathcal{O}$-algebras. In particular, the family $\{A_{\Sigma}\}$, as $\Sigma$ varies over subsets of $T$, has an initial object, $A_{T}=A$, and final object $A_{\varnothing}$. For each $\Sigma\subseteq T$, consider the ideal \[ I_{\Sigma}\colonequals A(x_i\mid i\notin\Sigma) + A(y_j\mid j\in \Sigma)\,. \] Then the Zariski closed subsets $V(I_{\Sigma})$, as $\Sigma$ varies over the subsets of $T$, are the irreducible components of $\operatorname{Spec} A$. Moreover \[ \operatorname{Spec} A_{\Sigma} = \bigcup_{\Sigma'\subseteq\Sigma} V(I_{\Sigma'}) \] viewed as subsets of $\operatorname{Spec} A$. Thus, the irreducible components of $\operatorname{Spec} A_{\Sigma}$ are a subset of the irreducible components of $\operatorname{Spec} A$. \end{chunk}
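To fix ideas, here is the smallest instance of this setup, the case $n=1$, written out only for illustration: \[ A = \frac{\mathcal{O}\pos{x_1,y_1,t_1,\dots,t_g}}{(x_1y_1)}\,, \qquad A_{\{1\}}=A\,, \qquad A_{\varnothing}=\mathcal{O}\pos{y_1,t_1,\dots,t_g}\,, \] so that $\operatorname{Spec} A = V(I_{\varnothing})\cup V(I_{\{1\}}) = V(x_1)\cup V(y_1)$ has exactly two irreducible components, and $\operatorname{Spec} A_{\varnothing} = V(x_1)$ consists of a single one of them.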
\begin{chunk} \label{ch:o-points} We focus on the $\mathcal{O}$-points in $\operatorname{Spec} A$, namely, the subset \[ \mathcal{V} \colonequals \{(a_1,\dots,a_n,b_1,\dots,b_n,c_1,\dots,c_g)\in \varpi(\mathcal{O})^{2n+g} \mid a_ib_i = 0\quad \text{for $1\le i\le n$}\}\,. \] Since $\mathcal{O}$ is a domain, $a_ib_i=0$ if and only if $a_i=0$ or $b_i=0$. Each point $\boldsymbol v \colonequals (\boldsymbol a, \boldsymbol b, \boldsymbol c)$ in $\mathcal{V}$ corresponds to the prime ideal \[ \mathfrak{p}_{\boldsymbol v}\colonequals (x_i-a_i,y_j-b_j,t_k-c_k\mid 1\le i,j\le n \text{ and } 1\le k\le g)\,. \] This is the kernel of the map of local $\mathcal{O}$-algebras \[ \lambda_{\boldsymbol v}\colon A\longrightarrow \mathcal{O} \quad\text{where $\lambda_{\boldsymbol v}(x_i)=a_i, \lambda_{\boldsymbol v}(y_j)=b_j, \text{ and } \lambda_{\boldsymbol v}(t_k)=c_k$.} \] For $\Sigma\subseteq T$ consider the following subsets of $\mathcal{V}$: \begin{align*} \mathcal{V}_\Sigma & \colonequals \{(\boldsymbol a,\boldsymbol b,\boldsymbol c)\in \mathcal{V}\mid \text{$a_i = 0$ for $i\not\in\Sigma$}\} \\ \mathcal{Z}_\Sigma & \colonequals \{(\boldsymbol a,\boldsymbol b,\boldsymbol c)\in \mathcal{V}\mid \text{$a_i = 0$ for $i\not\in\Sigma$ and $b_j=0$ for $j\in\Sigma$}\}\,. \end{align*} Thus $\mathcal{V}_\Sigma$ are the $\mathcal{O}$-valued points in $\operatorname{Spec} A_{\Sigma}$ viewed as a subset of $\operatorname{Spec} A$. Furthermore, $\mathcal{Z}_\Sigma$, as $\Sigma$ varies over the subsets of $T$, are the $\mathcal{O}$-valued points of the irreducible components of $\operatorname{Spec} A$; see \ref{ch:rings}. Set \begin{equation} \label{eq:interior-component} \begin{aligned} {\mathcal{Z}}^{\circ}_{\Sigma} &\colonequals \mathcal{Z}_{\Sigma}\setminus \bigcup_{\Sigma'\ne \Sigma} \mathcal{Z}_{\Sigma'} \\ &= \{(\boldsymbol a,\boldsymbol b,\boldsymbol c)\in \mathcal{V}\mid \text{$a_i \ne 0$ for $i\in\Sigma$ and $b_j \ne 0$ for $j\notin\Sigma$}\}\,. \end{aligned} \end{equation} These are the $\mathcal{O}$-valued points that lie in a single irreducible component of $\operatorname{Spec} A$. Observe that in ${\mathcal{Z}}^{\circ}_{\Sigma}$ there are points $\boldsymbol v$ where a given subset of the components of $\boldsymbol a,\boldsymbol b$ is fixed, and the other components are of any specified order. This fact is the key to the proof of Theorem~\ref{th:only}. Each $\boldsymbol v$ in $\mathcal{Z}^{\circ}_{\Sigma}$ is a regular point of $A$, in that the local ring $A_{\mathfrak{p}_{\boldsymbol v}}$ is regular. Set \begin{equation} \label{eq:interior-points} {\mathcal{V}}^{\circ}_{\Sigma} \colonequals \bigsqcup_{\Sigma'\subseteq \Sigma} {\mathcal{Z}}^{\circ}_{\Sigma'}\,; \end{equation} these are the $\mathcal{O}$-valued points on $\operatorname{Spec} A_{\Sigma}$ that lie in a single component. Observe that the surjection $A\to A_{\Sigma}$ is an isomorphism at each $\boldsymbol v$ in $\mathcal{V}^{\circ}_{\Sigma}$, in that localization at $\boldsymbol v$ is an isomorphism \[ A_{\mathfrak{p}_{\boldsymbol v}} \xrightarrow{\ \cong\ } (A_\Sigma)_{\mathfrak{p}_{\boldsymbol v}}\,. \] Thus $\boldsymbol v$ is regular also as a point in $\operatorname{Spec} A_{\Sigma}$; said otherwise, $(A_\Sigma,\lambda_{\boldsymbol v})$ is in $\operatorname{C}_{\mathcal{O}}$. \end{chunk} \begin{chunk} \label{ch:cotangent-module} We compute the cotangent modules of the rings $A_\Sigma$ at various points in $\mathcal{V}^{\circ}_{\Sigma}$. To ease notation, we set \[ \con {\boldsymbol v}(-) \colonequals \con{\lambda_{\boldsymbol v}}(-)\,. \] Here is a simple computation. \begin{lemma} \label{le:cotangent-Rsigma} For $\Sigma'\subseteq \Sigma$ and $\boldsymbol v$ in $\mathcal{Z}^{\circ}_{\Sigma'}$ one gets \[ \con{\boldsymbol v}(A_{\Sigma}) = \bigoplus_{i\in\Sigma'} \frac{\mathcal{O}[y_i]}{a_i[y_i]} \bigoplus_{j\in\Sigma\setminus\Sigma'} \frac{\mathcal{O}[x_j]}{b_j[x_j]}\,, \] and hence $\operatorname{length}_{\mathcal{O}} \con {\boldsymbol v}(A_{\Sigma}) = \sum_{i\in\Sigma'} \operatorname{ord}(a_i) + \sum_{j\in\Sigma\setminus\Sigma'} \operatorname{ord}(b_j)$. \end{lemma} \begin{proof} Without loss of generality we can assume $\Sigma=T$.
For $\boldsymbol v \colonequals (\boldsymbol a, \boldsymbol b, \boldsymbol c)$ in $\mathcal{V}_{\Sigma}$ one has \[ x_iy_i = b_i(x_i-a_i) + a_i(y_i-b_i) + (x_i-a_i)(y_i-b_i) \] so it is immediate from the description of $\mathfrak{p}_{\boldsymbol v}$, see \ref{ch:o-points}, that the cotangent module $\mathfrak{p}_{\boldsymbol v}/\mathfrak{p}_{\boldsymbol v}^2$ is generated as an $\mathcal{O}$-module by the classes $[x_i-a_i], [y_i-b_i]$, for $i\in T$, and $[t_k-c_k]$ for $1\le k\le g$, subject to the relations \[ b_i[x_i-a_i]+a_i[y_i-b_i] = 0\quad\text{for $i\in T$}\,. \] That is to say, there is an isomorphism of $\mathcal{O}$-modules \[ \mathfrak{p}_{\boldsymbol v}/\mathfrak{p}_{\boldsymbol v}^2 = \bigoplus_{i\in T} \frac{\mathcal{O}[x_i-a_i]\oplus \mathcal{O}[y_i-b_i]}{b_i[x_i-a_i]+a_i[y_i-b_i]} \bigoplus_{k=1}^{g} \mathcal{O}[t_k-c_k]\,. \] Thus when $\boldsymbol v$ is in $\mathcal{Z}^{\circ}_{\Sigma'}$, it is clear from \eqref{eq:interior-component} that \[ \con{\boldsymbol v}(A) = \bigoplus_{i\in\Sigma'}\frac{\mathcal{O}[y_i]}{a_i[y_i]} \bigoplus_{j\not\in\Sigma'}\frac{\mathcal{O}[x_j]}{b_j[x_j]}\,. \] This is the desired result. \end{proof} We also need to compute the cotangent module of the ring \[ B\colonequals A_{\Sigma}/I \quad \text{where}\quad I\colonequals \bigcap_{\Sigma'\subsetneq \Sigma} I_{\Sigma'}\,. \] This ring is not a complete intersection when $|\Sigma|\ge 2$. \begin{lemma} \label{le:cotangent-S} Fix a subset $\Sigma\subseteq T$ and let $B$ be the quotient ring defined above. Fix $s\in \Sigma$ and set $\Sigma'\colonequals \Sigma\setminus\{s\}$. For any $\boldsymbol v\in \mathcal{Z}^{\circ}_{\Sigma'}$ one has \[ \operatorname{length}_{\mathcal{O}} \con {\boldsymbol v}(B) = \sum_{i\in\Sigma'} \operatorname{ord}(a_i) + \min\{\operatorname{ord}(b_s), \sum_{i\in\Sigma'} \operatorname{ord}(a_i)\}\,. \] \end{lemma} \begin{proof} Without loss of generality, we can assume $\Sigma=T$ and $s=1$. Then \[ B=A/I \quad\text{where $I = (x_1\cdots x_n)$.} \] For $\boldsymbol v = (\boldsymbol a,\boldsymbol b, \boldsymbol c)$ in $\mathcal{Z}^{\circ}_{\Sigma'}$ one has $a_1=0$ and $a_i\ne 0$ for $i\ge 2$. Then \[ x_1\cdots x_n = x_1(x_2-a_2+a_2)\cdots (x_n-a_n+a_n) = (a_2\dots a_n) x_1 + z \] where $z$ is a sum of monomials quadratic or higher in the $(x_i-a_i)$. It follows that $\con{\boldsymbol v}(B)$ is the quotient of $\con{\boldsymbol v}(A)$ by the term $(a_2\dots a_n)[x_1]$. Thus the stated equality is immediate from the description of $\con {\boldsymbol v}(A)$ in Lemma~\ref{le:cotangent-Rsigma}. \end{proof} \end{chunk}
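To illustrate Lemma~\ref{le:cotangent-Rsigma} and Lemma~\ref{le:cotangent-S}, consider again the case $n=1$, with $T=\Sigma=\{1\}$. For $\boldsymbol v = (a_1, 0, \boldsymbol c)$ in $\mathcal{Z}^{\circ}_{\{1\}}$ the single relation $b_1[x_1-a_1]+a_1[y_1-b_1]=0$ reduces to $a_1[y_1]=0$, so \[ \con{\boldsymbol v}(A) \cong \mathcal{O}/a_1\mathcal{O} \quad\text{and}\quad \operatorname{length}_{\mathcal{O}} \con {\boldsymbol v}(A) = \operatorname{ord}(a_1)\,, \] whereas for $\boldsymbol v = (0, b_1, \boldsymbol c)$ in $\mathcal{Z}^{\circ}_{\varnothing}$ one gets $\con{\boldsymbol v}(A) \cong \mathcal{O}/b_1\mathcal{O}$, of length $\operatorname{ord}(b_1)$. As for Lemma~\ref{le:cotangent-S}, here $B = A/(x_1) = A_{\varnothing}$ is regular, so $\con{\boldsymbol v}(B)=0$ for every $\mathcal{O}$-point $\boldsymbol v$, in agreement with the formula $\sum_{i\in\varnothing} \operatorname{ord}(a_i) + \min\{\operatorname{ord}(b_1), 0\} = 0$.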
\begin{chunk} \label{ch:mcm} We keep the notation from \ref{ch:rings} and \ref{ch:o-points}. Let $M_{\Sigma}$ be a maximal Cohen--Macaulay $A_\Sigma$-module; it is also maximal Cohen--Macaulay as an $A$-module. For each point $\boldsymbol v$ in $\mathcal{V}^{\circ}_{\Sigma}$, described in \eqref{eq:interior-points}, the ring $A_{\Sigma}$ is regular at $\boldsymbol v$, so $M_{\Sigma}$ is free at $\boldsymbol v$, that is to say, $(M_\Sigma)_{\mathfrak{p}_{\boldsymbol v}}$ is free over $(A_\Sigma)_{\mathfrak{p}_{\boldsymbol v}}\cong A_{\mathfrak{p}_{\boldsymbol v}}$; this is by the Auslander--Buchsbaum equality; see \cite[Theorem~1.3.3]{Bruns/Herzog:1998}. Moreover, the rank of this free module is the same at any $\boldsymbol v\in \mathcal{Z}^{\circ}_{\Sigma'}$, for any $\Sigma'\subseteq\Sigma$, for these lie in the same irreducible component of $\operatorname{Spec} A_{\Sigma}$. We denote this number $\operatorname{rank}_{\Sigma'}(M_{\Sigma})$; thus \[ \operatorname{rank}_{\Sigma'} (M_{\Sigma}) = \operatorname{rank}_{A_{\mathfrak{p}_{\boldsymbol v}}}(M_\Sigma)_{\mathfrak{p}_{\boldsymbol v}}\,. \] We extend this to all $\Sigma'\subseteq T$ by setting $\operatorname{rank}_{\Sigma'}(M_{\Sigma})=0$ when $\Sigma'\not\subseteq \Sigma$, because $(M_\Sigma)_{\mathfrak{p}_{\boldsymbol v}}=0$ for $\boldsymbol v\in \mathcal{Z}^{\circ}_{\Sigma'}$. \end{chunk} \begin{chunk} \label{ch:modules} We fix a family of modules $M_{\Sigma}$, where $M_{\Sigma}$ is an $A_{\Sigma}$-module, equipped with $A_{\Sigma}$-linear surjections \[ \pi_{\Sigma,\Sigma'}\colon M_{\Sigma}\twoheadrightarrow M_{\Sigma'}\,, \] whenever $\Sigma'\subseteq \Sigma$, satisfying the following properties: \begin{enumerate}[\quad\rm(1)] \item $M_{\Sigma}$ is a self-dual, maximal Cohen--Macaulay, $A_{\Sigma}$-module; \item $\pi_{\Sigma,\Sigma'}$ is an isomorphism at $\mathfrak{p}_{\boldsymbol v}$ for each $\boldsymbol v$ in $\mathcal{V}^{\circ}_{\Sigma'}$. \item For any integer $s\not\in\Sigma'$, and $\Sigma\colonequals \Sigma'\cup \{s\}$, the composition \[ M_{\Sigma'} \cong (M_{\Sigma'})^\vee \xrightarrow{\ (\pi_{\Sigma,\Sigma'})^\vee \ } (M_{\Sigma})^\vee \cong M_{\Sigma} \xrightarrow{\ \pi_{\Sigma,\Sigma'} \ } M_{\Sigma'}\,, \] is multiplication by $y_s$. \end{enumerate} Following the discussion in \ref{ch:mcm}, for $\Sigma\subseteq T$ set \[ \mu_{\Sigma} \colonequals \operatorname{rank}_{\Sigma}M_{T}\,. \] Given the isomorphism in condition (2) above, for subsets $\Sigma,\Sigma'\subseteq T$ one has \[ \operatorname{rank}_{\Sigma'}M_{\Sigma} = \begin{cases} \mu_{\Sigma'} & \text{when $\Sigma'\subseteq \Sigma$}\\ 0 &\text{otherwise} \end{cases} \] To ease notation, we set \[ \cmod {\boldsymbol v}(M) \colonequals \cmod {\lambda_{\boldsymbol v}}(M)\,. \] Here is a computation of congruence modules. \begin{lemma} \label{le:cmod} For $\Sigma'\subseteq \Sigma\subseteq T$ and $\boldsymbol v \colonequals (\boldsymbol a, \boldsymbol b, \boldsymbol c)$ in $\mathcal{Z}^{\circ}_{\Sigma'}$, one has \[ \operatorname{length}_{\mathcal{O}} \cmod {\boldsymbol v}(M_{\Sigma}) = \operatorname{length}_{\mathcal{O}} \cmod {\boldsymbol v}(M_{\Sigma'}) + \mu_{\Sigma'}(\sum_{i\in\Sigma\setminus \Sigma'} \operatorname{ord}(b_i))\,. \] \end{lemma} \begin{proof} We verify this by induction on the cardinality of $\Sigma\setminus \Sigma'$. The base case is when $\Sigma=\Sigma'$, and then the stated equality is clear. For the induction step it suffices to note that if $\Sigma = \Sigma' \cup\{s\}$ for $s\notin \Sigma'$, then \[ \operatorname{length}_{\mathcal{O}} \cmod {\boldsymbol v}(M_{\Sigma}) = \operatorname{length}_{\mathcal{O}} \cmod {\boldsymbol v}(M_{\Sigma'}) + \operatorname{rank}_{\boldsymbol v}(M_\Sigma) \operatorname{ord}(b_s)\,. \] This is immediate by property \ref{ch:modules}(3) and \cite[Proposition~4.4]{Iyengar/Khare/Manning:2022a}. \end{proof} \begin{proposition} \label{pr:free-summand} For each $\Sigma\subseteq T$ the $A_\Sigma$-module $M_{\Sigma}$ has a free summand of rank $\mu_{\varnothing}$. In particular, one gets an inequality $\mu_{\Sigma}\ge \mu_{\varnothing}$. \end{proposition} \begin{proof} Pick a $\boldsymbol v$ in $\mathcal{Z}^\circ_{\varnothing}$. Since the local ring $A_{\varnothing}$ is regular and $M_{\varnothing}$ is maximal Cohen--Macaulay, it is free, and of rank equal to $\mu_{\varnothing}$.
Thus $\cmod {\boldsymbol v}(M_\varnothing)=0$ and hence from Lemma~\ref{le:cmod} applied with $\Sigma'=\varnothing$ one gets the first equality below: \begin{align*} \operatorname{length}_{\mathcal{O}} \cmod {\boldsymbol v}(M_{\Sigma}) &=\mu_{\varnothing}\cdot \big(\sum_{i\in\Sigma} \operatorname{ord}(b_i)\big) \\ &=\mu_{\varnothing} \cdot \operatorname{length}_{\mathcal{O}} \con {\boldsymbol v}(A_\Sigma)\,. \end{align*} The second equality is by Lemma~\ref{le:cotangent-Rsigma}, applied with $\Sigma'=\varnothing$. Since $A_\Sigma$ is complete intersection, and hence Gorenstein, it remains to apply \ref{ch:cmod-properties}(4). \end{proof} \end{chunk} \section{A criterion for freeness} \label{se:freeness} The theorem below builds on and strengthens Proposition~\ref{pr:free-summand}. \begin{theorem} \label{th:only} Let $\{A_\Sigma\}$ and $\{M_\Sigma\}$ be the families of rings and modules described in \ref{ch:rings} and \ref{ch:modules}, respectively. If $\mu_{T}\le \mu_{\varnothing}$, then $M_{\Sigma}\cong A_{\Sigma}^{\mu_{\varnothing}}$ for each $\Sigma\subseteq T$. \end{theorem} \begin{proof} By Proposition~\ref{pr:free-summand}, for each $\Sigma\subseteq T$ there is an $A_{\Sigma}$-module $W_{\Sigma}$ and an isomorphism of $A_\Sigma$-modules \begin{equation} \label{eq:only-summands} M_{\Sigma}\cong A_{\Sigma}^{\mu_\varnothing} \oplus W_{\Sigma}\,. \end{equation} We prove that $W_{\Sigma}$ is zero. To that end it suffices to prove $\mu_{\Sigma}\le \mu_{\varnothing}$ for each $\Sigma\subseteq T$. Indeed, then $\operatorname{rank}_{\boldsymbol v}(M_\Sigma) \le \mu_{\varnothing}$ for $\boldsymbol v\in \mathcal{V}^\circ_{\Sigma}$, as described in \eqref{eq:interior-points}. Thus the isomorphism in \eqref{eq:only-summands} implies that $\operatorname{rank}_{\boldsymbol v}(W_{\Sigma})=0$ for each such $\boldsymbol v$. This implies that the support of $W_{\Sigma}$ does not contain any component of $\operatorname{Spec} A_\Sigma$. Since $W_{\Sigma}$ is a maximal Cohen--Macaulay $A_\Sigma$-module, we conclude that it is zero, as desired. It thus remains to verify that \[ \mu_{\Sigma}\le \mu_{\varnothing}\quad\text{for each $\Sigma\subseteq T$.} \] We verify this by computing the congruence module of $W_\Sigma$ at suitable points $\boldsymbol v$ in $\mathcal{V}^\circ_{\Sigma}$, and proving that, if the desired inequality does not hold, then there are points $\boldsymbol v$ at which the Wiles defect of $W_{\Sigma}$ would be negative, violating~\ref{ch:cmod-properties}\eqref{it:defect-positive}. Since the inequality above holds when $\Sigma=T$, it suffices to prove that if the inequality holds for a given $\Sigma$, it holds also for $\Sigma' \colonequals \Sigma\setminus \{s\}$ for any $s\in\Sigma$. With $\Sigma$ and $\Sigma'$ as above, fix $\boldsymbol v$ in $\mathcal{Z}^\circ_{\Sigma'}$.
The isomorphisms in \eqref{eq:only-summands}, applied to $\Sigma$ and $\Sigma'$, yield the first equality and the third inequality, respectively, below: \begin{equation} \label{eq:only-estimates} \begin{aligned} \operatorname{length}_{\mathcal{O}}\cmod {\boldsymbol v}(W_{\Sigma}) &= \operatorname{length}_{\mathcal{O}}\cmod {\boldsymbol v}(M_{\Sigma}) - \mu_{\varnothing}\cdot \operatorname{length}_{\mathcal{O}}\cmod {\boldsymbol v}(A_{\Sigma}) \\ &= \operatorname{length}_{\mathcal{O}}\cmod {\boldsymbol v}(M_{\Sigma'}) + \mu_{\Sigma'}\cdot \operatorname{ord}(b_s) - \mu_{\varnothing}\cdot \operatorname{length}_{\mathcal{O}}\cmod {\boldsymbol v}(A_{\Sigma}) \\ &\ge \mu_{\varnothing}\cdot \operatorname{length}_{\mathcal{O}}\cmod {\boldsymbol v}(A_{\Sigma'}) + \mu_{\Sigma'}\cdot \operatorname{ord}(b_s) - \mu_{\varnothing}\cdot \operatorname{length}_{\mathcal{O}}\cmod {\boldsymbol v}(A_{\Sigma}) \\ &=\mu_{\varnothing}\cdot \left[\operatorname{length}_{\mathcal{O}}\cmod {\boldsymbol v}(A_{\Sigma'}) - \operatorname{length}_{\mathcal{O}}\cmod {\boldsymbol v}(A_{\Sigma})\right] + \mu_{\Sigma'}\cdot \operatorname{ord}(b_s) \\ &= (\mu_{\Sigma'} -\mu_{\varnothing})\cdot \operatorname{ord}(b_s)\,. \end{aligned} \end{equation} The second equality is by Lemma~\ref{le:cmod} while the last one is by Lemma~\ref{le:cotangent-Rsigma}. Since $\mu_\Sigma \le \mu_\varnothing$, the $A_\Sigma$-module $W_{\Sigma}$ is not supported on $\mathcal{Z}^\circ_{\Sigma}$. Since $W_{\Sigma}$ is also maximal Cohen--Macaulay, it follows that the $A_\Sigma$ action on it factors through the quotient ring $B\colonequals A_{\Sigma}/I$, where \[ I\colonequals \bigcap_{\Sigma''\subsetneq \Sigma} I_{\Sigma''} \] considered in Lemma~\ref{le:cotangent-S}. Observe that the surjection $A_{\Sigma}\to A_{\Sigma'}$ factors through the map $A_{\Sigma}\to B$. In particular, the latter is also an isomorphism at $\boldsymbol v$, and hence the pair $(B,\lambda_{\boldsymbol v}\circ \varepsilon)$, where $\varepsilon\colon B\to A_{\Sigma'}$ is the canonical surjection, is also in $\operatorname{C}_{\mathcal{O}}$, and the results in \ref{ch:cmod-properties} apply to this pair as well. Since $W_{\Sigma}$ is maximal Cohen--Macaulay, the invariance of domain property of congruence modules~\ref{ch:cmod-properties}\eqref{it:defect-invariance} gives the equality below: \begin{equation} \label{eq:only-S} \begin{aligned} (\mu_{\Sigma'}-\mu_{\varnothing})\cdot \operatorname{length}\con {\boldsymbol v}(B) &\ge \operatorname{length}_{\mathcal{O}}\cmod {\boldsymbol v}^B(W_{\Sigma}) \\ &=\operatorname{length}_{\mathcal{O}}\cmod {\boldsymbol v}(W_{\Sigma}) \\ &\ge (\mu_{\Sigma'} -\mu_{\varnothing})\cdot \operatorname{ord}(b_s)\,, \end{aligned} \end{equation} where $\cmod {\boldsymbol v}^B(W_{\Sigma})$ is the congruence module of $W_{\Sigma}$, treated as a $B$-module. The first inequality holds because the Wiles defect of the $B$-module $W_{\Sigma}$ at $\boldsymbol v$ is non-negative; see~\ref{ch:cmod-properties}\eqref{it:defect-positive}. The last inequality is from \eqref{eq:only-estimates}. Suppose, contrary to the desired result, $\mu_{\Sigma'}> \mu_{\varnothing}$.
Then \eqref{eq:only-S} implies \[ \operatorname{length}\con {\boldsymbol v}(B) \ge \operatorname{ord}(b_s) \quad\text{for each $\boldsymbol v\in \mathcal{Z}^\circ_{\Sigma'}$.} \] On the other hand, from Lemma~\ref{le:cotangent-S} we get that \[ \operatorname{length}_{\mathcal{O}} \con {\boldsymbol v}(B) = \sum_{i\in\Sigma'} \operatorname{ord}(a_i) + \min\{\operatorname{ord}(b_s), \sum_{i\in\Sigma'} \operatorname{ord}(a_i)\}\,. \] In particular, when $\operatorname{ord}(b_s)\ge \sum_{i\in\Sigma'} \operatorname{ord}(a_i)$, combining the (in)equalities above yields \[ 2\sum_{i\in\Sigma'} \operatorname{ord}(a_i)\ge \operatorname{ord}(b_s) \quad\text{for each $\boldsymbol v\in \mathcal{Z}^\circ_{\Sigma'}$.} \] It is clear from \eqref{eq:interior-component} that there are points $\boldsymbol v\in \mathcal{Z}^\circ_{\Sigma'}$ with $\operatorname{ord}(b_s)$ arbitrarily large and the $a_i$ fixed; for instance, one can take $a_i=\varpi$ for all $i\in\Sigma'$ and $b_s=\varpi^N$ with $N$ arbitrarily large. This violates the inequality above, yielding the contradiction we seek. \end{proof} \begin{chunk} Theorem~\ref{th:only} addresses only rings of the form considered in \ref{ch:rings}. This is primarily for ease of exposition, as this is sufficient to deal with the rings considered in \cite{Iyengar/Khare/Manning:2022a}. One could prove analogues of Theorem \ref{th:only} with the rings $A_\Sigma$ replaced by more general classes of Gorenstein rings. Indeed, there are more general families of rings $A_\Sigma$ and modules $M_\Sigma$ for which the argument of Proposition \ref{pr:free-summand} applies. Besides Proposition \ref{pr:free-summand}, the primary input into the proof of Theorem \ref{th:only} is Lemma \ref{le:cotangent-S}, which allows one to pick points $\boldsymbol v\in \mathcal{Z}^\circ_{\Sigma'}$ making $\operatorname{length}_{\mathcal{O}} \cmod{\boldsymbol v}(M_\Sigma)$ arbitrarily large, while keeping $\operatorname{length}_{\mathcal{O}} \con{\boldsymbol v}(B)$ bounded. But one can find such $\boldsymbol v$ in more general cases than we have considered here (roughly, one merely needs to find a point $\boldsymbol v_0\in \mathcal{Z}_{\Sigma'}\cap \mathcal{Z}_\Sigma$ at which $\operatorname{Spec} B$ is smooth, and pick $\boldsymbol v\in \mathcal{Z}_{\Sigma'}^\circ$ to be sufficiently close to $\boldsymbol v_0$). This could potentially allow one to prove freeness of Hecke modules at more general levels than we consider here and in \cite{Iyengar/Khare/Manning:2022a}. \end{chunk} \section{Applications to Hecke modules} \label{se:applications} In this section we apply our results from Section~\ref{se:freeness} to the number theoretic context explored in \cite{Iyengar/Khare/Manning:2022a}. We freely use the notation and results of that paper. Let $F$ be a number field and assume Conjectures A, B, C and D from \cite{Iyengar/Khare/Manning:2022a} hold for $F$. Let $r_1$ and $r_2$ be the number of real and complex places of $F$ respectively, so that $r_1+2r_2=[F:\mathbb{Q}]$. Pick a prime $\ell>2$ which does not ramify in $F$, and let $E/\mathbb{Q}_\ell$ be a finite extension with ring of integers $\mathcal{O}$, uniformizer $\varpi\in \mathcal{O}$ and residue field $k = \mathcal{O}/\varpi\mathcal{O}$. We use this ring as the ring $\mathcal{O}$ from \cite{Iyengar/Khare/Manning:2022a}, and the DVR from Section \ref{se:congruence}. Let $\mathcal{N}_\varnothing\subseteq \mathcal{O}_F$ be a nonzero ideal which is relatively prime to $\ell$.
Let $K_0(\mathcal{N}_\varnothing)\subseteq \mathrm{PGL}_2(\mathbb{A}_F^\infty)$ be the corresponding compact open subgroup from \cite[Section 13]{Iyengar/Khare/Manning:2022a} (and recall that $K_0(\mathcal{N}_\varnothing)$ was defined to have additional level structure at a certain auxiliary prime). Let $\fm\subseteq {\mathbb{T}}(K_0(\mathcal{N}_\varnothing))$ be a non-Eisenstein maximal ideal with residue field $k$, for which $N(\overline{\rho}_{\fm}) = \mathcal{N}_\varnothing$. Let $\Sigma$ be a finite set of primes of $\mathcal{O}_F$ such that for all $v\in \Sigma$: \begin{itemize} \item $v\nmid \mathcal{N}_\varnothing$. \item $\overline{\rho}_{\fm}$ is unramified at $v$. \item $v\nmid \ell$ and $q_v \not\equiv 1\pmod{\ell}$. \item $\overline{\rho}_{\fm}(\Frob_v)$ has eigenvalues $q_v\epsilon_v$ and $\epsilon_v$ for some $\epsilon_v=\pm 1$. \end{itemize} We consider the following hypothesis on $\overline{\rho}_\fm$ and $\Sigma$: \begin{hypothesis}\label{hy:integral lift} There is a finite set $T$ of primes of $\mathcal{O}_F$ such that $\Sigma\subseteq T$ and for all $v\in T$: \begin{itemize} \item $v\nmid \mathcal{N}_\varnothing$. \item $\overline{\rho}_{\fm}$ is unramified at $v$. \item $v\nmid \ell$ and $q_v \not\equiv 1\pmod{\ell}$. \item $\overline{\rho}_{\fm}(\Frob_v)$ has eigenvalues $q_v\epsilon_v$ and $\epsilon_v$ for some $\epsilon_v=\pm 1$. \end{itemize} and a continuous Galois representation $\rho\colon G_F\to \mathrm{GL}_2(\mathcal{O}')$, where $\mathcal{O}'$ is the ring of integers in a finite extension $E'/\mathbb{Q}_\ell$, with uniformizer $\varpi'\in\mathcal{O}'$, satisfying: \begin{itemize} \item $\rho\equiv \overline{\rho}_\fm\pmod{\varpi'}$; \item $\det \rho = \varepsilon_\ell$; \item For all places $v|\ell$, $\rho$ is flat at $v$; \item If $v$ is any place of $F$ for which $v\nmid \ell$ and $v|\mathcal{N}_\varnothing$, then $\rho$ is minimally ramified at $v$; \item If $v$ is any place of $F$ with $v\nmid\ell\mathcal{N}_\varnothing$ and $v\not\in T$ then $\rho$ is unramified at $v$. \item If $v\in T$, then $\rho$ is ramified at $v$. \item If $v \in T$ and $q_v\equiv -1\pmod{\ell}$ then $\rho$ arises from $R_v^{{\rm uni}(\epsilon_v)}$. \end{itemize} \end{hypothesis} The question of whether such geometric liftings exist for representations $\overline{\rho}:G_F \to \mathrm{GL}_2(k)$ when $F$ is not a totally real field is wide open. One might optimistically expect such liftings to exist, as there seems to be no strong heuristic to suggest otherwise. To make our results independent of Hypothesis \ref{hy:integral lift} one could just assume that $\overline{\rho}_\fm$ arises by reduction of a geometric characteristic 0 representation $\rho$. There are many such representations (in the case of CM fields $F$) associated to regular algebraic cuspidal automorphic representations of $\mathrm{GL}_2(\mathbb{A}_F)$ by \cite{HLTT}. From now on assume that $\overline{\rho}_\fm$ and $\Sigma$ satisfy Hypothesis \ref{hy:integral lift}. Let $T$ and $\rho\colon G_F\to \mathrm{GL}_2(\mathcal{O}')$ be as in Hypothesis \ref{hy:integral lift}. For any $\Sigma'\subseteq T$ define $\mathcal{N}_{\Sigma'}\subseteq \mathcal{O}_F$, $K_0(\mathcal{N}_{\Sigma'})\subseteq\mathrm{PGL}_2(\mathbb{A}_F^\infty)$ and $Y_0(\mathcal{N}_{\Sigma'})$ as in \cite{Iyengar/Khare/Manning:2022a}. Let $R_{\Sigma'}$ and $\mathbb{T}_{\Sigma'}$ be the associated global deformation ring and Hecke algebra, again defined as in \cite{Iyengar/Khare/Manning:2022a}.
By expanding $E$ if necessary, we may assume that all augmentations $\lambda\colon \mathbb{T}_{\Sigma'}\to \overline{\mathbb{Q}}_\ell$, for all $\Sigma'\subseteq T$, have image equal to $\mathcal{O}$, and so in particular $E'=E$ and $\mathcal{O}'=\mathcal{O}$. Now assume that $\overline{\rho}_\fm|_{G_{F(\zeta_\ell)}}$ is absolutely irreducible. The patching argument from \cite[Section 14]{Iyengar/Khare/Manning:2022a} produces, for each $\Sigma'\subseteq T$, a complete local noetherian $\mathcal{O}$-algebra $R_{\Sigma',\infty}$ and a maximal Cohen--Macaulay $R_{\Sigma',\infty}$-module $M_{\Sigma',\infty}$. Moreover there exists a power series ring $S_\infty$ with maps $S_\infty\to R_{\Sigma',\infty}$ for which \[ R_{\Sigma',\infty}\otimes_{S_\infty}\mathcal{O} = R_{\Sigma'}\cong \mathbb{T}_{\Sigma'} \quad \text{and} \quad M_{\Sigma',\infty}\otimes_{S_\infty}\mathcal{O} \cong \operatorname{H}_{r_1+r_2}(Y_0(\mathcal{N}_{\Sigma'}),\mathcal{O})_{\fm_{\Sigma'}}\,. \] We apply the results of Sections \ref{se:setup} and \ref{se:freeness}, taking $A = R_{T,\infty}$, $A_{\Sigma'} = R_{\Sigma',\infty}$ and $M_{\Sigma'} = M_{\Sigma',\infty}$. The work of \cite[Section 14]{Iyengar/Khare/Manning:2022a} implies that these objects satisfy all of the properties listed in Section \ref{se:setup}. In particular, we may consider the integers $\mu_{\Sigma'} \colonequals\operatorname{rank}_{\Sigma'} M_{T,\infty}$. Our result relies on the following standard generic multiplicity result: \begin{lemma}\label{lem:generic multiplicity} For any $\Sigma'\subseteq T$ and augmentation $\lambda\colon \mathbb{T}_{\Sigma'}\twoheadrightarrow \mathcal{O}$ we have \[ \dim_E (\operatorname{H}_{r_1+r_2}(Y_0(\mathcal{N}_{\Sigma'}),\mathcal{O})_{\fm_{\Sigma'}}\otimes_\lambda E) = 2^{r_1} \] \end{lemma} \begin{proof} Set $\mathfrak{p}_{\lambda}\colonequals \operatorname{Ker}(\lambda)$. Recall that \[ \operatorname{H}_{r_1+r_2}(Y_0(\mathcal{N}_{\Sigma'}),\mathcal{O})_{\fm_{\Sigma'}}\otimes_\lambda E \colonequals (\operatorname{H}_{r_1+r_2}(Y_0(\mathcal{N}_{\Sigma'}),\mathcal{O})_{\fm_{\Sigma'}}/\mathfrak{p}_\lambda)\otimes_\mathcal{O} E\,. \] The lemma can be deduced from \cite[\S 3.6.2]{Harder}. \end{proof} This allows us to compute the rank $\mu_{\Sigma'}$ for all $\Sigma'$ for which an appropriate augmentation $\mathbb{T}_{\Sigma'}\twoheadrightarrow\mathcal{O}$ exists: \begin{corollary}\label{cor:mu_Sigma} Take any $\Sigma'\subseteq T$. If there exists an augmentation $\lambda\colon \mathbb{T}_{\Sigma'}\twoheadrightarrow \mathcal{O}$ such that the pullback \[ R_{\Sigma',\infty}\twoheadrightarrow R_{\Sigma'}\cong \mathbb{T}_{\Sigma'}\xrightarrow{\lambda}\mathcal{O} \] is equal to $\lambda_{\boldsymbol v}$ for some $\displaystyle {\boldsymbol v} \in \mathcal{V}_{\Sigma'}\smallsetminus \bigcup_{\Sigma''\subsetneq \Sigma'} \mathcal{Z}_{\Sigma''}$, then $\mu_{\Sigma'} = 2^{r_1}$. \end{corollary} \begin{proof} The condition that $\displaystyle {\boldsymbol v} \in \mathcal{V}_{\Sigma'}\smallsetminus \bigcup_{\Sigma''\subsetneq \Sigma'} \mathcal{Z}_{\Sigma''}$ ensures that $\mathfrak{p}_{\boldsymbol v}$ lies in $\mathcal{Z}_{\Sigma'}$ and that $\operatorname{Spec} R_{\Sigma',\infty}$ is regular at $\mathfrak{p}_{\boldsymbol v}$. It follows by the discussion in Section \ref{se:setup} that $\operatorname{rank}_{\mathfrak{p}_{\boldsymbol v}} M_{\Sigma',\infty} = \mu_{\Sigma'}$.
Thus \begin{align*} \mu_{\Sigma'} &= \dim_E (M_{\Sigma',\infty}\otimes_{\lambda}E)\\ &= \dim_E ((M_{\Sigma',\infty}\otimes_{R_{\Sigma',\infty}}R_{\Sigma'})\otimes_{\lambda}E)\\ &= \dim_E ((M_{\Sigma',\infty}\otimes_{S_\infty}\mathcal{O})\otimes_{\lambda}E)\\ &= \dim_E (\operatorname{H}_{r_1+r_2}(Y_0(\mathcal{N}_{\Sigma'}),\mathcal{O})_{\fm_{\Sigma'}}\otimes_{\lambda}E)\\ &= 2^{r_1} \end{align*} by Lemma \ref{lem:generic multiplicity}. \end{proof} In particular, the integral lift $\rho\colon G_F\to \mathrm{GL}_2(\mathcal{O})$ of $\overline{\rho}_{\fm}$ from Hypothesis \ref{hy:integral lift} gives the following: \begin{corollary}\label{cor:mu_T} $\mu_T = 2^{r_1}$. \end{corollary} \begin{proof} Recall that we have assumed $\overline{\rho}_\fm|_{G_{F(\zeta_\ell)}}$ is absolutely irreducible. Thus standard modularity lifting results (see for instance \cite[Theorem 5.16, Theorem 9.19]{Calegari/Geraghty:2018}) imply that the representation $\rho$ is modular, and moreover is equal to $\rho_\lambda$ for some augmentation $\lambda\colon \mathbb{T}_T\twoheadrightarrow \mathcal{O}$. Let the pullback \[ R_{T,\infty}\twoheadrightarrow R_{T}\cong \mathbb{T}_{T}\xrightarrow{\lambda}\mathcal{O} \] equal $\lambda_{\boldsymbol v}$ for some ${\boldsymbol v}\in \mathcal{V}_T$. By the description of $R_{T,\infty}$ in \cite{Iyengar/Khare/Manning:2022a} (in particular, \cite[Proposition 12.1]{Iyengar/Khare/Manning:2022a}) the fact that $\rho$ is ramified at all primes in $T$ implies that ${\boldsymbol v}\not\in \mathcal{Z}_{\Sigma'}$ for any $\Sigma'\subsetneq T$. Hence Corollary \ref{cor:mu_Sigma} gives $\mu_T = 2^{r_1}$. \end{proof} Now as $N(\overline{\rho}_{\fm}) = \mathcal{N}_\varnothing$ we have that $M_{\varnothing,\infty} \ne 0$ and so $\mu_{\varnothing}\ge 1$. Hence $1\le \mu_\varnothing \le \mu_T = 2^{r_1}$ by Proposition \ref{pr:free-summand}. If the number field $F$ is totally complex, so that $r_1=0$, we then get $\mu_\varnothing = 1 = \mu_T$ and so Theorem \ref{th:only} implies that $M_{\Sigma,\infty}\cong R_{\Sigma,\infty}$. Applying $-\otimes_{S_\infty}\mathcal{O}$, this gives that $\operatorname{H}_{r_2}(Y_0(\mathcal{N}_\Sigma),\mathcal{O})_{\fm_\Sigma}\cong R_\Sigma\cong\mathbb{T}_\Sigma$. We have thus proved the following: \begin{theorem}\label{th:mult 1} Let $F$ be a totally complex number field in which $\ell$ does not ramify, and assume Conjectures A, B, C and D from \cite{Iyengar/Khare/Manning:2022a} hold for $F$. Let $\mathcal{N}_\varnothing\subseteq \mathcal{O}_F$ be a nonzero ideal and let $\fm\subseteq {\mathbb{T}}(K_0(\mathcal{N}_\varnothing))$ be a non-Eisenstein maximal ideal such that $N(\overline{\rho}_\fm) = \mathcal{N}_\varnothing$ and $\overline{\rho}_\fm|_{G_{F(\zeta_\ell)}}$ is absolutely irreducible. Let $\Sigma$ be a finite set of primes of $\mathcal{O}_F$ such that for all $v\in \Sigma$: \begin{itemize} \item $\overline{\rho}_v$ is unramified. \item $v\nmid \ell$ and $q_v \not\equiv 1\pmod{\ell}$. \item $\overline{\rho}_v(\Frob_v)$ has eigenvalues $q_v\epsilon_v$ and $\epsilon_v$ for some $\epsilon_v=\pm 1$. \end{itemize} Assume that $\overline{\rho}_\fm$ and $\Sigma$ satisfy Hypothesis \ref{hy:integral lift}. Then $\operatorname{H}_{r_2}(Y_0(\mathcal{N}_\Sigma),\mathcal{O})_{\fm_\Sigma}$ is free of rank $1$ over $\mathbb{T}_\Sigma$. 
\qed \end{theorem} When $F$ is not totally complex, we get the following weaker result: \begin{theorem}\label{th:mult 2^r1} Let $F$ be a number field in which $\ell$ does not ramify, and assume Conjectures A, B, C and D from \cite{Iyengar/Khare/Manning:2022a} hold for $F$. Let $\mathcal{N}_\varnothing\subseteq \mathcal{O}_F$ be a nonzero ideal and let $\fm\subseteq {\mathbb{T}}(K_0(\mathcal{N}_\varnothing))$ be a non-Eisenstein maximal ideal such that $N(\overline{\rho}_\fm) = \mathcal{N}_\varnothing$ and $\overline{\rho}_\fm|_{G_{F(\zeta_\ell)}}$ is absolutely irreducible. Let $\Sigma$ be a finite set of primes of $\mathcal{O}_F$ such that for all $v\in \Sigma$: \begin{itemize} \item $\overline{\rho}_v$ is unramified. \item $v\nmid \ell$ and $q_v \not\equiv 1\pmod{\ell}$. \item $\overline{\rho}_v(\Frob_v)$ has eigenvalues $q_v\epsilon_v$ and $\epsilon_v$ for some $\epsilon_v=\pm 1$. \end{itemize} Assume that $\overline{\rho}_\fm$ and $\Sigma$ satisfy Hypothesis \ref{hy:integral lift} and that $\operatorname{H}_{r_1+r_2}(Y_0(\mathcal{N}_\varnothing),E)_{\fm_\varnothing}\ne 0$. Then $\operatorname{H}_{r_1+r_2}(Y_0(\mathcal{N}_\Sigma),\mathcal{O})_{\fm_\Sigma}$ is free of rank $2^{r_1}$ over $\mathbb{T}_\Sigma$. \end{theorem} \begin{proof} The hypothesis that $\operatorname{H}_{r_1+r_2}(Y_0(\mathcal{N}_\varnothing),E)_{\fm_\varnothing}\ne 0$ implies there exists an augmentation $\lambda'\colon \mathbb{T}_\varnothing\twoheadrightarrow \mathcal{O}$. Again, this pulls back to an augmentation \[ \lambda_{\boldsymbol v'}\colon R_{\varnothing,\infty}\twoheadrightarrow R_\varnothing\cong \mathbb{T}_\varnothing\xrightarrow{\lambda'}\mathcal{O} \] for some ${\boldsymbol v'} \in \mathcal{V}_\varnothing$. Vacuously we have ${\boldsymbol v'} \not\in \mathcal{Z}_{\Sigma''}$ for $\Sigma''\subsetneq\varnothing$ and so Corollary \ref{cor:mu_Sigma} gives $\mu_\varnothing = 2^{r_1} = \mu_T$. The claim now follows from the previous argument. \end{proof} \section*{Acknowledgements} This work is partly supported by National Science Foundation grants DMS-200985 (SBI) and DMS-2200390 (CBK), and by a Simons Fellowship (CBK). The second author thanks the Tata Institute for Fundamental Research in Mumbai, and the third author thanks the Max Planck Institute for Mathematics in Bonn, for support and hospitality. \bibliographystyle{amsplain} \begin{bibdiv} \begin{biblist} \bib{Bruns/Herzog:1998}{book}{ author={Bruns, Winfried}, author={Herzog, J{\"u}rgen}, title={Cohen-{M}acaulay rings}, edition={2}, series={Cambridge Studies in Advanced Mathematics}, publisher={Cambridge University Press}, date={1998}, } \bib{Calegari/Geraghty:2018}{article}{ author={Calegari, Frank}, author={Geraghty, David}, title={Modularity lifting beyond the {T}aylor-{W}iles method}, date={2018}, ISSN={0020-9910}, journal={Invent. Math.}, volume={211}, number={1}, pages={297\ndash 433}, url={https://doi.org/10.1007/s00222-017-0749-x}, review={\MR{3742760}}, } \bib{Diamond:1997}{article}{ author={Diamond, Fred}, title={The {T}aylor-{W}iles construction and multiplicity one}, date={1997}, ISSN={0020-9910}, journal={Invent. Math.}, volume={128}, number={2}, pages={379\ndash 391}, url={http://dx.doi.org/10.1007/s002220050144}, review={\MR{1440309}}, } \bib{Harder}{article}{ author={Harder, Gunter}, title={Eisenstein cohomology of arithmetic groups. {T}he case {GL}$_2$}, date={1987}, journal={Invent.
Math.}, volume={89}, number={1}, pages={37\ndash 118}, } \bib{HLTT}{article}{ author={Harris, Michael}, author={Lan, Kai-Wen}, author={Taylor, Richard}, author={Thorne, Jack}, title={On the rigid cohomology of certain {S}himura varieties}, date={2016}, journal={Res. Math. Sci.}, volume={3}, number={37}, pages={308}, } \bib{Iyengar/Khare/Manning:2022a}{article}{ author={{Iyengar}, Srikanth~B.}, author={{Khare}, Chandrashekhar~B.}, author={{Manning}, Jeffrey}, title={{Congruence modules and the Wiles-Lenstra-Diamond numerical criterion in higher codimensions}}, date={2022-06}, journal={arXiv e-prints}, pages={arXiv:2206.08212}, eprint={2206.08212}, } \bib{Mazur}{article}{ author={Mazur, Barry}, title={Modular curves and the {E}isenstein ideal}, date={1977}, journal={Publ. Math. Inst. Hautes \'Etudes Sci.}, volume={47}, pages={33\ndash 186}, } \bib{Taylor/Wiles:1995}{article}{ author={Taylor, Richard}, author={Wiles, Andrew}, title={Ring-theoretic properties of certain {H}ecke algebras}, date={1995}, ISSN={0003-486X}, journal={Ann. of Math. (2)}, volume={141}, number={3}, pages={553\ndash 572}, url={http://dx.doi.org/10.2307/2118560}, review={\MR{1333036}}, } \bib{Wiles:1995}{article}{ author={Wiles, Andrew}, title={Modular elliptic curves and {F}ermat's last theorem}, date={1995}, ISSN={0003-486X}, journal={Ann. of Math. (2)}, volume={141}, number={3}, pages={443\ndash 551}, url={https://doi.org/10.2307/2118559}, review={\MR{1333035}}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} Magnetism and superconductivity are generally thought to be mutually exclusive. Recent discoveries of exotic superconductors, in which superconductivity coexists with ferromagnetism or antiferromagnetism, have challenged this long-standing belief.\cite{sax00,pfl01,par06} However, it has been known for some time that under certain circumstances conventional superconducting order can coexist with paramagnetic order in high magnetic fields. Since the 1960s this state of coexistence has become known as the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state.\cite{ful64,lar64} More recently, it is thought to have been observed in the heavy-fermion superconductor CeCoIn$_5$.\cite{bia03,rad03,kak05} Here, we study the Pauli paramagnetic depairing effect of a magnetic field in a $d$-wave superconductor. We use the weak-coupling Bardeen-Cooper-Schrieffer theory of superconductivity with quasiparticle interactions as described by Landau's theory of Fermi liquids. The FFLO state of a spin-singlet superconductor in high magnetic fields appears as a result of the intricate interplay of two effects: (1) the loss of superconducting condensation energy in a magnetic field when Cooper pairs (with antiparallel spins) are depaired, and (2) the gain of magnetic energy from spin-polarizing quasiparticles via the Zeeman effect. As a result, in high fields a spatially nonuniform superconducting state coexists with pockets of spin-polarized quasiparticles localized at the zeros of the oscillating order parameter in real space. We address the modifications of the FFLO state by Fermi-liquid (FL) interactions. Our analysis is for quasi-two-dimensional (quasi-2D) fermionic systems with a cylindrical Fermi surface and the magnetic field applied perpendicular to the axis of the cylinder. In this geometry we can neglect any orbital effects of the magnetic field on the superconducting condensate. To further simplify our analysis, we restrict our study to FFLO states with 1D spatial modulations of the order parameter and neglect any low-temperature transition to a FFLO state with 2D modulations.\cite{shi98} The effects of FL interactions in quasi-2D $s$-wave superconductors were first studied by Burkhardt and Rainer within quasiclassical theory.\cite{bur94} They reported a considerable change of the standard FFLO phase diagram when tuning the FL parameter $F^a_0$. Since the underlying physics of Pauli depairing is the same for all spin-singlet superconductors, we expect similar new phenomena to occur for $d$-wave pairing states. However, we anticipate additional effects due to gap nodes and the associated spin-polarized nodal quasiparticles. Here, we extend our earlier work\cite{vor05b} by including many-body interactions in the form of FL effects, which enter the quasiclassical Eilenberger equation,\cite{eil68,lar68} \be [ i\vare_m \widehat{\tau}_3 - \widehat{v}_Z - \whs_\sm{FL} - \whDelta , \whg ] + i\vv_f \cdot \grad \, \whg = 0 \ , \label{eq:eilFL} \ee with Matsubara frequency $\vare_m$ and quasiclassical Green's function $\whg$, through the FL {\it dressed} Zeeman term, $\widehat{v}_Z$, and the FL self-energy, $\whs_\sm{FL}$,\cite{ale85,ser83} \be \widehat{v}_Z+\whs_\sm{FL} = \left( \begin{array}{cc} \vb \cdot \vsigma & 0 \\ 0 & \vb \cdot \vsigma^* \end{array} \right) \,, \quad \vb = \mu \vB_0/(1+F^a_0) + \vnu \,. \ee The Pauli matrices $\sigma_i$ describe the coupling of quasiparticle spins to the FL {\it dressed} external field $\vB_0/(1+F_0^a)$ and the internal exchange field $\vnu$.
The latter satisfies the self-consistency condition given by the spin part of the diagonal component of $\whg$, \be \label{eq:nu} \vnu(\vR) = A^a_0 \, T\sum_{\vare_m} \int d\hat{\vp}' \, \vg(\vR, \hat{\vp}'; \vare_m) \,. \ee $A^a_0$ is the isotropic channel of the antisymmetric part of the Landau interaction $A^a(\hat{\vp}, \hat{\vp}')$. The Landau parameter $A^a_0$ is related to the quasiparticle FL parameter $F^a_0$ through $A^a_0 = F^a_0/(1+F^a_0)$. $\mu = (g/2)|\mu_B|$ is the magnetic moment of the electron; the $g$ factor of a free electron is $g=2$, but here $g$ is kept as a free parameter. \section{Results and Discussion} \begin{figure}[t] \centerline{\includegraphics[height=55mm]{./PD_F0.5_F0.0.eps}} \caption{\label{fig:FL_0.5_0.0} (Color online) The phase diagram of a 2D $d_{x^2-y^2}$-wave superconductor for $F_0^a = 0.5$ (left panel) and $F_0^a = 0.0$ (right panel). For positive $F_0^a$ the LO state is stabilized over a wider range of fields. Note that the energetically unphysical Pauli-limited transition is first order (dot-dashed black line) at low temperatures, $0<T<T_P<T_{FFLO} \approx 0.56 T_c$. Above $T_P \approx 0.4 T_c$ the instability would become second order (dotted line). Without Fermi-liquid effects, $T_P=T_{FFLO}$. At the lower critical field $B_{c1}$ a LO state with modulation $\vq$ along the nodal directions $(110)$ is stabilized (solid magenta line). At even higher fields a LO state with $\vq \parallel (100)$ becomes stable (dashed green line). } \end{figure} In Figs.~\ref{fig:FL_0.5_0.0} and \ref{fig:FL_F-0.5} we show the computed phase diagrams of a 2D $d$-wave superconductor for three different strengths of the FL parameter $F^a_0$, ranging from negative to positive. The evolution of the $d$-wave phase diagram with $F_0^a$ is similar to the $s$-wave case.\cite{bur94} We determine the order of a phase transition by calculating the jump in the spin magnetization (the density of the magnetic moment),\cite{ale85} \be \vM(\vR) = \frac{2\mu N_\sm{F}}{1+F_0^a} \left( \mu \vB_0 - T\sum_{\vare_m} \int d\hat{\vp}' \, \vg(\vR, \hat{\vp}'; \vare_m) \right) \,, \label{eq:mag} \ee across the transition line. Simultaneously, we check it by directly evaluating the free energy. A discontinuity of the magnetization, $\vM ={\partial (F/V)}/{\partial \vB}$, defines a first-order phase transition, while a kink in $\vM$ (a discontinuity in the susceptibility) defines a second-order transition. We adopt the following notation for drawing transition lines: second-order transitions are shown by solid lines, while first-order transitions are shown by dashed lines. For comparison, the unphysical part of the normal (N) to uniform superconducting (USC) state transition (Pauli limited) is shown by a thin dot-dashed line inside the Larkin-Ovchinnikov (LO) phase, as is the corresponding second-order phase transition between the USC and LO phases (solid magenta line). \begin{figure}[t] \centerline{\includegraphics[height=55mm]{./PD_F-0.5_R2.eps}} \caption{\label{fig:FL_F-0.5} (Color online) The phase diagram of a 2D $d$-wave superconductor for $F_0^a = -0.5$ with $T_{FFLO} \ll T_P \approx 0.75 T_c$. Note the first-order LO-N transition for $T_* < T < T_{FFLO}$. Inset: sketch of the evolution of the order parameter near the $B_{c1}$ transition from the uniform (USC) to the periodic (PER) LO solution at $T/T_c=0.15$. In a narrow wedge of magnetic fields single (SDW) and double (DDW) domain wall solutions are favored over either the USC or PER states.
} \end{figure} \paragraph{Second-order transition line at $B_{c2}$:} First, we consider the second-order instability line of the upper critical field $B_{c2}$ from the N state into the USC state, or from the N state into the spatially nonuniform (periodic) FFLO state with an order parameter $\Delta(\vR) \sim \exp(i\vq\cdot\vR)$ or $\Delta(\vR) \sim \cos \,\vq\cdot\vR$. The phase transition can be obtained by linearizing the Eilenberger equation in $\Delta$. Near a second-order transition $\vnu$ is zero to linear order. Thus, the linearized gap equation for $\Delta$ is identical to that of the $F_0^a=0$ case if one replaces $B_0$ with $B_0/(1+F_0^a)$. So one obtains the second-order normal-state instability line from the known solution\cite{shi98,shi97,mak96,yan98,vor05b} by simple scaling, $ \mu B_{c2}(T;F_0^a) = \mu B_{c2}(T;0) (1+F_0^a)$. \paragraph{First-order transition line at $B_{c2}$:} For first-order transitions, we must solve the general expressions (\ref{eq:eilFL})--(\ref{eq:nu}), which are nonlinear in the mean fields, and calculate the corresponding Green's functions and free energy. However, the calculation of the Pauli-limited transition line from the normal into the uniform state is straightforward, since we already know the general form of $\whg$ for a uniform superconductor.\cite{vor05b} At $T=0$, the free energy density can be expressed in a very intuitive way, $\Del F/V = -{1\over 2} N_\sm{F} \langle |\Delta(\hat{\vp})|^2 \rangle_\sm{FS} + {1\over 2} \Del \vM \cdot \vB_0 $, similar to the result by Clogston,\cite{clo62} where $\Del \vM = \vM_N - \vM$ and $N_\sm{F}$ is the density of states per spin at the Fermi energy. It follows that the gain in condensation energy comes at the expense of the magnetic energy of the spin-polarized quasiparticles, which is proportional to the difference of the spin magnetization between the N and USC states. In the $s$-wave superconductor in the absence of the Meissner effect, the electron magnetization $\vM$ vanishes at $T=0$, and the Pauli-limited field is $\mu B_P = \sqrt{ {1\over 2}(1+F_0^a) \langle |\Delta(\hat{\vp};B_P)|^2 \rangle_\sm{FS}}$. For $d$-wave, $\vM(T=0)$ is nonzero due to nodal quasiparticles, but is reduced from the normal-state magnetization by a fraction $p=|\Del \vM|/|\vM_N|$. Then the right-hand side of the previous equation needs to be divided by $\sqrt{p}$. If many-body interactions, like FL effects, are considered in the N state, then the spin magnetization of an isotropic Pauli paramagnet is given by $\vM_N = \chi_N \, \vB_0 = 2\mu^2 N_\sm{F} \vB_0 / (1+F_0^a)$. Thus, for positive FL parameters $F_0^a$ the normal-state susceptibility $\chi_N$ is suppressed compared to a noninteracting Fermi gas, while for negative $F_0^a$ it is enhanced. Both the FFLO state and the USC state become stable over a wider range of magnetic fields for positive $F_0^a$ (Fig.~\ref{fig:FL_0.5_0.0}), and over a smaller region for negative $F_0^a$ (Fig.~\ref{fig:FL_F-0.5}). This happens because pockets of polarized electrons do not gain as much magnetic energy for $F_0^a>0$ as for $F_0^a=0$, and the opposite holds for $F_0^a<0$. The FFLO state exists for any $F^a_0>0$, but disappears at some critical negative value, when the upper critical field $B_{c2}$ of the FFLO state, $B_{c2}(T;F_0^a) = B_{c2}(T;0) (1+F_0^a)$, drops below the Pauli-limited field, $B_{P}(T;F_0^a) \approx B_{P}(T;0) \sqrt{1+F_0^a}$. At that point the FFLO state becomes unstable against the USC state for any field and temperature.
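The critical negative value can be estimated from the scaling relations alone: setting $B_{c2}(T;0)(1+F_0^a) = B_{P}(T;0)\sqrt{1+F_0^a}$ and solving for $F_0^a$ gives $F_0^a = [B_{P}(T;0)/B_{c2}(T;0)]^2 - 1$. The following minimal Python sketch illustrates this inversion; the zero-interaction field ratio passed to it is an assumed illustrative input, not a value computed in this work. \begin{verbatim}
# Minimal sketch of the scaling estimate for the critical Fermi-liquid
# parameter; the ratio B_P(T;0)/B_c2(T;0) is an assumed input.
def critical_F0a(ratio_BP_to_Bc2):
    # solve (1 + F) * B_c2(T;0) = sqrt(1 + F) * B_P(T;0) for F
    return ratio_BP_to_Bc2**2 - 1.0

# a ratio of ~0.485 reproduces the d-wave value F_0^a ~ -0.765 quoted
# below; a ratio of ~0.71 would give the s-wave value -0.5
print(critical_F0a(0.485))   # -> approximately -0.765
\end{verbatim}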
The numerically determined critical value is $F_0^a \approx -0.765$ for a $d$-wave superconductor, which is lower than that for an $s$-wave superconductor, $F_0^a=-0.5$.\cite{bur94} We see that the FFLO state in a 2D $d$-wave superconductor is more stable against FL effects than in the $s$-wave case discussed in detail by Burkhardt and Rainer.\cite{bur94} Again, this is not completely unexpected, because spin-polarized nodal quasiparticles can gain magnetic energy without the breaking of Cooper pairs, which is unavoidable in a fully gapped superconductor. Further, negative values of $F_0^a$ render part of the LO-N transition, $T_* < T < T_{FFLO}$, first order (Fig.~\ref{fig:FL_F-0.5}). This is in qualitative agreement with the $s$-wave case.\cite{bur94} However, in contrast to the $s$-wave case, we did not detect any first-order transition inside the LO phase between phases with different periods $\vq$. \paragraph{Second-order transition line at $B_{c1}$:} Fig.~\ref{fig:FL_F-0.5} also shows details of the lower critical transition $B_{c1}$ from the USC state to the periodic LO state. We calculate the free energy as a function of the field for four different types of order parameters. We find successive transitions from the USC solution to the single-domain wall (SDW), then to the double-domain wall (DDW), and next to the periodic solution (PER) in a thin but finite wedge of magnetic fields. For $T/T_c=0.15$, we sketch in the inset of Fig.~\ref{fig:FL_F-0.5} the different energetically favorable solutions with their respective transitions in magnetic field. Although the transition from the USC to the SDW state is continuous, there is a sequence of transitions, as domain walls enter the bulk one by one with increasing field. \paragraph{The USC-N transition:} Finally, in Fig.~\ref{fig:hc2} we show the normalized Pauli-limited transition between the normal and uniform superconducting state for several Fermi-liquid parameters. The break in each line from solid to dashed indicates, as before, the change from a second-order to a first-order transition. On the right side of Fig.~\ref{fig:hc2} we show the normalized magnetization in the USC state as a function of field. Note that the magnetization jump, when crossing into the normal state, is larger for negative Fermi-liquid parameters. \begin{figure}[th] \centerline{\includegraphics[height=60mm]{./hc2.mag.m.eps}} \caption{\label{fig:hc2} (Color online) Left panel: The Pauli-limited upper critical field for Fermi-liquid parameters $F_0^a=\{0.5, 0.0, -0.5\}$. Right panel: Normalized uniform magnetization vs.\ field for temperatures $T/T_c=\{0.1, 0.3, 0.5, 0.7, 0.9\}$ as labeled in the bottom window. } \end{figure} \section{Thermodynamic implications} So far we discussed specific results for the phase diagram of a quasi-2D Fermi-liquid model in the superconducting state. As we have seen, the existence and extent of a first-order phase transition between the normal and superconducting state (USC or FFLO) at lower temperatures can be modeled by invoking a negative Fermi-liquid parameter $F_0^a$. On the other hand, in three-dimensional type-II superconductors the first-order transition line can be modified by a combination of Zeeman and orbital depairing. Generally, the existence of a first-order phase transition between the normal and superconducting state is an anomalous phenomenon for strong type-II superconductors.
Since the first-order transition {\it is} seen in CeCoIn$_5$, we want to address the thermodynamic constraints imposed along such a transition, independent of any specific model. This is an important question that needs to be addressed if one wants to compare theory with experiments. Although most experiments report phase diagrams for CeCoIn$_5$ in fair agreement with each other (see Refs.\ \onlinecite{tay02,bia03,kak05,mit06,Curro06}), there is a noticeable variation in the position and sharpness of the first-order transition between the normal and superconducting state. For that reason, we check the internal consistency of independent experiments by a thermodynamic analysis. Along the first-order transition line the free energy is continuous, which results in a generalized Clausius-Clapeyron equation, $\frac{d B_{P}}{d T} = -\frac{ \Del S }{ V \Del M }$, relating the jumps in entropy, $\Del S = S_N - S$, and magnetization, $\Del M = M_N - M$, for a volume $V$. If the magnetization in the superconducting state is reduced by a fraction $0<p<1$ from $M_N$, then $\Del S/V = - p M_N d B_{P}/ d T$. Consequently, the latent heat associated with this transition is $Q=T \Del S$, i.e., $Q/V = p T M_N |d B_{P}/ d T|$. The experiments by Bianchi et al.\cite{bia03} show a value of $\Del S/V_{\rm mol} \approx 200$ mJ/(mol\ K) at $T \approx 0.5$ K, and a measured slope of the upper critical field of $d B_{P}/dT \approx -1.5$ T/K at $T\approx 0.5$ K,\cite{tay02,bia03} whereas Tayama et al.\cite{tay02} report a magnetization jump of $\Del M \approx 0.1\, M_N \approx 80$ mJ/(mol\ T) at $T \approx 0.45$ K. It is obvious that the agreement between the ratio of the discontinuities, $\Del S/(V \Del M) \sim 2.5$ T/K, and the measured slope of $B_{P}$ is poor at $T\approx 0.5$ K (a deviation of $\sim 70$\%), despite good overall agreement between both phase transition lines. The origin of this inconsistency is poorly understood, but it might be related to the nature of the localized $f$ electrons and their contribution to the magnetization. Further entropy and magnetization studies are needed to resolve this open problem. \section{Conclusions} We studied the superconducting phase diagram of quasi-2D $d$-wave superconductors in high magnetic fields in the presence of Fermi-liquid effects. We found that a negative Fermi-liquid parameter $F_0^a$ increases the gain in magnetic energy, while at the same time it reduces the available phase space of the FFLO state. The uniform superconducting and periodic FFLO states compete, and at a critical Fermi-liquid parameter $F_0^a\approx -0.765$ the FFLO state is completely suppressed. We note that in order to explain the high-field phase diagram of the CeCoIn$_5$ superconductor in terms of an FFLO state one needs to go beyond the simplistic Fermi-liquid picture considered here. While we find that the inclusion of Fermi-liquid effects considerably changes the phase diagram, the changes are not consistent with the experimental findings. For example, (1) the magnitude of the calculated critical temperature $T_{P}$, where the Pauli-limited upper critical transition between the uniform superconducting and normal state changes from second to first order, and (2) the corresponding magnetization jump are much larger than those seen in experiment. We also note that the shape of the transition lines of the calculated FFLO state is qualitatively different from experiments.
It is not unreasonable to expect that some of those discrepancies may be overcome by including the effects of impurity scattering,\cite{ada03} antiferromagnetic spin fluctuations, local magnetic moments, or orbital effects. Finally, our analysis of the first-order transition puts stringent constraints on thermodynamic properties and reveals a significant discrepancy between specific heat and magnetization measurements that requires further studies. \section{Acknowledgments} We thank I. Vekhter, C. Capan, R. Movshovich, J. Thompson, and L. Bulaevskii for helpful discussions. We are grateful for the parallel computing resources at T-CNLS and the IC supercomputer facilities at LANL. A. B. V. received funding from the Louisiana Board of Regents for this research. M. J. G. was supported by the U.S. DOE at Los Alamos National Laboratory under the auspices of the NNSA under Contract No. DE-AC52-06NA25396. \bibliographystyle{apsrev}
\section{$O^-(2n,q)$} For more details about the results of this section, one is referred to the paper \cite{DY}. Also, we recommend \cite{ZX} as a general reference for matrix groups over finite fields. Throughout this paper, the following notation will be used:\\ \begin{itemize} \item [] $q = 2^r$ ($r \in \mathbb{Z}_{>0}$),\\ \item [] $\mathbb{F}_q$ = the finite field with $q$ elements,\\ \item [] $Tr A$ = the trace of $A$ for a square matrix $A$,\\ \item [] $^tB$ = the transpose of $B$ for any matrix $B$. \end{itemize}\ Let $\theta^-$ be the nondegenerate quadratic form on the vector space $\mathbb{F}_q^{2n \times 1}$ of all $2n \times 1$ column vectors over $\mathbb{F}_q$, given by \begin{equation}\label{a9} \theta^{-}(\sum_{i=1}^{2n} x_i e^i) = \sum_{i=1}^{n-1} x_{i}x_{n-1+i}+x^{2}_{2n-1}+x_{2n-1}x_{2n}+ax^{2}_{2n}, \end{equation} where $\{e^1={}^t[10\ldots0], e^2={}^t[010\ldots0],\ldots,e^{2n}={}^t[0\ldots01]\}$ is the standard basis of $\mathbb{F}_q^{2n \times 1}$, and $a$ is a fixed element in $\mathbb{F}_q$ such that $z^2+z+a$ is irreducible over $\mathbb{F}_q$, or equivalently $a \in \mathbb{F}_q \backslash \Theta(\mathbb{F}_q)$, where $\Theta(\mathbb{F}_q) =\{ \alpha^{2}+\alpha \mid \alpha \in \mathbb{F}_q \}$ is a subgroup of index 2 in the additive group $\mathbb{F}_q^{+}$ of $\mathbb{F}_q$. Let $\delta_a$ (with $a$ as above) and $\eta$ denote respectively the $2\times 2$ matrices over $\mathbb{F}_q$ given by \begin{equation}\label{a10} \delta_{a}= \begin{bmatrix} 1 & 1 \\ 0 & a \end{bmatrix}, \;\; \eta = \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} . \end{equation} Then the group $O^{-}(2n,q)$ of all isometries of $(\mathbb{F}_q^{2n \times 1}, \theta^{-})$ consists of all matrices \begin{equation}\label{a11} \begin{bmatrix} A & B & e \\ C & D & f \\ g & h & i \end{bmatrix} \quad (A, B, C, D~\textmd{of size}~(n-1) \times (n-1);\; e, f~\textmd{of size}~(n-1) \times 2;\; g, h~\textmd{of size}~2 \times (n-1)) \end{equation} in $GL(2n,q)$ satisfying the relations:\\ \begin{equation*} {}^t A C+{}^t g \delta_a g \;\; \textmd{is alternating}, \end{equation*} \begin{equation*} {}^t B D+{}^t h \delta_a h \;\; \textmd{is alternating}, \end{equation*} \begin{equation}\label{a12} {}^t e f+{}^t i \delta_a i+\delta_a \;\; \textmd{is alternating}, \end{equation} \begin{equation*} {}^t A D+{}^t C B+{}^t g \eta h=1_{n-1}, \end{equation*} \begin{equation*} {}^t A f+{}^t C e+{}^t g \eta i=0, \end{equation*} \begin{equation*} {}^t B f+{}^t D e+{}^t h \eta i=0. \end{equation*} Here an $n \times n$ matrix $(a_{ij})$ is called alternating if \begin{equation*} \begin{cases} a_{ii}=0, & \text{for $1 \leq i \leq n$},\\ a_{ij}= -a_{ji}=a_{ji}, & \text{for $1 \leq i < j \leq n$.} \end{cases} \end{equation*} $P^-=P^-(2n,q)$ is the maximal parabolic subgroup of $O^{-}(2n,q)$ defined by:\\ \begin{align*} P^{-}(2n,q)&= \bigg\{ \left[% \begin{smallmatrix} A & 0 & 0 \\ 0 & {}^{t}A^{-1} & 0 \\ 0 & 0 &i \\ \end{smallmatrix}% \right] \left[% \begin{smallmatrix} 1_{n-1} & B & {}^{t}h^{t}i \eta i \\ 0 & 1_{n-1} & 0 \\ 0 & h & 1_{2} \\ \end{smallmatrix}% \right] \bigg| \substack{ A \in GL(n-1,q),\;\; i \in O^{-}(2,q),\\ \\ {}^{t}B+{}^{t}h \delta_{a}h \;\; \textrm{is alternating}}\bigg\}, \end{align*} where $O^-(2,q)$ is the group of all isometries of $(\mathbb{F}_q^{2 \times 1}, \theta^-)$, with \begin{equation*} \theta^-(x_1 e^1+x_2e^2)=x_1^2+x_1x_2+a x_2^2.
\; \; \; \; \; \textrm{(cf.\;(\ref{a9}))} \end{equation*} One can show that \begin{equation}\label{a13} O^{-}(2,q)=SO^{-}(2,q) \coprod \left[\begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix} \right] SO^{-}(2,q), \end{equation} \begin{align*} SO^{-}(2,q)&= \bigg\{ \begin{bmatrix} d_1& ad_2 \\ d_{2}&d_{1}+d_{2} \\ \end{bmatrix}% \bigg|d^{2}_{1}+d_{1}d_{2}+ad^{2}_{2}=1\bigg\}\\ &= \bigg\{ \begin{bmatrix} d_1& ad_2 \\ d_{2}&d_{1}+d_{2} \\ \end{bmatrix}% \bigg|d_{1}+d_{2}b \in \mathbb{F}_q(b), \; \textrm{with } N_{\mathbb{F}_q(b)/\mathbb{F}_{q}}(d_1+d_{2}b)=1\bigg\}, \end{align*} where $b \in \overline{\mathbb{F}_q}$ is a root of the irreducible polynomial $z^2+z+a$ over $\mathbb{F}_q$. $SO^{-}(2,q)$ is a subgroup of index 2 in $O^{-}(2,q)$, and \begin{equation*} |SO^{-}(2,q)| = q+1, \; |O^{-}(2,q)|=2(q+1). \end{equation*} $SO^{-}(2,q)$ here is defined as the kernel of a certain epimorphism $\delta^-:O^-(2n,q) \rightarrow \mathbb{F}_2^+$ (specialized to $n=1$), to be defined below. The Bruhat decomposition of $O^-(2n,q)$ with respect to $P^-=P^- (2n,q)$ is \begin{equation}\label{a14} O^-(2n,q) = \coprod_{r=0}^{n-1} P^- \sigma_r^-P^-, \end{equation} where \[ \sigma_r^- = \begin{bmatrix} 0 & 0 & 1_r & 0 &0\\ 0 & 1_{n-1-r} & 0 & 0 &0\\ 1_r & 0 & 0 & 0 &0\\ 0 & 0 & 0 & 1_{n-1-r} &0\\ 0 & 0 & 0 &0 &1_2 \end{bmatrix} \in O^-(2n,q). \] For each $r$, with $0\leq r \leq n-1$, put \[ A^-_r = \{ w \in P^-(2n,q) \mid \sigma^-_rw(\sigma_r^-)^{-1} \in P^-(2n,q) \}. \] As a disjoint union of right cosets of $P^-=P^-(2n,q)$, the Bruhat decomposition in (\ref{a14}) can be written as \begin{equation}\label{a15} O^{-}(2n,q)= \coprod_{r=0}^{n-1}P^{-} \sigma _{r}^{-}(A _{r} ^{-} \backslash P^{-}). \end{equation} The order of the general linear group $GL(n,q)$ is given by \begin{equation*} g_n=\prod_{j=0}^{n-1}(q^n-q^j)=q^{\binom{n}{2}}\prod_{j=1}^{n}(q^{j}-1). \end{equation*} For integers $n,r$ with $0 \leq r \leq n$, the $q$-binomial coefficients are defined as \begin{equation}\label{a16} \left[ \substack{n \\ r} \right]_q = \prod_{j=0}^{r-1} (q^{n-j} - 1)/(q^{r-j} - 1). \end{equation} Then, for integers $n,r$ with $0 \leq r \leq n$, we have \begin{equation}\label{a17} \frac{g_n}{g_{n-r} g_r} = q^{r(n-r)}\left[ \substack{n \\ r} \right]_q. \end{equation} In \cite{DY}, it is shown that \begin{equation}\label{a18} \mid A^-_r \mid = 2(q+1)g_r g_{n-1-r} q^{(n-1)(n+2)/2}q^{r(2n-3r-5)/2}, \end{equation} \begin{equation}\label{a19} \mid P^{-}(2n,q) \mid = 2(q+1)g_{n-1}q^{(n-1)(n+2)/2}. \end{equation} So, from (\ref{a17})-(\ref{a19}), we get \begin{equation}\label{a20} \mid A^-_r\backslash P^-(2n,q) \mid = \left[ \substack{n-1\\ r} \right]_q q^{r(r+3)/2}, \end{equation} and \begin{equation}\label{a21} \mid P^-(2n,q)\mid ^{2} \mid A_r^- \mid^{-1} = 2(q+1)q^{n^{2}-n}\prod_{j=1}^{n-1}(q^{j}-1)\left[ \substack{n-1 \\ r} \right]_q q^{{r \choose 2}}q^{2r}. \end{equation} As one consequence of these computations, from (\ref{a15}) and (\ref{a21}), we obtain the order of $O^-(2n,q)$: \begin{align}\label{a22} \begin{split} \mid O^-(2n,q) \mid &= \sum_{r=0}^{n-1} \mid P^-(2n,q)\mid^2 \mid A_r^- \mid ^{-1}\\ &= 2q^{n^2-n}(q^n + 1) \prod_{j=1}^{n-1} (q^{2j} - 1), \end{split} \end{align} where one needs to apply the following $q$-binomial theorem with $x=-q^{2}$: \[ \sum_{r=0}^n \left[ \substack{n \\ r} \right]_q (-1)^r q^{{r \choose 2}} x^r = (x;q)_n, \] with $(x;q)_n = (1-x)(1-qx)\cdots(1-q^{n-1}x)$ ($x$ an indeterminate, $n \in \mathbb{Z}_{>0}$).
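As a consistency check of (\ref{a22}), the following short Python sketch (an illustration added here, not part of the derivation in \cite{DY}) compares the closed-form order with the Bruhat sum built from (\ref{a21}) for small $n$ and $q$: \begin{verbatim}
def qbinom(n, r, q):
    # Gaussian binomial coefficient, cf. (16); the quotient is exact
    num, den = 1, 1
    for j in range(r):
        num *= q**(n - j) - 1
        den *= q**(r - j) - 1
    return num // den

def order_closed_form(n, q):
    # |O^-(2n,q)| as in (22)
    val = 2 * q**(n*n - n) * (q**n + 1)
    for j in range(1, n):
        val *= q**(2*j) - 1
    return val

def order_bruhat_sum(n, q):
    # sum over r of |P^-|^2 |A_r^-|^{-1}, cf. (21)
    pref = 2 * (q + 1) * q**(n*n - n)
    for j in range(1, n):
        pref *= q**j - 1
    return pref * sum(qbinom(n - 1, r, q) * q**(r*(r - 1)//2) * q**(2*r)
                      for r in range(n))

for n in (2, 3, 4):
    for q in (2, 4, 8):
        assert order_closed_form(n, q) == order_bruhat_sum(n, q)
\end{verbatim}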
Related to the Clifford algebra $C(\mathbb{F}_q^{2n \times 1},\theta^{-})$ of the quadratic space $(\mathbb{F}_q^{2n \times 1}, \theta^{-})$, there is an epimorphism of groups $\delta^-:O^-(2n,q) \rightarrow \mathbb{F}_2^+$, which is given by \begin{equation*} \delta^-(w)=Tr({}^t h \delta_a g )+Tr(e \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}% {}^t f )+Tr(B \;{}^t C)+{}^t i^2 \delta_a i^1, \end{equation*} where $\delta_a$ is as in (\ref{a10}), $i=[i^1 i^2]$ with $i^1,i^2$ denoting the first and second columns of $i$, and \begin{equation*} w = \begin{bmatrix} A & B & e \\ C & D & f \\ g & h &i \end{bmatrix}% \in O^{-}(2n,q) \;\;\;\; \textrm{(cf. (\ref{a11}),\;(\ref{a12}))}. \end{equation*} In order to describe $SO^-(2n,q)$, we introduce a subgroup $Q^-(2n,q)$ of index 2 in $P^-(2n,q)$, defined by: \begin{align*} Q^{-}&=Q^{-}(2n,q)\\ & =\bigg\{\left[% \begin{smallmatrix} A & 0 & 0 \\ 0 & {}^{t}A^{-1} & 0 \\ 0 & 0 &i \\ \end{smallmatrix}% \right] \left[% \begin{smallmatrix} 1_{n-1} & B & {}^{t}h^{t}i \eta i \\ 0 & 1_{n-1} & 0 \\ 0 & h & 1_{2} \\ \end{smallmatrix}% \right] \big| \substack{A \in GL(n-1,q), \; i \in SO^{-}(2,q),\\ \\ {}^{t}B+{}^{t}h \delta_{a}h \;\; \textrm{is alternating}}\bigg\}. \end{align*} Also, for each $r$, with $0 \leq r \leq n-1$, we define \begin{equation*} B_r^-=\{w \in Q^-(2n,q)| \; \sigma_r^- w(\sigma_r^-)^{-1} \in P^- (2n,q)\}, \end{equation*} which is a subgroup of index 2 in $A_{r}^{-}$. The decompositions in (\ref{a14}) and (\ref{a15}) can be modified so as to give: \begin{equation*} O^{-}(2n,q)=\coprod_{r=0}^{n-1}P^- \sigma_r^{-}Q^{-}, \end{equation*} \begin{equation}\label{a23} O^{-}(2n,q)=\coprod_{r=0}^{n-1}P^- \sigma_r^{-}(B_r^- \backslash Q^-), \end{equation} and \begin{equation*} |B_r^- \backslash Q^-|=|A_r^{-} \backslash P^-| \;\;\;\;\; \textrm{(cf. \;(\ref{a20}))}. \end{equation*} $SO^{-}(2n,q):=\mathrm{Ker}\,\delta^{-}$ is given by \begin{align}\label{a24} \begin{split} SO^{-}(2n,q)= \; &(\coprod_{0 \leq r \leq n-1,r \; \textrm{even}} Q^- \sigma_r^{-}( B_r^- \backslash Q^-))\\ &\coprod(\coprod_{0 \leq r \leq n-1, r \; \textrm{odd}} \rho Q^- \sigma_r^{-}( B_r^- \backslash Q^- )), \end{split} \end{align} with \[ \rho= \begin{bmatrix} 1_{n-1} & 0 & 0 & 0 \\ 0 & 1_{n-1} & 0 & 0 \\ 0 & 0 & 1 &1 \\ 0 & 0 & 0 & 1 \end{bmatrix}% \in P^-(2n,q), \] and \begin{equation*} |SO^{-}(2n,q)|=q^{n^{2}-n}(q^{n}+1)\prod_{j=1}^{n-1}(q^{2j}-1) \;\;\;\;\; \textrm{(cf. \;(\ref{a22}))}. \end{equation*} \section{Gauss sums for $O^{-}(2n,q)$} The following notation will be used throughout this paper: \begin{gather*} tr(x)=x+x^2+\cdots+x^{2^{r-1}}, \ \text{the trace function} \ \mathbb{F}_q \rightarrow \mathbb{F}_2,\\ \lambda(x) = (-1)^{tr(x)}, \ \text{the canonical additive character of} \ \mathbb{F}_q. \end{gather*} Then any nontrivial additive character $\psi$ of $\mathbb{F}_q$ is given by $\psi(x) = \lambda(ax)$, for a unique $a \in \mathbb{F}_q^*$. For any nontrivial additive character $\psi$ of $\mathbb{F}_q$ and $a \in \mathbb{F}_q^*$, the Kloosterman sum $K_{GL(t,q)}(\psi ; a)$ for $GL(t,q)$ is defined as \begin{equation}\label{a25} K_{GL(t,q)}(\psi ; a) = \sum_{w \in GL(t,q)} \psi(Trw + a~Trw^{-1}). \end{equation} Observe that, for $t=1$, $K_{GL(1,q)}(\psi ; a)$ is the Kloosterman sum $K(\psi ; a)$. For the Kloosterman sum $K(\psi ; a)$, we have the Weil bound (cf. \cite{RH}) \begin{equation}\label{a26} \mid K(\psi ; a) \mid \leq 2\sqrt{q}.
\end{equation} In \cite{D1}, it is shown that $K_{GL(t,q)}(\psi ; a)$ ~satisfies the following recursive relation: for integers $t \geq 2$, ~$a \in \mathbb{F}_q^*$ , \begin{multline}\label{a27} K_{GL(t,q)}(\psi ; a) = q^{t-1}K_{GL(t-1,q)}(\psi ; a)K(\psi ;a)\\ + q^{2t-2}(q^{t-1}-1)K_{GL(t-2,q)}(\psi ; a), \end{multline} where we understand that $K_{GL(0,q)}(\psi ; a)=1$ . From (\ref{a27}), in \cite{D1} an explicit expression of the Kloosterman sum for $GL(t,q)$ was derived.\\ \begin{theorem}\label{B}(\cite{D1}): For integers $t \geq 1$, and $a \in \mathbb{F}_q^*$, the Kloosterman sum $K_{GL(t,q)}(\psi ; a)$ is given by \begin{multline*} K_{GL(t,q)}(\psi ; a)=q^{(t-2)(t+1)/2} \sum_{l=1}^{[(t+2)/2]} q^l K(\psi;a)^{t+2-2l} \sum \prod_{\nu=1}^{l-1} (q^{j_\nu -2\nu}-1), \end{multline*} where $K(\psi;a)$ is the Kloosterman sum and the inner sum is over all integers $j_1,\ldots,j_{l-1}$ satisfying $2l-1 \leq j_{l-1} \leq j_{l-2} \leq \cdots \leq j_1 \leq t+1$. Here we agree that the inner sum is $1$ for $l=1$. \end{theorem} \begin{proposition}:\label{C} Let $\psi$ be a nontrivial additive character of $\mathbb{F}_q$. Then \begin{flushleft} \begin{equation}\label{a28} (a) \; \sum _{i \in SO^-(2,q)}\psi(Tr i )=-K( \psi;1), \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \end{equation} \begin{equation}\label{a29} (b) \; \sum_{ i \in O^-(2,q)} \psi(Tr i )=-K( \psi;1)+q+1. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \end{equation} \end{flushleft} \end{proposition} \proof From (\ref{a13}), \[ \sum_{i \in O^-(2,q)}\psi(Tr i )=\sum_{i \in SO^-(2,q)} \psi( Tr i )+ \sum_{i \in SO^-(2,q )} \psi(Tr \left[% \begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix}% \right] i), \] the first and second sums of which are respectively equal to $-K(\psi;1)$ and $q+1$ (\cite{D4}, Prop. 3.1). \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad $\square$\\ \begin{proposition}(\cite{DY}, Prop. 4.4): Let $\psi$ be a nontrivial additive character of $\mathbb{F}_q$. For each positive integer $r$, let $\Omega_{r}$ be the set of all $ r \times r$ nonsingular symmetric matrices over $\mathbb{F}_q$. Then the $b_{r}(\psi)$ defined below is independent of $\psi$, and is equal to: \begin{equation}\label{a30} b_r=b_r( \psi )= \sum _{B \in \Omega_r } \sum_{ h \in \mathbb{F}_q^{r \times 2}} \psi(Tr \delta_a{}^t h B h) \end{equation} \begin{equation*} = \begin{cases} q^{r(r+ 6)/4}\prod_{j=1}^{r/2}(q^{2j-1}-1), & \text{for $r$ even},\\ -q^{(r^2 +4r-1)/4 }\prod_{j=1}^{(r+1)/2}(q^{2j-1}-1), & \text{for $r$ odd.} \end{cases} \end{equation*} In Section 5 of \cite{DY}, it is shown that the Gauss sums for $O^{-}(2n,q)$ and $SO^{-}(2n,q)$ are respectively given by (cf. 
(\ref{a16}), (\ref{a23})-(\ref{a25}), (\ref{a30})): \begin{align*} \begin{split} &\sum_{w \in O^-(2n,q)} \psi(Tr w)\\ &=\sum_{r=0}^{n-1 }|B_r^- \backslash Q^-| \sum _{ w \in P^-} \psi(Tr w \sigma_r^-)\\ &=q^{(n-1)(n + 2)/2}(-K( \psi;1)+q+1)\sum_{ r=0}^{ n-1} \left[\substack{n-1\\r}\right]_q q^{r(2n-r-3)/2}b_rK_{GL(n-1-r,q)}(\psi;1),\\ &\sum_{w \in SO^-(2n,q)}\psi(Tr w)\\ &=\sum_{ 0 \leq r \leq n-1, \, r \ \textrm{even} }|B_r^- \backslash Q^-| \sum_{ w \in Q^- } \psi ( Tr w \sigma_r^-)\\ &+ \sum_{ 0 \leq r \leq n-1, \, r \ \textrm{odd} }|B_r^- \backslash Q^-| \sum_{ w \in Q^- } \psi ( Tr \rho w \sigma_r^- ) \end{split} \end{align*} \begin{align}\label{a31} \begin{split} =&q^{(n-1)(n+2)/2} \{-K( \psi;1) \sum_{0 \leq r \leq n-1, \, r \ \textrm{even}}^{}\left[\substack{n-1\\r}\right]_q q^{r(2n-r-3)/2}b_{r}K_{GL(n-1-r,q)}( \psi ;1) \\ &+{(q+1)\sum _{0 \leq r \leq n-1, \, r \ \textrm{odd}} \left[\substack{n-1\\r}\right]_q q^{r(2n-r-3)/2}b_{r}K_{GL(n-1-r,q)}(\psi;1)}\}. \end{split} \end{align} \end{proposition} Here $\psi$ is any nontrivial additive character of $\mathbb{F}_q$. For our purposes, we only need the following three expressions of the Gauss sums, for~$SO^-(2,q)$, $O^-(2,q)$, and $SO^-(4,q)$, so we state them separately as a theorem (cf. (\ref{a28}), (\ref{a29}), (\ref{a31})). Also, for ease of notation, we introduce \begin{equation*} G_1(q) = SO^-(2,q), \; G_2(q) = O^-(2,q), \; G_3(q) = SO^-(4,q). \end{equation*} \begin{theorem}:\label{E} Let $\psi$ be any nontrivial additive character of $\mathbb{F}_q$. Then we have \begin{align*} & \sum_{w \in G_1(q)} \psi(Tr w) =- K(\psi ; 1),\\ & \sum_{w \in G_2(q)} \psi(Tr w) =- K(\psi ; 1) + q + 1,\\ & \sum_{w \in G_3(q)} \psi(Tr w) =- q^2(K(\psi ; 1)^2 +q^3-q). \end{align*} \end{theorem}\ \begin{proposition}(\cite{D3}):\label{F} For $n=2^s$ ($s \in \mathbb{Z}_{\geq 0}$), and $\psi$ a nontrivial additive character of $\mathbb{F}_q$, \[ K(\psi;a^n) = K(\psi;a). \] \end{proposition} For the next corollary, we need a result of Carlitz. \begin{theorem}\label{G}(\cite{L2}): For the canonical additive character $\lambda$ of $\mathbb{F}_q$, and $a \in \mathbb{F}_{q}^{*}$, \begin{equation}\label{a32} K_{2}(\lambda;a) = K(\lambda;a)^{2}-q. \end{equation} \end{theorem} The next corollary follows from Theorems \ref{E} and \ref{G}, Proposition \ref{F}, and a simple change of variables.\\ \begin{corollary}:\label{H} Let $\lambda$ be the canonical additive character of $\mathbb{F}_q$, and let $a \in \mathbb{F}_q^*$. Then we have \begin{align} \sum_{w \in G_1(q)} \lambda(aTrw) &= -K(\lambda;a),\\ \sum_{w \in G_2(q)} \lambda(aTrw) &= -K(\lambda;a)+q+1,\\ \sum_{w \in G_3(q)} \lambda(aTrw) &= -q^2(K(\lambda;a)^2+q^3-q)\\ &= -q^2(K_2(\lambda;a)+q^3). \end{align} \end{corollary}\ \begin{proposition}\label{I}(\cite{D3}): Let $\lambda$ be the canonical additive character of $\mathbb{F}_q$, $m \in \mathbb{Z}_{> 0}$, $\beta \in \mathbb{F}_q$. Then \begin{align}\label{a37} \begin{split} & \sum_{a \in \mathbb{F}_q^*} \lambda(-a \beta) K_m(\lambda;a) \\ &= \left\{% \begin{array}{ll} qK_{m-1}(\lambda;\beta^{-1})+(-1)^{m+1}, & \hbox{if $\beta \neq 0$,} \\ (-1)^{m+1}, & \hbox{if $\beta = 0$,} \\ \end{array}% \right. \end{split} \end{align} with the convention $K_0(\lambda;\beta^{-1})=\lambda(\beta^{-1})$. \end{proposition} Let $G(q)$ be one of the finite classical groups over $\mathbb{F}_q$. Then we put, for each $\beta \in \mathbb{F}_q$, \[ N_{G(q)}(\beta) = \mid \{ w \in G(q) \mid Tr(w) = \beta \} \mid .
\] Then it is easy to see that \begin{equation}\label{38} qN_{G(q)}(\beta) = \mid G(q) \mid + \sum_{a \in \mathbb{F}_q^*} \lambda(-a \beta)\sum_{w \in G(q)} \lambda(a ~Trw). \end{equation} For brevity, we write \begin{equation}\label{39} n_1(\beta) = N_{G_1(q)}(\beta), \; n_2(\beta) = N_{G_2(q)}(\beta), \; n_3(\beta) = N_{G_3(q)}(\beta). \end{equation} Using (33)--(38), one derives the following. \begin{proposition}:\label{J} With $n_1(\beta), n_2(\beta), n_3(\beta)$ as in (39), we have \begin{align} & n_1(\beta) = \left\{% \begin{array}{ll} 1, & \hbox{if $\beta = 0$,} \\ 2, & \hbox{if $\beta \neq 0$ with $tr(\beta^{-1}) = 1$,} \\ 0, & \hbox{if $\beta \neq 0$ with $tr(\beta^{-1}) = 0$,} \\ \end{array}% \right. \\ & n_2(\beta) = \left\{% \begin{array}{ll} q+2, & \hbox{if $\beta = 0$,} \\ 2, & \hbox{if $\beta \neq 0$ with $tr(\beta^{-1}) = 1$,} \\ 0, & \hbox{if $\beta \neq 0$ with $tr(\beta^{-1}) = 0$,} \\ \end{array}% \right.\\ & n_3(\beta) = \left\{% \begin{array}{ll} q^4, & \hbox{if $\beta = 0$,} \\ q^2\{q^3+q^2-K(\lambda;\beta^{-1})\}, & \hbox{if $\beta \neq 0$.} \\ \end{array}% \right. \end{align} \end{proposition} \section{Construction of codes} Let \begin{equation}\label{a43} N_1=|G_1(q)|=q+1, \; N_2=|G_2(q)|=2(q+1),\; N_3=|G_3(q)|=q^2(q^4-1). \end{equation} Here we will construct three binary linear codes, $C(G_1(q))$ of length $N_1$, $C(G_2(q))$ of length $N_2$, and $C(G_3(q))$ of length $N_3$, respectively associated with the orthogonal groups $G_1(q)$, $G_2(q)$, and $G_3(q)$. By abuse of notation, for $i=1,2,3$, let $g_1, g_2,\ldots,g_{N_i}$ be a fixed ordering of the elements of the group $G_i(q)$. Also, for $i=1,2,3$, we put \[ v_i = (Trg_1,Trg_2,\ldots,Trg_{N_i}) \in \mathbb{F}_q^{N_i}. \] Then, for $i=1,2,3$, the binary linear code $C(G_i(q))$ is defined as \begin{equation}\label{a44} C(G_i(q)) = \{ u \in \mathbb{F}_2^{N_i} \mid u\cdot v_i = 0 \}, \end{equation} where the dot denotes the usual inner product in $\mathbb{F}_q^{N_i}$. The following theorem of Delsarte is well known.\\ \begin{theorem}\label{K}(\cite{FN}): Let $B$ be a linear code over $\mathbb{F}_q$. Then \[ (B|_{\mathbb{F}_2})^\bot = tr(B^\bot). \] \end{theorem}\ In view of this theorem, the dual $C(G_i(q))^\bot$ $(i=1,2,3)$ is given by \begin{equation}\label{a45} C(G_i(q))^\bot = \{ c(a) = (tr(aTrg_1),\ldots,tr(aTrg_{N_i})) \mid a \in \mathbb{F}_q \}. \end{equation} Let $\mathbb{F}_2^+,\mathbb{F}_q^+$ denote the additive groups of the fields $\mathbb{F}_2,\mathbb{F}_q$, respectively. Then, with $\Theta(x)=x^2+x$ denoting the Artin-Schreier operator in characteristic two, we have the following exact sequence of groups: \begin{equation*} 0 \rightarrow \mathbb{F}_2^+ \rightarrow \mathbb{F}_q^+ \rightarrow \Theta(\mathbb{F}_q) \rightarrow 0. \end{equation*} Here the first map is the inclusion and the second one is given by $x \mapsto \Theta(x) = x^2+x$. So \begin{equation}\label{a46} \Theta(\mathbb{F}_q) = \{\alpha^2 + \alpha \mid \alpha \in \mathbb{F}_q \}, \ \text{and} \ \ [\mathbb{F}_q^+ : \Theta(\mathbb{F}_q)] = 2. \end{equation} \begin{theorem}\label{L}(\cite{D3}): Let $\lambda$ be the canonical additive character of $\mathbb{F}_q$, and let $\beta \in \mathbb{F}_q^*$.
Then \begin{equation*} (a) \sum_{\alpha \in \mathbb{F}_q-\{0,1\}}\lambda(\frac{\beta}{\alpha^2+\alpha})=K(\lambda;\beta)-1, \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \end{equation*} \begin{equation}\label{a47} (b)\sum_{\alpha \in \mathbb{F}_q}\lambda(\frac{\beta}{\alpha^2+\alpha+b})=-K(\lambda;\beta)-1, \qquad \qquad \qquad \qquad \qquad \qquad \qquad \end{equation} if $x^2+x+b$ ($b\in \mathbb{F}_q$) is irreducible over $\mathbb{F}_q$, or equivalently if $b \in \mathbb{F}_q\setminus\Theta(\mathbb{F}_q)$ (cf.\;(\ref{a46})). \end{theorem} \begin{theorem}:\label{M} For any $q=2^r$, the map $\mathbb{F}_q \rightarrow C(G_{i}(q))^{\bot}$ ($a \mapsto c(a)$), for $i=1,2,3$, is an $\mathbb{F}_2$-linear isomorphism. \end{theorem} \proof Since the $G_2(q)$ case can be handled in exactly the same manner as the $G_1(q)$ case, we treat only the $G_1(q)$ and $G_3(q)$ cases. Let $i=1$. The map is clearly $\mathbb{F}_2$-linear and surjective. Let $a$ be in the kernel of the map. Then $tr(a Tr g)=0$, for all $g \in G_1(q)$. Since $n_1(\beta)=|\{g \in G_1(q)\mid Tr(g)= \beta \}|=2$, for all $\beta \in \mathbb{F}_q^{*}$ with $tr(\beta^{-1})=1$ (cf. (40)), we get $tr(a \beta)=0$, for all $\beta \in \mathbb{F}_q^{*}$ with $tr(\beta^{-1})=1$. Let $b \in \mathbb{F}_q \backslash \Theta (\mathbb{F}_q)$. Then $tr(\gamma)=1 \Leftrightarrow \gamma=\alpha^2+\alpha+b$, for some $\alpha \in \mathbb{F}_q$. As $z^2+z+b$ is irreducible over $\mathbb{F}_q$, $\alpha^2+\alpha+b \neq 0$, for all $\alpha \in \mathbb{F}_q$, and hence $tr(\frac{a}{\alpha^2+\alpha+b})=0$, for all $\alpha \in \mathbb{F}_q$. So $\sum_{\alpha \in \mathbb{F}_q } \lambda(\frac{a}{\alpha^2+\alpha+b})=q$. Assume now that $a \neq 0$. Then, from (\ref{a26}) and (\ref{a47}), \[ q=-K(\lambda; a)-1 \leq 2 \sqrt{q}-1. \] But this is impossible, since $x > 2 \sqrt{x}-1$, for $x \geq 2$. Now, let $i=3$. Again, the map is $\mathbb{F}_2$-linear and surjective. From (42), and using the Weil bound in (\ref{a26}), we see that $n_3(\beta)=|\{g \in G_3(q) \mid Tr(g)=\beta\}|>0$, for all $\beta \in \mathbb{F}_q$. Let $a$ be in the kernel. Then $tr(aTr g)=0$, for all $g \in G_3(q)$, and hence $tr(a \beta)=0$, for all $\beta \in \mathbb{F}_q$. This implies that $a=0$, since otherwise $tr: \mathbb{F}_q \rightarrow \mathbb{F}_2$ would be the trivial map. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad $\square$\\ \section{Power moments of Kloosterman sums} In this section, we will be able to find, via the Pless power moment identity, a recursive formula for the power moments of Kloosterman sums in terms of the frequencies of weights in $C(G_i(q))$, for each $i=1,2,3$. \begin{theorem}\label{N}(Pless power moment identity): Let $B$ be a $q$-ary $[n,k]$ code, and let $B_{i}$ (resp. $B_{i}^{\bot}$) denote the number of codewords of weight $i$ in $B$ (resp. in $B^{\bot}$). Then, for $h=0,1,2, \cdots$, \begin{equation}\label{a48} \sum_{j=0}^{n}j^{h}B_{j}=\sum_{j=0}^{min \{ n,h \}}(-1)^{j}B_{j} ^{\bot} \sum_{t=j}^{h} t! S(h,t)q^{k-t}(q-1)^{t-j}\binom{n-j}{n-t}, \end{equation} where $S(h,t)$ is the Stirling number of the second kind defined in (3). \end{theorem} Recall that, for $i=1, 2, 3$, every codeword in $C(G_i(q))^\bot$ can be written as $c(a)$, for a unique $a \in \mathbb{F}_q$ (cf. Theorem \ref{M}, (45)). \begin{lemma}:\label{O} Let $c(a)=(tr(aTrg_1),\cdots,tr(aTr g_{N_i})) \in C(G_i(q))^{\bot}$, for $a \in \mathbb{F}_q^{*}$, and $i=1, 2, 3$.
Then the Hamming weight $w(c(a))$ can be expressed as follows: \begin{equation}\label{a49} (a) \;\;\; \textrm{For } i=1, 2,\quad w(c(a))= \frac{1}{2}(q+1+K(\lambda;a)), \qquad \qquad\qquad\qquad\qquad \end{equation} \begin{align}\label{a50} \begin{split} (b)\;\;\; \textrm{For } i=3,\quad w(c(a))&=\frac{1}{2}q^2(q^4+q^3-q-1+K(\lambda;a)^2)\\ &=\frac{1}{2}q^2(q^4+q^3-1+K_2(\lambda;a)). \qquad\qquad\qquad \;\; \end{split} \end{align} \end{lemma} \proof For $i=1, 2, 3$, \begin{align*} \begin{split} w(c(a))&=\frac{1}{2} \sum_{j=1}^{N_i}(1-(-1)^{tr(aTrg_j)})\\ &=\frac{1}{2}(N_i- \sum_{w \in G_i(q)} \lambda(a Tr w)). \end{split} \end{align*} Our results now follow from (\ref{a43}) and (33)-(36). \qquad\qquad\qquad\qquad \qquad\qquad$\square$\\ Fix $i$ ($i=1, 2, 3$), and let $u=(u_1, \cdots, u_{N_{i}}) \in \mathbb{F}_2^{N_{i}}$, with $\nu_\beta$ 1's in the coordinate places where $Tr(g_j)= \beta$, for each $\beta \in \mathbb{F}_q$. Then we see from the definition of the code $C(G_i(q))$ (cf. (45)) that $u$ is a codeword with weight $j$ if and only if $\sum_{\beta \in \mathbb{F}_{q}} \nu_{\beta}=j$ and $\sum_{\beta \in \mathbb{F}_{q}} \nu_{\beta} \beta =0$ (an identity in $\mathbb{F}_q$). As there are $\prod_{\beta \in \mathbb{F}_q} \binom{n_i(\beta)}{\nu_\beta}$ such codewords with weight $j$, we obtain the following result. \begin{proposition}:\label{P} Let $\{C_{i,j}\}_{j=0}^{N_i}$ be the weight distribution of $C(G_i(q))$, for each $i=1, 2, 3$, where $C_{i,j}$ denotes the frequency of the codewords with weight $j$ in $C(G_i(q))$. Then \begin{equation}\label{a51} C_{i,j}=\sum \prod_{\beta \in \mathbb{F}_q} \binom{n_i(\beta)}{\nu_\beta}, \end{equation} where the sum runs over all the sets of integers $\{\nu_\beta\}_{\beta \in \mathbb{F}_q}$ ($0 \leq \nu_\beta \leq n_i(\beta)$) satisfying \begin{equation}\label{a52} \sum_{\beta \in \mathbb{F}_{q}} \nu_{\beta}=j \; \textrm{ and } \; \sum_{\beta \in \mathbb{F}_{q}} \nu_{\beta} \beta =0. \end{equation} \end{proposition} \begin{corollary}:\label{Q} Let $\{C_{i,j}\}_{j=0}^{N_{i}}$ be the weight distribution of $C(G_i(q))$, for $i=1, 2, 3$. Then, for $i=1, 2, 3$, we have $C_{i,j}=C_{i,N_{i}-j}$, for all $j$ with $0 \leq j \leq N_i$. \end{corollary} \proof Under the replacements $\nu_\beta \rightarrow n_i(\beta)-\nu_\beta$, for each $\beta \in \mathbb{F}_q$, the first sum in (\ref{a52}) is changed to $N_i-j$, while the second sum in (\ref{a52}) and the summands in (\ref{a51}) are left unchanged. Here the second sum in (\ref{a52}) is left unchanged, since $\sum_{\beta \in \mathbb{F}_q}n_i(\beta)\beta=0$, as one can see by using the explicit expressions for $n_i(\beta)$ in (40)-(42). \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad $\square$\\ \begin{theorem}\label{R}(\cite{GJ}): Let $q=2^r$, with $r \geq 2$. Then the range $R$ of $K(\lambda;a)$, as $a$ varies over $\mathbb{F}_q^{*}$, is given by \begin{equation*} R=\{t \in \mathbb{Z} \; \mid \; |t|<2 \sqrt{q}, \; t\equiv -1 \ (mod \; 4) \}. \end{equation*} In addition, each value $t \in R$ is attained exactly $H(t^2-q)$ times, where $H(d)$ is the Kronecker class number of $d$. \end{theorem} Now, we get the following formulas in (\ref{a2}), (\ref{a5}), and (\ref{a8}), by applying the formula in (\ref{a51}) to each $C(G_i(q))$, using the explicit values of $n_i(\beta)$ in (40)-(42), and taking Theorem~\ref{R} into consideration.
\begin{theorem}:\label{S} Let $\{C_{i,j}\}_{j=0}^{N_{i}}$ be the weight distribution of $C(G_i(q))$, for $i=1, 2, 3$. Then \begin{equation*} (a) \;\;\; C_{1,j}=\sum \binom{1}{\nu_0} \prod_{tr( \beta^{-1})=1} \binom{2}{\nu_ \beta} \;\; (j=0,\cdots, N_1), \qquad \qquad\qquad\qquad\qquad\qquad \end{equation*} where the sum is over all the sets of nonnegative integers $\{ \nu_0 \} \cup \{ \nu_ \beta \}_{tr( \beta^{-1})=1}$ satisfying $ \nu_0+ \sum_{tr(\beta^{-1})=1}^{} \nu_\beta=j$ and $\sum_{tr(\beta^{-1})=1}^{} \nu_{\beta} \beta=0$. \begin{equation*} (b) \;\;\; C_{2,j}=\sum \binom{q+2}{\nu_0} \prod_{tr (\beta^{-1})=1} \binom{2}{\nu_\beta} \;\; (j=0,\cdots, N_2),\qquad\qquad\qquad\qquad \;\;\;\;\;\;\;\; \end{equation*} where the sum is over all the sets of nonnegative integers $\{ \nu_0 \} \cup \{ \nu_ \beta \}_{tr( \beta^{-1})=1}$ satisfying $ \nu_0+ \sum_{tr(\beta^{-1})=1}^{} \nu_\beta=j$ and $\sum_{tr(\beta^{-1})=1}^{} \nu_{\beta} \beta=0$. \begin{equation*} (c) \;\;\; C_{3,j}=\sum \binom{m_0}{\nu_0} \prod_{ |t |<2 \sqrt{ q}, \; t \equiv -1(4)} \prod_{K(\lambda;\beta^{-1})=t} \binom{m_t}{\nu_\beta } \;\; (j=0,\cdots, N_3),\qquad\qquad\qquad\qquad \end{equation*} where the sum is over all the sets of nonnegative integers $\{\nu_ \beta \}_{ \beta \in \mathbb{F}_q}$ satisfying $\sum_{\beta \in \mathbb{F}_q}^{} \nu_\beta=j$ and $\sum_{\beta \in \mathbb{F}_q}^{} \nu_{\beta} \beta=0$, \begin{equation*} m_0=q^4, \end{equation*} and \begin{equation*} m_t=q^2(q^3 +q^2 -t), \end{equation*} for all integers $t$ satisfying $|t|<2 \sqrt{q}$ and $t \equiv -1 \ (mod \; 4)$. \end{theorem} We now apply the Pless power moment identity in (\ref{a48}) to each $C(G_i(q))^\bot$, for $i=1, 2, 3$, in order to obtain the results in Theorem 1 (cf. (\ref{a1}), (\ref{a4}), (\ref{a6}), (\ref{a7})) about recursive formulas. The left-hand side of the identity in (\ref{a48}) is then equal to \begin{equation}\label{a53} \sum_{a \in \mathbb{F}_q^{*}}w(c(a))^h, \end{equation} with the $w(c(a))$ in each case given by (\ref{a49}), (\ref{a50}). For $i=1, 2$, (\ref{a53}) is \begin{equation*} \frac{1}{2^h } \sum_{ a \in \mathbb{F}_q^{*}}(q+1+K(\lambda;a))^h \qquad\qquad\qquad \end{equation*} \begin{equation*} =\frac{1}{2^h} \sum_{ a \in \mathbb{F}_q^{*} } \sum_{l=0}^{h} \binom{h}{l}(q+1)^{h-l}K(\lambda;a)^l \end{equation*} \begin{equation} =\frac{1}{2^h} \sum_{l=0}^{h} \binom{h}{l}(q+1)^{h-l} M K^l.\qquad\;\;\;\; \end{equation} Similarly, for $i=3$, (\ref{a53}) equals \begin{align} (\frac{q^2}{2 })^h \sum_{l=0}^{h} \binom{h}{l} (q^4 +q^3-q-1)^{h-l} MK^{2l} \\ =(\frac{q^2}{2 })^h \sum_{ l=0}^{h} \binom{h}{l}(q^4 +q^3 -1)^{h-l}MK_2^l. \end{align} Note here that, in view of (\ref{a32}), obtaining the power moments of 2-dimensional Kloosterman sums is equivalent to obtaining the even power moments of Kloosterman sums. Also, one has to separate the term corresponding to $l=h$ in (54)-(56), and note that $\dim_{\mathbb{F}_2} C(G_i(q))^{\bot}=r$ (cf. Theorem \ref{M}).\\ \section{Remarks and Examples} The explicit computation of power moments of Kloosterman sums began with the 1931 paper [18] of Sali\'{e}, where he showed, for any odd prime $q$, \begin{equation}\label{a57} MK^{h}=q^{2}M_{h-1}-(q-1)^{h-1}+2(-1)^{h-1} \;\;\;\; (h \geq 1). \end{equation} However, this holds for any prime power $q=p^r$ ($p$ a prime). Here $M_0=0$, and for $h \in \mathbb{Z}_{>0}$, \begin{equation*} M_{h}=|\{(\alpha_1,\cdots,\alpha_h)\in(\mathbb{F}_{q}^{*})^h \; | \; \sum_{j=1}^{h}\alpha_j = 1 =\sum_{j=1}^{h} \alpha_{j}^{-1}\}\;|.
\end{equation*} For positive integers $h$, we let \begin{equation*} A_{h}=|\{(\alpha_1,\cdots,\alpha_h)\in(\mathbb{F}_{q}^{*})^h \; | \; \sum_{j=1}^{h}\alpha_j = 0 =\sum_{j=1}^{h} \alpha_{j}^{-1}\}\;|. \end{equation*} Then $(q-1)M_{h-1}=A_h$, for any $h \in \mathbb{Z}_{>0}$. So (\ref{a57}) can be rewritten as \begin{equation}\label{a58} MK^h=\frac{q^2}{q-1}A_h-(q-1)^{h-1}+2(-1)^{h-1}. \end{equation} Iwaniec \cite{H1} showed the expression (\ref{a58}) for any prime $q$. However, the proof given there works for any prime power $q$, without any restriction. Also, this is a special case of Theorem 1 in \cite{HD}, as mentioned in Remark 2 there. For $q=p$ any prime, $MK^{h}$ was determined for $h \leq 4$ (cf. \cite{H1}, [18]): \begin{equation*} MK^1=1, \;\;MK^2=p^2-p-1, \qquad \qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \end{equation*} \begin{equation*} MK^3=(\frac{-3}{p})p^2+2p+1 \; (\textrm{with the understanding that } (\frac{-3}{2})=-1, \; (\frac{-3}{3})=0), \qquad \end{equation*} \begin{equation*} MK^4= \begin{cases} 2p^3-3p^2-3p-1, & p \geq 3,\\ 1, & p=2. \qquad \qquad \qquad\qquad\qquad\qquad\qquad\qquad \;\; \end{cases} \end{equation*} Except for \cite{L1} for $1 \leq h \leq 4$, not much progress had been made until Moisio succeeded in evaluating $MK^h$ for the remaining values of $h$ with $h \leq 10$ over the finite fields of characteristic two in \cite{M1} (similar results exist over the finite fields of characteristic three; cf. \cite{GM}, \cite{M2}). So we now have closed-form formulas for $h \leq 10$. His result was a breakthrough, but the method of proof is rather indirect, since the frequencies are expressed in terms of the Eichler-Selberg trace formula for the Hecke operators acting on certain spaces of cusp forms for $\Gamma_{1}(4)$. In addition, the power moments of Kloosterman sums are obtained only for $h \leq 10$ and not for any higher-order moments. On the other hand, our formulas in (\ref{a1}) and (\ref{a2}) allow one, at least in principle, to compute moments of all orders for any given $q$. Below, for small values of $i$, we compute, by using (\ref{a1}), (\ref{a2}), and MAGMA, the frequencies $C_i$ of weights in $C(SO^{-}(2,2^4))$ and $C(SO^{-}(2,2^5))$, and the power moments $MK^i$ of Kloosterman sums over $\mathbb{F}_{2^4}$ and $\mathbb{F}_{2^5}$.
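Independently of MAGMA, the smallest of these values can be checked by brute force. The following Python sketch (an illustration added here; the modulus $x^4+x+1$ is one arbitrary choice of irreducible polynomial realizing $\mathbb{F}_{2^4}$) computes $K(\lambda;a)$ directly from the definition and reproduces both the value set $\{-7,-3,1,5\}$ predicted by Theorem~\ref{R} and the moments $MK^0,\ldots,MK^4$ of Table II. \begin{verbatim}
R, MOD = 4, 0b10011          # GF(2^4) = GF(2)[x]/(x^4 + x + 1)
Q = 1 << R

def gmul(a, b):
    # carry-less polynomial multiplication, reduced modulo MOD
    p = 0
    while b:
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a & Q:
            a ^= MOD
    return p

def ginv(a):
    # a^(q-2) = a^{-1} for a != 0, by square-and-multiply
    r, e = 1, Q - 2
    while e:
        if e & 1:
            r = gmul(r, a)
        a = gmul(a, a)
        e >>= 1
    return r

def tr(x):
    # absolute trace x + x^2 + ... + x^(2^(r-1)), landing in {0, 1}
    t = 0
    for _ in range(R):
        t ^= x
        x = gmul(x, x)
    return t

def K(a):
    # Kloosterman sum K(lambda; a), i.e. (25) with t = 1
    return sum((-1)**tr(x ^ gmul(a, ginv(x))) for x in range(1, Q))

print(sorted({K(a) for a in range(1, Q)}))   # [-7, -3, 1, 5]
print([sum(K(a)**h for a in range(1, Q)) for h in range(5)])
# [15, 1, 239, 289, 7631], matching the first rows of Table II
\end{verbatim}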
In particular, our results confirm those of Moisio given in \cite{M1} for $q=2^4$ and $q=2^5$.\\ \\ \begin{table}[!htp] \begin{center} \begin{tabular}{c c c c c c c c } \multicolumn{8}{c}{TABLE I} \\ \multicolumn{8}{c}{The weight distribution of $C(SO^{-}(2,2^{4}))$} \\ \\ \hline w & frequency & w& frequency & w& frequency &w& frequency\\[0.5pt] \hline 0 & 1 & 5 & 396 & 10 & 1208 & 15 & 8 \\ 1 & 1 & 6 & 792 & 11 & 792 & 16 &1 \\ 2 & 8 & 7 & 1208 & 12 & 396 & 17 &1 \\ 3 & 40 & 8 & 1510 & 13 & 140 \\ 4 & 140 & 9 & 1510 & 14 & 40 \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[!htp] \begin{center} \begin{tabular}{c c c c c c } \multicolumn{6}{c}{ TABLE II} \\ \multicolumn{6}{c}{The power moments of Kloosterman sums over $\mathbb{F}_{2^{4}}$} \\ \\ \hline $i$ & $MK^i$ &$i$ & $MK^i$& $i$ &$MK^i$\\[0.5pt] \hline 0 & 15 & 10 & 604249199 & 20&159966016268924111\\ 1 & 1 & 11 & 3760049569 & 21 &1115184421375168321\\ 2 & 239 & 12 & 28661262671 & 22 &7829178965854277039\\ 3 & 289 & 13 & 188901585601 & 23 &54689811340914235489\\ 4 & 7631 & 14 & 1380879340079 & 24 &383400882469952537231\\ 5 & 22081 & 15 & 9373110103009 & 25 & 2680945149821576426881\\ 6 & 300719 & 16 & 67076384888591 & 26 & 18780921149940510987119\\ 7 & 1343329 & 17 & 462209786722561 & 27 &131394922435183254906529\\ 8 & 13118351 & 18 & 3272087534565359 & 28 &920122084792925568335951\\ 9 & 72973441 & 19 & 22721501074479649& 29 &6439066453841188580322241\\ \hline \end{tabular} \end{center} \end{table} \begin{table}[!htp] \begin{center} \begin{tabular}{c c c c c c c c } \multicolumn{8}{c}{TABLE III} \\ \multicolumn{8}{c}{The weight distribution of $C(SO^{-}(2,2^{5}))$} \\ \\ \hline w & frequency & w& frequency & w& frequency &w& frequency\\[0.5pt] \hline 0 & 1 & 9 & 1204220 & 18 & 32411632 & 27 & 34800\\ 1 & 1 & 10 & 2892592 & 19 & 25586000 & 28 & 7352\\ 2 & 16 & 11 & 6049808 & 20 & 17909672 & 29 & 1240\\ 3 & 176 & 12 & 11088968 & 21 & 11088968 & 30 & 176\\ 4 & 1240 & 13 & 17909672 & 22 & 6049808 & 31 & 16\\ 5 & 7352 & 14 & 25586000 & 23 & 2892592 & 32 & 1\\ 6 & 34800 & 15 & 32411632 & 24 & 1204220 & 33 & 1\\ 7 & 133840 & 16 & 36463878 & 25 & 433532\\ 8 & 433532 & 17 & 36463878 & 26 & 133840\\ \hline \end{tabular} \end{center} \end{table} \begin{table}[!htp] \begin{center} \begin{tabular}{c c c c c c } \multicolumn{6}{c}{TABLE IV} \\ \multicolumn{6}{c}{The power moments of Kloosterman sums over $\mathbb{F}_{2^{5}}$} \\ \\ \hline $i$ & $MK^i$ &$i$ & $MK^i$& $i$ &$MK^i$\\[0.5pt] \hline 0 & 31 & 10 & 44833141471 & 20 & 733937760431358760351\\ 1 & 1 & 11 & 138050637121 & 21 & 6855945343839827241601\\ 2 & 991 & 12 & 4621008512671 & 22 & 86346164924243497892191\\ 3 & -959 & 13 & 22291740481921 & 23 & 851252336789971927746241\\ 4 & 63391 & 14 & 497555476630111 & 24 & 10249523095374924648418591\\ 5 & -63359 & 15 & 3171377872090561 & 25 & 104764273348415132423811841\\ 6 & 5102431 & 16 & 55381758830599711 & 26 & 1224170008071148563308433631\\ 7 & -678719 & 17 & 423220459165032961 & 27 & 12819574031043721011365916481\\ 8 & 460435231 & 18 & 6318551635327312351 & 28 & 146828974390583504114568758431\\ 9 & 613044481 & 19 & 54461730980167425601 & 29 &1562774752282717527826758007681\\ \hline \\ \end{tabular} \end{center} \end{table}
\section{Introduction} The modeling of the cardiovascular system is a problem of great importance, since it should aid the understanding and prediction of various diseases such as atherosclerosis, arteriosclerosis, hypertension, etc. The most common approach to the problem is a direct application of classical hydrodynamic models of fluid flow through elastic shells or tubes~\cite{Canic2003, Pontrelli2003, Quarteroni2003}. But in some cases, for example for muscular resistance arteries, it is necessary to take into account the difference between an ordinary passive tube and a ``biological'' active one. One way to capture this difference is the widely discussed effect of flow-induced vasodilation by the Nitric Oxide (NO) radical~\cite{Snow_etal2001, Buchanan1993, Rachev2000, Smith2003}. For a long time the endothelial cells covering the arterial bed surface were thought to provide only friction reduction for the blood flow through the artery. The so-called Endothelium Derived Relaxing Factor (EDRF) was discovered in 1980 by Robert~F.~Furchgott~\cite{Furchgott1980,Furchgott1999}, via the comparison of two arterial rings, with and without endothelium, by their ability for acetylcholine-dependent smooth muscle relaxation; this discovery and later investigations showed that the endothelium plays the main role in local blood flow regulation. Nitric Oxide was proposed as the signaling molecule establishing the connection from the endothelium to the smooth muscles. The EDRF-NO mechanism made it possible to explain the principle of action of the first-aid medicine Nitroglycerine, which had been used before without such understanding. The mechanical nature of arterial wall tonus regulation is highlighted in the works~\cite{Payne2005, Rachev2000, Joannides_etal1995, Rubanyi1986}. It is understood that an increase of the shear stress between the blood flow and the inner arterial surface causes relaxation of the smooth muscle layer of the arterial wall. This necessarily induces an increase of the arterial radius and a decrease of the shear stress itself. Therefore the process as a whole constitutes a system with negative feedback. There are three main layers in an arterial wall. The first, internal layer is the intima~(i), the second layer is the media~(m), and the last one is the adventitia~(a). The inner boundary of the intima layer, i.e. the internal arterial surface, is covered with endothelial cells. The thicknesses of the layers depend on the type of artery or arteriole. We mainly consider the muscular resistance arteries, which have a well-developed muscle layer (media) and a non-vanishing intima layer. The typical ratio of the intima thickness to that of the media is about $10^{-1}$. The scenario of flow-induced relaxation is as follows. An increasing shear stress $\sigma_{shear}$ on the surface of the endothelial cells opens calcium channels, which launch the production of NO from L-arginine under NO-synthase (NOS) catalysis; NO then diffuses, being partially consumed on the way, through the intima layer towards the smooth muscle cells in the media layer. As a lipophilic molecule, NO easily penetrates the cell membrane of a muscle cell and initiates the synthesis of cyclic guanosine monophosphate (cGMP). Ultimately, cGMP stimulates the outflow of intracellular $Ca^{2+}$, which leads to relaxation of the smooth muscle cell. The flow-induced contraction is realized vice versa.
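This negative-feedback loop can be illustrated by a deliberately minimal numerical sketch (illustrative Python only; the linear relaxation law and all constants are assumptions for illustration, not the model developed below), in which the radius relaxes toward the value restoring a set-point shear stress $\sigma \propto Q/R^3$: \begin{verbatim}
# Toy negative-feedback loop: shear stress above its set point relaxes
# the muscle layer and dilates the vessel, which lowers the shear stress.
def settle_radius(Q, R0=1.0, sigma0=1.0, alpha=0.5, dt=0.01, steps=4000):
    R = R0
    for _ in range(steps):
        sigma = sigma0 * Q / R**3       # shear stress scaling sigma ~ Q/R^3
        R += dt * alpha * (sigma - sigma0)
    return R

# Doubling the flow settles near R = 2**(1/3) * R0 ~ 1.26 R0,
# anticipating the cube-root relation derived in the next section.
print(settle_radius(Q=2.0))
\end{verbatim}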
The aim of this paper is to develop and study a mathematical model of blood flow autoregulation that incorporates the viscoelastic nature of the arterial wall and the two-layer diffusion and kinetic processes for the concentrations of the key agents: Nitric Oxide (NO) and Calcium ions ($Ca^{2+}$). The outline of the article is as follows. In Section~\ref{sec:Main_assumptions} we introduce the assumptions of the model. In Section~\ref{sec:Math_model_derivation} we derive the closed system describing the autoregulation process. In Section~\ref{sec:Steady_state} the steady-state concentrations of NO and $Ca^{2+}$ are obtained. In Section~\ref{sec:Thin-wall_artery} we study the limit case of a thin-wall artery; in this case the stability condition for an equilibrium state of the system is given. In Section~\ref{sec:Passive_tube} we consider the case of passive dilation of an artery with fully relaxed muscles; an exact kink-shaped solution is found. In Section~\ref{sec:Numerical_simulation} the numerical simulation of the autoregulation process near the stationary state is presented. In Section~\ref{sec:Conclusion} we summarize and discuss the obtained results. Appendix~\ref{sec:Appendix-notation} collects the essential notation. Appendix~\ref{sec:Appendix-SimpleEq-method} gives the approach for finding the exact solution of the passive vessel model. \section{Main assumptions of the model}\label{sec:Main_assumptions} We consider the artery to be axially symmetric, viscoelastic and incompressible. The blood is also assumed to be incompressible and Newtonian. The flow is quasi-stationary, the transmural pressure is constant and the velocity profile is the power-law generalization of Poiseuille's law. We suppose the dependence of the muscular force on the calcium concentration to be linear, and the dependence of the $Ca^{2+}$ concentration decay rate on the Nitric Oxide concentration is also assumed linear. The concentration of NO in the endothelium is assumed to be proportional to the shear stress on the arterial wall~\cite{Rachev2000}. \section{The statement of the problem}\label{sec:Math_model_derivation} Let us consider an arterial segment of length $l$ in the cylindrical coordinate system $( r, \theta, x \equiv z )$. The intima, media and adventitia layers are bounded by the radii $R_i,\,R_m,\,R_a$, respectively. \subsection{The shear stress dependence on the blood flow} Consider the power-law generalization of the Poiseuille velocity profile~\cite{Quarteroni2003}: \begin{equation} V_{x}(r,x,t) = \frac{\gamma+2}{\gamma} \left[\, 1 - \left(\frac{r}{R(t)}\right)^{\gamma}\, \right]\,\bar{u}(x,t) \end{equation} Here $V_{x}$ is the axial velocity, $\bar{u}$ is the cross-sectionally averaged axial velocity, $R$ is the arterial radius and $\gamma$ is the profile sharpness. In the case of a Newtonian fluid with dynamical viscosity $\mu$, the shear stress on the wall of the elastic tube is \begin{equation} \sigma_{shear} = -\mu \left.\PDfrac{V_{x}}{r}\right|_{r=R} = (\gamma + 2)\mu \frac{\bar{u}}{R} = (\gamma + 2)\mu\,\frac{Q}{\pi R^3} \label{eq:shear_stress} \end{equation} where $Q = A\bar{u}$ is the blood discharge through the cross-section of area $A$. 
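As an illustration of (\ref{eq:shear_stress}), the following short Python sketch evaluates the wall shear stress for a parabolic profile ($\gamma = 2$); the viscosity, radius and discharge values are illustrative assumptions of the order typical for a small muscular artery, not data used elsewhere in the paper.
\begin{verbatim}
import math

# Illustrative evaluation of the wall shear stress formula; all
# parameter values below are assumptions, not data from the model.
mu = 4.0e-3     # blood dynamic viscosity, Pa*s
R = 1.0e-3      # arterial radius, m
Q = 5.0e-7      # blood discharge, m^3/s (about 30 mL/min)
gamma = 2.0     # profile sharpness (parabolic Poiseuille profile)

sigma_shear = (gamma + 2.0) * mu * Q / (math.pi * R**3)
print("wall shear stress: %.2f Pa" % sigma_shear)  # about 2.5 Pa
\end{verbatim}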
Under all the assumptions above, for the laminar stationary flow the cross-sectionally averaged Navier-Stokes equation takes the form of the generalized Hagen-Poiseuille equation: \begin{equation} \frac{\Delta P}{l} = 2(\gamma + 2)\mu \frac{Q}{\pi R^4} \label{eq:generalized_Poiseuille} \end{equation} where $\Delta P$ is the pressure difference over an arterial segment of length $l$. It shows the linear dependence of the pressure gradient on the discharge and the inverse proportionality to the fourth power of the arterial radius. In the case of axially symmetric radial perturbations $R(t) = R_{0} + \eta(t)$ we have from (\ref{eq:shear_stress}): \begin{equation} \sigma_{shear} = \frac{ (\gamma + 2)\mu }{ \pi R_{0}^3 }\, \frac{ Q }{ \left(1 + \dfrac{\eta}{R_0}\right)^3 } \label{eq:shear_stress_small_pert} \end{equation} There is a hypothesis of a maintained shear stress, $\sigma_{shear} = const$ \cite{Togawa1980,Rachev2000}. One can conclude that an increase of the flow necessitates an increase of the steady-state arterial radius to compensate the change of the shear stress. The estimated relation between the new steady-state discharge and the new stationary radius is as follows: \begin{equation} \eta = \left( \sqrt[3]{ \frac{Q}{Q_{0}} } - 1 \right) R_{0} \end{equation} The difference between the reactions to an increase and to a decrease of the blood flow near the previous stationary value is remarkable: the change of the radius in response to a higher flow is smaller than for an equally lower flow. This is explained by the inverse cubic dependence of the shear stress on the radius. One can also see that the dependence between the radial perturbation and the mean blood flow is linear in the case of small radial perturbations ($|\eta| \ll R_{0}$). 
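A minimal numerical sketch of this asymmetric radius response under the maintained shear stress hypothesis; the $\pm 25\%$ flow changes are chosen only for illustration.
\begin{verbatim}
# Radius deviation needed to keep sigma_shear constant after a
# change of the discharge Q; R0 = 1 is taken for convenience.
R0 = 1.0
for ratio in (1.25, 0.75):   # +25% and -25% change of Q/Q0
    eta = (ratio ** (1.0 / 3.0) - 1.0) * R0
    print("Q/Q0 = %.2f -> eta/R0 = %+.4f" % (ratio, eta))
# Gives +0.0772 for the increase and -0.0914 for the decrease:
# the dilation is smaller than the constriction, as noted above.
\end{verbatim}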
\subsection{The synthesis and diffusion of Nitric Oxide} According to the EDRF mechanism mediated by the fluid flow, the concentration of Nitric Oxide produced by an endothelium cell is governed by the shear stress value. We consider the NO transport to the smooth muscle tissue as a diffusion process (with diffusion coefficient $D_1$) accompanied by degradation (with reaction coefficient $\delta_1$); NO then continues to diffuse through the media layer, but with another diffusion coefficient $D_2$ and reaction coefficient $\delta_2$. The production of Nitric Oxide in an endothelium cell has the shear stress $\sigma_{shear}$ as one of its essential regulators, therefore this process can be described by a kinetic equation: \begin{equation} \Dfrac{n_{e}}{t} = - k_{e}\,n_{e} + \psi\,\sigma_{shear}(t) \label{eq:NO_production} \end{equation} where $n_{e}$ is the NO concentration in an endothelium cell, $k_{e}$ is the rate of mass transfer of NO from the cell, and $\psi$ is the production rate constant. Under the assumption of quasi-stationary NO production, i.e. that the characteristic time of NO mass transfer from an endothelium cell towards the intima layer ($\tau_{NO-mass-transfer} \sim \Delta r^2/D \simeq 1/528\, {sec}$, where $D=3300\,{\mu}m^2/sec,\; \Delta r \simeq 2.5\,{\mu}m$ \cite{Regirer2005}) is smaller than the typical time of the $\sigma_{shear}$ changes ($\tau_{shear} \sim \tau_{radius-oscillations} \sim 1/2 \,{sec}$), from equation (\ref{eq:NO_production}) we obtain the following relation between $n_{e}$ and $\sigma_{shear}$: \begin{equation} n_{e}(t) = \frac{\psi}{k_{e}}\,\sigma_{shear}(t) \label{eq:inner_bound_condition_for_NO} \end{equation} The relation (\ref{eq:inner_bound_condition_for_NO}) is used as the inner boundary condition for the Nitric Oxide diffusion through the arterial wall ($n|_{r=R_{i}} = n_{e}$). Ultimately, at the inner boundary of the intima layer we assume the concentration of NO to be proportional to the shear stress (with proportionality coefficient $k_{3}$). Between the intima and media layers we use the continuity of the concentrations and fluxes. On the external boundary we impose the impenetrability condition. Thus the system of equations and the boundary conditions for the Nitric Oxide concentration are as follows: \begin{equation} \begin{gathered} \PDfrac{ n_{j} }{ t } = D_{j} \frac{1}{r}\PDfrac{}{r} \left(r\, \PDfrac{n_{j}}{r} \right) - \delta_{j}\, n_{j}, \hfill \\ R_{i} < r < R_{m}\quad \mbox{for}\; j=1\quad \mbox{(intima)} \hfill\\ R_{m} < r < R_{a}\quad \mbox{for}\; j=2\quad \mbox{(media)} \hfill\\ n_{1}|_{r=R_{i}} = k_{3}\,\sigma_{shear} \hfill \\ n_{1}|_{r=R_{m}} = n_{2}|_{r=R_{m}},\quad \left. D_{1}\PDfrac{n_{1}}{r} \right|_{r=R_{m}} = \left. D_{2}\PDfrac{n_{2}}{r} \right|_{r=R_{m}} \hfill \\ \left. \PDfrac{n_{2}}{r} \right|_{ r=R_{a} } = 0 \hfill \label{eq:NO_diffusion} \end{gathered} \end{equation} The system of equations (\ref{eq:NO_diffusion}) together with initial conditions describes the two-layer diffusion-kinetic process for the Nitric Oxide in the arterial wall. \subsection{The equation for the kinetics of the Calcium ions in a smooth muscle cell} To derive the balance equation for the concentration of $Ca^{2+}$ in a smooth muscle cell it is necessary to describe the routes of the $Ca^{2+}$ in- and out-fluxes. There are two sources of the calcium ions: the extracellular space and the intracellular containers, the sarcoplasmic reticulum. The concentration of $Ca^{2+}$ in these sources is about $10^4$ times greater than in the intracellular space. The balance of the calcium ions in the muscle cell at the point $r$ may be described, similarly to \cite{Rachev2000}, as \begin{equation} \PDfrac{C(r,t)}{t} = -\alpha(C - C_{0}) + \beta(C_{ext} - C) - k_{1}n_{2}(r,t) \label{eq:full_Ca-balance_eq} \end{equation} where the first term is responsible for the natural active outflow transport of $Ca^{2+}$ relative to the minimal observed concentration $C_{0}$, the second term describes a passive diffusion driven by the difference between the intracellular calcium concentration $C$ and the extracellular one $C_{ext}$, and the last term represents the NO-mediated active outflow. Taking into account the relation $C_{ext} \gg C$, we can treat the corresponding influx as a constant source: $\varphi_{0} = \alpha C_{0} + \beta C_{ext} = const$. In this case the equation (\ref{eq:full_Ca-balance_eq}) can be transformed to the form: \begin{equation} \PDfrac{C(r,t)}{t} = -\alpha C - k_{1} n_{2}(r,t) + \varphi_{0} \label{eq:Ca-balance_eq} \end{equation} The equation (\ref{eq:Ca-balance_eq}) is used to describe the calcium balance in the smooth muscle layer. 
\subsection{The equation for the arterial wall movement} In order to obtain a closed system for the blood flow autoregulation we need a link between the radial perturbation and the external forces such as the pressure and the muscular force \cite{ChernKudr2006}. The constitutive equation~\cite{FungBook1993} can be found from the equation of motion for an arterial wall segment. Let us consider an incompressible viscoelastic wall element with mass $\Delta m$, density $\rho_{w}$, width $h$, radius $R$, and length $\Delta x$. According to the law of motion \begin{equation} \begin{gathered} \Delta m \Dfrac[2]{R}{t} = f_{radial} + f_{pressure}, \\ f_{radial} = -\sigma_{\theta\theta}\, 2 \pi \Delta x h,\quad f_{pressure} = (\bar{P} - P_{ext})\, 2 \pi \Delta x h \end{gathered} \label{eq:general_wall_mov_eq} \end{equation} where $\Delta m = \rho_{w} 2 \pi R \Delta x h$, $f_{radial}$ is proportional to the circumferential component of the stress tensor $\sigma_{\theta\theta}$, and $f_{pressure}$ is due to the resulting transmural pressure (the difference between the internal and external pressures). The stress tensor component $\sigma_{\theta\theta}$ consists of three parts: a passive elastic force (weakly nonlinear, with a quadratic correction), a viscous force and an active force due to the muscle tonus \begin{equation} \sigma_{\theta\theta} = \frac{E(<\!C\!>)}{1-\xi^2}\frac{R-R_0}{R_0} + E_{1} \left( \frac{R-R_0}{R_0} \right)^2 + \lambda \Dfrac{R}{t} + k_{2}\,F(C) \label{eq:stress_tensor} \end{equation} here $E(<\!C\!>)$ is the Young's modulus, which depends on the averaged concentration of $Ca^{2+}$ in the muscle cell layer, $\xi$ is the Poisson's ratio, $E_{1}$ is the small nonlinear elastic coefficient of the quadratic correction, $\lambda$ is the viscous characteristic of the wall, $F(C)$ is the active force component determined by the integral calcium concentration level above the threshold one $C_{th}$, and $k_{2}$ is the coefficient of proportionality for the muscular tonus response to the $Ca^{2+}$ level. We substitute (\ref{eq:stress_tensor}) into (\ref{eq:general_wall_mov_eq}) and take into consideration the linear dependence of the muscle force on calcium and the incompressibility condition $h_{0}R_{0} = h R$. Then the constitutive equation for the radial perturbations (${R = R_{0} + \eta}$,\, ${|\eta| \ll R_{0}}$) takes the form \begin{equation} \begin{gathered} \rho_{w}h_{0}\Dfrac[2]{\eta}{t} + \frac{\lambda h_{0}}{R_{0}}\,\Dfrac{\eta}{t} + \frac{\varkappa(C)h_{0}}{R_{0}}\,\eta + \frac{E_{1}h_{0}}{R_{0}^{3}}\,\eta^2 = \\ \hfill = (\bar{P} - P_{ext}) - \frac{h_{0}}{R_{0}}\, k_{2}\, F(C) \end{gathered} \end{equation} where \begin{equation} \begin{gathered} \varkappa(C) = \varkappa_{0}(1 + \varepsilon F(C)),\quad \varkappa_{0} = \frac{E_{0}}{R_{0}(1-\xi^2)},\quad \varepsilon \ll 1 \\ F(C) = \int_{R_m}^{R_a} [\,C - C_{th}\,]\,\theta(C - C_{th})\, r\,dr\,, \hfill \\ \theta\;\mbox{is the Heaviside step function} \hfill \end{gathered} \end{equation} We renormalize the constants $\lambda, \varkappa, k_2$ by the factor $h_0/R_0$ and denote the (by assumption constant) transmural pressure $P_{0} = \bar{P} - P_{ext} = const$ and ${\varkappa_{1} = \frac{E_{1}h_{0}}{R_{0}^{3}}}$. 
Ultimately, we obtain an integro-differential equation describing the wall movement in the presence of the smooth muscle tonus \begin{equation} \begin{gathered} \rho_{w}h_{0} \Dfrac[2]{\eta}{t} + \lambda\,\Dfrac{\eta}{t} + \varkappa(C)\,\eta + \varkappa_1\,\eta^{2} = P_{0} -\,k_{2} \int_{R_m}^{R_a} [\,C - C_{th}\,]\,\theta(C - C_{th})\, r\,dr \end{gathered} \label{eq:consitutive_eq} \end{equation} One can see that in the absence of the muscle force (full relaxation) this is the equation of a nonlinear damped oscillator with an external force. The calcium-dependent force term provides the feedback and makes the artery different from a passive viscoelastic tube. \subsection{The problem statement for the blood flow autoregulation in dimensionless variables} Summarizing the equations obtained above, we have the complete system describing the process of blood flow autoregulation due to the EDRF-NO mechanism: \begin{equation} \PDfrac{C(r,t)}{t} = - \alpha\, C - k_{1}\,n_{2}(r,t) + \varphi_{0}, \quad R_{m} < r < R_{a} \label{eq:Ca_balance} \end{equation} \begin{equation} \PDfrac{ n_{1} }{ t } = D_{1} \frac{1}{r}\PDfrac{}{r}\left(r\, \PDfrac{n_{1}}{r} \right) - \delta_{1}\, n_{1},\quad R_{i} < r < R_{m} \label{eq:NO_diffusion_in_intima} \end{equation} \begin{equation} \PDfrac{ n_{2} }{ t } = D_{2} \frac{1}{r}\PDfrac{}{r}\left( r\, \PDfrac{n_{2}}{r} \right) - \delta_{2}\, n_{2},\quad R_{m} < r < R_{a} \label{eq:NO_diffusion_in_media} \end{equation} \begin{equation} \begin{gathered} \rho_{w}\, h_{0}\, \Dfrac[2]{\eta}{t} + \lambda \, \Dfrac{\eta}{t} + \varkappa(C)\, \eta + \varkappa_1\, \eta^{2} = P_{0} -\, k_{2} \int_{R_m}^{R_a} [\,C - C_{th}\,]\,\theta(C - C_{th})\, r\,dr \end{gathered} \label{eq:wall_movement_eq} \end{equation} with the boundary conditions: \begin{equation} \begin{gathered} n_{1}|_{r=R_{i}} = k_{3}\,\sigma_{shear} = \frac{ k_{3}\,(\gamma + 2) \mu\, Q } { \pi\,R_0^{3} \left( 1 + \frac{\eta}{R_0} \right)^3 } \hfill \\ n_{1}|_{r=R_{m}} = n_{2}|_{r=R_{m}},\quad \left. D_{1}\, \PDfrac{n_{1}}{r} \right|_{r=R_{m}} = \left. D_{2}\, \PDfrac{n_{2}}{r} \right|_{r=R_{m}} \hfill \\ \left. \PDfrac{n_{2}}{r} \right|_{ r=R_{a} } = 0 \hfill \end{gathered} \label{eq:boundary_conditions} \end{equation} As the initial values the perturbed steady-state solutions are taken. Here equation (\ref{eq:Ca_balance}) describes the $Ca^{2+}$ balance in a smooth muscle cell, equations (\ref{eq:NO_diffusion_in_intima}),~(\ref{eq:NO_diffusion_in_media}) characterize the diffusion of Nitric Oxide in the intima and media respectively, and equation (\ref{eq:wall_movement_eq}) gives the relation governing the arterial wall movement under the influence of the averaged calcium ion concentration. 
To pass to the non-dimensional system of equations we set up the new dimensionless variables: \begin{equation} \begin{gathered} C = C_{th}\, C',\quad n_{1} = n_{1}^{0}\, n_{1}',\quad n_{2} = n_{2}^{0}\, n_{2}', \\ \eta = \eta_{0}\, \eta',\quad t = t_{0}\, {t}',\quad r = r_{0}\, {r}' \end{gathered} \label{eq:non-dim_variables} \end{equation} where for convenience we choose \begin{equation} \begin{gathered} n^{0} \equiv n_{1}^{0} = \frac{D_{2}}{D_{1}}n_{2}^{0} = k_{3}\,\sigma_{shear}^{0} = \frac{ k_{3}\,(\gamma + 2) \mu\, Q }{ \pi\,{R_0}^{3} } \\ r_0 = \eta_{0} = R_0,\quad t_{0} = \frac{1}{\alpha},\quad R_0 = R_{i}\\ \end{gathered} \end{equation} After the substitution of (\ref{eq:non-dim_variables}) the system (\ref{eq:Ca_balance})~--~(\ref{eq:wall_movement_eq}) turns into the dimensionless form (primes over the variables are omitted): \begin{equation} \PDfrac{C}{t} = - C - k'_{1}\,n_{2} + \varphi'_{0}, \quad R'_{m} < r < R'_{a} \label{eq:non-dim_Ca_balance} \end{equation} \begin{equation} \PDfrac{n_{1}}{t} = D'_{1}\frac{1}{r} \PDfrac{}{r}\left( r\, \PDfrac{n_{1}}{r} \right) - \delta'_{1}\, n_{1}, \quad 1 < r < R'_{m} \label{eq:non-dim_NO_diffusion_in_intima} \end{equation} \begin{equation} \PDfrac{n_{2}}{t} = D'_{2}\frac{1}{r} \PDfrac{}{r}\left( r\, \PDfrac{n_{2}}{r} \right) - \delta'_{2}\, n_{2}, \quad R'_{m} < r < R'_{a} \label{eq:non-dim_NO_diffusion_in_media} \end{equation} \begin{equation} \begin{gathered} \Dfrac[2]{\eta}{t} + \lambda'\, \Dfrac{\eta}{t} + \varkappa'\,\eta + \varkappa_{1}'\, \eta^{2} = P'_{0} -\, k'_{2} \int_{R'_m}^{R'_a} [\,C - 1\,]\; \theta(C - 1)\, r\,dr \end{gathered} \label{eq:non-dim_wall_movement_eq} \end{equation} where the dimensionless constants are \begin{equation} \begin{gathered} k'_{1} = \frac{ k_{1}\,n^{0} }{ \alpha\, C_{th} },\quad \varphi'_{0} = \frac{ \varphi_{0} }{ \alpha\, C_{th} } \equiv \frac{ \beta\, C_{ext} }{ \alpha\, C_{th} }, \hfill \\ D'_{1,2} = \frac{ D_{1,2} }{ \alpha\,{R_0}^{2} },\quad \delta'_{1,2} = \frac{ \delta_{1,2} }{ \alpha },\quad \lambda' = \frac{ \lambda }{ \alpha\,\rho_{w}\, h_{0} },\quad \hfill \\ \varkappa' = \frac{ \varkappa }{ \alpha^{2}\,\rho_{w}\, h_{0} },\quad \varkappa'_{1} = \frac{ \varkappa_{1}\,R_{0} }{ \alpha^{2}\,\rho_{w}\, h_{0} }, \hfill \\ P'_{0} = \frac{ P_{0} }{ \alpha^{2}\,\rho_{w}\, h_{0}\,R_0 },\quad k'_{2} = \frac{ k_{2}\,C_{th}\,R_0 }{ \alpha^{2}\,\rho_{w}\, h_{0} } \hfill \end{gathered} \end{equation} Then the boundary conditions take the form: \begin{equation} \begin{gathered} n_{1}|_{r=1} = \frac{ 1 }{ ( 1 + \eta )^{3} } \hfill \\ n_{1}|_{r=R'_{m}} = n_{2}|_{r=R'_{m}},\quad \left. \PDfrac{n_{1}}{r} \right|_{r=R'_{m}} = \left. \PDfrac{n_{2}}{r} \right|_{r=R'_{m}} \hfill \\ \left. \PDfrac{n_{2}}{r} \right|_{r=R'_{a}} = 0 \hfill \end{gathered} \label{eq:non-dim_boundary_conditions} \end{equation} where $R_{0}=R_{i},\quad R'_{m}=R_{m}/R_0,\quad R'_{a}=R_{a}/R_0$, and the initial values are the perturbed solutions of the steady-state system. From the non-dimensional system of equations one can note that the stationary blood flow discharge through the vessel's cross-section $Q$ has an implicit influence on the $Ca^{2+}$ concentration in the smooth muscle cell via the term $k'_{1}\, n_{2}$ in equation (\ref{eq:non-dim_Ca_balance}), due to the coefficient $k'_{1} \sim n^{0} \sim Q$. 
\section{The solution of the problem in a steady state} \label{sec:Steady_state} To consider the stationary case we let: \begin{equation} C = \tilde{C}(r),\; n_{1} = \tilde{n}_{1}(r),\; n_{2} = \tilde{n}_{2}(r),\; R = R_{0} = const \end{equation} Under these assumptions the system of equations (\ref{eq:non-dim_Ca_balance})~--~(\ref{eq:non-dim_wall_movement_eq}) takes the form: \begin{equation} \tilde{C}(r) = - k'_{1}\,\tilde{n}_{2}(r) + \varphi'_{0},\quad R'_{m} \leq r \leq R'_{a} \label{eq:steady-state_Ca} \end{equation} \begin{equation} \Dfrac[2]{\tilde{n}_{1}}{r} + \frac{1}{r}\,\Dfrac{\tilde{n}_{1}}{r} - \frac{ \delta'_{1} }{ D'_{1} }\, \tilde{n}_{1} = 0,\quad 1 \leq r \leq R'_{m} \label{eq:steady-state_NO_in_intima} \end{equation} \begin{equation} \Dfrac[2]{\tilde{n}_{2}}{r} + \frac{1}{r}\,\Dfrac{\tilde{n}_{2}}{r} - \frac{ \delta'_{2} }{ D'_{2} }\, \tilde{n}_{2} = 0,\quad R'_{m} \leq r \leq R'_{a} \label{eq:steady-state_NO_in_media} \end{equation} \begin{equation} P'_{0} = k'_{2}\int_{R'_m}^{R'_a} [\,\tilde{C}(r) - 1\,]\,\theta(\tilde{C} - 1)\, r\,dr \label{eq:steady-state_wall_movement_eq} \end{equation} with the boundary conditions: \begin{equation} \begin{gathered} \tilde{n}_{1}|_{r=R'_{i}} = 1 \hfill \\ \tilde{n}_{1}|_{r=R'_{m}} = \tilde{n}_{2}|_{r=R'_{m}},\quad \left. \Dfrac{\tilde{n}_{1}}{r} \right|_{r=R'_{m}} = \left. \Dfrac{\tilde{n}_{2}}{r} \right|_{r=R'_{m}} \hfill \\ \left. \Dfrac{\tilde{n}_{2}}{r} \right|_{r=R'_{a}} = 0 \hfill \end{gathered} \label{eq:steady-state_bound_cond} \end{equation} The ODEs (\ref{eq:steady-state_NO_in_intima}),~(\ref{eq:steady-state_NO_in_media}) for the NO concentration have the general solution in terms of the modified Bessel functions $I_0(z), K_0(z)$: \begin{equation} \begin{gathered} \tilde{n}_{1}(r) = A_1\; I_{0}\!\left( \sqrt{ \frac{\delta'_1}{D'_1} }\, r \right) + A_2\; K_{0}\!\left( \sqrt{ \frac{\delta'_1}{D'_1} }\, r \right) \\ \tilde{n}_{2}(r) = B_1\; I_{0}\!\left( \sqrt{ \frac{\delta'_2}{D'_2} }\, r \right) + B_2\; K_{0}\!\left( \sqrt{ \frac{\delta'_2}{D'_2} }\, r \right) \end{gathered} \label{eq:exact_steady-state_NO-distr} \end{equation} where $A_1,\, A_2, B_1,\, B_2$ are arbitrary constants defined by the boundary conditions (\ref{eq:steady-state_bound_cond}): \begin{equation} \begin{gathered} A_1\, I_{0}(\xi_1) + A_2\, K_{0}(\xi_1) = 1 \hfill \\ B_1\, I_{1}(\xi_2\,R'_{a}) - B_2\, K_{1}(\xi_2\,R'_{a}) = 0 \hfill \\ A_1\, I_{0}(\xi_1\,R'_{m}) + A_2\, K_{0}(\xi_1\,R'_{m}) = \hfill \\ \hfill = B_1\, I_{0}(\xi_2\,R'_{m}) + B_2\, K_{0}(\xi_2\,R'_{m}) \\ \xi_1\, (A_1\, I_{1}(\xi_1\,R'_{m}) - A_2\, K_{1}(\xi_1\,R'_{m}) ) = \hfill \\ \hfill = \xi_2\, ( B_1\,I_{1}(\xi_2\,R'_{m}) - B_2\, K_{1}(\xi_2\,R'_{m}) ) \label{eq:bound_cond_system} \end{gathered} \end{equation} where $\xi_1 \equiv \sqrt{ \frac{\delta'_1}{D'_1} },\quad \xi_2 \equiv \sqrt{ \frac{\delta'_2}{D'_2} }$. Using the typical experimental data for a muscular resistance artery~\cite{Dorf2003, Li2004, Regirer2005}, $R_{i} = 1.0\, mm,\: h=0.5\, mm$; $R'_{m}=1.05,\: R'_{a}=1.3$, and assuming $\xi_{1}=6,\: \xi_{2}=2$, we can find the constants $A_1,\, A_2,\, B_1,\, B_2$ from the boundary conditions~(\ref{eq:bound_cond_system}). The steady-state $Ca^{2+}$ concentration $\tilde{C}(r)$ is given by (\ref{eq:steady-state_Ca}). The equilibrium distribution of the concentrations is depicted in figure~\ref{fig:stationary-case}. 
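The $4\times 4$ linear system (\ref{eq:bound_cond_system}) is straightforward to solve numerically. The following Python sketch does so for the geometry and the values of $\xi_{1}, \xi_{2}$ quoted above; it relies on the derivative identities $I_0' = I_1$, $K_0' = -K_1$ already built into (\ref{eq:bound_cond_system}).
\begin{verbatim}
import numpy as np
from scipy.special import i0, i1, k0, k1

xi1, xi2 = 6.0, 2.0          # values assumed in the text
Rm, Ra = 1.05, 1.3           # dimensionless layer boundaries

# Rows: inner Dirichlet condition, outer no-flux condition,
# continuity of the concentration and of the flux at r = Rm.
M = np.array([
    [i0(xi1),         k0(xi1),          0.0,              0.0],
    [0.0,             0.0,              i1(xi2*Ra),      -k1(xi2*Ra)],
    [i0(xi1*Rm),      k0(xi1*Rm),      -i0(xi2*Rm),      -k0(xi2*Rm)],
    [xi1*i1(xi1*Rm), -xi1*k1(xi1*Rm),  -xi2*i1(xi2*Rm),   xi2*k1(xi2*Rm)],
])
rhs = np.array([1.0, 0.0, 0.0, 0.0])
A1, A2, B1, B2 = np.linalg.solve(M, rhs)
print(A1, A2, B1, B2)
\end{verbatim}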
\begin{figure}[!ht] \centering \includegraphics[width=9cm]{stationary.eps} \caption{Stationary distribution of NO and intracellular Calcium ions.} \label{fig:stationary-case} \end{figure} \section{The case of a thin-wall artery}\label{sec:Thin-wall_artery} To understand the qualitative behavior of the system we consider the limit case of a thin-wall artery. A similar model was studied by A.~Rachev, S.A.~Regirer et al. in \cite{Rachev2000, Regirer2002}. Two estimates justify the passage to the limit case. The first relation is $h_{i}/h_{m} \ll 1$, which enables a one-layer wall model. The second one is $\tau_{diffusion} \ll \tau_{kinetic}$, where the typical time of the diffusion process is $\tau_{diffusion} = \frac{h^2}{D}$ and the typical time of the kinetic process is $\tau_{kinetic} = min\{ \frac{1}{\delta},\, \frac{1}{\alpha} \}$. Here $h_{i}, h_{m}$ are the wall thicknesses of the intima and media layers and $h$ is the spatial scale of the wall thickness. Considering that the kinetic processes for Nitric Oxide are faster than those for the Calcium ions, we have: \begin{equation} h \ll \sqrt {\frac{D}{\delta}} \equiv h_{0} \label{eq:thickness_estimation} \end{equation} where $h_{0}$ is the characteristic wall thickness to compare with. Taking into account the typical values of the parameters, $D = 3300\, \mu{m}^2/sec$ and $\delta = 1\, sec^{-1}$ \cite{Regirer2005}, we obtain $h_{0} = 57\, \mu{m}$. One should also note the default condition of quasi-stationary diffusion: $\tau_{diffusion} \ll T_{osc}$, where $T_{osc}$ is the typical period of the radial oscillations. The typical value of $T_{osc}^{-1}$ is about $1 \div 2\,sec^{-1}$, so the $h_{0}$ value is close to $57\,\mu{m}$ or a bit less. The large and medium resistance muscle arteries have a specific wall thickness ${h \sim 100 \div 1000\, \mu{m}}$, whereas the small arteries and arterioles have a much smaller thickness ${h \sim 10\, \mu{m}}$. Therefore the limit case covers the flow in a small artery with ${h \ll 50\, \mu{m}}$. Thus the intima and media layers are so thin that one can neglect the multi-layer nature of the wall and eliminate the diffusion processes. After averaging the calcium feedback $F(C)$ over the wall thickness, the system (\ref{eq:Ca_balance})~--~(\ref{eq:wall_movement_eq}) takes the simplified form: \begin{equation} \begin{gathered} \Dfrac{x}{t} = - \alpha\,x - \dfrac{a}{\left(1 + \frac{y}{c}\right)^3} + b \hfill \\ \Dfrac{y}{t} = z \hfill \\ \Dfrac{z}{t} = - A\,x - \kappa\,y - \beta\,z - \kappa_{1}\,y^2 - \kappa_{2}\,xy + B \hfill \end{gathered} \label{eq:thin-wall-limit_system} \end{equation} where $x = x(t) \equiv C(t) - C_{th}$ is the deviation of the averaged concentration of $Ca^{2+}$ in the arterial smooth muscle layer from the threshold, $y = y(t) \equiv \eta(t)$ is the deviation of the radius of the vessel ($y > -c$), $z=z(t)$ is the velocity of the radius oscillation; $\alpha$ is the rate of the natural ``pumping'' of the free calcium ions from the intracellular space, $a$ represents the blood flow level ($a \sim Q$), $c$ is the non-perturbed arterial radius, $b$ is the rate of the calcium inflow into a smooth muscle cell, $A$ is the coefficient of proportionality for the calcium-feedback force, $\kappa$ is the linear elasticity coefficient, $\kappa_{1}$ is the nonlinear elasticity coefficient, $\kappa_{2}$ is the small calcium-induced elasticity coefficient, $\beta$ is the viscous (resistance) coefficient of the arterial wall, and $B$ represents the mean constant transmural pressure. 
We look for the stationary points of the system (\ref{eq:thin-wall-limit_system}). One can see that under the condition \begin{equation} b - a = \alpha\frac{B}{A} \label{eq:constants_relation} \end{equation} there is a stationary point $\{x = B/A,\, y = 0,\, z = 0\}$. It corresponds to the non-perturbed state of the artery. All the remaining real stationary points of the system have $y < -c$ and hence are out of physical sense. The relation (\ref{eq:constants_relation}) reflects the balance between the muscle forces mediated by the calcium concentration and the pressure forces in the blood. The steady-state $Ca^{2+}$ concentration is equal to $x = B/A \sim P_{0}/(h_{0} R_{0})$. We study the stability of the dynamical system (\ref{eq:thin-wall-limit_system}) near the stationary point $\{B/A, 0, 0\}$, taking into account the relation (\ref{eq:constants_relation}). Consider the linearized system \begin{equation} \Dfrac{\vec{X}}{t} = \mathbb{A} \vec{X} + \vec{F}\,, \end{equation} where \begin{equation*} \begin{gathered} \mathbb{A} = \left( \begin{array}{ccc} -\alpha & \frac{3 a}{c} & 0 \\ 0 & 0 & 1 \\ -A & -(\kappa + \kappa_{2}\frac{B}{A}) & -\beta \end{array} \right) \\ \vec{X} = (x,\,y,\,z)^{T},\quad \vec{F} = (b-a,\, 0,\, B)^{T} \end{gathered} \end{equation*} The Routh-Hurwitz criterion provides the condition under which all eigenvalues of $\mathbb{A}$ have negative real parts. Here the stability condition is as follows: \begin{equation} \beta \left( \alpha^2 + \alpha\beta + \kappa + \kappa_{2}\frac{B}{A} \right) > \frac{3 a A}{c} \label{eq:stability_criterion} \end{equation} Taking into consideration the strict positivity of $A, a, c, \alpha$ and the non-negativity of the remaining parameters, one can conclude from (\ref{eq:stability_criterion}) the condition ${\beta > 0}$ for the wall viscosity. It shows the importance of the viscoelastic nature of the arterial wall for maintaining the stability of the stationary state. In the general case, there is a critical wall viscosity $\beta_{critical}$ below which the oscillations demonstrate a lack of stability. The qualitative analysis on the phase plane confirms the preliminary estimates (figure~\ref{fig:phase_plane}). \begin{figure*}[!ht] \centering \includegraphics[width=8cm]{phase_plane_periodic.eps} \includegraphics[width=8cm]{phase_plane_stable.eps} \caption{The two-dimensional projection of the phase trajectory of the system. For $\beta = \beta_{critical}$ the oscillations are periodic (left) and for $\beta > \beta_{critical}$ the oscillations are damped (right).} \label{fig:phase_plane} \end{figure*} 
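The criterion (\ref{eq:stability_criterion}) is easy to cross-check numerically against the eigenvalues of the linearization matrix $\mathbb{A}$. In the following Python sketch all parameter values are illustrative assumptions, not fitted quantities.
\begin{verbatim}
import numpy as np

alpha, a, c, A = 1.0, 0.5, 1.0, 2.0     # assumed values
kappa, kappa2, B = 4.0, 0.1, 1.0

def is_stable(beta):
    # Linearization matrix at the stationary point {B/A, 0, 0}
    J = np.array([
        [-alpha, 3.0 * a / c, 0.0],
        [0.0, 0.0, 1.0],
        [-A, -(kappa + kappa2 * B / A), -beta],
    ])
    return bool(np.all(np.linalg.eigvals(J).real < 0.0))

def criterion(beta):
    # Routh-Hurwitz condition quoted in the text
    return beta * (alpha**2 + alpha*beta + kappa + kappa2*B/A) > 3.0*a*A/c

for beta in (0.1, 0.5, 2.0):
    print(beta, is_stable(beta), criterion(beta))
# Both tests agree: unstable for the two small viscosities,
# stable for beta = 2.0.
\end{verbatim}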
\section{The case of a passive vessel}\label{sec:Passive_tube} One can see from the thin-wall approximation that the larger the discharge, the lower the equilibrium calcium level. In the general model we have a non-constant distribution of the $Ca^{2+}$ concentration. If the stationary $Ca^{2+}$ concentration is below the threshold level $C_{th}$ over the whole arterial wall, the wall becomes fully relaxed. In this case the `active' viscoelastic tube reduces to a `passive' one. The law of the arterial wall motion (\ref{eq:non-dim_wall_movement_eq}) in the dimensionless form (primes are omitted) is as follows: \begin{equation} \Dfrac[2]{\eta}{t} + \lambda\,\Dfrac{\eta}{t} + \varkappa_0\,\eta + \varkappa_1\,\eta^{2} = P_{0} \label{eq:non_dim_passive_wall_movement_eq} \end{equation} The nonlinear differential equation (\ref{eq:non_dim_passive_wall_movement_eq}) can be solved exactly via the simplest equation method \cite{Kudryashov2005, Kudryashov1990}. One can obtain \begin{eqnarray} \label{eq:exact_passive-tube_solution} \eta(t) = \eta_{\infty} \tanh \left( \frac{\lambda\,t}{10} \right) \left( 2 - \tanh\left( \frac{\lambda\,t}{10} \right) \right) \hfill \\ \eta_{\infty} = \sqrt{ \frac{P_0}{3\,\varkappa_{1}} },\quad \varkappa = \sqrt{ \frac{4\,\varkappa_{1}\,P_0}{3} },\quad \lambda = \sqrt[4]{ \frac{2500\,\varkappa_{1}\,P_0}{27} } \nonumber \hfill \end{eqnarray} The kink-shaped solution describes the switch from one steady state to another under a constant force field. The solution (\ref{eq:exact_passive-tube_solution}) corresponds to a non-perturbed state of the artery with $\eta(0) = 0$, in which the pressure and the smooth-muscle force compensate each other. After the vanishing of the muscle force (due to a sharp decrease of the calcium level) the artery expands to a new equilibrium state. The new arterial radius depends on the transmural pressure and the elastic properties of the arterial wall; it can be estimated by $\eta_{\infty}$. \section{The numerical simulation for the problem of blood flow autoregulation} \label{sec:Numerical_simulation} Consider the general case of the two-layer kinetic-diffusion system in the dimensionless form (\ref{eq:non-dim_Ca_balance})~--~(\ref{eq:non-dim_boundary_conditions}) describing the blood flow regulation. In order to study the dynamics of the solutions of the system near the steady state, a numerical simulation is performed. An implicit iterative finite-difference scheme is implemented. As the initial values the perturbed exact stationary solutions (\ref{eq:steady-state_Ca}),~(\ref{eq:exact_steady-state_NO-distr}) are taken. The behavior of the solution for an initial stretching of the radius confirms the asymptotic stability of the stationary state (figure \ref{fig:eta_perturbed}). \begin{figure}[!ht] \centering \includegraphics[width=8cm]{stretched_artery.eps} \caption{The dynamics of the system relaxation to the previous steady state after an initial stretching of the artery $\eta(t=0) = 0.1$.} \label{fig:eta_perturbed} \end{figure} As a test solution in the case of passive dilation the exact solution (\ref{eq:exact_passive-tube_solution}) is taken. The comparison shows good agreement between the numerical solution and the exact one (figure~\ref{fig:test_case-passive_tube}). \begin{figure}[!ht] \centering \includegraphics[width=8cm]{passive_tube_test.eps} \caption{The passive expanding of the artery due to a constant transmural pressure. The exact solution (\ref{eq:exact_passive-tube_solution}) (dotted line) and the numerical one (solid line).} \label{fig:test_case-passive_tube} \end{figure} In response to a change of the blood flow, which enters through the coefficient $k_{1} \sim Q$, the system arrives, after damped oscillations, at a new steady state (figure \ref{fig:flow_changing}). The reaction of the system to an increase and to a decrease of the discharge is remarkably different: the relaxation time in the case of a flow decrease is smaller than in the case of a flow increase. This may be explained by the drop of the critical viscosity level in response to the decreasing flow, according to (\ref{eq:stability_criterion}). Also, the deviation of the arterial radius is bigger in the case of a decrease of the blood flow, in accordance with the inverse cubic dependence of the shear stress on the radius; in the case of an increasing flow it is vice versa. One can see that a growth of the blood flow can potentially be a source of instability, especially for a small arterial wall viscosity near the critical one. 
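As an additional consistency check of the exact kink, one can integrate (\ref{eq:non_dim_passive_wall_movement_eq}) numerically and compare with (\ref{eq:exact_passive-tube_solution}). The sketch below assumes $P_{0} = \varkappa_{1} = 1$ and the coefficient relations quoted with the solution; the initial slope follows from differentiating the exact formula.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

P0, kappa1 = 1.0, 1.0                    # assumed values
eta_inf = np.sqrt(P0 / (3.0 * kappa1))
kappa = np.sqrt(4.0 * kappa1 * P0 / 3.0)
lam = (2500.0 * kappa1 * P0 / 27.0) ** 0.25

def rhs(t, u):
    eta, deta = u
    return [deta, P0 - lam*deta - kappa*eta - kappa1*eta**2]

def exact(t):
    T = np.tanh(lam * t / 10.0)
    return eta_inf * T * (2.0 - T)

# eta(0) = 0 and eta'(0) = eta_inf * lam / 5 (derivative of the kink)
sol = solve_ivp(rhs, (0.0, 20.0), [0.0, eta_inf*lam/5.0],
                dense_output=True, rtol=1e-10, atol=1e-12)
t = np.linspace(0.0, 20.0, 201)
print(np.max(np.abs(sol.sol(t)[0] - exact(t))))   # small, ~1e-9
\end{verbatim}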
\begin{figure*}[!ht] \centering \includegraphics[width=8cm]{dec_discharge.eps} \includegraphics[width=8cm]{inc_discharge.eps} \caption{The transition of the system to the new equilibrium state after a $25\%$ decrease (left) and a $25\%$ increase (right) of the mean blood flow.} \label{fig:flow_changing} \end{figure*} \section{Conclusion}\label{sec:Conclusion} A two-layer diffusion-kinetic model is proposed to describe the process of local blood flow regulation in an artery. The exact stationary distributions of the key agents, Nitric Oxide and Calcium ions, are obtained. The limit case of a thin-wall artery, under the wall thickness estimate (\ref{eq:thickness_estimation}), is studied analytically. The stability condition for the equilibrium state is given by the formula (\ref{eq:stability_criterion}). The necessity of the viscoelastic nature (non-zero viscosity) of the arterial wall for the stability of the system is shown. The minimal critical viscosity value of the wall is obtained in the linearized case. In the case of full relaxation of the smooth muscles, an exact solution in the kink form is found, which describes the passive dilation of the artery. The numerical simulation demonstrates the transition of the system to a new steady state with a new radius value in response to a change of the mean blood discharge. This result is in agreement with the experimental observations~\cite{Snow_etal2001}. It confirms the importance of the endothelium derived relaxing factor, Nitric Oxide, for arterial haemodynamics. The model can be applied to the study of the local autoregulation of the coronary, cerebral and kidney blood flow. \begin{acknowledgments} This work was supported by the International Science and Technology Center under project B1213. \end{acknowledgments}
\section{Introduction} The need for algorithms and methods that can handle large data in a distributed setting has grown significantly in recent years. Specifically, such settings arise in two prototypical scenarios: (a) induced distributed data: distributing and parallelizing computationally demanding optimization tasks over connected computational nodes using a distributed data model, and (b) intrinsically distributed data: data is collected across a connected network of sensors (e.g., mobile devices, camera networks), where some or all of the computation can be performed in individual sensor nodes without requiring centralized data pooling. Several distributed learning approaches have been proposed to meet these needs. In particular, the alternating direction method of multipliers (ADMM)~\cite{boyd2010} is an optimization technique that has often been used in computer vision and machine learning to handle model estimation and learning in either of the two large data settings~\cite{risheng2012, liansheng2012, ehsan2013, zinan2013, chunyu2014, lai2014, boussaid2014, miksik2014}. In the distributed optimization setting, the distributed nodes process data locally by solving small optimization problems and aggregate the results by exchanging the (possibly compressed) local solutions (e.g., local model parameter estimates) to arrive at a consensus global result. However, the nature of distributed learning models, particularly in the fully distributed setting where no network topology is presumed, inherently requires repetitive communication between the device nodes. Therefore, it is desirable to reduce the amount of information exchanged and simultaneously improve computational efficiency through faster convergence of such distributed algorithms. To this end, the contributions of this paper are threefold. \begin{itemize} \item We propose two variants of ADMM for consensus-based distributed learning that converge faster than the standard ADMM. Our method extends an acceleration approach for ADMM~\cite{he2000} with an efficient variable penalty parameter update strategy. This strategy results in improved convergence properties of ADMM and also works in a fully distributed fashion. \item We extend our proposed method to automatically determine the maximum number of iterations allocated to successive updates by employing a budget management scheme. This strategy results in adaptive parameter tuning for ADMM, removing the need for arbitrary parameter settings, and effectively induces a varying network communication topology. \item We apply the proposed method to a prototypical vision and learning problem, the distributed PPCA for structure-from-motion, and demonstrate its empirical utility over the traditional ADMM. \end{itemize} \section{Problem Description and Related Works} \begin{figure*}[t] \centering \begin{subfigure}[h]{0.28\textwidth} \includegraphics[trim=1cm 1cm 0cm 4cm, width=1\textwidth]{graph_CPL.pdf} \caption{Centralized} \label{fig1:cent} \end{subfigure}% \qquad \begin{subfigure}[h]{0.28\textwidth} \includegraphics[trim=1cm 3cm 0cm 2cm, width=1\textwidth]{graph_DPL.pdf} \caption{Distributed} \label{fig1:dist} \end{subfigure} \qquad \begin{subfigure}[h]{0.28\textwidth} \includegraphics[trim=1cm 3cm 0cm 2cm, width=1\textwidth]{graph_DPLANT.pdf} \caption{Proposed} \label{fig1:proposed} \end{subfigure} \caption{Centralized, distributed, and the proposed learning model in a ring network. A bigger size of $\rho_{ij}$ means that the corresponding constraint is more penalized. 
Solid edges denote currently strongly influencing edges and dotted edges indicate edges with less influence.} \label{fig:CPL_DPL_DPLANT} \end{figure*} The problem we consider in this paper can be formulated as a consensus-based optimization problem~\cite{bertsekas1989}. A general consensus-based optimization problem can be written as \begin{align} \label{eq:obj_cent} \arg\min_{\theta_{i}} &\quad \sum_{i=1}^{J} f_{i}(\theta_{i}),\quad s.t. \quad \theta_{i} = \theta_{j}, \forall i \neq j \end{align} where we want to find the set of optimal parameters $\theta_{i}, i = 1..J$, that minimizes the sum of the convex objective functions $f_{i}(\theta_{i})$, with $J$ denoting the total number of functions. This problem is typically a reformulation of a centralized optimization task $\arg \min f(\theta)$ with a decomposable objective $ f(\theta) =\sum_{i=1}^J f_i(\theta)$. Given the consensus formulation, the original problem can be solved by decomposing it into $J$ subproblems, so that $J$ processors can cooperate to solve the overall problem, with the equality constraint changed to $\theta_{i} = \bar{\theta}$, where $\bar{\theta}$ denotes a globally shared parameter. The optimization can be approached efficiently by exploiting the alternating direction method of multipliers (ADMM)~\cite{boyd2010}. The above consensus formulation is particularly suitable for many optimization problems that appear in computer vision. For instance, since $f_{i}(\theta_{i})$ can be any convex function, we can also consider a probabilistic model with the joint negative log likelihood $f_{i}(\theta_{i}) = -\log p(x_{i}, z_{i} | \theta_{i})$ between the observation $x_{i}$ and the corresponding latent variable $z_{i}$. Assuming $(x_{i}, z_{i})$ are independent and identically distributed, finding the maximum likelihood estimate of the shared parameter $\bar{\theta}$ can then be formulated as the optimization problem described above for many exponential-family parametric densities. Moreover, the function need not be a likelihood; it can also be a typical decomposable and regularized loss that occurs in many vision problems such as denoising or dictionary learning. It is often very convenient to consider the above consensus optimization problem from the perspective of optimization on graphs. For instance, the centralized i.i.d. maximum likelihood learning can be viewed as the optimization on the graph in Fig.~\ref{fig1:cent}. Edges in this graph depict functional (in)dependencies among variables, commonly found in representations such as Markov Random Fields \cite{miksik2014} or Factor Graphs \cite{bishop2006}. In this context, to fully decompose $f(\cdot)$ and eliminate the need for a processing center completely, one can introduce auxiliary variables $\rho_{ij}$ on every edge to break the dependency between $\theta_{i}$ and $\theta_{j}$~\cite{forero2011, yoon2012}, as shown in Fig.~\ref{fig1:dist}. This generalizes to arbitrary graphs, where the connectivity structure may be implied by node placement or communication constraints (camera networks), imaging constraints (pixel neighborhoods in images or frames in a video sequence), or other contextual constraints (loss and regularization structure). In general, given a connected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ with nodes $i, j \in \mathcal{V}$ and edges $e_{ij} = (i,j) \in \mathcal{E}$, the consensus optimization problem becomes \begin{align} \min \sum_{i \in \mathcal{V}} f_{i}(\theta_{i}), \quad s.t. 
\quad \theta_{i} = \rho_{ij}, \rho_{ij} = \theta_{j}, j \in \mathcal{B}_{i} \end{align} Solving this problem is equivalent to optimizing the augmented Lagrangian $\mathcal{L}(\bm{\Theta}) = \sum_{i \in \mathcal{V}}\mathcal{L}_{i}(\bm{\Theta}_{i})$, \small \begin{align} \mathcal{L}_{i}(\bm{\Theta}_{i}) &= f_i(\textbf{$\theta_i$}) + \sum_{j \in \mathcal{B}_{i}} \left\{ \lambda^\top_{ij1} (\theta_i - \rho_{ij}) + \lambda^\top_{ij2}(\rho_{ij} - \theta_j) \right\} + \frac{\eta}{2} \sum_{j \in \mathcal{B}_{i}} \left\{ \| \theta_i - \rho_{ij} \|^2 + \| \rho_{ij} - \theta_j \|^2 \right\}, \label{eq:lagrangian} \end{align} \normalsize where $\bm{\Theta} = \{\bm{\Theta}_{i}: i \in \mathcal{V}\}$, $\bm{\Theta}_{i} = \{\theta_{i}, \rho_{i}, \lambda_{i}\}$ are the parameters to find, $\lambda_{i} = \{\lambda_{ij1}, \lambda_{ij2}: j \in \mathcal{B}_i\}$, $\lambda_{ij1}\text{, }\lambda_{ij2}$ are Lagrange multipliers, $\mathcal{B}_{i} = \{j | e_{ij} \in \mathcal{E} \} $ is the set of one-hop neighbors of node $i$, $\eta > 0$ is a fixed scalar penalty parameter, and $\|\cdot\|$ is the induced norm. The ADMM approach suggests that the optimization can be done in a coordinate descent fashion, taking the gradient with respect to each variable while fixing all the others. \subsection{Convergence Speed of ADMM} The currently known convergence rate of ADMM is $O(1/T)$, where $T$ is the number of iterations~\cite{he2012}. Even though $O(1/T)$ is the best known bound, it has been observed empirically that ADMM converges faster in many applications. Moreover, the computation time per iteration may dominate the total running time of the algorithm. Thus many application-specific speed-up techniques for ADMM have been proposed. One way is to come up with a predictor-corrector step for the coordinate descent~\cite{goldstein2014} using an available acceleration method such as~\cite{nesterov1983}; this guarantees quadratic convergence for strongly convex $f_{i}(\cdot)$. Another way is to replace the gradient descent optimization with a stochastic one~\cite{ouyang2013, suzuki2013}. This approach has recently gained attention as it greatly reduces the computation per iteration. However, these methods usually require a coordinating center node and thus may not be readily applicable to the decentralized setting. Moreover, we want to preserve the application range of ADMM and avoid introducing additional assumptions on $f_{i}(\cdot)$. One way to improve the convergence speed of ADMM is through the use of a different constraint penalty in each iteration. For example,~\cite{he2000} proposed ADMM with a self-adaptive penalty, which improved the convergence speed and made the performance less dependent on the initial penalty values. The idea of \cite{he2000} is to change the constraint penalty taking into account the relative magnitudes of the \emph{primal} and \emph{dual} residuals of ADMM as follows \begin{align} \eta^{t+1} = \left\{ \begin{array}{ll} \eta^{t} \cdot (1+\tau^{t}) &\text{, if }{\| r^{t} \|}_2 > \mu {\| s^{t} \|}_2 \\[0.5em] \eta^{t} \cdot (1+\tau^{t})^{-1} &\text{, if }{\| s^{t} \|}_2 > \mu {\| r^{t} \|}_2 \\[0.5em] \eta^{t} &\text{, otherwise }\\ \end{array} \right. \label{eq:he2000} \end{align} where $t$ is the iteration index, $\mu > 1$, $\tau^{t} > 0$ are parameters, and $r^{t}$ and $s^{t}$ are the primal and dual residuals, respectively\footnote{Please refer to~\cite{boyd2010}, pages 18 and 51, for their definitions.}. 
The primal residual measures the violation of the consensus constraints and the dual residual measures the progress of the optimization in the dual space. This update converges when $\tau^{t}$ satisfies $\sum_{t = 0}^{\infty} \tau^{t} < \infty$, i.e. we stop updating $\eta^{t}$ after a finite number of iterations. Typical choices for the parameters are $\mu = 10$ and $\tau^{t} = 1$ for all iterations $t$. The strength of this approach is that conservative changes in the penalty are guaranteed to converge~\cite{rockafellar1976, boyd2010}. However, like the other ADMM speed-up approaches mentioned above, this update scheme relies on the global computation of the primal and dual residuals and requires the $\eta^{t}$ stored in the nodes to be homogeneous over the entire network; thus it is not a fully decentralized scheme. Moreover, the choice of parameters as well as the maximum number of iterations requires manual tuning. \section{Proposed Methods} We present our proposed ADMM penalty update schemes in three steps. First, we extend the aforementioned update scheme (\ref{eq:he2000}) to be applicable in a fully decentralized setting. Next, we propose a novel penalty parameter update strategy for ADMM speed-up that does not require manual tuning of $\tau^{t}$. Finally, we extend the strategy so that the maximum number of penalty update iterations is selected automatically. \subsection{ADMM with Varying Penalty (ADMM-VP)} Throughout the paper, the superscript $t$ in all terms with subscript $i$ denotes the objective function or parameter at the $t$-th iteration for node $i$. In order to extend (\ref{eq:he2000}) to a fully distributed setting, we first introduce $\eta_{i}^{t}$, the penalty for the $i$-th node at the $t$-th iteration. Next, we need to compute local primal and dual residuals for each node $i$. In the fully distributed learning framework of~\cite{forero2011, yoon2012}, the dual auxiliary variable vanishes from the derivation. However, to compute the residuals, we need to keep track of the dual variable, which is essentially the average of the local estimates, explicitly over the iterations. The squared residual norms for the $i$-th node are defined as \begin{align} \| r_{i}^{t} \|_{2}^{2} = \| \theta_{i}^{t} - \bar{\theta}_{i}^{t} \|_{2}^{2}, \quad \| s_{i}^{t} \|_{2}^{2} = (\eta_{i}^{t})^{2} \| \bar{\theta}_{i}^{t} - \bar{\theta}_{i}^{t-1} \|_{2}^{2}, \quad \bar{\theta}_{i}^{t} = \frac{1}{|\mathcal{B}_{i}|} \sum_{j \in \mathcal{B}_{i}} \theta_{j}^{t}. \end{align} Note the difference from the standard residual definitions for consensus ADMM~\cite{boyd2010}, used in (\ref{eq:he2000}), where the dual variable is considered a single, globally accessible variable $\bar{\theta}^{t}$ instead of the local $\bar{\theta}_{i}^{t}$. This allows each node to change its $\eta_{i}^{t}$ based on its own local residuals. The penalty update scheme is the same as (\ref{eq:he2000}) with $\eta^{t}$, $\| r^{t} \|_{2}$ and $\| s^{t} \|_{2}$ replaced by $\eta_{i}^{t}$, $\| r_{i}^{t} \|_{2}$ and $\| s_{i}^{t} \|_{2}$, respectively. Lastly,~\cite{he2000} stopped changing $\eta^{t}$ after $t > 50$. However, in ADMM-VP, if we stop the same way, we end up with heterogeneously fixed penalty values, which impacts the convergence of ADMM by yielding heavy oscillations near the saddle point. Therefore we reset all penalty values in all nodes to a pre-defined value (e.g. $\eta^{0}$, the initial penalty parameter) after a fixed number of iterations. As we fix the penalty values homogeneously after a finite number of iterations, the method becomes the standard ADMM from that point on, so the convergence of the ADMM-VP update is guaranteed. A sketch of the per-node update is given below. 
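A minimal Python sketch of the local ADMM-VP step at node $i$; the names are hypothetical, and \texttt{theta\_nb} stacks the estimates received from the one-hop neighbors.
\begin{verbatim}
import numpy as np

def vp_update(eta_i, theta_i, theta_nb, theta_bar_prev, mu=10.0, tau=1.0):
    theta_bar = np.mean(theta_nb, axis=0)        # local dual variable
    r = np.linalg.norm(theta_i - theta_bar)      # local primal residual
    s = eta_i * np.linalg.norm(theta_bar - theta_bar_prev)  # local dual
    if r > mu * s:
        eta_i *= (1.0 + tau)
    elif s > mu * r:
        eta_i /= (1.0 + tau)
    return eta_i, theta_bar          # keep theta_bar for iteration t+1
\end{verbatim}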
\subsection{ADMM with Adaptive Penalty (ADMM-AP)} We further extend $\eta_{i}$ by introducing a bi-directional graph with a penalty parameter $\eta_{ij}$ specific to the directed edge $e_{ij}$ from node $i$ to node $j$. The modified augmented Lagrangian $\mathcal{L}_{i}$ is similar to (\ref{eq:lagrangian}) except that we replace $\eta$ with $\eta_{ij}$. The penalty parameter controls the amount each constraint contributes to the local minimization problem. The penalty parameter $\eta_{ij}$ is determined by evaluating the parameter $\theta_{j}$ from node $j$ with the objective function $f_{i}(\cdot)$ of node $i$ as \begin{flalign} \eta^{t+1}_{ij} = \left\{ \begin{array}{ll} \eta^{0} \cdot (1+ \tau_{ij}^{t}) & \text{, if } t < t^{max} \\ [0.5em] \eta^{0} &\text{, otherwise} \end{array} \right. \label{eq:eta_update_ap} \end{flalign} where $t^{max}$ is the maximum number of iterations for the update, as proposed in~\cite{he2000}, and \begin{align} \tau_{ij}^{t} &= \frac{\kappa_{i}^{t}(\theta_{i}^{t})}{\kappa_{i}^{t}(\theta_{j}^{t})} - 1 \,, \quad \kappa_{i}^{t}( \theta ) =\left( \frac{ f_{i}^{t}(\theta) - f_{i}^{min} }{ f_{i}^{max} - f_{i}^{min} } + 1 \right)\,, \\ f_{i}^{max} &= \max \{ f_{i}^{t}(\theta_{i}^{t}), f_{i}^{t}(\theta_{j}^{t}) : j \in \mathcal{B}_{i} \}\,, \quad f_{i}^{min} = \min \{ f_{i}^{t}(\theta_{i}^{t}), f_{i}^{t}(\theta_{j}^{t}) : j \in \mathcal{B}_{i} \}\,. \end{align} The interpretation of this update strategy is straightforward. In each iteration $t$, the $i$-th node evaluates its objective using its own estimate $\theta_{i}^{t}$ and the estimates from the other nodes $\theta_{j}^{t}$ (we use $\rho_{ij}^{t}$ instead of the actual $\theta_{j}^{t}$ to retain the locality of each node with respect to its neighbors). Then, with the above update scheme, we assign more weight to a neighbor with a better parameter estimate for the local $f_{i}(\cdot)$ (i.e. a larger penalty $\eta_{ij}^{t}$ if $f_{i}(\theta_{j}) < f_{i}(\theta_{i})$). The intuition behind the ADMM-AP update is to emphasize the local optimization during the early stages and then deal with the consensus update at later stages. If all local parameters yield similarly valued local objectives $f_{i}(\cdot)$, the onus is placed on consensus. This makes ADMM-AP different from pre-initialization, which performs local optimization using the local observations while ignoring the consensus constraints. Note that unlike the update strategy of~(\ref{eq:he2000}), we do not need to specify $\tau^{t}$; the update weight is chosen automatically according to the normalized difference in the local objective evaluation among the neighboring parameters. The proposed algorithm also emphasizes the objective minimization over a minimization that solely depends on the norms of the primal and dual residuals of the constraints. The hope is that we achieve not only the consensus of the model parameters but also a \emph{good} estimate with respect to the objective. On the other hand, the convergence property of~\cite{he2000} still holds for the proposed algorithm. Following Remark 4.2 of~\cite{he2000}, the requirement for convergence is that the update ratio be fixed after some $t^{\max} < \infty$ iterations. A sketch of the per-edge weight computation is given below. 
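A minimal sketch of the ADMM-AP weights $\tau_{ij}^{t}$ at node $i$; \texttt{f\_i} is the local objective and \texttt{thetas\_nb} the neighbors' estimates (hypothetical names). Since $\kappa_{i}^{t}(\cdot) \in [1, 2]$, every returned $\tau$ lies in $[-1/2, 1]$.
\begin{verbatim}
def tau_weights(f_i, theta_i, thetas_nb):
    vals = [f_i(theta_i)] + [f_i(th) for th in thetas_nb]
    f_min, f_max = min(vals), max(vals)
    span = (f_max - f_min) or 1.0        # guard against a zero span
    kappa = lambda v: (v - f_min) / span + 1.0
    # tau_ij = kappa(theta_i) / kappa(theta_j) - 1, one per neighbor
    return [kappa(vals[0]) / kappa(v) - 1.0 for v in vals[1:]]
\end{verbatim}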
Moreover, the proposed update ensures the bound $\eta_{ij}^{t+1} / \eta_{ij}^{t} \in [0.5, 2]$, which matches the increase and decrease amounts suggested in~\cite{he2000, boyd2010}. One may use $t^{\max} = 50$ as in~\cite{he2000}. \subsection{ADMM with Network Adaptive Penalty (ADMM-NAP)} To extend the proposed method so that the maximum number of penalty updates is decided automatically, the penalty update for ADMM becomes \begin{flalign} \eta^{t+1}_{ij} = \left\{ \begin{array}{ll} \eta^{0} \cdot (1+ \tau_{ij}^{t} ) & \text{, if } \sum_{u=1}^{t} |\tau_{ij}^{u}| < \mathcal{T}_{ij}^{t} \\ [0.5em] \eta^{0} &\text{, otherwise}. \end{array} \right. \label{eq:eta_update_nap} \end{flalign} Fig.~\ref{fig1:proposed} depicts how the proposed model has a different structure from the centralized and traditional distributed models, and how the nodes share their parameters via the network. In addition to the adaptive penalty update, the inequality condition on the summation of $\tau_{ij}^{u}, u = 1..t$, encodes the spent \emph{budget} with which the edge $e_{ij}$ can change $\eta_{ij}$. Every edge has an upper bound $\mathcal{T}_{ij}^{t}$, and every time it makes a change to $\eta_{ij}$ it has to \emph{pay} exactly the amount of the change. If the edge has changed too much, too often, the update strategy blocks the edge from changing $\eta_{ij}$ any further. The update scheme is guaranteed to converge if $\mathcal{T}_{ij}^{t}$ is simply set to a constant $\mathcal{T}$ for all $i, j, t$, or if $\tau_{ij}^{t} = 0$ for $t > t^{max}$. However, with a different objective function and different network connectivity, a different upper bound should be imposed. This is because a given upper bound $\mathcal{T}$ or maximum iteration $t^{max}$ could be too small for a certain node to take full advantage of our adaptation strategy, or it could be too big, so that convergence is much slower because of the continuously changing $\eta_{ij}^{t}$. To this end, we propose the following updating strategy for $\mathcal{T}_{ij}^{t}$: \begin{flalign} \mathcal{T}_{ij}^{t+1} = &\left\{ \begin{array}{ll} \mathcal{T}_{ij}^{t} + \alpha^{n} \mathcal{T} &\text{, if } \sum_{u=1}^{t} |\tau_{ij}^{u}| \geq \mathcal{T}_{ij}^{t} ~~\text{and } | f_{i}(\theta_{i}^{t}) - f_{i}(\theta_{i}^{t-1}) | > \beta \\ [1em] \mathcal{T}_{ij}^{t} & \text{, otherwise }\\ \end{array} \right. \label{eq:T_update} \end{flalign} where $\mathcal{T}_{ij}^0$ is set by an initial parameter $\mathcal{T}$ and $\alpha, \beta \in (0, 1)$ are parameters. Whenever $\mathcal{T}_{ij}^{t+1} > \mathcal{T}_{ij}^{t}$, we increase $n$ by 1. Once $\sum_{u=1}^{t} |\tau_{ij}^{u}| \geq \mathcal{T}_{ij}^{t}$ but the objective value is still changing significantly, i.e. $| f_{i}(\theta_{i}^{t}) - f_{i}(\theta_{i}^{t-1}) | > \beta$, $\mathcal{T}_{ij}^{t+1}$ is increased by $\alpha^n \mathcal{T}$. Note that the independent upper bound $\mathcal{T}_{ij}^{t}$ for each $\eta_{ij}^{t}$ update on the edge $e_{ij}$ makes the scheme adaptive to various network topologies, while it still satisfies the convergence condition because \begin{flalign} \lim_{t \rightarrow \infty} \mathcal{T}_{ij}^{t} \leq \sum_{n=1}^{\infty} \alpha^{n-1}~\mathcal{T} = \frac{1}{1-\alpha}\mathcal{T}. \end{flalign} 
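A sketch of the budgeted update (\ref{eq:eta_update_nap})--(\ref{eq:T_update}) for one directed edge; \texttt{state} is a hypothetical per-edge record, and the exact order of the budget check relative to the current $\tau$ is a simplifying assumption of this sketch.
\begin{verbatim}
def nap_update(state, tau, f_curr, f_prev, eta0, T, alpha=0.5, beta=1e-3):
    # state: {"spent": sum of |tau| so far, "cap": current T_ij,
    #         "n": number of cap increases so far}
    state["spent"] += abs(tau)
    if state["spent"] < state["cap"]:
        return eta0 * (1.0 + tau)       # budget left: adapt the penalty
    # Budget exhausted: fall back to eta0, but grow the cap while the
    # local objective is still changing significantly.
    if abs(f_curr - f_prev) > beta:
        state["n"] += 1
        state["cap"] += (alpha ** state["n"]) * T
    return eta0
\end{verbatim}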
\subsection{Combined Update Strategies (ADMM-VP + AP, ADMM-VP + NAP)} Observing (\ref{eq:he2000}) and the proposed update schemes (\ref{eq:eta_update_ap}) and (\ref{eq:eta_update_nap}), one can easily come up with a combined update strategy by replacing $\tau^{t}$ in (\ref{eq:he2000}) with $\tau_{ij}^{t}$. Based on preliminary experiments, we found that this replacement yields little utility. Instead, we suggest another penalty update strategy combining ADMM-VP and ADMM-AP as \begin{align} \eta_{ij}^{t+1} = \left\{ \begin{array}{ll} \eta_{ij}^{t} \cdot (1 + \tau_{ij}^{t}) \cdot 2 &\text{, if }{\| r_{i}^{t} \|}_2 > \mu {\| s_{i}^{t} \|}_2 \\[0.5em] \eta_{ij}^{t} \cdot (1 + \tau_{ij}^{t}) \cdot (1/2) &\text{, if }{\| s_{i}^{t} \|}_2 > \mu {\| r_{i}^{t} \|}_2 \\[0.5em] \eta_{ij}^{t} &\text{, otherwise }\\ \end{array} \right. \label{eq:combined} \end{align} which we denote as ADMM-VP + AP. We reset $\eta_{ij}^{t} = \eta^{0}$ when $t > t^{\max}$. In order to combine ADMM-VP and ADMM-NAP, we additionally impose the summation condition on $\tau_{ij}^{t}$ as in (\ref{eq:eta_update_nap}). We denote this strategy as ADMM-VP + NAP. \section{Distributed Maximum Likelihood Learning} \label{sec:dpl} In this section, we show how our method can be applied to an existing distributed learning framework, in the context of distributed probabilistic principal component analysis (D-PPCA). D-PPCA can be viewed as a fundamental approach to the general matrix factorization task in the presence of potentially missing data, with many applications in machine learning. \subsection{Probabilistic Principal Component Analysis} Probabilistic PCA (PPCA) \cite{tipping1999} has many applications in vision problems, including structure from motion, dictionary learning, image inpainting, etc. We here restrict our attention to the linear PPCA without loss of generality. The centralized PPCA is formulated as the task of generating the observation $\mathbf{x}$ according to $ \mathbf{x} = \mathbf{W} \mathbf{z} + \bm{\mu} + \bm{\epsilon} $, where $\mathbf{x} \in \mathbb{R}^{D}$ is the observation column vector, $\mathbf{z} \in \mathbb{R}^{M}$ is the latent variable following $\mathbf{z} \sim\mathcal{N}(\mathbf{0},\mathbf{I})$, $\mathbf{W} \in \mathbb{R}^{D \times M}$ is the projection matrix relating $\mathbf{z}$ to $\mathbf{x}$, $\bm{\mu} \in \mathbb{R}^{D}$ allows a non-zero mean, and the Gaussian observation noise is $\bm{\epsilon}\sim\mathcal{N}(\mathbf{0},a^{-1}\mathbf{I})$ with noise precision $a$. When $a^{-1} = 0$, PPCA recovers the standard PCA. The posterior of the latent variable $\mathbf{z}$ given the observation $\mathbf{x}$ is \begin{flalign} p( \mathbf{z}|\mathbf{x} ) \sim \mathcal{N}(\mathbf{M}^{-1}\mathbf{W}^\top ( \mathbf{x}- \bm{\mu} ), a^{-1} \mathbf{M}^{-1}), \label{eq:posterior_zx} \end{flalign} where $\mathbf{M}=\mathbf{W}^\top\mathbf{W}+a^{-1}\mathbf{I}$. The parameters $\mathbf{W}$, $\bm{\mu}$, and $a$ can be estimated using a number of methods, including SVD and the Expectation Maximization (EM) algorithm. \subsection{Distributed PPCA} The distributed extension of PPCA (D-PPCA)~\cite{yoon2012} can be derived by applying ADMM to the centralized PPCA model above. Each node learns a local copy of the PPCA parameters from its set of local observations $\mathbf{X}_i = \{ \mathbf{x}_{in} | n = 1..N_{i} \}$, where $\mathbf{x}_{in}$ denotes the $n$-th observation in the $i$-th node and $N_{i}$ is the number of observations available at the node. The nodes then exchange the parameters using the Lagrange multipliers and impose consensus constraints on the parameters. The global constrained optimization is \begin{align} \label{eq:obj} \min_{\bm{\Theta}_i}\, -\log p(\mathbf{X}_i | \bm{\Theta}_i)\quad s.t. 
&\quad \bm{\Theta}_i = \rho_{ij}^{\bm{\Theta}}, \rho_{ij}^{\bm{\Theta}} = \bm{\Theta}_j, \end{align} where $i \in \mathcal{V}, j \in \mathcal{B}_i$, $\bm{\Theta}_i = \{\mathbf{W}_i, \bm{\mu}_i, a_i\}$ is the set of local parameters and $\rho_{ij}^{\bm{\Theta}} = \{ \rho_{ij}^{\mathbf{W}}, \rho_{ij}^{\bm{\mu}}, \rho_{ij}^{a} \}$ is the set of auxiliary variables for the parameters. For the details regarding how the decentralized model is optimized, see~\cite{yoon2012}. \vspace{-0.1in} \subsection{D-PPCA with Network Adaptive Penalty} The augmented Lagrangian applying the proposed ADMM with Network Adaptive Penalty is similar to that of~\cite{yoon2012} except that $\eta$ becomes $\eta_{ij}$, with $\lambda_{i}$, $\gamma_{i}$, $\beta_{i}$ denoting the Lagrange multipliers for the PPCA parameters of node $i$. The adaptive penalty $\eta_{ij}^{t}$ controls the speed of parameter propagation dynamically, so that the overall optimization empirically converges faster than~\cite{yoon2012}. One can solve this optimization using the distributed EM approach~\cite{forero2011}. The E-step of the D-PPCA is the same as its centralized counterpart~\cite{tipping1999}. The M-step is similar to~\cite{yoon2012} except that we use a separate $\eta_{ij}$ for each edge. Since the update formulas for the three parameters are similar, we present the $\bm{\mu}_{i}$ update as an example. First, $\bm{\mu}_{i}$ can be updated as {\small \begin{flalign} \label{eq:mu} &\bm{\mu}_i^{t+1} = \left\{ a_i \sum_{n=1}^{N_i}\left( \textbf{x}_{in} - \textbf{W}_i \mathbb{E}[\textbf{z}_{in}] \right) - 2\gamma_i^{t} + \sum_{j \in \mathcal{B}_i} \eta_{ij}^{t} \left( \bm{\mu}_i^{t} + \bm{\mu}_j^{t} \right) \right\} \cdot \left( N_i a_i + 2\sum_{j \in \mathcal{B}_i} \eta_{ij}^{t} \right)^{-1}, \end{flalign} }% where $\mathbb{E}[\mathbf{z}_{in}]$ denotes the posterior estimate of the $n$-th latent variable of node $i$. Note that unlike D-PPCA, where the normalization factor was computed as $N_{i} a_{i} + 2 \eta | \mathcal{B}_{i} |$ with $| \cdot |$ denoting the cardinality, here we add up $\eta_{ij}^{t}, \forall j \in \mathcal{B}_{i}$. The corresponding Lagrange multiplier can be computed as a penalty-weighted summation of consensus errors, $\gamma_i^{t+1} = \gamma_i^{t} + (1/2)\sum_{j \in \mathcal{B}_i} \eta_{ij}^{t} \left( \bm{\mu}_i^{t+1} - \bm{\mu}_j^{t+1} \right)$. Once all the parameters and the Lagrange multipliers are updated, we update $\eta_{ij}$ and $\mathcal{T}_{ij}$ using (\ref{eq:eta_update_nap}) and (\ref{eq:T_update}), respectively. Algorithm 1 in the appendix summarizes the overall steps of the D-PPCA with Network Adaptive Penalty; a minimal sketch of the $\bm{\mu}_i$ update follows.
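To make the per-edge bookkeeping of the M-step concrete, the following numpy sketch implements the $\bm{\mu}_i$ update (\ref{eq:mu}) and the corresponding multiplier update. The code is illustrative, not the authors' implementation: \texttt{X\_i} is the $D \times N_i$ local data matrix, \texttt{Ez} holds the $M \times N_i$ posterior means $\mathbb{E}[\mathbf{z}_{in}]$, and \texttt{eta} maps each neighbor $j$ to the current per-edge penalty $\eta_{ij}^{t}$. \begin{verbatim}
import numpy as np

def update_mu(X_i, W_i, Ez, a_i, gamma_i, mu_i, mu_nbrs, eta):
    # mu_nbrs: dict mapping each neighbor j to its current mean mu_j^t.
    N_i = X_i.shape[1]
    data_term = a_i * (X_i - W_i @ Ez).sum(axis=1)  # a_i sum_n (x_in - W_i E[z_in])
    consensus = sum(eta[j] * (mu_i + mu_j) for j, mu_j in mu_nbrs.items())
    denom = N_i * a_i + 2.0 * sum(eta.values())     # per-edge sums replace 2*eta*|B_i|
    return (data_term - 2.0 * gamma_i + consensus) / denom

def update_gamma(gamma_i, mu_i_new, mu_nbrs_new, eta):
    # Penalty-weighted summation of the consensus errors.
    return gamma_i + 0.5 * sum(eta[j] * (mu_i_new - mu_j)
                               for j, mu_j in mu_nbrs_new.items())
\end{verbatim}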
\section{Experiments} We first analyze and compare the proposed methods (ADMM-VP, ADMM-AP, ADMM-NAP, ADMM-VP + AP, ADMM-VP + NAP) with the baseline method using synthetic data. Next, we apply our method to a distributed structure from motion problem using two benchmark real-world datasets. For the baseline, we compare with the standard ADMM-based D-PPCA~\cite{yoon2012}, denoted as \texttt{ADMM}. Unless noted otherwise, we used $\eta^{0} = 10$. To assess convergence, we compare the relative change of~(\ref{eq:obj}) to a fixed threshold ($10^{-3}$ in this case) for the D-PPCA experiments, as in~\cite{yoon2012}. \subsection{Synthetic Data} \begin{figure*}[t] \centering $\begin{array}{c c c} \begin{subfigure}{0.31\textwidth} \includegraphics[width=1\textwidth]{Synthetic_Gaussian_Random_Subspace_Error_ETA10_Nodes12_Complete.eps} \caption{12 nodes (complete)} \label{fig_s1} \end{subfigure} & \begin{subfigure}{0.31\textwidth} \includegraphics[width=1\textwidth]{Synthetic_Gaussian_Random_Subspace_Error_ETA10_Nodes16_Complete.eps} \caption{16 nodes (complete)} \label{fig_s2} \end{subfigure} & \begin{subfigure}{0.31\textwidth} \includegraphics[width=1\textwidth]{Synthetic_Gaussian_Random_Subspace_Error_ETA10_Nodes20_Complete.eps} \caption{20 nodes (complete)} \label{fig_s3} \end{subfigure} \\ \begin{subfigure}{0.31\textwidth} \includegraphics[width=1\textwidth]{Synthetic_Gaussian_Random_Subspace_Error_ETA10_Nodes20_Ring.eps} \caption{20 nodes (ring)} \label{fig_s4} \end{subfigure} & \begin{subfigure}{0.31\textwidth} \includegraphics[width=1\textwidth]{Synthetic_Gaussian_Random_Subspace_Error_ETA10_Nodes20_Cluster.eps} \caption{20 nodes (cluster)} \label{fig_s5} \end{subfigure} & \begin{subfigure}{0.25\textwidth} \includegraphics[width=1\textwidth]{Synthetic_Gaussian_Random_Subspace_Error_ETA10_Nodes4_Complete.eps} \label{fig_s6} \end{subfigure} \\ \\ \end{array}$ \caption{The comparison of the proposed methods and the baseline ADMM using the subspace angle error of the projection matrix with (a-c) different graph sizes and (c-e) different network topologies} \label{fig:synthetic1} \vspace{-0.2in} \end{figure*} We generated 500 samples of 20-dimensional observations from a 5-dimensional subspace following $\mathcal{N}(\mathbf{0}, \mathbf{I})$, with Gaussian measurement noise following $\mathcal{N}(\mathbf{0}, 0.2 \cdot \mathbf{I})$. For the distributed settings, the samples are assigned evenly to the nodes. All experiments are run with 20 independent random initializations. We measured the number of iterations to convergence and the maximum subspace angle error versus the ground truth, defined as the maximum of the subspace angles between each node's projection matrix and the ground truth projection matrix. We examined the impact of different graph topologies and different graph sizes. We tested three network topologies: complete, ring, and cluster (a connected graph consisting of two complete graphs linked by an edge). For the graph size, we tested settings with 12, 16, and 20 nodes. The top three plots in Fig.~\ref{fig:synthetic1} depict results over a varying number of nodes with the graph topology fixed to the complete graph. We plot the median result out of the 20 independent initializations. We observed that the speed-up with the proposed method, particularly for ADMM-VP and its variants, becomes more significant as the number of nodes increases. This suggests the proposed method can be of particular use as the size of an application problem increases. Fig.~\ref{fig_s3} to Fig.~\ref{fig_s5} show the performance in the context of different network topologies. Our proposed methods converge faster than or at the same rate as the standard ADMM. The proposed method works most robustly in the complete graph setting; in other words, as the graph connectivity increases, the convergence property of the proposed method improves. Note also that ADMM-VP works best in the complete graph, while ADMM-AP / NAP are better than ADMM-VP in weakly connected networks.
This makes sense, as ADMM-VP depends on residual computation, and the proposed local residual computation becomes less accurate than in the complete graph case, where the global residual can be computed. \subsection{Distributed Affine Structure from Motion} We tested the performance of our method on five objects from the Caltech Turntable dataset~\cite{Moreels2007} and on the Hopkins 155 dataset~\cite{tron2007}, as in~\cite{yoon2012}. The goal here is to jointly estimate the 3D structure of the objects as well as the camera motion, but in a distributed camera network setting. The input measurement matrix is of size $2F \times N$, where $F$ denotes the number of frames and $N$ denotes the number of points. By applying PCA, we can decompose the input into the camera pose $\mathbf{W}_{i}$ and the 3D structure $\mathbb{E}[\mathbf{z}_{in}], n = 1..N_{i}$. For the detailed experimental setting, refer to~\cite{tron2011, yoon2012}. As the performance measure, we used the maximum subspace angle error versus the centralized SVD-reconstructed structure. The network setting assumes five cameras. Fig.~\ref{fig:caltech} shows the results on the Caltech Turntable dataset. First, we compare Fig.~\ref{fig_c1} and Fig.~\ref{fig_c2}. One can see that when the graph is less connected (Fig.~\ref{fig_c1}), the proposed adaptive penalty method can boost ADMM-VP, which cannot utilize the full residual information available in the fully connected case (Fig.~\ref{fig_c2}), as explained for the synthetic data experiments. Next, we compare Fig.~\ref{fig_c2} and Fig.~\ref{fig_c3}. The network topologies are the same (complete), but the $t^{\max}$ value used by ADMM-VP, ADMM-AP, and ADMM-VP + AP differs between these two groups of experiments. When $t^{\max} = 50$ (Fig.~\ref{fig_c2}), all methods can accelerate throughout the iterations. However, when $t^{\max} = 5$ (Fig.~\ref{fig_c3}), the methods that depend on $t^{\max}$ cannot accelerate after 5 iterations, thus showing behavior similar to the baseline ADMM. On the other hand, the ADMM-NAP based methods can accelerate by adaptively modifying the maximum number of penalty updates. Note that one can choose any small value of $\mathcal{T}$, and $\mathcal{T}_{ij}$ is increased automatically using (\ref{eq:T_update}). \begin{figure*}[t] \centering $\begin{array}{c c c c c} \begin{subfigure}[h]{0.27\textwidth} \includegraphics[width=1\textwidth]{Tmax50/Caltech_Subspace_Error_ETA10_Obj4_Ring.eps} \caption{$t^{max} = 50$ (ring)} \label{fig_c1} \end{subfigure} & \begin{subfigure}[h]{0.27\textwidth} \includegraphics[width=1\textwidth]{Tmax50/Caltech_Subspace_Error_ETA10_Obj4_Complete.eps} \caption{$t^{max} = 50$ (complete)} \label{fig_c2} \end{subfigure} & \begin{subfigure}[h]{0.27\textwidth} \includegraphics[width=1\textwidth]{Tmax5/Caltech_Subspace_Error_ETA10_Obj4_Complete.eps} \caption{$t^{max} = 5$ (complete)} \label{fig_c3} \end{subfigure} \end{array}$ \caption{The comparison of the proposed methods and the baseline ADMM using the subspace angle error of the reconstructed 3D structure with one object in the Caltech dataset (Standing). Results on the remaining four objects can be found in the appendix. See Fig.~\ref{fig:synthetic1} for the plot labels.} \label{fig:caltech} \vspace{-0.25in} \end{figure*} For the Hopkins 155 dataset, we compared the methods on 135 objects using the same approach as~\cite{yoon2012}. For each method considered, we computed the mean number of iterations until convergence.
Since some objects in the dataset are point trajectories of non-rigid structures, it is inevitable that simple linear models fail on those objects. We therefore omitted objects that yielded errors of more than 15 degrees when calculating the mean. For each object, we tested 5 independent random initializations. For ADMM-AP, ADMM-NAP, and ADMM-VP + NAP, we found no significant speed-up over the baseline ADMM. For ADMM-VP and ADMM-VP + AP, we obtained 40.2\% and 37.3\% speed-ups, respectively, on the complete network. On the ring network, the improvement is smaller. This small or absent speed-up is mainly due to the fact that the baseline ADMM already converges quickly (typically in $< 100$ iterations), leaving little room for the proposed methods to speed up the optimization. As observed in the synthetic experiments and on the Caltech dataset, the acceleration of the proposed methods occurs in the earlier iterations of the optimization. Thus, if one can come up with a better convergence-checking criterion for the application at hand, the proposed methods can be a very viable choice due to their parameter-free nature. \vspace{-0.1in} \section{Conclusion} We introduced novel adaptive penalty update methods for ADMM that can be applied to consensus distributed learning frameworks. Contrary to previous approaches, our adaptive penalty update methods, ADMM-AP and ADMM-NAP, do not depend on parameters that require manual tuning. Using both synthetic and real data experiments, we showed the empirical effectiveness of the methods over the baseline. In addition, we found that the performance of ADMM-VP decreases on weakly connected graphs, and in those cases ADMM-AP and ADMM-NAP can be useful. The proposed methods do leave some room for improvement. For problems where the standard ADMM already converges quickly, the proposed methods may show only insignificant gains. A better convergence criterion may help stop the proposed algorithms at earlier iterations (e.g., a criterion that stops the algorithms early enough to remove the long tails in Fig.~\ref{fig_s2} or Fig.~\ref{fig_s3}).
\section{Introduction} A gravitational system (see, e.g., \cite{Birrell,Barvinsky:1985an,Buchbinder,Frolov,Donoghue:1994dn,Mukhanov} for reviews) is much subtler and more complex than a non-gravitational one in many ways. This aspect is manifest in various forms, most notably in the challenges of quantization, which in turn have spawned various obstructions. One can easily name several areas in which a firmer grasp of the quantization would better position one for a more complete treatment. Any study in which the back-reaction of the metric plays (or is expected to play) an important role, in particular the study of black hole information, is an example. The cosmological constant problem is also likely to benefit, since a complete understanding of the vacuum energy must be accompanied by an account of its quantum shift. Much of the difficulty in the quantization must be attributed to the large amount of gauge symmetry, the diffeomorphisms. Therefore, one can reasonably expect that the key to the puzzle should lie in a proper handling of the gauge symmetry. It has recently been realized that the diffeomorphism symmetry can be tamed in a manner that accomplishes the long-sought renormalizability of gravity (in its physical sector) \cite{Park:2014tia}. The renormalization procedures of pure Einstein gravity and of an Einstein-scalar system have been carried out in \cite{Park:2014noa,Park:2015ota,Park:2015xoa} and \cite{Park:2015ybl,Park:2016zgt}, respectively. We extend and expand those analyses to an Einstein-Maxwell system in this work. The difficulties in a gravitational system could also foster great opportunities for understanding Nature, as, for instance, in holography. As is often the case (although nevertheless surprising if true), all these different aspects may not be unrelated and may well in fact hinge closely on one another. Our recent works on the gravity quantization were motivated by the black hole information problem. While working on the quantization, we have come to realize that our understanding of the boundary conditions and dynamics is as yet incomplete; a more systematic and sound analysis of the boundary conditions needs to precede \cite{Park:2013iqa,Park:2016fxc,Park:2016vam,Nurmagambetov:2018het}\footnote{As far as we are aware, its seriousness and importance have not, up until recently, been accordingly stressed. (See the recent work by Witten, \cite{Witten:2018lgb}, for a discussion of the boundary conditions.)} a complete treatment of the quantization. We have raised the possibility that information may be bleached through a quantum gravitational process in the vicinity of the horizon and released before the entry of the matter into the horizon \cite{Park:2013rm,Park:2017wiw}. The cosmological constant is generically generated by loop effects \cite{Park:2016zgt}, as will be reviewed below, and contributes to the generation of time-dependent solutions that in turn are linked with the black hole information \cite{Nurmagambetov:2018het}. The divergence analysis of an Einstein-Maxwell system was carried out long ago in an extensive work by Deser and van Nieuwenhuizen \cite{Deser:1974cz}. The counter-terms to the ultraviolet divergences were determined essentially by dimensional analysis and covariance. In our approach they are directly calculated in the Feynman diagrammatic method, with the results complementing their work in several aspects.
We will also see, as a byproduct, how the long-known gauge dependence issue \cite{Vilkovisky:1984st,Fradkin:1983nw,Huggins:1987zw,Odintsov:1989gz,Odintsov:1991fk} arises and is cleared up (at least in the present framework). The case of an Einstein-Maxwell system carries several more immediate significances from our perspective. Firstly, the matter part itself is a gauge system, and this poses additional hurdles; overcoming them should constitute meaningful progress in the field. Secondly, it is in this work that the field-redefinition-based renormalization program is more thoroughly carried out: the focus of \cite{Park:2016zgt} was on establishing the {\em renormalizability} of a gravity-matter system. A detailed and explicit analysis of, e.g., the running of the coupling constants was not conducted. In this work, the running of the cosmological constant and Newton's constant is addressed in much detail. Since the renormalization involves a field redefinition, which is not necessary in the usual renormalizable theories, the explicit steps of the renormalization are worth presenting - all of the required steps are taken in the present work. Further, the predictability of the theory - brought along by the renormalizability - is also explicitly addressed. \vspace{.3in} The paper is organized as follows. \vspace{.1in} \noindent In section 2, we outline the one-loop renormalization procedure in a general background metric $g_{\m\n}$ that denotes a solution of the metric field equation. The analysis should make it clear that the methodology can be applied to an arbitrary background $g_{\m\n}$. The first several relatively simple diagrams and their relevant vertices are identified. In section 3, we carry out the explicit one-loop counter-term computation by taking $g_{\m\n}=\h_{\m\n}$. A certain diagram yields a non-covariant expression, and its inspection leads to a connection with the long-known problem of the gauge choice-dependence of the effective action. The gauge choice-dependence is then resolved. The origin of the gauge choice-dependence is found in a limitation of the background field method (BFM), which can alternatively be viewed as a reflection of the complexity of a gravitational system. In section 4 we consider the renormalization of the cosmological constant and Newton's constant. The vacuum-to-vacuum and tadpole diagrams are responsible for their renormalization. Unlike in a non-gravitational theory, the tadpole diagrams play a potentially important role. There are several technical subtleties, some of which have to do with dimensional regularization: the flat propagator yields vanishing results for the vacuum-to-vacuum and tadpole diagrams. The shifts in the cosmological and Newton's constants are introduced through finite renormalization. We show that the original Einstein-Hilbert action with the counter-terms can be rewritten as another Einstein-Hilbert action in terms of a redefined metric. Several ramifications, including the theory's predictability, are discussed. Section 5 contains a summary and future directions. We contemplate several possible procedures of renormalization. We also comment on the higher-loop extension of the present work. \section{Loop computation setup} The preliminary step for renormalization is to compute the one-particle-irreducible (1PI) effective action in the given background.
(See, e.g., \cite{Kallosh:1978wt}\cite{Capper:1984qq}\cite{Buchbinder}\cite{Antoniadis:1995fc} for reviews of various methods of computing the effective action.) In this section, we lay out the broader outline of the counter-term computation in an arbitrary background $g_{\m\n}$ before getting into the flat case in the next section. We will focus on several two-point amplitudes. Let us consider the Einstein-Maxwell action,\footnote{To carry out renormalization, one starts with the renormalized form of the action: \begin{eqnarray} S=\int \sqrt{-\hat{g}_r}\;\Big(\fr1{\k_r^2} \hat{R}_r-\fr14 \hat{F}_{r\m\n}^2 \Big) \label{EM} \end{eqnarray} where the renormalized quantities are indicated by the subscript $r$, which has been omitted in \rf{EM2} for simplicity of notation.} \begin{eqnarray} S=\int \sqrt{-\hat{g}}\;\Big(\fr1{\k^2}\hat{R}-\fr14 \hat{F}_{\m\n}^2 \Big) \label{EM2} \end{eqnarray} For the perturbative analysis in the background field method (BFM), introduce the fluctuation fields, $(h_{\m\n}, a_\m)$, according to \begin{eqnarray} \hat{g}_{\m\n}\equiv h_{\m\n}+\tilde{g}_{\m\n}\quad,\quad \hat{A}_\m \equiv a_\m+\tilde{A}_\m \label{gshift} \end{eqnarray} The graviton propagator associated with the traceless fluctuation mode \cite{Park:2014tia,Park:2015ota,Park:2015xoa} (see also \cite{Morris:2018axr}) can be written as \begin{eqnarray} <h_{\m\n}(x_1)h_{\rho\s}(x_2)>&=& \tilde{P}_{\m\n\rho\s}\, \tilde{\D}(x_1-x_2) \label{h2pt} \end{eqnarray} where the tensor $\tilde{P}_{\m\n\rho\s}$ is given by \begin{eqnarray} \tilde{P}_{\m\n\rho\s} &\equiv& \fr{(2\k^2)}2\Big(\tilde{g}_{\m\rho}\tilde{g}_{\n\s}+\tilde{g}_{\m\s}\tilde{g}_{\n\rho} - \fr12\tilde{g}_{\m\n}\tilde{g}_{\rho\s}\Big); \label{fpt} \end{eqnarray} $\tilde{\D}(x_1-x_2)$ is the Green's function for a scalar theory in the background metric $\tilde{g}_{\m\n}$. (There is of course also the full propagator for the vector field; we will focus on the graviton sector.) It turns out to be convenient to employ two different layers of perturbation. As we will see, it is possible to formally construct $\tilde{\D}(x_1-x_2)$ in closed form; one may compute some of the diagrams by employing the full propagator \rf{h2pt} (as well as the full propagator of the Maxwell sector) - which we call the ``first-layer" perturbation. For other diagrams such as the vacuum-to-vacuum amplitudes, one may employ the ``second-layer" perturbation\footnote{The second-layer perturbation is not necessary in non-gravitational theories.} by splitting $\tilde{g}_{\m\n}, \tilde{A}_\m$ into \begin{eqnarray} \quad \tilde{g}_{\m\n} \equiv \varphi_{\m\n}+g_{\m\n}\quad,\quad \tilde{A}_\m \equiv A_\m+A_{0\m} \label{split} \end{eqnarray} where $\varphi_{\m\n},A_\m$ represent the background fields and $g_{\m\n}, A_{0\m}$ the classical solutions. (For instance, we will take $g_{\m\n}=\eta_{\m\n}, A_{0\m}=0$ in section 3.) For most of the diagrams that we will consider, the structures of the vertices allow one to approximate $\tilde{P}_{\m\n\rho\s}$, to the given order, by \begin{eqnarray} \tilde{P}_{\m\n\rho\s} \simeq P_{\m\n\rho\s} \equiv \fr{(2\k^2)}2\Big(g_{\m\rho}g_{\n\s}+g_{\m\s}g_{\n\rho} - \fr12g_{\m\n}g_{\rho\s}\Big) \label{apt} \end{eqnarray} where $P_{\m\n\rho\s}$ is the leading order of the $\varphi_{\m\n}$-expansion of $\tilde{P}_{\m\n\rho\s}$. (We will also see the use of the full tensor $\tilde{P}_{\m\n\rho\s}$ in one of the computations, the first-layer perturbation example.)
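As a quick machine check of the tensor structure in \rf{fpt} and \rf{apt} - hypothetical code, not part of the computations of this paper - one can verify numerically that the tensor is traceless in $D=4$, $g^{\m\n}P_{\m\n\rho\s}=0$, so that the propagator indeed carries only the traceless mode: \begin{verbatim}
import numpy as np

# Illustrative check (not from the paper's code): in D = 4 the structure
# g_mr g_ns + g_ms g_nr - (1/2) g_mn g_rs is traceless, g^{mn} P_{mnrs} = 0.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # flat background metric
eta_inv = np.linalg.inv(eta)

P = (np.einsum('mr,ns->mnrs', eta, eta)
     + np.einsum('ms,nr->mnrs', eta, eta)
     - 0.5 * np.einsum('mn,rs->mnrs', eta, eta))  # overall (2 kappa^2)/2 dropped

trace = np.einsum('mn,mnrs->rs', eta_inv, P)
assert np.allclose(trace, 0.0)
\end{verbatim} (In $D=4-2\epsilon$ the same trace is instead proportional to $\epsilon\, g_{\rho\s}$, a fact to keep in mind in dimensional regularization.)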
For the divergence analysis one can use $\tilde{\D}(x_1-x_2)\simeq \D(x_1-x_2)$ where $ \D(x_1-x_2)$ denotes the scalar propagator for $g_{\m\n}=\h_{\m\n}$, \begin{eqnarray} \D(x_1-x_2)=\int \fr{d^4k}{(2\pi)^4}\fr{e^{ik\cdot (x_1-x_2)}}{i k^2} \end{eqnarray} In this ``bottom-up" approach, the quantities that one intends to calculate in the first-layer perturbation can be calculated through the second-layer perturbation. Dimensional analysis and 4D covariance provide useful consistency checks, as will be demonstrated in section 3. \vspace{.1in} Let us expand the action in terms of the fluctuation fields $h_{\m\n}, a_\m$. Including the gauge-fixing and ghost terms, one gets \begin{eqnarray} S=\int \Big( \fr1{\k^2}{\cal L}_{grav}+{\cal L}_{matter}\Big) \label{combinedaction} \end{eqnarray} where\footnote{The $\tilde{R}_{\m\n}\bar{C}^\m C^\n$ term of the gravity sector action ${\cal L}_{grav}$ presented in \cite{Park:2016zgt} has a sign error due to mixed conventions. It, together with the affected equations, has been corrected in \cite{Park:2015ota}.} \begin{eqnarray} &&\k^2{\cal L}_{grav} =\fr1{2} \sqrt{-\tilde{g}}\,\Big( -\fr12\tilde{\nabla}_\g h^{\a\b}\tilde{\nabla}^\g h_{\a\b}+\fr14 \tilde{\nabla}_\g h^{\a}_\a \tilde{\nabla}^\g h^{\b}_\b \nonumber\\ &&+h_{\a\b}h_{\g\d}\tilde{R}^{\a\g\b\d}-h_{\a\b}h^{\b}{}_\g \tilde{R}^{\k\a\g}{}_{\k} { -}h^{\a}{}_{\a}h_{\b\g}\tilde{R}^{\b\g}-\fr12 h^{\a\b}h_{\a\b}\tilde{R} +\fr14 h^{\a}_\a h^{\b}_\b \tilde{R} +\cdots\Big) \nonumber\\ &&-\tilde{\nabla}^\n \bar{C}^\m \tilde{\nabla}_\n C_\m { +}\tilde{R}_{\m\n}\bar{C}^\m C^\n -\w^* \tilde{\nabla}^\m\tilde{F}_{\m\n}C^\n-\w^* \tilde{F}_{\m\n} \tilde{\nabla}^\m C^\n +\cdots \end{eqnarray} and \begin{eqnarray} &&\hspace{-.3in}{\cal L}_{matter} =-\fr14 \sqrt{-\tilde{g}}\Big[\tilde{g}^{\m\n}\tilde{g}^{\rho\s}-\tilde{g}^{\m\n}h^{\rho\s} -\tilde{g}^{\rho\s}h^{\m\n}+\fr12 \tilde{g}^{\m\n}\tilde{g}^{\rho\s}h +\tilde{g}^{\m\n}h^{\rho\k}h_\k^{\s} +\tilde{g}^{\rho\s}h^{\m\k}h_\k^{\n}\nonumber\\ &&\hspace{-.7in} -\fr12 \tilde{g}^{\m\n}hh^{\rho\s} -\fr12 \tilde{g}^{\rho\s}hh^{\m\n}+h^{\m\n}h^{\rho\s} +\fr18 \tilde{g}^{\m\n} \tilde{g}^{\rho\s}(h^2-2h_{\k_1\k_2}h^{\k_1\k_2} ) \Big] \Big( f_{\m\rho}f_{\n\s} {+} 2f_{\m\rho}\tilde{F}_{\n\s}+\tilde{F}_{\m\rho}\tilde{F}_{\n\s} \Big) \nonumber\\ && \hspace{.5in}-\fr12\sqrt{-\tilde{g}}\; (\tilde{\nabla}_\k a^\k)^2 -\tilde{\nabla}\w^* \tilde{\nabla}\w+\cdots \end{eqnarray} where the raising and lowering are done by $\tilde{g}^{\m\n}$ and $\tilde{g}_{\m\n}$, respectively. Above, $(C^\k, \w)$ are the ghosts for the diffeomorphism and the vector gauge transformation, respectively.\footnote{These ghost terms correspond to the following transformations of the fluctuation fields \cite{Deser:1974cz}: \begin{eqnarray} h'_{\m\n}&=&h_{\m\n} +(\tilde{g}_{\m\k} \tilde{D}_\n+\tilde{g}_{\n\k} \tilde{D}_\m)\eta^\k +(h_{\m\k} \tilde{D}_\n+h_{\n\k} \tilde{D}_\m)\eta^\k +\eta^\k \tilde{D}_\k h_{\m\n} \nonumber\\ a'_\m&=& a_\m+\eta^\k \tilde{F}_{\k\m}+\tilde{D}_\m \eta^5+a_\k\tilde{D}_\m\eta^\k+\eta^\k \tilde{D}_\k a_\m \end{eqnarray} under $x'^{\a}=x^\a-\eta^\a$ and the vector gauge transformation with the parameter $-\eta^\k \tilde{A}_\k+\eta^5$.
} Putting it all together, \rf{combinedaction} can be written in a more useful form as the sum of the kinetic part and the vertices: \begin{eqnarray} S\equiv S_{k}+S_{v} \end{eqnarray} with \begin{eqnarray} \hspace{-.5in}S_{k}&\!\!=&\!\! \int \sqrt{-\tilde{g}}\, \fr1{2\k^2} \Big( -\fr12\tilde{\nabla}_\g h^{\a\b}\tilde{\nabla}^\g h_{\a\b}+\fr14 \tilde{\nabla}_\g h^{\a}_\a \tilde{\nabla}^\g h^{\b}_\b \Big) -\fr14 \sqrt{-\tilde{g}}\;\Big(\tilde{g}^{\m\n}\tilde{g}^{\rho\s}f_{\m\rho}f_{\n\s}\Big) \nonumber\\ && -\fr12\sqrt{-\tilde{g}}\; (\tilde{\nabla}_\k a^\k)^2+ \fr1{2\k^2}\sqrt{-\tilde{g}}\; (-\tilde{\nabla}^\n \bar{C}^\m \tilde{\nabla}_\n C_\m)-\sqrt{-\tilde{g}}\;\tilde{\nabla}^\rho\w^* \tilde{\nabla}_\rho\w \label{Sk} \end{eqnarray} and \begin{eqnarray} S_{v}&=&\int\sqrt{-\tilde{g}}\;\fr1{2\k^2} \Big(h_{\a\b}h_{\g\d}\tilde{R}^{\a\g\b\d}-h_{\a\b}h^{\b}{}_\g \tilde{R}^{\k\a\g}{}_{\k} { -}h^{\a}{}_{\a}h_{\b\g}\tilde{R}^{\b\g}-\fr12 h^{\a\b}h_{\a\b}\tilde{R} \nonumber\\ &&+\fr14 h^{\a}_\a h^{\b}_\b \tilde{R} \Big) -\fr14 \sqrt{-\tilde{g}}(\tilde{g}^{\m\n}\tilde{g}^{\rho\s}) \Big( {+} 2f_{\m\rho}\tilde{F}_{\n\s}+\tilde{F}_{\m\rho}\tilde{F}_{\n\s} \Big) -\fr14 \sqrt{-\tilde{g}}\Big[-\tilde{g}^{\m\n}h^{\rho\s}\nonumber\\ && -\tilde{g}^{\rho\s}h^{\m\n}+\fr12 \tilde{g}^{\m\n}\tilde{g}^{\rho\s}h +\tilde{g}^{\m\n}h^{\rho\k}h_\k^{\s} +\tilde{g}^{\rho\s}h^{\m\k}h_\k^{\n} -\fr12 \tilde{g}^{\m\n}hh^{\rho\s} -\fr12 \tilde{g}^{\rho\s}hh^{\m\n}\nonumber\\ &&+h^{\m\n}h^{\rho\s} +\fr18 \tilde{g}^{\m\n} \tilde{g}^{\rho\s}(h^2-2h_{\k_1\k_2}h^{\k_1\k_2} ) \Big] \Big( f_{\m\rho}f_{\n\s} {+} 2f_{\m\rho}\tilde{F}_{\n\s}+\tilde{F}_{\m\rho}\tilde{F}_{\n\s} \Big) \nonumber\\ && \hspace{.5in}+ \fr1{2\k^2}\sqrt{-\tilde{g}}\; \Bigg( { +}\tilde{R}_{\m\n}\bar{C}^\m C^\n + \fr12\tilde{\nabla}^\m\w^* \tilde{F}_{\m\n}C^\n \Bigg) +\cdots \label{Sv} \end{eqnarray} \subsection{on the gauge-fixing} A crucial feature of the action above - which has been set up for the refined BFM - is how the graviton gauge-fixing has been implemented: \begin{eqnarray} -\fr12\Big[\tilde{\nabla}_\n h^{\m\n}-\fr12 \tilde{\nabla}^\m h \Big]^2 \label{bfmgf} \end{eqnarray} This is the refined BFM version of the usual gauge-fixing, \begin{eqnarray} -\fr12\Big[{\nabla}_\n h^{\m\n}-\fr12 {\nabla}^\m h \Big]^2 \label{nbfmgf} \end{eqnarray} which is non-covariant with respect to the background $\tilde{g}_{\m\n}$. In other words, one starts with \rf{nbfmgf} and converts it into \rf{bfmgf} when turning to the refined BFM. The physical content of the gauge condition satisfied by $h_{\m\n}$ is still \rf{nbfmgf}, since the BFM is just a convenience device that allows one to conduct the analysis more covariantly than otherwise. (The field $\varphi_{\m\n}$ satisfies the same gauge-fixing; see \rf{vfgf} below.) Naively, one expects that with the gauge-fixing \rf{bfmgf} the 1PI effective action will come out to be $\tilde{g}_{\m\n}$-covariant. Later we will see that the 1PI action is non-covariant due to the presence of terms that can be removed by enforcing the strong form of the gauge condition, which provides an important clue as to how to resolve the gauge choice-dependence of the effective action. \subsection{two-point diagrams} In general the renormalization in a curved background $g_{\m\n}$ is technically involved.
It is nevertheless possible to outline the steps of the amplitude computation for an arbitrary solution metric $g_{\m\n}$. Cautionary remarks are in order. It is important to distinguish the second-layer diagrams from the first-layer ones. Only the first-layer diagrams will individually yield covariant results. A given first-layer diagram corresponds, in general, to multiple second-layer diagrams even at a fixed order of $\varphi_{\m\n}$. Consider, for example, the graviton kinetic action, \begin{eqnarray} {\cal L}_{grav, kin} =\fr1{2\k^2} \sqrt{-\tilde{g}}\,\Big( -\fr12\tilde{\nabla}_\g h^{\a\b}\tilde{\nabla}^\g h_{\a\b}+\fr14 \tilde{\nabla}_\g h^{\a}_\a \tilde{\nabla}^\g h^{\b}_\b \Big) \end{eqnarray} and the one-loop vacuum-to-vacuum amplitude. Although there is a unique one-loop vacuum-to-vacuum amplitude,\!\!\begin{fmffile}{vacandtad2} \!\!\!\!\Scale[0.4]{ \begin{gathered} \begin{fmfgraph*}(75,50)\fmfpen{thick} \fmfi{gluon}{reverse fullcircle scaled .5w shifted (.5w,.5h)} \end{fmfgraph*}\!\! \end{gathered}}, \end{fmffile}in the first-layer perturbation, the diagram corresponds to multiple second-layer ones. At the second order in $\varphi_{\a\b}$, the relevant diagram is the one given in Fig. 1 (a). More on this as we continue. With the split given in \rf{split}, the kinetic terms themselves yield the vertices for the second-layer perturbation expansion. For instance, the graviton kinetic term is expanded as \begin{eqnarray} \hspace{.2in}2\k^2 {\cal L}_{grav, kin}= -\fr12 {\pa}_\g h^{\a\b}{\pa}^\g h_{\a\b}+\fr14 {\pa}_\g h^{\a}_\a {\pa}^\g h^{\b}_\b \label{lv12qq} \end{eqnarray} \[ \hspace{-.2in} + \Big(2g^{\b\b'}\tilde{\G}^{\a' \g\a}- g^{\a\b}\tilde{\G}^{\a' \g\b'}\Big)\pa_\g h_{\a\b}\, h_{\a'\b'} +\Big[\fr12(g^{\a\a'}g^{\b\b'}\varphi^{\g\g'}+g^{\b\b'}g^{\g\g'}\varphi^{\a\a'} \] \[ \hspace{-.2in} +g^{\a\a'}g^{\g\g'}\varphi^{\b\b'}) -\fr14 \varphi\, g^{\a\a'}g^{\b\b'}g^{\g\g'}-\fr12 g^{\g\g'}g^{\a'\b'}\varphi^{\a\b} +\fr14 (-\varphi^{\g\g'}+\fr12 \varphi g^{\g\g'})g^{\a\b}g^{\a'\b'} \Big] \pa_\g h_{\a\b}\, \pa_{\g'}h_{\a'\b'} \] \noindent where the raising and lowering are done by $g^{\m\n}$ and $g_{\m\n}$, respectively. The terms in the second and third lines serve as the vertices responsible for Fig. \ref{fig:2gh} (a). The corresponding ghost diagram is given in Fig. \ref{fig:2gh} (b). \begin{figure}[t] \begin{center} \begin{fmffile}{2ghostfig} \quad\quad \parbox{40mm}{ \begin{fmfgraph*}(80,50) \fmfleft{i} \fmfright{o} \fmf{gluon,tension=6}{i,v1} \fmf{gluon,tension=6}{v2,o} \fmf{gluon,left,label=$h$,tension=1}{v1,v2,v1} \fmflabel{$\varphi$}{i} \fmflabel{$\varphi$}{o} \fmf{phantom,label.dist=0}{v1,v2} \end{fmfgraph*} } \quad\quad \parbox{40mm}{ \begin{fmfgraph*}(80,50) \fmfleft{i} \fmfright{o} \fmf{gluon,tension=6}{i,v1} \fmf{gluon,tension=6}{v2,o} \fmf{dashes,left,label=$C$,tension=1}{v1,v2,v1} \fmflabel{$\varphi$}{i} \fmflabel{$\varphi$}{o} \fmf{phantom,label.dist=0}{v1,v2} \end{fmfgraph*} } \end{fmffile} \end{center} \vspace{.1in} \hspace{1.51in} (a) \hspace{1.64in} (b) \caption{\label{fig:2gh}graviton and ghost diagrams (indices on fields suppressed)} \end{figure} The forms of all possible second-layer vertices can be obtained by applying this scheme to the rest of the terms in eq. \rf{Sk} and \rf{Sv}. The first several relatively simple matter-involving diagrams are listed in Fig. 2. In general, we restrict the maximum number of the graviton external lines to two for simplicity. Overall, the diagrams are classified into four categories. 
The first class consists of the diagrams with both vertices from the graviton sector: the pure gravity sector two-point amplitude and the corresponding ghost-loop diagram in Fig. \ref{fig:2gh}. They were considered in \cite{Park:2015ota} and will be reviewed below. The second class consists of the diagrams with both vertices from the matter sector, Fig. 2 (a) and (c). The third consists of the diagrams with one vertex from the graviton sector and the other from the matter sector, Fig. 2 (d). \vspace{.1in} \[ \begin{fmffile}{13gr2} \hspace{-.05in} \parbox{40mm}{ \begin{fmfgraph*}(70,40) \fmfleft{i} \fmfright{o} \fmf{gluon,tension=6}{i,v1} \fmf{gluon,tension=6}{v2,o} \fmf{photon,left,label=$a$,tension=1.3}{v1,v2,v1} \fmflabel{$\varphi$}{i} \fmflabel{$\varphi$}{o} \end{fmfgraph*}\\ \hspace{.34in} (a) } \parbox{40mm}{ \begin{fmfgraph*}(70,40) \fmfleft{i} \fmfright{o} \fmf{gluon,tension=6}{i,v1} \fmf{gluon,tension=6}{v2,o} \fmf{dots,left,label=$a$,tension=1.3}{v1,v2,v1} \fmflabel{$\varphi$}{i} \fmflabel{$\varphi$}{o} \end{fmfgraph*}\\ \hspace{.36in} (b) } \hspace{-.3in}\parbox{40mm}{\begin{fmfgraph*}(78,55) \fmfstraight \fmfleft{i1,i2} \fmfright{o1,o2} \fmf{photon}{i1,v1,i2} \fmf{photon}{o1,v2,o2} \fmf{gluon,left,label=$h$,tension=.5}{v1,v2,v1} % \fmflabel{A}{i1} \fmflabel{A}{i2} \fmflabel{A}{o1} \fmflabel{A}{o2} \end{fmfgraph*}\\ \hspace{.41in} (c) } \parbox{40mm}{\begin{fmfgraph*}(75,55) \fmfstraight \fmfleft{i1,i2} \fmfright{o} \fmf{photon}{i1,v1,i2} \fmf{gluon}{v2,o} \fmf{gluon,left,label=$h$,tension=.26}{v1,v2,v1} % \fmflabel{A}{i1} \fmflabel{A}{i2} \fmflabel{$\varphi$}{o} \end{fmfgraph*}\\ \hspace{.3in} (d) } \end{fmffile} \] \vspace{-.1in} \[ \mbox{Figure 2: matter-involving diagrams} \] All of the diagrams so far have ``homogeneous" loops, whereas the diagrams in Fig. 3 have ``inhomogeneous" ones. The latter are classified as the fourth class due to the fact that they require special care. \[ \hspace{-.1in} \parbox{80mm}{ \begin{fmffile}{mixrel} \Scale[0.99]{ \begin{fmfgraph*}(90,70) \fmfleft{i} \fmfright{o} \fmf{photon,tension=4}{i,v1} \fmf{photon,tension=4}{v2,o} \fmf{photon,left,tension=1}{v1,v2} \fmf{gluon,left,tension=1}{v2,v1} \fmflabel{$\varphi$}{i} \fmflabel{$\varphi$}{o} \end{fmfgraph*} } \quad\quad\quad\quad \Scale[.9]{ \begin{fmfgraph*}(90,70) \fmfleft{i1,i2,i3,i4} \fmfright{o} \fmf{phantom,tension=1}{i1,v1} \fmf{photon,tension=1}{i2,v1} \fmf{gluon,tension=1}{i3,v1} \fmf{phantom,tension=1}{i4,v1} \fmf{photon,tension=3}{v2,o} \fmf{photon,left,tension=.5}{v1,v2} \fmf{gluon,left,tension=.8}{v2,v1} \fmflabel{$\varphi$}{i3} \fmflabel{$\varphi$}{o} \fmflabel{A}{i2} \end{fmfgraph*} } \end{fmffile} \vspace{-.2in} \hspace{0.45in} (a) \hspace{1.69in} (b) \\ \hspace{-.15in}\mbox{ Figure 3: diagrams with inhomogeneous loops} } \] \noindent The vertex, $V_{g}$, responsible for the diagrams in Fig. \ref{fig:2gh} (a), is defined by rewriting \rf{lv12qq} as \begin{eqnarray} {\cal L}&=& \fr1{\k'^2}\Big[-\fr12 {\pa}_\g h^{\a\b}{\pa}^\g h_{\a\b}+\fr14 {\pa}_\g h^{\a}_\a {\pa}^\g h^{\b}_\b + {\cal L}_{V_{g}} \Big] \label{eawv} \end{eqnarray} where \begin{eqnarray} \k'^2\equiv 2\k^2 \end{eqnarray} and \[ \hspace{-.07in} V_{g} \equiv \sqrt{-g}\Big(2g^{\b\b'}\tilde{\G}^{\a' \g\a}\!-\! g^{\a\b}\tilde{\G}^{\a' \g\b'}\Big)\pa_\g h_{\a\b}\, h_{\a'\b'} \!+\!
\sqrt{-g}\Big[\fr12(g^{\a\a'}g^{\b\b'}\varphi^{\g\g'}+\!g^{\b\b'}g^{\g\g'}\varphi^{\a\a'} \] \vspace{-.2in} \[ \hspace{-.3in} +g^{\a\a'}g^{\g\g'}\varphi^{\b\b'}) -\fr14 \varphi\, g^{\a\a'}g^{\b\b'}g^{\g\g'} -\fr12 g^{\g\g'}g^{\a'\b'}\varphi^{\a\b} +\fr14 (-\varphi^{\g\g'} +\fr12 \varphi g^{\g\g'} )g^{\a\b} g^{\a'\b'} \Big] \pa_\g h_{\a\b}\, \pa_{\g'}h_{\a'\b'} \] \vspace{-.3in} \begin{eqnarray} && \hspace{-.2in}+\sqrt{-\tilde{g}}\Big( h_{\a\b}h_{\g\d}\tilde{R}^{\a\g\b\d}-h_{\a\b}h^{\b}{}_\g \tilde{R}^{\k\a\g}{}_{\k} { -}h^{\a}{}_{\a}h_{\b\g}\tilde{R}^{\b\g} -\fr12 h^{\a\b}h_{\a\b}\tilde{R} +\fr14 h^{\a}_\a h^{\b}_\b \tilde{R} \Big)\nonumber\\ \end{eqnarray} As for the $\tilde{g}_{\m\n}$-containing quantities, expansion in terms of $\varphi_{\m\n}$ is to be understood. The vertex responsible for the ghost-loop diagram can be similarly identified by expanding the terms quadratic in the ghost field: \begin{eqnarray} V_{C} &\equiv &-\Big[ \fr{1}{2}\varphi \pa^\m \bar{C}^\n \pa_\m {C}_\n -\tilde{\G}^\lambda_{\m\n}(\pa^\m \bar{C}^\n { C_\lambda} -\pa^\m {C}^\n \bar{C}_\lambda ) \nonumber\\ &&-(g^{\n\b}\varphi^{\m\a}+g^{\m\a}\varphi^{\n\b})\pa_\b \bar{C}_\a \pa_\n {C}_\m \Big] {+}R_{\m\n}\bar{C}^\m C^\n \end{eqnarray} The vertices responsible for the diagrams in Fig. 2 and Fig. 3 can be similarly obtained by examining the matter part of the action (the trace piece $h\equiv \tilde{g}^{\a\b}h_{\a\b}$ has been set to zero \cite{Park:2014tia,Park:2015ota,Park:2015xoa}): \begin{eqnarray} V_{m1} &\equiv& -\fr14 \sqrt{-g}\Big[ -g^{\rho\s}\varphi^{\m\n}-g^{\m\n}\varphi^{\rho\s} \Big] f_{\m\rho}f_{\n\s} \nonumber\\ V_{m2} &\equiv& -\fr14 \sqrt{-g}\Big[g^{\m\n}h^{\rho\k}h_\k^{\s} +g^{\rho\s}h^{\m\k}h_\k^{\n} +h^{\m\n}h^{\rho\s} -\fr14 g^{\m\n} g^{\rho\s}h_{\k_1\k_2}h^{\k_1\k_2} \Big] \tilde{F}_{\m\rho}\tilde{F}_{\n\s} \nonumber\\ V_{m3} &\equiv& \fr12 \sqrt{-g} \Big[g^{\m\n}h^{\rho\s}+g^{\rho\s}h^{\m\n}\Big] f_{\m\rho}F_{\n\s} \nonumber\\ V_{m4}&\equiv& -\fr12 \sqrt{-g} \Big[ \varphi^{\m\n}h^{\rho\s}+\varphi^{\rho\s}h^{\m\n}\Big] f_{\m\rho}F_{\n\s} \end{eqnarray} Let us work out the counter-terms for the diagrams in Figs. 1 to 3.
Below \begin{eqnarray} ``&\Rightarrow&" \mbox{ means that the diagram on the left-hand side leads to } \nonumber\\ && \quad\quad \mbox{the counter-term(s) on the right-hand side} \nonumber \end{eqnarray} The graviton and ghost contributions respectively are { \begin{eqnarray} \begin{fmffile}{pure1} \Scale[0.4]{ \begin{gathered} \begin{fmfgraph*}(80,50) \fmfleft{i} \fmfright{o} \fmf{gluon,tension=6}{i,v1} \fmf{gluon,tension=6}{v2,o} \fmf{gluon,left,tension=1}{v1,v2,v1} \end{fmfgraph*} \end{gathered} } \end{fmffile} &\Rightarrow & -\fr12 { \fr{1}{\k'^4}} <\Big(\int V_{g}\Big)^2> \label{tghpri} \end{eqnarray}} and \begin{eqnarray} \begin{fmffile}{pure2} \!\!\Scale[0.4]{ \begin{gathered} \begin{fmfgraph*}(80,50) \fmfleft{i} \fmfright{o} \fmf{gluon,tension=6}{i,v1} \fmf{gluon,tension=6}{v2,o} \fmf{dashes,left,tension=1}{v1,v2,v1} \end{fmfgraph*} \end{gathered} } \end{fmffile} &\Rightarrow& -\fr12 { \fr{1}{\k'^4}}<\Big(\int V_{C} \Big)^2> \label{tghpri2} \end{eqnarray} The numerical factor $-\fr12$ is the combinatoric factor that arises when the vertices are brought down from the exponent in the path integral. The total gravity sector one-loop counter-terms are given by the sum of these two; the result for the flat case, obtained in \cite{Park:2015ota}, will be quoted in section 3. The diagrams in Fig. 2 (a) and (c) have the vertices $V_{m1}$ and $V_{m2}$, respectively, inserted twice. For the diagrams in Fig. 2 (a) and (c) one gets \begin{eqnarray} &&\hspace{-.3in}\begin{fmffile}{res1} \Scale[0.5]{ \begin{gathered} \begin{fmfgraph}(80,50) \fmfleft{i} \fmfright{o} \fmf{gluon}{i,v1} \fmf{gluon}{v2,o} \fmf{photon,left,tension=.3}{v1,v2,v1} \end{fmfgraph} \end{gathered} } \end{fmffile} \Rightarrow -\fr12<\Big(\int V_{m1}\Big)^2> = -\fr12 <\Big[\int\fr14 ( g^{\rho\s}\varphi^{\m\n}+g^{\m\n}\varphi^{\rho\s}) ( f_{\m\rho}f_{\n\s} )\Big]^2> \nonumber\\ \end{eqnarray} \vspace{-.2in} \begin{eqnarray} \!\!\begin{fmffile}{res2} \Scale[0.5]{ \begin{gathered} \begin{fmfgraph*}(80,50) \fmfstraight \fmfleft{i1,i2} \fmfright{o1,o2} \fmf{photon}{i1,v1,i2} \fmf{photon}{o1,v2,o2} \fmf{gluon,left,tension=.5}{v1,v2,v1} \end{fmfgraph*} \end{gathered} } \end{fmffile} \!\!&\Rightarrow&\!\! -\fr12<\Big(\int V_{m2}\Big)^2> = -\fr12<\Big[\int\fr14 (g^{\m\n}h^{\rho\k}h_\k^{\s} +g^{\rho\s}h^{\m\k}h_\k^{\n} +h^{\m\n}h^{\rho\s}\nonumber\\ &&\hspace{1.5in} -\fr14 g^{\m\n} g^{\rho\s}h_{\k_1\k_2}h^{\k_1\k_2} ) ( \tilde{F}_{\m\rho}\tilde{F}_{\n\s} )\Big]^2 > \end{eqnarray} The cross-term diagram in Fig. 2 (d) is generated by the vacuum expectation value of the two vertices, one of which is the matter vertex $V_{m2}$ and the other $V_{g}$. The diagram corresponds to \begin{eqnarray} \hspace{-.1in}\begin{fmffile}{res3} \Scale[0.5]{ \begin{gathered} \begin{fmfgraph*}(80,50) \fmfstraight \fmfleft{i1,i2} \fmfright{o} \fmf{photon}{i1,v1,i2} \fmf{gluon}{v2,o} \fmf{gluon,left,tension=.3}{v1,v2,v1} \end{fmfgraph*} \end{gathered} } \end{fmffile} & \Rightarrow & - <\!\int\!
V_{m2}\!\int V_{g}\!> = - <\!\int \Big(-\fr14\Big) \Big[g^{\m\n}h^{\rho\k}h_\k^{\s} +g^{\rho\s}h^{\m\k}h_\k^{\n} +h^{\m\n}h^{\rho\s} \nonumber\\ && \hspace{-.7in}-\fr14 g^{\m\n} g^{\rho\s}h_{\k_1\k_2}h^{\k_1\k_2} \Big] \Big( \tilde{F}_{\m\rho}\tilde{F}_{\n\s} \Big) \times \fr1{\k'^2}\int \bigg\{ \Big(2g^{\b\b'}\tilde{\G}^{\a' \g\a}- g^{\a\b}\tilde{\G}^{\a' \g\b'}\Big)\pa_\g h_{\a\b}\, h_{\a'\b'} \nonumber\\ &&\hspace{-.5in}\left. + \Big[\fr12(g^{\a\a'}g^{\b\b'}\varphi^{\g\g'}+g^{\b\b'}g^{\g\g'}\varphi^{\a\a'} +g^{\a\a'}g^{\g\g'}\varphi^{\b\b'}) -\fr12 g^{\g\g'}g^{\a'\b'}\varphi^{\a\b} \right. \nonumber\\ &&\hspace{-.9in} \left.-\fr14 \varphi^{\g\g'}g^{\a\b}g^{\a'\b'} \Big] \pa_\g h_{\a\b}\, \pa_{\g'}h_{\a'\b'} +\Big( h_{\a\b}h_{\g\d}\tilde{R}^{\a\g\b\d}-h_{\a\b}h^{\b}{}_\g \tilde{R}^{\k\a\g}{}_{\k} -\fr12 h^{\a\b}h_{\a\b}\tilde{R} \Big) \right\}>\nonumber\\ \end{eqnarray} The computation of the diagrams with the inhomogeneous loops serves as an example of the first-layer perturbation. For these it is necessary to use the full propagator in \rf{h2pt}, a step not needed for the other diagrams so far, for a structural reason. In the first-layer perturbation, the graph to calculate is \[ \begin{fmffile}{fullmixrel} \Scale[0.99]{ \begin{fmfgraph*}(90,70)\fmfpen{thick} \fmfleft{i} \fmfright{o} \fmf{photon,tension=4}{i,v1} \fmf{photon,tension=4}{v2,o} \fmf{photon,left,tension=1}{v1,v2} \fmf{gluon,left,tension=1}{v2,v1} \end{fmfgraph*} } \end{fmffile} \] \vspace{-.4in} \[ \mbox{Figure 4: first-layer perturbation diagram} \] Note that unlike in Fig. 3 (a), the lines have been thickened. The external lines represent the full fields, i.e., the fields with tildes. By the same token, the internal lines represent the full propagators. (The two diagrams in Fig. 3 are the first two terms that result from, so to speak, $\varphi_{\a\b}$-expanding the graph in Fig. 4. There are additional contributions coming from the internal lines when the full propagators are used.) As for the diagrams in Fig.
3, they can be set up in a manner similar to the others: \begin{eqnarray} \hspace{-.2in}\begin{fmffile}{mixed2pt} \Scale[0.6]{ \begin{gathered} \begin{fmfgraph}(80,60) \fmfleft{i} \fmfright{o} \fmf{photon}{i,v1} \fmf{photon}{v2,o} \fmf{photon,left,tension=.3}{v1,v2} \fmf{gluon,left,tension=.3}{v2,v1} \end{fmfgraph} \end{gathered} } \end{fmffile} &\Rightarrow& -\fr12<\Big(\int V_{m3}\Big)^2> = -\fr12<\Big(\int\fr12 (g^{\m\n}h^{\rho\s}+g^{\rho\s}h^{\m\n}) f_{\m\rho}F_{\n\s}\Big)^2> \nonumber\\ \begin{fmffile}{2ndmixrel} \Scale[0.5]{ \begin{gathered} \begin{fmfgraph*}(80,60) \fmfleft{i1,i2,i3,i4} \fmfright{o} \fmf{phantom,tension=1}{i1,v1} \fmf{photon,tension=1}{i2,v1} \fmf{gluon,tension=1}{i3,v1} \fmf{phantom,tension=1}{i4,v1} \fmf{photon,tension=3}{v2,o} \fmf{photon,left,tension=.5}{v1,v2} \fmf{gluon,left,tension=.8}{v2,v1} \end{fmfgraph*} \end{gathered} } \end{fmffile} &\Rightarrow& - <\int V_{m3}\int V_{m4}>= - <\int\fr12 (g^{\m\n}h^{\rho\s}+g^{\rho\s}h^{\m\n}) f_{\m\rho}F_{\n\s}\nonumber\\ &&\hspace{1.2in}\times \int \fr{-1}2 (\varphi^{\m'\n'}h^{\rho'\s'}+\varphi^{\rho'\s'}h^{\m'\n'}) f_{\m'\rho'}F_{\n'\s'}> \end{eqnarray} We will leave these for now and come back to them in section 3, where we show a more convenient way of effectively calculating all the contributions, including those arising from the full internal propagators. \subsection{vacuum-to-vacuum and tadpole diagrams} In the first-layer perturbation, the shifts in the cosmological and Newton's constants are caused by the vacuum-to-vacuum and tadpole diagrams, respectively (more details in section 4). Fig. 5 lists the diagrams for the pure gravity sector; there are similar diagrams for the matter-involving sector. For the graviton vacuum-to-vacuum amplitude, for example, one is to compute \begin{eqnarray} \int \prod_x dh_{\k_1\k_2}\;e^{\fr{i}{\k'^2} \int \sqrt{-\tilde{g}}\,\Big( -\fr12\tilde{\nabla}_\g h^{\a\b}\tilde{\nabla}^\g h_{\a\b} \Big) } \end{eqnarray} This vacuum energy amplitude in the first-layer perturbation will give a vacuum diagram and a tadpole diagram in the second layer. These diagrams, as well as the genuine tadpole diagrams, will be analyzed in detail in section 4. \begin{figure}[t] \begin{center} \begin{fmffile}{vacandtad} \parbox{30mm}{\begin{fmfgraph*}(75,50) \fmfi{gluon}{reverse fullcircle scaled .5w shifted (.5w,.5h)} \end{fmfgraph*}\\ \hspace{.38in} (a) } \parbox{30mm}{\begin{fmfgraph*}(75,50) \fmfi{dashes}{reverse fullcircle scaled .5w shifted (.5w,.5h)} \end{fmfgraph*}\\ \hspace{.38in} (b) } \parbox{30mm}{\begin{fmfgraph*}(75,50) \fmfleft{i} \fmfright{o} \fmf{gluon,tension=2}{i,v1} \fmf{gluon,left}{v1,o,v1} \end{fmfgraph*}\\ \hspace{.6in} (c) } \quad\parbox{30mm}{\begin{fmfgraph*}(75,50) \fmfleft{i} \fmfright{o} \fmf{gluon,tension=2}{i,v1} \fmf{dashes,left}{v1,o,v1} \end{fmfgraph*}\\ \hspace{.6in} (d) } \end{fmffile} \vspace{.2in} Figure 5: vacuum and tadpole diagrams \end{center} \end{figure} \section{Flat space analysis} In this section we consider a flat background. The analysis can also be viewed as the computation of the divergences in a curved background: the flat space analysis captures them, since the ultraviolet divergence is a short-distance phenomenon. In the past the counter-terms were determined by dimensional analysis and covariance \cite{Deser:1974cz}.
We directly calculate them in the refined background field method; dimensional analysis and covariance play the {\em subsidiary} role of checking the results. Let us consider a flat background \begin{eqnarray} g_{\m\n}=\eta_{\m\n}\quad ,\quad A_{0\m}=0 \end{eqnarray} with dimensional regularization. In what follows we will present the explicit flat spacetime computations for the two-point diagrams considered for a generic background $g_{\m\n}$ in the previous section. Although the techniques of the counter-term computation themselves are similar to those used in the pure gravity \cite{Park:2015ota} and gravity-scalar \cite{Park:2016zgt} analyses, the present case has several additional complications. As an unexpected spin-off of our direct approach, we will see how the long-known gauge choice-dependence issue is resolved. \subsection{two-point diagrams} The pure gravity sector was analyzed in \cite{Park:2015ota}. Consider the ghost loop diagram in Fig. \ref{fig:2gh} (b) first. The ghost vertex takes, in flat spacetime, the form \[ V_{C}= -\Big[ -\tilde{\G}^\lambda_{\m\n}({ -C_\lambda} \pa^\m \bar{C}^\n+\bar{C}_\lambda\pa^\m {C}^\n ) -(\eta^{\n\b}\varphi^{\m\a}+\eta^{\m\a}\varphi^{\n\b})\pa_\b \bar{C}_\a \pa_\n {C}_\m \Big] {+}R_{\m\n}\bar{C}^\m C^\n \label{ghkinexp} \] Let us define, for convenience, \begin{eqnarray} V_{C}=V_{C,I}+V_{C,II} \end{eqnarray} with \begin{eqnarray} &&\hspace{-.5in} V_{C,I} \equiv -\Big[ -\tilde{\G}^\lambda_{\m\n}({ -C_\lambda} \pa^\m \bar{C}^\n+\bar{C}_\lambda\pa^\m {C}^\n ) -(\eta^{\n\b}\varphi^{\m\a}+\eta^{\m\a}\varphi^{\n\b})\pa_\b \bar{C}_\a \pa_\n {C}_\m \Big] \nonumber\\ &&\hspace{1.6in} V_{C,II} \equiv R_{\m\n}\bar{C}^\m C^\n \end{eqnarray} The correlator to be computed is \begin{eqnarray} &&\hspace{-.3in} -\fr12 { \fr{1}{\k'^4}}<\Big(\int V_{C,I}+V_{C,II} \Big)^2> = -\fr12 { \fr{1}{\k'^4}}<\Big\{\int \Big[ -\tilde{\G}^\lambda_{\m\n}(\pa^\m \bar{C}^\n { C_\lambda} -\pa^\m {C}^\n \bar{C}_\lambda) \nonumber\\ &&\hspace{.7in} -(\eta^{\n\b}\varphi^{\m\a}+\eta^{\m\a}\varphi^{\n\b})\pa_\b \bar{C}_\a \pa_\n {C}_\m \Big] -R_{\m\n}\bar{C}^\m C^\n\Big\}^2> \label{totgh1} \end{eqnarray} To see how dimensional analysis and covariance can be utilized to check the final results, consider, e.g., $<(\int V_{C,I})^2>$; a direct calculation yields \begin{eqnarray} \hspace{-.2in} -\fr12 { \fr{1}{\k'^4}}<\Big(\int V_{C,I}\Big)^2> \end{eqnarray} \vspace{-.1in} \[ = -\fr12 \fr{\G(\epsilon)}{(4\pi)^2}\int \Big[ -\fr{2}{15}\pa^2\varphi_{\m\n}\pa^2 \varphi^{\m\n}+\fr{4}{15}\pa^2 \varphi^{\a\k}\pa_\k \pa_\s \varphi_\a^\s -\fr{1}{30}(\pa_\a \pa_\b \varphi^{\a\b})^2 \Big] \] where the parameter $\epsilon$ is related to the total spacetime dimension $D$ by \begin{eqnarray} D=4-2\epsilon \end{eqnarray} The result above (as well as some of the other results presented here) was obtained with the help of the Mathematica package xAct`xTensor` for performing the index contractions. By invoking dimensional analysis and covariance, one expects the result to come out to be a sum of $R^2$ and $R_{\m\n}^2$ to the second order in $\varphi_{\rho\s}$ with appropriate coefficients.
With the traceless condition $\varphi=0$ explicitly enforced, $R^2$ and $R_{\m\n}^2$ are given, to the second order in $\varphi_{\a\b}$, by \begin{eqnarray} R^2 &=& \pa_{\m}\pa_{\n}\varphi^{\m\n}\,\pa_{\rho}\pa_{\s}\varphi^{\rho\s} \nonumber\\ R_{\a\b}R^{\a\b} &=& \fr14\Big[\pa^2 \varphi^{\m\n}\,\pa^2 \varphi_{\m\n}-2\pa^2 \varphi^{\a\k}\pa_\k \pa_\s \varphi_\a^\s +2(\pa_{\m}\pa_{\n}\varphi^{\m\n})^2 \Big]; \label{covctr} \end{eqnarray} from these it follows that \begin{eqnarray} -\fr12 { \fr{1}{\k'^4}}<\Big(\int V_{C,I}\Big)^2> = -\fr1{2} \fr{\G(\epsilon)}{(4\pi)^2}\int \Big[-\fr{8}{15}\tilde{R}_{\a\b}\tilde{R}^{\a\b}+\fr{7}{30}\tilde{R}^2\Big] \end{eqnarray} The tildes on the fields in the counter-terms will be omitted from now on. Let us complete the evaluation of the other terms in \rf{totgh1}; collecting all, one gets, for the ghost diagram, \begin{eqnarray} \begin{fmffile}{pure2} \!\!\Scale[0.4]{ \begin{gathered} \begin{fmfgraph*}(80,50) \fmfleft{i} \fmfright{o} \fmf{gluon,tension=6}{i,v1} \fmf{gluon,tension=6}{v2,o} \fmf{dashes,left,tension=1}{v1,v2,v1} \end{fmfgraph*} \end{gathered} } \end{fmffile} &\Rightarrow& -\fr12 \fr{\G(\epsilon)}{(4\pi)^2}\int \Big[ \fr{7}{15}R_{\m\n}{ R^{\m\n}} {+\fr{17}{30}}R^2 \Big] \label{tgh} \end{eqnarray} As for the graviton-loop diagram in Fig. \ref{fig:2gh} (a), the vertex $V_{g}$ takes the form \begin{eqnarray} &&\hspace{-.41in} V_{g} \equiv \Big(2\h^{\b\b'}\tilde{\G}^{\a' \g\a}\!-\! \h^{\a\b}\tilde{\G}^{\a' \g\b'}\Big)\pa_\g h_{\a\b}\, h_{\a'\b'} \!+\! \Big[\fr12(\h^{\a\a'}\h^{\b\b'}\varphi^{\g\g'}+\! \h^{\b\b'}\h^{\g\g'}\varphi^{\a\a'} \nonumber\\ &&\hspace{.5in} +\h^{\a\a'}\h^{\g\g'}\varphi^{\b\b'}) -\fr12 \h^{\g\g'}\h^{\a'\b'}\varphi^{\a\b} -\fr14 \varphi^{\g\g'}\h^{\a\b}\h^{\a'\b'} \Big] \pa_\g h_{\a\b}\, \pa_{\g'}h_{\a'\b'} \nonumber\\ && \hspace{.5in}+ \Big( h_{\a\b}h_{\g\d}\tilde{R}^{\a\g\b\d}-h_{\a\b}h^{\b}{}_\g \tilde{R}^{\k\a\g}{}_{\k} -\fr12 h^{\a\b}h_{\a\b}\tilde{R} \Big) \end{eqnarray} By using the traceless propagator one can show: \begin{eqnarray} \begin{fmffile}{pure1} \Scale[0.4]{ \begin{gathered} \begin{fmfgraph*}(80,50) \fmfleft{i} \fmfright{o} \fmf{gluon,tension=6}{i,v1} \fmf{gluon,tension=6}{v2,o} \fmf{gluon,left,tension=1}{v1,v2,v1} \end{fmfgraph*} \end{gathered} } \end{fmffile} &\Rightarrow & -\fr12\fr{\G(\epsilon)}{(4\pi)^2}\int \Big[-\fr{23}{20}R_{\m\n}R^{\m\n}-{ \fr{23}{40}}R^2\Big] \label{tghpri} \end{eqnarray} The correlators for the matter-involving sector have also been outlined in the previous section. Their flat spacetime evaluation leads to the following results for the diagrams in Fig.
2 (a)-(c): \begin{eqnarray} \begin{fmffile}{res1} \Scale[0.5]{ \begin{gathered} \begin{fmfgraph}(80,50) \fmfleft{i} \fmfright{o} \fmf{gluon}{i,v1} \fmf{gluon}{v2,o} \fmf{photon,left,tension=.3}{v1,v2,v1} \end{fmfgraph} \end{gathered} } \end{fmffile} &\Rightarrow& \fr{\G(\epsilon)}{(4\pi)^2}\int \Big(\fr1{30} R^2-\fr1{10}R_{\a\b}R^{\a\b} \Big) \nonumber\\ \begin{fmffile}{respl} \Scale[0.5]{ \begin{gathered} \begin{fmfgraph}(80,50) \fmfleft{i} \fmfright{o} \fmf{gluon}{i,v1} \fmf{gluon}{v2,o} \fmf{dots,left,tension=.3}{v1,v2,v1} \end{fmfgraph} \end{gathered} } \end{fmffile} &\Rightarrow& -\fr{\G(\epsilon)}{(4\pi)^2} \fr1{15}\int R_{\a\b}R^{\a\b} \nonumber\\ \begin{fmffile}{res2} \Scale[0.5]{ \begin{gathered} \begin{fmfgraph*}(80,50) \fmfstraight \fmfleft{i1,i2} \fmfright{o1,o2} \fmf{photon}{i1,v1,i2} \fmf{photon}{o1,v2,o2} \fmf{gluon,left,tension=.5}{v1,v2,v1} \end{fmfgraph*} \end{gathered} } \end{fmffile} &\Rightarrow & \fr{{ \k'^4}\,\G(\epsilon)}{(4\pi)^2} \fr3{64} \int (F_{\a\b}F^{\a\b})^2 \end{eqnarray} These results are covariant, as expected. The direct calculation of the diagram in Fig. 2 (d) yields \begin{eqnarray} \begin{fmffile}{res3} \Scale[0.5]{ \begin{gathered} \begin{fmfgraph*}(80,50) \fmfstraight \fmfleft{i1,i2} \fmfright{o} \fmf{photon}{i1,v1,i2} \fmf{gluon}{v2,o} \fmf{gluon,left,tension=.3}{v1,v2,v1} \end{fmfgraph*} \end{gathered} } \end{fmffile}\bigg{|}_{V_I+V_{II}} \Rightarrow \fr{\k'^2\,\G(\epsilon)}{(4\pi)^2} \int \Big(\fr1{16}F_{\m\n}F^{\m\n} \pa_\a\pa_\b \varphi^{\a\b} + \fr12 F_{\m\k}F_\n{}^{\k} \pa^2 \varphi^{\m\n} \Big) \label{f2c} \end{eqnarray} which is non-covariant.\footnote{In the case of the Einstein-scalar system analyzed in \cite{Park:2016zgt}, we obtained a covariant result for a similar diagram, \begin{fmffile}{gl2s} \Scale[0.3]{ \begin{gathered} \begin{fmfgraph*}(80,50) \fmfstraight \fmfleft{i1,i2} \fmfright{o} \fmf{plain}{i1,v1,i2} \fmf{gluon}{v2,o} \fmf{gluon,left,tension=.3}{v1,v2,v1} \end{fmfgraph*} \end{gathered} } \end{fmffile}. The present non-covariant result is one reflection of the complexity of the gauge-matter system.} As a matter of fact, this is the diagram that suggests the solution of the gauge choice-dependence. This non-covariant result will be examined in section 3.2, where we will see how the covariance is restored. The diagram above also receives a contribution from the $V_{III}$ vertex: \begin{eqnarray} \begin{fmffile}{res3} \Scale[0.5]{ \begin{gathered} \begin{fmfgraph*}(80,50) \fmfstraight \fmfleft{i1,i2} \fmfright{o} \fmf{photon}{i1,v1,i2} \fmf{gluon}{v2,o} \fmf{gluon,left,tension=.3}{v1,v2,v1} \end{fmfgraph*} \end{gathered} } \end{fmffile}\bigg{|}_{V_{III}} &\Rightarrow& \fr{\k'^2\,\G(\epsilon)}{(4\pi)^2} \int \Big(\fr34 F_{\a\k}F_\b{}^{\k}R^{\a\b}+\fr18 F_{\a\b}F^{\a\b}R \nonumber\\ && +\fr14 F_{\a\d}F_{\b\g}R^{\a \b\g\d} -\fr14 F_{\a\b}F_{\g\d}R^{\a \b\g\d}\Big) \end{eqnarray} As for the diagrams with the inhomogeneous loops, the first-layer diagram to be computed is the one in Fig. 4. It corresponds to several second-layer diagrams, two of which are Fig.
3 (a) and (b); one can show \begin{eqnarray} \hspace{-.2in}\begin{fmffile}{mixed2pt} \Scale[0.6]{ \begin{gathered} \begin{fmfgraph}(80,60) \fmfleft{i} \fmfright{o} \fmf{photon}{i,v1} \fmf{photon}{v2,o} \fmf{photon,left,tension=.3}{v1,v2} \fmf{gluon,left,tension=.3}{v2,v1} \end{fmfgraph} \end{gathered} } \end{fmffile} &\Rightarrow& \fr{{ \k'^2}}2 \fr{\G(\epsilon)}{(4\pi)^2}\int \Big(\fr13 \pa_\a F^\a{}_{\k} \pa_\b F^{\b\k}-\fr1{12} \pa_\rho F_{\a\b}\pa^\rho F^{\a\b} \Big) \nonumber\\ \begin{fmffile}{2ndmixrel} \Scale[0.5]{ \begin{gathered} \begin{fmfgraph*}(80,60) \fmfleft{i1,i2,i3,i4} \fmfright{o} \fmf{phantom,tension=1}{i1,v1} \fmf{photon,tension=1}{i2,v1} \fmf{gluon,tension=1}{i3,v1} \fmf{phantom,tension=1}{i4,v1} \fmf{photon,tension=3}{v2,o} \fmf{photon,left,tension=.5}{v1,v2} \fmf{gluon,left,tension=.8}{v2,v1} \end{fmfgraph*} \end{gathered} } \end{fmffile} &\Rightarrow & {\k'^2}\fr{\G(\epsilon)}{(4\pi)^2} \int \Big( \fr13 F_{\a\k}\pa_\lambda \pa^\b F_\b{}^\k \varphi^{\a\lambda}- \fr1{12} F_{\a\k}\pa^2F_\b{}^\k \varphi^{\a\b} \Big) \label{ild3} \end{eqnarray} where all of the index contractions are done with the flat metric. Whereas the first diagram is covariant at the leading order, the second diagram is not, at its given order, the $\varphi_{\a\b}$-linear order. There are also contributions arising from the higher-order internal propagators, and all three of these different contributions are required for the covariance, since together they correspond to the single first-layer diagram in Fig. 4. Keeping track of the higher-order internal propagators obviously requires the full (or at least higher-order) propagator expression $\tilde{\D}$. Therefore, instead of separately computing the individual contributions, it will be more economical to compute them all at one stroke. The calculation can be done by performing the following steps: let us consider \begin{eqnarray} {\bf V}\equiv \fr12 \Big[ \tilde{g}^{\rho\s}h^{\m\n}+\tilde{g}^{\m\n}h^{\rho\s}\Big] f_{\m\rho}F_{\n\s} \end{eqnarray} where $\tilde{g}^{\m\n}$ is taken to be the linear-order expression, $\tilde{g}^{\m\n}=g^{\m\n}-\varphi^{\m\n}$; the contractions are carried out with $\tilde{g}_{\m\n}$. At this point we introduce the orthonormal basis $\tilde{e}_a^\m$: \begin{eqnarray} \tilde{e}_a^\m \tilde{e}_b^\n \tilde{g}_{\m\n}=\h_{ab} \end{eqnarray} where the Latin indices run over $a,b=0,1,2,3$. The full scalar propagator $\tilde{\D}$ can be written \begin{eqnarray} \tilde{\D}(X_1-X_2)=\int \fr{d^4L}{(2\pi)^4}\fr{e^{iL_c (X_1-X_2)^c}}{i L_a L_b \h^{ab}} \end{eqnarray} where $X^a$ and $L_c$ are the coordinates and momenta associated with the orthonormal basis. Then the computation of the two-point amplitude goes identically to that of Fig. 3 (a); switching back to the original frame, one gets \begin{eqnarray} \hspace{-.2in}\begin{fmffile}{fpdwl} \Scale[0.6]{ \begin{gathered} \begin{fmfgraph}(80,60)\fmfpen{thick} \fmfleft{i} \fmfright{o} \fmf{photon}{i,v1} \fmf{photon}{v2,o} \fmf{photon,left,tension=.3}{v1,v2} \fmf{gluon,left,tension=.3}{v2,v1} \fmflabel{A}{i} \fmflabel{$\tilde{g}$}{o} \end{fmfgraph} \end{gathered} } \end{fmffile} \Rightarrow -\fr12<\Big(\int {\bf V}\Big)^2>=\fr{{ \k'^2}}2 \fr{\G(\epsilon)}{(4\pi)^2}\int \Big( \fr13 \nabla_\a F^\a{}_{\k} \nabla_\b F^{\b\k}-\fr1{12} \nabla_\rho F_{\a\b}\nabla^\rho F^{\a\b} \Big) \nonumber\\ \end{eqnarray} The analysis of the vacuum-to-vacuum amplitudes and tadpoles will be presented in section 4.
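In recognizing covariant structures in the flat-background results above - and in \rf{f2c2} of the next subsection - the $\varphi$-linear expansion of the Ricci tensor, $R_{\m\n} = \fr12 (\pa^\k\pa_\m \varphi_{\k\n}+\pa^\k\pa_\n \varphi_{\k\m}-\pa_\m\pa_\n \varphi-\pa^2 \varphi_{\m\n})$, is used repeatedly. This expansion can be machine-checked; the following sympy sketch (illustrative code, not part of the original computation, written in a Euclidean background so that raising and lowering of indices is trivial) verifies it symbolically: \begin{verbatim}
import sympy as sp

# Illustrative check: verify the phi-linear Ricci tensor in a flat background,
#   R_mn = (1/2)(d_k d_m phi_kn + d_k d_n phi_km - d_m d_n phi - d^2 phi_mn),
# with a Euclidean metric so that upper and lower indices coincide.
n = 4
x = sp.symbols('x0:4')

phi = [[None] * n for _ in range(n)]   # symmetric perturbation phi_mn(x)
for i in range(n):
    for j in range(i, n):
        phi[i][j] = phi[j][i] = sp.Function('phi%d%d' % (i, j))(*x)

def d(f, mu):
    return sp.diff(f, x[mu])

def Gamma(l, m, nu):
    # phi-linear Christoffel symbols around the flat background
    return sp.Rational(1, 2) * (d(phi[l][nu], m) + d(phi[l][m], nu)
                                - d(phi[m][nu], l))

def ricci(m, nu):
    # R_mn = d_l Gamma^l_mn - d_n Gamma^l_ml; Gamma*Gamma terms are O(phi^2)
    return sum(d(Gamma(l, m, nu), l) - d(Gamma(l, m, l), nu) for l in range(n))

trace = sum(phi[k][k] for k in range(n))

def rhs(m, nu):
    return sp.Rational(1, 2) * (
        sum(d(d(phi[k][nu], k), m) for k in range(n))
        + sum(d(d(phi[k][m], k), nu) for k in range(n))
        - d(d(trace, m), nu)
        - sum(d(d(phi[m][nu], k), k) for k in range(n)))

assert all(sp.expand(ricci(m, nu) - rhs(m, nu)) == 0
           for m in range(n) for nu in range(n))
\end{verbatim} The analogous second-order checks behind \rf{covctr} proceed in the same way, up to integrations by parts under the integral sign.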
\subsection{on gauge-choice independence} Above, we have evaluated the counter-terms for the diagrams in Figs. 1 to 5, and they have led to different types of counter-terms, one of which, i.e., eq. \rf{f2c}, has come out non-covariant. This means that the effective action, as it stands, is non-covariant and gauge-fixing dependent. It turns out that these two problems have the following common solution: once the gauge-fixing\footnote{Note that the gauge-fixing \begin{eqnarray} \pa_\m \varphi^{\m\n}-\pa^\n \varphi =0 \label{vfgf} \end{eqnarray} reduces to \rf{vfgfr} once the traceless condition $\varphi=0$ is enforced.} \begin{eqnarray} \pa_\m \varphi^{\m\n}=0 \label{vfgfr} \end{eqnarray} is explicitly imposed on the effective action, the covariance and gauge-choice independence are restored. To see this, let us examine the non-covariant counter-terms for Fig. 2 (d) given in \rf{f2c}. Note that the first term in \rf{f2c} vanishes upon imposing the strong form of the gauge condition $\pa_\m \varphi^{\m\n}=0$, which implies\footnote{This comes with the following caveat. Since the scalar curvature $R$ is given by \begin{eqnarray} R=\pa_\a\pa_\b \varphi^{\a\b} \end{eqnarray} to the linear order, it is not possible, with the strong form of the gauge condition, to probe the presence of the $R$-factor through the current linear-order calculation. For that, it is necessary to go to the second order. } \begin{eqnarray} \pa_\n\pa_\m \varphi^{\m\n}=0; \end{eqnarray} with it, eq. \rf{f2c} now becomes \begin{eqnarray} \;\;\begin{fmffile}{res3} \Scale[0.5]{ \begin{gathered} \begin{fmfgraph*}(80,50) \fmfstraight \fmfleft{i1,i2} \fmfright{o} \fmf{photon}{i1,v1,i2} \fmf{gluon}{v2,o} \fmf{gluon,left,tension=.3}{v1,v2,v1} \end{fmfgraph*} \end{gathered} } \end{fmffile}\bigg{|}_{V_I+V_{II}} \Rightarrow \fr{\k'^2\,\G(\epsilon)}{(4\pi)^2} \int \Big( \fr12 F_{\m\k}F_\n{}^{\k} \pa^2 \varphi^{\m\n} \Big)= -\fr{\k'^2\,\G(\epsilon)}{(4\pi)^2} \int F_{\m\k}F_\n{}^{\k} R^{\m\n} \nonumber\\ \label{f2c2} \end{eqnarray} where the second equality is valid, as usual, up to a certain order of $\varphi_{\a\b}$, the linear order for the present case. Note that above, the following identity at $\varphi_{\rho\s}$-linear order has been used: \begin{eqnarray} R_{\m\n}&=& \fr12 (\pa^\k\pa_\m \varphi_{\k\n}+\pa^\k\pa_\n \varphi_{\k\m}-\pa_\m\pa_\n \varphi-\pa^2 \varphi_{\m\n} ) = - \fr12 \pa^2 \varphi_{\m\n} \end{eqnarray} where the second equality results once the gauge conditions are enforced. One is now in a position to address the longstanding issue of the gauge choice-dependence of the effective action: the effective action becomes fully covariant and gauge-choice independent after enforcing $\pa_\n \varphi^{\m\n}=0$. One should view the covariant action as still supplemented by the gauge-fixing. (This is just like the classical action: the classical action is fully covariant but is to be supplemented by a gauge-fixing condition.) If one chooses a different gauge-fixing and carries out the amplitude computations in that gauge, one should get exactly the same covariant effective action up to the terms that can be removed by that gauge condition; this time, the action is supplemented with the very gauge-fixing condition that one has chosen.
Therefore, the gauge-choice independence of the effective action should be interpreted to mean that the action is covariant after enforcing the strong form of the gauge condition and that the covariant action is to be supplemented by the gauge-fixing condition of one's choice. (One can of course choose any gauge-fixing, even one different from the initial gauge-fixing, once the covariant effective action is obtained.) One may wonder whether the appearance of the factors of $\pa_\m \varphi^{\m\n}$ in the counter-term calculation of \rf{f2c} could by any chance be made to disappear, say, without imposing the gauge condition. It appears that the gauge choice-dependence has a deeper root. To be specific, let us consider the proof of the gauge-choice independence in chap. 15 of \cite{Weinberg2}. The proof is for a gauge theory in the ordinary (i.e., non-BFM) path-integral. The gauge choice-dependence gets to reside in a field-independent constant (therein denoted by $C$; see eq. (15.5.19)), which is then duly disregarded. If one employs the BFM, however, that constant comes to depend on the background fields (say, $\varphi_{\m\n}$ in the present case) and this must be the gauge choice-dependence that we have observed. This shows that the BFM, refined or not, has a limitation when applied to a gravitational system: it was introduced with the aim of a more covariant treatment of the effective action computation, but it turns out to be at odds with the gauge-choice independence.\footnote{Nevertheless, the refined BFM has an advantage compared to the conventional BFM in that the latter would yield results non-covariant in an uncontrollable way, whereas the former gives results covariant up to the gauge choice-dependent terms that can be removed by enforcing the strong form of the gauge condition.} The limitation is overcome by imposing the gauge condition in its strong form as we have just discussed. \section{One-loop renormalization} The focus of the previous section was on the divergent parts of the diagrams; the analysis was involved but relatively straightforward. The forms of the counter-terms have been obtained with the infinite parts of the coefficients specified. As is well known in standard quantum field theory, one has the freedom to adjust the finite parts of the coefficients through the renormalization scheme, which one may take to be the modified minimal subtraction $\overline{\mbox{MS}}$ (see, e.g., \cite{Sterman} for a review). The main focus of the present section is to carry out the renormalization in detail and study its implications. For the detailed analysis involving the renormalization conditions, it is convenient, as commonly done, to introduce a scale parameter $\m$ by making the following shift: \begin{eqnarray} \k^2\rightarrow \m^{-n/2+2}\k^2 \end{eqnarray} With this, eq. \rf{EM} takes the form \begin{eqnarray} S=\int \sqrt{-\hat{g}}\;\Big(\fr1{\k^2 \m^{2-n/2}}\hat{R}-\fr14 \hat{F}_{\m\n}^2 \Big) \end{eqnarray} One can proceed and compute various amplitudes and counter-terms; that was basically what we did in the previous section, but this time the parameter $\m$ will be included. The finite parts can now be kept track of within the fixed renormalization scheme. One of the main goals of this section is to analyze the renormalization of the cosmological and Newton's constants (earlier works can be found, e.g., in \cite{Reuter:1996cp,Donkin:2012ud,Falls:2015qga}).
In its entirety the procedure involves dealing with an infinite number of counter-terms, not just the cosmological constant and Einstein-Hilbert terms. It will nevertheless be useful to first home in on the renormalization of those two constants, a task undertaken at the end of section 4.1, before working out the further details of the whole procedure in section 4.2. This is because these constants carry special physical meanings, unlike the other newly appearing couplings that will ultimately be absorbed by a metric field redefinition. Moreover, there are some subtleties in the evaluation of the diagrams responsible for their renormalization. Let us first frame the analysis of the vacuum and tadpole diagrams in preparation for section 4.1. The vacuum-to-vacuum amplitude Fig. 5 (a) takes the form of the cosmological constant term and diverges (see, e.g., \cite{Weinberg2} and \cite{Park:2016zgt}). (The discussion here is for a flat spacetime, but the divergence will be quite generically produced for an arbitrary background.) Thus, if we were dealing with a massive theory, a cosmological-constant counter-term with an infinite coefficient would be required to remove the divergence. However, the vacuum energy diagram vanishes due to an identity (eq. \rf{lid} below) in dimensional regularization. This is a rather undesirable feature of dimensional regularization when dealing with a massless theory.\footnote{For instance, the identities in \rf{vmi} and \rf{lid} often obscure cancellations between the bosonic and fermionic amplitudes in a supersymmetric field theory, making them separately vanish.} The diagrams responsible for the renormalization of the Newton's constant are the tadpole diagrams. As we will see in detail in section 4.1, the (would-be) shift in the Newton's constant is caused by a diagram that results from self-contraction of two fluctuation fields within the given vertex. Again, the following identity makes the regularization less suitable for the tadpole diagrams: \begin{eqnarray} \int d^D k \fr1{(k^2)^\w}=0 \label{vmi} \end{eqnarray} where $\w$ is an arbitrary number. The tadpole diagram vanishes due to this: the divergence that would otherwise renormalize the Newton's constant is taken to vanish. For the reasons to be explained, we will introduce the shifts in the cosmological and Newton's constants through finite renormalization. \subsection{vacuum-to-vacuum and tadpole diagrams} The kinetic terms are responsible for the vacuum-to-vacuum amplitudes in the parlance of the first-layer perturbation.
We quote them here for convenience; this time they are written as \begin{eqnarray} &&\hspace{-.3in} 2\k^2{\cal L} = \sqrt{-\tilde{g}}\,\Big( -\fr12\tilde{\nabla}_\g h^{\a\b}\tilde{\nabla}^\g h_{\a\b}+\fr14 \tilde{\nabla}_\g h^{\a}_\a \tilde{\nabla}^\g h^{\b}_\b \Big)\nonumber\\ &&\hspace{0.1in}= -\fr12 {\pa}_\g h^{\a\b}{\pa}^\g h_{\a\b}+\fr14 {\pa}_\g h^{\a}_\a {\pa}^\g h^{\b}_\b + V_{g,I}+ V_{g,II} \label{lv12q} \end{eqnarray} where \begin{eqnarray} V_{g,I} &\equiv& \Big(2\eta^{\b\b'}\tilde{\G}^{\a' \g\a}- \eta^{\a\b}\tilde{\G}^{\a' \g\b'}\Big)\pa_\g h_{\a\b}\, h_{\a'\b'} \nonumber\\ V_{g,II} &\equiv& \Big[\fr12(\eta^{\a\a'}\eta^{\b\b'}\varphi^{\g\g'}+\eta^{\b\b'}\eta^{\g\g'}\varphi^{\a\a'} +\eta^{\a\a'}\eta^{\g\g'}\varphi^{\b\b'})\nonumber\\ &&-\fr14 \varphi\, \eta^{\a\a'}\eta^{\b\b'}\eta^{\g\g'}-\fr12 \eta^{\g\g'}\eta^{\a'\b'}\varphi^{\a\b} \nonumber\\ &&+\fr14 (-\varphi^{\g\g'}+\fr12 \varphi \eta^{\g\g'})\eta^{\a\b}\eta^{\a'\b'} \Big] \pa_\g h_{\a\b}\, \pa_{\g'}h_{\a'\b'} \label{lv12} \end{eqnarray} We also define the remaining part of the vertex $V_{g}$ as $V_{g,III}$: \begin{eqnarray} \hspace{-.2in}{V_{g,III}} = \sqrt{-\tilde{g}}\Big( h_{\a\b}h_{\g\d}\tilde{R}^{\a\g\b\d}-h_{\a\b}h^{\b}{}_\g \tilde{R}^{\k\a\g}{}_{\k} -\fr12 h^{\a\b}h_{\a\b}\tilde{R} \Big) \label{gver} \end{eqnarray} The vacuum-to-vacuum amplitudes in the first-layer perturbation can be split into two parts in the second-layer perturbation: the vacuum-to-vacuum amplitudes and the tadpoles.\footnote{As we will soon see, there are genuine tadpoles as well, i.e., tadpoles in the first-layer perturbation. As usual we evaluate them through the second-layer perturbation.} Let us consider the vacuum-to-vacuum amplitudes in the second-layer perturbation. The vacuum energy - which leads to the cosmological constant renormalization - comes from \begin{eqnarray} \int \prod_x dh_{\k_1\k_2}\;e^{\fr{i}{\k'^2} \int \;\Big( -\fr12\pa_\g h^{\a\b}\pa^\g h_{\a\b} \Big) } \end{eqnarray} One obtains a constant term (see, e.g., the analysis given in \cite{Weinberg2}) whose divergent part (which will be denoted by {$A_0$} below) is essentially the coefficient of the cosmological constant term. The calculation above leads to a quantum-level cosmological constant. Here lies the difference between gravity and a non-gravitational theory. In a non-gravitational theory, the appearance of a term absent in the classical action would potentially signal non-renormalizability.\footnote{Even in a non-gravitational theory, the appearance of a {\em finite} number of new couplings is taken to be compatible with renormalizability. } However, in a gravitational theory one has the additional leverage of a metric field redefinition, and we will ponder in section 4.2 the significance of the quantum shift in the cosmological constant in the quantization framework that involves the metric field redefinition. The evaluation of the vacuum-to-vacuum amplitude, whether it is from the graviton or the ghost (or matter), involves the following integral, which is taken to vanish in dimensional regularization: \begin{eqnarray} \int d^4p \ln {p^2}=0 \label{lid} \end{eqnarray} Nevertheless, we introduce the shifts through finite renormalization for the following reasons.
Although the expression above is taken to vanish in dimensional regularization, the vacuum energy expression (in particular $A_0$ in \rf{wrml}) will not, in general, vanish in other regularization methods for a curved background. To better examine the behavior of the integral, let us add a mass term $m^2$ that will be taken to $m^2\rightarrow 0$ at the end, \begin{eqnarray} \sim \int d^4p \ln { (p^2+m^2)} \end{eqnarray} One can then take derivatives with respect to $m^2$ for its evaluation; the result takes the form \begin{eqnarray} A_f+A_0+A_1 m^2+A_2 m^4 \label{wrml} \end{eqnarray} where the $A$'s are some $m$-independent constants; the finite piece, $A_f$, behaves as \begin{eqnarray} A_f\sim m^4\ln m^2 \end{eqnarray} With the limit $m^2\rightarrow 0$, only the term with the constant $A_0$, which is infinite, survives, and in dimensional regularization one sets $A_0=0$. Although each term in \rf{wrml} either vanishes or is taken to zero, not introducing nonvanishing finite pieces seems unnatural (and unlikely to be consistent with experiment): in a more general procedure of renormalization of a quantum field theory, one can always conduct finite renormalization regardless of the presence of the divergences. (As we will see in section 4.2, not only does the quantum shift need to be introduced but also a ``classical'' piece of the cosmological constant.) Once a finite piece is introduced and the definition of the physical cosmological constant is made (say, as the coefficient of the $\int \sqrt{-\tilde{g}}$ term), the renormalized coupling will run basically due to the presence of the scale parameter $\m$. \vspace{.1in} Let us now consider the tadpole diagrams; the tadpole diagrams\footnote{Typically, tadpole diagrams in a non-gravitational theory are cancelled by a counter-term linear in the field and not considered further. More care is needed in a gravitational theory since the counter-terms take the form of the Einstein-Hilbert term. At least a priori it seems safer to view its effect as shifting the Newton's constant. The shift can be set to zero later if, for instance, the consistency of the renormalization program demands its vanishing.} are responsible for the renormalization of the Newton's constant. For the tadpole, the rest of the vertices in the kinetic term in \rf{lv12q} - which are nothing but $V_{g,I}$ and $V_{g,II}$ - as well as $V_{g,III}$ are relevant; the former are part of the vacuum-to-vacuum amplitude in the first-layer perturbation whereas the latter is associated with a genuine tadpole of the first-layer perturbation. It turns out that these kinetic-term vertices lead to a vanishing result in dimensional regularization; we illustrate this with $V_{g,I}$, \begin{eqnarray} V_{g,I}=\Big(2\eta^{\b\b'}\tilde{\G}^{\a' \g\a}- \eta^{\a\b}\tilde{\G}^{\a' \g\b'}\Big)\pa_\g h_{\a\b}\, h_{\a'\b'} \label{vex} \end{eqnarray} The self-contraction of the $h_{\m\n}$'s in \rf{vex} leads to a momentum loop integral with an odd integrand, which thus vanishes. (The other terms in \rf{lv12q} vanish because the self-contraction leads to the trace of $\varphi_{\m\n}$.) The vertex $V_{g,III}$ similarly leads to a vanishing result. To see this, consider contraction of the $h_{\a\b}$-fields in $V_{g,III}$. The index structures yield $R$, but the self-contraction is taken to vanish in dimensional regularization due to the identity in \rf{vmi}.
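The structure of \rf{wrml} can be made explicit in dimensional regularization, where, up to an $m$-independent constant, $\int \fr{d^Dp}{(2\pi)^D}\ln (p^2+m^2)=-\fr{\G(-D/2)}{(4\pi)^{D/2}}\,m^D$ in Euclidean signature. The following minimal sketch (Python/sympy; our illustration rather than part of the text's derivation, with the pole made explicit via $\G(\epsilon-2)=\G(1+\epsilon)/[\epsilon(\epsilon-1)(\epsilon-2)]$) expands this around $D=4-2\epsilon$:
\begin{verbatim}
import sympy as sp

m, eps = sp.symbols('m epsilon', positive=True)
D = 4 - 2*eps

# Gamma(-D/2) = Gamma(eps - 2), rewritten so that the pole is explicit
gamma_minus_D2 = sp.gamma(1 + eps)/(eps*(eps - 1)*(eps - 2))

# int d^Dp/(2pi)^D ln(p^2 + m^2), up to an m-independent constant
I = -gamma_minus_D2/(4*sp.pi)**(D/2)*m**D
print(sp.series(I, eps, 0, 1))   # m**4/eps pole plus m**4*log(m) terms
\end{verbatim}
The output exhibits the $m^4/\epsilon$ pole and the $m^4\ln m^2$ behavior of $A_f$; the would-be $A_0$ and $A_1 m^2$ pieces of \rf{wrml} are absent, in accord with their being set to zero in dimensional regularization.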
Then for the renormalization of the Newton's constant the story goes similarly to the case of the cosmological constant: although dimensional regularization does not lead to a divergence for the tadpole diagram, the shift is introduced through finite renormalization. \subsection{renormalization by field redefinition} We now carry out the full one-loop renormalization procedure. Many steps of the procedure below have analogues in the Einstein-scalar case studied in \cite{Park:2016zgt} and \cite{Park:2016vam}. Here we put more effort into keeping track of the finite parts, and comparison with future experimental results is spelled out in more detail. Combining all the results so far, the renormalized action plus the counter-terms is given by \begin{eqnarray} &&\hspace{.3in}\int \sqrt{-g}\;(e_1+ e_2 R+ e_3 R^2+e_4 R_{\a\b}^2) \nonumber\\ &&\hspace{-.3in}+\int \sqrt{-g}\Big( e_5 F_{\m\k}F_\n{}^{\k} R^{\m\n} +e_6 F_{\a\b}F^{\a\b}R + e_7 F_{\a\d}F_{\b\g}R^{\a \b\g\d} \quad \label{totctr} \end{eqnarray} \vspace{-.2in} \[ \;\; +e_8 F_{\a\b}F_{\g\d}R^{\a \b\g\d} + e_{9} \nabla^\a F_{\a\k}\nabla^\b F_\b{}^\k +e_{10} \nabla_\lambda F_{\m\n}\nabla^\lambda F^{\m\n} + e_{11}(F_{\a\b}F^{\a\b})^2 +\cdots \Big) \] where $e_1$ is the constant previously denoted by $A_0$. More precisely, $[e_1]=A_0$, where the square bracket $[e_i]$ denotes the infinite part of the coefficient $e_i$ calculated by employing dimensional regularization. Similarly, the would-be divergence of the tadpole diagrams will be denoted $B_0=[e_2]$. ($A_0, B_0$ are taken to vanish in dimensional regularization.) For the rest of the coefficients, one has, by collecting the results in section 3, \begin{eqnarray} && [e_3]=-\fr{17}{60}+\fr{23}{80}+\fr1{30},\quad [e_4]= -\fr{7}{30}+\fr{23}{40}-\fr1{10}-\fr1{15},\quad \nonumber\\ && [e_5]=\Big(-1+\fr34\Big) \k'^2,\quad [e_6]=\fr{\k'^2}8,\quad [e_7]=\fr{\k'^2}4,\quad [e_8]=-\fr{\k'^2}4,\quad \nonumber\\ && [e_{9}]=\fr{\k'^2}6,\quad [e_{10}]= -\fr{\k'^2}{24} ,\quad [e_{11}]=\fr{3}{64}\k'^4,\quad \end{eqnarray} where the common factor $ \fr{\G(\epsilon)}{(4\pi)^2}$ has been suppressed. The finite pieces of each coefficient are determined by the $\overline{\mbox{MS}}$ scheme. Not all these counter-terms are independent because of the following relationships, the first of which is valid up to total derivative terms: \begin{eqnarray} F_{\a\b}F_{\g\d}R^{\a \b\g\d} &=& \nabla_\m F_{\n\rho} \nabla^\m F^{\n\rho} +2 F_{\m\k} F_\n{}^\k R^{\m\n}-2 \nabla^\lambda F_{\lambda \k} \nabla^\s F_{\s}{}^\k \nonumber\\ F_{\a\d}F_{\b\g}R^{\a \b\g\d} &=& -\fr12 F_{\a\b}F_{\g\d}R^{\a \b\g\d} \end{eqnarray} Upon substituting these into \rf{totctr}, one gets\footnote{The analysis in this section is to illustrate the renormalization procedure and is based on the computation that we have carried out in the previous sections. Some of the diagrams that we did not explicitly calculate will change the numerical values of certain coefficients. For instance, there are tadpole diagrams with the matter fields running on the loop.
Such diagrams will generate a counter-term of the form $\sim F_{\a\b}^2$.} \begin{eqnarray} &&\hspace{-.3in} \int \sqrt{-g} \; \Big[ e_1+ e_2 R+ e_3 R^2+e_4 R_{\a\b}^2 +(e_5-e_7+2e_8) F_{\m\k}F_\n{}^{\k} R^{\m\n} \nonumber\\ && \quad +e_6 F_{\a\b}F^{\a\b}R + (e_7-2e_8+e_{9} ) \nabla^\a F_{\a\k}\nabla^\b F_\b{}^\k \quad \label{totctrind} \end{eqnarray} \vspace{-.2in} \[ \;\; +(-e_7/2+e_8+e_{10} )\nabla_\lambda F_{\m\n}\nabla^\lambda F^{\m\n} + e_{11}(F_{\a\b}F^{\a\b})^2 +\cdots \Big] \] The strategy is to absorb these counter-terms by redefining the metric in the bare action. Inspection reveals that the counter-terms of the forms $\nabla_\lambda F_{\m\n}\nabla^\lambda F^{\m\n},\nabla^\a F_{\a\k}\nabla^\b F_\b{}^\k$ cannot be absorbed by a bare action that consists of the Einstein-Hilbert term and the Maxwell term: one needs the cosmological constant term as well. The reason is that under a metric shift $g_{\m\n}\rightarrow g_{\m\n}+\d g_{\m\n}$, the Einstein-Hilbert part shifts according to \begin{eqnarray} \sqrt{-g}\,R \rightarrow \sqrt{-g}\,R+R\,\d g_{\m\n} \fr{\d \sqrt{-g}}{\d g_{\m\n}}+\sqrt{-g}\,\d g_{\m\n} \fr{\d R}{\d g_{\m\n}} \end{eqnarray} so the shifted part comes either with $R$ or $R_{\m\n}$, and is thus inadequate to absorb the aforementioned counter-terms; the same is true of the shifted part from the Maxwell action. We assume the presence of the cosmological constant in the bare action and proceed; more on this in the conclusion. Let us consider the following shifts\footnote{One may wonder about the traceless condition on the newly defined metric. The traceless condition was imposed so that the propagator is well-defined. Once the effective action is obtained, one may choose a different gauge-fixing for solving the field equations, in which the traceless condition need not be imposed.} \cite{tHooft:1973bhk}\cite{Park:2016zgt}, \[ \k\rightarrow \k+\d\k \quad,\quad \L\rightarrow \L+\d\L \] \begin{eqnarray} g_{\m\n} & \rightarrow& \mathscr{G}_{\m\n} \equiv l_0 g_{\m\n}+l_1 g_{\m\n}R+l_2 R_{\m\n} + l_3g_{\m\n}F_{\rho\s}^2 +l_4 F_{\m\k}F_\n{}^\k \nonumber\\ &&\hspace{.45in}+l_5 R F_{\m\k}F_\n{}^\k +l_6 R_{\m\n} F_{\k_1\k_2}^2 + l_7 g_{\m\n} R F_{\rho\s}^2 + l_8 g_{\m\n} R^{\a\b}F_{\a\k}F_\b{}^\k \nonumber\\ &&\hspace{.45in} + l_9 R_{\m}{}^\a{}_\n{}^\b F_{\a\k}F_\b{}^\k +l_{10}R (F_{\k_1\k_2} F^{\k_1\k_2} )^2 \nonumber\\ && \hspace{.45in} +l_{11}\nabla_\m F_{\k_1\k_2} \nabla_\n F^{\k_1\k_2} +l_{12}\nabla^\lambda F_{\lambda\m} \nabla^\k F_{\k\n} \label{ms} \end{eqnarray} One can straightforwardly show that under these, the gravity and matter sectors shift, respectively, \begin{eqnarray} &&\hspace{-.4in} -(\fr{2}{\k^2}\L)\int \sqrt{-g} + \fr1{\k^2}\int d^4 x \sqrt{-g}\;R \rightarrow -{2}\Big(\fr{\L}{\k^2}+ \fr{\d\L}{\k^2}-\fr{2\d\k \L}{{ \k^3}}+2l_0 \L \Big) \int \sqrt{-g} \nonumber\\ & &\hspace{-.4in} + \Big(\fr1{\k^2} -\fr{2\d \k}{\k^3} +\fr{l_0}{{ \k^2}}-\fr{\L}{{ \k^2}} (4l_1+l_2)\Big)\int \sqrt{-g}\;R +{ \fr1{\k^2}} \int \sqrt{-g}\Big[(l_1+\fr12 l_2)R^2-l_2 R_{\m\n}R^{\m\n}\Big] \nonumber\\ &&\hspace{-.2in} + { \fr1{\k^2}}\int \sqrt{-g} \bigg[ -\L(4l_3+l_4)F_{\a\b}F^{\a\b}+\Big(l_3+l_4/2-\L[l_5+l_6+4l_7]\Big)RF_{\a\b}F^{\a\b}\nonumber\\
&&\hspace{-.2in}-\L(4l_8+l_9)R^{\a\b}F_{\a\k}F_\b{}^\k -4\L l_{10}(F_{\rho\s}F^{\rho\s})^2 - \L l_{11}(\nabla_\m F_{\n\rho})^2 -\L l_{12}(\nabla^\k F_{\k\n})^2 +\dots \bigg]\nonumber\\ \end{eqnarray} and\footnote{It is likely that the counter-term of the form $F_{\a\k}F_\b{}^\k F^{\a\k'}F^\b{}_{\k'}$ will appear at two-loop.} \begin{eqnarray} &&\hspace{-.3in}-\fr14\int \sqrt{-g}\; F_{\m\n}^2 \rightarrow -\fr14\int \sqrt{-g}\; F_{\m\n}^2 +\int \sqrt{-g}\;\bigg[ -\fr{l_2}{8} RF_{\a\b}F^{\a\b} \nonumber\\ &&+\fr{l_2}{2} R^{\a\b}F_{\a\k}F_\b{}^\k +\fr{l_3}{2}(F_{\rho\s}F^{\rho\s})^2 +\fr{l_4}{2}F_{\a\k}F_\b{}^\k F^{\a\k'}F^\b{}_{\k'} +\cdots\bigg] \end{eqnarray} Combining these two, one gets \begin{eqnarray} &&\hspace{-.5in} \fr1{\k^2}\int d^4 x \sqrt{-g}\;(R-2\L) -\fr14\int \sqrt{-g}\; F_{\m\n}^2 \rightarrow -{2}\Big(\fr{\L}{\k^2}+ \fr{\d\L}{\k^2}-\fr{2\d\k \L}{{ \k^3}}+2l_0 \L \Big) \int \sqrt{-g} \nonumber\\ & & \hspace{.2in} + \Big(\fr1{\k^2} -\fr{2\d \k}{\k^3} +{ \fr1{\k^2}}l_0-{ \fr1{\k^2}}\L (4l_1+l_2)\Big)\int \sqrt{-g}\;R -\fr14\int \sqrt{-g}\; F_{\m\n}^2 \nonumber\\ &&\hspace{-.2in} + { \fr1{\k^2}}\int \sqrt{-g}\Big[(l_1+\fr12 l_2)R^2-l_2 R_{\m\n}R^{\m\n}\Big]+{ \fr1{\k^2}} \int \sqrt{-g} \bigg[ -\L(4l_3+l_4)F_{\a\b}F^{\a\b} \nonumber\\ &&\hspace{-.2in} +\Big(l_3+\fr{l_4}2-\L[l_5+l_6+4l_7] -\fr{l_2}{8} { \k^2}\Big)RF_{\a\b}F^{\a\b}+\Big({ {\k^2}}\fr{l_2}{2}-\L[4l_8+l_9]\Big)R^{\a\b}F_{\a\k}F_\b{}^\k \nonumber\\ && +({ \k^2} l_3/2 -4\L l_{10})(F_{\rho\s}F^{\rho\s})^2 -\L l_{11}(\nabla_\m F_{\n\rho})^2 -\L l_{12}(\nabla^\k F_{\k\n})^2 +\dots \bigg] \end{eqnarray} Not all of the terms in the expansion have been explicitly recorded: additional diagrams such as 3-pt amplitudes should be considered to account for some of them. Let us consider the first several coefficients of the shifted action and compare them with those of \rf{totctrind}. We start with the cosmological constant term and the Einstein-Hilbert term. Their counter-terms can be absorbed by setting\footnote{Note that $A_0= - \mathscr{I}_{div}$, in the notation used, e.g., in \cite{Park:2016zgt}.} \begin{eqnarray} -\fr{2}{\k^2}\Big(\d\L-\fr{2\d\k \L}{{ \k}}\Big) = A_0 \label{dL} \end{eqnarray} and \begin{eqnarray} -\fr{2}{{ \k^3}}\d\k +\fr1{\k^2}l_0 - \fr{\L}{\k^2} (4l_1+l_2) = B_0 \label{dk} \end{eqnarray} respectively. We assume that the constants $A_0,B_0$ now contain the non-vanishing finite pieces introduced by the aforementioned finite renormalization. Eq.
\rf{dk} determines the infinite part of $\d \k$, \begin{eqnarray} \d\k =\fr{\k}{2}l_0 - \fr{\k\L}{2} (4l_1+l_2) - \fr{\k^3}{2}B_0 \end{eqnarray} $\d \L$ is determined once this result is substituted into \rf{dL}: \begin{eqnarray} \d \L=l_0 \L-\L^2 (4l_1+l_2)-{\k^2}\L B_0-\fr{\k^2}2 A_0 \end{eqnarray} The counter-terms of the forms $R^2,R_{\m\n}^2$ can be absorbed by setting \begin{eqnarray} l_1+\fr12 l_2=e_3 \quad,\quad - l_2= e_4 \nonumber\\ \end{eqnarray} which yields \begin{eqnarray} l_1= e_3+\fr12e_4 \quad,\quad l_2= -e_4 \end{eqnarray} Inspection of the coefficients of $(F_{\a\b})^2$ implies \begin{eqnarray} 4l_3+l_4={\cal O}(\k^4) \end{eqnarray} The coefficients of $RF_{\a\b}F^{\a\b}$, $R^{\a\b}F_{\a\k}F_\b{}^\k$ should match the corresponding coefficients of the counter-term action: \[ l_3+\fr{l_4}2-\L (l_5+l_6+4l_7)-\fr{l_2}8 { \k^2}=e_6\;,\; \fr{{ \k^2}}2 l_2-\L (4l_{8}+l_{9}) =e_5-e_7+2e_8 \] These constraints are to be combined with those coming from the higher-order counter-terms. \subsection{scattering predictability and more on the 1PI action} It may be useful to recall the case of a non-gravitational theory before considering the gravitational case. Suppose one performed the loop computations and found new vertices required to remove the divergences. In the standard procedure of renormalization, those vertices will be included, with the corresponding arbitrary coupling constants, in the bare action. If the number of the new vertices is finite, the theory is called renormalizable and one proceeds to obtain the 1PI effective action. If the number is infinite, the theory is declared to be unrenormalizable; the infinite number of coupling constants leads to a loss of the predictive power of the theory. In a gravity theory there are an infinite number of counter-vertices, some of which we have seen in section 3. The idea of the field redefinition is that one starts with the bare action of the same form as the classical action with the possible addition of the cosmological constant term. The metric in the bare action is the field-redefined one, $\mathscr{G}_{\m\n}$, in \rf{ms}. The crucial point is that all of the coupling constants associated with the higher-order counter-vertices are absorbed into the {\em redefined} metric $\mathscr{G}_{\m\n}$, and are thus unobservable \cite{tHooft:1973bhk}. The predictability of the theory for scattering amplitudes then follows. Let us paraphrase. The divergences arising from the loop diagrams can be removed by the counter-vertices present on the right-hand side of the definition of $\mathscr{G}_{\m\n}$ in \rf{ms}. In other words, with the counter-terms added, the renormalized action now contains all of the new coupling constants. However, the counter-terms with those coupling constants can be combined into Einstein-Hilbert form in terms of the redefined metric $\mathscr{G}_{\m\n}$. This means that the bare action has two coupling constants, the cosmological and Newton's, in terms of the new metric, and therefore the theory is predictive. More specifically, the theory becomes predictive by following the usual ``routines'': suppose the experimental values of the cosmological and Newton's constants are known accurately to the extent that we may discern the quantum corrections. One can find the values of the renormalized cosmological and Newton's constants by imposing certain renormalization conditions.
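For concreteness, the matching conditions used above form a small linear system that can be solved mechanically. A minimal sketch (Python/sympy, with the symbols named as in \rf{dL}, \rf{dk} and the $R^2,R_{\m\n}^2$ conditions; our illustration only):
\begin{verbatim}
import sympy as sp

kappa, Lam, A0, B0, e3, e4, l0 = sp.symbols('kappa Lambda A_0 B_0 e_3 e_4 l_0')
dk, dL, l1, l2 = sp.symbols('delta_kappa delta_Lambda l_1 l_2')

eqs = [
    sp.Eq(-2/kappa**2*(dL - 2*dk*Lam/kappa), A0),                        # eq. (dL)
    sp.Eq(-2*dk/kappa**3 + l0/kappa**2 - Lam*(4*l1 + l2)/kappa**2, B0),  # eq. (dk)
    sp.Eq(l1 + l2/2, e3),     # R^2 matching
    sp.Eq(-l2, e4),           # R_{ab}^2 matching
]
sol = sp.solve(eqs, [dk, dL, l1, l2], dict=True)[0]
for v in (l1, l2, dk, dL):
    print(v, '=', sp.expand(sol[v]))
# reproduces l_1 = e_3 + e_4/2, l_2 = -e_4, and the delta-kappa, delta-Lambda
# expressions above with 4*l_1 + l_2 = 4*e_3 + e_4 substituted
\end{verbatim}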
Once the renormalized constants are determined in terms of physical constants, one can proceed to compute, for instance, various scattering amplitudes and make predictions for the corresponding experimental outcomes. The fact that the infinite number of coupling constants is absorbed by the metric field redefinition must not be taken to mean that the quantum effects are immaterial. All those vertices will be present in the effective action with fixed finite values of their coefficients.\footnote{In general a full effective action is a highly complicated object, even containing nonlocal terms (which may be important for black hole physics \cite{Mukhanov:1994ax}). Here we focus on the starting renormalized action with the added vertices and their fixed finite coefficients.} At this point one can consider yet another field redefinition in conjunction with the quantum deformation of the geometry that plays an important role in the context of black hole information \cite{Park:2017dib}\cite{Park:2017wiw}\cite{Nurmagambetov:2018het}. \section{Conclusion} In this work, we have extended the one-loop renormalization of an Einstein-scalar system to an Einstein-Maxwell system. As in the previous works, the amplitude calculations have been carried out with the two layers of perturbation in the refined background field method. Since the Maxwell part itself is a gauge system, the extension involves overcoming several additional hurdles. The direct Feynman diagrammatic computation has led to a gauge choice-dependent 1PI effective action: the effective action is covariant only up to the metric gauge-fixing. The origin of the dependence was found in the limitation of the background field method. The proper interpretation of the gauge-choice independence is that the effective action is made covariant by removing all of the terms containing the gauge-fixing condition. At the same time, the action is to be supplemented by the gauge-fixing condition in the usual manner. We have gone one step further than in our previous works: with a fixed renormalization scheme chosen, we have enumerated the quantum corrections to various physical quantities such as the cosmological and Newton's constants. The role of the finite renormalization is important. We have seen that the metric field redefinition a la 't Hooft brings predictive power to the theory. There are several highlights worth recapitulating. Firstly, note that one ends up taking three different measures to ensure the covariance: removal of the trace part of the metric, employment of the refined BFM, and enforcement of the strong form of the gauge-fixing. Secondly, the cosmological constant has several special features in the context of renormalization. It is the leading term in the derivative expansion and is generically generated regardless of the background under consideration. Even if one's starting action does not include the cosmological constant term, the renormalizability dictates its presence in the bare action (and thus in the effective action). Thirdly, the renormalizability requires a metric field redefinition. The existence of such a field redefinition should not be a coincidence but must be a reflection of the quantum deformation of the geometry. The freedom of such a field redefinition is powerful and distinguishes gravity from non-gravitational unrenormalizable theories. We have seen that the renormalizability requires the presence of the cosmological term in the bare action.
One may take this as a rationale for the presence of a renormalized cosmological constant in the starting renormalized action. In fact, this seems to suggest a future direction that stands out. In the main body we carried out the analysis without including the cosmological constant since we were interested in a flat background. The fact that the cosmological constant is generically generated and required for the renormalizability seems to suggest the possibility that it should be included in the starting renormalized action. This would imply that one should consider the propagator associated with a de Sitter (or anti-de Sitter) background, although the flat spacetime analysis can still be employed for the divergence analysis. Once the cosmological constant term is included and expanded in the fluctuation metric, it can be treated as a source of additional vertices.\footnote{Alternatively, it can be treated as the ``graviton mass'' term, and with this the graviton propagator becomes a massive one. This may appear too contrived, but the massive propagator makes it unnecessary to introduce the finite renormalization. An objection may be raised that the spacetime is no longer flat in the presence of the cosmological constant term. This is an issue worth further exploring. The bottom line is that the flat-space analysis catches the divergences. Also, in a more realistic setup including a Higgs-type scalar, mixing between the physical and unphysical states \cite{Pius:2014iaa} is expected.} It appears that there are several variant renormalization procedures depending on, e.g., whether or not to include the cosmological constant in the starting renormalized action. As a matter of fact, there is an intriguing possibility when choosing a renormalization scheme. Although the flat-space analysis catches the divergent parts of the proper curved-space analysis, the finite parts require, in general, the proper curved-space propagators. One may choose the renormalization scheme such that the finite parts become the same as those of the corresponding flat-space analysis. It will be interesting to see whether or not the renormalization procedure could be consistently conducted with such a special scheme. If it could be, and yields the same results as the curved-space analysis, the flat-space analysis will serve as a highly convenient alternative to the proper curved-space analysis, and that would imply, in a certain sense, the background independence of the whole framework. Another direction is the two-loop extension of the results of the present work. As stated in the introduction, the renormalization procedure in this paper is entirely within the standard framework, and in particular the reduction of the physical states did not play a role (other than providing assurance that the present procedure can, in principle, be extended to two and higher loops). Although the direct two-loop analysis is expected to be much harder, the difficulty will be of technical character and associated with computing the Feynman diagrams themselves. One may turn to the approach where the counter-terms are determined by dimensional analysis and covariance \cite{Goroff:1985th}. Once the counter-terms are obtained one way or another, it should be possible, with reasonable effort, to extend the field-redefinition-based renormalization to two loops. The reduction to the physical sector \cite{Park:2014tia,Park:2015ybl} is expected to play a role at two and higher loops. \newpage
\section{Introduction} G11.2$-$0.3 is a composite-type SNR with a central pulsar wind nebula (PWN) surrounded by a circular shell. The shell is bright both in radio and X-rays, and has an outer diameter of $4'$ and a thickness of $0.'5$ \citep{gre88,rob03}. The shell is clumpy, with several clumps protruding from its outer boundary. The bright radio shell with high circular symmetry indicates that the remnant is young, and it is thought to be the best candidate for the possible historical supernova of AD 386 \citep{ste02}. The PWN with an associated pulsar was discovered at the very center of the remnant in X-rays with {\it ASCA}, and later its detailed structure was studied with the Chandra X-ray Observatory \citep[][and references therein]{vas96, kas01, rob03}. The PWN is elongated along the NE-SW direction with a total extent of $1'$, and appears to be surrounded by a radio synchrotron nebula with similar extent and shape. The distance to G11.2$-$0.3 determined from {\sc Hi}\ absorption is 5 kpc \citep{gre88}. The overall morphology of G11.2$-$0.3 resembles Cassiopeia A (Cas A). Both have a thick, bright, and clumpy shell, although the shell of Cas A is much brighter than that of G11.2$-$0.3, i.e., 2720 Jy vs 22 Jy at 1 GHz \citep{gre04} \footnote{Also available at http://www.mrao.cam.ac.uk/surveys/snrs/.}. At a distance of 3.4 kpc, the outer radius of the Cas A shell is 2.0 pc and its expansion velocity is 4,000--6,500 ~km s$^{-1}$\ \citep[e.g.,][]{fes96}, while they are 2.9 pc and $\sim 1,000$~km s$^{-1}$\ for G11.2$-$0.3. Cas A has a faint $5'$ (or 2.5 pc)-diameter plateau extending beyond the bright shell \citep[e.g.,][]{hwa04}. The plateau represents swept-up circumstellar (or ambient) medium, while the bright shell is thought to be mainly the ejecta swept up by a reverse shock. Such a plateau has not been detected in G11.2$-$0.3. \cite{che05} classified both Cas A and G11.2$-$0.3 into the SN IIL/b category, which has a red supergiant (RSG) progenitor star with some H envelope remaining but most of it lost \citep[cf.][]{hwa03, you06}. Detailed observations have revealed that the explosion of Cas A was turbulent and asymmetric and that the ejecta is now interacting with a clumpy circumstellar wind \citep[see][and references therein]{hwa04, che03}. However, very little is known about the explosion and the interaction of the 1620(?) yr-old G11.2$-$0.3 despite its close similarity to Cas A. In this paper, we report the discovery and detailed studies of [Fe II]\ and H$_2$\ filaments in the SNR G11.2$-$0.3 using near-infrared (IR) imaging and spectroscopic observations. Although the recent mid-IR data obtained with the Spitzer Space Telescope show the presence of very faint wispy emission close to its SE boundary \citep{lee05, rea06}, our near-IR observations reveal much more prominent and extended features both at the boundary and in the interior of the remnant, which provide important clues on the origin and evolution of G11.2$-$0.3. \section{Observations} We carried out near-IR imaging observations of the SNR G11.2-0.3 with the Wide-field Infrared Camera (WIRC) on the Palomar 5-m Hale telescope using several narrow- and broad-band filters in 2003 June and 2005 August (Table 1). WIRC is equipped with a Rockwell Science Hawaii II HgCdTe 2K infrared focal plane array, covering a $\sim 8.'5\times 8.'5$ field of view with a $0.''25$ pixel scale. For the basic data reduction, we subtracted dark and sky background from each individual dithered image and then normalized it by a flat image.
We finally combined the individual images to make a final image. The seeing was typically $0.''8$--$1''$ over the observations. We obtained the flux calibration of our narrow-band filter (i.e., [Fe II]\ and H$_2$\ ) images using the H (for [Fe II] 1.644~$\mu$m) and Ks (for H$_2$ 2.122~$\mu$m) band magnitudes of $\ge 20$ nearby isolated, unsaturated 2MASS stars. For this, we first converted the 2MASS magnitudes to fluxes \citep{coh03}, and then obtained the fluxes of the [Fe II]\ and H$_2$\ emission after we deconvolved the responsivities of both the WIRC ([Fe II]\ and H$_2$\ ) and 2MASS ($H$ and $K_s$) filters. The overall uncertainty in the flux calibration is less than $10 \%$. We attribute the major source of uncertainty to the different band responses of the filters, as the uncertainty in the photometry itself is typically a few percent. For the astrometric solutions of our images, we used all the cataloged 2MASS stars in the field, and found that our solutions are consistent with the 2MASS frame with an rms uncertainty of $0.''15$. After identifying several emission-line features in the aforementioned imaging observations, we carried out follow-up spectroscopic observations of them using the Long-slit Near-IR Spectrograph of the Palomar 5-m Hale Telescope \citep{lar96} in 2005 August. The spectrograph has a $256\times 256$ pixel HgCdTe NICMOS 3 array with a fixed slit of $38''$ length. We placed the slit along the bright [Fe II]\ and H$_2$\ filaments crossing their peak positions. Toward the [Fe II]\ filament, four spectra around 1.25, 1.52, 1.63 $\mu$m (for [Fe II]\ emission), and 2.16 $\mu$m (for {\sc Hi}\ Br$\gamma$\ ) were obtained, while, toward the H$_2$\ filament, one spectrum around 2.11 $\mu$m was obtained. Over the observations, the slit width was fixed to be $1''$, resulting in a spectral resolution of 650--850 with a usable wavelength coverage of 0.06--0.12 $\mu$m. For all the obtained spectra, the individual exposure time was 300 s, with an equal exposure of nearby sky for sky background subtraction. For the [Fe II]\ lines, we performed the exposure twice (with different sky positions, but the same source position) and combined them, while, for the H$_2$\ line, we performed the exposure only once. Just after the source observations, we obtained the spectra of the G3V star HR 8545, which was at a similar airmass to the source, by uniformly illuminating the slit using the f/70 chopping secondary of the telescope. We then divided the source spectra by those of HR 8545 and multiplied by a blackbody radiation curve of the G3V star temperature, which is equivalent to simultaneous flat fielding and atmospheric opacity correction. G stars, however, have numerous intrinsic (absorption) features, so that this procedure could inflate the intensities of emission lines if they fall on these stellar features. We have estimated the errors using the G2V solar spectrum \citep{liv91}\footnote{Also available at http://diglib.nso.edu/contents.html.} as a template for our reference star HR 8545 \citep[cf.][]{mai96,vac03}. The estimated errors in the observed line fluxes are $\le 5$\% for all lines except the Br$\gamma$\ line, for which it is $10$\%. The resulting errors in the line ratios, which are used for the derivation of physical parameters, are $\le 2$\% except for the Br$\gamma$\ /[Fe II] 1.644~$\mu$m ratio, for which it is 7\%. These calibration errors are all less than the statistical ($1\sigma$) errors (see Table 2). Another source of error is the difference in atmospheric conditions.
All spectra of the source and HR 8545 were obtained at airmasses of 1.6--1.8, except the H$_2$ spectra of the source, which were obtained at an airmass of 2.27. There is a strong atmospheric CO$_2$ absorption line between 2.05 and 2.08 $\mu$m, and the different airmasses can give an error in the intensity of the H$_2$ (2--1) S(3) line at 2.0735 $\mu$m. According to \cite{han96}, the error is about $2$\% when the airmasses differ by 0.3, so that it would be $\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 5$\% for our H$_2$ (2--1) S(3) line. Again, this is less than the ($1\sigma$) statistical error. We therefore consider that the uncertainty due to the calibration errors is less than the statistical errors quoted in this paper (Table 2). For the wavelength solutions of the spectra, we used the OH sky lines \citep{rou00}. \section {Results} Fig. 1 (right) is our three-color image representing the near-IR [Fe II] 1.644~$\mu$m (B), H$_2$ 2.122~$\mu$m (G), and Br$\gamma$\ 2.166 $\mu$m (R) emission of the SNR G11.2$-$0.3. We also show a 1.4 GHz VLA image for comparison, which was obtained by \cite{gre88} in 1984--85 with $3''$ resolution. Note that the expansion rate of G11.2$-$0.3 at 1.4 GHz is $0.''057\pm 0.''012$ yr$^{-1}$ \citep{tam03}, which amounts to $\sim 1''$ over the last 20 years (see also \S 3.3). The near-IR emission features in Fig. 1 can be summarized as follows: (1) an extended ($\sim 2.'5$), bright [Fe II]\ (blue) filament along the SE radio shell; (2) some faint, knotty [Fe II]\ emission features along the NW radio shell as well as in the interior of the source; (3) a small ($30''$), bright H$_2$\ (green) filament along the outer boundary of the source in the SE; (4) another small, faint H$_2$\ filament outside the NE boundary of the source. Overall, the [Fe II]\ filaments are located either within the radio shell or in the interior of the source, while the H$_2$\ filaments lie along the radio boundary or even outside it. We have not found any apparent Br$\gamma$\ filament in our rather shallow imaging observation, although we have detected faint Br$\gamma$\ line emission toward the [Fe II]\ peak position in our spectroscopic observation. In the following, we summarize the results on the [Fe II]\ and H$_2$\ emission features. \subsection{[Fe II] 1.644~$\mu$m emission} \subsubsection{Photometry} In order to see the [Fe II]\ emission features more clearly, we have produced a `star-subtracted' image (Fig. 2). We first performed PSF photometry of the H-cont and [Fe II] 1.644~$\mu$m images, and removed stars in the [Fe II] 1.644~$\mu$m image if they had corresponding ones in the H-cont image. This PSF photometric subtraction left residuals around bright stars, which we masked out. The faint stars, which were not removed by the PSF subtraction because the H-cont image is not as deep as that of [Fe II]\ , were then removed by subtracting the median value of $15 \times 15$ nearby pixels. Fig. 2 is the final star-subtracted image, where we can see the detailed features of the [Fe II]\ emission more clearly. As in Fig. 1 (right), the extended filament within the southeastern SNR shell, hereafter the [Fe II]-SE filament, is most prominent. The filament is composed of two bright, $30''$-long, elongated segments in the middle and two clumpy segments at the ends. The one at the southern end is slightly apart from the other three. The total extent of the filament is $\sim 2.'5$. The filament is not very thin but has a width of $\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 10''$. Fig.
3 shows a detailed structure of the filament, where we have just masked out stars using the K-cont image in order to avoid any possible artifacts associated with the PSF photometric subtraction. We can see that the filament has a very good correlation with the radio shell both in morphology and brightness. The peak [Fe II] 1.644~$\mu$m surface brightness of the filament is $1.9 \pm 0.2\times 10^{-3}$ ergs cm$^{-2}$ s$^{-1}$ sr$^{-1}$, which is larger than any previously reported brightness of [Fe II] 1.644~$\mu$m filaments in other remnants, e.g., 1.1--3$\times 10^{-4}$~ergs cm$^{-2}$ s$^{-1}$ sr$^{-1}$ in IC 443 and the Crab \citep{gra87, gra90} or $1.5\times 10^{-3}$~ergs cm$^{-2}$ s$^{-1}$ sr$^{-1}$ in RCW103 \citep{oli89}. On the opposite side of the SE filament lies another long ($\sim 2.'5$) filament within the northwestern SNR shell (Fig. 4). This filament (the [Fe II]-NW filament) is relatively faint and appears to be clumpy. It has little correlation with the radio emission. We note that the [Fe II]-SE and NW filaments lie roughly symmetrically with respect to the line of position angle $\approx 60^\circ$, which is close to the inclination of the central PWN of G11.2$-$0.3 in X-rays \citep{rob03}. In addition to these two extended filaments, some faint, knotty emission features are also seen in the interior of the remnant, particularly in the southern area (Fig. 5). These features spread over an area of $\sim 2'$ extent and are filamentary, with some of them having a partial ring-like structure. There are also several bright clumps of $\sim 5''$ size. Most of the clumps appear to be connected to the filaments, although some are rather isolated. The brightnesses of these central emission features and the NW filament are $\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 3\times 10^{-4}$~ergs cm$^{-2}$ s$^{-1}$ sr$^{-1}$. The observed total [Fe II] 1.644~$\mu$m flux is estimated to be $1.1\pm 0.2 \times 10^{-11}$~erg cm$^{-2}$ s$^{-1}$, $76\pm 12$\% of which is from the SE filament. \subsubsection{Spectroscopy} We have detected several [Fe II]\ lines toward the peak position of the [Fe II]-SE filament ([Fe II]-pk1). Table 2 summarizes the detected lines and their relative strengths, and Fig. 6 shows the spectra. The [Fe II]\ 1.257~$\mu$m and [Fe II] 1.644~$\mu$m lines originate from the same upper level, so that their unreddened flux ratio is fixed by the relative Einstein $A$ coefficients and is 1.04 according to \cite{qui96}. Toward [Fe II]-pk1, the ratio is 0.31, which implies $A_V=13$ mag ($A_{1.644\mu {\rm m}}=2.43$ mag) or an H-nuclei column density of $2.49\pm 0.07 \times 10^{22}$~cm$^{-2}$ using the extinction cross section of the carbonaceous-silicate model for interstellar dust with $R_V=3.1$ of \cite{dra03}\footnote{Data available at http://www.astro.princeton.edu/~draine/dust/dustmix.html.}. This is a little larger than the column density to the remnant derived from X-ray observations, $(1.7-2.4)\times 10^{22}$~cm$^{-2}$ \citep{rob03}. We note that the numerical values of the Einstein $A$ coefficients for near-IR [Fe II]\ lines in the literature differ by as much as 50\%: using the values of \cite{nus88}, the expected [Fe II]\ 1.257~$\mu$m to [Fe II] 1.644~$\mu$m line-intensity ratio is 1.36, while \cite{smi06} empirically derived 1.49 from their spectroscopy of P Cyg. If the intrinsic ratio is 1.36 or 1.49, we obtain a slightly (20--30\%) higher column density. We adopt the $A$-values of \cite{qui96} in this paper, which yield a column density closer to the X-ray one.
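The extinction estimate is straightforward arithmetic once an extinction curve is fixed. The short sketch below (Python) uses a near-IR power law $A_\lambda\propto\lambda^{-\beta}$ with $\beta\approx1.6$ as a stand-in for the tabulated \cite{dra03} cross sections adopted in the text, so the numbers are only illustrative:
\begin{verbatim}
import numpy as np

r_int, r_obs = 1.04, 0.31          # intrinsic and observed 1.257/1.644 um ratios
dA = 2.5*np.log10(r_int/r_obs)     # A(1.257 um) - A(1.644 um) in magnitudes

beta = 1.6                         # assumed near-IR slope, A_lambda ~ lambda**-beta
ratio = (1.644/1.257)**beta        # A(1.257 um)/A(1.644 um) under the power law
A_1644 = dA/(ratio - 1.0)
print("A(1.257) - A(1.644) = %.2f mag" % dA)      # ~1.31 mag
print("A(1.644 um)         = %.2f mag" % A_1644)  # ~2.4 mag, cf. 2.43 mag above
\end{verbatim}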
According to \cite{har04}, they also yield extinction more consistent with the optical spectroscopic result for a protostellar jet. The ratios of the other three lines, e.g., [Fe II]\ 1.534~$\mu$m, 1.600~$\mu$m, and 1.664~$\mu$m, to [Fe II]\ 1.644~$\mu$m\ are good indicators of electron density \citep[e.g.,][]{oli90}. We solved the rate equation using the atomic parameters assembled by CLOUDY \citep[version C05.05,][]{fer98}, which adopts the Einstein $A$ coefficients of \cite{qui96} and the collision strengths of \cite{pad93} and \cite{zha95}. We have included 16 levels, which is enough at the temperatures of interest ($\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 10^4$~K). We consider only the collisions with electrons, neglecting those with atomic hydrogen, even though the degree of ionization of the emitting region could be low (see \S~4.2). This should be acceptable since the rate coefficients for atomic hydrogen collisions are more than two orders of magnitude smaller than those for electron collisions \citep{hol89}. The ratios of the 1.534~$\mu$m\ and 1.664~$\mu$m\ lines yield consistent results, e.g., $6,000\pm 400$~cm$^{-3}$ and $5,900\pm 400$~cm$^{-3}$, while the 1.600~$\mu$m\ line ratio yields a slightly higher density ($7,800\pm 400$~cm$^{-3}$) at $T=5,000$~K, which is the mean temperature estimated for [Fe II]\ line-emitting regions in other SNRs \citep[][; see also \S~4.2]{gra87, oli89}. The result is not sensitive to temperature, e.g., a factor of 2 variation in temperature causes only a 10--20\% change in density. We adopt the average value $6,600\pm 900$~cm$^{-3}$ at $T=5,000$~K as the characteristic electron density of the [Fe II]\ filaments. We also detected the Br$\gamma$\ line toward [Fe II]-pk1. The dereddened ratio of the [Fe II] 1.644~$\mu$m to Br$\gamma$\ line is $77^{+14}_{-10}$, which is much greater than that ($\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 0.1$) of HII regions but comparable to the ratios observed in other SNRs (see \S~4.2). \subsubsection{Proper Motion during 2003--2005} We have two [Fe II] 1.644~$\mu$m images taken 2.2 years apart, i.e., in 2003 June and 2005 August. The time interval is not long enough to notice the proper motion of the [Fe II]\ filaments in the difference image obtained by subtracting one from the other. We instead inspect one-dimensional intensity profiles of the bright [Fe II]-SE filament to search for its proper motion associated with an expansion. Fig. 7 shows the intensity profiles across the two bright segments of the [Fe II]-SE filament along the cuts (dashed lines) in Fig. 3. The cuts are made to point to the central pulsar, which is very close to the geometrical center of the SNR shell \citep{kas01}. The distance in the abscissa is measured from the upper right end of the cuts, so that it increases outward from the remnant center. Note that the profiles of the filament in 2005 (solid lines) are slightly shifted outward from those in 2003 (dashed lines). We fit the profiles along the cuts A and B with a Gaussian and obtain shifts of $0.''063\pm 0.''032$ and $0.''095\pm 0.''064$ in their central positions, respectively. For comparison, the profiles of nearby stars, e.g., the strong peak at $24''$ in Fig. 7 (left), do not show any appreciable shift. The mean shift in stellar positions from the same one-dimensional Gaussian analysis of seven nearby stars is found to be $-0.''0067 \pm 0.''0029$.
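The centroid comparison just described amounts to a pair of one-dimensional Gaussian fits. A minimal sketch (Python/scipy) of the procedure follows; it is run on synthetic stand-in profiles (the WIRC cuts themselves are not reproduced here), with the $0.''25$ pixel scale and the 2.2 yr baseline taken from the text:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, x0, sig, base):
    return amp*np.exp(-0.5*((x - x0)/sig)**2) + base

rng = np.random.default_rng(1)
x = np.arange(120)*0.25                    # arcsec, 0.25''/pixel
p_true = (8.0, 15.0, 1.8, 1.0)             # synthetic filament profile parameters
prof03 = gauss(x, *p_true) + 0.1*rng.standard_normal(x.size)
prof05 = gauss(x, p_true[0], p_true[1] + 0.076, p_true[2], p_true[3]) \
         + 0.1*rng.standard_normal(x.size)

p03, c03 = curve_fit(gauss, x, prof03, p0=(5.0, 14.0, 2.0, 0.0))
p05, c05 = curve_fit(gauss, x, prof05, p0=(5.0, 14.0, 2.0, 0.0))
shift = p05[1] - p03[1]
err = np.hypot(np.sqrt(c03[1, 1]), np.sqrt(c05[1, 1]))
print("shift = %.3f +/- %.3f arcsec over 2.2 yr" % (shift, err))
print("mu    = %.4f arcsec/yr" % (shift/2.2))
# at d = 5 kpc, v_t [km/s] ~ 4.74 * mu[''/yr] * d[pc]; 0.035''/yr gives ~830 km/s
\end{verbatim}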
Therefore, the mean proper motion of the SE filament with respect to the nearby stars during the 2.2 years amounts to $0.''076 \pm 0.''029$, which corresponds to a rate of $0.''035\pm 0.''013$ yr$^{-1}$. \subsection{H$_2$ 2.122~$\mu$m emission: Photometry and Spectroscopy} Fig. 8 is a star-subtracted and median-filtered H$_2$ 2.122~$\mu$m image. The image has been made in the same way as Fig. 2. Two small ($\sim 30''$) filaments, one at the southern SNR radio boundary and another fainter one outside the NE boundary, are now clearly seen. The one in the southeast (the H$_2$-SE filament) is bright and elongated along the radio boundary. Its peak surface brightness is $3.0\pm 0.3 \times 10^{-4}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$ and its flux is $4.3 \pm 0.4 \times 10^{-13}$~erg cm$^{-2}$ s$^{-1}$. The NE filament (the H$_2$-NE filament) is just outside the SNR boundary and is located where the radio continuum boundary is distorted. Its surface brightness is $\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 40$\% of the SE filament peak brightness, while its flux is $\sim 50$\% of that of the SE filament. There is no [Fe II] 1.644~$\mu$m emission associated with either H$_2$\ filament. A long ($\sim 2'$) filamentary feature seems to be present well outside the southeastern SNR boundary, but it is too faint to be confirmed. Fig. 9 shows a detailed structure of the H$_2$-SE filament. It is composed of two bright segments surrounded by a diffuse envelope. It is just outside the bright [Fe II]-SE filament, but there is no apparent correlation between the two (cf. Fig. 3). We have detected two H$_2$\ lines, (1,0) S(1) and (2,1) S(3), toward the peak position of the filament, H$_2$-pk1 (Fig. 10). Their dereddened ratio, using the column density derived from the [Fe II]\ line ratios ($A_{2.12 \mu {\rm m}}=1.59$ mag), is $0.14\pm 0.01$ (Table 2), which gives $T_{\rm ex}\approx 2,100$ K using the transition probabilities of \cite{wol98}. \section{Discussion} G11.2$-$0.3 has been proposed to be a young remnant of an SN IIL/b interacting with a dense RSG wind, based on its PWN and the small size of the SNR shell \citep{che05}. The thick, bright shell is thought to be shocked SN ejecta in contact with shocked wind material. The outer edge of the shell is not sharp, and it was suggested that the ambient shock propagating into wind material could be at a larger distance \citep{gre88,che05}. In the following, we first discuss the physical properties of the H$_2$\ filaments that we have discovered in this paper, and show that our results support the SN IIL/b scenario. Then we discuss the physical properties of the [Fe II]\ filaments, which are thought to be composed of both shocked wind material and shocked SN ejecta. \subsection{H$_2$\ Filaments and Presupernova Circumstellar Wind} \subsubsection{Excitation of H$_2$\ filaments} The H$_2$-SE filament is located at the rim of the bright SNR shell and elongated along the rim, which suggests that it is excited by the SNR shock. The derived $v=2$--1 excitation temperature ($\approx 2,100$~K) is also typical for shocked molecular gas \citep{bur89}. The dereddened peak H$_2$ 2.122~$\mu$m surface brightness is $1.3\pm 0.1 \times 10^{-3}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$, and the dereddened total flux of the SE filament is $1.9\pm 0.2 \times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$.
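The excitation temperature used here follows from the two-level LTE ratio $I_{2\mbox{-}1\,S(3)}/I_{1\mbox{-}0\,S(1)}=\frac{g_2A_2/\lambda_2}{g_1A_1/\lambda_1}\exp[-(E_2-E_1)/T_{\rm ex}]$, with the upper-level energies in K. A minimal sketch (Python/scipy) is given below; the level energies and $A$-values are commonly quoted values serving as stand-ins for the \cite{wol98} data used in the text, so the root comes out slightly above the quoted $\approx 2,100$~K:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# upper levels: (1-0) S(1): v=1, J=3 ; (2-1) S(3): v=2, J=5
lam1, E1, g1, A1 = 2.1218, 6956.0, 21, 3.47e-7    # um, K, weight, s^-1 (assumed)
lam2, E2, g2, A2 = 2.0735, 13890.0, 33, 5.77e-7   # um, K, weight, s^-1 (assumed)

def line_ratio(T):
    """LTE energy-flux ratio I[(2-1)S(3)]/I[(1-0)S(1)] at temperature T."""
    return (g2*A2/lam2)/(g1*A1/lam1)*np.exp(-(E2 - E1)/T)

T_ex = brentq(lambda T: line_ratio(T) - 0.14, 500.0, 1.0e4)
print("T_ex ~ %.0f K" % T_ex)   # ~2,300 K with these inputs; cf. ~2,100 K above
\end{verbatim}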
\section{Discussion}

G11.2$-$0.3 has been proposed to be a young remnant of an SN IIL/b interacting with a dense RSG wind, based on its PWN and the small size of the SNR shell \citep{che05}. The thick, bright shell is thought to be shocked SN ejecta in contact with shocked wind material. The outer edge of the shell is not sharp, and it was suggested that the ambient shock propagating into the wind material could be at a larger distance \citep{gre88,che05}. In the following, we first discuss the physical properties of the H$_2$\ filaments that we have discovered in this paper, and show that our results support the SN IIL/b scenario. Then we discuss the physical properties of the [Fe II]\ filaments, which are thought to be composed of both shocked wind material and shocked SN ejecta.

\subsection{H$_2$\ Filaments and Presupernova Circumstellar Wind}

\subsubsection{Excitation of H$_2$\ filaments}

The H$_2$-SE filament is located at the rim of the bright SNR shell and is elongated along the rim, which suggests that it is excited by the SNR shock. The derived $v=2$--1 excitation temperature ($\approx 2,100$~K) is also typical of shocked molecular gas \citep{bur89}. The dereddened peak H$_2$ 2.122~$\mu$m\ surface brightness is $(1.3\pm 0.1) \times 10^{-3}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$, and the dereddened total flux of the SE filament is $(1.9\pm 0.2) \times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$. Interstellar ultraviolet (UV) photons could in principle excite and heat the H$_2$\ gas to produce a similar excitation temperature if the gas is dense enough for collisions to dominate the deexcitation \citep{ste89, bur90}. However, the expected H$_2$ 2.122~$\mu$m\ surface brightness from UV photon excitation is low unless the density is high and the radiation field is very strong, e.g., $n_{\rm H}\ge 10^5$~cm$^{-3}$ and $G_0\ge 10^4$ for $\ge 1\times 10^{-4}$~erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$, where $n_{\rm H}$ is the number density of H nuclei and $G_0$ is the far-UV (FUV) intensity relative to the interstellar radiation field in the solar neighborhood \citep{bur90}. Note that $G_0=10^4$ corresponds to an O4-type star at a distance of $\sim 1$ pc \citep{tie05}. No such strong FUV source exists around the filament. X-ray emission from the remnant is another source that could possibly excite and heat the H$_2$\ filament. We may consider a molecular clump situated at some distance from an SN explosion. As the SN explodes and the SNR evolves, the X-ray flux increases and, in principle, an ionization-dissociation front may develop and propagate into the clump. If the density is sufficiently high, the H$_2$\ lines from the heated molecular gas could show `thermal' line ratios \citep{gre95}. The H$_2$\ line intensities from such a clump depend on details, and no model calculations directly applicable to our case are available \citep[cf.][]{dra90, dra91, mal96}. In the following, we instead simply consider the energy budget. If the H$_2$ 2.122~$\mu$m\ line is emitted by reprocessing of the X-ray photons from the SNR falling onto the molecular clump, its luminosity may be written as $L_{2.122} \sim \epsilon L_X (\Omega_{\rm cl}/4\pi)$, where $\epsilon$ is the efficiency of converting the incident X-ray energy flux into H$_2$ 2.122~$\mu$m\ line emission, $L_X$ is the X-ray luminosity of the remnant, and $\Omega_{\rm cl}$ is the solid angle of the clump seen from the SNR center. The above formula is accurate if the clump is small and the X-ray source is spherically symmetric. Although G11.2$-$0.3 is not a spherically symmetric source in X-rays, we may use the formula to make a rough estimate of the expected H$_2$ 2.122~$\mu$m\ line luminosity. The conversion efficiency for SNRs embedded in molecular clouds was calculated to be $\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 1 \times 10^{-3}$ \citep{lep83, dra90, dra91}. The efficiency is a function of the X-ray energy absorbed per H nucleon, and the above inequality might be valid for small X-ray-irradiated clumps too. Now, if we assume that the H$_2$\ clump has a line-of-sight extent similar to its extent on the sky ($\sim 0.'5$), then $\Omega_{\rm cl}/4\pi \sim 4 \times 10^{-3}$. Since the X-ray luminosity of G11.2$-$0.3 is $L_X\sim 10^{36}$~erg s$^{-1}$ in the 0.6--10 keV band \citep{vas96}, we have $L_{2.122}\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 4 \times 10^{30}$~erg s$^{-1}$. This is much less than the observed H$_2$ 2.122~$\mu$m\ luminosity of the SE filament, which is $\sim 6 \times 10^{33}$~erg s$^{-1}$. Therefore, X-ray excitation/heating does not appear to be important for the H$_2$-SE filament. The above considerations lead us to conclude that the H$_2$-SE filament is excited by the SNR shock associated with G11.2$-$0.3.
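The energy budget can be summarized in a few lines; the distance of $\sim 5$ kpc adopted below to convert the observed flux into a luminosity is an assumption made only for this sketch:

\begin{verbatim}
import numpy as np

pc = 3.086e18                    # cm
d  = 5.0e3 * pc                  # assumed distance (~5 kpc; illustrative)

# Upper limit on the reprocessed X-ray power, L ~ eps * L_X * (Omega/4pi)
eps, L_X, f_cl = 1.0e-3, 1.0e36, 4.0e-3
print("max reprocessed L ~ %.1e erg/s" % (eps * L_X * f_cl))   # ~4e30

# Observed dereddened H2 2.122 um luminosity of the SE filament
F_obs = 1.9e-12                  # erg cm^-2 s^-1
print("observed L        ~ %.1e erg/s" % (4.0 * np.pi * d**2 * F_obs))
# ~6e33 erg/s, i.e., three orders of magnitude above the X-ray budget
\end{verbatim}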
The absence of associated [Fe II] 1.644~$\mu$m\ or Br$\gamma$\ emission suggests that the H$_2$\ emission from the H$_2$-SE filament might be from warm molecules swept up by a slow, non-dissociative $C$ shock, not from molecules reformed behind a fast, dissociative $J$ shock. The critical velocity for a shock to be a non-dissociative $C$ shock is $\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 50$~km s$^{-1}$\ \citep{dra83, mck84}. The dereddened mean surface brightness of the H$_2$-SE filament is $\sim 8\times 10^{-4}$~erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$. This is comparable to the brightness (normal to the shock front) of a $\sim 30$~km s$^{-1}$\ shock propagating into molecular gas of $n_{\rm H}=10^4$~cm$^{-3}$ according to the $C$-shock model of \cite{dra83}. We were unable to find model calculations for lower densities. But, since the intensity should be proportional to the preshock density, provided that the density in the emitting gas is less than the critical density \citep[$\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 10^5$~cm$^{-3}$;][]{bur89}, the results of \cite{dra83} indicate that a 40--50~km s$^{-1}$\ shock propagating into molecular gas of $n_{\rm H}=10^3$~cm$^{-3}$ might have a similar surface brightness. A slower shock with a lower preshock density would be possible if the shock propagating into the H$_2$\ filament is tangential along the line of sight, so that the brightness normal to the shock front is lower. The situation is less clear for the H$_2$-NE filament, for which we lack spectroscopic information. Its flux density, however, is comparable to that of the SE filament, and we may rule out excitation by X-rays from G11.2$-$0.3. We checked the 2MASS colors of nearby ($\le 2'$) stars, but found no OB stars that could be responsible for UV excitation. This again leaves shock excitation as the origin of the H$_2$\ emission. A difficulty with the shock excitation is that the filament is located outside the radio SNR boundary. But, as has been pointed out in previous studies \citep[e.g.,][]{gre88}, the radio continuum boundary is not sharp, and the ambient shock is thought to have propagated beyond the apparent radio boundary. It therefore seems reasonable to consider that the H$_2$-NE filament is excited by the SNR shock too, although we need spectroscopic observations to understand its nature.

\subsubsection{Circumstellar Origin of H$_2$\ filaments}

The H$_2$\ filaments are more likely of circumstellar origin than interstellar. If interstellar, they must be dense clumps originally in an ambient or parental molecular cloud. We do not expect to observe molecular material around small, young core-collapse SNRs in general, because massive stars clear out the surrounding medium with their strong UV radiation and strong stellar winds during their lifetimes. Some molecular material may survive if the progenitor star is an early B-type (B1--B3) star, which has neither strong UV radiation nor strong stellar winds \citep{mck84b, che99}. A difficulty with this scenario, however, is that the swept-up mass at the current radius (3 pc) would then be much greater than the ejecta mass, so that the remnant should already be in the Sedov stage, in which case it would appear as a thin, limb-brightened shell. The thick-shell morphology of G11.2$-$0.3, however, indicates that it is not yet in the Sedov stage. We therefore consider that the H$_2$\ filaments are of circumstellar origin, which fits well with the SN IIL/b scenario.
It is plausible that the progenitor of G11.2$-$0.3 had a strong wind that contained dense clumps. Numerous such clumps have been observed in Cas A, e.g., the ``quasi-stationary flocculi'' (QSFs), which are slowly moving, dense optical clumps immersed within a smoother wind \citep{van71, van85}. In the 320-yr-old Cas A, the shock propagating into the clumps is fast \citep[100--200~km s$^{-1}$;][]{che03}, while in the 1620-yr-old G11.2$-$0.3 it is slow (30--50~km s$^{-1}$). Their velocity ratio is comparable to the ratio ($\sim 1/5$) of the SNR expansion velocities, which suggests that the winds in G11.2$-$0.3 and Cas A have similar properties. We may estimate the density contrast between the clumps and the smoother wind from the ratio of the shock speed into the clumps ($v_c=30$--$50$~km s$^{-1}$) to the SNR forward shock speed $v_{\rm exp}$. If we adopt the result of the radio (20 cm) expansion study by \cite{tam03}, $v_{\rm exp}=1350\pm280$~km s$^{-1}$, so that the density contrast would be $(v_{\rm exp}/v_c)^2=700$--$3,000$. For comparison, \cite{che03} estimated a density contrast of $3,000$ for Cas A.
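The density-contrast estimate amounts to the following arithmetic (a minimal sketch; the quoted range of 700--3,000 corresponds to combining the $v_c$ range with the uncertainty in $v_{\rm exp}$):

\begin{verbatim}
v_exp, dv = 1350.0, 280.0        # km/s, radio expansion speed
for v_c in (30.0, 50.0):         # km/s, shock speed into the clumps
    mid = (v_exp / v_c) ** 2
    lo  = ((v_exp - dv) / v_c) ** 2
    hi  = ((v_exp + dv) / v_c) ** 2
    print("v_c = %2.0f km/s: contrast ~ %4.0f (range %4.0f - %4.0f)"
          % (v_c, mid, lo, hi))
\end{verbatim}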
\subsection{[Fe II]\ Filaments and SN Ejecta}

\subsubsection{Shock Parameters of the [Fe II]-SE filament}

The [Fe II]\ filaments are located within the bright SNR shell, in contrast to the H$_2$\ filaments. The [Fe II]-SE filament shows a remarkable correlation with the radio shell in both morphology and brightness. The knotty emission features inside the remnant might be within the shell too, but projected on the sky. The location of the filaments suggests that the [Fe II]\ emission is almost certainly from shocked gas. The shock must be radiative, and the [Fe II]\ emission should originate from the cooling layer behind the shock. The [Fe II]-SE filament is very bright, with a dereddened [Fe II] 1.644~$\mu$m\ peak surface brightness of $(1.80 \pm 0.18)\times 10^{-2}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$. It is in fact the brightest among the known [Fe II] 1.644~$\mu$m\ filaments associated with SNRs. The total dereddened [Fe II] 1.644~$\mu$m\ flux is $(1.0\pm 0.1) \times 10^{-10}$~erg cm$^{-2}$ s$^{-1}$. The ratio of the [Fe II] 1.644~$\mu$m\ to Br$\gamma$\ line ($\sim 80$) toward the peak position of the [Fe II]-SE filament is larger than or comparable to the ratios observed in other SNRs, e.g., 27 to $\ge 71$ in IC 443 \citep{gra87} or 34 in RCW 103 \citep{oli89}. It was pointed out in previous studies that the high ratio can result from SNR shocks {\em interacting with the ISM} through the combined effects of `shock excitation' and an enhanced gas-phase iron abundance. First, since the ionization potential of the iron atom is only 7.9 eV, FUV photons from the hot shocked gas can penetrate far downstream to maintain the ionization state of Fe$^+$ where H atoms are primarily neutral \citep{mck84, hol89b, oli89}. Therefore, [Fe II]\ lines are emitted mainly in gas with a low degree of ionization at $T=10^3$--$10^4$~K. This partly explains the observed high ratio of the [Fe II] 1.644~$\mu$m\ to Br$\gamma$\ lines, but not entirely. Shock model calculations showed that the ratio is $\sim 1$ if the gas-phase iron abundance is depleted as in the normal ISM. According to \cite{hol89b}, the ratio is $\sim 1.5$ for shocks with velocities of 80--150~km s$^{-1}$\ propagating into molecular gas of $n_{\rm H}=10^3$~cm$^{-3}$ with an iron depletion $\delta_{\rm Fe}\equiv{\rm [Fe/H]/[Fe/H]_\odot}=0.03$, where [Fe/H]$_\odot$=$3.5\times 10^{-5}$. \cite{mck84} presented the results of atomic shock calculations including grain destruction: for a 100~km s$^{-1}$\ shock propagating into atomic gas of $n_{\rm H}=10$ and 100~cm$^{-3}$, [Fe II] 1.2567 $\mu$m/H$\beta$=2.7 and 3.7, respectively, with $\delta_{\rm Fe}=0.53$--$0.58$ far downstream. If we use 0.033 as the ratio of the Br$\gamma$\ to H$\beta$\ line intensities, which corresponds to a Case B nebula at 5,000 K \citep{ost89}, these ratios correspond to [Fe II] 1.644~$\mu$m/Br$\gamma$\ =80 and 110, comparable to the observed ratio. Therefore, a gas-phase iron abundance close to solar is required to explain the observed [Fe II] 1.644~$\mu$m\ to Br$\gamma$\ ratio toward [Fe II]-pk1. The preshock density may be estimated from the [Fe II] 1.644~$\mu$m\ brightness. The [Fe II] 1.644~$\mu$m\ surface brightness toward the [Fe II]-SE filament varies over $\sim (1$--$10)\times 10^{-3}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$. Its morphology in Fig. 4 suggests that the shock front might be tangential along the line of sight, enhancing the surface brightness of the filament. The normal surface brightness of a 100~km s$^{-1}$\ shock propagating into atomic gas of $n_{\rm H}=100$~cm$^{-3}$ is $2.5\times 10^{-4}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$ \citep{mck84}. It is $(0.3$--$2)\times 10^{-3}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$ for 80--150~km s$^{-1}$\ shocks propagating into molecular gas of $n_{\rm H}=10^3$~cm$^{-3}$ if the gas-phase abundance of iron were solar \citep{hol89b}. Therefore, the preshock density needs to be $\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 1,000$ cm$^{-3}$. This appears to be roughly consistent with the electron density derived from the [Fe II]\ line ratios. As we pointed out above, the ionization fraction of the [Fe II]-emitting region is expected to be low. \cite{oli89} estimated a mean ionization fraction of 0.11, in which case $n_{\rm H}\approx n_e/0.11\approx 6\times 10^4$~cm$^{-3}$. For a 100~km s$^{-1}$\ shock, the final compression factor would be $\sim 80$ \citep{hol89}, so that the above postshock density implies a preshock density of $\sim 800$~cm$^{-3}$. This is close to the density required to explain the surface brightness, considering the uncertainties in the various parameters. Therefore, a 100~km s$^{-1}$\ shock propagating into gas of $n_{\rm H}\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 1,000$ cm$^{-3}$ and destroying dust grains seems to explain the observed parameters of the [Fe II]-SE filament.
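The chain of estimates leading to the preshock density can be summarized as follows (a sketch; the ionization fraction and compression factor are the literature values quoted above):

\begin{verbatim}
n_e = 6600.0          # cm^-3, electron density from the [Fe II] ratios
x_e = 0.11            # assumed mean ionization fraction (Oliva et al.)
compression = 80.0    # final compression of a ~100 km/s radiative shock

n_post = n_e / x_e            # ~6e4 cm^-3, postshock H density
n_pre  = n_post / compression # ~800 cm^-3, implied preshock density
print("n_post ~ %.1e cm^-3, n_pre ~ %.0f cm^-3" % (n_post, n_pre))
\end{verbatim}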
\subsubsection{Origin of [Fe II]\ filaments}

The [Fe II]\ filaments could be either shocked circumstellar medium (CSM) or shocked ejecta, or both, in the context of the Type IIL/b scenario. In the SE filament, the {\sc Hi}\ Br$\gamma$\ line is detected at the peak position, and its ratio to the [Fe II] 1.644~$\mu$m\ line is consistent with a $100$~km s$^{-1}$\ {\em interstellar} shock (\S~4.2.1), which implies that the emission is not from metal-rich ejecta but from shocked CSM. For example, when the H$_2$\ clumps of the previous section are swept up by the shocked dense ejecta, a stronger shock will propagate into the clumps, dissociating and ionizing the gas and producing [Fe II]\ emission. Radio observations also suggest that the remnant is more heavily affected by the ambient medium in this direction: \cite{kot01} showed that the magnetic field structure of the bright radio shell is radial in general, except in the bright SE shell, where the degree of polarization is significantly low compared to the other parts of the shell. The non-radial magnetic field and the low degree of polarization suggest that the synchrotron emission there is dominated by shocked ambient gas, not by shocked ejecta. On the other hand, the SE filament is located in the middle of the radio shell and has a large radial proper motion. If the proper motion is due to the expansion of the SNR shell, which is very likely, it implies an expansion velocity of $\ge 830\pm 310$~km s$^{-1}$\ (see below). This suggests that the filament is associated with ejecta. It is possible that some [Fe II]\ emission originates from dense, Fe-rich ejecta recently swept up and excited by a reverse shock. We suppose that the [Fe II]-SE filament consists of both shocked CSM and shocked ejecta, although it is not obvious how the two interact to develop the observed properties. The derived proper motion of the [Fe II]-SE filament ($0.''035\pm 0.''013$ yr$^{-1}$) may be compared to the expansion rate of the radio shell. \cite{tam03} obtained a mean expansion rate of $0.''057\pm 0.''012$ yr$^{-1}$ at 1.465 GHz and $0.''040\pm 0.''013$ yr$^{-1}$ at 4.860 GHz by comparing radio images separated by 17 years. Our proper motion is comparable to the 4.860-GHz expansion rate but is smaller than the 1.465-GHz expansion rate, which was considered more reliable by those authors. It is possible that the [Fe II]-SE filament is not moving perpendicular to the line of sight, so that its true space motion is greater. But, considering that the filament is located near the boundary of the remnant, the projection effect is probably not large. Instead, the difference may arise because the proper motion that we have derived in this paper represents the velocity of the brightest portion of the filament, while the radio expansion rate might be closer to the pattern speed, e.g., the SNR shock speed. Since the velocities of the shocked ambient gas and shocked ejecta in the shell might be less than the SNR shock velocity, it is plausible that our `expansion rate' is less than the radio one. We will explore the dynamical properties of G11.2$-$0.3 in a forthcoming paper.
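The conversion of the measured proper motion into a transverse velocity is a one-line computation; the distance of 5 kpc assumed below is adopted only for this sketch and reproduces the numbers quoted above:

\begin{verbatim}
mu, dmu = 0.035, 0.013    # arcsec/yr, proper motion of the SE filament
d_pc    = 5.0e3           # assumed distance in pc (~5 kpc; illustrative)

# v[km/s] = 4.74 * mu[arcsec/yr] * d[pc]
v, dv = 4.74 * mu * d_pc, 4.74 * dmu * d_pc
print("v_t ~ %.0f +/- %.0f km/s" % (v, dv))   # ~830 +/- 310 km/s
\end{verbatim}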
The [Fe II]-NW filament and the knotty emission features are considered to be mostly, if not entirely, dense SN ejecta. The radial magnetic field supports this interpretation \citep{kot01}. Their filamentary and ring-like structure may be a consequence of bubbly Fe ejecta \citep[e.g.,][]{blo01}. It is worth noting that the [Fe II]\ emission is distributed mainly along the NW-SE direction (Fig. 2), the direction perpendicular to the PWN axis. The long and symmetric morphology of the [Fe II]-SE and -NW filaments resembles the main optical shell of Cas A. Cas A, in the optical forbidden lines of O and S ions, shows a complex northern shell composed of several bright, clumpy filamentary structures at varying distances from the center, and a relatively simple-structured southern shell \citep[e.g.,][]{fes01}. These northern and southern portions of the main optical shell lie on opposite sides of the jet axis, along the NE-SW direction. The optical shell is generally believed to consist of dense clumps in the ejecta recently swept up by the reverse shock, although it contains QSFs too. The similarity to Cas A suggests that the explosion in G11.2$-$0.3 was asymmetric, as in Cas A. The total [Fe II] 1.644~$\mu$m\ luminosity is $\sim 75 L_\odot$. This is two orders of magnitude greater than that of Kepler or the Crab, but comparable to that of RCW 103 or IC 443 \citep{oli89, kel95}. In collisional equilibrium at $T=5,000$~K with $n_e\approx 6,600$~cm$^{-3}$, this converts to an Fe mass of $\sim 5.3\times 10^{-4}$~{M$_\odot$}. Both the shocked ejecta and the shocked CSM contribute to this mass. The $^{56}$Fe mass that would have formed from the radioactive decay of $^{56}$Ni in a 15--25 {M$_\odot$}\ SN explosion is 0.05--0.13~{M$_\odot$}\ \citep{woo95, thi96}. Therefore, the Fe ejecta detected in [Fe II] 1.644~$\mu$m\ emission are less than one percent of the total Fe ejecta. On the other hand, the observed Fe mass corresponds to an H (+He) mass of 0.27 $M_\odot$ for the solar abundance, which implies that the mass of the shocked CSM comprising the Fe filaments should be a tiny fraction of the swept-up CSM too.

\section{Conclusion}

G11.2$-$0.3 has been regarded as an evolved version of Cas A, both being remnants of SNe IIL/b with significant mass loss before explosion. Our H$_2$\ results confirm that G11.2$-$0.3 is indeed interacting with a clumpy circumstellar wind, as in Cas A. Clumps with a density contrast of $\sim 3,000$ may be common in the presupernova circumstellar winds of SNe IIL/b. As far as we are aware, G11.2$-$0.3 is the first source in which presupernova wind clumps are observed in H$_2$\ emission. The H$_2$\ filament in the northeast is of particular interest because it could provide strong evidence for an ambient shock beyond the bright radio shell. Future spectroscopic studies will reveal the nature of this filament. The [Fe II]\ filaments in G11.2$-$0.3 are probably composed of both shocked CSM and shocked ejecta. The one in the southeast is the brightest among the known [Fe II] 1.644~$\mu$m\ filaments associated with SNRs and is thought to be where the ejecta is heavily interacting with dense CSM. We note that RCW 103, which is another young remnant of an SN IIL/b \citep{che05}, also has a very bright [Fe II]\ filament. That source is similar to G11.2$-$0.3 in the sense that H$_2$\ emission is detected beyond the apparent SNR boundary, although the H$_2$\ emission in RCW 103 extends along the entire bright SNR shell \citep{oli90}. It is possible that the [Fe II]\ filaments in the two remnants are of the same origin. The other, fainter [Fe II]-emitting features of G11.2$-$0.3 are thought to be mostly SN ejecta. The distribution of the [Fe II]\ filaments suggests that the explosion that produced G11.2$-$0.3 was asymmetric, as in Cas A. In Cas A, however, Fe ejecta have been observed mainly in X-rays, although faint [Fe II] 1.644~$\mu$m\ lines have been detected toward several fast-moving ejecta knots in spectroscopic observations by \cite{ger01}. Future detailed spectroscopic studies will help us to understand the nature of the [Fe II]\ filaments and knots in G11.2$-$0.3, as well as the SN explosion itself.

\acknowledgements We thank Dave Green for providing his VLA images of G11.2$-$0.3. We also wish to thank Chris McKee and Roger Chevalier for their helpful comments. D-SM acknowledges a Millikan fellowship from the California Institute of Technology. This work was supported by the Korea Science and Engineering Foundation (ABRL 3345-20031017).
\section{Introduction} Let $\mathcal O$ be a bounded domain in $\mathbb R^2$. The electrical impedance tomography problem (e.g., \cite{borcea}) concerns determining the impedance in the interior of $\mathcal O$, given simultaneous measurements of direct or alternating electric currents and voltages at the boundary $\partial \mathcal O$. If the magnetic permeability can be neglected, then the problem can be reduced to the inverse conductivity problem (ICP), i.e., to the problem of reconstructing the function $\gamma(z),~z=(x,y) \in \mathcal O$, from a set of data $(u|_{\partial \mathcal O},\gamma\frac{\partial u}{\partial\nu}|_{\partial \mathcal O})$, dense in an adequate topology, where \begin{equation}\label{set27A} \mbox{div}(\gamma \nabla u(z)) =0, ~ z\in \mathcal O. \end{equation} Here $\nu$ is the unit outward normal to $\partial \mathcal O$ and $\gamma(z) = \sigma(z)+ i \omega\epsilon(z)$, where $\sigma$ is the electric conductivity and $\epsilon$ is the electric permittivity. If the frequency $\omega$ is negligibly small, then one can assume that $\gamma$ is a real-valued function; otherwise it is taken to be complex-valued. An extensive list of references on the tomography problem can be found in the review \cite{borcea}. Here we will mention only the papers that seem particularly related to the present work. For real $\gamma$, the inverse conductivity problem has been reduced to an inverse problem for the Schr\"{o}dinger equation. The latter was solved by Nachman in \cite{nachman} in the class of twice differentiable conductivities. Later, Brown and Uhlmann \cite{bu} reduced the ICP to an inverse problem for the Dirac equation, which was solved in \cite{bc1}, \cite{sung1}. This approach requires the existence of only one derivative of $\gamma$. The authors of \cite{bu} proved uniqueness for the ICP. Later, Knudsen and Tamasan \cite{knud} extended this approach and obtained a method to reconstruct the conductivity. Finally, the ICP was solved by Astala and Paivarinta in \cite{ap} for real conductivities when both $\gamma-1$ and $1/\gamma-1$ are in $L^\infty_{\rm comp}(\mathbb R^2)$. If a complex conductivity has at least two derivatives, then one can reduce equation (\ref{set27A}) to the Schr\"{o}dinger equation and apply the method of Bukhgeim \cite{bukh} (or one of the works extending this method, such as \cite{BIY15}, \cite{lnv} or \cite{T}). This approach does not work in the case of complex-valued conductivities that are only once differentiable. On the other hand, the work of Francini \cite{fr}, where the ideas of \cite{bu} were extended to deal with complex conductivities with small imaginary part, is not applicable to general complex conductivities due to the possible existence of so-called {\it exceptional points}. In \cite{lbcond}, Lakstanov and Vainberg extended the ideas of \cite{lnv} to apply the $\overline \partial$-method in the presence of exceptional points and reconstructed generic conductivities under the assumption that $\gamma-1 \in W^{1,p}_{\rm{comp}}(\mathbb R^2),~p>4$, and $\mathcal F(\nabla \gamma )\in L^{2-\varepsilon}(\mathbb R^2)$ (here $\mathcal F$ is the Fourier transform). In this paper, we will prove that complex-valued Lipschitz conductivities are uniquely determined by the boundary data.
Since we use the standard reduction of (\ref{set27A}) to the Dirac equation followed by the solution of the inverse problem for the Dirac equation, the condition on $\gamma$ can be restated in the form $Q \in L^{\infty}_{\rm comp}(\mathbb R^2)$, where $Q$ is the potential in the Dirac equation. Our present result is based on a development of the Bukhgeim approach, combined with some of the arguments of Brown and Uhlmann from \cite{bu}. The statement of our main theorem is the following. \begin{theorem}\label{uniqueness} Let $\mathcal O$ be a bounded Lipschitz domain in the plane and let $\gamma_1, \gamma_2$ be complex-valued Lipschitz conductivities. Then $$ \Lambda_{\gamma_1} = \Lambda_{\gamma_2} \, \Rightarrow \, \gamma_1 = \gamma_2, $$ where $\Lambda_{\gamma_j}$ is the Dirichlet-to-Neumann map for the conductivity $\gamma_j$. \end{theorem} The Dirichlet-to-Neumann (DtN) map $\Lambda_\gamma : H^{1/2}(\partial \mathcal{O}) \to H^{-1/2}(\partial \mathcal{O})$ is defined by $$\Lambda_\gamma [u\vert_{\partial \mathcal{O}}] = \gamma \frac{\partial u}{\partial \nu}\vert_{\partial \mathcal{O}},$$ where $u$ is a solution to \eqref{set27A} and $\frac{\partial u}{\partial \nu}$ is the normal derivative of $u$ at the boundary of $\mathcal{O}$. The function $\gamma \frac{\partial u}{\partial \nu}\in H^{-1/2}(\partial \mathcal{O})$ is defined as the element of the space dual to $H^{1/2}(\partial \mathcal{O})$ such that $$\langle \gamma \frac{\partial u}{\partial \nu}, v \rangle = \int_{\mathcal{ O}} \gamma \nabla u \cdot \nabla v \, dxdy$$ for each $v \in H^{1}(\mathcal{O})$. In section \ref{outline}, we will describe our approach, stating the most relevant results. All the proofs will be given in section \ref{proofs}. \section{Main steps}\label{outline} \subsection{Reduction to the Dirac equation} From now on, we will consider $z$ as a point of the complex plane, $z=x+iy\in\mathbb C$, and $\mathcal O$ will be considered as a domain in $\mathbb C$. The following observation made in \cite{bu} plays an important role. Let $u$ be a solution of (\ref{set27A}) and let $\partial = \frac 1 2 \left (\frac{\partial}{\partial x} - i\frac{\partial}{\partial y} \right )$. Then the pair $\phi=\gamma^{1/2}(\partial u, \overline{\partial} u)^t$ satisfies the Dirac equation \begin{equation}\label{firbc} \left ( \begin{array}{cc} \overline{\partial} & 0 \\ 0 & \partial \end{array} \right ) \phi = q {\phi}, \quad z\in \mathcal O, \end{equation} where \begin{eqnarray}\label{char1bc} q(z)=\left ( \begin{array}{cc}0 &q_{12}(z) \\ q_{21}(z) & 0\end{array} \right ), \quad q_{12}=-\frac{1}{2}\partial \log \gamma, \quad q_{21}=-\frac{1}{2}\overline{\partial }\log \gamma. \end{eqnarray} Thus the inverse Dirac scattering problem is closely related to the ICP. If $q$ is found and the conductivity $\gamma$ is known at one point $z_0\in \overline{\mathcal O}$, then $\gamma$ in $\mathcal O$ can be immediately found from (\ref{char1bc}); a direct verification of this reduction is sketched below.
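For the reader's convenience, here is a sketch of the computation behind this reduction; it is a direct calculation requiring only one derivative of $\gamma$. In complex notation, equation (\ref{set27A}) takes the form
$$
\overline{\partial}(\gamma \partial u)+\partial(\gamma \overline{\partial} u)=\frac{1}{2}\, \mbox{div}(\gamma \nabla u) =0,
$$
which gives $\partial \overline{\partial} u = -\frac{1}{2}\left( \partial \log \gamma \; \overline{\partial} u + \overline{\partial} \log \gamma \; \partial u \right)$. Hence, for $\phi_1=\gamma^{1/2}\partial u$ and $\phi_2=\gamma^{1/2}\overline{\partial} u$,
$$
\overline{\partial} \phi_1 = \frac{1}{2}\gamma^{1/2}\, \overline{\partial} \log \gamma \; \partial u + \gamma^{1/2}\, \overline{\partial}\partial u = -\frac{1}{2} \partial \log \gamma \; \gamma^{1/2}\overline{\partial} u = q_{12}\, \phi_2,
$$
and the relation $\partial \phi_2 = q_{21}\phi_1$ is verified in the same way.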
From now on, we will use a different form of equation (\ref{firbc}): instead of the Beals-Coifman notation $\phi=(\phi_1,\phi_2)^t$, we will rewrite the equation in Sung's notation: $\psi_1=\phi_1,~\psi_2=\overline{\phi_2}$. We will consider the equation in the whole plane, extending the potential $q$ outside $\mathcal O$ by zero. Then the {\it vector} $\psi=(\psi_1,\psi_2)^t$ is a solution of the following system \begin{equation}\label{fir} \overline{\partial }\psi = Q \overline{\psi}, \quad z\in \mathbb C, \end{equation} where \begin{eqnarray}\label{char1} Q(z)=\left ( \begin{array}{cc}0 &Q_{12}(z) \\ Q_{21}(z) & 0\end{array} \right ), \quad Q_{12} = q_{12}, \quad Q_{21} = \overline{q_{21}}. \end{eqnarray} \subsection{Solving the Dirac equation for large $|\lambda|$} Let $\psi$ be a {\it matrix} solution of (\ref{fir}) that depends on a parameter $\lambda \in \mathbb C$ and has the following behavior at infinity \begin{equation}\label{lim} \psi(z,w,\lambda) e^{-\lambda(z-w)^2/4} \rightarrow I, ~ z \rightarrow \infty. \end{equation} Note that the unperturbed wave \begin{equation}\label{exp} \varphi_0(z,\lambda,w):=e^{\lambda (z-w)^2/4}, \quad w,\lambda\in \mathbb C, \end{equation} depends on the spatial parameter $w$ and the spectral parameter $\lambda$, and grows exponentially at infinity in some directions. The same is true for the elements of the matrix $\psi(z,\lambda,w)$. Let us stress that, contrary to standard practice, we consider the function $\psi$ (and other functions defined by $\psi$) for all complex values of $\lambda$, not just for $i\lambda,~\lambda>0$. This allows us to generalize the Bukhgeim method to the case of potentials in $L^\infty_{\rm comp}(\mathbb R^2)$. From the technical point of view, this allows us to use the Hausdorff-Young inequality. Problem (\ref{fir})-(\ref{lim}) can be rewritten using the bounded function \begin{equation}\label{mu1} \mu(z,w,\lambda):= \psi(z,w,\lambda)e^{-\lambda (z-w)^2/4}, \end{equation} i.e., (\ref{fir})-(\ref{lim}) is equivalent to \begin{equation}\label{fir2} \overline{\partial }\mu(z,w,\lambda) = Q \overline{\mu}e^{[\overline{\lambda(z-w)^2}- \lambda(z-w)^2]/4}, \quad z\in \mathbb C; \quad \mu\rightarrow I, ~ z \rightarrow \infty. \end{equation} Using the fact that $\overline{\partial}\frac{1}{\pi z}=\delta(z)$, equation (\ref{fir2}) can be reduced to the Lippmann-Schwinger equation \begin{equation}\label{19JanA} \mu(z,\lambda,w)=I+ \frac{1}{\pi}\int_{\mathbb C} Q(z') \frac{e^{-i\Im [\lambda(z'-w)^2]/2}}{z-z'}\overline{\mu}(z',\lambda,w) \, d{\sigma_{z'}}, \end{equation} where $d{\sigma_{z'}}=dx'dy'$ and $\mu\to I$ as $z\to\infty$. Denote \begin{equation}\label{muSet5} \mathcal L_\lambda \varphi (z)= \frac{1}{\pi}\int_{\mathbb C} \frac{e^{-i\Im[\lambda(z'-w)^2]/2} }{z-z'} \, \varphi(z') \, d{\sigma_{z'}}. \end{equation} Then equation (\ref{19JanA}) implies that \begin{equation}\label{2006A} \mu = I + \mathcal L_\lambda Q (I + \overline{\mathcal L_\lambda} \overline{Q} \mu ). \end{equation} In particular, for the component $\mu_{11}$ of the matrix $\mu$, we have $\mu_{11} = 1 + M \mu_{11}$, with $M = \mathcal L_\lambda Q_{12} \overline{\mathcal L_\lambda} \overline{Q_{21}}$, leading to \begin{equation}\label{2410C} (I-M)(\mu_{11} - 1) = M 1. \end{equation} By inverting $I-M$, we can obtain $\mu_{11}$. The other components of $\mu$ can be found similarly. Denote by $L^\infty_{z,w}(B)$ the space of bounded functions of $z,w\in \mathbb C$ with values in a Banach space $B$. The following two lemmas show that $M$ is a contraction in the space $L^\infty_{z,w}(L^p_\lambda(\lambda:|\lambda|>R))$ if $R$ is large enough, and that $M 1$ also belongs to this space. Once these lemmas are proved, one can find the solution $\mu$ of (\ref{19JanA}) (using, for example, the Neumann series for the inversion of $I-M$).
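Explicitly, once $\|M\|<1$ in this space (Lemma \ref{2310A}), equation (\ref{2410C}) is solved by the convergent Neumann series
$$
\mu_{11}-1=\sum_{n\geq 0} M^n (M1) = \sum_{n\geq 1} M^n 1,
$$
whose first term is controlled by Lemma \ref{2310C}.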
Then formula (\ref{mu1}) provides the solution $\psi$ of (\ref{fir})-(\ref{lim}). \begin{lemma}\label{2310A} Let $p>2$. Then $$ \lim_{R \rightarrow \infty} \|M\|_{L^\infty_{z,w}(L^p_\lambda(\lambda:|\lambda|>R)) } = 0. $$ \end{lemma} \begin{lemma}\label{2310C} Let $p>2$. Then there exists $R>0$ such that $$ M 1 \in {L^\infty_{z,w}(L^p_\lambda(\lambda:|\lambda|>R))}. $$ \end{lemma} Note that (\ref{2410C}), together with Lemmas \ref{2310A} and \ref{2310C}, allows one to solve the direct, but not the inverse, problem, since the operator $M$ depends on $Q$. The following inclusion is an immediate consequence of (\ref{2410C}) and Lemmas \ref{2310A} and \ref{2310C}: \begin{equation}\label{mm1} \mu_{11}-1 \in {L^\infty_{z,w}(L^p_\lambda(\lambda:|\lambda|>R))}, \quad p>2, \end{equation} for large enough $R$. \subsection{Determination of the potential} Let the matrix $h$ be the {\it (generalized) scattering data}, given by the formula \begin{equation}\label{14Abr1} {h}(\lambda,w) = \int_{\mathbb{C}} e^{{-i\Im[\lambda(z-w)^2]/2}} Q(z)\overline{\mu}(z,\lambda,w) \, d{\sigma_{z}}. \end{equation} One can use Green's formula $$ \int_{\partial \mathcal O} f \, dz = 2i \int_{\mathcal O} \overline \partial {f} \, d{\sigma_{z}} $$ to rewrite $h$ as \begin{equation}\label{2106A} h(\lambda,w) =\frac{1}{2i} \int_{\partial\mathcal O}{\mu}(z,\lambda,w) \, dz. \end{equation} Thus, one does not need to know the potential $Q$ in order to find $h$. The function $h$ can be evaluated if the Dirichlet data $\psi|_{\partial\mathcal O}$ is known for equation (\ref{fir}), since $\mu|_{\partial\mathcal O}$ in (\ref{2106A}) can be expressed via $\psi|_{\partial\mathcal O}$ using (\ref{mu1}). In the standard approach, the spectral parameter $i\lambda$ with real $\lambda$ was used to recover the potential from the scattering data (\ref{14Abr1}), and the potential was recovered from the limit of the scattering data as $\lambda\to\infty$. Instead, in the present work, we have $\lambda\in \mathbb C$, and the potential is determined by integrating the scattering data over a large annulus in the complex $\lambda$-plane. Let $T^\lambda$ be the operator defined by \begin{equation}\label{0911A} T^\lambda [G]= \int_{\mathcal O} e^{-i \Im [\lambda(z-w)^2]/2} Q(z) G(z) \, d{\sigma_{z}}, \end{equation} where $G$ can be a matrix- or scalar-valued function. Then \begin{equation}\label{hhh} h(\lambda,w)=T^\lambda [\mu]= T^\lambda [I]+T^\lambda [\mu-I]. \end{equation} We will show that the following statement is valid. \begin{theorem}\label{t23} Let $Q$ be a complex-valued bounded potential. Then \begin{equation} \sup_{w\in \mathcal O}|\int_{R<|\lambda|<2R} |\lambda|^{-1} \, T^{\lambda} [\mu-I] \, d\sigma_\lambda| \rightarrow 0, \quad \text{as } \ R \to \infty, \label{remainder} \end{equation} and \begin{equation} \int_{\mathcal{O}} g(w) \int_{R<|\lambda|<2R} |\lambda|^{-1} T^{\lambda} [I] \, d\sigma_\lambda \, d{\sigma_{w}} \to 4\pi^2 \ln 2\int_{\mathcal{O}} g(z) Q(z) \, d{\sigma_{z}}, \quad \text{as } R \to \infty, \label{mainTerm} \end{equation} for every smooth $g$ with compact support in $\mathcal O$. Thus \[ \int_{\mathcal{O}} g(z) Q(z)d\sigma_{z}=\frac{1}{4\pi^2\ln 2}\lim_{R\to\infty}\int_{R<|\lambda|<2R} |\lambda|^{-1} \, \int_{\mathbb{C}} g(w) h(\lambda,w)\, d{\sigma_{w}} d\sigma_\lambda. \] \end{theorem} Therefore, if the scattering data is uniquely determined by the DtN map, then so is the potential $Q$.
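Let us indicate where the constant $4\pi^2 \ln 2$ in (\ref{mainTerm}) comes from. The phase $-\Im [\lambda(z-w)^2]/2$, viewed as a function of $w$, has a single nondegenerate stationary point at $w=z$, with Hessian determinant $-|\lambda|^2$ and zero signature, so the stationary phase approximation gives
$$
\int_{\mathcal O} e^{-i \Im [\lambda(z-w)^2]/2}\, g(w) \, d{\sigma_{w}} = \frac{2\pi}{|\lambda|}\, g(z)+O(|\lambda|^{-3/2}),
$$
and therefore
$$
\int_{R<|\lambda|<2R} |\lambda|^{-1}\,\frac{2\pi}{|\lambda|} \, d\sigma_\lambda = 2\pi \int_0^{2\pi}\!\!\int_R^{2R} r^{-2}\, r \, dr \, d\theta = 4\pi^2 \ln 2.
$$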
In order to prove \eqref{remainder}, we use the two lemmas stated below and (\ref{2410C}) rewritten as follows \begin{align}\label{rem} \mu_{11} - 1 = M (\mu_{11} - 1) + M 1 \end{align} (the other entries of the matrix $\mu-I$ can be handled in a similar way). Relation \eqref{mainTerm} follows from the stationary phase approximation. \begin{lemma}\label{2410D} Let $p>1$. Then there exists $R>0$ such that $$T^\lambda M 1 \in L^\infty_{w}(L^p_\lambda(\lambda:|\lambda|>R)).$$ \end{lemma} \begin{lemma}\label{2410I} Let $p>1$. Then there exists $R>0$ such that $$T^\lambda M (\mu_{11} - 1) \in L^\infty_{w}(L^p_\lambda(\lambda:|\lambda|>R)).$$ \end{lemma} \section{Proofs}\label{proofs} In order to make the calculations more compact, we introduce the following notation for the $L^p$-space on the complement of a ball: $$\quad L^p_{|\lambda|>R} = L^p_\lambda(\lambda:|\lambda|>R).$$ We will also use the real-valued function $$\rho_{\lambda,w}(z) = \Im[\lambda(z-w)^2]/2,$$ where the dependence on $\lambda$ and $w$ will be omitted in some cases. \subsection{Preliminary results} \begin{lemma}\label{2210A} Let $1\leq p<2$. Then the following estimate is valid for an arbitrary $0 \neq a \in \mathbb C$ and some constants $C=C(p,R)$ and $\delta=\delta(p)>0$: $$ \left \| \frac{1}{u(\sqrt{u}-a)} \right \|_{L^p(u\in \mathbb C:|u|<R)} \leq C(1+|a|^{-1+\delta}). $$ \end{lemma} {\bf Remark.} A more accurate estimate will be proved below, with $\delta=\frac{4}{p}-2$ if $4/3<p<2$, with the right-hand side replaced by $C(1+|\ln|a||^{1/p})$ when $p=4/3$, and by a constant when $1\leq p<4/3$.

{\bf Proof.} The statement is obvious if $|a|\geq 1$. If $|a|<1$, then the left-hand side $L$ of the inequality above takes the following form after the substitution $u=|a|^2v$: \begin{equation}\label{lll} L=|a|^{\frac{4}{p}-3}\left \| \frac{1}{v(\sqrt{v}-\dot{a})} \right \|_{L^p(v\in \mathbb C:|v|<R/|a|^2)}, \quad \dot{a}=a/|a|. \end{equation} Without loss of generality, one can assume that $R>2$. We split the function $f:=\frac{1}{v(\sqrt{v}-\dot{a})}$ into two terms $f_1+f_2$, obtained by multiplying $f$ by $\alpha$ and $1-\alpha$, respectively, where $\alpha$ is the indicator function of the disk of radius two. The norm of $f_1$ can be estimated from above by an $a$-independent constant. The second function can be estimated from above by $\frac{2}{|v|^{3/2}}$. The norm of the latter function is easily evaluated: it does not exceed a constant if $p>4/3$, it does not exceed $C(1+|\ln|a||^{1/p})$ if $p=4/3$, and it does not exceed $C|a|^{3-\frac{4}{p}}$ if $p<4/3$. Since $\|f_1\|\leq C\|f_2\|$, we can replace $f$ in (\ref{lll}) by $Cf_2$, and this implies the statement of the lemma. \qed

\begin{lemma}\label{2310F} Let $z_1,w \in \mathbb C$, $p > 2$ and $\varphi \in L^\infty_{\rm{comp}}$. Then $$ \left \|\int_{\mathbb C} \varphi(z)\frac{e^{i\rho_{\lambda,w}(z)} }{z-z_1} \, d{\sigma_{z}} \right \|_{L^p_\lambda(\mathbb C)} \leq C\frac{\|\varphi\|_{L^\infty}}{|z_1-w|^{1-\delta}}, $$ where the constant $C$ depends only on the support of $\varphi$ and on $\delta =\delta(p)>0$. \end{lemma} {\bf Proof}. Denote by $F=F(\lambda,w,z_1)$ the integral on the left-hand side of the inequality above. We change variables, $u=(z-w)^2$, in $F$ and take into account that $d{\sigma_{u}} =4|z-w|^2 d{\sigma_{z}}$. Then $$ F=\frac{1}{4}\sum_{\pm} \int_{\mathbb C} \varphi(w\pm\sqrt{u})\frac{e^{i\Im(\lambda u)/2}}{|u|(\pm\sqrt{u}-(z_1-w))} \, d{\sigma_{u}}. $$ Using the Hausdorff-Young inequality with $p'=p/(p-1)$ and Lemma \ref{2210A}, we obtain that $$ \|F\|_{L^p_\lambda} \leq \frac{1}{2}\sum_{\pm}\left \| \frac{\varphi(w\pm\sqrt{u})}{|u|(\pm\sqrt{u}-(z_1-w))} \right \|_{L^{p'}_u} \leq C \frac{\|\varphi\|_{L^\infty} }{|z_1-w|^{1-\delta}}. $$ \qed

\subsection{Proof of Lemma \ref{2310A}} Let \begin{equation}\label{AAA} A(z,z_2,\lambda,w) =\pi^{-2}\int_{\mathcal O} \frac{e^{-i\rho_{\lambda,w}(z_1)}}{{z}-{z_1}} {Q}_{12}(z_1) \frac{e^{i\rho_{\lambda,w}(z_2)}}{\overline{z_1}-\overline{z_2}} \overline{Q}_{21}(z_2) \, d{\sigma_{z_1}}, \end{equation} so that $$Mg(z) = \int_{\mathcal{O}} A(z,z_2,\lambda,w) g(z_2) \, d{\sigma_{z_2}}.$$ Then, from Minkowski's integral inequality, we have \begin{align*} \|Mg(z,\cdot)\|_{L^p_{|\lambda|>R}} &\leq \int_{\mathcal O} \|A(z,z_2,\lambda,w)g(z_2,\cdot)\|_{L^p_{|\lambda|>R}} \, d{\sigma_{z_2}} \\ &\leq \int_{\mathcal O} \sup_{\lambda:|\lambda|>R}|A(z,z_2,\lambda,w)| \, d{\sigma_{z_2}} \, \sup_{z_2}\|g(z_2,\cdot)\|_{L^p_{|\lambda|>R}}. \end{align*} Thus it remains to show that, uniformly in $z \in \mathbb C$ and $w \in \mathcal{O}$, we have \[ \int_\mathcal O|A(z,z_2,\lambda,w)| \, d{\sigma_{z_2}} \to 0 \quad {\rm as} \quad |\lambda|\to\infty. \] Let $A^{s}$ be given by (\ref{AAA}) with the extra factor $\alpha(s|z-z_1|)\alpha(s|z_1-z_2|)$ in the integrand, where $\alpha\in C^\infty$, $\alpha=1$ outside of a neighborhood of the origin, and $\alpha$ vanishes in a smaller neighborhood of the origin. Since \begin{align*} \int_{B_1(0)} \int_{B_1(0)} \frac{1}{|z_1|}\frac{1}{|z_1-z_2|} \, d{\sigma_{z_1}} \, d{\sigma_{z_2}} < \infty, \end{align*} for each $\varepsilon$ there exists $s=s_0(\varepsilon)$ such that \[ \int_\mathcal O|A-A^{s_0}|\, d{\sigma_{z_2}}<\varepsilon \] for all values of $z,w,\lambda$. Denote by $A^{s_0,n}$ the function $A^{s_0}$ with the potentials ${Q}_{12},{Q}_{21}$ replaced by their $L_1$-approximations ${Q}_{12}^n,{Q}_{21}^n\in C_0^\infty$. Since the other factors in the integrand of $A^{s_0}$ are bounded (they are infinitely smooth), we can choose these approximations in such a way that \[ \int_\mathcal O|A^{s_0}-A^{s_0,n}|\, d{\sigma_{z_2}}<\varepsilon \] for all values of $z,w,\lambda$. Now it is enough to show that \[ |A^{s_0,n}(z,z_2,\lambda,w)|\to 0 \quad {\rm as} \quad |\lambda|\to \infty \] uniformly in $z,z_2,w$. The latter relation follows immediately from the stationary phase method, since the amplitude function in the integral $A^{s_0,n}$ and all its derivatives in $z_1$ are uniformly bounded with respect to all the arguments. \qed \subsection{Proof of Lemma \ref{2310C}} Recall that \begin{align*} M1 =\pi^{-2} \int_{\mathcal{O}} \int_{\mathcal{O}} \frac{e^{-i \rho_\lambda (z_1)}}{z-z_1} Q_{12}(z_1) \frac{e^{i \rho_\lambda (z_2)}}{\overline{z_1} - \overline{z_2}} \overline{Q_{21}}(z_2) \, d{\sigma_{z_2}} \, d{\sigma_{z_1}}. \end{align*} Let $C$ be a constant that may depend on $\| Q \|_{L^\infty}$ and $\mathcal{O}$.
Then, by Minkowski's integral inequality and Lemma \ref{2310F}, we have \begin{eqnarray*} \|M 1\|_{L^p_{|\lambda|>R}} &\leq& \int_{\mathcal O} \left \| \frac{e^{-i \rho_\lambda (z_1)}}{z-z_1} Q_{12}(z_1) \int_{\mathcal O} \frac{e^{i\rho_\lambda(z_2)}}{\overline{z_1}-\overline{z_2}}\overline{Q_{21}}(z_2) \, d{\sigma_{z_2}} \right \|_{L^p_{|\lambda|>R}} d{\sigma_{z_1}} \\ &\leq& \int_{\mathcal O} \left | \frac{Q_{12}(z_1)}{z-z_1} \right |\left \|\int_{\mathcal O} \frac{e^{i\rho_\lambda(z_2)}}{\overline{z_1}-\overline{z_2}}\overline{Q_{21}}(z_2) \, d{\sigma_{z_2}} \right \|_{L^p_{|\lambda|>R}} d{\sigma_{z_1}} \\ &\leq& C\int_{\mathcal O} \frac{1}{|z-z_1||z_1-w|^{1-\delta}} \, d{\sigma_{z_1}} < \infty, \end{eqnarray*} since $\delta>0$. \qed \subsection{Proof of Lemma \ref{2410D}} Let $C$ be a constant that may depend on $\| Q \|_{L^\infty}$ and $\mathcal{O}$. Then, applying successively Minkowski's integral inequality, H\"older's inequality, and Lemma \ref{2310F}, we see that \begin{align*} \|T^\lambda[M1]\|_{L^p_{|\lambda|>R}} & \leq \int_{\mathcal O} \left \|\int_{\mathcal O} \frac{e^{-i (\rho (z_1) + \rho(z))}}{z-z_1} Q(z) \, d{\sigma_{z}} \int_{\mathcal O}\frac{e^{i\rho (z_2)}}{\overline{z_1}-\overline{z_2}} \overline{Q_{21}}(z_2) \, d{\sigma_{z_2}} \right \|_{L^p_{|\lambda|>R}}|Q_{12}(z_1)| d{\sigma_{z_1}} \\ & \leq C\int_{\mathcal O} \left \|\int_{\mathcal O} \frac{e^{-i \rho (z)}}{z-z_1} Q(z) \, d{\sigma_{z}} \right \|_{L^{2p}_{|\lambda|>R}} \left \| \int_{\mathcal O}\frac{e^{i\rho (z_2)}}{\overline{z_1}-\overline{z_2}} \overline{Q_{21}}(z_2) \, d{\sigma_{z_2}} \right \|_{L^{2p}_{|\lambda|>R}} d{\sigma_{z_1}} \\ & \leq C\int_{\mathcal O} \frac{1}{|z_1-w|^{1-\delta}} \frac{1}{|z_1-w|^{1-\delta}} \, d{\sigma_{z_1}} < \infty, \end{align*} as $\delta>0$. \qed \subsection{Proof of Lemma \ref{2410I}} Let $f = \mu_{11}-1$ and let $C$ be a constant that may depend on $\| Q \|_{L^\infty}$ and $\mathcal{O}$. Then the same arguments as in the proof of Lemma \ref{2410D} imply that \begin{align*} \|T^\lambda[M f]\|_{L^p_{|\lambda|>R}} & \leq C\int_{\mathcal O} \left \|\int_{\mathcal O} \frac{e^{-i \rho(z)}}{z-z_1} Q(z) d{\sigma_{z}} \right \|_{L^{2p}_{|\lambda|>R}} \left \| \int_{\mathcal O}\frac{e^{i\rho(z_2)}}{\overline{z_1}-\overline{z_2}} \overline{Q_{21}}(z_2) f(z_2) d{\sigma_{z_2}} \right \|_{L^{2p}_{|\lambda|>R}} d{\sigma_{z_1}} \\ & \leq C \int_{\mathcal O} \left \|\int_{\mathcal O} \frac{e^{-i \rho(z)}}{z-z_1} Q(z) d{\sigma_{z}} \right \|_{L^{2p}_{|\lambda|>R}} \int_{\mathcal O} \left |\frac{\overline{Q_{21}}(z_2) }{\overline{z_1}-\overline{z_2}} \right | \left \| f(z_2) \right \|_{L^{2p}_{|\lambda|>R}} d{\sigma_{z_2}} d{\sigma_{z_1}} \\ & \leq C\|f\|_{L^\infty_{z,w}\left(L^{2p}_{|\lambda|>R} \right)}\int_{\mathcal O} \frac{1}{|z_1-w|^{1-\delta}} \, d{\sigma_{z_1}} < \infty, \end{align*} since $\delta>0$ and (\ref{mm1}) holds for $f=\mu_{11}-1$. \qed \subsection{Proof of Theorem \ref{t23}} Let us prove (\ref{remainder}). We fix $p\in(1,2)$. From (\ref{rem}) and Lemmas \ref{2410D} and \ref{2410I}, it follows that there exists $R>0$ such that $T^\lambda[\mu_{11}-1]\in L^\infty_{w}(L^p_{|\lambda|>R})$. The other entries of the matrix $\mu-I$ can be treated similarly, i.e., \[ T^\lambda[\mu-I]\in L^\infty_{w}(L^p_{|\lambda|>R}). \] Since $q=\frac{p}{p-1}>2$, H\"older's inequality implies that \[ \Big|\int_{R<|\lambda|<2R} |\lambda|^{-1} \, T^{\lambda} [\mu-I] \, d\sigma_\lambda\Big|\leq\Big[\int_{R<|\lambda|<2R} |\lambda|^{-q}d\sigma_\lambda\Big]^{\frac{1}{q}} \|T^{\lambda} [ \mu-I]\|_{L^\infty_{w}(L^p_{|\lambda|>R})}\to 0 \] as $R\to\infty$. Relation (\ref{remainder}) is proved. The stationary phase approximation implies that \[ \int_{\mathcal O}T^\lambda[1]g(w)d\sigma_w=\int_{\mathcal O} \int_{\mathcal O}e^{-i \Im [\lambda(z-w)^2]/2} g(w)d\sigma_w \, Q(z) \, d{\sigma_{z}}=\int_{\mathcal O}\Big[\frac{2\pi}{|\lambda|}g(z)+O(|\lambda|^{-3/2})\Big]Q(z)d\sigma_z. \] This immediately justifies (\ref{mainTerm}). The last statement of the theorem follows from (\ref{hhh})-(\ref{mainTerm}). \qed \subsection{Proof of Theorem \ref{uniqueness}} Due to Theorem \ref{t23}, one only needs to show that the scattering data $h$ for $|\lambda|\gg 1$ is uniquely determined by the Dirichlet-to-Neumann operator $\Lambda_{\gamma}$. This will be done by repeating the arguments used in \cite[Theorem 4.1]{bu} and \cite[Theorem 5.1]{fr}. Let $\gamma_j, j=1,2,$ be two Lipschitz conductivities in $\mathcal O$ such that $\Lambda_{\gamma_1} = \Lambda_{\gamma_2}$. Since $\gamma_j$ is Lipschitz continuous, it is differentiable almost everywhere, and the derivatives are bounded \cite{ev}. Since $\Lambda_{\gamma_1} = \Lambda_{\gamma_2}$ and $\gamma_1, \gamma_2 \in W^{1,\infty}(\mathcal{O})$, we have $\gamma_1 |_{\partial \mathcal{O}} = \gamma_2 |_{\partial \mathcal{O}}$ (see \cite{a90}). We extend $\gamma_j$ outside $\mathcal{O}$ in such a way that $\gamma_1=\gamma_2$ in $\mathbb{C} \setminus \mathcal{O}$ and $1-\gamma_j \in W^{1, \infty}_{\rm comp}(\mathbb{C})$. Let $\widetilde{\mathcal O}$ be a bounded domain with a smooth boundary that contains the supports of the functions $1-\gamma_j$. All the previous results will be used below with $\mathcal{O}$ replaced by $\widetilde{\mathcal{O}}$ and $\gamma$ extended as described above. Let $Q_j, \psi_j, \mu_j, h_j,~j=1,2,$ be the potential and the solution in \eqref{fir}, the function in \eqref{mu1}, and the scattering data in \eqref{14Abr1} associated with the extended conductivity $\gamma_j$. Let us note that the functions $\psi_j, \mu_j, h_j,~j=1,2,$ defined by the conductivity problem in $\widetilde{\mathcal{O}}$ are not extensions of the functions defined by the problem in $\mathcal{O}$. Due to equation \eqref{2106A}, we have \begin{align*} h_j(\lambda,w) =\frac{1}{2i} \int_{\partial\widetilde{\mathcal O}}{\mu_j}(z,\lambda,w) \, dz. \end{align*} Thus it is enough to prove that \begin{align}\label{mumu} \mu_1 = \mu_2 \quad \text{on } \partial \widetilde{\mathcal{O}}\quad \text{when } |\lambda|\gg 1 . \end{align} Let $\varphi = (\varphi_1, \varphi_2)^t$ be the first column of $\psi_1$, and let $v = \gamma_1^{-1/2} \varphi_1$, $w = \gamma_1^{-1/2} \overline{\varphi_2}$. Since $\overline{\partial} \varphi = Q_1 \overline{\varphi}$, and equation (\ref{firbc}) holds for $\phi^{(1)} = (\varphi_1, \overline{\varphi_2})^t$, it follows that $\overline{\partial} v = \partial w$ in $\mathbb C$, and therefore there exists $u_1$ such that \begin{align*} \partial u_1 = v, \quad \overline{\partial} u_1 = w \quad \text{in } \mathbb C, \end{align*} which is a solution to \begin{align*} \mbox{div}(\gamma_1 \nabla u_1) =0 \quad \text{in } \mathbb C. \end{align*} Now we define $u_2$ by \begin{align*} u_2= \begin{cases} u_1 \quad \text{in } \mathbb{C} \setminus \mathcal{O} \\ \widehat{u} \quad \text{in } \mathcal{O}, \end{cases} \end{align*} where $\widehat{u}$ is the solution to the Dirichlet problem \begin{align*} \begin{cases} \mbox{div}(\gamma_2 \nabla \widehat{u}) =0 & \text{in } \mathcal{O} \\ \widehat{u} = u_1 & \text{on } \partial \mathcal{O}. \end{cases} \end{align*} Let $g \in C^\infty_0 (\mathbb{C})$. Then \begin{align*} \int_\mathbb{C} \gamma_2 \nabla u_2 \cdot \nabla g \, d\sigma_z &= \int_{\mathbb C \setminus \mathcal{O}} \gamma_1 \nabla u_1 \cdot \nabla g \, d\sigma_z + \int_\mathcal{O} \gamma_2 \nabla \widehat{u} \cdot \nabla g \, d\sigma_z \\ &=- \int_{\partial \mathcal{O}} \Lambda_{\gamma_1} [u_1 |_{\partial \mathcal{O}}] g \, dz + \int_{\partial \mathcal{O}} \Lambda_{\gamma_2} [\widehat{u} |_{\partial \mathcal{O}}] g \, dz \\ &= 0. \end{align*} Hence $\mbox{div}(\gamma_2 \nabla u_2) =0$ in $\mathbb{C}$. Then \begin{align*} \phi^{(2)} = \gamma_2^{1/2} \left( \partial u_2, \overline{\partial }u_2 \right)^t \end{align*} is a solution of (\ref{firbc}) with $\gamma=\gamma_2$, and \begin{align*} \varphi^{(2)} = (\phi^{(2)}_1,\overline{\phi^{(2)}_2})^t \end{align*} is a solution of (\ref{fir}) with $Q=Q_2$. Lemmas \ref{2310A} and \ref{2310C} imply the unique solvability of the Lippmann-Schwinger equation when $|\lambda|>R$ and $R$ is large enough. Thus, $\varphi^{(2)}$ is equal to the first column of $\psi_2$ when $|\lambda|>R$. On the other hand, $\varphi^{(2)}$ in $\mathbb C\setminus\mathcal{O}$ coincides with the first column $\varphi$ of $\psi_1$. Thus the first columns of $\psi_1$ and $\psi_2$ are equal on $\mathbb C\setminus\mathcal{O}$ when $|\lambda|>R$. Repeating the same steps with the second columns of $\psi_1,\psi_2$, we obtain that $\psi_1|_{\partial\widetilde{\mathcal{O}}}=\psi_2|_{\partial\widetilde{\mathcal{O}}}$ when $|\lambda|>R$, and therefore (\ref{mumu}) holds. The uniqueness of $h$ and Theorem \ref{t23} imply that the potential $Q$ in the Dirac equation (\ref{fir}) is determined uniquely, and therefore $q$ is determined uniquely. Now $\log \gamma$ can be found from (\ref{char1bc}) uniquely up to an additive constant, i.e., $\gamma$ is determined up to a multiplicative constant. Finally, this constant is determined uniquely, since $\gamma |_{\partial \mathcal{O}}$ is determined uniquely by $\Lambda_\gamma$. \qed {\bf Acknowledgments.} The authors are thankful to Daniel Faraco and Keith Rogers for useful discussions.
\section{Introduction} The study of thermodynamic aspects of black holes over the past decades has given several insights into the nature of gravity as described by Einstein's General Relativity, and is expected to be a crucial link in constructing a quantum theory of gravity (see \cite{paddy-newinsights} for a recent review and references). In a paper written more than a decade ago \cite{jacobson-eq-of-state}, Jacobson speculated that it might be possible to invert the logic of the ``physical process" version of the laws of black hole mechanics, developed by Wald, and, by applying it to local Rindler horizons, derive the Einstein field equations from the Clausius relation, $T \mathrm{d} S = \mathrm{d} E_M$, where $E_M$ is related to the matter flux (and vanishes when $T_{ab}=0$). The essential new idea introduced by Jacobson was that of local Rindler horizons in a small patch of spacetime which can be approximated as flat once one has set the acceleration length scale appropriately. (See Appendix \ref{app:lif-conds} for an elaboration on this construction.) The Einstein equations would then emerge as consistency conditions on the background. In a later paper \cite{paddy-pdv}, Padmanabhan pointed out that if one actually looks at the structure of the Einstein tensor near a spherically symmetric horizon, it has the form $T \mathrm{d} S=\mathrm{d} E_G+P \mathrm{d} V$, where $E_G$ is associated with the horizon energy (and, unlike $E_M$, $E_G \neq 0$ when $T_{ab}=0$) and $P$ with the matter flux (these are defined below). In fact, the above relation has been shown to hold for a wide class of horizons, including \textit{arbitrary static horizons in Lanczos-Lovelock theory} as well. This result looks different from what Jacobson had started with to deduce the null-null part of the Einstein equations; specifically, the energy term $E_G$ has nothing to do with $E_M$, which is more like the $P \mathrm{d} V$ term but with a different interpretation in terms of matter flux. So, while the Clausius relation seems to yield the null-null component of the Einstein equations, the Einstein tensor itself has a \textit{very different structure}. It is important to relate these results and understand where the difference comes from, which we intend to do in this note. Before proceeding, we would like to clarify an important point so as to put the analysis presented here in proper perspective. To begin with, we must mention that our main emphasis here is {\it not} to analyze the pros and cons of one method over the other, but rather to clarify {\it why} they differ and to characterise the difference(s) from a physical point of view. It is indeed true that a priori there is a difference between the approaches of Jacobson and Padmanabhan; while Jacobson's analysis concerns deriving the Einstein equations from the Clausius relation, Padmanabhan's result demonstrates that the Einstein equations on a horizon are the same as the first law of thermodynamics. However, once the physical content of the Einstein equations has been claimed to be equivalent to a particular thermodynamic relation, one would have expected a mapping between the two results, unless there are subtle differences at a fundamental level. Indeed, the $T \mathrm{d} S$ term in the thermodynamic relation is fairly unambiguous, so the remaining terms in the equations must correspond in some manner. If they do not, then there is a difference at a conceptual level, which is what we shall show in this note.
We shall show that \textit{the difference arises in the particular manner in which matter fluxes across the horizon are treated.} Specifically, Padmanabhan's result arises due to deformations of the future horizon \textit{normal} to itself, generated by ingoing null geodesic congruences, and this yields the force term $P \mathrm{d} V$ in the final result. We discuss in some detail the resulting difference in physical interpretations. Furthermore, we show that the additional term $\mathrm{d} E_G$ is essentially the change in the quasi-local energy associated with the horizon $2$-surface, and is related to the horizon topology; more precisely, we show that $\mathrm{d} E_G/ \mathrm{d} \lambda \propto \int \mathrm{d}^2 x \ \sqrt{\sigma} \ {}^{(2)}R$. To summarize, we shall do the following in this note: \begin{enumerate} \item Clarify the role of horizon deformations to be considered in a Rindler patch when matter crosses the future Rindler horizon of the observer. \item Clarify the differences between the ``heat flux" term of Jacobson and the ``$P \mathrm{d} V$" term of Padmanabhan, and highlight the physical implications. \item Indicate clearly that the change in area of a horizon cross-section is determined by the expansion (and not its first derivative) of the ingoing null congruence normalised to have unit Killing energy. \item Give an explicit expression for the expansion $\theta$ of the congruence mentioned in the previous point, in terms of a combination of curvature tensor components (see Eq.~(\ref{eq:riemm-area-change}) below), and compare it with the corresponding combination occurring in the Raychaudhuri equation. In particular, {\it the area change involves not just the Ricci tensor, but also the Riemann tensor}, a point which is of relevance in the context of deriving field equations from thermodynamics. \item Exhibit explicitly the ``thermodynamic" structure of the Einstein tensor and show that there is a term corresponding to the quasi-local energy of the horizon, which must be separately accounted for when considering energy flow across the horizon. \item Show that, when using the Raychaudhuri equation with our prescribed null congruence, the $O(\lambda)$ term does not vindicate or necessitate setting the expansion to zero. \end{enumerate} We shall address all the above points in the following sections. To avoid distraction from the main points, we have relegated most of the mathematical details to appendices. Before proceeding, let us also clarify the restrictions we shall impose on the local frame of the accelerated observer. The most important restriction is that of staticity; that is, we shall assume that, in the local coordinates near the observer worldline, one can define an approximate timelike Killing vector field. Consequently, the near-horizon geometry is assumed to be static. For static spacetimes, we shall use, for the near-horizon metric, the form $\mathrm{d} s^2 = -N^2 \mathrm{d} t^2 + \mathrm{d} z^2 + \sigma_{AB} \mathrm{d} y^A \mathrm{d} y^B$, with the Taylor expansions for $N$ and $\sigma_{AB}$ derived by Visser et al.\ \cite{visser}. As our discussion will make clear, the above form of the metric is just a convenient parametrization; the final results are, of course, stated in a manifestly tensorial form.
The only crucial input is staticity, which requires a satisfactory notion of a timelike Killing vector that is hypersurface orthogonal, and a spacelike surface whose unit normal points in the direction of the acceleration. \section{The null basis near a horizon} Let us concentrate on the future horizon $\mathcal{H}$ of the right Rindler wedge, which is generated by outgoing null rays. The most natural transverse null vector for $\mathcal{H}$ is therefore defined by affinely parametrized \cite{conf-proc} ingoing null geodesics, $\bm k$, and can be chosen to be $\bm k = N^{-1} (\bm u - \bm n)$. Here, $N=\sqrt{-\xi^2}$, $\bm u = \bm \xi/N$, and the existence of a local timelike Killing field $\bm \xi$ (which generates local Lorentz boosts) is assumed. Also, $\bm n$ is the unit normal in the direction of the acceleration of $\bm u$. The existence of $\bm \xi$ is also assumed in the work of Jacobson \cite{jacobson-eq-of-state}, and without it no further progress can be made. The choice of normalization is such that $\bm k \cdot \bm \xi = -1$, implying that $\bm k$ has unit Killing energy. It must also be noted that the corresponding outgoing null rays are given by $\bm l = N^{-1} (\bm u + \bm n)$; we note that $N^2 \bm l \rightarrow \bm \xi$ on $\mathcal{H}$. More precisely, $N^2 \bm l$ becomes tangent to the horizon generators [the vector $\bm l$ itself does not, since $\bm l \cdot \bm \xi = -1$ by construction]. The standard Rindler transformations in the local inertial frame (LIF) have an additional parameter $\kappa$, which characterizes the orbits of Lorentz boosts and generates constant-acceleration trajectories. In inertial coordinates $(T,X,Y,Z)$, we have $\bm k=\kappa^{-1} (X+T)^{-1} \left( \bm \partial_T - \bm \partial_X \right) \rightarrow (2 \kappa X)^{-1} \left( \bm \partial_T - \bm \partial_X \right)$ on $\mathcal{H}$, i.e., at $T=X$. In fact, we could just as well have used $\bm l$ in the discussion below without any change in the final result, but this would be an unnatural choice, since these geodesics behave badly near the future horizon. Specifically, $\bm l=\kappa^{-1} (X-T)^{-1} \left( \bm \partial_T + \bm \partial_X \right)$, so the components in the locally inertial coordinates blow up at $X=T$.
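As a quick consistency check, the stated properties of $\bm k$ can be verified directly in the flat patch, where the boost Killing field is $\bm \xi = \kappa \left( X \bm \partial_T + T \bm \partial_X \right)$:
\begin{eqnarray*}
\bm k \cdot \bm \xi &=& \eta_{TT}\, k^T \xi^T + \eta_{XX}\, k^X \xi^X = \frac{-\kappa X - \kappa T}{\kappa (X+T)} = -1, \\
k^b \partial_b (X+T) &=& \frac{1}{\kappa(X+T)} - \frac{1}{\kappa(X+T)} = 0,
\end{eqnarray*}
and since the components of $\bm k$ depend on the coordinates only through the combination $X+T$, the second relation implies $k^b \partial_b k^a = 0$; that is, $\bm k$ is an affinely parametrized null geodesic with unit Killing energy.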
First, note that, in terms of the covariant derivative ${}^{(3)}D$ compatible with the $t=$constant hypersurface, we have \begin{eqnarray} {}^{(3)}D \cdot \bm n &=& \left( g^{ab} + u^a u^b \right) \nabla_a n_b \nonumber \\ &=& \nabla \cdot \bm n - \bm n \cdot \bm a \end{eqnarray} Now evaluate \begin{eqnarray} \nabla \cdot \bm k_{\epsilon} &=& -\epsilon N^{-1} \nabla \cdot \bm n + N \bm k_{\epsilon} \cdot \nabla N^{-1} \nonumber \\ &=& -\epsilon \frac{1}{N} \; {}^{(3)}D \cdot \bm n - \epsilon \frac{\bm a \cdot \bm n}{N} - \frac{\bm k_{\epsilon} \cdot \nabla N}{N} \nonumber \\ &=& - \epsilon \frac{1}{N} \; {}^{(3)}D \cdot \bm n \end{eqnarray} where we have noted that $\bm a=\nabla N/N$, so that the last two terms in the second equality cancel. Now, since the $t=$constant metric is $\mathrm{d} z^2 + \sigma_{AB} \mathrm{d} y^A \mathrm{d} y^B$, the $\bm n={\bm \partial}_z$ congruence is an affinely parametrized geodesic congruence, and therefore we can use the standard interpretation of ${}^{(3)}D \cdot \bm n$ in terms of the fractional rate of change of the ``volume'' of the $z=$constant surfaces, which here corresponds to the area element of the $2$-$D$ manifold described by the metric $\sigma_{AB}$. Hence, we finally get \begin{eqnarray} \nabla \cdot \bm k_{\epsilon} = - \epsilon \frac{1}{N} \partial_z \ln \sqrt{\sigma} = - \epsilon \frac{\mathrm{d} ~}{\mathrm{d} \lambda} \ln \sqrt{\sigma} \label{eq:NEW-exp-area-change} \end{eqnarray} which is the desired result. (We have used $N \mathrm{d} z = \mathrm{d} \lambda$ in arriving at the second equality; see Appendix \ref{app:wald} for details.) This straightforward evaluation should leave no doubt as to how the change in cross-sectional area of the horizon is described by the ingoing (or outgoing) null geodesic congruences constructed in the manner we have described. In fact, the choice of the ingoing congruence $\bm k$ is also strengthened by some old results due to T. Dray and G. 't Hooft \cite{dray-thooft}, which clearly show that a massless particle falling into a Schwarzschild black hole corresponds to a shift in the \textit{ingoing} Kruskal coordinate, the shift being proportional to the particle energy. \section{The thermodynamic structure of Einstein tensor} We shall now analyze the near-horizon form of the Einstein tensor, and reveal how its thermodynamic structure emerges. Before proceeding, however, we wish to clarify an important point concerning the variations we shall be considering. We shall base our discussion on the ingoing null geodesics $\bm k$ of the previous section, satisfying $\bm k \cdot \bm \xi = -1$. As should be evident from the comments in the previous section, \textit{the entire analysis can be repeated in a straightforward manner using outgoing null geodesics $\bm l$ satisfying $\bm l \cdot \bm \xi =-1$}; the only difference is a change of sign in $\bm n$ at various intermediate steps, while \textit{the final result remains unchanged}. The reason for using the ingoing null geodesics $\bm k$, as mentioned above, is that these have components which are well behaved at the future horizon in the locally inertial coordinates; the only crucial input is the normalization based on unit Killing energy.
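A quick cross-check of Eq.~(\ref{eq:NEW-exp-area-change}) can also be made without the $3+1$ split, using the coordinate formula $\nabla_a k_{\epsilon}^a = (\sqrt{-g})^{-1} \partial_a (\sqrt{-g}\, k_{\epsilon}^a)$. The following sympy sketch assumes, only for illustration, the special case $N = N(z)$ and $\sigma_{AB} = e^{2\varphi(z)} \delta_{AB}$ of the static metric ansatz:
\begin{verbatim}
import sympy as sp

z, eps = sp.symbols('z epsilon')
N = sp.Function('N')(z)            # lapse N(z) > 0
phi = sp.Function('phi')(z)        # sigma_AB = exp(2*phi(z)) delta_AB (two transverse dims)

sqrt_sigma = sp.exp(2*phi)         # sqrt(det sigma): cross-sectional area element
sqrt_g = N * sqrt_sigma            # sqrt(-g) for ds^2 = -N^2 dt^2 + dz^2 + sigma_AB dy^A dy^B

k_z = -eps / N                     # z-component of k_eps = N^{-1}(u - eps*n); u points along d_t

# staticity removes the d_t term; homogeneity in y^A removes the transverse terms
div_k = sp.simplify(sp.diff(sqrt_g * k_z, z) / sqrt_g)
claim = sp.simplify(-(eps / N) * sp.diff(sp.log(sqrt_sigma), z))
print(sp.simplify(div_k - claim))  # 0 : div(k_eps) = -(eps/N) d_z ln sqrt(sigma)
\end{verbatim}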
We begin with the following (exact) identity [see Appendix \ref{app:gauss-codazzi-rel} for a proof]: \begin{eqnarray} G_{ab} g^{\perp ab} = 2 \left( R_{ab}\xi^ak^b - R_{abcd}u^an^bu^cn^d \right) &-& \ {}^{(2)}R \nonumber \\ &-& N^2 R_{ab}k^ak^b + \Pi[K,k] \label{eq:eins-struc} \end{eqnarray} where $g^{\perp}_{ab}=-u_a u_b + n_a n_b$ is the metric on the surface orthogonal to the horizon, and $\Pi[K,k]=f(k) - f(K) - \phi(K)$, with $f(K)=K^2 - K_{\mu \nu}^2$ (similarly for $f(k)$), and $\phi(K)=n^{\mu} n^{\rho} \left( K^{\nu}_{\mu} K_{\nu \rho} - K K_{\mu \rho} \right)$. Here, $K_{\mu \nu}$ and $k_{AB}$ are the extrinsic curvatures of the level surfaces of $\bm u$ embedded in the $4$-$D$ spacetime, and of $\bm n$ embedded in the resultant $3$-$D$ space, respectively. Note that the above expression is true for an arbitrary spacetime, with no geometric constraints imposed so far. \footnote{In particular, for a flat $3$-$D$ space in a flat $4$-$D$ spacetime, one obtains $\ {}^{(2)}R = f(k)$, which is essentially the content of Gauss's \textit{Theorema Egregium}.} We shall now impose the condition of staticity; that is, we shall require that the near-horizon geometry, to a sufficient approximation, has a local timelike Killing vector field. In that case, we can show that (see Appendix \ref{app:riemm-area-change}), on the horizon $z \rightarrow 0$: \begin{eqnarray} R_{ab}\xi^ak^b - R_{abcd}u^an^bu^cn^d = \kappa \frac{\mathrm{d} ~}{\mathrm{d} \lambda} \ln \sqrt{\sigma} \label{eq:riemm-area-change} \end{eqnarray} The above equation gives the first derivative of the area (rather than its second derivative) in terms of curvature components, and deserves several comments, which we list below: \begin{itemize} \item It clearly shows that the change in cross-sectional area (obtained by integrating $\sqrt{\sigma}$ over the transverse coordinates) of the $\bm k$ (or the $\bm l$) congruence (normalized so as to have unit Killing energy), on a cross section of $\mathcal{H}$, depends on a very different combination of {\it Riemann tensor} components than the one occurring in the Raychaudhuri equation [which only involves the Ricci tensor, $R_{ab}k^ak^b$]. \item The Raychaudhuri equation gives the \textit{second} derivative of the area, and our analysis above shows that ``integrating'' it naively to obtain the first derivative will, in general, be tricky. Indeed, the null-null component does not appear in the above equation at all! In section \ref{sec:raych-jacobson}, we shall present an analysis \`a la Jacobson using the Raychaudhuri equation, which should clarify further what is going on here. (This and the previous comment are important particularly when we consider Jacobson's argument and compare it with our result; see section \ref{sec:raych-jacobson}.) \item The appearance of $R_{abcd}u^an^bu^cn^d$ must also be highlighted; one could have simply ignored this term by demanding it to be small, and calling this demand a further restriction on the definition of a local Rindler horizon. This, however, would be ad hoc, since for the Schwarzschild horizon it involves $\partial_r^2 (1-2M/r)$. Indeed, as is evident from above, there is actually no need to throw away this term, since it occurs in just the right combination in the Einstein tensor so as to give the change in area correctly.
\item Even if we did throw away the $R_{abcd}u^an^bu^cn^d$ term, we are left with $R_{ab}\xi^ak^b$, which has nothing to do with the null-null component of Ricci [recall that $\bm k \cdot \bm \xi = -1$]. \end{itemize} Proceeding to the main analysis, note that if $R_{ab}k^ak^b$ [and hence $G_{ab}k^ak^b$] is finite on the horizon, then the corresponding term on the RHS of Eq.~(\ref{eq:eins-struc}) is $O(z^2)$. Also, $\Pi[K,k]$ is ignorable because it is $O(z^2)$. This comes about as follows: $K_{\mu \nu}$ is zero due to staticity. On the other hand, $k_{AB} \propto \partial_z \sigma_{AB}$ is $O(z)$ since, from the Taylor expansion, $\sigma_{AB}=$ ($z$-${\rm independent~part}$) $+O(z^2)$ (see Ref. \cite{visser}). Since $\Pi[K,k]$ is quadratic in $k_{AB}$, it is $O(z^2)$. So we finally obtain: \begin{eqnarray} P \sqrt \sigma = \frac{\kappa}{2 \pi} \frac{\mathrm{d} ~}{\mathrm{d} \lambda} \left( \frac{1}{4} \sqrt{\sigma} \right) - \frac{1}{16 \pi} \ {}^{(2)}R \sqrt{\sigma} \label{eq:eq6} \end{eqnarray} where we have defined $P = (1/2) T_{ab} g^{\perp ab}$. The differential version of the above equation (multiplying it by $\mathrm{d} \lambda$) yields Padmanabhan's result: \begin{eqnarray} P \mathrm{d} V = T \mathrm{d} S - \mathrm{d} E_G \end{eqnarray} Having established the above relation, we can ask how general it is. It might seem that the result is very specific to Einstein gravity, since in arriving at it we used Eq.~(\ref{eq:riemm-area-change}) for the change of area, and in Einstein gravity the horizon entropy is proportional to the area. We could therefore relate entropy change to area change and derive the result. However, when one goes beyond Einstein theory, entropy is no longer proportional to area but is instead given by the Wald entropy. It is therefore quite a non-trivial fact that exactly the same result can be proved for a much larger class of Lagrangians -- the so-called Lanczos-Lovelock (LL) Lagrangians -- for which the horizon entropy is a non-trivial function of the area. Once again, we find that the near-horizon structure of the field equations for LL actions can be cast in the form: \begin{eqnarray} P \mathrm{d} V = T \mathrm{d} S_{LL} - \mathrm{d} E_{(G) LL} \end{eqnarray} and the resulting expressions for $S_{LL}$ and $E_{(G) LL}$ turn out to be \cite{thermod-static}: \begin{eqnarray} S_{\mathrm{LL}} \propto \int \sqrt{\sigma} L_{m-1}^{(D-2)} \\ (\mathrm{d} E_{G}/\mathrm{d} \lambda)_{\mathrm{LL}} \propto \int \sqrt{\sigma} L_{m}^{(D-2)} \end{eqnarray} We see that $S$ is precisely the Wald entropy, whereas $E_G$ gives the correct expression for quasi-local energy when applied to known black hole solutions. \footnote{A general definition of quasi-local energy, such as Hawking's definition for Einstein theory, is not available for the LL actions; in fact, ours can be taken as a natural generalization of Hawking's quasi-local energy to LL actions.} Before turning to the Raychaudhuri equation, let us make another relevant comment: It is easy to see, from our definitions, that $G_{ab} g^{\perp ab} = -2 G_{ab}\xi^ak^b + N^2 G_{ab}k^ak^b$.
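Since this last relation is pure algebra in $\bm u$ and $\bm n$, it can be confirmed with a generic symmetric tensor standing in for $G_{ab}$; a minimal sympy check (components taken in an orthonormal $t$--$z$ frame, an assumption made only for the illustration) is:
\begin{verbatim}
import sympy as sp

N = sp.symbols('N', positive=True)
G = sp.Matrix(2, 2, lambda i, j: sp.Symbol('G%d%d' % (min(i, j), max(i, j))))  # symmetric G_ab

u = sp.Matrix([1, 0])       # u^a in an orthonormal (t, z) frame
n = sp.Matrix([0, 1])       # n^a
xi = N * u                  # xi^a = N u^a
k = (u - n) / N             # k^a = N^{-1}(u^a - n^a)

con = lambda X, Y: (X.T * G * Y)[0]        # G_ab X^a Y^b
lhs = -con(u, u) + con(n, n)               # G_ab g^{perp ab}
rhs = -2*con(xi, k) + N**2 * con(k, k)     # -2 G_ab xi^a k^b + N^2 G_ab k^a k^b
print(sp.expand(lhs - rhs))                # 0
\end{verbatim}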
Therefore, provided $G_{ab}k^ak^b$ is finite on the horizon, one obtains, in the limit $N \rightarrow 0$: $G_{ab}\xi^ak^b \rightarrow - (1/2) G_{ab} g^{\perp ab}$ (which, incidentally, is the so-called \textit{work function}, $W$, defined by Hayward in the context of \textit{spherically symmetric}, dynamical horizons \cite{hayward}; note that we have \textit{not} assumed spherical symmetry to obtain Eq.~(\ref{eq:eq6})). On the horizon, we therefore have a natural interpretation of this term as the force acting on the horizon in the direction defined by $\bm k$. Let us also mention its form for an ideal fluid, described by $T_{ab}=\rho_0 v_av_b + p_0(g_{ab}+v_av_b)$, where $v^a$ is the fluid $4$-velocity, and we assume for simplicity that it lies in the $\bm u$--$\bm n$ plane. Then, a trivial calculation shows that $T_{ab}u^au^b=\gamma_{rel}^2 (\rho_0+p_0v_{rel}^2)$ and $T_{ab}n^an^b=\gamma_{rel}^2 (p_0+\rho_0v_{rel}^2)$, where $\gamma_{rel}=-\bm u \cdot \bm v = (1-v_{rel}^2)^{-1/2}$. We then immediately obtain $P = (1/2) T_{ab} g^{\perp ab}=(p_0-\rho_0)/2$. It is also instructive to compare this analysis with the one given by Jacobson, in which case the most natural starting point would be the Raychaudhuri equation. We do this in section \ref{sec:raych-jacobson}. We shall show that, for our $\bm k$ (or $\bm l$) congruence, the starting assumption of equating $T \mathrm{d} S$ with the matter flux gives, at $O(\lambda^0)$, a relation which is inconsistent with the algebraic identity obtained in this section. However, if one makes further approximations and ignores certain terms, then we do recover the null-null part of Einstein equations at $O(\lambda)$, although in a manner completely different from Jacobson's, since our analysis is not based on the null generators. Most importantly, we do not require the vanishing of the expansion of the null congruence at all. Before proceeding, we must emphasize that, in the next section, we shall be trying to follow Jacobson's reasoning \textit{in our setup}; the final results and implications must, of course, be interpreted keeping this in mind. Needless to say, our main emphasis is on trying to understand why there are differences between the work and energy terms in the two approaches; the answer, as we hope this note will make evident, lies in the different ways of treating fluxes across the horizon. \vspace{0.2in} \section{Analysis based on Raychaudhuri equation} \label{sec:raych-jacobson} In this section, we turn to the Raychaudhuri equation, in an attempt to better understand the difference between the above result and Jacobson's derivation of the null-null component of the field equations. To do so, we repeat Jacobson's analysis using the $\bm k$ congruence; this should indicate where the difference lies. Once again, it is worth emphasizing that we would obtain the same results upon using the outgoing $\bm l$ congruence of unit Killing energy. As we have shown above, the Einstein tensor as a whole has a much richer structure due to the presence of the $\ {}^{(2)}R$ term, which we would want to explore further. Unfortunately, the Raychaudhuri equation, as we will see, has nothing much to say about this term, but our analysis will shed some light on the role of certain assumptions in Jacobson's derivation, and also on the differences between the work term as well as the horizon energy. Start with the equation defining the variation of area in terms of the expansion $\theta$ of a congruence of ingoing null geodesics w.r.t.
the affine parameter $\lambda$ along $\bm k$ (see Appendix \ref{app:wald} for more details). Assuming that entropy is proportional to area, this gives: \begin{eqnarray} T_H \mathrm{d} S = \alpha^{-1} \int \theta \; \mathrm{d} \Sigma \; \mathrm{d} \lambda \end{eqnarray} where $\alpha = (8 \pi c L_{_{\rm P}}^2/ \hbar) / \kappa$, and the integration is over the null 3-surface generated by the cross-section of a bundle of ingoing null geodesics $\bm k$ across an affine distance $\lambda$. The horizon is at $\lambda=0$. Now expand $\theta$ \begin{eqnarray} \theta(\lambda) = \theta(0) + \dot \theta (0) \lambda + \frac{1}{2} \ddot \theta (0) \lambda^2 + O(\lambda^3) \end{eqnarray} in obvious notation. We can now use the Raychaudhuri equation to substitute for the first derivative of $\theta$ {\it evaluated at $\lambda=0$}. That is, \begin{eqnarray} \dot \theta (0) = - \frac{1}{2} \theta^2(0) - \left[ R_{ab}k^ak^b \right]_{\lambda=0} \end{eqnarray} where we have ignored shear and rotation for the time being (which is also an assumption in Jacobson's work). Now consider the heat flux through $\mathrm{d} \Sigma \; \mathrm{d} \lambda$: \begin{eqnarray} \mathrm{d} Q = \int T_{ab} \xi^a k^b \; \mathrm{d} \Sigma \; \mathrm{d} \lambda \end{eqnarray} for which a similar expansion gives: \begin{eqnarray} T_{ab}\xi^ak^b = \left[ T_{ab}\xi^ak^b \right]_{\lambda=0} + \lambda \; \left[ \frac{\mathrm{d}}{\mathrm{d} \lambda} T_{ab}\xi^ak^b \right]_{\lambda=0} + O(\lambda^2) \end{eqnarray} Following Jacobson, we now impose the Clausius relation $T \mathrm{d} S = \mathrm{d} Q$, and equate equal powers of $\lambda$ on both sides. That is, \begin{eqnarray} \alpha^{-1} \int \left[ \theta(0) + \dot \theta (0) \lambda + \frac{1}{2} \ddot \theta (0) \lambda^2 + O(\lambda^3) \right] \; \mathrm{d} \Sigma \; \mathrm{d} \lambda = \int \left[ \left[ T_{ab}\xi^ak^b \right]_{\lambda=0} + \lambda \; \left[ \frac{\mathrm{d}}{\mathrm{d} \lambda} T_{ab}\xi^ak^b \right]_{\lambda=0} + O(\lambda^2) \right] \; \mathrm{d} \Sigma \; \mathrm{d} \lambda \end{eqnarray} This gives \begin{eqnarray} O(\lambda^0)&:& \theta(0) = \alpha \left[ T_{ab}\xi^ak^b \right]_{\lambda=0} \label{eq:order-zero} \\ \nonumber \\ O(\lambda^1)&:& - \frac{1}{2} \theta^2(0) - \left[ R_{ab}k^ak^b \right]_{\lambda=0} = \alpha \left[ \frac{\mathrm{d}}{\mathrm{d} \lambda} T_{ab}\xi^ak^b \right]_{\lambda=0} \label{eq:order-lambda} \end{eqnarray} Using Eq.~(\ref{eq:order-zero}) to replace $\theta^2(0)$, this becomes \begin{eqnarray} O(\lambda^1)&:& - \frac{1}{2} \left[ \alpha T_{ab}\xi^ak^b \right]^2_{\lambda=0} - \left[ R_{ab}k^ak^b \right]_{\lambda=0} = \alpha \left[ \frac{\mathrm{d}}{\mathrm{d} \lambda} T_{ab}\xi^ak^b \right]_{\lambda=0} \nonumber \end{eqnarray} The relevant points to note here are: \begin{itemize} \item[--] On the horizon, $\kappa \lambda k^a$ goes to $\xi^a$, that is, $\left[\kappa \lambda k^a \right]_{\lambda=0} = \xi^a$ (see Appendix \ref{app:wald}), which is obviously an $O(\lambda^0)$ expression and NOT $O(\lambda)$.
This is a key difference from Jacobson's argument, arising because Jacobson considers fluxes along the generators ${\bar k}^a$ of the horizon. [In that case, $\kappa {\bar \lambda} {\bar k}^a=\xi^a$ is valid all across the Killing horizon ($\bar \lambda$ being the affine parameter along the generators $\bar k^a$). This then necessitates that the expansion of the generators vanish at the bifurcation surface $\bar \lambda=0$ (corresponding to $T=0=X$), since the matter flux term becomes $O(\bar \lambda)$.] In our opinion, since one would expect to associate entropy with cross sections of arbitrary null congruences in an arbitrary curved spacetime, such an assumption on the expansion is restrictive. \item[--] In our setup, it would actually be incorrect to deduce that $\left[ T_{ab}\xi^ak^b \right]_{\lambda=0}$ is $O(\lambda)$; as seen from above, this term is in fact related to $\theta(0)$, which in general does not vanish. In fact, one would expect arbitrary null congruences to block information of a certain region of spacetime from a class of observers; for such congruences, there is actually no need to constrain the expansion to vanish. \end{itemize} So, whether Einstein equations come out at $O(\lambda)$ depends on $\left[ {\mathrm{d}_\lambda} \left( T_{ab}\xi^ak^b \right) \right]_{\lambda=0} = \left[ k^a \nabla_a \left( T_{ab}\xi^ak^b \right) \right]_{\lambda=0}$. In general, it is not at all obvious what this will lead to, but let us consider this term in more detail: \begin{eqnarray} k^c \nabla_c \left[ T_{ab} \xi^a k^b \right] &=& \xi^a k^b k^c \underbrace{\nabla_c T_{ab}}_{\mathrm{ignore}} \; + \; T_{ab} \xi^a \underbrace{k^c \nabla_c k^b}_{= 0} \; + \; T_{ab} k^b k^c \nabla_c \xi^a \end{eqnarray} where we have ignored the derivatives of $T_{ab}$, which is justified in the approximation in which we are working here. For consistency, one must then also ignore the $T_{ab}^2$ term on the LHS of Eq.~(\ref{eq:order-lambda}). Now concentrate on the term involving $k^c \nabla_c \xi^a$, which is to be evaluated at $\lambda=0$ {\it after} computing the derivative. This term at $\lambda=0$ can be shown to give $-\kappa k^a$. Therefore we obtain: \begin{eqnarray} k^c \nabla_c \left[ T_{ab} \xi^a k^b \right] \approx - \kappa T_{ab}k^ak^b \end{eqnarray} The last term in $\left[ {\mathrm{d}} \left(T_{ab}\xi^ak^b\right) / {\mathrm{d} \lambda} \right]_{\lambda=0}$ therefore reproduces precisely the contribution which would come from calling (in our case incorrectly) $\left[ T_{ab}\xi^ak^b \right]_{\lambda=0}$ an $O(\lambda)$ term! Therefore the $O(\lambda^1)$ relation becomes [note that $\alpha = (8 \pi c L_{_{\rm P}}^2/ \hbar) / \kappa$]: \begin{eqnarray} O(\lambda^1) &:& - \left[ R_{ab}k^ak^b \right]_{\lambda=0} = - (8 \pi c L_{_{\rm P}}^2/ \hbar) \left[ T_{ab}k^ak^b \right]_{\lambda=0} \nonumber \end{eqnarray} thereby giving the null-null component of Einstein equations yet again, as Jacobson had obtained, but in a completely different manner! Also, note that the above relation is applicable all along the future horizon, and not just near the bifurcation point, insofar as the notion of a local, static Rindler horizon remains well defined. The discussion above clearly implies that: \begin{itemize} \item[1.]
In our setup, Einstein equations do NOT necessarily follow from $T\mathrm{d} S=\mathrm{d} E_{\mathrm{matter}}$; this is so since one could also have included an additional term, involving the curvature tensor, which would only modify the $O(\lambda^0)$ term, which is anyway ignored while evaluating the $O(\lambda^1)$ contribution. In fact, as we demonstrated in the previous section, such a term is present in Einstein equations, and corresponds to a change in ``gravitational energy''. \item[2.] The above claims can be explicitly demonstrated by evaluating Einstein equations near a static horizon, in which case a very natural energy term is picked out; the resultant equation, in fact, can be thought of as $T\mathrm{d} S - \mathrm{d} E_{\mathrm{horizon}} = P \mathrm{d} V = - \mathrm{d} E_{\mathrm{matter}}$. Note that the $O(\lambda^0)$ piece above can be rewritten as $(\kappa/2 \pi) \mathrm{d} (\delta A/4) = - T_{ab}\xi^ak^b \delta A$, which is, of course, Padmanabhan's result without the $\ {}^{(2)}R$ term. Of course, since the latter result is algebraically correct (as we showed in the previous section), this actually makes the $O(\lambda^0)$ contribution (and, by implication, the starting relation) incorrect at a \textit{purely algebraic} level -- one needs the $\ {}^{(2)}R$ contribution for consistency [the minus sign above is due to $\bm k$ being an ingoing congruence; as an interesting aside, let us also mention that this fact is one of the ``boundary conditions'' for dynamical/isolated horizons imposed by Ashtekar et al.]. In fact, we feel that the ``$\ {}^{(2)}R$'' term reinforces the well-known quasi-local character of gravitational energy; an ultra-local description of energy balance might therefore be a bit tricky. \item[3.] Let us also highlight the role of the ingoing congruence $\bm k$. The other null congruence, $N^2 \bm l$, generates the horizon, and since we have taken great pains to construct a local Killing horizon, the expansion along the generators vanishes everywhere on the horizon so long as our local constructs make sense; when they do not, we cannot even talk about local Rindler observers and local Killing horizons. Instead, the congruence $\bm k$ captures information infinitesimally away from the horizon in the direction normal to it [that it is a natural ``normal'' is easily seen in inertial coordinates, where the future horizon is $T-X=0$, whose normal is clearly along $\bm k$], and provides a natural flow along which to define variations of various quantities. We hope to have shown that it is this congruence which gives, in a sensible manner, the change in a cross-section of the horizon, as well as the matter flux across it. Of course, the result can also be justified by applying it to \textit{event horizons} of black hole solutions of Einstein field equations. For a stationary black hole horizon, the only sensible change in area when matter crosses the horizon is given by the expansion of $\bm k$, since the expansion of the horizon generators vanishes for a Killing horizon. Moreover, as we have already mentioned before, it has been shown rigorously by T. Dray and G. 't Hooft that a massless particle falling into a Schwarzschild black hole corresponds to a shift in the ingoing Kruskal coordinate, the shift being proportional to the particle energy. This further strengthens our motivation for using the ingoing congruence for horizon deformations. \item[4.]
One of our main conclusions, as far as comparison with Jacobson's analysis is concerned, is the following: \textit{The first-law form of the Einstein equations pointed out by Padmanabhan differs from the Clausius relation of Jacobson due to the difference in the manner in which matter fluxes across the horizon are defined.} Our analysis seems to be closer to the ones in Refs. \cite{hayward} and \cite{ashtekar-dynamical}. \item[5.] There are also other significant issues which go beyond the algebraic ones we have mostly concentrated on until now. In using the Clausius relation, one is trying to derive field equations from a starting thermodynamic relation. In such a case, one has to change the starting relation depending on what type of congruence one chooses; the additional terms that arise are then interpreted as dissipation terms and are accounted for by adding suitable entropy production terms in the Clausius relation. However, our analysis above shows that, for a suitably defined horizon (with static near-horizon geometry), the field equations take the form of the first law of thermodynamics without involving any additional terms, under the prescribed horizon deformations. The only sensible quantity to concentrate on, while comparing these two approaches, is the $T \mathrm{d} S$ term (which can have no ambiguity once the entropy density is suitably defined, and which can be verified by applying it to known cases of black hole horizons); in our case, this term is derived using the expansion $\theta$ alone, rather than its derivative, as is required when using the Clausius relation. However, as has already been emphasized several times before, it must be remembered that the difference originates in using different definitions of matter fluxes (and {\it not} due to any incorrectness in either of the two approaches); the physical motivations for the choice used here are mentioned in point (3) above. \end{itemize} Before moving on, we would like to point out that there have been some recent attempts to derive field equations for higher derivative gravity theories \cite{pmb-gen} along the lines of Jacobson. In these works (except Padmanabhan's, see below), the starting point is the Clausius relation along with Wald's definition of entropy in terms of the Noether charge of diffeomorphism invariance. However, while the definition of matter flux is similar to Jacobson's, the Raychaudhuri equation is never invoked, thereby avoiding any need for assumptions such as vanishing expansion, etc., present in the original Jacobson work. It would be worth investigating further how the comments in this note bear on these recent attempts. (For one thing, the difference in the definition of matter flux remains.) We must, however, point out that the situation is far from clear, since these works do not all agree with each other. For example, Parikh and Sarkar have pointed out issues with the Brustein and Hadad paper, whereas Padmanabhan has highlighted certain conceptual issues regarding the {\it interpretation} of both these papers. Specifically, Padmanabhan has stressed the subtleties in interpreting these results as a derivation of field equations from thermodynamics; the result, he argues, is better viewed as an interpretation of the field equations as a thermodynamic relation.
More importantly, Brustein and Hadad derive their result using a {\it different} form of the Noether potential (used to define entropy) than the other two papers; this calls for a more detailed and critical look at the analysis therein. Moreover, the fact that they still derive the same result implies non-uniqueness of the analysis (which a quick look at these papers will confirm), whereas in Jacobson's case, once the assumptions are stated, the analysis is unique. Hence, the status of these results in the light of the original Jacobson calculation remains unclear; the derivations are not only very different from Jacobson's, they are not even similar to each other! Perhaps the most important point indicating why these analyses are {\it conceptually} different from Jacobson's is $f(R)$ theory: whereas the above papers derive the field equations from a Clausius relation, Jacobson and collaborators needed to add extra terms to the Clausius relation in their earlier paper \cite{eling-fR} to proceed with the analysis. We hope further work will clarify these issues. \section{Implications} In this brief section, we would like to emphasize the need for the analysis done in this note. Whether Einstein equations are just equations of thermodynamics in disguise is a well-motivated question, and we do agree that the answer might be yes. The important point realized by Jacobson in \cite{jacobson-eq-of-state} while addressing this question was to introduce local Rindler frames in an arbitrary curved spacetime, and use the thermal aspects of the corresponding horizons as probes of the background curvature. However, one needs to impose certain restrictions to proceed from there, and to put the result in a physically relevant context. The necessity of highlighting such restrictions goes hand in hand with identifying specific geometric quantities with (variations of) the thermodynamic variables. In this sense, as we have shown, the expression for the change in entropy of a cross section of a static horizon is related to very specific components of the Riemann tensor (and is readily verified for known black hole solutions). Once we agree on these algebraic identifications, Einstein equations take the nice form of the first law of thermodynamics, provided one attributes to a $2$-surface an energy proportional to its intrinsic curvature. Therefore, Einstein equations resemble the first law of thermodynamics in this very specific form. Of course, we have also shown that using the Raychaudhuri equation yields, at a higher order (and after justifying ignoring certain terms), the null-null component of Einstein equations; the crucial point, however, is that the null-null component of Ricci itself has no clear meaning in terms of change of entropy. Looked at in this way, the null-null part of Einstein equations seems to be a secondary consequence of the first law itself. Of course, one can reinterpret Einstein equations as representing some sort of a Clausius relation; if one insists upon doing so, one must redefine the matter flux suitably. It is easy to see that such a definition of flux will involve the trace of the matter stress tensor, and will not be equivalent to the heat flux defined by Jacobson; see, for example, reference \cite{hayward}, which gives one such definition for dynamical horizons, but assuming spherical symmetry.
In fact, this can be easily demonstrated using our Eq.~(\ref{eq:riemm-area-change}): \begin{eqnarray} \left( \frac{\kappa}{2\pi} \right) \frac{\mathrm{d} ~}{\mathrm{d} \lambda} \left( \frac{1}{4} \sqrt{\sigma} \right) &=& \frac{1}{8 \pi} \sqrt{\sigma} \left( R_{ab}k^a\xi^b + \frac{1}{2} R_{\perp} \right) \nonumber \\ &\approx& \sqrt{\sigma} \left( T_{ab} - \frac{1}{2} T g_{ab} \right) k^a\xi^b \end{eqnarray} where $R_{\perp}$ is defined in Appendix \ref{app:riemm-area-change}. We have used the field equations in the second equality, and the approximation is obtained \textit{after ignoring the $R_{\perp}$ term}. Moreover, inverting the logic and deriving field equations in this latter case is again not straightforward. We hope to have clarified all the above issues in the present note. At a more conceptual level, one of the possible implications of this note is that it might be necessary to adopt a new starting point if one wants to establish Einstein theory in terms of thermodynamics, in which case the thermodynamic structure of the Einstein tensor would serve as the most important supporting evidence. This point of view, of course, also applies to the wider class of Lanczos-Lovelock Lagrangians for which similar results hold. A survey of some of the attempts that have been made along these lines can be found in \cite{paddy-newinsights, paddy-equip}. \section*{Acknowledgements} I thank T. Padmanabhan for discussions, comments, and careful reading of the manuscript. The author's research is funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada, and the Atlantic Association for Research in the Mathematical Sciences (AARMS). \vspace{0.2in}
\section{Introduction}\label{sec:1} \setcounter{section}{1} \setcounter{equation}{0}\setcounter{theorem}{0} Nowadays, non-local applied mathematical models based on the use of fractional derivatives in time and space are actively discussed \cite{baleanu2012fractional,eringen2002nonlocal,kilbas2006theory}. Many models, which are used in applied physics, biology, hydrology and finance, involve both sub-diffusion (fractional in time) and super-diffusion (fractional in space) operators. Super-diffusion problems are treated as evolutionary problems with a fractional power of an elliptic operator. For example, suppose that in a bounded domain $\Omega$ on the set of functions $u(\bm x) = 0, \ \bm x \in \partial \Omega$, there is defined the operator $\mathcal{A}$: $\mathcal{A} u = - \triangle u, \ \bm x \in \Omega$. We seek the solution of the Cauchy problem for the equation with a fractional power of an elliptic operator: \[ \frac{d u}{d t} + \mathcal{A}^\alpha u = f(t), \quad 0 < t \leq T, \] \[ u(0) = u_0, \] for the given $f(\bm x, t)$, $u_0(\bm x), \ \bm x \in \Omega$ with $0 < \alpha < 1$, using the notation $f(t) = f(\cdot,t)$. To solve numerically evolutionary equations of first order, as a rule, two-level difference schemes are used for approximation in time. The investigation of stability for such schemes in the corresponding finite-dimensional (after discretization in space) spaces is based on the general theory of operator-difference schemes \cite{Samarskii1989,SamarskiiMatusVabischevich2002}. In particular, the backward Euler scheme and the Crank-Nicolson scheme are unconditionally stable for a non-negative operator. As for one-dimensional problems for the space-fractional diffusion equation, an analysis of stability and convergence was conducted in \cite{jin2014error} using finite element approximation in space. A similar study for the Crank-Nicolson scheme was performed earlier in \cite{tadjeran2006second} using finite difference approximations in space. In discussing the numerical solution of multidimensional problems for the space-fractional diffusion equation, the emphasis is on spatial approximations. Many researchers (see, e.g., \cite{chen2013implicit,tadjeran2007second,yang2010numerical}) rely on finite difference approximations for problems with fractional derivatives in separate directions. Approximation of the fractional derivatives leads to a system of ordinary differential equations with a dense matrix. The solution of such problems requires high computational costs \cite{roop2006computational}. To solve problems with fractional powers of elliptic operators, we can apply finite volume and finite element methods oriented to using arbitrary domains and irregular computational grids \cite{KnabnerAngermann2003,QuarteroniValli1994}. The computational realization is associated with the implementation of the matrix function-vector multiplication, i.e., $\varPhi(A) b$. For example, considering the backward Euler scheme, we have $\varPhi(z) = (1 + \tau z^\alpha)^{-1}$, where $\tau$ is a time step. To evaluate $\varPhi(A) b$, different approaches \cite{higham2008functions} are available. Problems of using Krylov subspace methods with the Lanczos approximation when solving systems of linear equations associated with fractional elliptic equations are discussed in \cite{ilic2009numerical}.
A comparative analysis of the contour integral method, the extended Krylov subspace method, and the preassigned poles and interpolation nodes method for solving space-fractional reaction-diffusion equations is presented in \cite{burrage2012efficient}. The simplest variant is associated with the explicit construction of the solution using the known eigenvalues and eigenfunctions of the elliptic operator with diagonalization of the corresponding matrix \cite{bueno2012fourier,ilic2005numerical,ilic2006numerical}. Unfortunately, all these approaches demonstrate too high computational complexity for multidimensional problems. In the recent paper \cite{vabishchevich2014numerical}, we proposed a computational algorithm for solving an equation with fractional powers of elliptic operators on the basis of a transition to a pseudo-parabolic equation. For the auxiliary Cauchy problem, standard two-level schemes are applied. The computational algorithm is simple for practical use, robust, and applicable to solving a wide class of problems. A small number of time steps is required to find a solution. Here this computational algorithm for solving equations with fractional powers of operators is extended to transient problems. To solve the problem numerically, we construct a special two-level regularized difference scheme, which is unconditionally stable. The paper is organized as follows. The formulation of an unsteady problem containing a fractional power of an elliptic operator is given in Section 2. Finite element approximation in space is discussed in Section 3. In Section 4, we construct a regularized difference scheme and investigate its stability. The computational algorithm for solving the equation with a fractional power of an operator based on the Cauchy problem for a pseudo-parabolic equation is proposed in Section 5. The results of numerical experiments are described in Section 6. \section{Problem formulation}\label{sec:2} \setcounter{section}{2} \setcounter{equation}{0}\setcounter{theorem}{0} In a bounded polygonal domain $\Omega \subset R^m$, $m=1,2,3$ with the Lipschitz continuous boundary $\partial\Omega$, we seek the solution of a problem with a fractional power of an elliptic operator. Define the elliptic operator as \begin{equation}\label{2.1} \mathcal{A} u = - {\rm div} k({\bm x}) {\rm grad} \, u + c({\bm x}) u \end{equation} with coefficients $0 < k_1 \leq k({\bm x}) \leq k_2$, $c({\bm x}) \geq 0$. The operator $\mathcal{A}$ is defined on the set of functions $u({\bm x})$ that satisfy on the boundary $\partial\Omega$ the following conditions: \begin{equation}\label{2.2} k({\bm x}) \frac{\partial u }{\partial n } + \mu ({\bm x}) u = 0, \quad {\bm x} \in \partial \Omega , \end{equation} where $\mu ({\bm x}) \geq \mu_1 > 0, \ {\bm x} \in \partial \Omega$. In the Hilbert space $H = L_2(\Omega)$, we define the scalar product and norm in the standard way: \[ <u,v> = \int_{\Omega} u({\bm x}) v({\bm x}) d{\bm x}, \quad \|u\| = <u,u>^{1/2} . \] In the spectral problem \[ \mathcal{A} \varphi_k = \lambda_k \varphi_k, \quad \bm x \in \Omega , \] \[ k({\bm x}) \frac{\partial \varphi_k}{\partial n } + \mu ({\bm x}) \varphi_k = 0, \quad {\bm x} \in \partial \Omega , \] we have \[ \lambda_1 \leq \lambda_2 \leq ... , \] and the eigenfunctions $ \varphi_k, \ \|\varphi_k\| = 1, \ k = 1,2, ... $ form a basis in $L_2(\Omega)$. Therefore, \[ u = \sum_{k=1}^{\infty} (u,\varphi_k) \varphi_k .
\] Let the operator $\mathcal{A}$ be defined on the following domain: \[ D(\mathcal{A} ) = \{ u \ | \ u(\bm x) \in L_2(\Omega), \ \sum_{k=1}^{\infty} | (u,\varphi_k) |^2 \lambda_k < \infty \} . \] Under these conditions, $\mathcal{A} : L_2(\Omega) \rightarrow L_2(\Omega)$, and the operator $\mathcal{A}$ is self-adjoint and positive definite: \begin{equation}\label{2.3} \mathcal{A} = \mathcal{A} ^* \geq \delta I , \quad \delta > 0 , \end{equation} where $I$ is the identity operator in $H$. For $\delta$, we have $\delta = \lambda_1$. In applications, the value of $\lambda_1$ is unknown (the spectral problem must be solved). Therefore, we assume that $\delta \leq \lambda_1$ in (\ref{2.3}). Define the fractional power of the operator $\mathcal{A}$ as \[ \mathcal{A} ^\alpha u = \sum_{k=1}^{\infty} (u,\varphi_k) \lambda_k^\alpha \varphi_k . \] A more general and mathematically complete definition of fractional powers of elliptic operators is given in \cite{yagi2009abstract}. We seek the solution of the Cauchy problem for the evolutionary first-order equation with the fractional power of the operator $\mathcal{A}$. The solution $u(\bm x,t)$ satisfies the equation \begin{equation}\label{2.4} \frac{d u}{d t} + \mathcal{A}^\alpha u = f(t), \quad 0 < t \leq T, \end{equation} and the initial condition \begin{equation}\label{2.5} u(0) = u_0, \end{equation} under the restriction $0 < \alpha < 1$. \section{Discretization in space}\label{sec:3} \setcounter{section}{3} \setcounter{equation}{0}\setcounter{theorem}{0} To solve numerically the problem (\ref{2.4}), (\ref{2.5}), we employ finite element approximations in space \cite{brenner2008mathematical,Thomee2006}. For (\ref{2.1}) and (\ref{2.2}), we define the bilinear form \[ a(u,v) = \int_{\Omega } \left ( k \, {\rm grad} \, u \, {\rm grad} \, v + c \, u v \right ) d {\bm x} + \int_{\partial \Omega } \mu \, u v d {\bm x} . \] By (\ref{2.3}), we have \[ a(u,u) \geq \delta \|u\|^2 . \] Define a subspace of finite elements $V^h \subset H^1(\Omega)$. Let $\bm x_i, \ i = 1,2, ..., M_h$ be triangulation points for the domain $\Omega$. Define the pyramid functions $\chi_i(\bm x) \in V^h, \ i = 1,2, ..., M_h$, where \[ \chi_i(\bm x_j) = \left \{ \begin{array}{ll} 1, & \mathrm{if~} i = j, \\ 0, & \mathrm{if~} i \neq j . \end{array} \right . \] For $v \in V^h$, we have \[ v(\bm x) = \sum_{i=1}^{M_h} v_i \chi_i(\bm x), \] where $v_i = v(\bm x_i), \ i = 1,2, ..., M_h$. We define the discrete elliptic operator $A$ by \[ a(u,v) = \ <Au, v>, \quad \forall \ u,v \in V^h , \] where, similarly to (\ref{2.3}), \begin{equation}\label{3.1} A = A^* \geq \delta I , \quad \delta > 0 . \end{equation} For the problem (\ref{2.4}), (\ref{2.5}), we put into correspondence the operator equation for $w(t) \in V^h$: \begin{equation}\label{3.2} \frac{d w}{d t} + A^\alpha w = \psi(t), \quad 0 < t \leq T, \end{equation} \begin{equation}\label{3.3} w(0) = w_0, \end{equation} where $\psi(t) = P f(t)$, $w_0 = P u_0$, with $P$ denoting the $L_2$-projection onto $V^h$. Now we will obtain an elementary a priori estimate for the solution of (\ref{3.2}), (\ref{3.3}), assuming that the solution of the problem, the coefficients of the elliptic operator, the right-hand side and the initial condition are sufficiently smooth. Let us multiply equation (\ref{3.2}) by $w$ and integrate it over the domain $\Omega$: \[ \left <\frac{d w}{d t}, w \right > + < A^\alpha w, w > \, = \, < \psi, w > .
\] In view of the self-adjointness and positive definiteness of the operator $A^\alpha$, the right-hand side can be estimated using the inequality \[ < \psi, w > \, \leq \, <A^\alpha w, w > + \frac{1}{4} <A^{-\alpha} \psi, \psi > . \] By virtue of this, we have \[ \frac{d}{d t} \|w\|^2 \leq \frac{1}{2} \|\psi \|^2_{A^{-\alpha}} , \] where $\|\psi \|_{A^{-\alpha}} = <A^{-\alpha} \psi, \psi >^{1/2}$. The latter inequality leads us to the desired a priori estimate: \begin{equation}\label{3.4} \|w(t)\|^2 \leq \|w_0\|^2 + \frac{1}{2} \int_{0}^{t}\|\psi(\theta) \|^2_{A^{-\alpha}} d \theta . \end{equation} Taking into account (\ref{3.1}), the estimate (\ref{3.4}) can be simplified: \begin{equation}\label{3.5} \|w(t)\|^2 \leq \|w_0\|^2 + \frac{1}{2} \delta^{-\alpha} \int_{0}^{t}\|\psi(\theta) \|^2 d \theta . \end{equation} We will rely on the estimates (\ref{3.4}), (\ref{3.5}) for the stability of the solution with respect to the initial data and the right-hand side when constructing discrete analogs of the problem (\ref{3.2}), (\ref{3.3}). \section{Regularized scheme}\label{sec:4} \setcounter{section}{4} \setcounter{equation}{0}\setcounter{theorem}{0} To solve numerically the problem (\ref{3.2}), (\ref{3.3}), we use two-level schemes. Let $\tau$ be a step of a uniform grid in time such that $w^n = w(t^n), \ t^n = n \tau$, $n = 0,1, ..., N, \ N\tau = T$. It seems reasonable to begin with the simplest explicit scheme \begin{equation}\label{4.1} \frac{w^{n+1} - w^{n}}{\tau } + A^\alpha w^{n} = \psi^{n}, \quad n = 0,1, ..., N-1, \end{equation} \begin{equation}\label{4.2} w^0 = w_0 . \end{equation} The advantages and disadvantages of explicit schemes for the standard parabolic problem ($\alpha = 1$) are well known, namely, a simple computational implementation and a time step restriction (see, e.g., \cite{Samarskii1989,SamarskiiMatusVabischevich2002}). In our case ($\alpha \neq 1$), the main drawback (conditional stability) remains, whereas the advantage of implementation simplicity is lost. The approximate solution at a new time level is determined via (\ref{4.1}) as \begin{equation}\label{4.3} w^{n+1} = w^{n} - \tau A^\alpha w^{n} + \tau \psi^{n} . \end{equation} Thus, we must calculate $A^\alpha w^{n}$, which is itself a nontrivial task. In view of this, considering the scheme (\ref{4.1}), it is more accurate to speak of a scheme with explicit approximation in time, in contrast to the standard fully explicit scheme. Let us approximate equation (\ref{3.2}) by the backward Euler scheme: \begin{equation}\label{4.4} \frac{w^{n+1} - w^{n}}{\tau } + A^\alpha w^{n+1} = \psi^{n+1}, \quad n = 0,1, ..., N-1. \end{equation} The main advantage of the implicit scheme (\ref{4.4}) in comparison with (\ref{4.1}) is its absolute stability. Let us derive for this scheme the corresponding stability estimate. Multiplying equation (\ref{4.4}) scalarly by $\tau w^{n+1}$, we obtain \begin{equation}\label{4.5} \begin{split} < w^{n+1}, w^{n+1}> \ & + \ \tau < A^\alpha w^{n+1}, w^{n+1}> \ \\ & = \ < w^{n}, w^{n+1}> + \tau < \psi^{n+1}, w^{n+1}> . \end{split} \end{equation} The terms on the right-hand side of (\ref{4.5}) are estimated using the inequalities: \[ < w^{n}, w^{n+1}> \ \leq \frac{1}{2} < w^{n+1}, w^{n+1}> + \frac{1}{2} < w^{n}, w^{n}> , \] \[ < \psi^{n+1}, w^{n+1}> \ \leq \ < A^\alpha w^{n+1}, w^{n+1}> + \frac{1}{4} < A^{-\alpha} \psi^{n+1}, \psi^{n+1}> .
\] The substitution into (\ref{4.5}) leads to the following level-wise estimate: \[ \|w^{n+1}\|^2 \leq \|w^{n}\|^2 + \frac{1}{2} \tau \|\psi^{n+1}\|^2_{A^{-\alpha}} . \] This implies the desired stability estimate: \begin{equation}\label{4.6} \|w^{n+1}\|^2 \leq \|w_0\|^2 + \frac{1}{2} \sum_{k=0}^{n}\tau \|\psi^{k+1}\|^2_{A^{-\alpha}} , \end{equation} which is a discrete analog of the estimate (\ref{3.4}). Similarly to (\ref{3.5}), in view of (\ref{3.1}), from (\ref{4.6}) we get \begin{equation}\label{4.7} \|w^{n+1}\|^2 \leq \|w_0\|^2 + \frac{1}{2} \delta^{-\alpha} \sum_{k=0}^{n}\tau \|\psi^{k+1}\|^2 . \end{equation} To obtain the solution at the new time level, it is necessary to solve the problem \[ (I + \tau A^\alpha) w^{n+1} = w^{n} + \tau \psi^{n+1} . \] In our case, we must calculate the values of $\varPhi(A) b$ for $\varPhi(z) = (1+ \tau z^\alpha)^{-1}$. A more complicated situation arises in the implementation of the Crank-Nicolson scheme: \[ \frac{w^{n+1} - w^{n}}{\tau } + A^\alpha \frac{w^{n+1} + w^{n}}{2} = \psi^{n+1/2}, \quad n = 0,1, ..., N-1. \] In this case, we have \[ \left (I + \frac{\tau}{2} A^\alpha \right ) w^{n+1} = w^{n} - \frac{\tau}{2} A^\alpha w^{n} + \tau \psi^{n+1/2} , \] i.e., we need to evaluate both $\varPhi(z) = (1+ 0.5 \tau z^\alpha)^{-1}$ and $\varPhi(z) = z^\alpha$. The numerical implementation of the above-mentioned approximations in time for the standard parabolic problem ($\alpha = 1$ in (\ref{3.2})) is based on calculating the values of $\varPhi(A) b$ for $\varPhi(z) = (1+ \sigma \tau z)^{-1}, \ \sigma =0.5, 1$ and $\varPhi(z) = z$. For problems with fractional powers of elliptic operators, we apply the approach proposed earlier in our paper \cite{vabishchevich2014numerical}. It is based on the computation of $\varPhi(A) b$ for $\varPhi(z) = z^{-\beta}, \ 0 < \beta < 1$. For the explicit approximation in time, we rewrite (\ref{4.3}) in the form \[ w^{n+1} = w^{n} - \tau A A^{-\beta} w^{n} + \tau \psi^{n} , \quad \beta = 1 - \alpha . \] Therefore, the computational implementation is based on the evaluation of $\varPhi(A) b$ for $\varPhi(z) = z^{-\beta}$ and $\varPhi(z) = z$. A similar approach is not valid for the backward Euler scheme (\ref{4.2}), (\ref{4.4}), and all the more so for the Crank-Nicolson scheme. To construct approximations in time that are more appropriate from a computational point of view for the Cauchy problem (\ref{3.2}), (\ref{3.3}), we apply the principle of regularization for operator-difference schemes proposed by A.A. Samarskii \cite{Samarskii1989}. For a regularizing operator $R = R^* > 0$, the simplest regularized scheme for solving (\ref{3.2}), (\ref{3.3}) has the form (see, e.g., \cite{Vabishchevich2014}): \begin{equation}\label{4.8} (I + \tau R) \frac{w^{n+1} - w^{n}}{\tau } + A^\alpha w^{n} = \psi^{n+1}, \quad n = 0,1, ..., N-1. \end{equation} Now we will derive the stability conditions for the regularized scheme (\ref{4.2}), (\ref{4.8}), and after that we will select the appropriate regularizing operator $R$ itself. Rewrite equation (\ref{4.8}) in the form \[ \left (I + \tau \left ( R - \frac{1}{2}A^\alpha \right ) \right ) \frac{w^{n+1} - w^{n}}{\tau } + A^\alpha \frac{w^{n+1} + w^{n}}{2} = \psi^{n+1} . \] Multiplying it scalarly by $\tau (w^{n+1} + w^{n})$, we get \[ \begin{split} < D w^{n+1}, w^{n+1}> - < D w^{n}, w^{n}> & + \frac{\tau }{2} <A^\alpha (w^{n+1} + w^{n}), w^{n+1} + w^{n} > \\ & = \tau <\psi^{n+1}, w^{n+1} + w^{n} > , \end{split} \] where \begin{equation}\label{4.9} D = I + \tau \left ( R - \frac{1}{2}A^\alpha \right ) .
\end{equation} For \begin{equation}\label{4.10} R \geq \frac{1}{2}A^\alpha \end{equation} we have $D = D^* \geq I$, and $D = I + \mathit{O}(\tau)$. Under these conditions, we obtain the inequality \[ \|w^{n+1}\|_D^2 \leq \|w^{n}\|_D^2 + \frac{1}{2} \tau \|\psi^{n+1}\|^2_{A^{-\alpha}} . \] Thus, for the regularized difference scheme (\ref{4.2}), (\ref{4.8}), under the condition (\ref{4.10}), the following estimate for stability with respect to the initial data and the right-hand side holds: \begin{equation}\label{4.11} \|w^{n+1}\|_D^2 \leq \|w_0\|_D^2 + \frac{1}{2} \sum_{k=0}^{n}\tau \|\psi^{k+1}\|^2_{A^{-\alpha}} . \end{equation} To select an appropriate regularizing operator $R$, we should take into account two conditions: first, to satisfy the inequality (\ref{4.10}), and secondly, to simplify the calculations. Our choice is based on the inequality \begin{equation}\label{4.12} A^\alpha \leq \alpha A + (1-\alpha) I , \end{equation} which is the simplest version of Young's inequality for positive operators (see, e.g., \cite{FangDu}). In the scheme (\ref{4.8}), we put \begin{equation}\label{4.13} R = \sigma (\alpha A + (1-\alpha) I) . \end{equation} For $\sigma \geq 0.5$, in view of (\ref{4.12}), the inequality (\ref{4.10}) holds. The result of our analysis is the following statement. \begin{theorem}\label{Th1} The regularized scheme (\ref{4.2}), (\ref{4.8}) with the regularizer $R$ selected according to (\ref{4.13}) is unconditionally stable for $\sigma \geq 0.5$. The approximate solution satisfies the a priori estimate (\ref{4.9}), (\ref{4.11}). \end{theorem} The transition to a new time level is performed via the formula \[ \begin{split} (I + \sigma \tau (\alpha A + & (1-\alpha) I)) w^{n+1} = (I + \sigma \tau (\alpha A + (1-\alpha) I)) w^{n} \\ & - \tau A A^{-\beta} w^{n} + \tau \psi^{n+1} , \quad \beta = 1 - \alpha . \end{split} \] Therefore, it is necessary to calculate the values $\varPhi(A) b$ for $\varPhi(z) = (1+ \sigma \tau (\alpha z + 1 - \alpha))^{-1}$ and $\varPhi(z) = z^{-\beta}, \ 0 < \beta < 1$. \section{Calculation of the operator with the fractional power}\label{sec:5} \setcounter{section}{5} \setcounter{equation}{0}\setcounter{theorem}{0} The main peculiarity of solving the Cauchy problem (\ref{3.2}), (\ref{3.3}) is the necessity to evaluate the values \[ g^n = A^{-\beta} w^{n}, \quad n = 0,1, ..., N-1, \quad 0 < \beta = 1 - \alpha < 1 . \] The computational algorithm is based on the consideration of an auxiliary Cauchy problem \cite{vabishchevich2014numerical}. Assume that \[ y(s) = \delta^{\beta} (s (A - \delta I) + \delta I)^{-\beta} y(0) ; \] then for the determination of $g^n$, we can put \begin{equation}\label{5.1} g^n = y(1), \quad y(0) = \delta^{-\beta} w^{n}. \end{equation} The function $y(s)$ satisfies the evolutionary equation \begin{equation}\label{5.2} (s G + \delta I) \frac{d y}{d s} + \beta G y = 0 , \end{equation} where $G = A - \delta I \geq 0$. Thus, the calculation of $A^{-\beta} w^{n}$ is based on the solution of the Cauchy problem (\ref{5.1}), (\ref{5.2}) on the unit interval for the pseudo-parabolic equation. To solve numerically the problem (\ref{5.1}), (\ref{5.2}), we use the simplest two-level schemes. Let $\eta$ be a step of a uniform grid in pseudo-time such that $y_k = y(s_k), \ s_k = k \eta$, $k = 0,1, ..., K, \ K\eta = 1$.
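Before discretizing, it is instructive to verify the above solution formula; since $A$ is self-adjoint, it suffices to work on a single eigenvector $A \varphi = \lambda \varphi$, on which (\ref{5.2}) reduces to a scalar ODE. The following Python/sympy fragment (an illustrative check, not part of the algorithm) confirms that the formula solves (\ref{5.2}) and yields $g^n = \lambda^{-\beta} w^n$ componentwise:
\begin{verbatim}
import sympy as sp

s, lam, delta, beta = sp.symbols('s lambda delta beta', positive=True)

# On an eigenvector A v = lam v, the claimed solution with y(0) = delta**(-beta):
y = delta**beta * (s*(lam - delta) + delta)**(-beta) * delta**(-beta)

residual = (s*(lam - delta) + delta) * sp.diff(y, s) + beta*(lam - delta)*y
print(sp.simplify(residual))        # 0 : y solves the pseudo-parabolic equation (5.2)
print(sp.simplify(y.subs(s, 1)))    # lambda**(-beta) : hence g^n = y(1) = A^{-beta} w^n
\end{verbatim}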
Let us approximate equation (\ref{5.2}) by the backward Euler scheme \begin{equation}\label{5.3} (s_{k+1} G + \delta I) \frac{ y_{k+1} - y_{k}}{\eta } + \beta G y_{k+1} = 0, \quad k = 0,1, ..., K-1, \end{equation} \begin{equation}\label{5.4} y_0 = \delta^{-\beta } w^{n} . \end{equation} For the Crank-Nicolson scheme, we have \begin{equation}\label{5.5} (s_{k+1/2} G + \delta I) \frac{ y_{k+1} - y_{k}}{\eta } + \beta G \frac{ y_{k+1} + y_{k}}{2} = 0, \quad k = 0,1, ..., K-1 . \end{equation} The difference scheme (\ref{5.4}), (\ref{5.5}) approximates the problem (\ref{5.1}), (\ref{5.2}) with second order in $\eta$, whereas for the scheme (\ref{5.3}), (\ref{5.4}) we have only first order. The above two-level schemes are unconditionally stable. The corresponding level-wise estimate has the form \begin{equation}\label{5.6} \|y_{k+1}\| \leq \|y_{k}\| , \quad k = 0,1, ..., K-1 . \end{equation} To prove (\ref{5.6}) (see \cite{vabishchevich2014numerical}), it is sufficient to multiply scalarly equation (\ref{5.3}) by $y_{k+1}$ and equation (\ref{5.5}) by $y_{k+1}+y_{k}$. Taking into account (\ref{5.4}), from (\ref{5.6}) we obtain \begin{equation}\label{5.7} \|y_K\| \leq \delta^{-\beta } \|w^{n}\| . \end{equation} The solution of the Cauchy problem (\ref{5.3}), (\ref{5.4}) (or (\ref{5.4}), (\ref{5.5})) may be written in the form \begin{equation}\label{5.8} y_K = Q_K(A) w^{n} . \end{equation} For instance, for the backward Euler scheme (\ref{5.3}), (\ref{5.4}), we have \begin{equation}\label{5.9} Q_K(A) = \prod_{k = 1}^{K} (k \eta G + \delta I + \eta \beta G)^{-1} (k \eta G + \delta I) . \end{equation} As for the Crank-Nicolson scheme (\ref{5.4}), (\ref{5.5}), we obtain the representation \begin{equation}\label{5.10} Q_K(A) = \prod_{k = 1}^{K} ((k - 1/2) \eta G + \delta I + 0.5 \eta \beta G)^{-1} ((k - 1/2) \eta G + \delta I - 0.5 \eta \beta G) . \end{equation} The computational implementation of the regularized scheme (\ref{4.2}), (\ref{4.8}), (\ref{4.13}) based on solving the auxiliary evolutionary problem (\ref{5.1}), (\ref{5.2}) corresponds to the new scheme \begin{equation}\label{5.11} (I + \tau \sigma (\alpha A + (1-\alpha) I)) \frac{w^{n+1} - w^{n}}{\tau } + A Q_K(A) w^{n} = \psi^{n+1} . \end{equation} \begin{theorem}\label{Th2} The scheme (\ref{4.2}), (\ref{5.9}), (\ref{5.11}) is unconditionally stable for \begin{equation}\label{5.12} \sigma \geq \frac{1}{2} + \frac{1-\alpha }{2\alpha \delta } . \end{equation} Moreover, the solution satisfies the stability estimate \begin{equation}\label{5.13} \|w^{n+1}\|_D \leq \|w_0\|_D + \sum_{k=0}^{n}\tau \|\psi^{k+1}\|_{D^{-1}} , \end{equation} where $D \geq I$ and \begin{equation}\label{5.14} D = I + \tau \left (\sigma (\alpha A + (1-\alpha) I) - \frac{1}{2}A Q_K(A) \right ) . \end{equation} \end{theorem} \proof First, we will show that under the above restriction (\ref{5.12}) on $\sigma$, the operator $D$ satisfies $D \geq I$. According to (\ref{5.7}), for (\ref{5.9}) we have \begin{equation}\label{5.15} 0 < Q_K(A) \leq \delta^{-\beta } I . \end{equation} By (\ref{3.1}), (\ref{5.15}) and Young's inequality, we obtain \[ \begin{split} \sigma (\alpha A + (1-\alpha) I) & - \frac{1}{2}A Q_K(A) \geq \sigma (\alpha A + (1-\alpha) I) - \frac{1}{2} \delta^{-\beta } A \\ & \geq (2 \delta)^{-1} (2 \sigma \alpha \delta - \delta^{\alpha}) A \\ & \geq (2 \delta)^{-1} ((2 \sigma - 1) \alpha \delta - (1-\alpha)) A \geq 0 . \end{split} \] Next, rewrite the scheme (\ref{5.11}) in the form \[ D \frac{w^{n+1} - w^{n}}{\tau } + A Q_K(A) \frac{w^{n+1} + w^{n}}{2} = \psi^{n+1} .
\] Multiplying this equation scalarly by $\tau (w^{n+1} + w^{n})$, in view of the left inequality in (\ref{5.15}), we arrive at \[ \|w^{n+1}\|_D^2 - \|w^{n}\|_D^2 \leq \tau <\psi^{n+1}, w^{n+1} + w^{n} > . \] Taking into account \[ \|w^{n+1}\|_D^2 - \|w^{n}\|_D^2 = (\|w^{n+1}\|_D - \|w^{n}\|_D) (\|w^{n+1}\|_D + \|w^{n}\|_D), \] \[ <\psi^{n+1}, w^{n+1} + w^{n} > \ \leq \|\psi^{n+1}\|_{D^{-1}} (\|w^{n+1}\|_D + \|w^{n}\|_D), \] we get the estimate \[ \|w^{n+1}\|_D \leq \|w^{n}\|_D + \tau \|\psi^{n+1}\|_{D^{-1}} . \] These inequalities prove the estimate (\ref{5.13}), (\ref{5.14}) for the stability of the difference scheme with respect to the initial data and the right-hand side. \hfill$\Box$ Thus, the stability of the difference scheme with an approximate calculation of $A^{-\beta} w^{n}$ via the backward Euler scheme (\ref{5.3}), (\ref{5.4}) is proved under stronger restrictions on the weight parameter $\sigma$. In the original regularized scheme (see Theorem~\ref{Th1}), it was enough to take $\sigma \geq 0.5$, whereas here we have (\ref{5.12}). As for the Crank-Nicolson scheme (\ref{5.4}), (\ref{5.5}) for the approximate evaluation of $A^{-\beta} w^{n}$ in the scheme (\ref{5.11}), the operator $Q_K(A)$ is determined according to (\ref{5.10}). This operator is no longer positive, i.e., instead of (\ref{5.15}), we have only the bilateral inequality \[ - \delta^{-\beta } I \leq Q_K(A) \leq \delta^{-\beta } I . \] In this case, it is not possible to establish the unconditional stability of the scheme (\ref{4.2}), (\ref{5.11}) with $Q_K(A)$ given by (\ref{5.10}). \section{Numerical experiments}\label{sec:6} \setcounter{section}{6} \setcounter{equation}{0}\setcounter{theorem}{0} The capabilities of the proposed method are illustrated by solving a model two-dimensional problem. The computational domain is shown in Fig.~\ref{f-1}. Triangulation is performed to discretize this domain. Calculations are performed using the coarse (grid 1: 198 nodes, 315 triangles), medium (see Fig.~\ref{f-2}) and fine (grid 3: 2470 nodes, 4631 triangles) grids. The unsteady problem (\ref{2.4}), (\ref{2.5}) is considered for the elliptic operator (\ref{2.1}), (\ref{2.2}) with constant coefficients: \[ k({\bm x}) = 1, \quad c({\bm x}) = 0, \quad \mu ({\bm x}) = \mu. \] The right-hand side and the initial condition are given as \begin{equation}\label{6.1} f(\bm x, t) = \frac{2}{1 + \exp(\gamma(x_1-x_2))} , \quad u_0(\bm x) = 0 . \end{equation} For $\gamma \rightarrow \infty$, the right-hand side becomes discontinuous. Finite element approximations lead to the Cauchy problem (\ref{3.2}), (\ref{3.3}). \begin{figure}[htp] \begin{center} \begin{tikzpicture}[scale = 0.6] \draw [ultra thick, fill=gray!20] (0,0) -- (0,10) -- (2.5,7.5) arc [radius=3.55, start angle=135, end angle= 315] -- (7.5,2.5) -- (10,0) -- (0,0); \draw(-1,-1) node {$(0,0)$}; \draw(11,-1) node {$(1,0)$}; \draw(4,7) node {$(0.25,0.75)$}; \draw(-1,11) node {$(0,1)$}; \draw(7.5,3) node {$(0.75,0.25)$}; \end{tikzpicture} \caption{Computational domain $\Omega$} \label{f-1} \end{center} \end{figure} \begin{figure}[!h] \begin{center} \includegraphics[scale=0.2] {f-2.png} \caption{Medium grid 2: 679 nodes, 1201 triangles} \label{f-2} \end{center} \end{figure} \clearpage To estimate the constant $\delta$ in (\ref{3.1}), we solve the spectral problem \begin{equation}\label{6.2} a(v,v) = \lambda <v,v>, \quad v \in V^h, \end{equation} where $\delta = \lambda_{1}$.
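To make the construction of Sections 4 and 5 concrete, we include a minimal NumPy sketch. It assumes, purely for illustration, a one-dimensional finite difference Dirichlet Laplacian in place of $A$, $\beta = 0.5$, and $K = 100$; it computes $A^{-\beta} w$ by the backward Euler scheme (\ref{5.3}), (\ref{5.4}) and compares the result with the spectral definition. The relative error decreases at first order in $\eta$, in agreement with the above discussion.
\begin{verbatim}
import numpy as np

# Stand-in for A: 1-D finite difference Dirichlet Laplacian on (0, 1)
M = 199
h = 1.0 / (M + 1)
A = (2.0*np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)) / h**2

beta = 0.5                               # beta = 1 - alpha
lam, V = np.linalg.eigh(A)               # small 1-D test: the full spectrum is affordable
delta = lam[0]                           # here we may take delta = lambda_1 exactly

xg = np.linspace(h, 1.0 - h, M)
w = xg * (1.0 - xg)                      # sample grid function w^n
ref = V @ (lam**(-beta) * (V.T @ w))     # exact A^{-beta} w via the spectral definition

# Backward Euler scheme (5.3), (5.4) for the pseudo-parabolic problem (5.1), (5.2)
K = 100
eta = 1.0 / K
G = A - delta*np.eye(M)
y = delta**(-beta) * w                   # y_0 = delta^{-beta} w^n
for k in range(1, K + 1):
    sk = k * eta
    y = np.linalg.solve(sk*G + delta*np.eye(M) + eta*beta*G,
                        (sk*G + delta*np.eye(M)) @ y)

print(np.linalg.norm(y - ref) / np.linalg.norm(ref))   # relative error, O(eta)
\end{verbatim}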
When choosing piecewise linear finite elements ($V^h \subset H^1(\Omega)$), the corresponding values of the constant $\delta$ for the above-mentioned computational grids and $\mu = 1, 10, 100$ are presented in Table~\ref{tbl-1}. The results show that for the evaluation of $\delta$, we can use the solution of the partial eigenvalue problem obtained on the coarse grid. If we use standard algorithms of inverse iteration \cite{bjorck2015numerical,golub2012matrix}, then the computational costs are not significant.

\begin{table}[!h]
\caption{Constant $\lambda_{1}$}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Grid & \multicolumn{3}{c|}{ $\mu$ } \\
\cline{2-4}
 & 1 & 10 & 100 \\
\hline
1 & 11.413872661 & 73.277955924 & 167.20923186 \\
2 & 11.422023432 & 72.928682512 & 164.36008412 \\
3 & 11.424101908 & 72.839189190 & 163.65578746 \\
\hline
\end{tabular}
\end{center}
\label{tbl-1}
\end{table}

Here we present some numerical results for the stationary problem
\[
 A^\beta y = P f.
\]
It is precisely this problem that is solved at each time level in the regularization scheme (\ref{4.2}), (\ref{4.8}). First, we consider the problem with the constant right-hand side ($\gamma = 0$ in (\ref{6.1})) and $\beta = 0.5$. Calculations are performed on grid 2 with $\mu =10$. The most interesting fact for this problem is the dependence of the solution on the time step. Figure~\ref{f-3} presents the dependence of the maximum (over the entire computational domain) value of the approximate solution on the time step for the backward Euler scheme (\ref{5.3}). In this case, we used $\delta = \lambda_1 \approx 72.928682512$. Figure~\ref{f-4} shows that the parameter $\delta$ has practically no influence on the solution. This calculation was performed with $\delta = 50$. The convergence of the approximate solution with first order in time is observed in these figures. Similar data are depicted in Figs.~\ref{f-5},~\ref{f-6} for the symmetric scheme, i.e., the Crank-Nicolson scheme (\ref{5.5}). Here we see much more rapid convergence, and so we can obtain results of acceptable accuracy using fairly coarse time grids. The approximate solution itself is given in Fig.~\ref{f-7}.

\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.7\linewidth] {f-3.png}
\caption{Dynamics of the solution for the Euler scheme ($\delta = \lambda_1$)}
\label{f-3}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.7\linewidth] {f-4.png}
\caption{Dynamics of the solution for the Euler scheme ($\delta = 50$)}
\label{f-4}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.7\linewidth] {f-5.png}
\caption{Dynamics of the solution for the Crank-Nicolson scheme ($\delta = \lambda_1$)}
\label{f-5}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.7\linewidth] {f-6.png}
\caption{Dynamics of the solution for the Crank-Nicolson scheme ($\delta = 50$)}
\label{f-6}
\end{center}
\end{figure}

The effect of the right-hand side is illustrated by calculations with various values of the parameter $\gamma$ in (\ref{6.1}). For $\gamma = 100$, the right-hand side has the form shown in Fig.~\ref{f-8}. Figure~\ref{f-9} shows the approximate solution. The dynamics of the maximum value of the solution is presented in Fig.~\ref{f-10}. Thus, the calculations demonstrate the high accuracy of the computational algorithm for solving the equation with fractional powers of elliptic operators via the Crank-Nicolson scheme.
Moreover, they show a weak dependence of the accuracy of the approximate solution on the parameter $\delta$ from (\ref{3.1}), as well as on the smoothness of the right-hand side.
\clearpage

\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.6\linewidth] {f-7.png}
\caption{Solution on grid 2 ($y_{max} = 0.141663$)}
\label{f-7}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.6\linewidth] {f-8.png}
\caption{The right-hand side for $\gamma = 100$}
\label{f-8}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.6\linewidth] {f-9.png}
\caption{Solution on grid 2 for $\gamma = 100$}
\label{f-9}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.7\linewidth] {f-10.png}
\caption{The Crank-Nicolson scheme ($\delta = 50$, $\gamma = 100$)}
\label{f-10}
\end{center}
\end{figure}

Now we discuss the numerical results for the unsteady problem with $\gamma = 100$ in (\ref{6.1}) and $\sigma = 0.5$. The approach to the steady-state solution for $\alpha = 0.5$ is shown in Fig.~\ref{f-11}. We observe the convergence of the approximate solution with first order in $\tau$. The calculations were performed using the Crank-Nicolson scheme (\ref{5.4}), (\ref{5.5}) with $K=10$. The condition $\sigma \geq 0.5$ is sufficient for the unconditional stability of the regularized scheme (\ref{4.2}), (\ref{4.8}), (\ref{4.13}). The truncation error increases when the value of the parameter $\sigma$ becomes higher. The data depicted in Fig.~\ref{f-12} demonstrate the effect of $\sigma$. The problem is solved on the time grid with $N = 40$, and for comparison, the data predicted on the fine grid $N = 500$ with $\sigma = 0.5$ are shown, too. Note that in this example, instability takes place only if $\sigma < \sigma^* \approx 0.01$. Thus, the sufficient condition for stability seems to be essentially exaggerated. In problems that are closer to the standard problems of unsteady diffusion ($\alpha \rightarrow 1$), the restrictions on $\sigma$ seem to be close to optimal. For example, Figure~\ref{f-13} presents the calculations with $\alpha = 0.95$ and $\sigma = 0.5$. In this case, the accuracy is much higher, and $\sigma^* \approx 0.35$ for the grid with $N = 40$, $T = 0.1$.

\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.7\linewidth] {f-11.png}
\caption{Unsteady problem solution}
\label{f-11}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.7\linewidth] {f-12.png}
\caption{Predictions with various $\sigma$}
\label{f-12}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.7\linewidth] {f-13.png}
\caption{Solution of the unsteady problem for $\alpha = 0.95$}
\label{f-13}
\end{center}
\end{figure}

\section*{Acknowledgements}

This work was supported by the Russian Foundation for Basic Research (project 14-01-00785).
\clearpage
\section{Introduction}\label{sec1}

The strong unique continuation property (SUCP) for second order elliptic equations with smooth coefficients is well-known. It asserts that a solution vanishing to infinite order at a point must vanish entirely; on the other hand, the weak unique continuation property (WUCP) asserts that a solution vanishing on an open subset must vanish entirely. Clearly the SUCP implies the WUCP. There are a few known approaches: Carleman estimates (see \cite{KT07} for a survey) and the frequency function method (see \cite{GL} for this approach). It is not difficult to see from this that elliptic systems with diagonal principal part also satisfy the SUCP (see e.g. \cite{cek1}). One reason to be interested in this property is that the zero sets of such solutions have a suitable structure: they are countably $(n-1)$-rectifiable \cite{bar}, i.e. covered by a countable union of codimension one smooth submanifolds.

Consider a domain $\Omega \subset \mathbb{R}^n$ equipped with a uniformly positive definite $n \times n$ matrix function $a^{ij}$. Suppose $F: \Omega \to \mathbb{C}^{m \times m}$ is a solution to
\begin{equation}\label{eqn}
PF(x) = -\partial_i\big(a^{ij}\partial_jF\big)(x) + L(x, F(x), dF(x)) = 0
\end{equation}
for $x \in \Omega$, where $L$ is a smooth matrix function, linear in the entries of $F$ and $dF$. We will sometimes write $(\Omega, g) \subset \mathbb{R}^n$ when $a^{ij}$ comes from a Riemannian metric $g$ on $\Omega$, so that $a^{ij} = \sqrt{|g|}\, g^{ij}$ represents the Laplace-Beltrami operator, where $|g| = \det g$. We address the question:
\begin{que}\label{mainque}
Does the SUCP hold for $\det F$, where $F$ satisfies \eqref{eqn}? If not, does the WUCP hold?
\end{que}
Here are a few starting remarks -- firstly, in \cite{cek1} we notice that if $g$ is analytic and so are the coefficients of $L$, then by the classical theory so are the entries of $F$ and consequently so is $\det F$, and the SUCP holds. Secondly, the obvious approach to produce an elliptic equation that $\det F$ satisfies does not seem to work (if we compute $\Delta_g \det F$ we obtain a function of $F$ and $dF$).

Some further motivation is in order. Besides this problem being a natural one to consider when studying systems, the author is motivated by the case of the connection Laplacian $P = d_A^*d_A$, where $d_A = d + A$ is a covariant derivative, $A$ is the $m \times m$ connection matrix of $1$-forms and $d_A^*$ is the formal adjoint of $d_A$ in the natural inner products. Indeed, this problem appeared to be one of the crucial ones in the study of the inverse problem of Calder\'on for Yang-Mills connections \cite{cek1} -- there, the gauge relating two connections $A$ and $B$ which have the same local Dirichlet-to-Neumann map was shown to be $H = FG^{-1}$, where $d_A^*d_AF = d_B^*d_B G = 0$, if $m = 1$, or for any $m$ if the metric is analytic. The tactic is to use unique continuation near the boundary and to analyse the zero set of $\det G$ to further extend $H$ smoothly inside the manifold. So the unique continuation property for $\det G$ for $m > 1$ comes into focus.

We propose a few approaches to this problem. In 2D we may use a set of special coordinates which reduces us to the case of a special matrix $a^{ij}$; then, by quotienting out one entry, we are further reduced to the case where one of the entries is equal to $1$. Another simple technique is to compare the leading order Taylor coefficients of the entries, which we employ in the case $m = 2$.
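To illustrate the second remark above in the simplest case $m = 2$ with the Euclidean Laplacian (a sketch of the computation, not needed in the sequel): writing $\det F = f_{11}f_{22} - f_{12}f_{21}$ and using the product rule $\Delta(fg) = f\Delta g + g\Delta f + 2\nabla f \cdot \nabla g$, if the entries satisfy $\Delta f_{ij} = 0$ we obtain
\[
\Delta(\det F) = 2\big(\nabla f_{11}\cdot\nabla f_{22} - \nabla f_{12}\cdot\nabla f_{21}\big),
\]
a genuinely first order expression in $dF$ which is not controlled by $\det F$ itself; this is the obstruction to running the scalar theory directly.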
We also give several negative results for non-diagonal systems and some for diagonal systems; most of them are based on the simple observation that a PDE can be viewed as an equation for its coefficients.

Unless otherwise stated, the coefficients of the equations and the solutions are assumed to be in $C^\infty$. In the following Theorem, we summarise the positive and negative results for the SUCP for $\det F$ that are proven in this paper. As far as I know, this is the first time this problem has been considered, and so the results are new in this sense.

\begin{theorem}\label{mainthm'}
The following table summarises the answers to Question \ref{mainque} that were proved in this paper for varying operator $P$, $m$ and $n$, and includes some open cases:
\begin{center}
\begin{tabular}{ l | l | l | p{6cm} }
 Operator $P =$ & $m$ & $n$ & SUCP: Yes, No or Unknown? \\
\hline
 $\frac{d^2}{dt^2}\times Id + \begin{pmatrix} 0 & 0\\ -\frac{d^2c}{dt^2} \frac{d}{dt} & 0 \end{pmatrix}$ & $m \geq 2$ & $n = 1$ & No (counterexample to WUCP).\\
\hline
 $\Delta_g \times Id + \begin{pmatrix} X_{11}^y \partial_y & X_{12}\\ \partial_x & X_{22} \end{pmatrix}$ & $m \geq 2$ & $n \geq 2$ & No (counterexample to WUCP).\\
\hline
 $a\partial_1^2 + b\partial_2^2 + c\partial_1 + d\partial_2$ & $m \geq 2$ & $n \geq 2$ & No (counterexample to WUCP). \\
\hline
 $-\partial_i(a^{ij}\partial_j + b^i)$ & $m \geq 2$ & $n \geq 2$ & No (counterexample to WUCP). \\
\hline
 Analytic coefficients & $m \geq 1$ & $n \geq 1$ & Yes. \\
\hline
 $\Delta_g \times Id$ & $m \geq 1$ & $n = 2$ & Yes. \\
\hline
 $d_A^*d_A$ (for $A$ Yang-Mills) & $m \geq 1$ & $n = 2$ & Yes. \\
\hline
 $\frac{d^2}{dt^2} + a\frac{d}{dt} + b$ & $m \geq 1$ & $n = 1$ & Yes. \\
\hline
 $\Delta_g \times Id$ & $m \geq 2$ & $n \geq 3$ & Unknown. \\
\hline
 $\partial_i(a^{ij}\partial_j) \times Id$ & $m = 2$ & $n = 2$ & Unknown if $\det A \neq 1$ or $A \neq A^T$. \\
\hline
\end{tabular}
\end{center}
\end{theorem}
\vspace{3mm}
We expect the SUCP to fail in the last two cases of the table above, but it seems difficult to construct direct counterexamples and we do not have a proof of this fact.

Next, we use the SUCP result for the operator $P = d_A^*d_A$ in the following application to the Calder\'on problem for connections, by using the techniques from \cite{cek1}. As explained above, the zero set of $\det G$ is then countably $(n - 1)$-rectifiable and we may re-run the proof of Theorem 1.2 in \cite{cek1}. There are slight complications near the zero set, since the order of degeneracy of $\det G$ can be high, but we work around this by going to a harmonic coordinate system near such a point.

Before stating the theorem, let us briefly recall the definition of a Yang-Mills connection; such connections are important in physics and geometry -- see \cite{DK} for more details. A unitary connection $A$ on a Hermitian vector bundle $E$ over a Riemannian manifold $(M, g)$ is called \emph{Yang-Mills} if it is a critical point of the Yang-Mills functional $F_{YM}$:
\begin{align}\label{YMfunc}
F_{YM}(A) = \int_M |F_A|^2 dvol_g
\end{align}
where $F_A = dA + A\wedge A$ is (locally) the curvature two form. Alternatively, it satisfies the equation
\begin{align}\label{YMeqn}
D_A^*F_A = 0
\end{align}
where $D_A S = dS + [A, S]$ is the induced covariant derivative on the endomorphism bundle End$E$.
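For completeness, here is the standard first variation computation relating \eqref{YMfunc} and \eqref{YMeqn} (a sketch, for a compactly supported variation $a$, a $1$-form with values in skew-Hermitian endomorphisms): since $F_{A + ta} = F_A + t D_A a + O(t^2)$, we get
\[
\frac{d}{dt}\Big|_{t=0} F_{YM}(A + ta) = 2\int_M \langle D_A a, F_A \rangle\, dvol_g = 2\int_M \langle a, D_A^* F_A \rangle\, dvol_g,
\]
so criticality for all such $a$ is equivalent to $D_A^*F_A = 0$.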
\begin{theorem}\label{mainthm}
Let $(M, g)$ be a compact smooth $2$-dimensional Riemannian manifold with non-empty boundary and let $A$ and $B$ be two Yang-Mills connections over $M \times \mathbb{C}^m$ (for $m \in \mathbb{N}$). Further, let $\Gamma \subset \partial M$ be a non-empty open subset of the boundary. Then $\big(\Lambda_A f\big)|_{\Gamma} = \big(\Lambda_B f\big)|_{\Gamma}$\footnote{In this setting, recall that the \emph{Dirichlet-to-Neumann map} $\Lambda_A$ is defined by applying the covariant normal derivative at the boundary to the solution of the corresponding Dirichlet problem: $\Lambda_A(f) = d_A(u)(\nu)|_{\partial M}$, where $\nu$ is the outer normal to $\partial M$ and $u$ solves the Dirichlet problem $d_A^*d_A u = 0$ with $u|_{\partial M} = f$.} for all $f \in C_0^\infty(\Gamma, \mathbb{C}^m)$ implies the existence of a unitary matrix function $H \in C^\infty(M, \mathbb{C}^{m \times m})$ with $H|_{\Gamma} = Id$ and $H^*A = B$.
\end{theorem}

Note that for $\Gamma = \partial M$, i.e. full data, the above Theorem follows from the work in \cite{AGTU}, which recovers a general matrix potential and the connection on an arbitrary vector bundle up to gauge, with a different technique based on Complex Geometric Optics (CGO) solutions. One advantage of Theorem \ref{mainthm} is that it holds for partial data. Also, it extends the new technique of \cite{cek1} based on analysing the zero set, which gives hope that this technique can be extended to more general contexts.

Finally, we note there is a different, but related, variation of Question \ref{mainque} where one considers the Jacobian of a system and its zero set. As observed in \cite{B}, this is of some importance in hybrid inverse problems. For example, in \cite{AN01}, in 2D, the authors consider the Jacobian $J = \det DU$ formed by solutions to $\div A\nabla u_i = 0$ for $i = 1, 2$ (these are also called $A$-harmonic functions -- see Section \ref{sec5}), where $U = (u_1, u_2)^T$. They state conditions on the boundary values of $U$ under which an $A$-harmonic extension of $U$ to the domain is univalent (injective) and provide local bounds on $\log J$.\footnote{Interestingly, they derive an elliptic equation for $J$ in this case. This seems unavailable for our problem.} See the references in \cite{AN01, B} for more about this problem and its applications (also in higher dimensions).

The paper is organised as follows. In Section \ref{sec2} we consider counterexamples in the non-diagonal case and also in the general diagonal case, as stated in Theorem \ref{mainthm'}. In Section \ref{sec3} we consider positive results in 1D. In Section \ref{sec4} we consider the $n = 2$ case in more detail. More precisely, we prove a few positive results, including the case of $P = \Delta_g$ and arbitrary $m$, see Theorem \ref{isothermal2dsucp}; we also prove a slightly more general result for $m = 2$. Furthermore, we reduce the problem to a simpler form for $m = 2$ by using properties of harmonic polynomials in 2D and a reduction lemma: see Proposition \ref{mainreduction}. Some further reductions in 2D are given in Section \ref{sec5}, based on the theory of quasiconformal maps. In Section \ref{sec6} we prove a positive result in two dimensions for the connection Laplacian operator twisted with a Yang-Mills connection. Finally, in Section \ref{sec7} we consider an application to the Calder\'on problem in two dimensions, based on the recent techniques in \cite{cek1}.
In Appendix \ref{secapp} we prove a simple geometric lemma and a result on products of harmonic polynomials in two dimensions that we need.
\vspace{4mm}

\textbf{Acknowledgements.} I would like to thank Herbert Koch for helpful discussions and Gabriel Paternain for useful comments. Also, the author thanks the Max-Planck Institute for Mathematics for financial support.

\section{Negative results}\label{sec2}

We start with the negative results, showing what we cannot expect to hold.

\begin{theorem}[Counterexample]\label{cex}
Assume $g = g_{eucl}$ is the Euclidean metric and $0 \in \Omega \subset \mathbb{R}^2$. Let $c: \Omega \to \mathbb{R}$ be a smooth function to be specified later. Let
\begin{equation}
X = \begin{pmatrix} X_{11}^y \partial_y & 0\\ \partial_x & 0 \end{pmatrix}
\end{equation}
be a first order matrix differential operator, where
\begin{equation}\label{X11'}
X_{11}^y(x, y) = \frac{\partial_x \Delta_g c + \int_0^x \partial_y^2 \Delta_g c(t, y) dt}{1 - \int_0^x \partial_y \Delta_g c(t, y) dt} .
\end{equation}
Moreover, define
\begin{equation}
b(x, y) = y - \int_0^x \Delta_g c(t, y) dt
\end{equation}
and let
\begin{equation}\label{Fdef}
F := \begin{pmatrix} 1 & b\\ 0 & c \end{pmatrix} .
\end{equation}
Then $F$ satisfies (here $X$ acts by matrix multiplication)
\begin{equation}\label{Feqn}
\Delta_g F + XF = 0
\end{equation}
and also $\det F = c$. So, by taking $c = e^{-\frac{1}{|x|^2}}$ (which vanishes to infinite order at zero) or letting $c$ be a bump function equal to zero in a neighbourhood of zero, we obtain counterexamples to the SUCP and the WUCP, respectively.
\end{theorem}

\begin{proof}
We take here $\Delta_g = \partial_x^2 + \partial_y^2$ (the overall sign convention is immaterial for the construction) and we are left to verify a simple computation. Note that $X_{11}^y$ in \eqref{X11'} is well defined in a neighbourhood of zero, and near the zero set of $c$ in the second case. From the definitions, we can easily check that
\begin{align*}
\Delta_g b + X_{11}^y \partial_y b = 0 \quad &\iff\quad -\partial_x \Delta_g c - \int_0^x \partial_y^2 \Delta_g c + \frac{\partial_x \Delta_g c + \int_0^x \partial_y^2 \Delta_g c}{1 - \int_0^x \partial_y \Delta_g c} \cdot \Big(1 - \int_0^x \partial_y \Delta_g c\Big) = 0,\\
\Delta_g c + \partial_x b = 0 \quad &\iff\quad \Delta_gc - \Delta_g c = 0 .
\end{align*}
\end{proof}

This is one of the simplest counterexamples we could find. We can upgrade it to:

\begin{theorem}\label{cex2}
In the same setting as Theorem \ref{cex}, we let
\begin{equation}
X = \begin{pmatrix} X_{11}^y \partial_y & X_{12}\\ \partial_x & X_{22} \end{pmatrix}
\end{equation}
where $X_{12}$ and $X_{22}$ are smooth first order differential operators. Then by letting
\begin{equation}\label{X11}
X_{11}^y(x, y) = \frac{\partial_x (\Delta_g c + X_{22}c) - X_{12}c + \int_0^x \partial_y^2 \Delta_g c(t, y) dt}{1 - \int_0^x \partial_y \Delta_g c(t, y) dt}
\end{equation}
and
\begin{equation}
b(x, y) = y - \int_0^x \big(\Delta_g c(t, y) + X_{22}c(t, y)\big) dt
\end{equation}
we obtain the solution $F$ from \eqref{Fdef} satisfying equation \eqref{Feqn}, and so we generalise the counterexample to this case.
\end{theorem}

\begin{rem}\rm
Note that Theorem \ref{cex2} provides us in particular with a counterexample to the SUCP and WUCP for $X$ symmetric (Hermitian) or anti-symmetric (skew-Hermitian).
This is relevant for the twisted Laplacian operator, which is of the form
\begin{equation}
d_A^*d_A = \Delta_g - 2g^{ij} A_i \partial_j + d^*A - g^{ij}A_iA_j .
\end{equation}
Here $A = A_i dx^i$ is the connection one form, $g^{ij}$ is the inverse of the metric matrix $g_{ij}$ and $d^*$ is the co-differential. If the connection $A$ is unitary, then $A_i$ is skew-Hermitian. What the previous theorem is telling us is that we should not expect the SUCP to hold for $d_A^*d_A$ for $n \geq 2$ and general $A$. The fact that for $A$ Yang-Mills and $n = 2$ (c.f. Section \ref{sec6}) we have the SUCP is due to the special analytical properties in suitable gauges in 2D, so we do not expect the SUCP to hold even for Yang-Mills connections and $n \geq 3$; but this remains open.
\end{rem}

In a similar vein to the counterexamples above, we give a simple counterexample in the $1$-dimensional case. More precisely, we have:

\begin{prop}\label{cexn=1}
Let us define the smooth matrix function, for a smooth $c: \mathbb{R} \to \mathbb{R}$,
\[F(t) = \begin{pmatrix} 1 & t\\ 0 & c(t) \end{pmatrix}\]
Furthermore, let us define the first order smooth matrix differential operator
\[X (t)= \begin{pmatrix} 0 & 0\\ -\frac{d^2c}{dt^2} \frac{d}{dt} & 0 \end{pmatrix}\]
Then $F$ satisfies $\frac{d^2 F}{dt^2} + XF = 0$ and we have $\det F = c$. By letting $c$ be a function vanishing to infinite order at zero, or a bump function vanishing near zero, we obtain counterexamples to the SUCP and WUCP, respectively.
\end{prop}

\begin{proof}
Immediate from the construction.
\end{proof}

The next counterexample rules out even \emph{diagonal} operators in dimension $4$. It is based on the simple idea that a solution to a PDE can be viewed as an equation in the coefficients, plus some linear algebra. The more coefficients we have, the more space we have to prescribe the solutions and then determine the coefficients -- this is why dimension $4$ is useful.

\begin{theorem}[Counterexample in the diagonal case in 4D]\label{cexn=4}
There exist an $\varepsilon > 0$ and smooth, positive and real coefficient functions $a, b, c, d$ on $B_{2\varepsilon}$ and smooth functions $f_1, f_2, f_3$ on $B_{2\varepsilon}$, such that for $\mathcal{L}: = (a \partial_1^2 + b\partial_2^2 + c\partial_3^2 + d\partial_4^2)$, we have
\begin{align}\label{eqncexn=4}
\mathcal{L} f_i = (a \partial_1^2 + b\partial_2^2 + c\partial_3^2 + d\partial_4^2)f_i = 0
\end{align}
for $i = 1, 2, 3$. Also, we have $f_1 f_2 = f_3$ on $B_{\varepsilon}$, but $f_1 f_2 \neq f_3$ on $B_{2\varepsilon} \setminus B_{\varepsilon}$. Therefore, $F := \begin{pmatrix} f_3 & f_2\\ f_1 & 1 \end{pmatrix}$ satisfies $\mathcal{L} F = 0$ and $\det F = 0$ on $B_{\varepsilon}$, but $\det F \neq 0$ on $B_{2\varepsilon} \setminus B_{\varepsilon}$; so the WUCP fails in this case.
\end{theorem}

\begin{proof}
Note that the equation \eqref{eqncexn=4} holds for $i = 1, 2, 3$ if and only if the following matrix equation holds:
\begin{align}\label{matrixeqn}
\begin{pmatrix}
\partial_1^2 f_1 & \partial_2^2 f_1 & \partial_3^2 f_1 & \partial_4^2 f_1\\
\partial_1^2 f_2 & \partial_2^2 f_2 & \partial_3^2 f_2 & \partial_4^2 f_2\\
\partial_1^2 f_3 & \partial_2^2 f_3 & \partial_3^2 f_3 & \partial_4^2 f_3
\end{pmatrix}
\begin{pmatrix}
a\\ b\\ c\\ d
\end{pmatrix} = 0
\end{align}
at all points $p$ in the domain of definition. Note that this $3 \times 4$ matrix has nullity $\geq 1$, and so there is always a non-zero solution at each point $p$.
Let us choose auxiliary functions
\begin{align*}
g_1 = x^2 - y^2 + t + x, \quad g_2 = x^2 - z^2 + t - x \quad \text{ and } \quad g_3 = g_1g_2 .
\end{align*}
With this choice (taking $f_1 = g_1$, $f_2 = g_2$, $f_3 = g_3$), the matrix in \eqref{matrixeqn} has full rank at $p = 0$, and its kernel there is spanned by $a = b = c = d = 1$. Since the rank of the $3 \times 4$ matrix from \eqref{matrixeqn} remains full in a neighbourhood of zero (the determinant of some $3\times 3$ minor is non-zero), there exists an $\varepsilon > 0$ such that on $B_\varepsilon$ we have a smooth choice of solutions to \eqref{matrixeqn} with $a(0) = b(0) = c(0) = d(0) = 1$ and, for some small $\delta > 0$,
\[\min_{B_\varepsilon}\{a, b, c, d\} \geq 1 - \delta .\]
Now choose smooth extensions $f_1, f_2, f_3$ such that they agree with $g_1, g_2, g_3$ on $B_\varepsilon$ and such that $f_1 f_2 \neq f_3$ on $B_{2\varepsilon} \setminus B_\varepsilon$ (e.g. multiply with a bump function), and extend the coefficients smoothly to $B_{2\varepsilon}$ so that
\[\min_{B_{2\varepsilon}}\{a, b, c, d\} \geq \frac{1}{2} .\]
This can be done for $\varepsilon$ and $\delta$ small enough and a good choice of extensions. This finishes our construction.
\end{proof}

Note that the above construction also gives a counterexample in any dimension $n \geq 4$ and for any matrix size $m \geq 2$. The next Proposition tells us we can do slightly better by introducing off-diagonal terms in dimension $3$:

\begin{prop}[Counterexample in the diagonal case in 3D]
There exist an $\varepsilon > 0$ and smooth, positive and real coefficient functions $a, b, c, d$ on $B_{2\varepsilon}$ and smooth functions $f_1, f_2, f_3$ on $B_{2\varepsilon}$, such that the operator $\mathcal{L}: = (a \partial_1^2 + b\partial_2^2 + c\partial_3^2 + 2d\partial_1\partial_2)$ is (strongly) elliptic and we have
\begin{align}\label{eqncexn=3}
\mathcal{L} f_i = (a \partial_1^2 + b\partial_2^2 + c\partial_3^2 + 2d\partial_1 \partial_2)f_i = 0
\end{align}
for $i = 1, 2, 3$. Moreover, we have $f_1 f_2 = f_3$ on $B_{\varepsilon}$, but $f_1 f_2 \neq f_3$ on $B_{2\varepsilon} \setminus B_{\varepsilon}$. Therefore, $F := \begin{pmatrix} f_3 & f_2\\ f_1 & 1 \end{pmatrix}$ satisfies $\mathcal{L} F = 0$ and $\det F = 0$ on $B_{\varepsilon}$, but $\det F \neq 0$ on $B_{2\varepsilon} \setminus B_{\varepsilon}$; so the WUCP fails in this case.
\end{prop}

\begin{proof}
Similar to the proof of the previous theorem. We choose the following functions:
\[g_1 = x^2 - y^2 + x, \quad g_2 = x^2 - z^2 + x - 2y \quad \text{ and } \quad g_3 = g_1 g_2 .\]
Note that this yields $a(0) = b(0) = c(0) = 1$ and $d(0) = \frac{1}{2}$ as the solution of the analogue of \eqref{matrixeqn}. From this point, the argument works the same.
\end{proof}

Finally, we show that if we introduce some linear terms, we can go down to two dimensions as well.

\begin{theorem}[Counterexample in the diagonal case in 2D]\label{cex2d}
There exist an $\varepsilon > 0$ and smooth real coefficient functions $a, b, c, d$ on $B_{2\varepsilon} \subset \mathbb{R}^2$, with $a$ and $b$ positive, and smooth functions $f_1, f_2, f_3$ on $B_{2\varepsilon}$, such that for $\mathcal{L}: = (a \partial_1^2 + b\partial_2^2 + c\partial_1 + d\partial_2)$, we have
\begin{align}\label{eqncexn=2}
\mathcal{L} f_i = (a \partial_1^2 + b\partial_2^2 + c\partial_1 + d\partial_2)f_i = 0
\end{align}
for $i = 1, 2, 3$. Also, we have $f_1 f_2 = f_3$ on $B_{\varepsilon}$, but $f_1 f_2 \neq f_3$ on $B_{2\varepsilon} \setminus B_{\varepsilon}$.
Therefore, $F := \begin{pmatrix} f_3 & f_2\\ f_1 & 1 \end{pmatrix}$ satisfies $\mathcal{L} F = 0$ and $\det F = 0$ on $B_{\varepsilon}$, but $\det F \neq 0$ on $B_{2\varepsilon} \setminus B_{\varepsilon}$; so the WUCP fails in this case.
\end{theorem}

\begin{proof}
The tactic is the same as before, but we now let
\[g_1 = x^2 + x + y, \quad g_2 = x^2 + x - y \quad \text{ and } \quad g_3 = g_1 g_2 .\]
Then at the origin we have the solution to \eqref{matrixeqn} given by $a(0) = b(0) = 1$, $c(0) = -2$ and $d(0) = 0$. Now we extend these functions to $f_1, f_2$ and $f_3$ and note that the ellipticity is preserved by small perturbations.
\end{proof}

Finally, we give a counterexample for the WUCP in the case of a divergence type operator (with a zero order term under the divergence sign) and a $2 \times 2$ matrix in 2D. The approach combines the ideas above in Theorem \ref{cex2d} and the reduction techniques of Alessandrini \cite{ales} and Schulz \cite{Schulz}. See also Section \ref{reductions} below on these reduction techniques. The idea is to generate solutions to $Lu = -\div(A\nabla u + b \cdot u) + C\cdot \nabla u + du$ using the techniques above and then use the reduction techniques to get rid of the $C$ and $d$ coefficients. We start by stating an algebraic Lemma (c.f. Lemma \ref{lem1}):

\begin{lemma}\label{reductionlemma}
Let $A$ be a symmetric matrix, $b$ and $c$ vector functions and $d$ a scalar function (all smooth) on $\mathbb{R}^n$. Consider the operator
\begin{align*}
\mathcal{L} u = -\partial_i \big(a^{ij} \partial_j u + b^i u\big) + c^i \partial_i u + du .
\end{align*}
Assume $\mathcal{L} \varphi = 0$ and $\mathcal{L}^* \psi = -\partial_i \big(a^{ij} \partial_j \psi + c^i \psi\big) + b^i \partial_i \psi + d\psi = 0$ with $\psi$ non-vanishing ($\mathcal{L}^*$ is the adjoint). Then $v = \frac{\varphi}{\psi}$ satisfies
\begin{align*}
-\partial_i\big(\psi^2(a^{ij} \partial_j v + (b^i - c^i)v)\big) = 0 .
\end{align*}
\end{lemma}

\begin{proof}
This is just a lengthy computation similar to the proof of Lemma \ref{lem1}. See also \cite{Schulz} for a use of this identity; a more involved identity for $A$ non-symmetric can be found in \cite{ales}.
\end{proof}

We are now in a position to prove the following counterexample:

\begin{theorem}
Assume $\varepsilon > 0$, $f_1, f_2, f_3$ and $c, d$ are as in Theorem \ref{cex2d}, and switch the signs of $a$ and $b$ from the same Theorem.
Then there exists a smooth $\psi > 0$, such that $g_k := \frac{f_k}{\psi}$ satisfy, for $k = 1, 2, 3$:
\begin{align*}
\mathcal{L}' g_k = -\partial_1\big(\psi^2 (a\partial_1g_k + (c + \partial_1 a) g_k )\big) - \partial_2\big(\psi^2( b\partial_2 g_k + (d + \partial_2 b) g_k)\big) &= 0\\
\mathcal{L}' \Big(\frac{1}{\psi}\Big) &= 0 .
\end{align*}
Therefore, $g_3 = g_1 g_2$ on $B_{\varepsilon}$ but not on $B_{2\varepsilon} \setminus B_\varepsilon$, and so we have a contradiction to the WUCP for the divergence type operator $\mathcal{L}'$ and the matrix function $G := \begin{pmatrix} g_3 & g_2\\ g_1 & \frac{1}{\psi} \end{pmatrix}$.
\end{theorem}

\begin{proof}
We rewrite the equations from Theorem \ref{cex2d} in the following form:
\begin{align*}
0 = \mathcal{L} f_k = -\partial_1 \big(a \partial_1 f_k\big) - \partial_2 \big(b \partial_2 f_k\big) + \big(c + \partial_1 a\big)\partial_1 f_k + \big(d + \partial_2 b\big) \partial_2 f_k .
\end{align*}
We want to apply Lemma \ref{reductionlemma} to the operator $\mathcal{L}^*$, or in other words we want to solve
\begin{align*}
\mathcal{L}^* \psi = -\partial_1\big(a \partial_1 \psi + (c + \partial_1 a)\psi \big) - \partial_2 \big(b \partial_2 \psi + (d + \partial_2 b) \psi\big) = 0
\end{align*}
with $\psi > 0$. But we can just solve the Dirichlet problem for $\mathcal{L}^* \psi = 0$ with $\psi = 1$ on $\partial B_{2\varepsilon}$; then the minimum principle for $\mathcal{L}^*$ gives that $\psi \geq 1$ in the whole of $B_{2\varepsilon}$. So we may apply the previous Lemma to get $g_k = \frac{f_k}{\psi}$ satisfying $\mathcal{L}' g_k = 0$ for $k = 1, 2, 3$. Furthermore, since $\mathcal{L} (1) = 0$ we clearly have $\mathcal{L}' \Big(\frac{1}{\psi}\Big) = 0$. The conclusion follows from the definition of $f_k$ for $k = 1, 2, 3$.
\end{proof}

\begin{rem}\rm
Note that the technique of Theorem \ref{cex2d} cannot be run with coefficients attached only to first order and zero order derivatives: for example, in 2D, $f_1 f_2 = f_3$ implies linear dependence of the rows of first order derivatives, so the relevant determinant would vanish. Therefore, we must use coefficients next to second order derivatives. The question of whether there is a counterexample for pure divergence operators of the form $-\partial_i(a^{ij}\partial_j \cdot)$ remains open.
\end{rem}

\section{Positive results}\label{sec3}

In Sections \ref{sec3}, \ref{sec4}, \ref{sec5} and \ref{sec6} we outline a few approaches to proving the SUCP or WUCP in Question \ref{mainque} in different situations. As we have seen previously, there is little hope in proving UCPs for general operators of the form \eqref{eqn}, so we need to restrict the class we consider. In particular, we are interested in
\begin{enumerate}
\item[1.] Divergence type operators $\partial_i(a^{ij}\partial_j)$.\label{type1div}
\item[2.] Conformally Euclidean metrics, i.e. operators of type 1. with $a^{ij}(x) = c(x)\delta^{ij}$ for some positive function $c(x)$.\label{type2confeucl}
\item[3.] Elliptic operators of the form $a^{ij}\partial_i \partial_j + b_i\partial_i + c$.\label{type3genell}
\end{enumerate}
Note that the Laplace-Beltrami operator given by $\Delta_g = -\frac{1}{\sqrt{|g|}} \partial_i \big(\sqrt{|g|} g^{ij} \partial_j\big)$ is of divergence type. In this section, we prove a positive result in the case 1.\ above with $n = 1$. The proof uses elementary properties of solutions to ODEs in 1D.

\begin{prop}[Divergence type for $n = 1$]\label{divn=1}
Let $m \in \mathbb{N}$.
Assume $F: \mathbb{R} \to \mathbb{C}^{m \times m}$ is a smooth matrix function satisfying
\[\frac{d}{dt}\big(a \frac{dF}{dt}\big) = 0\]
for a positive smooth function $a$ on $\mathbb{R}$. If $\det F$ vanishes to order $(m + 1)$ at $0$, then $\det F = 0$ on the whole of $\mathbb{R}$. So both the SUCP and WUCP hold in this case.
\end{prop}

\begin{proof}
Note that for an entry $f$ of $F$, we have
\[\frac{df}{dt} = \frac{C(f)}{a}\]
where $C(f)$ is a constant. Therefore, if $\frac{df}{dt}$ vanishes at any point, we must have $f$ constant. If all entries of $F$ are constant, we are done. If we have $\frac{df}{dt} \neq 0$, then for any other entry $g$ of $F$, we have $\frac{dg}{dt} = C(f, g) \frac{df}{dt}$ for a constant $C(f, g)$ and consequently, we must have $g = C(f, g)f + C'(g)$ for another constant $C'(g)$. Thus, there exists a holomorphic polynomial $p$ of degree at most $m$, such that $\det F (t) = p (f(t))$ for all $t$. Since $\frac{df}{dt} \neq 0$, $f$ maps $[-\varepsilon, \varepsilon]$ diffeomorphically to $f([-\varepsilon, \varepsilon]) \subset \mathbb{C}$ for some $\varepsilon > 0$, by the inverse function theorem. By the chain rule, we obtain that $p$ vanishes to order $(m+1)$ at $f(0)$; since $p$ is a holomorphic polynomial of degree at most $m$, this is impossible unless $p \equiv 0$, and so $\det F \equiv 0$.
\end{proof}

The proof of the above Proposition works for operators of the form $\frac{d^2}{dt^2} + a\frac{d}{dt}$ in the same way, but what about $P = \frac{d^2}{dt^2} + a\frac{d}{dt} + b$? The following Proposition answers this question positively.

\begin{prop}
Let $F: \mathbb{R} \to \mathbb{C}^{m \times m}$ be a smooth matrix function and consider, for smooth $a$ and $b$,
\[P = \frac{d^2}{dt^2} + a\frac{d}{dt} + b .\]
Then $PF = 0$ and $\det F$ vanishing to order $(m + 1)$ at zero imply that $\det F \equiv 0$. So $\det F$ satisfies both the SUCP and WUCP.
\end{prop}

\begin{proof}
We follow the proof of Proposition \ref{divn=1}. In this case, the solution space to $Pf = 0$ is two dimensional, parametrised by the values $f(0)$ and $\frac{df}{dt}(0)$. Say it is spanned by $f_1$ and $f_2$, where $f_1(0) = 1$ and $\frac{df_1}{dt}(0) = 0$, while $f_2(0) = 0$ and $\frac{df_2}{dt}(0) = 1$. Since every entry is a linear combination of $f_1$ and $f_2$, we obtain that $\det F(t) = p\big(f_1(t), f_2(t)\big)$, where $p$ is a homogeneous holomorphic polynomial in two variables of degree $m$. But then using homogeneity we get $p\big(f_1(t), f_2(t)\big) = f_1^m(t) p\big(1, \frac{f_2(t)}{f_1(t)}\big)$ near zero, so the auxiliary polynomial $q(z) = p(1, z)$ vanishes to order $(m+1)$ at $z = 0$ and so $q \equiv 0$, implying $\det F \equiv 0$.
\end{proof}

Together with our counterexample Proposition \ref{cexn=1}, this completes the story for $n = 1$.

\section{Harmonic conjugates} \label{sec4}

Here we focus mostly on the case $m = 2$ and $n = 2$ and operators of divergence form. Recall that two functions $u$ and $v$ on $\mathbb{C}$ are \emph{harmonic conjugates} if $u + iv$ is holomorphic. In other words, $u$ and $v$ satisfy the Cauchy-Riemann equations. Given a simply connected $\Omega \subset \mathbb{C}$ and a harmonic function $u$ on it, there exists a harmonic conjugate function, unique up to an additive constant, given by integrating the rotated gradient along an arbitrary curve; that this is well-defined follows from the divergence theorem. More generally, given a smooth metric $g$ on a simply connected $\Omega \subset \mathbb{C}$, we say that two harmonic functions (i.e.
$\Delta_g a = \Delta_g b = 0$) $a$ and $b$ are \emph{harmonic conjugate with respect to $g$} if $da = \star db$, where $\star$ is the Hodge star\footnote{In $2$ dimensions, the Hodge star $\star$ is just the rotation by $90$ degrees clockwise.}. Given just a harmonic function $b$, then $a$ exists and is unique up to constants. This follows from the fact that the Laplace-Beltrami operator can be written as $\Delta_g = d^*d$, where $d^* = \star d \star$ and $\star^2 = -1$ on one forms. The harmonicity of $b$ implies $\star db$ is closed, so $a$ exists and is unique up to constants. Moreover, this $a$ is clearly also harmonic, and we also notice that $|da|_g = |db|_g$.

Note also that given two harmonic functions $a$ and $b$ with $\Delta_g a = \Delta_g b = 0$ and $\langle{da, db}\rangle = 0$ in $\Omega$, then $a$ and $b$ are harmonic conjugates w.r.t. $g$ (up to constants). To see this, note that $da = \lambda \star db$ for some function $\lambda$, and so by applying $d$ and $d\star$ to both sides, we deduce $d\lambda = 0$. This implies $\lambda$ is constant and so we get our conclusion. Moreover, it is enough to have $\langle{da, db}\rangle = 0$ on an open subset $\Omega' \subset \Omega$ to conclude that $a$ and $b$ are conjugate in $\Omega$: namely, note that $a$ determines a unique harmonic conjugate $b'$ in $\Omega$, which by the previous paragraph is equal to $b$ in $\Omega'$ (up to a multiplicative constant). Thus, by the WUCP for $\Delta_g$, we get that $b \equiv b'$ is conjugate to $a$ on the whole of $\Omega$.

How can we extend this to an arbitrary operator of divergence type $P = \partial_i(a^{ij} \partial_j)$? Notice firstly that for the operator $\Delta_g$ we have the corresponding $a^{ij} = \sqrt{|g|}\, g^{ij}$, where $|g| = \det g$, and in this case $\det A = 1$, where $A_{ij} = a^{ij}$ is the associated matrix. See the next section for the proper treatment of the case of general $A$ and the corresponding structures. We first present a useful Lemma producing an equation for the quotient of two solutions.

\begin{lemma}\label{lem1}
Let $f$ and $g$ be two smooth functions in $\mathbb{R}^n$ with $Pf = Pg = 0$ and $g \neq 0$, where $a^{ij}$ is symmetric. Then $\partial_i(g^2 a^{ij} \partial_j \frac{f}{g}) = 0$, so in other words $\frac{f}{g}$ also satisfies a divergence type equation.
\end{lemma}

\begin{proof}
This follows easily by computation:
\begin{align*}
0 = \partial_i(a^{ij}\partial_jf) = \partial_i\big(a^{ij}\partial_j \big(g \cdot \frac{f}{g}\big)\big) = Pg \cdot \frac{f}{g} + 2a^{ij}\partial_i g \partial_j \big(\frac{f}{g}\big) + g \cdot P\big(\frac{f}{g}\big) .
\end{align*}
We multiply both sides with $g$, use the chain rule and $Pg = 0$ to re-write this as:
\begin{align*}
0 = a^{ij}\partial_i (g^2) \partial_j \big(\frac{f}{g}\big) + g^2 \cdot P\big(\frac{f}{g}\big) = \partial_i\big(g^2 a^{ij} \partial_j \big(\frac{f}{g}\big)\big) .
\end{align*}
\end{proof}

Note that if $m = 2$, then this enables us to reduce the problem (locally) to the case where $F = \begin{pmatrix} h & g\\ f & 1 \end{pmatrix}$: we divide by a non-zero entry and use Lemma \ref{lem1} to reduce the problem to a matrix of this form, by redefining $A$. Observe now that if $Pf = Pg = 0$, then $P(fg) = 0$ if and only if $a^{ij}\partial_i f \partial_j g = 0$, i.e. $df$ and $dg$ are orthogonal w.r.t. $A$.

\begin{rem}\rm
If $\det A$ is constant and $A$ is symmetric, then by our discussion above, if $\det F = 0$ in a neighbourhood $\Omega'$ of the origin, then $df$ and $dg$ are orthogonal w.r.t.
$A$ in $\Omega'$, and so there is a unique harmonic conjugate (up to constants) to $f$ in $\Omega$; hence, by unique continuation, $g$ is the harmonic conjugate (up to constants) in $\Omega$, too. This proves the WUCP in this case. For the proof of the SUCP in this case, or in other words of the fact that $\Delta_g a = \Delta_g b = \Delta_g c = 0$ with $c - ab = O (|x|^\infty)$ at zero implies $c \equiv ab$, see Proposition \ref{SUCP1}.
\end{rem}

Recall the existence of \emph{harmonic coordinates} for surfaces. These are tied with the harmonic conjugates: given $(\Omega, g) \subset \mathbb{R}^2$ and a point $p \in \Omega$, one builds a harmonic function $u$ with $\Delta_g u = 0$, $u(p) = 0$ and $\nabla u (p) \neq 0$. Then by parametrising with $u$ and the harmonic conjugate of $u$ we get \emph{isothermal coordinates}, in which $g = \begin{pmatrix} \lambda & 0\\ 0 & \lambda \end{pmatrix}$ for a positive function $\lambda$. Note that, due to conformal invariance, a $g$-harmonic function $h$ in these coordinates satisfies
\begin{align}\label{eqneucl}
\Delta_{eucl} h = \Big(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\Big)h = 0
\end{align}
and so is harmonic in the usual sense. In particular we have

\begin{theorem}\label{isothermal2dsucp}
Let $(\Omega, g)$ be a planar domain with $g$ of class $C^{1, \alpha}$ for $\alpha > 0$, and let $p: \mathbb{C}^N \to \mathbb{C}$ be a real analytic function. If $\Delta_g f_i = 0$ for $f_i \in C^{2, \alpha}$ and $i = 1, \dotso, N$ for some $N \in \mathbb{N}$ and, moreover, $p(f_1, \dotso, f_N)$ vanishes to infinite order at zero, then $p(f_1, \dotso, f_N) \equiv 0$. In particular, we may choose $p(F) = \det(F)$, the determinant of a $\mathbb{C}^{m \times m}$ matrix function (with $N = m^2$), and so in this case we have the SUCP.
\end{theorem}

\begin{proof}
Under these conditions, there exist isothermal coordinates \cite{DTK} (c.f. the previous paragraph) and in these coordinates $g \in C^{2, \alpha}$. Moreover, we see that the $f_i$ satisfy \eqref{eqneucl}, i.e. they are harmonic in the new coordinates. Therefore by elliptic regularity they are smooth and, moreover, analytic. So the composition $p(f_1, \dotso, f_N)$ is analytic and vanishes to infinite order, and so must vanish entirely.
\end{proof}

\begin{rem}\rm
One might object that the previous proof relies on analyticity. Is there a proof of the SUCP for the determinant that does not use analyticity? We sketch this as follows. We note that if $\Delta_g a = \Delta_g b = \Delta_g c = 0$ for $(\Omega, g) \subset \mathbb{R}^2$, then $c - ab = O(|x|^\infty)$ implies $\langle{da, db}\rangle_g = O(|x|^\infty)$. This orthogonality relation can be seen to determine the full jet of $b$ at zero (up to constants) by going to isothermal coordinates, in which the Taylor polynomials of $a$ and $b$ of any order are harmonic. Then one may inductively determine the Taylor coefficients of $b$ from $a$ and the metric, by using this harmonicity of the coefficient polynomials. This implies that $b$ has the same Taylor expansion as the harmonic conjugate of $a$ and so, by the SUCP for $\Delta_g$, we see that $b$ must be the harmonic conjugate of $a$. For a different proof, see Proposition \ref{SUCP1}.
\end{rem}

We continue our study of the 2D case in divergence form by looking at the blow-ups of solutions at a point. More precisely, we look at the leading terms of Taylor polynomials of solutions to equations of elliptic operators.
Then we have

\begin{prop}
Let $u$ be a smooth solution to $\mathcal{L} u = 0$ in $\mathbb{R}^n$ for any $n$, where $\mathcal{L}$ is any one of the three classes of operators 1.--3.\ from Section \ref{sec3}. Then, after a linear change of coordinates, the leading Taylor polynomial of $u$ at zero is harmonic.
\end{prop}

\begin{proof}
Change the coordinates by a linear transformation such that the principal part at zero is just $\sum \partial_i^2$. Assume the order of vanishing of $u$ at zero is $N$. Let us introduce $u_r(x) := r^{-N}u(rx)$. Then by Taylor's theorem, $u_r \to p_N$ locally uniformly as $r \to 0$ (with all derivatives), where $p_N$ is the $N$-th Taylor polynomial of $u$. Note that $u_r$ satisfies the following equation:
\begin{align}\label{eqnscaled}
a_{ij}^r \partial_{ij} u_r + b_i^r \partial_i u_r + c^ru_r = 0 .
\end{align}
Here $a_{ij}^r(x) = a_{ij}(rx)$, $b_i^r(x) = r b_i(rx)$ and $c^r(x) = r^2c(rx)$. Note that we have $a_{ij}^r \to a_{ij}(0)$, $b_i^r \to 0$ and $c^r \to 0$ locally uniformly as $r \to 0$, so when the limit is taken we get
\[\sum \partial_i^2 p_N = 0 .\]
\end{proof}

\begin{rem}\rm
The above Proposition can be generalised to less smooth coefficients $a_{ij}, b_i, c \in C^2_{loc}(\mathbb{R}^n)$ by considering the order of vanishing of a function $u \in L^2_{loc}(\mathbb{R}^n)$ -- the largest non-negative integer $N$ such that there exist $R > 0$ and constants $c_1, \dotso, c_N$ with
\[\int_{B(0, r)} |u(x)|^2 dx \leq c_k^2 r^{2k + n}\]
for all $r \leq R$ and $1 \leq k \leq N$ (see \cite{KT07}). Then with $u_r(x) = r^{-N}u(rx)$ as before and $u \in H^2_{loc}(\mathbb{R}^n)$ satisfying $\mathcal{L} u = 0$, we have that $u_r$ is bounded uniformly as $r \to 0$ in $H^3\big(B(0, 1)\big)$ by the scaled elliptic estimates $\lVert{u}\rVert_{H^3(B(0, r))} \lesssim \frac{1}{r^3} \lVert{u}\rVert_{L^2(B(0, r))}$ (note that $D^3 u_r(x) = r^{3 - N} D^3 u(rx)$). So by Rellich compactness, we get a convergent subsequence in $H^{2} (B(0, 1))$ and, by taking $r_k \to 0$ along this subsequence in \eqref{eqnscaled}, the limit of $u_{r_k}$ is harmonic. Note that we could have applied the same argument for coefficients in $C^{1, \alpha}$ for any $\alpha > 0$; also, the $L^2$ norm could be replaced by the $\sup$ norm in the above definition of the order of vanishing, by use of Schauder estimates and Arzela-Ascoli.
\end{rem}

This takes us to proving the following claim, which is an elementary result classifying quadruples of harmonic polynomials satisfying a certain property.

\begin{lemma}\label{harmpoly}
Assume we have four non-zero, real harmonic, homogeneous polynomials $p_{ij}$ in $\mathbb{R}^2$ for $i, j = 1, 2$ with $p_{11}p_{22} = p_{12} p_{21}$. Then one of the following two holds, up to constants and permutations:
\begin{itemize}
\item We are in the trivial case, $p_{11} = p_{12}$ and $p_{22} = p_{21}$.
\item We have $p_{22} = 1$, $p_{12} = A \re (z^k) + B \im (z^k)$ and $p_{21} = C \re (z^k) + D \im (z^k)$ for $A, B, C, D \in \mathbb{R}$ with $AC + BD = 0$ and $k \in \mathbb{N}$. Of course, then $p_{11} = p_{12} p_{21}$.
\end{itemize}
Conversely, in either of the two cases we get a quadruple of harmonic polynomials with $p_{11}p_{22} = p_{12} p_{21}$.
\end{lemma}

For a proof, see Appendix \ref{secapp}. We combine this Lemma with Lemma \ref{lem1} to reduce the problem to the case where one entry is equal to one.

\begin{prop}\label{reductionf22=1}
Assume $f_{ij}$ are smooth and $A$-harmonic\footnote{$u$ is $A$-harmonic if $\div A\nabla u = 0$.
See Section \ref{sec5} for more details.} for $i, j = 1, 2$ and satisfy $f_{11}f_{22} - f_{12}f_{21} = O(|x|^\infty)$ at zero. Then if the $f_{ij}$ all vanish at zero, we must have $f_{11}f_{22} = f_{12} f_{21}$ on the whole domain. Consequently, by Lemma \ref{lem1} we reduce the problem to the case where one entry is equal to $1$.
\end{prop}

\begin{proof}
We can assume that the matrix $A$ is the identity at zero, by a linear change of coordinates. Then the leading Taylor polynomials $p_{ij}$ of $f_{ij}$ are harmonic and satisfy $p_{11}p_{22} = p_{12}p_{21}$ by the condition on $f_{ij}$. If one of the entries vanishes to infinite order, then by the usual SUCP it is zero throughout and we easily conclude $f_{11}f_{22} = f_{12} f_{21}$ on the whole domain, after another use of the SUCP. By Lemma \ref{harmpoly} and since the $f_{ij}$ all vanish at zero, we know we are in the first (trivial) case; i.e. up to constants and permutations we may assume $p_{11} = p_{12}$ of degree $r > 0$ and $p_{22} = p_{21}$ of degree $s > 0$.

We distinguish two cases: $r > s$ ($r < s$ is analogous) and $r = s$. If $r > s$, then by subtracting the second column from the first column (i.e. after the linear transform $f_{11} \mapsto f_{11}' = f_{11} - f_{12}$ of degree $r'$ and $f_{21} \mapsto f_{21}' = f_{21} - f_{22}$ of degree $s'$), we increase the orders of vanishing of the first column, i.e. $r' > r$ and $s' > s$. Moreover, the determinant is unchanged and we notice that $r' > r > s$, which gives a contradiction (unless $r'$ or $s'$ is equal to $\infty$, which we know how to deal with). If $r = s$, by the same subtraction procedure we may reduce to the case where we have $r > s$. This finishes the proof of the first claim. Finally, for the second claim note that if we have $f_{22}(0) \neq 0$, then by Lemma \ref{lem1} we may assume that locally $f_{22} \equiv 1$.
\end{proof}

Note that by Theorem \ref{isothermal2dsucp} we know how to solve the $\det A = 1$ case. The following Proposition tells us that if $u, v$ and $w$ satisfy $Pu = Pv = Pw = 0$ and $w - uv = O(|x|^\infty)$, then $v$ is the harmonic conjugate of $u$ up to constants -- but we do not use analyticity.

\begin{prop}\label{SUCP1}
Assume $u, v$ and $w$ are smooth (real or complex) and satisfy $Pu = Pv = Pw = 0$ with $\det A = 1$. Then $w - uv = O(|x|^\infty)$ implies $w = uv$ on the whole domain and that $v$ is the harmonic conjugate of $u$.
\end{prop}

\begin{proof}
We first consider the case where $dv(0) \neq 0$. Then we may write
\begin{align}\label{eqnlambdamu}
du = \lambda \star dv + \mu dv
\end{align}
for some functions $\mu$ and $\lambda$. The condition $w - uv = O(|x|^\infty)$ implies that $\langle{du, dv}\rangle_A = O(|x|^\infty)$ ($A$ corresponds to a Riemannian metric) and so $\mu = O(|x|^\infty)$. By applying $d$ and $d\star$ to this equation respectively, we get
\begin{align}
d\lambda \wedge \star dv = O(|x|^\infty) \quad \text{and} \quad d\lambda \wedge dv = O(|x|^\infty)
\end{align}
which in turn implies $\lambda = \lambda(0) + O(|x|^\infty)$. Therefore
\begin{align}
du = \lambda(0) \star dv + O(|x|^\infty) .
\end{align}
But there is the harmonic conjugate $v'$ of $v$, with $dv' = \star dv$, so that $d(u - \lambda(0)v') = O(|x|^\infty)$; by the usual SUCP we then get that $u - \lambda(0)v'$ is constant, which finishes the proof.
If $dv(0) = 0$, then by assuming $A(0) = Id$ we may argue by the second case of Lemma \ref{harmpoly} to get that $\lambda$ and $\mu$ extend smoothly to zero, by Taylor's theorem (note also that the zeros of $dv$ are isolated if $v$ is non-constant\footnote{This is true by e.g. going to the coordinate system given by Lemma \ref{isotropicoord}, reducing the problem to a first order equation for $\partial v$ and using the results of \cite{bar}.}). Once we have equation \eqref{eqnlambdamu}, we argue in the same manner.
\end{proof}

The problem with generalising the above Proposition is that if $\det A \neq 1$, then the harmonic conjugate is $A^*$-harmonic and $A^* \neq A$ in general (see the next section for the definition of these concepts). In the next proposition, we reduce the problem to the \emph{isotropic} case, i.e. the case of $A = \lambda \times Id$ for positive $\lambda$.

\begin{prop}\label{mainreduction}
In proving the SUCP for the determinant and operators of divergence type where $A$ is symmetric, it is enough to consider the isotropic case. By combining with Proposition \ref{reductionf22=1}, we are also reduced to the case where $f_{22} = 1$.
\end{prop}

\begin{proof}
Given a symmetric $A$, we have by Lemma \ref{isotropicoord} a diffeomorphism $F$ such that $F_*A = \tilde{a} Id$ for a positive function $\tilde{a}$ (here $F_*$ is the pushforward). This finishes the proof.
\end{proof}

\begin{rem}\rm
Note that for $u, v$ and $w$ satisfying $P u = Pv = Pw = 0$ and $w = uv$, we do not always need $\det A$ to be constant, nor $v$ to be conjugate to $u$. For example, we may take $A = \begin{pmatrix} 1 & 0\\ 0 & a \end{pmatrix}$ with $a(x, y) = \frac{f(x)}{g(y)}$ for $f$ and $g$ positive, and let $u(x, y) = x$, $v(x, y) = v(0) + \int_0^y g(t)dt$. Then $uv$ is also $A$-harmonic, and we have $\det A = \frac{f}{g}$, which is not constant for general $f$ and $g$. Moreover, we easily check that $y$ is the harmonic conjugate to $x$, so in general $v$ is not the harmonic conjugate to $u$. It is tempting to say that we will have $w' = u'v'$ (primes denoting harmonic conjugates), but this is also false: let $u = x$, $v = y$ and $w = xy$ for $a = 1$ as above. Then $u' = y$, $v' = -x$ and $w' = \frac{1}{2}(-x^2 + y^2)$, so $w' \neq u'v'$.
\end{rem}

\section{More general operators of divergence type}\label{sec5}

Following \cite{astala} (Chapter 16) we consider the case of divergence type operators where $\det A$ is not necessarily constant or $A$ is not symmetric, by relating the study of elliptic equations in 2D to complex analysis. The main conclusions of this section are reduction results, i.e. we prove that it is sufficient to consider special forms of $A$.

We assume $A$ is bounded and strongly elliptic on $\Omega \subset \mathbb{C}$, i.e. there exists $K > 0$ such that
\begin{align}\label{strongellipticity}
\frac{1}{K}|\xi|^2 \leq \langle{A(z) \xi, \xi}\rangle \leq K |\xi|^2
\end{align}
for a.e. $z \in \Omega$ and all $\xi \in \mathbb{R}^2$. We call a function $u$ \emph{$A$-harmonic} if
\[\div \big(A \nabla u\big) = 0\]
where we assume $A$ is just positive definite. This motivates the definition of a harmonic conjugate function $v$ to $u$:
\[\nabla v = J A \nabla u\]
where $J$ denotes the rotation by $90$ degrees. Here $v$ exists and is uniquely determined up to a constant. Note that $v$ is $A^*$-harmonic, where $A^* = -JA^{-1}J = \frac{A^T}{\det A}$, i.e.
\[\div \big(A^* \nabla v\big) = 0 .\]
Now the relation to complex analysis is yielded by defining $f = u + iv$ and noting that $f$ satisfies a \emph{Beltrami type equation}:
\begin{align}\label{beltrami}
\mathcal{L} f = \frac{\partial f}{\partial \bar{z}} - \mu(z) \frac{\partial f}{\partial z} - \nu(z) \overline{\frac{\partial f}{\partial z}} = 0
\end{align}
where $\mu$ and $\nu$ depend only on $A$. Note that when $A = Id$, then $\mu = \nu = 0$ and we obtain the Cauchy-Riemann equations. The following Lemma (Theorem 16.1.6 of \cite{astala}) states precisely this connection:

\begin{lemma}\label{complexrep}
Let $\Omega$ be a simply connected domain and let $u \in W^{1, 1}_{loc}(\Omega)$ be a solution to
\[\div(A \nabla u) = 0 .\]
If $v \in W^{1,1}_{loc}(\Omega)$ is the harmonic conjugate to $u$, then $f = u + iv$ satisfies \eqref{beltrami} with:
\begin{align}\label{system1}
\mu &= \frac{1}{\det(I + A)}\big(A_{22} - A_{11} - i(A_{12} + A_{21})\big)\\
\nu &= \frac{1}{\det(I + A)}\big(1 - \det A + i(A_{12} - A_{21})\big)\label{system2}
\end{align}
Conversely, if $f \in W^{1, 1}(\Omega)$ satisfies \eqref{beltrami} with $\mu$ and $\nu$ given by \eqref{system1} and \eqref{system2}, then $u = \re(f)$ is $A$-harmonic and $v = \im(f)$ is the harmonic conjugate of $u$.
\end{lemma}

There are also formulas expressing the entries of $A$ in terms of $\mu$ and $\nu$, but we do not need them here. Note only that $A$ is symmetric if and only if $\nu$ is real valued, and that $\det A = 1$ if and only if $\nu$ is pure imaginary; so $A$ is symmetric and has $\det A = 1$ if and only if $\nu = 0$.

Another ingredient we will need is a version of \emph{Stoilow factorisation} for operators of the form \eqref{beltrami}. The statement in general is that every $K$-quasiregular map factorises as a composition of a holomorphic map and a quasiconformal homeomorphism. Here, a homeomorphism $f: \Omega \to \Omega'$ in $W_{loc}^{1,2}$ is \emph{$K$-quasiconformal} if and only if $\frac{\partial f}{\partial \bar{z}} = \mu(z) \frac{\partial f}{\partial z}$ for almost every $z \in \Omega$, where $\lVert{\mu}\rVert_{\infty} \leq \frac{K - 1}{K + 1}$.\footnote{So in particular, $f$ is $1$-quasiconformal if and only if it is conformal, i.e. holomorphic and injective.} Moreover, a mapping $f$ is \emph{$K$-quasiregular} if all the hypotheses above hold, except that we do not require $f$ to be a homeomorphism.\footnote{For instance, this result shows a few nice things about quasiregular maps: they are open and discrete, locally $\frac{1}{K}$-H\"older continuous, and differentiable with non-vanishing Jacobian a.e..} More precisely, we will need the following form of Stoilow factorisation for general elliptic systems (Theorem 6.1.1 in \cite{astala}):

\begin{theorem}\label{Stoilow}
Let $f \in W^{1,2}_{loc}(\Omega)$ be a homeomorphic solution to \eqref{beltrami}, where we assume $|\mu(z)| + |\nu(z)| \leq k <1$. Then any other solution $g \in W^{1, 2}(\Omega)$ to $\mathcal{L} g = 0$ takes the form $g = F\big(f(z)\big)$, where $F$ is a $K^2$-quasiconformal mapping satisfying
\begin{align}\label{reducedbeltrami}
\frac{\partial F}{\partial \bar{w}} = \lambda(w) \im\Big(\frac{\partial F}{\partial w}\Big)
\end{align}
for $w \in f(\Omega)$, where (here $z = f^{-1}(w)$)
\begin{align}\label{lambda}
\lambda(w) = \frac{-2i \nu(z)}{1 + |\nu(z)|^2 - |\mu(z)|^2} .
\end{align}
It is easily seen that $|\lambda| \leq \frac{2k}{k^2 + 1} < 1$. Conversely, for any such $F \in W^{1, 2}(\Omega)$ satisfying \eqref{reducedbeltrami}, $g = F \circ f$ solves $\mathcal{L} g = 0$.
\end{theorem}

We call the equation \eqref{reducedbeltrami} the \emph{reduced Beltrami equation}. We need these two results for the following:

\begin{lemma}[Variant of the isothermal coordinates]\label{genisothermal}
Let $A$ be smooth and strongly elliptic, i.e. satisfying \eqref{strongellipticity}. For any $p \in \Omega$, there exists a $C^\infty$ coordinate chart $\varphi: \Omega' \to \mathbb{C}$ on a neighbourhood $\Omega' \ni p$, such that any solution $u$ to
\[\div \big(A \nabla u\big) = 0\]
can be written as $u = v \circ \varphi$, where $v$ satisfies $\div \big(\widetilde{A} \nabla v\big) = 0$ with $\widetilde{A} = \begin{pmatrix} 1 & \widetilde{A}_{12}\\ 0 & \widetilde{A}_{22} \end{pmatrix}$, where (for $\varphi(z) = w$)
\begin{align}\label{eqnB12}
\widetilde{A}_{12}(w) = \frac{-2\im(\lambda)(w)}{1 - \re(\lambda)(w)} \quad \text{ and } \quad \widetilde{A}_{22}(w) = \frac{1 + \re(\lambda)(w)}{1 - \re(\lambda)(w)} .
\end{align}
Here $\lambda(w)$ is given by \eqref{lambda}, where we insert $\mu(z)$ and $\nu(z)$ from the equations \eqref{system1} and \eqref{system2}. Moreover, if $A$ is symmetric then $\widetilde{A}_{22} = 1$; if $\det A = 1$, then $\widetilde{A}_{12} = 0$.
\end{lemma}

\begin{proof}
This is clear by combining Lemma \ref{complexrep} and Theorem \ref{Stoilow}. Consider the harmonic conjugate $u'$ of $u$ and $f = u + iu'$. As a first step, similarly to the proof of existence of isothermal coordinates (which are a special case)\footnote{By taking $A$ symmetric and with $\det A = 1$, we recover the isothermal charts.}, we take an $A$-harmonic function $u_1$ with $u_1(p) = 0$ and $\nabla u_1(p) \neq 0$. Then, by taking the harmonic conjugate $u_1'$ of $u_1$, we get a local coordinate system and define $f_1 = u_1 + iu_1'$, which is a local homeomorphism such that $F: = f \circ f_1^{-1}$ satisfies the reduced Beltrami equation \eqref{reducedbeltrami}. By noting that
\begin{align*}
\im \Big(\frac{\partial}{\partial z}\Big) = \frac{1}{2}\Big(\frac{\partial}{\partial z} - \overline{\frac{\partial}{\partial z}}\Big)
\end{align*}
we have $\tilde{\mu}(w) = -\tilde{\nu}(w) = \frac{\lambda(w)}{2}$ in these new coordinates, where $\lambda(w)$ is given by \eqref{lambda}. By comparing the coefficients of the new matrix $\widetilde{A}$ in the equations \eqref{system1} and \eqref{system2}, we get $\widetilde{A}_{21} = 0$ and $\widetilde{A}_{22} - \widetilde{A}_{11} + 1 - \widetilde{A}_{11}\widetilde{A}_{22} = 0$; since the left-hand side of the latter factors as $(1 + \widetilde{A}_{22})(1 - \widetilde{A}_{11})$, this forces $\widetilde{A}_{11} = 1$. Then it is easy to get \eqref{eqnB12} by taking the real and imaginary parts of \eqref{system1}, for example. Finally, from \eqref{system2} we know that $\nu$ is real if and only if $A$ is symmetric, and $\nu$ is pure imaginary if and only if $\det A = 1$. The last claim now follows from equation \eqref{lambda}.
\end{proof}

We separately state a result in the same vein as the previous Lemma; it gives a coordinate system in which $A$ is isotropic. The proof is similar to those of the previous two results; see the proof of Lemma 3.1 in \cite{ALP} and references therein.

\begin{lemma}\label{isotropicoord}
Assume $A$ is symmetric. Given a point $p \in \Omega$, there exists a local diffeomorphism $F$ such that $F_*A = \tilde{a} \times Id$, where $\tilde{a}(z) = \det(A(F^{-1}(z)))^{\frac{1}{2}}$.
Here $F_*$ denotes the pushforward, and $F$ is a solution to the Beltrami equation \begin{align*} \frac{\partial F}{\partial \bar{z}} = \mu(z) \frac{\partial F}{\partial z} \end{align*} where $\mu$ is determined explicitly by $A$ and is given by \begin{align*} \mu(z) = \frac{g_{11}(z) - g_{22}(z) + 2ig_{12}(z)}{2 + g_{11}(z) + g_{22}(z)} \end{align*} with $g_{ij}$ the entries of the matrix $G = \sqrt{\det A}\, A^{-1}$. \end{lemma} \subsection{Non-self-adjoint equations.}\label{reductions} We remark that by the methods of G. Alessandrini \cite{ales}, who proves the SUCP for possibly non-self-adjoint elliptic operators of divergence type with lower order coefficients, we may reduce the case of more general linear equations to an equation of divergence type. This is based on a reduction method as in Lemma \ref{reductionlemma} and Lemma \ref{lem1}. In fact, Alessandrini shows, for possibly non-symmetric $A$, that we may introduce two positive multipliers $m, w$ such that the equation \begin{align} Lu = -\div(A\nabla u + u B) + C \cdot \nabla u + du \end{align} reduces to a simpler equation, in the following sense. Here $A$ is a $2 \times 2$ matrix, $B$ and $C$ are vector functions and $d$ is a function. For any $v$ we have \begin{align} \widehat{L}v = w L(mv), \end{align} where $\widehat{L}u = -\div(\widehat{A}\nabla u + u \widehat{B})$. Again, this provides a reduction procedure for our problem and makes it sufficient to consider operators of the form $\widehat{L}$. \section{The case $n = 2$ for the twisted Laplacian}\label{sec6} Here we prove the SUCP for a special class of matrix operators on $\mathbb{R}^2$ which satisfy an additional equation; namely, we consider connection Laplacians of the form $P = d_A^*d_A$ for $A$ a connection, i.e. a matrix of one-forms, where we assume the Yang-Mills equation \eqref{YMeqn} for $A$. The motivation is explained in the introduction. \begin{lemma}\label{YMlemma} Let $(\Omega, g) \subset \mathbb{R}^2$ be a domain equipped with a smooth metric $g$. Equip $\Omega \times \mathbb{C}^m$ for $m \in \mathbb{N}$ with a Yang-Mills connection $A$\footnote{Recall that $A$ is \emph{Yang-Mills} if $D_A^*F_A = 0$; here $D_A$ is the natural induced connection on the endomorphism bundle and $F_A = dA + A \wedge A$ is the curvature. See also the introduction.}. Assume $F \in C^\infty(\Omega, \mathbb{C}^{m \times m})$ satisfies $d_A^*d_A F = 0$. Then $\det F$ satisfies the SUCP, and hence the WUCP. \end{lemma} \begin{proof} Assume w.l.o.g. that $\det F$ vanishes to infinite order at zero. As in Proposition \ref{isothermal2dsucp}, we look at isothermal coordinates near zero, so that $g = \begin{pmatrix} \lambda & 0\\ 0 & \lambda \end{pmatrix}$ in these coordinates for a smooth, positive function $\lambda$. The Yang-Mills equations take the form \begin{align}\label{YM1} 0 = D_A^* F_A = \star D_A \star (dA + A \wedge A) \end{align} Let us write simply $A = A_1 dx + A_2 dy$ for $A_1, A_2$ smooth $m \times m$ matrices. Then the above equation takes the form \begin{align}\label{YM2} 0 = d^*(dA + A \wedge A) + \star [A, \star (dA + A \wedge A)] \end{align} where the second term can be rewritten as \begin{align} \frac{1}{\lambda} \big(-[A_1, G(A)]dy + [A_2, G(A)]dx\big) \end{align} with $G(A) = \lambda \star F_A$ just a function of $A$.
Note that we have, in isothermal coordinates: \begin{align}\label{hodgeisothermal} \star dx = - g^{11} |g|^{1/2} dy = -dy, \quad \star dy = dx \text{\, and \,} \star(|g|^{\frac{1}{2}} dx \wedge dy) = 1 \end{align} Therefore, since $d^* = \star d \star$ and by \eqref{hodgeisothermal}, the Yang-Mills equation \eqref{YM1} is of the following form: $\frac{1}{\lambda}$ times an expression depending only on $A$. Now we have two choices. Taking the Coulomb gauge, in which $d^*A = 0$ (see \cite{cek1}), this condition is equivalent to \begin{align} \frac{\partial A_1}{\partial x} + \frac{\partial A_2}{\partial y} = 0. \end{align} By applying $d$ to this equation and adding it to \eqref{YM2} (after multiplying by $\lambda$), we get an equation of the form \begin{align}\label{YMnice} \Delta_{eucl} A + Q(A, \nabla A) = 0, \end{align} where $Q$ is an analytic (polynomial) function of its entries and $\Delta_{eucl}$ is the Euclidean Laplacian acting diagonally. Therefore, by a well-known property of elliptic equations, $A$ is analytic in this gauge. Furthermore, since $d_A^*d_A$ is equal to $\frac{1}{\lambda} P_A$, where $P_A$ is a second-order elliptic operator depending only on $A$, we have that $F$ is also analytic in this gauge and so is $\det F$, implying the SUCP and WUCP. Alternatively, we may consider the harmonic gauge for the connection, i.e. $d^*A = \frac{1}{\lambda}(A_1^2 + A_2^2)$ (see \cite{cek1} for more details). In this gauge, $A$ satisfies: \begin{align} \frac{\partial A_1}{\partial x} + \frac{\partial A_2}{\partial y} + A_1^2 + A_2^2 = 0. \end{align} As before, by applying $d$ to this equation and adding it to \eqref{YM2} after multiplication by $\lambda$, we are back to the form of equation \eqref{YMnice} and hence to the previous case. \end{proof} \section{Applications to the Calder\'on problem for connections}\label{sec7} Here we apply the result and the proof of Lemma \ref{YMlemma} to the Calder\'on problem for connections (see \cite{cek1, cek2, AGTU}), by using the technique of the proof of Theorem 1.2 from \cite{cek1} to produce a result for surfaces and bundles of \emph{arbitrary} rank. Calder\'on's problem is an inverse boundary value problem that has attracted a lot of attention over the past thirty-odd years \cite{Usurvey}. A similar result for connections was proved in \cite{cek1}, either for the rank one case with a smooth metric, or for arbitrary rank but \emph{analytic} metrics; the main novelty here is to extend these methods to the smooth $2$-dimensional case with arbitrary rank. First, we have the following simple geometric lemma: \begin{lemma}\label{geomlemma} Let $(\Omega, g) \subset \mathbb{R}^2$ be a domain containing $0$, with $g$ smooth. Fix a smooth embedded curve $0 \in \gamma \subset \Omega$. Then the squared Riemannian distance $f(q) := d^2(q, \gamma)$ from a point $q \in \Omega$ to $\gamma$ has the following Taylor expansion at $0$, for $q = (x, y)$: \begin{align} f(x, y) = \begin{pmatrix} x & y \end{pmatrix} P^Tg(0)P\begin{pmatrix} x\\ y \end{pmatrix} + O(|q|^3) \end{align} where $P$ is the projection onto $\star \dot{\gamma}(0)$ along $\dot{\gamma}(0)$, with $\dot{\gamma}(0)$ the unit tangent vector to $\gamma$ at $0$. \end{lemma} \begin{proof} See Appendix \ref{secapp}. \end{proof} We are now ready to prove the main result of this section, Theorem \ref{mainthm}. \begin{proof}[Proof of Theorem \ref{mainthm}] The proof is analogous to the proof of Theorem 1.2 from \cite{cek1}, once we have Lemma \ref{YMlemma}.
Let us recall the proof briefly and underline the differences. Let $F$ and $G$ be $m \times m$ matrix functions solving $d_A^*d_A F = d_B^*d_B G = 0$ with $F = G$ on $\partial M$ and $F = G = Id$ on an open, non-empty set $V \subset \Gamma$. By Lemma \ref{YMlemma}, the zero sets of $\det F$ and $\det G$ are covered by a countable union of curves $\{C_i \mid i \in \mathbb{N}\}$; by the SUCP, we have $H = FG^{-1}$ satisfying $H^*A = B$ in a neighbourhood of $V$, with $H$ unitary. Next, we perform the drilling procedure from \cite{cek1}. We work near a point $p \in C_i$ at which $\det G$ vanishes to finite order in the direction normal to $C_i$; that is, $\det G = y^k g_1$ in the normal coordinate system to $C_i$, by Taylor's theorem, with $g_1(p) \neq 0$. We assume that for $y>0$ (locally) we have $H^*A = B$. Then \begin{align}\label{Hextension} H = FG^{-1} = \frac{F \adj G}{y^k g_1} \end{align} Notice that $H$ is smooth and bounded for $y > 0$, so by Taylor's theorem $F \adj G = y^k H_1$ for some smooth $H_1$, and so $H = \frac{H_1}{g_1}$ extends smoothly to $y < 0$. Now, there exist smooth unitary $X$ and $Y$ such that $A' = X^*A$ and $B' = Y^*B$ satisfy the Coulomb gauge equation. If we change coordinates to isothermal coordinates by a diffeomorphism $\varphi$ (with $\varphi(x, y) = (u, v)$), then $A'$ and $B'$ are analytic by Lemma \ref{YMlemma}. Moreover, $H' := F' G'^{-1} = X^{-1}HY$ extends smoothly to $y < 0$, too. \emph{If} $H'$ were analytic, then from $H'^*A' = B'$ for $\varphi^{-1}_2(u, v) > 0$ we would have $H'^* A' = B'$ on the whole chart by analyticity, and so $H^*A = B$ for $y < 0$. What follows is the proof of this analyticity. The main issue is that in the version of \eqref{Hextension} for $H'$, the distance function $y$ is not always analytic, since $g$ is just smooth. To work around this, go to isothermal coordinates via $\varphi$ and write \begin{align}\label{F'G'} F'(q) \adj (G')(q) = H_1'(q) \big(d(q, \gamma)\big)^k = H_1'(q) \Big(\frac{d^2(q, \gamma)}{d^2_{eucl}(q, \gamma)}\Big)^\frac{k}{2} \big(d_{eucl}^2(q, \gamma)\big)^\frac{k}{2} \end{align} where $\gamma = \varphi(C_i)$, $d_{eucl}$ is the Euclidean distance and $d(q, \gamma)$ denotes the distance of the point $q$ in the chart from $\gamma$ (w.r.t. the isothermal metric). Since $\gamma$ is analytic in these coordinates by Lemma \ref{YMlemma}, the function $d^2_{eucl}(q, \gamma)$ is analytic. We want to prove that the quotient $\frac{d^2(q, \gamma)}{d^2_{eucl}(q, \gamma)}$ extends smoothly over $\gamma$. To this end, we look at the Taylor expansion of $d^2(q, \gamma)$ at a point on $\gamma$. First, change coordinates by a diffeomorphism $\psi(u, v) = (r, s)$, going to the normal coordinates for $\gamma$ w.r.t. the Euclidean metric (note this gives an analytic chart). Then we apply Lemma \ref{geomlemma} to get \[d^2\big((r, s), \psi \circ \gamma\big) = c s^2 + O\big((r^2 + s^2)^{\frac{3}{2}}\big)\] where $c > 0$ is a constant and $s$ is the normal variable. Therefore the quotient $D(q) := \frac{d^2(q, \gamma)}{d^2_{eucl}(q, \gamma)}$ has a smooth extension, since $d_{eucl}\big((r, s), \psi \circ \gamma\big) = |s|$ in these coordinates. Also, in the $(r, s)$ coordinates, equation \eqref{F'G'} gives that $H_{1}'(r, s)\, D(r, s)^{\frac{k}{2}}$ is analytic, and so $H_{1}'(u, v)\, D(u, v)^{\frac{k}{2}}$ is also analytic, since the diffeomorphism $\psi$ is analytic, too.
Finally, by going back to equation \eqref{Hextension}, we have that \[H'(q) = F'(q)G'^{-1}(q) = \frac{F'(q) \adj G'(q)}{y^k(q) g'_1(q)} = \frac{H_1'(q) \Big(\frac{d^2(q, \gamma)}{d^2_{eucl}(q, \gamma)}\Big)^\frac{k}{2} \big(d_{eucl}^2(q, \gamma)\big)^\frac{k}{2}}{g_1'(q) \Big(\frac{d^2(q, \gamma)}{d^2_{eucl}(q, \gamma)}\Big)^\frac{k}{2} \big(d_{eucl}^2(q, \gamma)\big)^\frac{k}{2}} \] Here $g_1' = \frac{g_1}{\det Y}$; we used \eqref{F'G'}, and the $d_{eucl}$ parts cancel to give an analytic function $H'$ in the $(u, v)$ coordinates. We also applied the same procedure as for \eqref{F'G'} to see that $g_1'(q) \Big(\frac{d^2(q, \gamma)}{d^2_{eucl}(q, \gamma)}\Big)^\frac{k}{2}$ is analytic. This finishes the procedure of drilling the holes. Finally, it remains to observe that the rest of the proof goes through essentially as in \cite{cek1} (see also Remark 5.2 in \cite{cek1}). \end{proof} \begin{rem}\rm In 2D, there are more powerful techniques to recover the connection (the same is true in the metric case) from the Dirichlet-to-Neumann map -- see e.g. \cite{AGTU}. However, it is useful to have another viewpoint on this problem, extending the technique of \cite{cek1} to this case; note also that this technique works for partial data, whereas the results of \cite{AGTU} are stated for full data. \end{rem}
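As a quick numerical illustration of the algebraic dictionary in Lemma \ref{complexrep}, the following sketch (ours, purely illustrative, assuming only \texttt{numpy}; the function name is a placeholder) evaluates $\mu$ and $\nu$ from \eqref{system1}--\eqref{system2} and checks the two characterisations used repeatedly above: $\nu$ is real when $A$ is symmetric, and purely imaginary when $\det A = 1$.
\begin{verbatim}
import numpy as np

def beltrami_coefficients(A):
    """mu, nu of a 2x2 ellipticity matrix A, via (system1)-(system2)."""
    det_IA = np.linalg.det(np.eye(2) + A)
    mu = (A[1, 1] - A[0, 0] - 1j * (A[0, 1] + A[1, 0])) / det_IA
    nu = (1 - np.linalg.det(A) + 1j * (A[0, 1] - A[1, 0])) / det_IA
    return mu, nu

# Symmetric A: nu should be real.
_, nu = beltrami_coefficients(np.array([[2.0, 0.3], [0.3, 1.5]]))
assert abs(nu.imag) < 1e-12

# det A = 1: nu should be purely imaginary.
A = np.array([[2.0, 0.5], [0.2, 0.55]])   # det A = 1.1 - 0.1 = 1
_, nu = beltrami_coefficients(A)
assert abs(nu.real) < 1e-12
\end{verbatim}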
\section{Introduction} \begin{figure}[t] \centering \includegraphics[width=5in]{./1_norm.pdf} \vspace{-2mm} \caption{(a) An illustration of the visualized patterns of convolutional filters. (b) $\ell_1$-norm based filter pruning and functionality-oriented pruning.}\label{fig:norm} \vspace{-3mm} \end{figure} Nowadays, Convolutional Neural Networks (CNNs) have been widely applied to various cognitive tasks (\textit{esp.} image recognition)~\cite{Kriz2012NIPS,luo2018taking,zhu2019sim}. This great success stems from CNNs' sophisticated model structures, which utilize multiple inter-connected layers of convolutional filters to hierarchically abstract input data features and assemble accurate prediction results~\cite{Gonzalez:2018:semantic,He:2016:CVPR}. However, increasing layer width and depth boosts not only accuracy but also computation load. For example, the 2.3$\times$ parameter size increment from AlexNet to VGG-16 introduces 21.4$\times$ more computation load in terms of FLOPs~\cite{Kriz2012NIPS,Simo:2014:arXiv}. Many optimization works have been proposed to relieve the CNN computation cost~\cite{Han:2015:deep,jaderberg2014speeding,Li:2016:pruning}, and filter pruning is considered one of the most efficient approaches. Filter pruning methods identify the convolutional filters with the smallest significance based on different metrics. By removing these filters and repeatedly retraining the model, the computation load can be significantly reduced while the prediction accuracy is well retained~\cite{he2018GM,zhuo2018scsp,yu2018nisp}. However, such significance-ranking based filter pruning methods have been increasingly questioned: On the one hand, the correlation between the filter weight and functionality significance (\textit{e.g.}, in~\cite{Li:2016:pruning}) is not theoretically proven. Some works have found that pruning certain small filters may cause a more severe accuracy drop than pruning large ones~\cite{ye2018rethinking}. On the other hand, the excessive dependence on the retraining process seriously questions the rationality of the conventional filter significance identification. Some works show that the retraining process actually reconstructs the CNN models because massive numbers of filters are inappropriately pruned~\cite{liu2018rethinking}. Therefore, to re-examine convolutional filter pruning, rather than merely ranking the filter significance quantitatively, it is urgent to interpret filter functionalities qualitatively and identify actual model redundancies. In this work, targeting image-cognitive CNNs, we utilized CNN visualization techniques to interpret the convolutional filter functionality~\cite{Zhou:2016:CVPR:Network-Dissection,zhou2014detectors,Yosinski:2015:ICML:AM}. Specifically, we adopted the Activation Maximization method to synthesize a specific input image for each filter~\cite{Yosinski:2015:ICML:AM}. These images can trigger the maximum activation of the corresponding filters, thus representing each filter's preferred input feature pattern as its particular functionality. As shown in Fig.~\ref{fig:norm}~(a), with the layer depth increasing, the visualized filter functionality patterns evolve from fundamental colors and shapes into recognizable objects. Based on the visualized interpretation, we discover that a certain number of repetitive filters with similar functionality patterns exist in each layer, which can be grouped into multiple functionality clusters (denoted by red boxes).
The functionality repetition in each cluster can be considered as a model structural redundancy at the filter level. Motivated by this discovery, we propose a functionality-oriented filter pruning method. In our method, regardless of the weight values, we find the best pruning ratio for every cluster in every layer to reduce the model structural redundancy. After the pruning, only a small amount of retraining effort is required to fine-tune the accuracy performance. Fig.~\ref{fig:norm}~(b) presents a set of filter pruning examples to demonstrate the difference between our proposed method and the conventional $\ell_1$-norm based filter pruning method. With the same amount of filters pruned, our method precisely addresses the redundant filters in every cluster and layer, resulting in a negligible accuracy drop. Experiment results show that, on CIFAR-10, our method reduces more than 43\% and 44.1\% FLOPs on ResNet-56 and VGG-16 respectively, achieving 0.05\% and 0.72\% relative accuracy improvement. On CIFAR-100, our method reduces more than 37.2\% FLOPs on VGG-16 without losing accuracy. On ImageNet, with 50.64\% and 23.8\% FLOPs reduction, our method can accelerate VGG-16 and ResNet-32 without accuracy drop. We also observe the filter functionality transition in the retraining process. Experiments reveal that the conventional filter pruning methods may significantly compromise filters' functionality integrity, resulting in considerable retraining cost, while our method demonstrates the expected accuracy-retaining capability and retraining independence. In the following sections, we proceed to specific design and evaluation details. \section{Related work} \label{sec:prelim} \textbf{Convolutional Filter Pruning} Previous filter pruning works can be roughly divided into two categories: (1) \textit{Post-Training Filter Pruning} methods are applied to pre-trained CNN models by identifying and pruning the insignificant filters based on particular weight-ranking schemes. For example,~\cite{Li:2016:pruning},~\cite{he2018soft}, and~\cite{molchanov2016taylor} utilized the $\ell_1$-norm, the $\ell_2$-norm, and Taylor expansion, respectively, to rank the filter weights for pruning in each layer. (2) \textit{Training Phase Filter Pruning} methods apply particular regularization constraints to the CNN model training phase, and enforce a certain amount of filters to become sufficiently small to be safely pruned (\textit{e.g.}, structured sparsity learning~\cite{wen2016learning}, structured Bayesian pruning~\cite{neklyudov2017bayesian}, and $\ell_0$ regularization~\cite{louizos2017learning}). Usually, the second-category methods can achieve better optimization performance due to their profound effects in the early training stage. However, the first-category methods have better practicability with wider application scenarios. In this work, we focus on renovating the first-category methods and compare our method with the corresponding state-of-the-art. \begin{comment} Channel Pruning~\cite{Li:2016:pruning} alternatively used LASSO regression based channel selection and feature map reconstruction to prune filters. Most of these works are based on quantitative analysis on the filter significance. However, such an approach is already questioned by some recent works:~\cite{ye2018rethinking} proposed to prune filter by enforcing sparsity on the scaling parameter of batch normalization layers. \cite{ye2018rethinking} prune filters by considering the filter correlations based on geometric median.
Both methods demonstrated that filters with small values also significantly affect the CNN performance. \end{comment} \noindent\textbf{Convolutional Filter Visualization} As the convolutional filters are designed to capture certain input features, the semantics of the captured feature can conclusively indicate the functionality of each filter~\cite{qin2018visualization,mahendran2015:Network-Inversion,zhou2018revisiting,qin2019captor}. However, the functionality is hard to interpret directly, which significantly hinders qualitative CNN model analysis and optimization development~\cite{Gonzalez:2018:semantic}. Recently, many CNN visualization works have been proposed to analyze CNN models from a functionality perspective:~\cite{Zhou:2016:CVPR:Network-Dissection} established the correlation between each filter and a specific semantic concept;~\cite{Yosinski:2015:ICML:AM} designed a novel visualization technique -- Activation Maximization -- to illustrate a filter's maximum activation pattern, which represents the filter's exclusive feature preference as its functionality. In this work, we utilize Activation Maximization (AM) as our major visualization tool for filter analysis. \section{Functionality-Oriented Filter Pruning Methods} \label{sec:pruning} \noindent\textbf{Proposed Method Overview} Based on the convolutional filter functionality and structural redundancy analysis, we propose a functionality-oriented filter pruning method. The proposed method consists of the following major steps: \vspace{0.5mm} (1) \textit{Filter Functionality Interpretation}: Given a pre-trained model, each filter's functionality is first interpreted by AM visualization; \vspace{0.5mm} (2) \textit{Functionality Redundancy Identification}: Based on proper similarity analysis on the visualized functionality patterns, the filters with repetitive functionalities are clustered together. These repetitive filters can be considered as redundant filters to be pruned; \vspace{0.5mm} (3) \textit{Filter Significance Identification}: Inside each cluster, based on gradient analysis, each filter's relative accuracy contribution is further evaluated. Such filter significance identification will be applied to determine the pruning priority in cluster-level pruning; \vspace{0.5mm} (4) \textit{Model-wise Filter Pruning}: Given a global pruning ratio $R$, we multiply it by the layer-wise coefficients to get each layer's actual pruning ratio $r_l$. The layer-wise coefficients are determined by each layer's pruning accuracy impact, as we will show later. For each layer, this layer-wise pruning ratio $r_l$ will be applied to every filter cluster. \vspace{0.5mm} (5) \textit{Model Fine-tuning}: After model pruning, a small number of retraining iterations might be applied to recover the potential accuracy drop. \vspace{0.5mm} The algorithm details are presented as follows: \noindent\textbf{Filter Functionality Interpretation} In AM visualization, each filter's functionality is defined as its feature extraction preference from the CNN inputs. The feature extraction preference of the $i_{th}$ filter $\mathcal{F}_i^l$ in the $l_{th}$ layer is represented by a synthesized input image $X$ that can cause the maximum activation of $\mathcal{F}_i^l$ (\textit{i.e.} the convolutional feature map value).
The synthesis process of such an input image can be formulated as: \vspace{-1.5mm} \begin{small} \begin{equation} V(\mathcal{F}^l_i)=\argmax_{X} A^l_i(X), \hspace{0.6cm} X \leftarrow X + \eta \cdot \frac{\partial A^l_i(X)}{\partial X}, \label{eq:am} \vspace{-1.5mm} \end{equation} \end{small} where $A^l_i(X)$ is the activation of filter $\mathcal{F}_i^l$ for an input image $X$, and $\eta$ is the gradient ascent step size. With $X$ initialized as an input image of random noise, each pixel of this input is iteratively changed along the gradient ascent direction $\partial A^l_i(X)/\partial X$ to achieve the maximum activation. Eventually, $X$ demonstrates a specific visualized pattern $V(\mathcal{F}^l_i)$, which contains the filter's most sensitive input features with certain semantics, and represents the filter's functional preference for feature extraction. \begin{figure}% \centering \parbox{0.35\textwidth}{ \begin{scriptsize} \begin{tabular}{p{0.3in}p{0.2in}p{0.2in}p{0.4in}} \toprule Layer & K & Filters & Ratio \\ \midrule Conv1\_1 & 15 & 62 & 96.8\% \\ Conv1\_2 & 17 & 64 & 100\% \\ Conv2\_1 & 20 & 128 & 100\% \\ Conv2\_2 & 31 & 121 & 94.5\% \\ Conv3\_1 & 26 & 251 & 98.0\% \\ Conv3\_2 & 24 & 251 & 98.0\% \\ Conv3\_3 & 14 & 239 & 93.4\% \\ Conv4\_1 & 27 & 447 & 87.3\% \\ Conv4\_2 & 28 & 437 & 85.4\% \\ Conv4\_3 & 38 & 439 & 85.7\% \\ Conv5\_1 & 46 & 500 & 97.7\% \\ Conv5\_2 & 61 & 503 & 98.2\% \\ Conv5\_3 & 40 & 506 & 98.8\% \\ \midrule Sum & 387 & 3942 & 93.3\% \\ \bottomrule \end{tabular} \end{scriptsize} \caption{Filter Cluster Summary} \label{tab:cluster} } \qquad \begin{minipage}[c]{0.5\textwidth}% \centering \includegraphics[width=1\textwidth]{./layer_model.pdf} \vspace{-5mm} \caption{Individual Layer's Accuracy Impact} \label{fig:sensi} \end{minipage} \vspace{-4mm} \end{figure} \vspace{1mm} \noindent\textbf{Functionality Redundancy Identification} To identify the functionality redundancy, we apply the \textit{k}-means algorithm with pixel-level Euclidean distance to cluster the filters with similar visualized patterns in each layer. The pixel-level Euclidean distance between the AM visualized patterns of filters $\mathcal{F}_i^{l}$ and $\mathcal{F}_k^{l}$ is formulated as: \vspace{-1.5mm} \begin{small} \begin{equation} S_{E}[V(\mathcal{F}_{i}^{l}), V(\mathcal{F}_{k}^{l})]=\|V(\mathcal{F}_i^{l}) - V(\mathcal{F}_k^{l}) \|^2, \label{eq:sd} \vspace{-1.5mm} \end{equation} \end{small} which indicates the functionality similarity of any two convolutional filters. To determine the proper number of clusters (\textit{i.e.}, $K$), we perform a grid search from one to half of the total filter number (\textit{i.e.}, $I_{l}/2$) in each layer. With a larger cluster number, smaller pattern differences are taken into consideration, so that many clusters may contain only one filter with minimal similarity to the others. These filters are treated as non-clustered filters and merged into a locked cluster; they are considered to have unique features and will not be pruned. The maximal $K$ reached during the grid search is selected as the final parameter. In Fig.~\ref{tab:cluster}, we show the filter cluster distribution based on \textit{k}-means analysis. For a VGG-16 model with 13 convolutional layers, the filters in each layer are grouped into 14 to 61 clusters. As the layer depth increases, the cluster number also becomes larger: in layer Conv5\_2, the cluster number is as large as 61. This is because the feature complexity increases, with more divergent visualized graphic patterns.
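To make the synthesis loop concrete, the following is a minimal PyTorch-style sketch of the gradient ascent in Eq.~\eqref{eq:am} (an illustration we add here, with placeholder input size, step count, and step size; it is not the exact implementation used in our experiments):
\begin{verbatim}
import torch

def activation_maximization(model, layer, filter_idx, steps=200, eta=0.1):
    """Synthesize V(F^l_i): gradient ascent on one filter's activation."""
    acts = {}
    def hook(module, inputs, output):
        acts['a'] = output                       # cache the layer's feature maps
    handle = layer.register_forward_hook(hook)
    x = torch.randn(1, 3, 224, 224, requires_grad=True)  # random-noise init
    for _ in range(steps):
        model(x)
        score = acts['a'][0, filter_idx].mean()  # activation A^l_i(X)
        model.zero_grad()
        score.backward()
        with torch.no_grad():
            x += eta * x.grad                    # X <- X + eta * dA^l_i/dX
            x.grad.zero_()
    handle.remove()
    return x.detach()
\end{verbatim}
The resulting images are then compared pairwise with Eq.~\eqref{eq:sd} and clustered by \textit{k}-means, as described above.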
Meanwhile, our proposed method can effectively cluster most filters. The minimum cluster ratio across all convolutional layers remains above 85\%. On average, about 93\% of the filters are well clustered throughout the whole model, indicating our method's sufficient filter redundancy analysis capability. \begin{comment} \begin{algorithm}[t] \caption{The Functionality-oriented Filter Pruning Algorithm} \begin{algorithmic}[1] \Require Pre-trained CNN, Number of layers L, and Number of filters in each layer $I_{l}$ \Ensure Pruned CNN \\Visualized pattern $V(\mathcal{F}_i^l)$ synthesizing for every filter in each layer by Eqn.(\ref{eq:am}) \\Visualized pattern clustering by K-means with pixel-level Euclidean-distance (Eqn.(\ref{eq:sd})) \\\hspace{4.5mm} Merge the one filter clusters into locked cluster \\Ranking the filter in each cluster based on contribution index by Eqn.(\ref{eq:contri}) \\\textbf{Repeat} \\\hspace{4.5mm} Parallel pruning each layer based on layer wise sensitivity. \\\hspace{4.5mm} Fine-tuning \\\textbf{Until} Reaching the target trade-off between accuracy and computation reduction. \end{algorithmic} \label{alg:pruning} \end{algorithm} \end{comment} \vspace{1mm} \noindent\textbf{Filter Significance Identification} Ideally, the filters with the same functionality can substitute each other. However, they may still have slight differences in their contribution to the prediction accuracy. To identify each filter's contribution to the output, we use a first-order Taylor expansion to approximate the CNN output variation under the filter's impact: \vspace{-1.5mm} \begin{small} \begin{equation} Z(A_{i}^{l}+\Delta) = Z(A_{i}^{l}) + \frac{\partial Z(A_{i}^{l})}{\partial A_{i}^{l}} \cdot \Delta, \quad \text{where } \Delta = -A_{i}^{l}, \label{eq:1} \vspace{-1.5mm} \end{equation} \end{small} where $Z(A_{i}^{l}+\Delta)$ is the CNN output loss and $A_{i}^{l}$ is the $i_{th}$ filter's output feature map in layer $l$. When filter $\mathcal{F}_{i}^{l}$ is pruned, its output feature map is correspondingly set to zero, i.e. the $i_{th}$ dimension of $A^l$ is changed to zero, so that $\Delta = -A_i^l$. Therefore, the influence on $Z$ can be quantitatively evaluated by the coefficient $\frac{\partial Z(A_i^l)}{\partial A_i^l}$. Before pruning, each filter $\mathcal{F}^{(c,l)}_i$ in cluster $C_l^k$ of layer $l$ is first ranked by the contribution index, which is calculated by examining the average gradients: \vspace{-1.5mm} \begin{small} \begin{equation} \medmuskip=-2mu I(\mathcal{F}^{(c,l)}_i) = \frac{1}{N}\sum_{n=1}^{N} \left \| \frac{\partial Z(F, A^{l})}{\partial A_{i}^{l}(x_{n})} \right \|, \label{eq:contri} \vspace{-1.5mm} \end{equation} \end{small} where $Z(F, A^{l})$ is the CNN output loss for a test image $x_n$, and $A_i^l(x_{n})$ is the feature map of filter $\mathcal{F}^{(c,l)}_i$ for that image. \begin{figure}[t] \centering \includegraphics[width=5in]{./2_pruning_visua.pdf} \vspace{-1.5mm} \caption{Case study of the functionality-oriented filter pruning on the Conv3\_2 and Conv5\_2 of VGG-16. The convolutional filters are shown by their visualized patterns, and aligned according to the contribution index increment.} \label{fig:pruning} \vspace{-4mm} \end{figure} \vspace{1mm} \noindent\textbf{Filter Pruning and Fine-Tuning} Based on the functionality redundancy and filter significance identification, we can proceed to the stage of model-wise convolutional filter pruning.
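For concreteness, Eq.~\eqref{eq:contri} admits a direct implementation. The following PyTorch-style sketch (again our illustration with placeholder names, rather than the exact implementation used in the experiments) accumulates, for each filter, the norm of the loss gradient w.r.t. its feature map, averaged over test batches:
\begin{verbatim}
import torch

def contribution_index(model, layer, loss_fn, data_loader):
    """Per-filter average gradient norm ||dZ/dA^l_i||, cf. Eq. (eq:contri)."""
    feats = {}
    def hook(module, inputs, output):
        output.retain_grad()           # keep the gradient on feature map A^l
        feats['a'] = output
    handle = layer.register_forward_hook(hook)
    total, batches = 0.0, 0
    for x, y in data_loader:
        out = model(x)
        model.zero_grad()
        loss_fn(out, y).backward()
        g = feats['a'].grad            # shape: (batch, filters, H, W)
        total = total + g.flatten(2).norm(dim=2).mean(dim=0)
        batches += 1
    handle.remove()
    return (total / batches).detach()  # one score per filter in the layer
\end{verbatim}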
Given a global model pruning ratio $R$, the layer-wise pruning ratio $r_{l}$ of each layer is determined by multiplying $R$ by the layer-wise coefficients, which are set according to each layer's pruning accuracy impact. For example, Fig.~\ref{fig:sensi} shows the layer-wise network accuracy impact for our method and the $\ell_1$-norm based method. Clearly, the shallow layers demonstrate a higher accuracy impact, while the deeper layers have a lower impact. Therefore, we prune more gently in shallower layers but more aggressively in deeper layers. The layer-wise coefficients of the pruning ratios $r_{l}$ from the shallow layers to the deep layers (conv1\_x to conv5\_x) are thus set to 0.25:0.125:0.125:0.375:0.375. For other models, dedicated pruning ratios are discussed in later experiments. Meanwhile, we can see that our proposed pruning has a slower accuracy degradation rate compared with the $\ell_1$-norm based pruning method in the majority of layers. This means the redundant filters can be more accurately identified and pruned through the proposed interpretive functionality-oriented filter pruning approach. Fig.~\ref{fig:pruning} shows our functionality-oriented filter pruning examples with the convolutional layers Conv3\_2 (a) and Conv5\_2 (b) with $r_l=$ 30\%. The convolutional filters are shown by their visualized functionality patterns, and the filters with similar patterns are grouped into multiple clusters. When the layer-wise pruning ratio is assigned, the filters with small contribution index values are pruned first. According to our proposed method, more filters are pruned from larger clusters. This makes sense, since such balanced functionality reduction leads to an unbiased functionality composition similar to the original model's. Therefore, different amounts of filters are pruned across different clusters, but all the cluster pruning ratios are $\sim$30\%, as shown in Fig.~\ref{fig:pruning}. Leveraging the convolutional filters' qualitative functionality analysis, our proposed method exploits neural network interpretability for more accurate redundant filter identification, faster pruning, and better computation efficiency. After the model-wise pruning, a small amount of model fine-tuning will be applied to recover the potential accuracy drop, as we will show in later experiments. \section{Experiments} We evaluate our proposed method with both single-branch CNN models (\textit{i.e.}, ConvNet, VGG~\cite{Simo:2014:arXiv}) and a multiple-branch CNN model (\textit{i.e.}, ResNet~\cite{He:2016:CVPR}) on CIFAR-10/100~\cite{cifar10} and a subset of ImageNet~\cite{imagenet}. A data augmentation procedure is applied to the CIFAR-10/100 datasets through horizontal flips and random crops, generating a 4-pixel padded training dataset. The visualization analysis and filter pruning are implemented in the Caffe environment~\cite{jia2014caffe}. The contribution index is calculated using the whole training dataset for more accurate CNN model analysis. The retraining process for our method is executed for only 40 epochs (\textit{i.e.}, 1/4 of the original training epochs) with a constant learning rate of 0.001, while the retraining process for the compared state-of-the-art methods is applied until model convergence. Specifically, the compared works include $\ell_1$-norm Pruning~\cite{Li:2016:pruning}, Taylor Pruning~\cite{molchanov2016taylor}, Geometric Median Pruning (GM)~\cite{he2018GM}, and Channel Pruning~\cite{he2017channel}.
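For reference, the cluster-balanced selection at the core of the pruning step can be sketched in a few lines of Python (an illustration with placeholder names -- \texttt{clusters}, \texttt{index}, \texttt{locked} -- not our released implementation):
\begin{verbatim}
def select_filters_to_prune(clusters, index, r_l, locked=frozenset()):
    """Prune ~r_l of each cluster, lowest contribution index first."""
    pruned = []
    for members in clusters:            # one list of filter ids per cluster
        keep = [f for f in members if f not in locked]
        k = int(len(keep) * r_l)        # the same ratio r_l in every cluster
        pruned += sorted(keep, key=lambda f: index[f])[:k]
    return pruned
\end{verbatim}
Because the ratio is applied per cluster, larger clusters lose more filters, which is exactly the balanced behavior shown in Fig.~\ref{fig:pruning}.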
\vspace{-2mm} \subsection{CIFAR Experiment} \vspace{-2mm} In this section, we first evaluate our method on two CIFAR image datasets, namely CIFAR-10 and CIFAR-100. The experimental results are shown in Table~\ref{tab:CIFAR}. \begin{table}[t] \small \caption{CIFAR Pruning Comparison} \centering \begin{threeparttable} \begin{tabular}{lllllll} \toprule CNN& Pruning & Baseline & FLOPs & FLOPs & Prune & Retrain \\ Models&Methods & acc. (\%) & (x$10^{8}$) & $\downarrow$ (\%) & acc. (\%) & acc. (\%) \\ \bottomrule ConvNet &$\ell_1$-norm* & 90.05 & 5.34 & 37.4 & 83.29 & 88.53 \\ (CIFAR10) &Taylor* & 90.05 & 5.34 & 37.4 & 85.29 & 89.42 \\ &Ours~(40\%) & 90.05 & 5.34 & 37.4 & \textbf{87.88} & \textbf{90.04}\\\bottomrule VGG-16 &$\ell_1$-norm~\cite{Li:2016:pruning}& 93.25 & 2.06 & 34.2 & - & 93.40 \\ (CIFAR10) &Taylor* & 93.25 & 1.85 & 44.10 & 73.24 & 92.31 \\ &GM~\cite{he2018GM} & 93.58 & 2.01 & 35.9 & 80.38 & \textbf{94.00}\\ &Ours~(45\%) & 93.25 & 1.85 & \textbf{44.10}& \textbf{91.13} & 93.30 \\\bottomrule ResNet-56 &$\ell_1$-norm~\cite{Li:2016:pruning} & 93.04 & 0.91 & 27.60 & - & 93.06 \\ (CIFAR10) &Taylor* & 92.85 & 0.71 & 43.00 & 76.32 & 92.01 \\ &Channel~\cite{he2017channel} & 92.80 & 0.56 & \textbf{50.00} & - & 93.23 \\ &Ours~(40\%) & 92.85 & 0.71 & 43.00 & 81.13 & \textbf{93.30} \\\bottomrule VGG-16 &$\ell_1$-norm* & 73.14 & 1.96 & 37.32 & 63.21 & 72.31 \\ (CIFAR100)&Taylor* & 73.14 & 1.96 & 37.32 & 65.19 & 72.52 \\ &Ours~(45\%) & 73.14 & 1.96 & 37.32 & \textbf{68.21} & \textbf{73.21}\\\bottomrule \end{tabular} \label{tab:CIFAR} \begin{tablenotes} \footnotesize \item ``*'' indicates our implementation. Ours(40\%) means 40\% filters are pruned by our method. \end{tablenotes} \end{threeparttable} \vspace{-4mm} \end{table} \vspace{1mm} \noindent\textbf{ConvNet on CIFAR-10} Our ConvNet is designed based on the AlexNet model with 5 convolutional layers, which contain 96-256-384-384-256 filters, respectively. All the convolutional filters are constructed with a $3\times3$ kernel size. The baseline test accuracy of our ConvNet is 90.05\%. As shown in Table~\ref{tab:CIFAR}, our method pruned 40\% of the convolutional filters without accuracy drop and achieved a 37.4\% computation load reduction. We also implemented the $\ell_1$-norm and Taylor pruning for comparison. These methods pruned the same amount of filters as our method to achieve the same amount of FLOPs reduction. As shown in Table~\ref{tab:CIFAR}, our method achieves better performance in terms of both pruned accuracy and retrained accuracy. \vspace{1mm} \noindent\textbf{VGG-16 on CIFAR-10} We implemented a VGG-16 model following the same pre-processing and hyper-parameter configurations as~\cite{Li:2016:pruning}; the model consists of 13 convolutional layers and 2 fully-connected layers. Each convolutional layer is followed by a batch normalization layer. As shown in Table~\ref{tab:CIFAR}, our method has better performance than the $\ell_1$-norm and Taylor pruning. Under a pruning ratio of 45\%, our proposed method can achieve a 44.1\% FLOPs reduction with the accuracy well retained. Although GM pruning achieves an even better accuracy after the retraining process, it only reduces the computation load by 35.9\%, about 18.6\% (relatively) less than our proposed method. \vspace{1mm} \noindent\textbf{ResNet-56 on CIFAR-10} ResNet-56 is a multiple-branch CNN model, which contains three convolutional stages of residual blocks connected by projection mapping channels, one global average pooling layer, and one fully-connected layer.
After being trained on CIFAR-10 from scratch using the same training parameters as~\cite{He:2016:CVPR}, the model achieves a baseline accuracy of 92.85\%. To avoid changing the input and output feature maps of each residual block, we only prune filters from the first layers of each block, as in the $\ell_1$-norm method. Considering that the ResNet-56 model has many layers, we set the pruning ratio of all convolutional layers to the same value in this experiment. Our method demonstrates a better accuracy-retaining capability compared to the $\ell_1$-norm and Taylor pruning. Compared to Channel Pruning, although our computation load reduction is slightly smaller, our implementation is much more straightforward, considering that Channel Pruning requires an additional multiple-branch enhancement to generalize to ResNet. \vspace{1mm} \noindent\textbf{VGG-16 on CIFAR-100} We further evaluated our pruning method on the CIFAR-100 dataset. Using the same VGG-16 model, our method introduces less accuracy drop than the state-of-the-art with the same computation load reduction. Meanwhile, the accuracy drop caused by our pruning method can be fully recovered, with a 0.07\% accuracy improvement. \vspace{-2mm} \subsection{ImageNet Experiment} \vspace{-2mm} In the evaluations on ImageNet, subsets with 10 and 100 of the 1000 classes, namely ImageNet-10 and ImageNet-100, are utilized in this work. Each class contains 1300 training images and 50 validation images. We evaluate our proposed method with the VGG-16 and ResNet-32 models on ImageNet-10 and ImageNet-100, respectively. The VGG-16 model is implemented with the same architecture as the VGG-16 model on CIFAR. The proportion of the pruning ratios from shallow layers to deep layers (conv1\_x to conv5\_x) is set to 2:1:1:3:3. The ResNet-32 model for ImageNet has three stages of residual blocks, which contain 3, 4, and 2 residual blocks, respectively, and each residual block has three convolutional layers. As with the ResNet-56 model, in each residual block only the first two convolutional layers are pruned, keeping the input and output feature maps identical. The pruning ratio of all convolutional layers is set to the same value for simplicity. The results of the ImageNet pruning comparison are shown in Table~\ref{tab:imagenet}. \noindent\textbf{ImageNet-10} Table~\ref{tab:imagenet} shows that our method outperforms previous methods on the ImageNet-10 dataset. For the VGG-16 model, a large accuracy loss occurs for all three pruning methods. However, our proposed method induces less accuracy degradation than the $\ell_1$-norm and Taylor pruning. With retraining, our method achieves higher accuracy than the baseline. With ResNet-32, it is hard to recover the accuracy drop through retraining on the ImageNet-10 dataset, and hence the acceptable pruning rate in this scenario is relatively small. As shown in Table~\ref{tab:imagenet}, we can achieve a 32.90\% FLOPs reduction on ResNet-32 with only a 0.06\% accuracy drop. \noindent\textbf{ImageNet-100} With larger image data complexity and more complex class composition, the feature extraction becomes more complex, which makes filter functionality identification and clustering more challenging. Our method still outperforms previous methods on the ImageNet-100 dataset. As shown in Table~\ref{tab:imagenet}, we can achieve a 50.64\% FLOPs reduction on VGG-16 with a 1\% accuracy improvement. For ResNet-32, our method still achieves a smaller accuracy drop and a better retrained accuracy than the previous methods.
\begin{table}[t] \small \caption{ImageNet Pruning Comparison} \centering \begin{threeparttable} \begin{tabular}{lllllll} \toprule CNN &Pruning & Baseline & FLOPs & FLOPs & Prune & Retrain \\ Models &Methods & acc. (\%) & (x$10^{10}$) & $\downarrow$ (\%)& acc. (\%) & acc. (\%) \\ \bottomrule VGG-16 &$\ell_1$-norm* & 93.76& 0.57 & 62.99 & 21.22 & 92.26 \\ (ImageNet-10) &Taylor* & 93.76& 0.57 & 62.99 & 24.37 & 93.12 \\ &Ours(40\%) & 93.76& 0.57 & 62.99 & \textbf{25.13} & \textbf{94.34}\\ \bottomrule ResNet-32 &$\ell_1$-norm* & 88.31& 1.55 & 32.90 & 80.37 & 84.75 \\ (ImageNet-10) &Taylor* & 88.31& 1.55 & 32.90 & 81.03 & 85.12 \\ &Ours(30\%) & 88.31& 1.55 & 32.90 & \textbf{81.25} & \textbf{88.25}\\ \bottomrule VGG-16 &$\ell_1$-norm* & 78.51& 0.76 & 50.64 & 36.35 & 76.56 \\ (ImageNet-100) &Taylor* & 78.51& 0.76 & 50.64 & 40.28 & 77.32 \\ &Ours(30\%) & 78.51& 0.76 & 50.64 & \textbf{43.25} & \textbf{79.51}\\\bottomrule ResNet-32 &$\ell_1$-norm* & 75.34& 1.76 & 23.81 & 63.25 & 72.52 \\ (ImageNet-100) &Taylor* & 75.34& 1.76 & 23.81 & 68.14 & 73.43 \\ &Ours(20\%) & 75.34& 1.76 & 23.81 & \textbf{70.21} & \textbf{75.40}\\ \bottomrule \end{tabular} \label{tab:imagenet} \begin{tablenotes} \footnotesize \item ``*'' indicates our implementation. Ours(40\%) means 40\% filters are pruned by our method. \vspace{-5mm} \end{tablenotes} \end{threeparttable} \end{table} \vspace{-2mm} \subsection{CNN Model Retraining Analysis} \vspace{-2mm} In most filter pruning works, the retraining process is essential to compensate for the accuracy drop. However, as aforementioned, its role still lacks thorough study. In this work, we also analyze the retraining process in filter pruning quantitatively and qualitatively. Fig.~\ref{fig:Retrain} compares the retraining processes of our method and the $\ell_1$-norm based pruning method with ConvNet and VGG-16 on CIFAR-10. Comparing our method (solid line) with the $\ell_1$-norm based method (dashed line), we can observe that the models pruned by our method always demonstrate quicker accuracy recovery. Taking VGG-16 as an example, our method is close to the convergence accuracy after only 100 iterations, with less retraining dependency, while the accuracy of the $\ell_1$-norm based method is still $\sim$3\% lower than ours, resulting in significantly more retraining effort. We also use AM visualization to analyze the filter functionality transition during the retraining process, as shown in Fig.~\ref{fig:Retrain_pattern}. We randomly choose and visualize one preserved filter after pruning with our method and the $\ell_1$-norm based method. During the retraining process, we visualize its functionality pattern every $100$ iterations. From Fig.~\ref{fig:Retrain_pattern}, we can see that the visualized pattern after the $\ell_1$-norm pruning demonstrates a dramatic change. This indicates that the $\ell_1$-norm based method significantly damages the original CNN model's functionality integrity, and the retraining process has to reconfigure the filter's functionality to compensate. Meanwhile, the filter functionality pattern remains unchanged in our method, which indicates our method's precise redundant filter identification. \begin{comment} This implies why our pruning method causes significantly less accuracy drop and requires less retraining efforts: The $\ell_1 norm$ based method partially destructs the original neural network's functionality composition, and therefore rebuilds the filter's functionality to recover the original composition.
As a result, it needs more retraining iterations to restore the model accuracy. By contrast, we introduce less influence to the original model functionality with our balanced pruning method. Therefore, the accuracy drop is not so severe as other methods, and the costly retraining process becomes less necessary, as our retraining analysis demonstrates. \end{comment} \begin{figure}[t] \centering \includegraphics[width=5in]{./Retrain.pdf} \vspace{-2mm} \caption{Pruned Model Accuracy Recovery by Retraining.} \label{fig:Retrain} \vspace{-2mm} \end{figure} \begin{figure}[t] \centering \includegraphics[width=5in]{./Retrain_visua_wide.pdf} \vspace{-3mm} \caption{Filter Functionality Transformation During Retraining.} \label{fig:Retrain_pattern} \vspace{-3mm} \end{figure} \section{Conclusion} \vspace{-2mm} In this work, through convolutional filter AM visualization and functionality analysis, we first demonstrate that filter redundancy exists in the form of functionality repetition. Then, we show that such functionally repetitive filters can be effectively pruned from CNNs to reduce computation redundancy. Based on this motivation, we propose an interpretable functionality-oriented filter pruning method: by first interpreting and clustering filters with the same functionality, we remove the repetitive filters with the smallest significance and contribution in a balanced manner inside each cluster. This implicitly helps maintain a functionality composition similar to the original model's, and thus brings less damage to the model accuracy. Extensive experiments on CIFAR and ImageNet demonstrate the superior performance of our pruning method over state-of-the-art methods. By analyzing the functionality changes of the remaining filters in the retraining process, we further support our assumption that $\ell_1$-norm based pruning partially destroys the original CNN's functionality integrity. By contrast, our method shows consistent filter functionality during the retraining process, demonstrating less harm to the original model functionality. \section*{Acknowledgments} This work was supported in part by NSF CNS1717775.
\newcommand{\e}[1]{\exp\left( #1\right)} \newtheorem{Theorem}{Theorem}[section] \newtheorem{Lemma}[Theorem]{Lemma} \newtheorem{Proposition}[Theorem]{Proposition} \newtheorem{Corollary}[Theorem]{Corollary} \newtheorem{Remark}[Theorem]{Remark} \theoremstyle{remark} \newtheorem*{Note}{Note} \newtheorem*{Example}{\bf Example} \newtheorem*{Claim}{\it Claim} \numberwithin{equation}{section} \linespread{1.1} \date{\today} \begin{document} \title[Exchange stable matchings]{On random exchange-stable matchings} \author{Boris Pittel} \address{Department of Mathematics, The Ohio State University, Columbus, Ohio 43210, USA} \email{[email protected]} \keywords{stable matchings, exchange, random preferences, asymptotics} \subjclass[2010]{05C30, 05C80, 05C05, 34E05, 60C05} \begin{abstract} Consider the group of $n$ men and $n$ women, each with their own preference list for a potential marriage partner. The stable marriage is a bipartite matching such that no unmatched pair (man, woman) prefer each other to their partners in the matching. Its non-bipartite version, with an even number $n$ of members, is known as the stable roommates problem. Jose Alcalde introduced an alternative notion of an exchange-stable, one-sided, matching: no two members prefer each other's partners to their own partners in the matching. Katarina Cechl\'arov\'a and David Manlove showed that the e-stable matching decision problem is $NP$-complete for both types of matchings. We prove that the expected number of e-stable matchings is asymptotic to $\left(\frac{\pi n}{2}\right)^{1/2}$ for the two-sided case, and to $e^{1/2}$ for the one-sided case. However, the standard deviation of this number exceeds $1.13^n$ ($1.06^n$, resp.). As an obvious byproduct, there exist instances of preference lists with at least $1.13^n$ ($1.06^n$, resp.) e-stable matchings. The probability that there is no matching which is stable and e-stable is at least $1-e^{-n^{1/6-o(1)}}$ ($1-O(2^{-n/2})$, resp.). \end{abstract} \maketitle \section{Introduction and main results} Consider the group of $n$ men and $n$ women, each member with their own preference list for a potential marriage partner. The stable marriage is a bipartite matching such that no unmatched pair (man, woman) prefer each other to their partners in the matching. A classic theorem, due to David Gale and Lloyd Shapley \cite{GalSha}, asserts that, given any system of preferences, there exists at least one stable marriage $M$. The proof of this fundamental theorem was based on analysis of a proposal algorithm: at each step, the men not currently on hold each make a proposal to their best choice among the women who haven't rejected them before, and the chosen woman either provisionally puts the man on hold or rejects him, based on comparison of him to her current suitor if she has one already. The process terminates once every woman has a suitor, and the resulting bijection turns out to be stable. Of course, the roles can be reversed. In general, the two resulting matchings, $M_1$ and $M_2$, are different: one is men-optimal/women-pessimal, the other women-optimal/men-pessimal.
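To make the proposal dynamics concrete, here is a compact Python sketch of the men-proposing algorithm, in its sequential, one-proposal-at-a-time form (our illustration only; the data layout -- preference lists for the men, rank tables for the women -- is an assumption of the sketch):
\begin{verbatim}
def gale_shapley(men_prefs, women_rank):
    """Men-proposing deferred acceptance; returns the man-optimal
    stable matching as a dictionary man -> woman.

    men_prefs[m]     : m's list of women, most preferred first
    women_rank[w][m] : rank of man m on woman w's list (smaller = better)
    """
    n = len(men_prefs)
    next_choice = [0] * n     # pointer into each man's preference list
    fiance = {}               # woman -> suitor currently on hold
    free = list(range(n))     # men currently without a partner
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in fiance:
            fiance[w] = m                        # w had no suitor: hold m
        elif women_rank[w][m] < women_rank[w][fiance[w]]:
            free.append(fiance[w])               # w trades up, rejects old
            fiance[w] = m
        else:
            free.append(m)                       # w rejects m
    return {m: w for w, m in fiance.items()}
\end{verbatim}
Reversing the two sides yields the woman-optimal matching $M_2$.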
The interested reader is encouraged to consult Dan Gusfield and Rob Irving \cite{GusIrv} for a masterful, detailed analysis of the algebraic (lattice) structure of the stable matchings set, and a collection of proposal algorithms for determination of the stable matchings in between the two extremal matchings $M_1$ and $M_2$. A decade after the Gale-Shapley paper, McVitie and Wilson \cite{McVWil} developed an alternative, sequential, algorithm in which proposals by one side to another are made one at a time. This procedure delivers the same matching as the Gale-Shapley algorithm; the overall number of proposals made, say by men to women, is clearly the total rank of the women in the terminal matching. This purely combinatorial, numbers-free, description calls for a probabilistic analysis of the problem, with an instance chosen uniformly at random among all the instances of preference lists, whose total number is $(n!)^{2n}$. In a pioneering paper \cite{Wil} Wilson reduced the work of the sequential algorithm to a classic urn scheme (coupon-collector problem) and proved that the expected running time, whence the expected total rank of wives in the man-optimal matching, is at most $nH_n\sim n\log n$, $H_n=\sum_{j=1}^n 1/j$. A few years later Don Knuth \cite{Knu}, among other results, found that, in fact, the expected running time is asymptotic to $n\log n$, and also that the worst-case running time is $O(n^2)$, attributing the latter to an unpublished work by J. Bulnes and J. Valdes. He also posed a series of open problems, one of them on the {\it expected\/} number of the stable matchings. Don pointed out that an answer might be found via his formula for the probability $P(n)$ that a generic matching $M$ is stable: \begin{equation}\label{Pn=} P(n)=\overbrace {\idotsint}^{2n}_{\bold x,\,\bold y\in [0,1]^n}\,\prod_{1\le i\neq j\le n} (1-x_iy_j)\, d\bold x d\bold y. \end{equation} The expected value of $S_n$, the total number of stable matchings, would then be determined from $\textup{E\/}[S_n]=n!\, P(n)$. Following Don Knuth's suggestion, in \cite{Pit1} we used the equation \eqref{Pn=} to obtain an asymptotic formula $P(n)\sim\frac{e^{-1}n\log n}{n!}$, which implied that $\textup{E\/}[S_n]\sim e^{-1}n\log n$. We also found the integral formulas for $P_k(n)$ ($P_{\ell}(n)$, resp.), the probability that the generic matching $M$ is stable {\it and\/} the total man-rank $R(M)$ is $k$ (the total woman-rank $Q(M)$ is $\ell$, resp.). These integral formulas implied that with high probability (w.h.p. from now on) for each stable matching $M$ the ranks $R(M)$, $Q(M)$ are between $(1+o(1))n\log n$ and $(1+o(1))n^2/\log n$. It followed, with some work, that w.h.p. $R(M_1)\sim n^2/\log n$, $Q(M_1)\sim n\log n$ and $R(M_2)\sim n\log n$, $Q(M_2)\sim n^2/\log n$. In particular, w.h.p. $R(M_j)Q(M_j)\sim n^3$ ($j=1,2$). Spurred by these results, in \cite{Pit2} we studied the likely behavior of the full random set $\{(R(M),Q(M))\}$, where $M$ runs through all stable matchings for the random instance of preferences. We proved a {\it law of hyperbola\/}: for every $\lambda\in (0,1/4)$, quite surely (q.s.) $\max_{M}|n^{-3}Q(M)R(M)-1|\le n^{-\lambda}$; ``quite surely'' means with probability $1-O(n^{-K})$, for every $K$, a notion introduced by Knuth, Motwani and the author \cite{KnuMotPit}. Furthermore, q.s. $S_n\ge n^{1/2-o(1)}$, a significant improvement of the logarithmic bound in \cite{KnuMotPit}, but still far below $n\log n$, the asymptotic order of $\textup{E\/}[S_n]$.
Thus, for a large number of participants, a typical instance of the preference lists has multiple stable matchings very nearly obeying the preservation law for the product of the total man-rank and the total woman-rank. Eight years ago, with Craig Lennon \cite{LenPit}, we extended the techniques in \cite{Pit1}, \cite{Pit2} to show that $\textup{E\/}[S_n^2]\sim (e^{-2}+0.5e^{-3})(n\log n)^2$. Combined with $\textup{E\/}[S_n]\sim e^{-1}n\log n$, this result implied that $S_n$ is of order $n\log n$ with probability $0.84$, at least. A recent breakthrough study of the stable matchings in unbalanced settings by Itai Ashlagi, Yash Kanoria and Jacob Leshno \cite{AshKanLes} (see our follow-up analysis in \cite{Pit4}) proves that the probabilistic aspects of this classic combinatorial scheme continue to be a goldmine of interesting problems. In fact, the recent monograph by David Manlove \cite{Man} covers an astonishing variety of new matching models and algorithms, making some of them ripe for probabilistic study as well. In particular, David discussed an alternative notion of stability suggested by Jose Alcalde \cite{Alc}: a matching $M$ is called exchange-stable (e-stable) if no two members prefer each other's partners to their own partners under $M$. Actually, Jose dealt with the one-sided matchings, the so-called roommates assignment problem, but the notion of e-stability makes sense for two-sided matchings as well. Somehow, this elegant scheme reminded the author of the stochastic model \cite{Pit5} (see also our appendix to Michael L. Tsetlin's book \cite{Tse}). In that model a randomly chosen pair of city dwellers, currently housed in the residential areas $j_1$ and $j_2$, and employed by the plants $i_1$ and $i_2$, exchange their residencies with probability $\pi(t_{i_1,j_1}) \pi(t_{i_2,j_2})$, $t_{i,j}$ being the commute time from $j$ to $i$, and $\pi(t)$ monotone increasing with $t$. For the large total population $n$, the limiting matrix of the numbers $x_{i,j}$ of persons working in the $i$-th plant and living in the $j$-th residential district maximizes the weighted entropy $\sum_{i,j} x_{i,j}\log \frac{\nu_{i,j}}{x_{i,j}}$ subject to the row and column constraints. The numbers $\nu_{i,j}=(a_i/\pi(t_{i,j})) /(\sum_k 1/\pi(t_{i,k}))$, $a_i$ being the total roster of the plant $i$, can be interpreted as an ideal allocation of $n$ members among the residential areas, when they do not have to compete for the limited capacities of the residential areas. Katarina Cechl\'arov\'a and David Manlove \cite{CecMan} showed that, in sharp contrast to the classic stable matchings, the e-stable matching decision problem is $NP$-complete for both types of matchings. (It is a good place to mention that the ``fundamental proposal algorithm'' constructed by Rob Irving for the one-sided stable matchings has an $O(n^2)$ worst-case running time \cite{GusIrv}.) This surprising result in \cite{CecMan} prodded us to look at the {\it likely\/} behavior of the e-stable matchings. We prove that the expected number of e-stable matchings is asymptotic to $\left(\frac{\pi n}{2}\right)^{1/2}$, definitely smaller than $e^{-1}n\log n$ for the classic stable matchings \cite{Pit0}, but in the same league qualitatively. Somehow we felt that the second moment of the number of e-stable matchings would grow like $n^{\gamma}$, for some $\gamma\ge 1$ of course.
That has been the case so far with the stable matchings, bipartite and non-bipartite, and also the stable partitions introduced and studied, algorithmically, by Jimmy Tan \cite{Tan}, \cite{Tan1}; see \cite{Pit3} for the probabilistic results. However, as a possible reflection of the substantial algorithmic complexity of Alcalde's model, this second-order moment exceeds $1.28^n$, i.e. grows exponentially fast. Consequently, the standard deviation of the number of e-stable matchings exceeds $1.13^n$, signaling that the discernible right tail of the distribution of that number is much longer than the left tail. As an obvious byproduct, we claim the existence of preference lists with at least $1.13^n$ e-stable matchings. Similar bounds for the stable marriages have long been known; see \cite{Man}, Section 2.2.2, for discussion and references. However, those bounds were obtained via explicit constructions of the preference lists having exponentially many stable matchings. By the very nature of the probabilistic method we use, our claim is purely existential. We also consider the one-sided e-stable matchings on the set of $n$ (even) members, under the assumption that the instance of $n$ preference lists, each with $n-1$ positions, is chosen uniformly at random among all $[(n-1)!]^n$ such instances. We had proved that the expected number of the one-sided stable matchings converges to $e^{1/2}$, and that the standard deviation of this number is $\sim\left(\frac{\pi n}{4e}\right)^{1/4}$, approaching infinity moderately fast. In this paper we show that the expected number of the e-stable, one-sided matchings is exactly the same, i.e. $e^{1/2}$ in the limit. However, its standard deviation is exponentially large, $1.06^n$ at least, in qualitative harmony with the two-sided e-stable matchings. Can the overwhelming ``asymmetry'' of the distribution of the number of e-stable matchings be a hint that, with probability approaching $1$, at least one such matching exists? ``Overwhelming'' is a key word here: for the classic one-sided stable matching problem, where the standard deviation is of order $n^{1/4}$ only, the limiting probability that a solution exists is below $0.5e^{1/2}<1$, \cite{IrvPit}. We show that q.s., uniformly for every e-stable matching $M$, two-sided or one-sided, the arithmetic average of the partners' ranks is asymptotic to $n^{1/2}$, just as for the stable one-sided matchings on $[n]$, \cite{Pit2}. Katarina Cechl\'arov\'a and David Manlove \cite{CecMan}, Rob Irving \cite{Irv1}, Eric McDermid, Christine Cheng and Ichiro Suzuki \cite{McDCheSuz} studied matchings that are doubly stable, i.e. both classically stable and (coalition)-exchange-stable. We are back to e-stability when the coalition size is restricted to $2$. It was proved in \cite{CecMan} that, for unrestricted coalition size, a doubly stable marriage exists only if a stable marriage is unique. Strikingly, a two-sided instance does not necessarily admit a stable matching which is simply man-exchange-stable, \cite{Irv1}. We prove that this kind of incompatibility holds for almost all large-size instances of two-sided and one-sided preference lists. More precisely, the probability that there is no doubly stable matching is at least $1-e^{-n^{1/6-o(1)}}$ (two-sided case), and $1-O(2^{-n/2})$ (one-sided case). \section{Basic identities and bounds} \subsection{Two-sided matchings} Consider an instance of the $n$-men/$n$-women matching problem under preferences chosen uniformly at random among all $(n!)^{2n}$ such instances.
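The e-stability condition itself is elementary to test: one scans the $\binom{n}{2}$ pairs of men and the $\binom{n}{2}$ pairs of women for a pair preferring each other's partners. The following Python sketch (ours, purely illustrative; the rank-matrix layout is an assumption of the illustration) makes this explicit:
\begin{verbatim}
from itertools import combinations

def is_exchange_stable(M, x_rank, y_rank):
    """Two-sided e-stability of the matching M (man i -> woman M[i]).

    x_rank[i][j] : rank of woman j on man i's list (smaller = better)
    y_rank[j][i] : rank of man i on woman j's list
    """
    n = len(M)
    M_inv = {M[i]: i for i in range(n)}
    for i1, i2 in combinations(range(n), 2):      # pairs of men
        if (x_rank[i1][M[i2]] < x_rank[i1][M[i1]] and
                x_rank[i2][M[i1]] < x_rank[i2][M[i2]]):
            return False
    for j1, j2 in combinations(range(n), 2):      # pairs of women
        if (y_rank[j1][M_inv[j2]] < y_rank[j1][M_inv[j1]] and
                y_rank[j2][M_inv[j1]] < y_rank[j2][M_inv[j2]]):
            return False
    return True
\end{verbatim}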
We need to derive the integral (Knuth-type) formulas for $\textup{ P\/}(M)$, the probability that a matching $M$ is exchange-stable (e-stable); $\mathcal P(M)$, the probability that $M$ is both e-stable and stable; and $\textup{ P\/}(M_1,M_2)$, the probability that two matchings $M_1\neq M_2$ are each e-stable. It is convenient to view $M$ as a bijection from, say, the men set to the women set. Observe that the uniformly random instance of the $2n$ preference lists can be generated as follows. Introduce two $n\times n$ arrays of the $2n^2$ independent random variables, $X_{i,j}$ ($1\le i,\, j\le n$) and $Y_{i,j}$ ($1\le i,\, j\le n$), each distributed uniformly on $[0,1]$. Assume that each man $i$ (woman $j$ resp.) ranks the women (men resp.) in increasing order of the variables $X_{i,k}$, $1\le k\le n$, ($Y_{\ell,j}$, $1\le\ell\le n$). Each of the resulting $2n$ orderings is uniform, and all the orderings are independent. \begin{Lemma}\label{P(Mest)=} Let $(a,b)$ stand for a generic, unordered, pair of distinct elements of $[n]$. Then, for every matching $M$, \begin{equation}\label{P(Mest)=int^2} \begin{aligned} \textup{ P\/}(M)&=\left(\idotsint\limits_{\bold x\in [0,1]^{n}}\prod_{(a,b)}(1-x_ax_b)\,d\bold x\right)^2,\\ \mathcal P(M)&=\idotsint\limits_{\bold x,\,\bold y\in [0,1]^{n}}\mathcal P(M |\bold x,\bold y)\,d\bold xd\bold y, \end{aligned} \end{equation} where \begin{equation}\label{mathcalP(M|)=} \begin{aligned} \mathcal P(M |\bold x,\bold y)&=\prod_{(i_1,i_2)}\!\!\textup{ P\/}\Bigl(\bigl\{X_{i_1,M(i_2)}<x_{i_1},\,Y_{i_1,M(i_2)}<y_{M(i_2)}\bigr\}^c\\ &\qquad\qquad\cap\bigl\{X_{i_2,M(i_1)}<x_{i_2},\,Y_{i_2,M(i_1)}<y_{M(i_1)}\bigr\}^c\\ &\qquad\qquad\cap\bigl\{X_{i_1,M(i_2)}<x_{i_1},\,X_{i_2,M(i_1)}<x_{i_2}\bigr\}^c\\ &\qquad\qquad\cap\bigl\{Y_{i_1,M(i_2)}<y_{M(i_2)},\,Y_{i_2,M(i_1)}<y_{M(i_1)}\bigr\}^c\Bigr). \end{aligned} \end{equation} \end{Lemma} \begin{proof} $M$ is e-stable if and only if we have \begin{align*} &\forall\, (i_1, i_2),\,X_{i_1, M(i_1)}> X_{i_1, M(i_2)}\Longrightarrow X_{i_2, M(i_2)}<X_{i_2, M(i_1)},\\ &\forall\, (j_1, j_2),\, Y_{M^{-1}(j_1), j_1}>Y_{M^{-1}(j_2), j_1}\Longrightarrow Y_{M^{-1}(j_2), j_2}< Y_{M^{-1}(j_1), j_2}. \end{align*} We can say that a pair of men $(i_1,\,i_2)$ (a pair of women $(j_1,j_2)$ resp.) blocks the matching $M$, i.e. prevents $M$ from being e-stable, if $X_{i_1, M(i_1)}> X_{i_1, M(i_2)}$, $X_{i_2, M(i_2)} >X_{i_2, M(i_1)}$ (if $Y_{M^{-1}(j_1), j_1} > Y_{M^{-1}(j_2), j_1}$, $Y_{M^{-1}(j_2), j_2}> Y_{M^{-1}(j_1), j_2}$ resp.). So $M$ is e-stable if no two men block $M$ and no two women block $M$. By independence of the matrices $\{X_{i,j}\}$ and $\{Y_{i,j}\}$, the $\binom{n}{2}$ first-line events and the $\binom{n}{2}$ second-line events are collectively independent. Furthermore, conditioned on $\{X_{i, M(i)}=x_i,\,i\in [n]\}$ (on $\{Y_{M^{-1}(j), j}=y_j,\, j\in [n]\}$ resp.) the $\binom{n}{2}$ events in the first (second resp.) line are independent among themselves.
Therefore \begin{equation}\label{P(Mest|x,y)=} \begin{aligned} &\textup{ P\/}\Bigl(M\text{ is e-stable}\,\boldsymbol |\, X_{i, M(i)}=x_i,\,Y_{M^{-1}(j), j}=y_j,\,i,j\in [n]\Bigr)\\ &\qquad\quad\qquad=\prod_{(i_1, i_2)}\!\!\textup{ P\/}\Bigl(\bigl\{X_{i_1, M(i_2)}< x_{i_1},\, X_{i_2, M(i_1)}<x_{i_2}\bigr\}^c\Bigr)\\ &\qquad\qquad\qquad\cdot\!\!\prod_{(j_1,j_2)}\!\!\textup{ P\/}\Bigl(\bigl\{Y_{M^{-1}(j_2), j_1}<y_{j_1},\,Y_{M^{-1}(j_1), j_2}<y_{j_2}\bigr\}^c\Bigr)\\ &\qquad\qquad\quad=\prod_{(i_1, i_2)}\bigl(1-x_{i_1} x_{i_2}\bigr)\cdot \prod_{(j_1, j_2)}\bigl(1-y_{j_1} y_{j_2}\bigr). \end{aligned} \end{equation} Integrating both sides over $\bold x,\,\bold y \in [0,1]^n$, we obtain the top formula in \eqref{P(Mest)=int^2}. The proof of the bottom formula is similar, as $\mathcal P(M |\bold x,\bold y)$ is the probability that $M$ is e-stable and stable, conditioned on $\{X_{i, M(i)}=x_i,\,Y_{M^{-1}(j), j}=y_j,\,i,j\in [n]\}$. Of course, \begin{equation}\label{mathcal P(M|)<P(M|)} \mathcal P(M|\bold x,\bold y)\le \prod_{(i_1, i_2)}\bigl(1-x_{i_1} x_{i_2}\bigr)\cdot \prod_{(j_1, j_2)}\bigl(1-y_{j_1} y_{j_2}\bigr). \end{equation} \end{proof} Next, introduce $Q(M)$ and $R(M)$, the total sum of all wives' ranks and the total sum of all husbands' ranks on the preference lists of their spouses under matching $M$. Using $\chi(A)$ to denote the indicator of an event $A$, we have \begin{align*} Q(M)&=n+\sum_{i, j\neq M(i)}\chi\bigl(X_{i, j}< X_{i, M(i)}\bigr)\\ &=n+\sum_{i_1\neq i_2}\chi\bigl(X_{i_1, M(i_2)}<X_{i_1, M(i_1)}\bigr)\\ &=n+\sum_{(i_1, i_2)}\Bigl[\chi\bigl(X_{i_1, M(i_2)}<X_{i_1, M(i_1)}\bigr)+ \chi\bigl(X_{i_2, M(i_1)}<X_{i_2, M(i_2)}\bigr)\Bigr], \end{align*} and likewise \begin{align*} R(M)=n+\sum_{(j_1, j_2)}\Bigl[\chi\bigl(Y_{M^{-1}(j_2), j_1}&<Y_{M^{-1}(j_1), j_1}\bigr)\\ &+\chi\bigl(Y_{M^{-1}(j_1), j_2}<Y_{M^{-1}(j_2), j_2}\bigr)\Bigr]. \end{align*} \noindent For $n\le k,\ell \le n^2$, let $\textup{ P\/}_{k,\ell}(M):=\!\!\textup{ P\/}(M\text{ is e-stable}, Q(M)=k, R(M)=\ell)$. \begin{Lemma}\label{Pkell(M)} Using notation $\bar z=1-z$, \begin{equation}\label{Pkell(M)=} \begin{aligned} \textup{ P\/}_{k,\ell}(M)&=\idotsint\limits_{\bold x\in [0,1]^{n}}[\xi^{k-n}]\prod_{(a,b)} \bigl(\bar x_a\bar x_b+\xi x_a\bar x_b+\xi \bar x_a x_b\bigr)\,d\bold x\\ &\times\idotsint\limits_{\bold y\in [0,1]^{n}} [\eta^{\ell-n}]\prod_{(c,d)} \bigl(\bar y_c\bar y_d+\eta y_c\bar y_d+\eta \bar y_c y_d\bigr)\,d\bold y. \end{aligned} \end{equation} Thus $\textup{ P\/}_{k,\ell}(M)$ does not depend on $M$. \end{Lemma} \begin{proof} First of all, we have \begin{align*} &\qquad\qquad\qquad \textup{ P\/}_{k,\ell}(M)=[\xi^k\eta^{\ell}]\,\textup{E\/}\Bigl[\xi^{Q(M)} \eta^{R(M)} \chi(M\text{ is e-stable})\Bigr],\\ & \chi\bigl(M\text{ is e-stable}\bigr)=\!\prod_{(i_1 ,i_2)}\!\!\chi\Bigl(\bigl\{X_{i_1, M(i_2)}<X_{i_1, M(i_1)},\,X_{i_2, M(i_1)}<X_{i_2, M(i_2)}\bigr\}^c\Bigr)\\ &\quad\times \prod_{(j_1, j_2)} \chi\Bigl(\bigl\{Y_{M^{-1}(j_2), j_1}<Y_{M^{-1}(j_1), j_1},\,Y_{M^{-1}(j_1), j_2} <Y_{M^{-1}(j_2), j_2}\bigr\}^c\Bigr).
\end{align*} So, conditioning on $\{X_{i,\,M(i)}\}_{i\in [n]}=\bold x$ and $\{Y_{M^{-1}(j),j}\}_{j\in [n]}=\bold y$ respectively, we have \begin{align*} &\textup{E\/}\Bigl[\xi^{Q(M)}\!\!\prod_{(i_1 ,i_2)}\!\!\chi\Bigl(\bigl\{X_{i_1, M(i_2)}<X_{i_1, M(i_1)},\,X_{i_2, M(i_1)}<X_{i_2, M(i_2)}\bigr\}^c\Bigr) \Big|\,\, \bold x\Bigr]\\ &=\xi^n\prod_{(i_1,i_2)}\textup{E\/}\Bigl[\xi^{\chi\bigl(X_{i_1, M(i_2)}<x_{i_1}\bigr)+\chi\bigl(X_{i_2, M(i_1)} <x_{i_2}\bigr)}\\ &\qquad\qquad\qquad\qquad\qquad\cdot\,\chi\bigl(\{X_{i_1, M(i_2)}<x_{i_1},\,X_{i_2, M(i_1)}< x_{i_2}\}^c\bigr)\Bigr]\\ &\qquad\qquad\quad=\xi^n\prod_{(i_1, i_2)}\bigl(\bar x_{i_1}\bar x_{i_2} +\xi x_{i_1}\bar x_{i_2} +\xi \bar x_{i_1} x_{i_2}\bigr), \end{align*} as $\textup{ P\/}(X_{i_1, M(i_2)}<x_{i_1})=x_{i_1}$ and $\textup{ P\/}(X_{i_2, M(i_1)}< x_{i_2})=x_{i_2}$. Likewise \begin{align*} &\textup{E\/}\Bigl[\eta^{R(M)}\!\!\prod_{(j_1 ,j_2)}\!\!\chi\Bigl(\bigl\{Y_{M^{-1}(j_2), j_1}<Y_{M^{-1}(j_1), j_1},\,Y_{M^{-1}(j_1), j_2} <Y_{M^{-1}(j_2), j_2}\bigr\}^c\Bigr)\Big|\,\, \bold y\Bigr]\\ &\qquad\qquad\qquad\,\,\,=\eta^n\prod_{(j_1, j_2)}\bigl(\bar y_{j_1}\bar y_{j_2} +\eta y_{j_1}\bar y_{j_2} +\eta \bar y_{j_1} y_{j_2}\bigr). \end{align*} Integrating the two conditional expectations over $\bold x\in [0,1]^n$ and $\bold y\in [0,1]^n$ respectively, and multiplying the integrals, we obtain \begin{align*} \textup{E\/}\Bigl[\xi^{Q(M)} \eta^{R(M)}\chi(M\text{ is e-stable})\Bigr]&=\xi^n \idotsint\limits_{\bold x\in [0,1]^{n}}\!\prod_{(a,b)} \bigl(\bar x_a\bar x_b+\xi x_a\bar x_b+\xi \bar x_a x_b\bigr)\,d\bold x\\ &\times\eta^n\idotsint\limits_{\bold y\in [0,1]^{n}}\!\prod_{(c,d)} \bigl(\bar y_{c}\bar y_{d} +\eta y_{c}\bar y_{d} +\eta \bar y_{c} y_{d}\bigr)\,d\bold y. \end{align*} This identity is equivalent to \eqref{Pkell(M)=}. \end{proof} Let $M_1\neq M_2$ be two generic matchings. Together $M_1$ and $M_2$ determine a bipartite graph $G(M_1,M_2)$ on the vertex set $[n]\cup [n]$, with the edge set $E$ formed by the man-woman pairs $(i,j)\in M_1\cup M_2$. Each component of $G(M_1,M_2)$ is either an edge $e\in M_1\cap M_2$, or a vertex-wise alternating circuit of even length at least $4$, in which the edges from $M_1$ and $M_2$ alternate as well. So the edge set for all these circuits is the symmetric difference $M_1\Delta M_2$. The vertex set $V(M_1\Delta M_2)$ is the union of the men set $\mathcal N$ and the women set $\mathcal N'$, where $|\mathcal N|=|\mathcal N'|=: \nu$, and \[ I:=\mathcal N^c=\{i: M_1(i)=M_2(i)\},\quad J:=(\mathcal N')^c=\{j: M_1^{-1}(j)=M_2^{-1}(j)\}; \] $i\in I$ iff $j\in J$, where $j$ is the common value of $M_1(i)$, $M_2(i)$. \begin{Lemma}\label{P(M1,M2est)} Denoting $ \bold x_1=\{x_{i,1}:\,i\in [n]\}, \quad \bold x_2=\{x_{i, 2}:\,i\in [n]\},\quad x_{i, 1}= x_{i, 2}\,\,\text{ for }\,\,i\in I$, and $\bold x_2^*= \{x_{i,2}:\,i\in \mathcal N\}$, we have \begin{equation*} \begin{aligned} P(M_1,M_2)&\ge\left(\,\,\,\idotsint\limits_{\bold x_1\in [0,1]^n,\,\,\bold x_2^*\in [0,1]^{\nu}}\!\!\!\!\!\!\!\!\! f(\bold x_1,\bold x_2)\,d\bold x_1 d\bold x_2^*\right)^2,\\ f(\bold x_1,\bold x_2)&=\prod_{(i_1,i_2):\,i_1,i_2\in \mathcal N}\!\!\!\!\bigl(1-x_{i_1, 1}\,x_{i_2, 1}\bigr)\bigl(1-x_{i_1, 2}\,x_{i_2, 2}\bigr)\\ &\quad\times\!\!\prod_{i_1\in I,\, i_2\in \mathcal N}\bigl(1-x_{i_1, 1}\,x_{i_2, 1}\bigr)\bigl(1-x_{i_1, 2}\,x_{i_2, 2}\bigr)\\ &\quad\times\!\!\!\!\!\prod_{(i_1,i_2):\,i_1,i_2\in I}\!\!\!\bigl(1-x_{i_1,1}x_{i_2,1}\bigr).
\end{aligned} \end{equation*} \end{Lemma} \begin{proof} We need to characterize e-stability of the matchings $M_1$ and $M_2$ in terms of the matrices $\{X_{i, j}\}$ and $\{Y_{i, j}\}$. Observe first that $X_{i, M_1(i)} = X_{i, M_2(i)}$ for $i\in I$ and $Y_{M_1^{-1}(j), j}=Y_{M_2^{-1}(j),j}$ for $j\in J$. The matchings $M_1$ and $M_2$ are both e-stable if and only if none of the pairs of men $(i_1, i_2)$ and none of the pairs of women $(j_1, j_2)$ blocks either $M_1$ or $M_2$. Introduce the events \begin{align*} A_{(i_1, i_2)}&=\!\bigl\{\!(i_1, i_2)\text{ blocks neither }M_1\text{ nor }M_2\bigr\},\\ B_{(j_1, j_2)}&=\!\bigl\{\!(j_1, j_2)\text{ blocks neither }M_1\text{ nor }M_2\bigr\}. \end{align*} The $\binom{n}{2}$ events $A_{(i_1,i_2)}$ are independent of the $\binom{n}{2}$ events $B_{(j_1,j_2)}$. Furthermore, conditioned on the values $X_{i,M_1(i)}=x_{i,1}$, $X_{i,M_2(i)}=x_{i,2}$, so that $x_{i, 1}=x_{i, 2}$ for $i\in I$ ($Y_{M_1^{-1}(j),j}=y_{j,1}$, $Y_{M_2^{-1}(j),j}=y_{j,2}$ so that $y_{j, 1}=y_{j, 2}$ for $j\in J$ resp.), the events $A_{(i_1,i_2)}$ ($B_{(j_1,j_2)}$ resp.) are independent among themselves. Introducing (in addition to $\bold x_1$, $\bold x_2$, $\bold x_2^*$) the vectors $\bold y_1=\{y_{j, 1}:\,j\in [n]\}$, $\bold y_2=\{y_{j, 2}:\,j\in [n]\}$ and $\bold y_2^*=\{y_{j,2}\,:\,j\in \mathcal N'\}$, we have \begin{equation}\label{cond} \begin{aligned} &\qquad\qquad\textup{ P\/}\Bigl(M_1, M_2\text{ are e-stable }\boldsymbol |\,\bold x_1, \bold x_2, \bold y_1, \bold y_2\Bigr)\\ &= \prod_{(i_1, i_2)}\!\!\textup{ P\/}\Bigl(A_{(i_1, i_2)}\boldsymbol |\,\bold x_1, \bold x_2\Bigr) \cdot \prod_{(j_1, j_2)}\!\!\textup{ P\/}\Bigl(B_{(j_1, j_2)}\boldsymbol |\,\bold y_1, \bold y_2\Bigr). \end{aligned} \end{equation} \noindent Consider $\textup{ P\/}\Bigl(A_{(i_1, i_2)}\boldsymbol |\,\bold x_1, \bold x_2\Bigr)$. {\bf (1)\/} $M_1(i_1)\neq M_2(i_1)$, $M_1(i_2)\neq M_2(i_2)$. Then \begin{align*} \textup{ P\/}\Bigl(A_{(i_1, i_2)}\boldsymbol |\,\bold x_1, \bold x_2\Bigr)&=1-\textup{ P\/}\Bigl(X_{i_1, M_1(i_2)}< x_{i_1,1},\, X_{i_2, M_1(i_1)} <x_{i_2, 1}\Bigr)\\ &\quad-\textup{ P\/}\Bigl(X_{i_1, M_2(i_2)} < x_{i_1,2},\,X_{i_2, M_2(i_1)}<x_{i_2, 2}\Bigr)\\ &\quad+\textup{ P\/}\Bigl(\{X_{i_1, M_1(i_2)}< x_{i_1,1},\, X_{i_2, M_1(i_1)} <x_{i_2, 1}\}\\ &\qquad\quad\text{ and }\{X_{i_1, M_2(i_2)} < x_{i_1,2},\,X_{i_2, M_2(i_1)}<x_{i_2, 2}\}\Bigr)\\ &=1-x_{i_1, 1}\,x_{i_2, 1} - x_{i_1, 2}\, x_{i_2, 2} +x_{i_1,1} x_{i_1, 2} \cdot x_{i_2, 1} x_{i_2, 2}\\ &=\bigl(1-x_{i_1, 1}\,x_{i_2, 1}\bigr)\bigl(1-x_{i_1, 2}\,x_{i_2, 2}\bigr). \end{align*} {\bf (2)\/} $M_1(i_1)=M_2(i_1)$, $M_1(i_2)\neq M_2(i_2)$. In this case $x_{i_1,1}=x_{i_1,2}$, and \[ X_{i_2, M_1(i_1)}< x_{i_2,1},\,X_{i_2, M_2(i_1)}< x_{i_2,2}\Longleftrightarrow X_{i_2, M_1(i_1)} < x_{i_2,1} \wedge x_{i_2, 2}, \] and therefore \begin{align*} \textup{ P\/}\Bigl(A_{(i_1, i_2)}\boldsymbol |\,\bold x_1, \bold x_2\Bigr) &=1-x_{i_1, 1}\,x_{i_2, 1} - x_{i_1, 2}\, x_{i_2, 2} +x_{i_1,1} x_{i_1, 2} \,(x_{i_2, 1}\wedge x_{i_2, 2})\\ &\ge \bigl(1-x_{i_1, 1}\,x_{i_2, 1}\bigr)\bigl(1-x_{i_1, 2}\,x_{i_2, 2}\bigr). \end{align*} {\bf (3)\/} Finally, if $M_1(i_1)=M_2(i_1)$, $M_1(i_2)=M_2(i_2)$, then \begin{align*} \textup{ P\/}\Bigl(A_{(i_1, i_2)}\boldsymbol |\,\bold x_1, \bold x_2\Bigr) &=1-x_{i_1, 1}\,x_{i_2, 1} - x_{i_1, 2}\, x_{i_2, 2} +(x_{i_1,1}\wedge x_{i_1, 2}) \,(x_{i_2, 1}\wedge x_{i_2, 2})\\ &=1-x_{i_1, 1}\,x_{i_2, 1}.
\end{align*} Therefore \begin{equation}\label{prodP(A)} \begin{aligned} \prod_{(i_1, i_2)}\!\!\textup{ P\/}\Bigl(A_{(i_1, i_2)}\boldsymbol |\,\bold x_1, \bold x_2\Bigr)&\ge \prod_{(i_1,i_2):\,i_1,i_2\in \mathcal N}\!\!\!\!\bigl(1-x_{i_1, 1}\,x_{i_2, 1}\bigr)\bigl(1-x_{i_1, 2}\,x_{i_2, 2}\bigr)\\ &\quad\times\!\!\prod_{i_1\in I,\, i_2\in \mathcal N}\bigl(1-x_{i_1, 1}\,x_{i_2, 1}\bigr)\bigl(1-x_{i_1, 2}\,x_{i_2, 2}\bigr)\\ &\quad\times\!\!\!\!\!\prod_{(i_1,i_2):\,i_1,i_2\in I}\!\!\!\bigl(1-x_{i_1,1}x_{i_2,1}\bigr). \end{aligned} \end{equation} Similarly \begin{equation}\label{prodP(B)} \begin{aligned} \prod_{(j_1, j_2)}\!\!\textup{ P\/}\Bigl(B_{(j_1, j_2)}\boldsymbol |\,\bold y_1, \bold y_2\Bigr)&\ge \prod_{(j_1,j_2):\,j_1,j_2\in \mathcal N'}\!\!\!\!\bigl(1-y_{j_1, 1}\,y_{j_2, 1}\bigr)\bigl(1-y_{j_1, 2}\,y_{j_2, 2}\bigr)\\ &\quad\times\!\!\prod_{j_1\in J,\, j_2\in \mathcal N'}\bigl(1-y_{j_1, 1}\,y_{j_2, 1}\bigr)\bigl(1-y_{j_1, 2}\,y_{j_2, 2}\bigr)\\ &\quad\times\!\!\!\!\!\prod_{(j_1,j_2):\,j_1,j_2\in J}\!\!\!\bigl(1-y_{j_1,1}y_{j_2,1}\bigr). \end{aligned} \end{equation} Plugging the formulas \eqref{prodP(A)} and \eqref{prodP(B)} into \eqref{cond}, integrating over $\bold x_1,$\linebreak $\bold x_2^*,\,\bold y_1,\,\bold y_2^*$, {\it and\/} using $|\mathcal N|=|\mathcal N'|$, $|I|=|J|$, we complete the proof. \end{proof} The integral identities in Lemma \ref{P(Mest)=}, Lemma \ref{Pkell(M)} and Lemma \ref{P(M1,M2est)} turn out to be quite amenable to a sharp asymptotic analysis. \subsection{One-sided matchings} Consider an instance of the one-sided matching problem on the set $[n]$, $n$ even, chosen uniformly at random from among all $[(n-1)!]^{n}$ such instances. Let $\textup{ P\/}(M)$ be the probability that a generic matching $M$ is e-stable, and let $\textup{ P\/}(M_1,M_2)$ be the probability that two generic matchings $M_1\neq M_2$ are both e-stable. In addition, introduce $\mathcal P(M)$, the probability that $M$ is doubly stable, i.e. both exchange-stable and stable. As in the two-sided case, the uniformly random instance of the $n$ preference lists can be generated as follows. Introduce the array of $n(n-1)$ independent random variables $X_{i,j}$ ($1\le i\neq j\le n$), each distributed uniformly on $[0,1]$. Assume that each member $i$ ranks other members in increasing order of the variables $X_{i,k}$, $k\neq i$. Each of the resulting $n$ preference lists is uniform, and all the lists are independent. \begin{Lemma}\label{P(Mest)='} For every matching $M$, \begin{equation}\label{P(Mest)=int^2'} \begin{aligned} \textup{ P\/}(M)&=\idotsint\limits_{\bold x\in [0,1]^{n}}\prod_{(a, b\neq M(a))}(1-x_ax_b)\,d\bold x,\\ \mathcal P(M)&=\idotsint\limits_{\bold x\in [0,1]^{n}}\prod_{(a, b\neq M(a))}(1-x_ax_b)^2\,d\bold x. \end{aligned} \end{equation} \end{Lemma} {\bf Note.\/} The top integral in \eqref{P(Mest)=int^2'} also equals the probability that $M$ is stable. Next, introduce $R(M)$, the sum of all partners' ranks under matching $M$. We have \begin{align*} R(M)&=(n-1)+\sum_{i, j\neq M(i)}\chi\bigl(X_{i, j}< X_{i, M(i)}\bigr)\\ &=(n-1)+\sum_{(i_1,i_2)}\Bigl[\chi\bigl(X_{i_1, M(i_2)}<X_{i_1, M(i_1)}\bigr)+ \chi\bigl(X_{i_2, M(i_1)}<X_{i_2, M(i_2)}\bigr)\Bigr]. \end{align*} \noindent For $n-1\le k \le n(n-1)$, let $\textup{ P\/}_{k}(M):=\!\!\textup{ P\/}(M\text{ is e-stable}, R(M)=k)$.
\begin{Lemma}\label{Pk(M)'} Using notation $\bar z=1-z$, \begin{equation}\label{Pk(M)='} \begin{aligned} \textup{ P\/}_{k}(M)&=\idotsint\limits_{\bold x\in [0,1]^{n}}[\xi^{k-(n-1)}]\prod_{(a,b\neq M(a))} \bigl(\bar x_a\bar x_b+\xi x_a\bar x_b+\xi \bar x_a x_b\bigr)\,d\bold x. \end{aligned} \end{equation} Thus $\textup{ P\/}_{k}(M)$ does not depend on $M$. \end{Lemma} Let $M_1\neq M_2$ be two generic matchings. Together, $M_1$ and $M_2$ determine a graph $G(M_1,M_2)$ on the vertex set $[n]$, with the edge set $E$ formed by the matched pairs $(i,j)\in M_1\cup M_2$. Each component of $G(M_1,M_2)$ is either an edge $e\in M_1\cap M_2$, or a circuit of even length at least $4$, in which the edges from $M_1$ and $M_2$ alternate. So the edge set for all these circuits is the symmetric difference $M_1\Delta M_2$. Let $\mathcal N=\mathcal N(M_1,M_2)$ denote the vertex set of $M_1\Delta M_2$, and $\nu=\nu(M_1,M_2):=|\mathcal N|$. Then $I=\mathcal N^c$ is $\{i\in [n]:\, M_1(i)= M_2(i)\}$, $|I|=n-\nu$. \begin{Lemma}\label{P(M1,M2est)'} Denoting $ \bold x_1=\{x_{i,1}:\,i\in [n]\}, \quad \bold x_2=\{x_{i, 2}:\,i\in [n]\},\quad x_{i, 1}= x_{i, 2}\,\,\text{ for }\,\,i\in I$, and $\bold x_2^*= \{x_{i,2}:\,i\in \mathcal N\}$, we have \begin{equation*} \begin{aligned} P(M_1,M_2)&\ge\,\,\,\idotsint\limits_{\bold x_1\in [0,1]^n,\,\,\bold x_2^*\in [0,1]^{\nu}}\!\!\!\!\!\!\!\!\! f(\bold x_1,\bold x_2)\,d\bold x_1 d\bold x_2^*, \end{aligned} \end{equation*} with $f(\bold x_1,\bold x_2)$ defined in Lemma \ref{P(M1,M2est)}. \end{Lemma} The proofs are shorter versions of those for the two-sided matchings. \section{Estimates of the integrals} {\bf Notations.\/} We will write $A_n\lessdot B_n$ as a shorthand for ``$A_n=O(B_n)$, uniformly over parameters that determine $A_n$, $B_n$'', when the expression for $B_n$ is uncomfortably bulky for an argument of the big-O notation. In \cite{Pit3} we proved that, uniformly for all matchings $M$ on $[m]$, ($m$ even), \begin{equation}\label{simple1,2} \prod_{(i_1,i_2\neq M(i_1))}\!\!(1-x_{i_1} x_{i_2}) \lessdot \exp\Bigl(-\frac{s^2}{2}\Bigr),\quad s:=\sum_{i\in [m]}x_i. \end{equation} Only a minor modification is needed to prove that \begin{equation}\label{simple1,2'} \prod_{(i_1,i_2\neq i_1)}\!\!(1-x_{i_1} x_{i_2}) \lessdot \exp\Bigl(-\frac{s^2}{2}\Bigr),\quad s:=\sum_{i\in [m]}x_i. \end{equation} The bounds \eqref{simple1,2} and \eqref{simple1,2'} will be instrumental in this paper as well. Another key tool is the following claim, \cite{Pit0}, \cite{Pit2}. \begin{Lemma}\label{intervals1} Let $X_1,\dots, X_{\nu}$ be independent $[0,1]$-Uniforms. Let $S=\sum_{i\in [\nu]}X_i$ and $\bold V=\{V_i=X_i/S;\, i\in [\nu]\}$, so that $\sum_{i\in [\nu]}V_i=1$. Let $\bold L=\{L_i ;\, i\in [\nu]\}$ be the set of lengths of the $\nu$ consecutive subintervals of $[0,1]$ obtained by selecting, independently and uniformly at random, $\nu-1$ points in $[0,1]$. Then the joint density $f_{S,\bold V}(s,\bold v)$, ($\bold v=(v_1,\dots,v_{\nu-1})$), of $(S,\bold V)$ is given by \begin{equation}\label{joint<} \begin{aligned} f_{S,\bold V}(s,\bold v)&=s^{\nu-1}\chi\bigl(\max_{i\in [\nu]} v_i\le s^{-1}\bigr) \chi(v_1+\cdots+v_{\nu-1}\le 1)\\ &\le \frac{s^{\nu-1}}{(\nu-1)!} f_{\bold L}(\bold v),\quad v_{\nu}:=1-\sum_{i=1}^{\nu-1}v_i; \end{aligned} \end{equation} here $f_{\bold L}(\bold v)=(\nu-1)!\,\chi(v_1+\cdots+v_{\nu-1}\le 1)$ is the density of $(L_1,\dots,L_{\nu-1})$.
\end{Lemma} We will also use the classic identities, Andrews, Askey and Roy \cite{AndAskRoy}, Section 1.8: \begin{equation}\label{int,prod} \begin{aligned} &\overbrace {\idotsint}^{\nu}_{\bold x\ge \bold 0 \atop x_1+\cdots+x_{\nu}\le 1}\prod_{i\in [\nu]} x_i^{\alpha_i}\,\,d\bold x=\frac{\prod_{i\in [\nu]}\alpha_i!}{(\nu+\alpha)!},\quad \alpha:=\sum_{i\in [\nu]}\alpha_i,\\ &\overbrace {\idotsint}^{\nu-1}_{\bold x\ge\bold 0\atop x_1+\cdots+x_{\nu}=1}\prod_{i\in [\nu]} x_i^{\alpha_i}\,\,dx_1\cdots dx_{\nu-1} =\frac{\prod_{i\in [\nu]}\alpha_i!}{(\nu-1+\alpha)!}. \end{aligned} \end{equation} The identity/bound \eqref{joint<} is useful since the random vector $\bold L$ has been well studied. It is known, for instance, that \begin{equation}\label{L^{(nu)}=frac} \bold L\overset{\mathcal D}\equiv\left\{\frac{W_i}{\sum_{j\in [\nu]}W_j}\right\}_{i\in [\nu]}, \end{equation} where $W_j$ are independent, exponentially distributed, with the same parameter, R\'enyi \cite{Ren}. An immediate corollary is that $L_1,\dots, L_{\nu}$ are equidistributed. We used \eqref{L^{(nu)}=frac} in \cite{Pit3} to prove \begin{Lemma}\label{sumsofLs} Let $s\ge 2$. For $\sigma<\frac{1}{s+1}$, we have \begin{equation}\label{sumLj^s} \textup{ P\/}\Biggl(\Bigl|\frac{\nu^{s-1}}{s!}\sum_{j\in [\nu]}L_j^s-1\Bigr|\ge \nu^{-\sigma}\Biggr) \le\exp\bigl(-\Theta(\nu^{\frac{1}{s+1}-\sigma}\,)\bigr) \end{equation} and, for $\nu$ even, \begin{equation}\label{sumL_jL_{j+}} \textup{ P\/}\Biggl(\Bigl|2\nu\sum_{j\in [\nu/2]}L_jL_{j+\nu/2}-1\Bigr|\ge \nu^{-\sigma}\Biggr) \le\exp\bigl(-\Theta(\nu^{\frac{1}{3}-\sigma}\,)\bigr). \end{equation} \end{Lemma} {\bf Note.\/} Had $\textup{E\/}\bigl[e^{zW^2}\bigr]$ been finite for $|z|$ sufficiently small, we would have been able to prove---via a standard application of Chernoff's method---a stronger bound, namely $\exp\bigl(-\Theta(\nu)\bigr)$. And that's the estimate we claimed (without a proof) in \cite{Pit1}, overlooking that, for the exponential $W$, $\textup{E\/}\bigl[e^{zW^2}\bigr]=\infty$ if $z>0$. The weaker, sub-exponential, bounds \eqref{sumLj^s}, \eqref{sumL_jL_{j+}} were proved in \cite{Pit3} by combining Chernoff's method with truncation of $W$ at $\nu^{\frac{1}{s+1}}$. It turned out that these bounds, combined with the inequality \eqref{simple1,2} missed in \cite{Pit1}, were all we needed in \cite{Pit3} for the asymptotic study of {\it non-bipartite\/} stable partitions and matchings. We stressed there that the argument could be used as a template for swapping some proof steps in the corresponding parts of \cite{Pit1}, \cite{Pit2}, \cite{IrvPit}, thus avoiding the problematic issue of exponential bounds. We will see that the sub-exponential bounds \eqref{sumLj^s}, \eqref{sumL_jL_{j+}} are sufficient for our study in this paper as well.\\ In addition to the bounds \eqref{sumLj^s}, we will need \begin{equation}\label{P(L^+>)<} \textup{ P\/}\left(\max_{j\in [\nu]} L_j^{(\nu)}\ge \frac{\log ^2\nu}{\nu}\right)\le e^{-\Theta(\log^2\nu)}, \end{equation} which directly follows from \begin{equation}\label{P(maxL_j>)} \textup{ P\/}\left(\max_{j\in [\nu]} L_j^{(\nu)}\ge x\right)\le \nu\! \textup{ P\/}\bigl(L_1^{(\nu)}\ge x\bigr) =\nu (1-x)^{\nu-1}. \end{equation} \section{Estimates for two-sided matchings} \subsection{$\textup{ P\/}(M)$, $\textup{E\/}[S_n]$} By Lemma \ref{P(Mest)=}, \begin{equation}\label{P(M)=(Int)^2} \textup{ P\/}(M)=\textup{ P\/}(M\text{ is e-stable})\equiv\left(\idotsint\limits_{\bold x\in [0,1]^n}\prod_{(a,b)}(1-x_ax_b)\,d\bold x\right)^2.
\end{equation} Here, by \eqref{simple1,2'}, \[ \prod_{(a,b)}(1-x_ax_b)\lessdot \exp\Bigl(-\frac{s^2}{2}\Bigr),\quad s:=\sum_{a\in [n]}x_a. \] So, by Lemma \ref{intervals1} and \eqref{int,prod}, \begin{equation}\label{prelim} \begin{aligned} \idotsint\limits_{\bold x\in [0,1]^n}\,\prod_{(a,b)}&(1-x_ax_b)\,d\bold x\lessdot \idotsint\limits_{\bold x\in [0,1]^n}\exp\Bigl(-\frac{s^2}{2}\Bigr)\,d\bold x\\ &\le\int_0^{\infty}\exp\Bigl(-\frac{s^2}{2}\Bigr)\frac{s^{n-1}}{(n-1)!}\,ds\lessdot\frac{(n-2)!!}{(n-1)!} =\frac{1}{(n-1)!!}. \end{aligned} \end{equation} Therefore $\textup{ P\/}(M)\lessdot \bigl[(n-1)!!\bigr]^{-2}$, implying that \[ \textup{E\/}\bigl[S_n\bigr]=n!\!\textup{ P\/}(M)\lessdot\frac{n!}{\bigl[(n-1)!!\bigr]^2}=\frac{n!!}{(n-1)!!}=\Theta(n^{1/2}). \] This bound is qualitatively sharp. \begin{Theorem}\label{ES_nsim} \[ \textup{E\/}\bigl[S_n\bigr]=\bigl(1+O(n^{-\sigma})\bigr)\sqrt{\frac{\pi n}{2}},\qquad\forall\,\sigma<\frac{1}{3}. \] \end{Theorem} \noindent The proof consists of two parts: reduction of the integration domain in the formula \eqref{P(M)=(Int)^2} and a sharp estimate of the integral over the core domain. {\bf Note.\/} For comparison: (1) the expected number of the classical, bipartite, stable matchings is asymptotic to $e^{-1} n\log n\gg n^{1/2}$, \cite{Pit0}; (2) its counterpart for one-sided stable matchings approaches a finite limit $e^{1/2}$ \cite{Pit2}, and even the expected number of stable {\it partitions\/}, which include stable matchings as a very special case (Tan \cite{Tan}), is of order $n^{1/4}$ \cite{Pit3}, again well below $n^{1/2}$. \subsubsection{Reduction of the cube $[0,1]^n$} In several steps, we will eliminate large chunks of the integration cube so that we will be able to approximate sharply the integrand on the remaining part of the cube, at a total (relative) error cost of order $e^{-\Theta(\log^2 n)}$. Since the argument is very close to, indeed simpler than, the proof in Section 4.2 in \cite{Pit3}, we limit ourselves to describing the intermediate steps and shedding some light on the proofs. For the first reduction, we observe that the integrand $e^{-\frac{s^2}{2}} s^{n-1}$ in \eqref{prelim} attains its sharply pronounced maximum at $(n-1)^{1/2}$, and the second-order logarithmic derivative of the integrand is below $-1$ for all $s>0$. Given $C\subseteq [0,1]^n$, define \[ I_C(M)=\idotsint\limits_{\bold x\in C} \prod_{(a,b)}(1-x_ax_b)\,d\bold x, \] and set $I(M):=I_{[0,1]^n}(M)$. \begin{Lemma}\label{C1} Let $C_1=\bigl\{\bold x\in [0,1]^n:\,s\le s_n\bigr\}$, $s_n=n^{1/2}+3 \log n$. Then, \[ I(M)-I_{C_1}(M)\le \frac{e^{-\Theta(\log^2 n)}}{(n-1)!!}. \] \end{Lemma} \noindent See Lemma 4.5 in \cite{Pit3}. Our next step is to shrink $C_1$ to its subset where each component $x_i$ of $\bold x$ is at most $s\frac{\log^2 n}{n}$. \begin{Lemma}\label{C2} Let $u_i:=\frac{x_i}{s}$ and $C_2:=\Bigl\{\bold x\in C_1\,:\,\max_i u_i\le\frac{\log^2 n}{n}\Bigr\}$. Then \[ I_{C_1}(M)-I_{C_2}(M)\le \frac{e^{-\Theta(\log^2 n)}}{(n-1)!!}. \] \end{Lemma} \begin{proof} The proof starts with \begin{align*} I_{C_1}(M)-I_{C_2}(M)&\lessdot\idotsint\limits_{\bold x\ge 0}e^{-\frac{s^2}{2}}\,\chi\Bigl\{\max_i u_i\ge \frac{\log^2 n}{n}\Bigr\}\,d\bold x\\ &\le\frac{\textup{ P\/}\Bigl(\max_{i\in [n]}L_i\ge \frac{\log^2n}{n}\Bigr)}{(n-1)!}\int_0^{\infty}e^{-\frac{s^2}{2}}s^{n-1}\,ds, \end{align*} see Lemma \ref{intervals1}, and the probability is then bounded with the help of \eqref{P(L^+>)<}.
\end{proof} Notice that $\sum_{i\in [n]} x_i^4=O\bigl(n^{-1}\log^8 n\bigr)$ uniformly for $\bold x\in C_2$, which would have been good enough for us. However, the constraints on $C_2$ guarantee only that $\sum_{i\in [n]} x_i^2=O\bigl(\log^4 n\bigr)$, while we will need to know that only $\bold x$ with a bounded $\sum_{i\in [n]}x_i^2$ matter asymptotically. Thus another reduction of the integration domain is in order. \begin{Lemma}\label{C3} Let $C_3:=\!\Bigl\{\bold x\in C_2\,: \Bigl|\frac{n}{2}\sum_i u_i^2-1\Bigr|\le n^{-\sigma}\!\Bigr\}$, \,$\sigma<1/3$. Then \[ I_{C_2}(M)-I_{C_3}(M)\le \frac{e^{-\Theta(n^{1/3-\sigma})}}{(n-1)!!}. \] \end{Lemma} \noindent The proof combines Lemma \ref{intervals1} and Lemma \ref{sumsofLs}.\\ Putting together Lemmas \ref{C1}, \ref{C2} and \ref{C3} we have \begin{Corollary}\label{C3,expl} \[ I(M)-I_{C_3}(M)\le \frac{e^{-\Theta(\log^2 n)}}{(n-1)!!}, \] where the core domain $C_3\subset [0,1]^n$ is defined by the constraints: with $s_n=n^{1/2}+3\log n$, \begin{equation}\label{C3def} s \le s_n,\quad \max_{i\in [n]} x_i\le s\frac{\log^2 n}{n}, \quad \left|\frac{n\sum_{i\in [n]}x_i^2}{2s^2}-1\right|\le n^{-\sigma}. \end{equation} \end{Corollary} Notice that the constraints \eqref{C3def} imply that \begin{equation}\label{max x<} \max_{i\in [n]} x_i\le 2n^{-1/2}\log^2 n\to 0, \end{equation} meaning that the constraint $\max_i x_i\le 1$ is superfluous for $n$ large enough. Furthermore, in combination with $1-z=\exp\bigl[-z-z^2/2+O(|z|^3)\bigr]$, $z\to 0$, the inequality \eqref{max x<} delivers \[ \prod_{(i, j)} (1-x_i x_j)=\exp\Biggl(-\sum_{(i, j)}\Bigl(x_ix_j+\frac{x_i^2x_j^2}{2}\Bigr)+O\Bigl(\sum_{i\in [n]} x_i^4\Bigr)\Biggr), \] an equality that holds uniformly for $\bold x\in C_3$, thus with a remainder term of order $O\bigl(n^{-1}\log^8 n\bigr)$. From the constraints \eqref{C3def} we infer \begin{Lemma}\label{prodsim} Uniformly for $\bold x\in C_3$, \[ \prod_{(i, j)} (1-x_i x_j)=\exp\!\left(\!-\frac{s^2}{2}\left(\!1-\frac{2}{n}\!\right)-\frac{s^4}{n^2}+O(n^{-\sigma})\!\right). \] \end{Lemma} \subsubsection{Sharp estimate of $\textup{ P\/}(M)$} \begin{Lemma}\label{IC3sim} \[ I_{C_3}(M)=\bigl(1+O(n^{-\sigma})\bigr)\frac{c_n}{(n-1)!!}, \quad c_n=\left\{\begin{aligned} &1,&&n\text{ is even},\\ &\sqrt{\frac{\pi}{2}},&&n\text{ is odd}.\end{aligned}\right. \] \end{Lemma} \begin{proof} Denote $\psi_n(s)=\frac{s^2}{2}\bigl(1-\frac{2}{n}\bigr)+\frac{s^4}{n^2}$. Applying Lemma \ref{intervals1} and using \eqref{max x<} and Lemma \ref{prodsim}, we obtain \begin{align*} I_{C_3}(M)&=\bigl(1+O(n^{-\sigma})\bigr)\textup{ P\/}\Bigl(\Bigl|\frac{n}{2}\!\sum_{i\in [n]}L_i^2-1\Bigr|\le n^{-\sigma};\,\, \max_{j\in [n]} L_j\le \frac{\log^2 n}{n}\Bigr)\\ &\quad\times\frac{1}{(n-1)!}\int_0^{s_n} e^{-\psi_n(s)} s^{n-1}\,d s. \end{align*} The probability factor here exceeds $1-\exp(-\Theta(\log^2 n))$. The integrand attains its sharp maximum at $\hat s=(n-1)^{1/2} -\Theta(n^{-1/2})$, so that $s_n-\hat s\ge 2\log n$. The overwhelming contribution to the integral comes from $s\in [\hat s -\log n,\hat s+\log n]$, and for those $s$ we have $\frac{s^4}{n^2}=1+O(n^{-1/2}\log n)$. An easy argument shows then that the integral equals \begin{align*} &\frac{e^{-1}\bigl(1+O(n^{-1/2}\log n)\bigr)}{(n-1)!}\int_0^{\infty}\!\! e^{-\frac{s^2}{2}\left(1-\frac{2}{n}\right)} s^{n-1}\,ds\\ &\quad =\bigl(1+O(n^{-1/2}\log n)\bigr)\cdot \frac{c_n}{(n-1)!!}.
\end{align*} \end{proof} \begin{Corollary}\label{inprod=sharp} \begin{align*} \textup{ P\/}(M)&=\left(\idotsint\limits_{\bold x\in [0,1]^n} \prod_{(a,b)}(1-x_ax_b)\,d\bold x\right)^2 =\bigl(1+O(n^{-\sigma})\bigr)\left(\frac{c_n}{(n-1)!!}\right)^2. \end{align*} \end{Corollary} By $\textup{E\/}\bigl[S_n\bigr]=n!\,\textup{ P\/}(M)$ and the Stirling formula for factorials, Corollary \ref{inprod=sharp} completes the proof of Theorem \ref{ES_nsim}. \subsection{Likely range of the partners' ranks} Let $R_{w}(M)$ and $R_{m}(M)$ stand for the total wives' rank and the total husbands' rank in a generic matching $M$. From \eqref{Pkell(M)=}, $R_{w}(M)$ and $R_{m}(M)$ are equidistributed and, for $R(M)\in\{R_m(M),\,R_w(M)\}$, $\textup{ P\/}_k(M):=\textup{ P\/}(M\text{ is e-stable},\,R(M)=k)$ is given by \begin{equation}\label{Pk(M)=} \begin{aligned} \textup{ P\/}_{k}(M)&=\idotsint\limits_{\bold x\in [0,1]^{n}}[\xi^{k-n}]\prod_{(a,b)} \bigl(\bar x_a\bar x_b+\xi x_a\bar x_b+\xi \bar x_a x_b\bigr)\,d\bold x\\ &\times\idotsint\limits_{\bold y\in [0,1]^{n}} \prod_{(c,d)} \bigl(1-y_cy_d\bigr)\,d\bold y. \end{aligned} \end{equation} \begin{Theorem}\label{R(M)appr} For a fixed $\varepsilon\in (0,1)$, \[ \textup{ P\/}\left(\max_M\left|\frac{R(M)}{n^{3/2}}-1\right|\ge \varepsilon\right)\le e^{-\Theta(\log^2 n)}. \] \end{Theorem} \begin{proof} Introduce $k=\lceil (1+\varepsilon)n^{3/2}\rceil$, and define \[ P^+(M)=\textup{ P\/}(M\text{ is e-stable},\,R(M)\ge k). \] Applying Chernoff's method to \eqref{Pk(M)=}, we get \begin{align*} P^+(M)&\le I(k) \idotsint\limits_{\bold y\in [0,1]^{n}}\prod_{(c,d)}\bigl(1-y_cy_d\bigr)\,d\bold y,\\ I(k)&:=\idotsint\limits_{\bold x\in [0,1]^{n}}\inf_{\xi\ge 1}\Biggl[\xi^{-\bar k}\prod_{(a,b)} \bigl(\bar x_a\bar x_b+\xi x_a\bar x_b+\xi \bar x_a x_b\bigr)\Biggr]\,d\bold x, \end{align*} $\bar k:=k-n$. By \eqref{prelim}, the first-line integral is of order $\frac{1}{(n-1)!!}$. As for $I(k)$, in Theorem 4, \cite{Pit2} (stable matchings on $[n]$, $n$ even), and recently in Theorem 4.16, \cite{Pit3} (stable partitions on $[n]$) we analyzed similar integrals, where the products were over the unmatched (unaligned) pairs, while in the present case the product is over all pairs $(a, b)$ of distinct elements $a,\,b \in [n]$. Since in all cases the number of excluded pairs is linear in $n$, the arguments from the cited papers work just as well for $I(k)$, and we get \[ I(k)\le \frac{e^{-\Theta(\log^2 n)}}{(n-1)!!}\Longrightarrow P^+(M)\le \frac{e^{-\Theta(\log^2 n)}}{[(n-1)!!]^2}. \] Likewise \[ P^-(M):=\textup{ P\/}\bigl(M\text{ is e-stable},\, R(M)\le (1-\varepsilon)n^{3/2}\bigr) \le \frac{e^{-\Theta(\log^2 n)}}{[(n-1)!!]^2}. \] Therefore \begin{align*} \textup{ P\/}\left(\max_M\left|\frac{R(M)}{n^{3/2}}-1\right|\ge \varepsilon\right)&\le n!\,\frac{e^{-\Theta(\log^2 n)}}{[(n-1)!!]^2} \le n^{1/2}e^{-\Theta(\log^2 n)}, \end{align*} which completes the proof of Theorem \ref{R(M)appr}. \end{proof} \subsection{A doubly stable matching is unlikely} Our task is to prove that $\sum_M\mathcal P(M)\to 0$. To bound $\mathcal P(M)$, we first need to reduce the integration domain $\{\bold x, \bold y\in [0,1]^n\}$ to a manageable subdomain $D^*$, such that \[ n!\max_M\bigl[\mathcal P(M)-\mathcal P_{D^*}(M)\bigr]\to 0. \] By \eqref{mathcal P(M|)<P(M|)} and the inequality \eqref{simple1,2'}, we have \begin{equation*} \mathcal P(M|\bold x,\bold y)\lessdot\exp\left(-\frac{s^2}{2}-\frac{t^2}{2}\right),\quad s:=\sum_i x_i,\,\,t:=\sum_i y_i.
\end{equation*} Given $D$, a subset of $\{\bold x,\bold y\in [0,1]^n\}$, denote \[ \mathcal P_{D}(M)=\idotsint\limits_{\bold x,\,\bold y\in D}\mathcal P(M |\bold x,\bold y)\,d\bold xd\bold y. \] Then \[ \mathcal P_{D}(M)\lessdot \idotsint\limits_{\bold x,\,\bold y\in D}\exp\left(-\frac{s^2}{2}-\frac{t^2}{2}\right)\,d\bold x d\bold y. \] Let $D_1=\{\bold x, \bold y: \max(s,t) \le 2n^{1/2}\}$. Then \begin{align*} \mathcal P(M)-\mathcal P_{D_1}(M)&\lessdot \idotsint\limits_{\bold x\ge \bold 0\atop s\ge 2n^{1/2}}\exp\left(\!-\frac{s^2}{2}\right)\,d\bold x\,\cdot \,\idotsint\limits_{\bold y\ge \bold 0}\exp\left(\!-\frac{t^2}{2}\right)\,d\bold y. \end{align*} The second integral equals $1/(n-1)!!$, and the first integral is of order \begin{align*} ne^{-2n}\frac{(2n^{1/2})^{n-1}}{(n-1)!}\lessdot \frac{n (2e^{-2})^n}{(n-1)!!}, \end{align*} since the integrand attains its maximum at $s=2n^{1/2}$. So \begin{equation}\label{D1} \mathcal P(M)-\mathcal P_{D_1}(M)\lessdot \frac{n(2e^{-2})^n}{\bigl[(n-1)!!\bigr]^2}. \end{equation} For the second, and last, reduction, define $u_i=\frac{x_i}{s}$, $v_i=\frac{y_i}{t}$ and set \[ D_2=\Bigl\{(\bold x,\bold y)\in D_1: \max_i u_i\le n^{-\gamma},\,\max_i v_i\le n^{-\gamma} \Bigr\}, \] where $\gamma<1$ is to be chosen later. Then \begin{align*} \mathcal P_{D_1}(M)-\mathcal P_{D_2}(M)&\lessdot \idotsint\limits_{\bold x\ge \bold 0\atop \max u_i>n^{-\gamma}}\exp\left(\!-\frac{s^2}{2}\right)\,d\bold x\,\cdot\, \idotsint\limits_{\bold y\ge \bold 0}\exp\left(\!-\frac{t^2}{2}\right)\,d\bold y. \end{align*} By Lemma \ref{intervals1} and \eqref{P(maxL_j>)}, the first integral is bounded by \[ \frac{\textup{ P\/}\Bigl(\max_{i\in [n]}L_i\ge n^{-\gamma}\Bigr)}{(n-1)!}\int_0^{\infty}e^{-\frac{s^2}{2}}s^{n-1}\,ds \le \frac{e^{-\Theta(n^{1-\gamma})}}{(n-1)!!}. \] So \begin{equation}\label{D2} \mathcal P_{D_1}(M)-\mathcal P_{D_2}(M)\le \frac{e^{-\Theta(n^{1-\gamma})}}{\bigl[(n-1)!!\bigr]^2}. \end{equation} Setting $D^*=D_2$, by \eqref{D1} and \eqref{D2} we have \begin{equation}\label{D*} \mathcal P(M)-\mathcal P_{D^*}(M)\le \frac{e^{-\Theta(n^{1-\gamma})}}{\bigl[(n-1)!!\bigr]^2}. \end{equation} Now that $(\bold x,\bold y)\in D^*$, we can use \eqref{mathcalP(M|)=} to obtain a sharp upper bound for $\mathcal P(M|\bold x,\bold y)$, whence for $\mathcal P_{D^*}(M)$, the integral of $\mathcal P(M|\bold x,\bold y)$ over $D^*$. First of all, on $D^*$ we have $x_i,\,y_i \le 2n^{1/2-\gamma}\to 0$, provided that $\gamma\in (1/2,1)$. By the Bonferroni inequality \[ \textup{ P\/}\bigl(\cap B_j^c\bigr)\le 1-\sum_j\textup{ P\/}(B_j)+\sum_{j_1<j_2}\textup{ P\/}\bigl(B_{j_1}\cap B_{j_2}\bigr), \] the $(i_1,i_2)$-th factor from the product in \eqref{mathcalP(M|)=} is at most \begin{equation}\label{G(i_1,i_2)=} \begin{aligned} F_{(i_1,i_2)}(\bold x,\bold y)&=1-G_{(i_1,i_2)}(\bold x,\bold y)+H_{(i_1,i_2)}(\bold x,\bold y),\\ G_{(i_1,i_2)}(\bold x,\bold y)&:=x_{i_1}y_{M(i_2)}+x_{i_2}y_{M(i_1)}+x_{i_1}x_{i_2}+ y_{M(i_2)}y_{M(i_1)},\\ H_{(i_1,i_2)}(\bold x,\bold y)&:=2x_{i_1}x_{i_2}y_{M(i_1)}y_{M(i_2)}+x_{i_1}x_{i_2}\bigl(y_{M(i_1)}+y_{M(i_2)}\bigr)\\ &\quad+y_{M(i_1)}y_{M(i_2)}\bigl(x_{i_1}+x_{i_2}\bigr). \end{aligned} \end{equation} Here $G_{(i_1,i_2)}(\bold x,\bold y),\,H_{(i_1,i_2)}(\bold x,\bold y)\to 0$ uniformly for all $i_1,\,i_2$ and $(\bold x,\bold y)\in D^*$, and more precisely $H_{(i_1,i_2)}(\bold x,\bold y)=O(n^{3/2-3\gamma})$.
So \begin{align*} F_{(i_1,i_2)}(\bold x,\bold y)&\le \bigl(1-G_{(i_1,i_2)}(\bold x,\bold y)\bigr) e^{O(n^{3/2-3\gamma})}\\ &\le \bigl(1-x_{i_1}x_{i_2}\bigr) \bigl(1-y_{M(i_1)}y_{M(i_2)}\bigr)\\ &\quad\times\bigl(1-x_{i_1}y_{M(i_2)}\bigr) \bigl(1-x_{i_2}y_{M(i_1)}\bigr) e^{O(n^{3/2-3\gamma})}. \end{align*} Now \begin{align*} &\qquad\qquad \prod_{(i_1,i_2)}\!\bigl(1-x_{i_1}x_{i_2}\bigr)\lessdot e^{-\frac{s^2}{2}},\quad \prod_{(i_1,i_2)}\!\bigl(1-y_{M(i_1)}y_{M(i_2)}\bigr)\lessdot e^{-\frac{t^2}{2}},\\ &\prod_{(i_1,i_2)}\!\bigl(1-x_{i_1}y_{M(i_2)}\bigr)\bigl(1-x_{i_2}y_{M(i_1)}\bigr) \le\exp\Bigl(-\!\sum_{(i_1,i_2)}\!(x_{i_1}y_{M(i_2)}+x_{i_2}y_{M(i_1)})\!\Bigr)\\ &\qquad\qquad\qquad\quad=\exp\Bigl(-st+\sum_ix_iy_{M(i)}\Bigr)\le e^{-st} e^{O(n^{2-2\gamma})}. \end{align*} Therefore, uniformly for $(\bold x,\bold y)\in D^*$, we have \[ \mathcal P(M|\bold x,\bold y )\le e^{-\frac{\xi^2}{2}}\cdot e^{O(n^{7/2-3\gamma})}, \quad \xi:=s+t. \] Consequently \begin{align*} \mathcal P_{D^*}(M)&\le e^{O(n^{7/2-3\gamma})}\idotsint\limits_{\bold x,\, \bold y\ge\bold 0} e^{-\frac{\xi^2}{2}}\,d\bold x d\bold y\\ &=e^{O(n^{7/2-3\gamma})}\int\limits_0^{\infty} e^{-\frac{\xi^2}{2}} \frac{\xi^{2n-1}}{(2n-1)!}\,d\xi =\frac{e^{O(n^{7/2-3\gamma})}}{(2n-1)!!}. \end{align*} Combining this estimate with \eqref{D*} we conclude that \[ \mathcal P(M)\le \frac{e^{O(n^{7/2-3\gamma})}}{(2n-1)!!}+\frac{e^{-\Theta(n^{1-\gamma})}}{\bigl[(n-1)!!\bigr]^2}, \] uniformly for all $M$. So \begin{align*} \sum_M \mathcal P(M)&\le \frac{e^{O(n^{7/2-3\gamma})}n!}{(2n-1)!!} +\frac{e^{-\Theta(n^{1-\gamma})}n!}{\bigl[(n-1)!!\bigr]^2}\\ &\le \exp\bigl(-n\log 2+O(n^{7/2-3\gamma})\bigr)+e^{-\Theta(n^{1-\gamma})}\\ &=e^{-\Theta(n^{1-\gamma})}, \end{align*} provided that $\gamma\in (5/6,1)$. Thus we have proved \begin{Theorem}\label{P(Mdeexists)} The probability that there exists a matching $M$ which is both e-stable and stable is at most $e^{-n^{\sigma}}$, for every $\sigma<1/6$. \end{Theorem} \subsection{$\textup{E\/}\bigl[S_n^2\bigr]$ and such} Having proved that $\textup{E\/}\bigl[S_n\bigr]$ is of order $n^{1/2}$, we felt confident that---like other types of stable matchings we studied earlier---the second moment $\textup{E\/}\bigl[S_n^2\bigr]$ would not grow faster than $n^{\gamma}$, for some $\gamma\ge 1$. Contrary to our naive expectations, $\textup{E\/}\bigl[S_n^2\bigr]$ grows much faster. For $\xi\in (0,1)$, define \begin{equation}\label{H(xi)=} \begin{aligned} \mathcal H(\xi)&= -(1-\xi)\log(1-\xi)+4\xi \log\frac{1+\xi+\sqrt{1-2\xi+5\xi^2}}{1-\xi+\sqrt{1-2\xi+5\xi^2}}\\ &\quad-(1+\xi)\log\frac{1+3\xi^2+(\xi+1)\sqrt{1-2\xi+5\xi^2}}{1-\xi+\sqrt{1-2\xi+5\xi^2}}. \end{aligned} \end{equation} $\mathcal H(0+)=\mathcal H(1-)=0$, and $\mathcal H(\xi)$ attains its maximum at $\xi_{\text{max}}\approx 0.739534$, with $\mathcal H(\xi_{\text{max}})\approx 0.253062$. Using $A\gtrdot B$ as a shorthand for $B=O(A)$, we have \begin{Theorem}\label{ESn^2>} $\textup{E\/}\bigl[S_n^2\bigr]\gtrdot n^{3/2}\exp[n\mathcal H(\xi_{\text{max}})]> n^{3/2}\cdot 1.28^n$, for $n$ large enough. Consequently, for each such $n$ there exists an instance of the $2n$ preference lists with the number of e-stable matchings exceeding $n^{3/4}\cdot 1.28^{n/2}> n^{3/4}\cdot 1.13^n$. \end{Theorem} Thus the standard deviation of $S_n$ is more than $1.13^n$, dwarfing $\textup{E\/}\bigl[S_n\bigr]$. Informally, the distribution of $S_n$ is highly asymmetric, with the discernible right tail much longer than the left tail. It is tempting to conjecture that $\textup{ P\/}(S_n>0)\to 1$.
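As a quick numerical sanity check of \eqref{H(xi)=}, the quoted values of $\xi_{\text{max}}$ and $\mathcal H(\xi_{\text{max}})$ can be reproduced by maximizing the function directly; a short Python/SciPy sketch (purely illustrative, not part of the argument) follows:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def H(x):
    # \mathcal H(\xi), transcribed from the definition above
    r = np.sqrt(1 - 2*x + 5*x**2)
    return (-(1 - x)*np.log(1 - x)
            + 4*x*np.log((1 + x + r)/(1 - x + r))
            - (1 + x)*np.log((1 + 3*x**2 + (x + 1)*r)/(1 - x + r)))

opt = minimize_scalar(lambda x: -H(x), bounds=(1e-9, 1 - 1e-9),
                      method='bounded')
# expected: opt.x ~ 0.739534, H(opt.x) ~ 0.253062, exp(H(opt.x)) ~ 1.288 > 1.28
print(opt.x, H(opt.x), np.exp(H(opt.x)))
\end{verbatim}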
To begin the proof, we observe that \begin{equation}\label{sum>lower} \textup{E\/}\bigl[(S_n)_2\bigr]=\sum_{M_1\neq M_2}\textup{ P\/}(M_1,M_2), \end{equation} where $\textup{ P\/}(M_1,M_2)$ is the probability that both $M_1$ and $M_2$ are e-stable. The lower bound for $\textup{ P\/}(M_1,M_2)$ given in Lemma \ref{P(M1,M2est)} depends on $M_1$, $M_2$ only through $2\nu:=2\nu(M_1,M_2)$, the total length of the bipartite circuits formed by the alternating pairs (man,woman) matched in either $M_1$ or, exclusively, in $M_2$. It makes sense to guess that the dominant contribution to the resulting lower bound for the sum in \eqref{sum>lower} comes from the pairs $(M_1,M_2)$ with $\nu(M_1,M_2)$ relatively close to some judiciously chosen $\nu$. And since we are after an exponential bound, we will use a single-$\nu$ bound coming from Lemma \ref{P(M1,M2est)}: with $\bold x_1,\,\bold x_2\in [0,1]^n$, and $\bold x_2^*$ formed by the first $\nu$ components of $\bold x_2$, \begin{equation}\label{single} \begin{aligned} \textup{E\/}\bigl[(S_n)_2\bigr] &\ge B(n,\nu) \left(\,\,\,\idotsint\limits_{\bold x_1\in [0,1]^n,\,\,\bold x_2^*\in [0,1]^{\nu}}\!\!\!\!\!\!\!\!\! f(\bold x_1,\bold x_2)\,d\bold x_1 d\bold x_2^*\right)^2,\\ f(\bold x_1,\bold x_2)&=\prod_{(i_1,i_2):\,i_1,i_2\in [\nu]}\!\!\!\!\bigl(1-x_{i_1, 1}\,x_{i_2, 1}\bigr)\bigl(1-x_{i_1, 2}\,x_{i_2, 2}\bigr)\\ &\quad\times\!\!\prod_{i_1\in [\nu]^c,\, i_2\in [\nu]}\bigl(1-x_{i_1, 1}\,x_{i_2, 1}\bigr)\bigl(1-x_{i_1, 2}\,x_{i_2, 2}\bigr)\\ &\quad\times\!\!\!\!\!\prod_{(i_1,i_2):\,i_1,i_2\in [\nu]^c}\!\!\!\bigl(1-x_{i_1,1}x_{i_2,1}\bigr), \end{aligned} \end{equation} where $B(n,\nu)$ is the total number of pairs $(M_1,M_2)$ of general matchings $M_1$ and $M_2$, with $2\nu(M_1,M_2)=2\nu$. More explicitly, we have $B(n,\nu)=\binom{n}{\nu}^2 (n-\nu)! B(\nu)$. Here $B(\nu)$ is the total number of the disjoint unions of bipartite {\it circuits\/} on the vertex set $[\nu]\cup [\nu]$, with every second edge on each circuit marked as belonging to the matching $M_1$, and the intervening edges being assigned to the matching $M_2$. Thus $B(\nu)$ is also the total number of bipartite permutations of $[\nu]\cup [\nu]$ with {\it cycles\/} of length $\ge 4$. A simple bijective argument shows that $B(\nu)=\nu!\pi(\nu)$, where $\pi(\nu)$ is the total number of permutations of $[\nu]$ without a fixed point. Since $\pi(\nu)\sim e^{-1}\nu!$, as $\nu\to\infty$, it follows that $B(\nu)=\Theta\bigl((\nu!)^2\bigr)$. Therefore \begin{equation}\label{single,simple} \textup{E\/}\bigl[(S_n)_2\bigr] \gtrdot\frac{(n!)^2}{(n-\nu)!} \left(\,\,\,\idotsint\limits_{\bold x_1\in [0,1]^n,\,\,\bold x_2^*\in [0,1]^{\nu}}\!\!\!\!\!\!\!\!\! f(\bold x_1,\bold x_2)\,d\bold x_1 d\bold x_2^*\right)^2. \end{equation} It remains to find a lower, $\nu$-dependent, bound for the multidimensional integral, simple enough to identify a value $\nu=\nu(n)$ that makes the resulting bound approach infinity fast. We focus on the case when $\nu$ and $n-\nu$ are both of order $\Theta(n)$. Similarly to the case of $\textup{E\/}[S_n]$, the rest of the proof has two components: determination of the potentially dominant core $\mathcal C$ of the integration domain in \eqref{single,simple} and a sufficiently sharp lower bound of the integral over $\mathcal C$. Motivated by our analysis of $\textup{E\/}[S_n]$, and by Corollary \ref{C3,expl} in particular, we define $\mathcal C$ as follows.
Define $\mathcal I_1=\mathcal I_2=[\nu]$, $\mathcal I_3=[n]\setminus [\nu]$, and $x_{i,3}=x_{i,1} (=x_{i,2})$ for $i\in \mathcal I_3$. Denoting \begin{align*} &s_t=\sum_{i\in \mathcal I_t}x_{i,t},\quad s_t^{(2)}=\sum_{i\in \mathcal I_t} x_{i,t}^2, \quad s=\sum_ts_t, \end{align*} $\mathcal C$ is the set of all $(\bold x_1,\bold x_2)$ such that \begin{equation}\label{mathC} \max_{i\in \mathcal I_t} x_{i,t}\le s_t\frac{\log^2 n}{n},\quad \frac{s_t^{(2)}}{s_t^2}\le \frac{3}{|\mathcal I_t|}, \quad s=\Theta(n^{1/2}). \end{equation} The exact range of $s$ will be specified shortly. Let $(\bold x_1,\bold x_2^*)\in \mathcal C$. For large $n$, we have $\mathcal C\subset [0,1]^n \times [0,1]^{\nu}$ since $x_{i,t}=O\bigl(n^{-1/2}\log^2 n\bigr)$ on $\mathcal C$. This bound on $x_{i,t}$ yields \begin{align*} \log\!\!\!\!\!\prod_{(i_1,i_2):\,i_1,i_2\in \mathcal I_t}\!\!\!\!\!\!\bigl(1-x_{i_1, t}\,x_{i_2, t}\bigr)&\ge -\sum_{(i_1,i_2):\,i_1,i_2\in \mathcal I_t}\!\!\!\bigl(x_{i_1, t}\,x_{i_2, t}+x_{i_1, t}^2\,x_{i_2, t}^2\bigr)\\ &\ge -\frac{s_t^2}{2}-\frac{\bigl(s_t^{(2)}\bigr)^2}{2},\qquad (\bold x_1,\bold x_2^*)\in \mathcal C. \end{align*} Similarly, for $t=1,2$, \begin{align*} \log\!\!\prod_{i_1\in \mathcal I_3,\, i_2\in\mathcal I_t}\!\!\bigl(1-x_{i_1, t}\,x_{i_2, t}\bigr)&\ge -\sum_{i_1\in \mathcal I_3,\, i_2\in\mathcal I_t}\!\!\bigl(x_{i_1, t}\,x_{i_2, t}+x_{i_1, t}^2\,x_{i_2, t}^2\bigr)\\ &=-s_t s_3 -s_t^{(2)} s_3^{(2)}. \end{align*} It follows then from \eqref{single} that, with $s=\sum_ts_t$, \begin{align*} &\log f(\bold x_1,\bold x_2)\ge -\frac{\sum_t s_t^2}{2}-s_1s_3-s_2s_3-1.5\sum_t \bigl(s_t^{(2)}\bigr)^2\\ =&-\frac{s^2}{2}+s_1s_2-1.5\sum_t \bigl(s_t^{(2)}\bigr)^2\ge -\frac{s^2}{2}+s_1s_2-13.5\,s^4\sum_t\frac{1} {|\mathcal I_t|^2}\\ &\qquad\qquad\quad\,\,=-\frac{s^2}{2}+s_1s_2 +O(1), \end{align*} uniformly for $n$ and $(\bold x_1,\bold x_2^*)\in \mathcal C$. Therefore, for all positive integers $k$, \begin{align*} \idotsint\limits_{\bold x_1\in [0,1]^n,\,\,\bold x_2^*\in [0,1]^{\nu}}\!\!\!\!\!\!\!\!\! f(\bold x_1,\bold x_2)\,d\bold x_1 d\bold x_2^*&\gtrdot\idotsint\limits_{(\bold x_1,\bold x_2^*)\in \mathcal C}\exp\Bigl(-\frac{s^2}{2}+s_1s_2\Bigr)\,d\bold x_1 d\bold x_2^*\\ &\ge\frac{1}{k!}\idotsint\limits_{(\bold x_1,\bold x_2^*)\in \mathcal C}\exp\Bigl(-\frac{s^2}{2}\Bigr)s_1^ks_2^k\,d\bold x_1 d\bold x_2^*. \end{align*} We will use this bound for $k=\Theta(n)$. Let $\{L_i^{(t)}\}_{i\in \mathcal I_t}$, ($t=1,2,3$), denote the lengths of $|\mathcal I_t|$ consecutive intervals obtained by sampling $|\mathcal I_t|-1$ points uniformly at random, and independently, from the interval $[0,1]$. (The three sampling procedures are implemented independently of each other.) Applying Lemma \ref{intervals1}, we obtain \begin{align*} &\qquad\quad\,\,\idotsint\limits_{(\bold x_1,\bold x_2^*)\in \mathcal C}\exp\Bigl(-\frac{s^2}{2}\Bigr)s_1^ks_2^k\,d\bold x_1 d\bold x_2^* =\iiint\limits_{ s=\Theta(n^{1/2})}\!\!\!\!\exp\Bigl(-\frac{s^2}{2}\Bigr)\,s_1^ks_2^k\\ &\times\prod_t\frac{s_t^{|\mathcal I_t|-1}}{(|\mathcal I_t|-1)!}\textup{ P\/}\!\left(\!\max_i L_i^{(t)}\le \min\left\{\frac{1}{s_t}, \frac{\log^2 n}{n}\right\};\, \sum_i(L_i^{(t)})^2\le \frac{3}{|\mathcal I_t|}\!\right)\,d\bold s. \end{align*} Since $s_t=O\bigl(n^{1/2}\bigr)\ll n \log^{-2}n$ and $|\mathcal I_t|=\Theta(n)$, the $t$-th probability factor is at least \[ 1-\exp\bigl(-\Theta(\log^2 n)\bigr)-\exp\bigl(-\Theta(n^{\gamma})\bigr), \] for $\gamma\in (0,1/3)$, see Lemma \ref{sumsofLs} and \eqref{P(L^+>)<}.
Since $|\mathcal I_1|= |\mathcal I_2|=\nu$, $|\mathcal I_3|=n-\nu$, we see that, with $\varepsilon_n:=e^{-\Theta(\log^2n)}$, \begin{align*} &\qquad\qquad\qquad\quad\frac{1}{k!}\idotsint\limits_{(\bold x_1,\bold x_2^*)\in \mathcal C}\exp\Bigl(-\frac{s^2}{2}\Bigr)s_1^ks_2^k\,d\bold x_1 d\bold x_2^*\\ &\qquad\,\,\ge(1-\varepsilon_n)\!\!\iiint\limits_{ s=\Theta(n^{1/2})}\!\!\!\!\exp\Bigl(-\frac{s^2}{2}\Bigr)\,\frac{s_1^{\nu+k-1}s_2^{\nu+k-1}s_3^{n-\nu-1}}{k! \bigl((\nu-1)!\bigr)^2(n-\nu-1)!}\,d\bold s\\ &\quad=\frac{(1-\varepsilon_n)\bigl((\nu+k-1)!\bigr)^2}{k!\bigl((\nu-1)!\bigr)^2(n+\nu+2k-1)!}\int\limits_{s=\Theta(n^{1/2})}\!\!\!\!\!\!\!\!\!\exp\Bigl(-\frac{s^2}{2}\Bigr)\, s^{n+\nu+2k-1}\,ds, \end{align*} using \eqref{int,prod} for the last step. The integrand attains its sharply pronounced maximum at $s_{\text{max}}=(n+\nu+2k)^{1/2}$, which is $\Theta(n^{1/2})$ for $k=O(n)$. Let us choose $J:=[s_{\text{max}}-\log n, s_{\text{max}}+\log n]$ as the range of $s$. Since \[ \frac{d^2}{ds^2}\log\left[\exp\Bigl(-\frac{s^2}{2}\Bigr)\, s^{n+\nu+2k-1}\right]\le -1, \] it follows in a standard way (cf. the proof of Lemma \ref{IC3sim}) that \begin{align*} \int\limits_{s\in J}\!\!\exp\Bigl(-\frac{s^2}{2}\Bigr)\, s^{n+\nu+2k-1}\,ds&\ge (1-\varepsilon_n)\int\limits_0^{\infty} \exp\Bigl(-\frac{s^2}{2}\Bigr)\, s^{n+\nu+2k-1}\,ds\\ &\ge (1-\varepsilon_n)(n+\nu+2k-2)!!. \end{align*} Therefore, using \[ \binom{b}{a}\le \frac{b^b}{a^a(b-a)^{b-a}},\quad m!=\Theta\left[\left(\frac{m}{e}\right)^m\right], \quad (m-1)!!=\Theta\left[\left(\frac{m}{e}\right)^{m/2}\right], \] we obtain \begin{equation}\label{iiint>} \begin{aligned} &\idotsint\limits_{\bold x_1,\, \bold x_2^*}\!\! f(\bold x_1,\bold x_2)\,d\bold x_1 d\bold x_2^* \gtrdot \frac{\bigl((\nu+k-1)!\bigr)^2}{k!\bigl((\nu-1)!\bigr)^2(n+\nu+2k-1)!!}\\ &\qquad\qquad\qquad\qquad\quad\quad\,\,\,\,\gtrdot n^{1/2} e^{H(\nu,k)},\\ &\quad H(\nu,k):=-k\log (ke)+2(\nu+k)\log(\nu+k)-2\nu\log\nu\\ &\qquad\qquad\qquad\quad\,\,\,-\frac{n+\nu+2k}{2}\log\frac{n+\nu+2k}{e}. \end{aligned} \end{equation} Treating $k$ as a continuously varying parameter, we have \begin{equation}\label{H'_k=} H'_k(\nu,k)=2\log (\nu+k)-\log k-\log(n+\nu+2k)=\log\frac{(\nu+k)^2} {k(n+\nu+2k)}. \end{equation} From \eqref{H'_k=} we see that $H(\nu,k)$ has a unique stationary point \begin{align*} k(\nu)&=\frac{2\nu^2}{n-\nu+\sqrt{4\nu^2+(n-\nu)^2}}=n \phi(\xi),\quad \xi:=\frac{\nu}{n},\\ \phi(x)&:=\frac{2x^2}{1-x+\sqrt{4x^2+(1-x)^2}}. \end{align*} So, using \eqref{H'_k=} again, \begin{equation}\label{H(nu,k(nu))=} \begin{aligned} &H(\nu,k(\nu))=2\nu\log(\nu+k(\nu))-2\nu\log\nu-\frac{n+\nu}{2}\log\frac{n+\nu+2k(\nu)}{e}\\ &\quad=n\left[2\xi\log\left(1+\frac{\phi(\xi)}{\xi}\right)-\frac{1+\xi}{2}\left(\log\frac{n}{e}+\log\bigl(1+\xi+2\phi(\xi)\bigr)\right)\right]. \end{aligned} \end{equation} Since only integers $k$ qualify for the bound \eqref{iiint>}, we introduce $k^*(\nu)=\lceil k(\nu)\rceil$. As $H''_k(\nu,k)=O(n^{-1})$, we have $H(\nu,k^*(\nu))=H(\nu,k(\nu))+O(n^{-1})$. For the first factor on the RHS of \eqref{single,simple} we have \begin{equation}\label{first>} \frac{(n!)^2}{(n-\nu)!}\gtrdot n^{1/2}\exp\left[n\left(-(1-\xi)\log(1-\xi) +(1+\xi)\log\frac{n}{e}\right)\right]. \end{equation} Combining the equations \eqref{iiint>}, \eqref{H(nu,k(nu))=} and \eqref{first>}, we obtain \begin{equation}\label{ES_n^2>} \textup{E\/}\bigl[(S_n)_2\bigr]\gtrdot n^{3/2}\exp\bigl(n\mathcal H(\xi)\bigr),\quad \xi=\frac{\nu}{n}, \end{equation} with $\mathcal H(\xi)$ defined in \eqref{H(xi)=}.
As a function of the continuously varying $\xi\in (0,1)$, $\mathcal H(\xi)$ attains its maximum at $\xi_{\text{max}}\approx 0.739534$. Introduce $\nu^*=\lceil n\xi_{\text{max}} \rceil$; then $\frac{\nu^*}{n}=\xi_{\text{max}}+O(n^{-1})$, implying that $\mathcal H(\nu^*/n)=\mathcal H(\xi_{\text{max}})+ O(n^{-1})$. Therefore \[ \textup{E\/}\bigl[(S_n)_2\bigr]\gtrdot n^{3/2}\exp\bigl(n\mathcal H(\xi_{\text{max}})\bigr). \] The proof of Theorem \ref{ESn^2>} is complete. \section{Estimates for one-sided matchings} \subsection{$\textup{ P\/}(M)$, $\textup{E\/}[S_n]$, $\mathcal P(M)$} By Lemma \ref{P(Mest)='}, \[ \textup{ P\/}(M)=\textup{ P\/}(M\text{ is e-stable})=\idotsint\limits_{\bold x\in [0,1]^n}\prod_{(a,b\neq M(a))}(1-x_ax_b)\,d\bold x. \] Let \[ \mathcal C^*=\Biggl\{\bold x \in C_3:\,\Bigl|\frac{2n\sum_{i\in [n/2]}x_ix_{i+n/2}}{s^2}-1\Bigr|\le n^{-\sigma}\Biggr\}, \] where $C_3$ is defined in Corollary \ref{C3,expl}. Very similarly to Lemma \ref{prodsim}, uniformly for $\bold x\in \mathcal C^*$, we have \[ \prod_{(a, b\neq M(a))} (1-x_a x_b)=\exp\!\left(\!-\frac{s^2}{2}\left(\!1-\frac{3}{n}\!\right)-\frac{s^4}{n^2}+O(n^{-\sigma})\!\right). \] And, just like Corollary \ref{C3,expl} itself, invoking the bound \eqref{sumL_jL_{j+}} we obtain \[ \idotsint\limits_{\bold x\in [0,1]^n\setminus \mathcal C^*}\prod_{(a,b\neq M(a))}(1-x_ax_b)\,d\bold x\le \frac{e^{-\Theta(\log^2n)}}{(n-1)!!}. \] Arguing as in the proof of Lemma \ref{IC3sim}, and using $\textup{E\/}[S_n]=(n-1)!!\, \textup{ P\/}(M)$, we establish \begin{Theorem}\label{IC3sim'} \begin{align*} \textup{ P\/}(M)&=\bigl(1+O(n^{-\sigma})\bigr)\frac{e^{1/2}}{(n-1)!!},\\ \textup{E\/}[S_n]&=e^{1/2}+O(n^{-\sigma}),\quad\forall \sigma<1/3. \end{align*} \end{Theorem} {\bf Note.\/} Since $\textup{E\/}[S_n]$ also equals the expected number of the usual, one-sided, stable matchings, we actually gave here a corrected proof of our result from \cite{Pit2}. See the note following Lemma \ref{sumsofLs}. Turn to $\mathcal P(M)$, the probability that $M$ is both stable and e-stable. By Lemma \ref{P(Mest)='}, \eqref{simple1,2}, Lemma \ref{intervals1} and \eqref{int,prod}, \begin{align*} \mathcal P(M)&=\idotsint\limits_{\bold x\in [0,1]^n}\prod_{(a,b\neq M(a))}(1-x_ax_b)^2\,d\bold x\\ &\lessdot \idotsint\limits_{\bold x\ge\bold 0} e^{-s^2}\,d\bold x\le \int_0^{\infty}e^{-s^2}\frac{s^{n-1}}{(n-1)!}\,ds \\ &=\frac{2^{-n/2}}{(n-1)!!}. \end{align*} Since the total number of matchings on $[n]$ is $(n-1)!!$, we have proved \begin{Theorem}\label{nodouble'} \[ \textup{ P\/}(\exists\, M\text{ both stable and e-stable})= O\bigl(2^{-n/2}\bigr). \] \end{Theorem} \subsection{Likely range of the partners' ranks} Let $R(M)$ be the sum of all partners' ranks under $M$, and $\!\!\textup{ P\/}_k(M)=\!\!\textup{ P\/}(M\text{ is e-stable}, R(M)\!=k)$. By Lemma \ref{Pk(M)'}, \[ \textup{ P\/}_k(M)=\!\!\idotsint\limits_{\bold x\in [0,1]^{n}}[\xi^{k-n+1}]\!\!\prod_{(a,b\neq M(a))} \!\!\!\!\!\!\bigl(\bar x_a\bar x_b+\xi x_a\bar x_b+\xi \bar x_a x_b\bigr)\,d\bold x; \] this integral is also the probability that $M$ is stable, and $R(M)=k$. \begin{Theorem}\label{R(M)appr'} For a fixed $\varepsilon\in (0,1)$, \[ \textup{ P\/}\left(\max_M\left|\frac{R(M)}{n^{3/2}}-1\right|\ge \varepsilon\right)\le e^{-\Theta(\log^2 n)}. \] \end{Theorem} In our recent \cite{Pit3} we proved a similar result for the total rank of ``predecessors'' in the stable cyclic partitions, which include the stable matchings as a special case.
Since the proof was based on the union bound involving the distribution of that rank for a generic cyclic partition, Theorem \ref{R(M)appr'} is a direct corollary of that result. The note following Theorem \ref{IC3sim'} could be replicated here. \subsection{$\textup{E\/}[S_n^2]$ and such} \begin{Theorem}\label{ESn^2>'} $\textup{E\/}\bigl[S_n^2\bigr]\gtrdot\exp\Bigl[\frac{n}{2}\mathcal H(\xi_{\text{max}})\Bigr]> 1.13^n$, for $n$ large enough. Consequently, for each such $n$ there exists an instance of the $n$ preference lists with the number of e-stable matchings exceeding $1.06^n$. \end{Theorem} \begin{proof} The one-sided counterpart of \eqref{single} is \begin{equation}\label{single'} \begin{aligned} \textup{E\/}\bigl[(S_n)_2\bigr] &\ge \mathcal B(n,\nu) \,\,\,\idotsint\limits_{\bold x_1\in [0,1]^n,\,\,\bold x_2^*\in [0,1]^{\nu}}\!\!\!\!\!\!\!\!\! f(\bold x_1,\bold x_2)\,d\bold x_1 d\bold x_2^*. \end{aligned} \end{equation} Here $\mathcal B(n,\nu)$ is the total number of pairs $(M_1,M_2)$ of general matchings $M_1$ and $M_2$ on $[n]$, with $\nu(M_1,M_2)$, the total length of the circuits formed by the pairs from $M_1\Delta M_2$, equal to $\nu$. More explicitly, we have $\mathcal B(n,\nu)=\binom{n}{\nu}(n-\nu-1)!! \mathcal B(\nu)$. Here $\mathcal B(\nu)$ is the total number of the disjoint unions of {\it circuits\/} on the vertex set $[\nu]$, with every second edge on each circuit marked as belonging to the matching $M_1$, and the intervening edges being assigned to the matching $M_2$. Thus $\mathcal B(\nu)$ is also the total number of permutations on $[\nu]$ with {\it cycles\/} of even length $\ge 4$. Since the total number of those permutations with $r_j$ cycles of even length $j\ge 4$ is $\nu! \prod_j \frac{1}{j^{r_j}\,r_j!}$, it follows easily that \begin{align*} \sum_{\nu\ge 4}x^{\nu}\,\frac{\mathcal B(\nu)}{\nu!}&=\exp\left(\sum_{\text{even }j\ge 4}\frac{x^j}{j}\right) =\frac{e^{-\frac{x^2}{2}}}{(1-x^2)^{1/2}}. \end{align*} Using the saddle-point method (Flajolet and Sedgewick \cite{FlaSed}), we obtain \[ \mathcal B(\nu)=\bigl(1+O(\nu^{-1})\bigr)\nu!\sqrt{\frac{2}{\pi e\nu}}\gtrdot \nu!\,\nu^{-1/2}. \] Consequently, for $\nu=\Theta(n)$, \begin{equation}\label{first>'} \begin{aligned} \mathcal B(n,\nu)& \gtrdot \exp\left[\frac{n}{2}\left(-(1-\xi)\log(1-\xi)+(1+\xi)\log\frac{n}{e}\right)\right],\,\,\xi:=\frac{\nu}{n}; \end{aligned} \end{equation} cf. \eqref{first>}. Combining the equations \eqref{single'}, \eqref{iiint>}, \eqref{H(nu,k(nu))=} and \eqref{first>'}, we obtain \begin{equation}\label{ES_n^2>'} \textup{E\/}\bigl[(S_n)_2\bigr]\gtrdot n^{3/2}\exp\Bigl(\frac{n}{2}\mathcal H(\xi)\Bigr),\quad \xi=\frac{\nu}{n}, \end{equation} with $\mathcal H(\xi)$ defined in \eqref{H(xi)=}. The rest follows the conclusion of the proof of Theorem \ref{ESn^2>}. \end{proof} {\bf Acknowledgment.\/} I am grateful to David Manlove for bringing the existing work on the doubly stable matchings to my attention, and asking how likely these matchings are.
\section{Introduction} \label{sec:intro} Quality control is a fundamental process in the manufacturing pipeline. Since the 1980s, automating the quality control task has offered the potential to overcome the limitations of manual inspection \cite{chin1982automated}. As a consequence, successful applications of automatic visual inspection have been emerging year after year, and nowadays inspection systems are employed in a vast number of industries, from food \cite{wu2013colour} and fabrics \cite{li2016deformable} to railways \cite{shang2018detection} and reconstruction \cite{chen2017self}. In this regard, a standard visual inspection hardware setup is typically composed of a digital camera, optics, and an illumination system. The hardware setup is usually coupled with customised software that controls the acquisition, evaluates the captured images, and eventually takes decisions based on the evaluated results. Hence, hardware selection is a fundamental task in the design of an automatic visual inspection system and is essentially driven by the characteristics of the object to be inspected \cite{see2017role}. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{fig_setup_defects.png} \caption{The proposed lighting system and four examples of defective images in our dataset: (i) worn-out paint, (ii) dots on the metal surface, (iii) missing decorations, and (iv) an unexpected glass break. } \label{fig:defects} \vspace{-1em} \end{figure} For a given manufactured object, countless different models might exist in production, having various material properties (specular, diffusive, directional, transparent) or geometrical shapes (flat, curved, prismatic). The surface to be inspected might also contain patterns and adornments which should be distinguished from the undesirable irregularities (see Fig. \ref{fig:defects}). We define the object to be inspected, which is the subject of this work, as a \textit{complex-object} if its variable surface characteristics cannot be determined a priori, e.g. it can appear highly reflective and curved in one instance and opaque and prismatic in a different instance. This situation is not uncommon when inspecting assembled and/or decorated products, which can have custom finishing based on customer requests.\footnote{Due to Non-Disclosure Agreement (NDA) restrictions in place, we cannot reveal the identity of the object inspected in this study.} In this context, \textit{standard illumination techniques} comprising `front lighting', `back lighting', `diffuse lighting', `bright-field lighting' and `dark-field lighting' \cite{van1996choose} are individually insufficient for this task, as each of them is suitable only for the inspection of certain surface characteristics. Additionally, the surface attributes are not the only factor driving the choice of the illumination setup. In fact, \cite{martin2007practical} names the \textit{immediate inspection environment} as one of the three factors for an optimal lighting solution, and introduces object geometry and its support structure as two critical factors for the design of lighting solutions that may even limit the choice of standard illumination techniques. In this work, we aim to propose an illumination system capable of dealing with the challenges of automatic visual inspection of complex-objects, and to define a methodology for analyzing the effect of the proposed illumination system on the final defect detection performance. 
In particular, we seek to study the impact of the proposed multi-lighting system when deployed in the training phase only or in both the training and evaluation phases. The first case is specifically relevant in the common situation where deployment of a novel acquisition system cannot be accomplished on the customer site, either due to industrial constraints or technical specifications. To summarize, our contributions are as follows: \begin{itemize} \item We propose an acquisition setup composed of a multi-illumination system (diffused, dark-field and frontal illumination techniques) to guarantee high defect visibility (over 99\%, as reported by the annotators) on a wide selection of instances of the complex-object. \item We conduct exhaustive experiments to demonstrate the importance of the multi-lighting system, even when it is deployed in the training phase only. \item We experimentally show that deploying the multi-lighting setup in the evaluation phase, coupled with late-fusion of the detections obtained in each single-lighting condition, leads to the highest defect detection rate of the system. \end{itemize} \section{Related work} \label{sec:related} The list of successful applications of visual inspection systems in the case of non-complex objects is long, and in many cases the deployment of standard illumination techniques leads to significant improvements. For instance, \cite{chang2016development} addresses \textit{touch panel glass} defect detection using dark-field illumination coupled with image processing techniques, achieving $99\%$ accuracy on the edge defect type, while an ad-hoc illumination technique, such as injecting light beams perpendicularly into the glass, achieves $100\%$ performance in scratch defect detection and its discrimination from dust \cite{ozturk2018real}. Inspection of non-regular objects, however, has always been considered a challenging task where a combination of hardware and software techniques is required to achieve the desired outcome. For \textit{silver halide film} inspection, adopting a combination of dark-field illumination settings led to the best results in detecting scratches and dust \cite{rufenacht2013automatic}. In certain inspection scenarios, such as small defects in \textit{automotive components}, standard illumination techniques were not found suitable. Thus, to ensure that defects are identified even when they are undetectable to the naked eye, \cite{mery2017automatic} proposed to use x-ray imaging and achieved good performance using a linear SVM classifier. Yet, in a similar case of detecting small defects on \textit{automobile casting aluminum parts}, deployment of x-ray imaging together with the most recent algorithms, such as Feature Pyramid Networks, leads to $0.51$ mAP in the best case scenario \cite{du2019approaches}. In this paper, we firstly present a custom-designed illumination system comprising several heterogeneous lighting techniques, including diffused, dark-field, and front lighting, under various camera exposure values, to illuminate numerous defect types on the wide range of surface characteristics that a complex-object might be made of. In addition, we discuss that collecting data under various illumination configurations can be understood as representing an artifact in different modalities, although all the modalities are in practice offered in a single data format, the \textit{RGB image}. As also suggested in \cite{guo2015microscopy}, hereafter we will thus regard images acquired under different illumination configurations as different modalities. 
Secondly, we provide an exhaustive analysis of the potential of the proposed system when utilized in either the training or the evaluation phase. In many cases, the illumination system cannot be arbitrarily chosen or modified due to, for example, out-of-reach system specifications or cost-related issues, especially on the customer site (evaluation phase). Hence, we will investigate the performance improvement brought by the developed system when it is used only in the data collection phase for training the algorithms on the provider site. Further, we experimentally demonstrate that mutual processing of multiple modalities, in the form of late-fusion of the single detections in each modality, leads to considerable improvements in the performance of defect detection algorithms if employed in both the training and evaluation phases, thus justifying the suitability of the proposed pipeline. In this regard, the work most similar to ours is the one proposed in \cite{park2016ambiguous}, where, to detect and classify defects on a \textit{smartphone surface}, several images are taken with various cameras and light sources to ensure the visibility of defects in at least some of the images. However, differently from our proposal, where images are taken with a single camera under varying illumination conditions, in \cite{park2016ambiguous} the images are taken with different cameras placed in different locations, and the mutual processing of the collected images does not occur. Our motivations for proposing our design are threefold: first, in our proposal, only one camera is embedded, leading to a more cost-effective setup. Second, our proposed setup is designed to have a moderate physical weight, enabling it to be carried, e.g., by a robotic arm that spins around the complex-object and acquires images at different positions of it. Third, and possibly of more interest to the pattern recognition community, our proposed setup provides multiple instances of the same defective region, which we empirically demonstrate to have a large positive impact on the defect detection procedure. \section{Acquisition setup} \label{sec:acquisition} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{fig_setup_lights.png} \caption{Illuminators in the proposed acquisition setup are set to activate and deactivate sequentially to resemble diffused, dark-field, and front lighting techniques within four illumination configurations (C, UD, LR, UDLR).} \label{fig:setup_lights} \vspace{-2em} \end{figure} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{fig_modality.png} \caption{Images of a defective object in various illumination conditions. The defect annotation labeled by the annotator is shown with a green bounding-box.} \label{fig:modality} \vspace{-1em} \end{figure} Our proposed lighting setup is composed of five flat-dome lights that alternately activate and deactivate in different combinations. The light positioning has been empirically studied so as to reproduce diffused, dark-field and front lighting techniques, while producing the least possible glare on specular surfaces. Our proposed setup can be seen in Fig. \ref{fig:setup_lights}. A dome light offers diffused, shadow-less, and uniform illumination even on shiny, curved, and uneven surfaces. In fact, flat-dome lights provide the same characteristics as dome lights, with the additional advantage of occupying as little volume as a standard LED light. 
To minimize the reflectivity of the lighting system, which would make it visible when acquiring highly specular surfaces, we covered all the white flat-dome lights with dark collimator filters. We identified four lighting configurations which allow the system to produce front lighting (Fig. \ref{fig:setup_lights}.C) and dark-field lighting in the vertical (Fig. \ref{fig:setup_lights}.UD), horizontal (Fig. \ref{fig:setup_lights}.LR) and all lateral (Fig. \ref{fig:setup_lights}.UDLR) directions. Front lighting is mostly suitable for detecting color irregularities or flat defects, while dark-field lighting is extremely useful for acquiring effective images of defects related to surface irregularities such as scratches, bumps, or missing pieces. In addition to the four modalities, and to ensure the appropriate illumination level of the acquired images of any surface independently of its reflective characteristics, each light configuration is activated for 3 different time lengths, mimicking 3 different camera shutter speeds (low, medium, high). The camera exposure time is set to be constant and longer than the maximum time of light activation. Trigger controls are configured such that the lights and the camera are properly synchronized. In our study, all the images are acquired using a Basler acA2440-75uc camera and an Edmund Optics 16mm F1.4 lens. The camera is placed at the ad-hoc hole in the center of the central light. In order to block out all external environment light, the entire setup and the complex-object to be inspected were placed in a dark black box. \section{Dataset} \label{sec:dataset} Given the described acquisition setup, the system acquires 12 images of the same object in a single pass, varying the illumination conditions (4 modalities, each with 3 exposures). A defect, depending on its type and the characteristics of the surface on which it appears, might be visible in all or only some of the $12$ captured images. For example, in the case shown in Fig. \ref{fig:modality}, the defect is visible in all the images except those captured with the central light at medium and high exposures. Note the significantly different representation that each one of the light configurations offers of a single defect. Without predefined instructions on image choice, for each defective object, the annotators label the defect in only one of the images on which they can spot it, as shown by a green bounding-box in Fig. \ref{fig:modality}. Fig. \ref{fig:ann_freq} shows the normalized frequency of annotations for each illumination condition. We then expand the single annotation on one image to all the 11 remaining images. If the existing defect on the object is not visible in any of the 12 images captured by the setup, the annotator indicates the non-visibility of the defect in the annotation tool. It is worth mentioning that the developed setup enabled us to visualize and correctly annotate $99.2\%$ of the defects in a freely selected collection of complex-objects. The collected dataset consists of $5,071$ defective regions of complex-objects, where each region may contain more than one defect. For each region, 12 images with varying illumination conditions are collected, yielding a total number of $60,852$ images. For our experiments, we split the dataset object-wise into training, validation, and test sets with ratios of 70\%, 15\%, and 15\%, respectively. 
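For concreteness, the following minimal Python sketch illustrates how a dataset with this structure could be indexed and split object-wise; the identifiers (\texttt{MODALITIES}, \texttt{image\_keys}, \texttt{split\_object\_wise}) are hypothetical placeholders and do not correspond to our internal tooling. \begin{verbatim}
import random
from itertools import product

MODALITIES = ["C", "UD", "LR", "UDLR"]   # 4 lighting configurations
EXPOSURES  = ["low", "medium", "high"]   # 3 light-activation lengths

def image_keys(region_id):
    """The 12 (modality, exposure) image keys of one defective region."""
    return [(region_id, m, e) for m, e in product(MODALITIES, EXPOSURES)]

def split_object_wise(object_ids, ratios=(0.70, 0.15, 0.15), seed=0):
    """Shuffle objects and split them at the object level, so that all
    12 images of a region stay in the same subset (train/val/test)."""
    ids = list(object_ids)
    random.Random(seed).shuffle(ids)
    n_train = int(ratios[0] * len(ids))
    n_val   = int(ratios[1] * len(ids))
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

train, val, test = split_object_wise(range(5071))
assert len(image_keys(train[0])) == 12   # 4 modalities x 3 exposures
\end{verbatim} Splitting at the object level, rather than at the image level, guarantees that no defective region leaks between the training and test sets.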
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{fig_ann_freq.png} \caption{Normalized annotation frequency per illumination condition.} \label{fig:ann_freq} \vspace{-1em} \end{figure} \section{Methodology} \label{sec:method} Depending on the defect type and on the surface characteristics, the defect might be better visible in one or more than one image out of the 12 collected using the proposed setting. Given this, is selecting a conventional single illumination technique the most effective choice that the system provider can make? Can the system provider leverage the availability of the multi-modal data in the training phase to improve uni-modal testing performance? Can different light conditions be considered a natural data augmentation technique, or are the resulting images too correlated to actually bring any contribution during model training? Can inspection scenarios benefit from the multi-modal data availability also in the evaluation phase? In the following paragraphs, we explain our proposed methodology for responding to the aforementioned questions. \subsection{Study 1: Training and evaluation on one single modality} \label{subsec:study1} The most common situation when working with visual inspection systems consists of having the same illumination setup available in both the training and evaluation phases; it is therefore fundamental to assess the best-performing illumination modality. This scenario will be our baseline: only one illumination modality is available for training and evaluation. In this single-modality scenario, we are interested in comparing the performances that can be obtained using each of the different modalities, to better understand the characteristics of our dataset and to explore which light configuration may better help in solving our task. Note that, in the single-modality scenario, only one quarter of the collected data is used, since the images related to all the other 3 modalities are discarded. Yet, in all the experiments, the selected light configuration includes all of its corresponding images taken under all the 3 exposures, unless stated otherwise. \subsection{Study 2: Training on multiple modalities, evaluation on a single modality} \label{subsec:study2} As mentioned earlier, in some cases, visual inspection systems cannot be arbitrarily chosen or modified in the evaluation phase. In this study, we aim to verify whether deploying a multi-modal inspection system only for acquiring images to be used for model training can lead to improved performances on the unmodified single-modality evaluation setup. In order to be comparable with the results of Study 1, we introduce images acquired using different illumination modalities while keeping constant the number of images used during training. In other words, also in this experimental setup, only one quarter of the entire dataset is used. In this case, we choose two possible strategies to select the dataset images to preserve: \begin{itemize} \item Out of the 12 images available per defective region, preserving 3 random images, each from a different modality and under one randomly selected exposure value only; \item Preserving only one quarter of the defective regions in the dataset, but using all of their 12 images acquired with all the light configurations and exposures. 
\end{itemize} Comparing the performances obtained by training the model on these two datasets will give us insight into the comparative effectiveness of having either more defective objects or more modalities during the training phase, given any of the single modalities in the evaluation phase. \subsection{Study 3: Training on all the images and modalities, test on a single modality} \label{subsec:study3} In Study 2 we discarded three quarters of the collected images in order to compare the achieved results with the ones obtained in Study 1. Nevertheless, the proposed acquisition setup enables collecting 12 images per object with no additional effort required for acquiring or annotating them in comparison to a single-modality illumination system. The possibility of having a bigger training set to exploit raises expectations of modeling the task to be solved better. However, in the complex-object defect detection scenario, it is not a given that the additionally collected images in fact provide beneficial information for training a more effective model to be used in a single-modality scenario. In case they do, it means that the system is able to transfer the information collected from one light modality to a different modality, and that the system can better model the detection task even if it is only provided during training with modalities which are not available during evaluation. In this study, we aim to evaluate this hypothesis. In comparison to Study 1 and Study 2, in Study 3 we use the entire training set introduced in Sec. \ref{sec:dataset}, which is four times bigger, while the test set remains intact. \subsection{Study 4: Training and evaluation on multiple modalities} \label{subsec:study4} After having analyzed the impact of having a multi-modal lighting system available in the training phase only, in this Study our aim is to verify the effectiveness of having the same multi-modal lighting system also in the evaluation phase. It is important to highlight that the images of the same defective region collected under different illuminations share the same annotations and should produce the same output. Combining the generated outputs is therefore essential, and we expect it to positively impact the final algorithm performance, as has been shown in other scenarios \cite{aghaei2016multi}. We propose the following fusing procedure: let us define the set of 12 images of the same region collected varying the illumination conditions as $I = [i_1, i_2, \dots, i_{12}]$, let $B = [b_1, b_2, \dots, b_{M}]$ be the set of the $M$ defective bounding-boxes detected in all images $i_n \in I$, and let $C = [c_1, c_2, \dots, c_{M}]$ be the set of the corresponding detection confidences given by the detection algorithm. Our proposal is to apply the Non-Maximal Suppression (NMS) algorithm over $B$ and to replace the detections on every $i_n \in I$ with the output of the NMS algorithm. Given the NMS Intersection-over-Union (IoU) threshold $\theta$, the NMS algorithm operates as written in Algorithm \ref{alg:nms}. 
\begin{algorithm} \caption{Non-maximal Suppression} \hspace*{\algorithmicindent} \textbf{Input} $B, C, \theta$\\ \hspace*{\algorithmicindent} \textbf{Initialization} $D \leftarrow \left \{ \right \}$ \begin{algorithmic} \WHILE{$B \neq \emptyset$} \STATE $\kappa \leftarrow \argmax C$ \STATE $K \leftarrow b_{\kappa}$ \STATE $B \leftarrow B - K$ \STATE $C \leftarrow C - c_{\kappa}$ \STATE $D \leftarrow D \cup K$ \FOR{$b_\zeta \in B$} \IF{$IoU (K,b_\zeta) \geq \theta$} \STATE $B \leftarrow B - b_\zeta$ \STATE $C \leftarrow C - c_\zeta$ \ENDIF \ENDFOR \ENDWHILE \end{algorithmic} \hspace*{\algorithmicindent} \textbf{Output} $D, C.$ \label{alg:nms} \end{algorithm} NMS operates in three steps: firstly, it sorts all of the detected boxes based on their confidence scores from high to low; secondly, it selects the box which has the highest confidence score as a detection result; and finally, it discards the other candidate boxes whose IoU value with the selected box is beyond the threshold. NMS then repeats the selection and discarding steps on the remaining boxes until no box is left in the candidate set $B$. In Study 4 we will compare the performances of the system when the model is trained on the entire multi-modal training set and evaluated on the entire test set, with and without applying the proposed late-fusion technique. \section{Experimental setup, results and discussion} \label{sec:exp-results} In all the experiments discussed earlier in Sec.\ref{sec:method} for automatic defect detection, we used the YOLO-v3 end-to-end detection pipeline \cite{redmon2018yolov3}, given its fast inference time and its ability to detect small defects.\footnote{ We would like to mention that a comparative study of detection algorithms is out of the scope of this paper.} The YOLO-v3 detector was originally trained on the COCO dataset \cite{lin2014microsoft}; the weights of the network are then adapted to our task via transfer learning, updating all the layers of the network. Training has been done on an NVIDIA GeForce RTX 2080 Ti GPU, with learning-rate = 0.0001 and momentum = 0.9. As mentioned in Sec. \ref{sec:dataset}, the dataset is split into training, validation, and test sets. In the experiments where a subset of the data is required (Study 1, 2 and 3), that subset is selected within the training, validation, and test sets independently, and the splits do not vary among the experiments belonging to the same Study or shared among various Studies (for example, Test - C is common to Study 1, 2 and 3). This allows us to retain the comparability of the experiments from one Study to another. As in standard settings, the validation set is used to tune the parameters of the algorithm, and the final results are reported on the test set. Each detection bounding-box proposed by the model is compared with the ground-truth and classified as: \begin{itemize} \item True Positive (TP): the detection has IoU $\geq$ $threshold$ and it is therefore considered correct; \item False Positive (FP): the detection has IoU $<$ $threshold$ and it is therefore considered wrong; \item False Negative (FN): the ground-truth annotation has not been detected. \end{itemize} We report the results of all experiments using the standard metrics of single-object (defect) detection: Precision, Recall, F1-score, and Average Precision (AP). Among the aforementioned metrics, Precision, Recall, and, consequently, F1-score are reported after fixing the acceptance confidence threshold of the algorithm, set to $0.7$ in this work. 
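To make the fusion step concrete, the following is a minimal Python sketch of Algorithm \ref{alg:nms}; it assumes boxes in $(x_1,y_1,x_2,y_2)$ corner format and is meant as an illustration under these assumptions, not as our production code. \begin{verbatim}
def iou(a, b):
    """Intersection-over-Union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, confidences, theta):
    """Keep the most confident box, drop candidates overlapping it by
    IoU >= theta, and repeat until no candidate is left."""
    candidates = sorted(zip(boxes, confidences),
                        key=lambda bc: bc[1], reverse=True)
    kept = []
    while candidates:
        (best, conf), candidates = candidates[0], candidates[1:]
        kept.append((best, conf))
        candidates = [(b, c) for b, c in candidates if iou(best, b) < theta]
    return kept  # fused detections, replicated on all 12 images of I
\end{verbatim} In Study 4, the boxes kept by \texttt{nms} are then replicated on every image $i_n \in I$ of the region before computing the metrics that follow.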
Precision is defined as $\frac{TP}{TP+FP}$, Recall as $\frac{TP}{TP+FN}$, and F1-score as $2\,\frac{Precision \cdot Recall}{Precision + Recall}$. AP, on the other hand, summarizes the Precision-Recall curve as the weighted mean of the Precision achieved at different confidence thresholds, with the increase in Recall from the previous threshold used as the weight; it is calculated as $\sum_{t} (R_t - R_{t-1})P_t$, where $P_t$ and $R_t$ are the Precision and Recall at the $t$-th threshold. To compare the results in the next sections, we will mainly refer to AP, since, compared to the F1-score, AP considers the relation between Precision and Recall more globally \cite{boyd2012unachievable}. In this section, results are reported with a fixed $IoU=0.5$ threshold with respect to the ground-truth across all experiments. \subsection{Study 1: Training and evaluation on one single modality} The results of the experiments discussed in Sec. \ref{subsec:study1} are given in Table \ref{table,1st}. The most effective configuration according to the AP is the one activating all the lateral lights to produce dark-field illumination from four directions. This configuration outperforms the frontal light and the dark-field illuminations in the vertical and horizontal directions, and it will be referred to as the baseline for the following studies. \begin{table}[h!] \centering \caption{Results of Study 1} \begin{tabular}{|c|c|c c c c|} \hline Train & Test & Precision & Recall & F1-score & AP \\ \hline \hline C & C & 63.53 & 45.84 & 53.25 & 29.97 \\ U D & U D & 61.69 & 44.95 & 52.01 & 29.11 \\ L R & L R & 58.56 & 41.07 & 48.28 & 25.52 \\ U D L R & U D L R & 61.06 & 52.73 & 56.82 & \textbf{34.69} \\ \hline \end{tabular} \label{table,1st} \end{table} \subsection{Study 2: Training on multiple modalities, evaluation on a single modality} The results of the experiments discussed in Sec. \ref{subsec:study2} are reported in Table \ref{table,2nd}. Each training set has been generated 5 times with different random selections for each experiment, and the results are given in $mean \pm std$ format in the AP column. Precision, Recall, and F1-score values are given for the first trial only. The results indicate that, given the same number of images in the training set, maximizing the heterogeneity of the lighting modalities is more effective than acquiring more samples of defective objects with a limited set of illumination modalities. \begin{table}[h!] \centering \caption{Results of Study 2} \begin{tabular}{|c|c|c c c c|} \hline Train ($N=5$) & Test & Precision & Recall & F1-score & AP \\ \hline \hline \begin{tabular}[c]{@{}c@{}}All samples \\ 3 rand. modalities \end{tabular} & \multirow{2}{*}{C} & 66.28 & 38.00 & 48.30 & 25.74$\pm$2.75 \\ \cline{1-1} \cline{3-6} \begin{tabular}[c]{@{}c@{}}Quarter of samples\\ All modalities\end{tabular} & & 64.86 & 49.38 & 56.07 & 33.18$\pm$1.5 \\ \hline \hline \begin{tabular}[c]{@{}c@{}}All samples \\ 3 rand. modalities \end{tabular} & \multirow{2}{*}{U D} & 66.98 & 36.94 & 47.62 & 27.17$\pm$3.9 \\ \cline{1-1} \cline{3-6} \begin{tabular}[c]{@{}c@{}}Quarter of samples\\ All modalities\end{tabular} & & 64.74 & 51.00 & 57.06 & 33.29$\pm$1.3 \\ \hline \hline \begin{tabular}[c]{@{}c@{}}All samples \\ 3 rand. modalities \end{tabular} & \multirow{2}{*}{L R} & 68.48 & 36.90 & 47.96 & 27.45$\pm$2.9 \\ \cline{1-1} \cline{3-6} \begin{tabular}[c]{@{}c@{}}Quarter of samples\\ All modalities\end{tabular} & & 65.01 & 51.37 & 57.39 & \textbf{34.49$\pm$1.57} \\ \hline \hline \begin{tabular}[c]{@{}c@{}}All samples \\ 3 rand. 
modalities \end{tabular} & \multirow{2}{*}{U D L R} & 66.34 & 36.61 & 47.18 & 26.87$\pm$3.79 \\ \cline{1-1} \cline{3-6} \begin{tabular}[c]{@{}c@{}}Quarter of samples\\ All modalities\end{tabular} & & 63.53 & 48.32 & 54.89 & 31.84$\pm$0.65 \\ \hline \end{tabular} \label{table,2nd} \end{table} Comparing the results of Study 2 with those of Study 1, it is noticeable that multi-modal training is beneficial for most of the single lighting modalities in evaluation, and that the single-modal test performance is less dependent on the choice of the illumination modality if the algorithm is initially trained with multiple modalities. \subsection{Study 3: Training on all the images and modalities, test on a single modality} The results of Study 3 are listed in Table \ref{table,3rd}. Comparing these results with the ones of Study 2, using a bigger training set leads to a considerable performance boost (at least $\sim18\%$). These results are a clear demonstration that acquiring more images under multiple light conditions actually enriches the information provided to the model during training. Even when 3 modalities out of 4 are not used at evaluation time, their availability during training enables the system to better model the detection task, as has been shown in other scenarios \cite{garcia2018modality}. \begin{table}[h!] \centering \caption{Results of Study 3} \begin{tabular}{|c|c|c c c c|} \hline Train & Test & Precision & Recall & F1-score & AP \\ \hline \hline \multirow{4}{*}{\rotatebox[origin=c]{90}{All Train}} & C & 72.61 & 70.23 & 71.39 & 52.29 \\ & U D & 70.69 & 71.22 & 70.95 & 52.27 \\ & L R & 73.76 & 68.87 & 71.23 & \textbf{52.57} \\ & U D L R & 72.11 & 70.37 & 71.23 & 52.38 \\ \hline \end{tabular} \label{table,3rd} \end{table} Finally, it is worth noting that choosing any illumination modality to be used in production, after training the model with the multi-modal illumination system, would not bring significant variation in the detection performances. \subsection{Study 4: Training and evaluation on multiple modalities} \begin{table}[h!] \centering \caption{Results of Study 4} \begin{tabular}{|c|c|c c c c|} \hline Train & Test & Precision & Recall & F1-score & AP \\ \hline \hline All Train & All Test & 72.26 & 70.18 & 71.20 & 52.08 \\ \hline \begin{tabular}[c]{@{}c@{}}All Train\\ +\\ Late-fusion\end{tabular} & All Test & 58.23 & 90.56 & 70.89 & \textbf{60.84} \\ \hline \end{tabular} \label{table,4th} \end{table} The experiments up to this point focused on analyzing the effect of the presence of all or a selected number of modalities in training, while the evaluation of the algorithms was reported on single modalities. In Study 4, we aim to analyze whether it is possible to further improve the overall system performance by having the availability of all the modalities also in the evaluation phase. With this Study we can also assess the benefits which can be obtained with the deployment of our designed system in the operational scenario. The results of this study are reported in Table \ref{table,4th}. Comparing the results given in Table \ref{table,4th} with the ones in Table \ref{table,3rd}, having the availability of all the modalities in the evaluation phase leads to performance improvements only if the detection results obtained from each single illumination modality are properly combined, using the late-fusion technique proposed in Sec. \ref{subsec:study4}. Fig. 
\ref{fig:roc} shows the Precision and Recall values obtained at different detection confidence thresholds in $\{0.1, 0.2, \dots, 0.9\}$, with and without employing the late-fusion technique. It can be observed that applying late-fusion leads to a higher Area Under Curve (AUC), and thus a higher AP. Fig. \ref{fig:quality} shows three examples of successful defect detections employing late-fusion (on the right). The qualitatively better detections after applying late-fusion, with respect to the detections on the single images, can be appreciated in all three cases. \begin{figure} \hbox{\hspace{3em}\includegraphics[width=0.40\textwidth]{fig_roc.png}} \caption{Precision and Recall values \\ per confidence threshold.} \label{fig:roc} \end{figure} \begin{figure*} \centering {\includegraphics[width=0.9\textwidth]{fig_quality1.png}} \newline \centering {\includegraphics[width=0.9\textwidth]{fig_quality3.png}} \newline \centering {\includegraphics[width=0.9\textwidth]{fig_quality2.png}} \newline \caption{Three examples of successful qualitative results before and after applying the proposed late-fusion technique. Green bounding-boxes indicate the ground-truth, while orange indicates the final detection boundary. As can be seen, applying the late-fusion of the results leads to a more coherent and correct final detection of defects in all the images of the defective object. Note how differently the light configurations and exposures behave on different surface types and defects.} \label{fig:quality} \end{figure*} On the other hand, Fig. \ref{fig:failure} shows five examples of failure cases in five defective images, persisting even after applying the late-fusion technique. In these cases, our observation is that the algorithm fails to detect a defect if it is not fairly visible in any of the images taken under any of the lighting conditions \cite{kokoschka1986visual}. Besides, false positive detections in some cases occur due to the presence of visually similar-to-defect artifacts in the images. This can be considered to confirm the importance of the design of the acquisition hardware setup, and further of the annotation process, for obtaining desirable results from machine vision algorithms. In the cases where false positive detections are due to missing annotations, and thus noise in the labels, the proposed method can be used to provide support for the localization of defects to be fixed in the product revision departments of industries, or as an additional supervision method for further improvement of the training procedure. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{fig_quality_failure.png} \caption{Five examples of missing detections or false alarms when applying the proposed late-fusion. One can appreciate the visually similar-to-defect regions where false positives occur. In some of the images, the algorithm fails to detect the defect, possibly due to the invisibility of the defect in the images.} \label{fig:failure} \vspace{-1em} \end{figure*} \section{Conclusion} In this paper, we introduced our custom-designed acquisition setup for the inspection of a complex-object, and discussed its suitability for visualizing a wide range of surface defects thanks to the proposed illumination setup, which realizes diffused, dark-field, and front illumination techniques within four lighting configurations in one place. 
Further, we argued that deployment of the proposed setup might not be feasible in every inspection environment; we thus conducted four studies to explore the role of each of the illumination sources and to determine whether it is possible to exploit the potential of the proposed setup when it is deployed in the training phase only. The conclusions from the studies can be summarized as follows. In the case of deployment of the same single illumination modality in both the training and evaluation phases, the most effective modality is discovered to be activating all the lateral lights, resembling dark-field illumination from four directions. However, given the same number of images in the training set but with more modalities, the evaluation results on any of the single modalities are less dependent on the type of modality in the evaluation phase. Moreover, exploiting more samples in all the modalities in the training phase brings a large improvement when evaluating on single modalities, justifying the employment of our proposed lighting setup at least for training purposes. The introduction of all the modalities in the evaluation phase, though, does not lead to any substantial change with respect to a single-modality illumination, unless the proposed late-fusion technique is utilized, which is when the highest performance of the proposed pipeline is achieved. We believe our proposed acquisition setup and the pattern analysis of the illumination modalities can be a source of intuition for other researchers in the industrial inspection field for the automatic examination of objects with highly complex characteristics.
\section{Introduction} \label{sec:intro} With the launch of the idea that stable models \cite{GL88:iclp} of a logic program can be used to encode search problems, a new programming paradigm, called Answer Set Programming (ASP), was born \cite{MT99,Niemela99:amai,Lifschitz99:iclp}. Nowadays, the fact that normal logic programs can effectively encode NP-complete decision and function problems is exploited in applications in many different domains such as robotics \cite{ARSS15:lpnmr}, machine learning \cite{JGRNPC15:sc,BBBDDJLRDV15:tplp}, phylogenetic inference \cite{KOJS15:tplp,BEEMR07:jar}, product configuration \cite{TSNS03:iced}, decision support for the Space Shuttle \cite{NBGWB01:padl}, e-Tourism \cite{RDGIIML10:fi}, and knowledge management \cite{GILR09:lpnmr}. Tackling search problems beyond NP with ASP requires one to use more expressive logic programs than the normal ones. To this end, the class of disjunctive programs \cite{GL91:ngc} is the most prominent candidate. As shown by \citet{EG95:amai}, the main decision problems associated with disjunctive programs are $\Sigma^P_2$- and $\Pi^P_2$-complete, depending on the reasoning mode, i.e., \emph{credulous} vs.\ \emph{cautious} reasoning. But when it comes to applications, one encounters disjunctive encodings less frequently than encodings as normal logic programs. This is also witnessed by the benchmark problems submitted to ASP competitions \cite{CGMR16:aij}. Such a state of affairs is not due to a lack of application problems, since many complete problems from the second level of the PH are known. Neither is it due to a lack of implementations, since state-of-the-art ASP solvers such as \system{dlv} \cite{LPFEGPS06:acmtocl} and \system{clasp} \cite{DGGKKOS08:kr,GKKRS15:lpnmr} offer seamless support for disjunctive programs. An explanation for the imbalance identified above can be found in the essentials of disjunctive logic programming when formalizing problems from the second level of the PH. There are results \cite{BED94:amai} showing that such programs must involve \emph{head cycles}, i.e., cyclic positive dependencies established by the rules of the program that intertwine with the disjunctions in the program. Such dependencies may render disjunctive programs hard to understand and to maintain. Moreover, the existing generic encodings of complete problems from the second level of the PH as disjunctive programs are based on sophisticated \emph{saturation} \cite{EG95:amai} or \emph{meta-interpretation} \cite{GKS11:tplp} techniques, which may render an encoding inaccessible to a non-expert. \citet{EP06:tplp} identify the limitations of subprograms that act as (co-)NP-oracles and are embedded in disjunctive programs using the saturation technique. Summarizing our observations, the access to the underlying oracle is somewhat cumbersome and difficult to detect from a given disjunctive program. Interestingly, the oracle is more visible in native implementations of disjunctive logic programs \cite{JNSSY06:tocl,DGGKKOS08:kr} where two ASP solvers cooperate: one is responsible for \emph{generating} model candidates and the other for \emph{testing} the minimality of candidates. In such an architecture, a successful minimality test amounts to showing that a certain subprogram has no stable models. In other formalisms, the second level of the PH is reached differently. 
For instance, \emph{quantified Boolean formulas} (QBFs) \cite{SM73:stoc} record the interface between existentially and universally quantified subtheories, intuitively corresponding to the generating and testing programs mentioned above, explicitly in the quantifier prefix of the theory. From a modelling perspective, on the one hand, QBFs support the natural formalization of subproblems as subtheories, and the quantifications introduced for variables essentially identify the oracles involved. On the other hand, logic programs also have some advantages over QBFs. Most prominently, they allow for natural encodings of \emph{inductive definitions}, not to forget \emph{default negation}, \emph{aggregates}, and \emph{first-order features} available in logic programming. Rich high-level modelling languages such as ASP-Core-2 \cite{aspcore2} offer a wide variety of primitives that are not available for QBFs and require substantial elaboration if expressed as part of a QBF. In this paper, we present a novel logic programming--based modeling paradigm that combines the best features of ASP and QBF. We introduce the notion of a \emph{combined logic program} which explicitly integrates a normal logic program as an oracle to another program. The semantics of combined programs is formalized as \emph{stable-unstable models}, whose roots can be recognized in the earlier work of \citet{EP06:tplp}. Our design directly reflects the generate-and-test methodology discussed above, enabling one to encode problems up to the second level of the PH. Compared to disjunctive programs, our approach is thus closer to QBFs, and if the same design is applied recursively, our new formalism can be adapted to tackle problems arbitrarily high in the PH, in analogy to QBFs. We develop a proof-of-concept solver for our new formalism on top of the recently introduced solver \system{sat-to-sat} \cite{JTT16:aaai}, which is based on an architecture of two interacting, conflict-driven clause learning (CDCL) SAT solvers. The solver capable of searching for stable-unstable models is obtained using the methodology of \citet{BJT16:kr}, who automatically translate a second-order specification, combined with data that represents the involved ground programs in a reified form, into a \system{sat-to-sat} specification. The details of the solver architecture are hidden from the user, so that a user experience similar to that of native ASP solvers is obtained, where the user inputs two logic programs in a familiar syntax and the solver produces answer sets. The rest of this paper is structured as follows. In Section \ref{sec:related}, we discuss related work in more detail. We recall some basic notions of logic programs in Section \ref{sec:lp}. Afterwards, in Section \ref{sec:new}, we present our new logic programming methodology. We illustrate how it can be used to tackle some problems from the second level of the PH in Section \ref{sec:apps}. In Section \ref{sec:impl}, we show how our new formalism can be implemented on top of \system{sat-to-sat}. We show how our formalism naturally extends beyond the second level of the PH in Section \ref{sec:beyond} and conclude the paper in Section \ref{sec:concl}. \section{Related Work} \label{sec:related} A fundamental technique to encode $\Sigma^P_2$-complete problems as disjunctive programs is known as \emph{saturation}. The technique goes back to the $\Sigma^P_2$-completeness proof for the existence of stable models in the case of disjunctive programs \cite{EG95:amai}. 
Although saturation can be applied in a very systematic fashion to some programs of interest, \citet{EP06:tplp} identify the impossibility of having negation as a central limitation of oracles encoded by saturation, reducing the oracle call to a bare minimality check rather than showing that an oracle program has no stable models. This limitation can be partially circumvented using \emph{meta-interpretation} \cite{EP06:tplp,GKS11:tplp}, but these techniques do not necessarily decrease the \emph{conceptual complexity} of disjunctive programming from the user's perspective. The approach of \citet{EP06:tplp} is perhaps most closely related to our work. They present a transformation of two \emph{head-cycle free} (HCF) disjunctive logic programs $(\m{\mathcal{P}}_g,\m{\mathcal{P}}_t)$, where $\m{\mathcal{P}}_g$ and $\m{\mathcal{P}}_t$ form the \emph{generating} and \emph{testing} programs, into a disjunctive program $\m{\mathcal{P}}_{c}$. In our terminology, the \emph{stable-unstable} models of the \emph{combined program} $(\m{\mathcal{P}}_g,\m{\mathcal{P}}_t)$ are in one-to-one correspondence with the stable models of $\m{\mathcal{P}}_c$. Thus, their approach is based on essentially the same base definition. However, their transformation relies on meta-interpretation, and $\m{\mathcal{P}}_c$ is encoded as a disjunctive meta program to capture the intended semantics of $(\m{\mathcal{P}}_g, \m{\mathcal{P}}_t)$. A similar meta-encoding can be obtained using the approach of \citet{GKS11:tplp}, but the stable-unstable semantics is not explicit in their work. Since these meta programming approaches use disjunctive logic programs as the back-end formalism, they are inherently confined to the second level of the PH. Our approach, on the other hand, easily generalizes to the classes of the entire PH, as will be shown in Section~\ref{sec:beyond}. Moreover, when \citet{EP06:tplp} translate $(\m{\mathcal{P}}_g,\m{\mathcal{P}}_t)$ into a disjunctive logic program, the essential structural distinction between $\m{\mathcal{P}}_g$ and $\m{\mathcal{P}}_t$ is lost. Many disjunctive answer set solvers \cite{JNSSY06:tocl,DGGKKOS08:kr} try to recover this interface for their internal data structures. In our approach, the generate-and-test structure of the original problem is explicitly present in the input presented to the solver. While meta programming can be viewed as a front-end to disjunctive logic programming, the goal of our work is to foster the idea of generate-and-test programs as a basis for a logic programming methodology that, complexity-wise, covers the entire PH. In this paper, we present a proof-of-concept implementation based on the recursive \system{sat-to-sat} solver architecture \cite{JTT16:aaai,BJT16:bnp}. It is reasonable to expect that such an architecture can be realized in the future using native ASP solvers as building blocks, too, thus eliminating the need for second-order interpretation. Another formalization of a similar idea was worked out by \citet{EGV97:lpnmr}, based on the theory of generalized quantifiers \cite{Mostowski57:fm,Lindstrom66:th}. The semantics we propose for combined logic programs can be obtained as a special case of a (stratified) logic program with generalized quantifiers \cite{EGV97:lpnmr}. One important difference is that, in our approach, the interaction between the two programs is fixed: one program serves as a generator and the second as a tester. 
The approach of \citet{EGV97:lpnmr} is more general in the sense that it allows for other types of interaction as well. The price to pay for this generality is that the interaction between programs needs to be specified explicitly by users, resulting in a more error-prone modelling process. Moreover, in our approach, the input expected from the user is a set of source files in a familiar syntax (ASP-Core-2), requiring no syntactic extension for quantification. \section{Preliminaries: Logic Programming} \label{sec:lp} In this section, we recall some preliminaries from logic programming. The new semantics is only formulated for propositional programs but, in practice, the users are not expected to write propositional programs. Instead, they are supposed to use grounders, such as the state-of-the-art grounder \system{Gringo}, to transform first-order programs to propositional ones. A \emph{vocabulary} is a set of symbols, also called \emph{atoms}; vocabularies are denoted by $\m{\sigma},\tau$. A \emph{literal} is an atom or its negation. A \emph{logic program} $\m{\mathcal{P}}$ over vocabulary $\m{\sigma}$ is a set of \emph{rules} $r$ of form \begin{equation}\label{eq:rule} h_1\lor \dots \lor h_l \m{\leftarrow} a_1\land \dots \land a_n \land \lnot b_1\land \dots \land \lnot b_m. \end{equation} where $h_i$'s, $a_i$'s, and $b_i$'s are atoms in $\m{\sigma}$. We call $h_1\lor \dots \lor h_l$ the \emph{head} of $r$, denoted $\m{\mathit{head}}(r)$, and $a_1\land \dots \land a_n \land \lnot b_1\land \dots \land \lnot b_m$ the \emph{body} of $r$, denoted $\m{\mathit{body}}(r)$. A program is \emph{normal} (resp. \emph{positive}) if $l=1$ (resp. $m=0$) for all rules in \m{\mathcal{P}}. If $n=m=0$, we simply write $h_1\lor \dots \lor h_l$. An interpretation $\m{I}$ of a vocabulary \m{\sigma} is a subset of $\m{\sigma}$. An interpretation $I$ is a \emph{model} of a logic program \m{\mathcal{P}} if, for all rules $r$ in \m{\mathcal{P}}, whenever $\m{\mathit{body}}(r)$ is satisfied by $I$, so is $\m{\mathit{head}}(r)$. The \emph{reduct} of \m{\mathcal{P}} with respect to $I$, denoted $\m{\mathcal{P}}^I$, is the program that consists of rules $ h_1\lor \dots \lor h_l \m{\leftarrow} a_1\land \dots \land a_n $ for all rules of the form \eqref{eq:rule} in \m{\mathcal{P}} such that $b_i\not\in I$ for all $i$. An interpretation $I$ is a \emph{stable model} of \m{\mathcal{P}} if it is a $\subseteq$-minimal model of $\m{\mathcal{P}}^I$ \cite{GL88:iclp}. \emph{Parameterized logic programs} have been implicitly present in the literature for a long time, by assigning a meaning to \emph{intensional} databases. They have been made explicit in various forms \cite{GP96:ijseke,OJ06:ecai,DV07:lpnmr,DLTV12:iclp}. We briefly recall the basics. Assume that $\tau\subseteq \m{\sigma}$ and $\m{\mathcal{P}}$ is a logic program over $\m{\sigma}$ such that no atoms from $\tau$ occur in the head of a rule in $\m{\mathcal{P}}$. We call $I$ a \emph{parameterized stable model} of $\m{\mathcal{P}}$ with respect to \emph{parameters} $\tau$ if $I$ is a stable model of $\m{\mathcal{P}}\cup (I\cap \tau)$. Parameters $\tau$ are also known as \emph{external}, \emph{open}, or \emph{input atoms}. From time to time, we use syntactic extensions such as choice rules, constraints, and cardinality atoms in this paper. A \emph{cardinality atom} $m\leq \#\{l_1, \dots, l_n\} \leq k$ (with $l_1, \dots, l_n$ being literals and $m, k \in \mathbb{N}$) is satisfied by $I$ if $m \leq \#\{i\mid l_i\in I\} \leq k$. 
A \emph{choice rule} is a rule with a cardinality atom in the head. A \emph{constraint} is a rule with an empty head. An interpretation $I$ satisfies a constraint $c$ if it does not satisfy $\m{\mathit{body}}(c)$. These language constructs can all be translated to normal rules \cite{BJ13:lpnmr}. We also sometimes use the colon syntax $H : L$ for conditional literals as a way to succinctly specify a set of literals in the body of a rule or in a cardinality atom \cite{GHKLS15:tplp}. \section{Stable-Unstable Semantics} \label{sec:new} The design goal of our new formalism is to isolate the logic program that is acting as an oracle for another program. Thus, we would like to find a stable model $I$ for a program while showing the \emph{non-existence} of stable models for the oracle program given $I$. Following this intuition, we formalize the pair $(\m{\mathcal{P}}_g,\m{\mathcal{P}}_t)$ of a \emph{generating} program $\m{\mathcal{P}}_g$ and a \emph{testing} program $\m{\mathcal{P}}_t$ as follows.% \footnote{The terminology goes back to \system{GnT}, one of the early solvers developed for disjunctive programs \cite{JNSSY06:tocl}.} \begin{definition}[Combined logic program] \label{def:combined-program} A \emph{combined logic program} is a pair $(\m{\mathcal{P}}_g,\m{\mathcal{P}}_t)$ of normal logic programs $\m{\mathcal{P}}_g$ and $\m{\mathcal{P}}_t$ with vocabularies $\m{\sigma}_g$ and $\m{\sigma}_t$ such that $\m{\mathcal{P}}_g$ is parameterized by $\tau_g\subseteq\m{\sigma}_g$ and $\m{\mathcal{P}}_t$ is parameterized by $\m{\sigma}_g\cap \m{\sigma}_t$. \end{definition} The vocabulary of the program $(\m{\mathcal{P}}_g,\m{\mathcal{P}}_t)$ is $\m{\sigma}_g$; it consists of all symbols that are ``visible'' to the outside. Symbols in $\m{\sigma}_t\setminus \m{\sigma}_g$ are considered to be \emph{quantified internally}. The use of normal programs in the definition of combined logic programs, or \emph{combined programs} for short, is a design decision aiming at programs that are easily understandable (compared to, for instance, disjunctive programs with head-cycles). In principle, our theory also works when replacing normal programs with another class of programs. Our next objective is to define the semantics of combined programs, which should not be a surprise given the above intuitions. \begin{definition}[Stable-unstable model]\label{def:semantics} Given a combined program $(\m{\mathcal{P}}_g,\m{\mathcal{P}}_t)$ with vocabularies $\m{\sigma}_g$ and $\m{\sigma}_t$, a $\m{\sigma}_g$-interpretation $I$ is a \emph{stable-unstable model} of $(\m{\mathcal{P}}_g,\m{\mathcal{P}}_t)$ if the following two conditions hold: \begin{enumerate} \item $I$ is a parameterized stable model of $\m{\mathcal{P}}_g$ with respect to $\tau_g$ (the parameters of $\m{\mathcal{P}}_g$) and \item there is no parameterized stable model $J$ of $\m{\mathcal{P}}_t$ that coincides with $I$ on $\m{\sigma}_t\cap\m{\sigma}_g$ (i.e., such that $I\cap {\m{\sigma}_t}=J\cap {\m{\sigma}_g}$). \end{enumerate} \end{definition} The fact that a $\m{\sigma}_g$-interpretation $I$ is a stable-unstable model of $(\m{\mathcal{P}}_g,\m{\mathcal{P}}_t)$ is denoted $I\models_{su} (\m{\mathcal{P}}_g,\m{\mathcal{P}}_t)$. Note that the testing program stands for the \emph{non-existence} of stable models. If $\m{\sigma}_g\cap\m{\sigma}_t\neq\emptyset$, the programs truly interact. Otherwise, we call $(\m{\mathcal{P}}_g,\m{\mathcal{P}}_t)$ \emph{independent}. 
\begin{example}\label{ex:small} Let $\m{\mathcal{P}}_1=\{0\leq \#\{c\}\leq 1.~~ \m{\leftarrow} c \land d.~~ \m{\leftarrow} \lnot c\land b.\}$ and $\m{\mathcal{P}}_2=\{0\leq \#\{a\} \leq 1.~~ b \m{\leftarrow} a.\}$ where $\m{\mathcal{P}}_1$ has vocabulary $\m{\sigma}_1=\{c,b,d\}$ and parameters $\tau_1=\{b,d\}$, and $\m{\mathcal{P}}_2$ has vocabulary $\m{\sigma}_2=\{a,b,d\}$ and parameters $\tau_2=\{d\}$. The stable models of $\m{\mathcal{P}}_1$ and $\m{\mathcal{P}}_2$ are, respectively, $\{\{d\},\{b,c\},\{\},\{c\}\}$ and $\{\{d,a,b\}, \{d\}, \{a,b\}, \{\}\}$. Notice that $\tau_1=\m{\sigma}_1\cap\m{\sigma}_2$. The combined program $(\m{\mathcal{P}}_2,\m{\mathcal{P}}_1)$ has parameters $\tau_2$ and has only one stable-unstable model $\{d,a,b\}$ since all other stable models of $\m{\mathcal{P}}_2$ coincide with a stable model of $\m{\mathcal{P}}_1$ on $\tau_1$. \end{example} \begin{Theorem}\label{thm:complex} Deciding the existence of a stable-unstable model for a \emph{finite} combined program $(\m{\mathcal{P}}_g,\m{\mathcal{P}}_t)$ is $\Sigma^P_2$-\allowbreak{}complete in general, and $D^P$-\allowbreak{}complete for \emph{independent} combined programs. \end{Theorem} \begin{proof} The theorem is a straightforward consequence of known complexity results. The membership in $\Sigma^P_2$ follows directly from the definition of $\Sigma^P_2$ and the fact that deciding whether a normal logic program has a stable model is NP-complete \cite{MT99}. For hardness in the general case, we recall that \citet{JNSSY06:tocl} have shown that any disjunctive logic program $\m{\mathcal{P}}$ can be represented as a pair of normal programs $(\m{\mathcal{P}}_g,\m{\mathcal{P}}_t)$ whose stable-unstable models essentially capture the stable models of $\m{\mathcal{P}}$. In the case of an independent input $(\m{\mathcal{P}}_g,\m{\mathcal{P}}_t)$, the decision problem conjoins an NP-complete problem (showing that $\m{\mathcal{P}}_g$ has a stable model) and a co-NP-complete problem (showing that $\m{\mathcal{P}}_t$ has no stable models). Thus, membership in $D^P$ is immediate. The hardness is implied by Niemel\"a's \citeyear{Niemela99:amai} reduction that translates a set of clauses $C$ into a normal logic program $N(C)$, when applied to instances of the $D^P$-\allowbreak{}complete SAT-UNSAT problem. \end{proof} \begin{example}\label{ex:eaqbf} Any \EAQBF of the form $\exists\vec{x}\forall\vec{y}: \varphi$ with $\varphi$ a Boolean formula in DNF can be encoded as a combined program as follows. Let $\m{\mathcal{P}}_g$ be a logic program that expresses the choice of a truth value for every variable $x$ in $\vec{x}$ using two normal rules $x\m{\leftarrow} \lnot x'$ and $x'\m{\leftarrow} \lnot x$ where $x'$ is new. Also, let $\m{\mathcal{P}}_t$ be a logic program that similarly chooses truth values for every $y$ in $\vec{y}$ and contains for each conjunction $l_1\land\dots\land l_n$ in the DNF $\varphi$ a rule $\pr{sat} \m{\leftarrow} l_1\land \dots \land l_n$ where \pr{sat} is a new atom that is true if $\varphi$ is satisfied. Moreover, let $\m{\mathcal{P}}_t$ have the rule $\pr{fail} \m{\leftarrow} \lnot \pr{fail}\land \pr{sat}$. This rule enforces that \pr{sat} must be false in models of $\m{\mathcal{P}}_t$. As such, $\m{\mathcal{P}}_t$ corresponds to the sentence $\exists \vec{y}: \lnot \varphi$. Since $\lnot\exists\vec{y}:\lnot\varphi$ $\equiv$ $\forall\vec{y}:\varphi$, we thus find that $\exists\vec{x}\forall\vec{y}:\varphi$ is valid iff $(\m{\mathcal{P}}_g,\m{\mathcal{P}}_t)$ has a stable-unstable model. 
\end{example} It follows from Theorem \ref{thm:complex} that the theoretical expressiveness of combined programs equals that of \EAQBF{}s. There are, however, several reasons why one would prefer combined programs. Firstly, logic programs are equipped with rich, high-level, first-order modeling languages. Secondly, logic programs allow for natural encodings of \emph{inductive definitions}. These reasons are comparable to the advantages of logic programs on the first level of the hierarchy in contrast with pure SAT. For instance, the former can naturally express reachability in digraphs, while the latter requires a non-trivial encoding, which is non-linear in the size of the input graph. The advantage of combined programs over \EAQBF{}s is analogous when solving problems on the second level. The expressive power of inductive definitions and the high-level modeling language are available both in $\m{\mathcal{P}}_g$ and in $\m{\mathcal{P}}_t$. We exploit this when presenting examples in the next section. \section{Applications} \label{sec:apps} The goal of this section is to present some applications of stable-unstable programming. We will focus on \emph{modelling} aspects, i.e., how certain application problems can be represented. The programs to be presented are non-ground (and may also use some constructs present in ASP-Core-2, such as arithmetic) while the stable-unstable semantics was formulated for ground programs only. However, in practice, input programs are first grounded and thus covered by the propositional semantics. Hence, the user has all high-level primitives of ASP at his/her disposal. \subsection{Winning Strategies for Parity Games} \emph{Parity games}, to be detailed below, have been studied intensively in computer aided verification since they correspond to model checking problems in the $\mu$-calculus. We show how to represent parity game instances as combined programs. A parity game consists of a finite graph $G=(V; A, v_0, V_\exists, V_\forall, \Omega)$, where $V$ is a set of nodes, $A$ a set of arcs, $v_0 \in V$ an initial node, $V_\exists$ and $V_\forall$ partition $V$ into two subsets, respectively owned by an existential and a universal player, and $\Omega : V \to \mathbb{N}$ assigns a priority to each node. All nodes are assumed to have at least one outgoing arc. A \emph{play} in a parity game is an infinite path in $G$ starting from $v_0$. We denote such a play by a function $\pi:\mathbb{N} \to V$. A play $\pi$ is generated by setting $\pi(0) = v_0$ and, at each step $i$, asking the player who owns node $\pi(i)$ to choose a following node $\pi(i+1)$ such that $(\pi(i),\pi(i+1))\in A$. The existential player wins if $\min\{\Omega(v) \mid v \mbox{ appears infinitely often in }\pi\}$ is an even number. Otherwise, the universal player wins. A \emph{strategy} $\m{\sigma}_x$ for a player $x \in \{\exists, \forall\}$ is a function that takes a finite path $(v_0, v_1, \cdots, v_n)$ in $G$ with $v_n \in V_x$ and returns a node $v_{n+1}$ such that $(v_n, v_{n+1}) \in A$. A play $\pi$ conforms to $\m{\sigma}_x$ if, whenever $\pi(n) \in V_x$, it holds that $\m{\sigma}_x(\pi(0), \pi(1), \cdots, \pi(n)) = \pi(n+1)$. A strategy $\m{\sigma}_x$ is a \emph{winning strategy} for $x$ if $x$ wins all plays that conform to $\m{\sigma}_x$. A strategy $\m{\sigma}_x$ is called \emph{positional} if $\m{\sigma}_x(v_0, v_1, \cdots, v_n)$ only depends on $v_n$. 
Two important properties of parity games are that (i) exactly one player has a winning strategy and (ii) a player has a winning strategy if and only if it has a positional winning strategy \cite{EJ91:focs}. Using the above properties, we provide an intuitive axiomatization $(\m{\mathcal{P}}_g,\m{\mathcal{P}}_t)$ to capture winning strategies of the existential player in a given parity game. The generator program $\m{\mathcal{P}}_g$ is simple: it guesses a (positional) strategy (called $\pr{eStrategy}$) for player $\exists$. The test program is more involved. It guesses a positional strategy (called $\pr{uStrategy}$) for player $\forall$ and accepts $\pr{uStrategy}$ if it wins against $\pr{eStrategy}$. To perform the acceptance test, we define the set $\pr{inf}$ of nodes that appear infinitely often on the unique play that conforms to both strategies. We reject $\pr{uStrategy}$ if the minimum priority of nodes in $\pr{inf}$ is an even number. Hence, $(\m{\mathcal{P}}_g, \m{\mathcal{P}}_t)$ has a stable-unstable model if $\m{\mathcal{P}}_g$ can find a positional strategy $\m{\sigma}$ for the existential player such that $\m{\mathcal{P}}_t$ cannot find any positional strategy to defeat $\m{\sigma}$. The complete programs are given below. \begin{small} \begin{align*} \m{\mathcal{P}}_g &= \left\lbrace \begin{array}{l} 1\leq\#\{\pr{eStrategy}(X,Y) : \pr{arc}(X,Y)\}\leq 1 \leftarrow \pr{existNode}(X). \end{array} \right\rbrace\\ \m{\mathcal{P}}_t &= \left\lbrace \begin{array}{l} 1\leq \#\{\pr{uStrategy}(X,Y) : \pr{arc}(X,Y)\} \leq 1 \leftarrow \pr{univNode}(X).\\ \pr{next}(X,Y) \leftarrow \pr{eStrategy}(X,Y).\\ \pr{next}(X,Y) \leftarrow \pr{uStrategy}(X,Y).\\ \pr{r}(v_0). \qquad \pr{r}(Y) \leftarrow \pr{r}(X)\land \pr{next}(X,Y).\\ \pr{inf}(v_0) \leftarrow \pr{next}(X,v_0)\land \pr{r}(X).\\ \pr{inf}(X) \leftarrow \pr{next}(Y,X)\land \pr{next}(Z,X)\land \pr{r}(Y)\land \pr{r}(Z)\land Y \neq Z.\\ \pr{inf}(Y) \leftarrow \pr{inf}(X)\land \pr{next}(X,Y).\\ \pr{infNum}(N) \leftarrow \pr{omega}(X,N)\land \pr{inf}(X).\\ \pr{num}(N) \leftarrow \pr{omega}(X,N).\\ \pr{minNum}(N) \leftarrow \pr{num}(N)\land N \leq M : \pr{num}(M).\\ \pr{nextNum}(N,M) \leftarrow \pr{num}(N)\land\pr{num}(M)\land M \leq P : \pr{num}(P) : N < P.\\ \pr{nonMin}(M) \leftarrow \pr{infNum}(N) \land \pr{nextNum}(N,M).\\ \pr{nonMin}(M) \leftarrow \pr{nonMin}(N) \land \pr{nextNum}(N,M).\\ \pr{min}(N) \leftarrow \pr{infNum}(N) \land \lnot \pr{nonMin}(N).\\ \leftarrow \pr{min}(N)\land N \equiv 0~(\text{mod} 2). \end{array} \right\rbrace \end{align*} \end{small} The problem of deciding whether a parity game admits a winning strategy for the existential player has previously been encoded in difference logic and in SAT \cite{HKLN12:jcss}. We see two reasons why our encoding as a combined program can still be of interest. First, it is an intuitive encoding that corresponds directly to the problem definition. Second, to the best of our knowledge, it is the first encoding whose size is linear in the size of the graph, i.e., $\orderof{|V|+|A|}$. The existing difference logic encoding has size $\orderof{|V|^2+|A|}$ and the existing SAT encoding (which is developed on top of the difference logic encoding) has size $\orderof{|V|^2\times\log|V|+|A|}$ \cite{HKLN12:jcss}. \subsection{Conformant Planning} \emph{(Classical) planning} is the task of generating a plan (i.e., a sequence of actions) that realizes a certain goal given a complete description of the world.
\emph{Conformant planning} is the task of generating a plan that reaches a given goal given only a partial description of the world (certain facts about the initial state and/or actions' effects are unknown). In this section, we focus on \emph{deterministic} conformant planning problems: problems where the state of the world at any time is completely determined by the initial state and the actions taken. It is well-known that deciding if a conformant plan exists is a $\Sigma^P_2$-complete decision problem. To encode conformant planning problems in our formalism, we assume a vocabulary $\m{\sigma} = \m{\sigma}_a \cup \m{\sigma}_w \cup \m{\sigma}_i$ is given. Here, $\m{\sigma}_a$, $\m{\sigma}_w$ and $\m{\sigma}_i$ represent a sequence of actions, the state of the world over time, and the initial state of the world, respectively. We also assume that $\m{\sigma}_w$ contains an atom $\pr{goal}$ with the intended interpretation that the goal of the planning problem is reached at some time. Furthermore, we assume that $\m{\sigma}_i$ is partitioned into $\m{\sigma}_{unc}$ and $\m{\sigma}_{c}$, where $\m{\sigma}_{unc}$ are the atoms subject to uncertainty (to which our plan should be conformant). Let $\m{\mathcal{P}}_{ca}$ be a logic program containing a rule $ 0\leq \#\{\pr{a}\}\leq 1$ for each $\pr{a}\in \m{\sigma}_a$. Intuitively, the program $\m{\mathcal{P}}_{ca}$ \emph{guesses} a sequence of actions. Similarly, let us introduce a program $\m{\mathcal{P}}_{unc}$ containing a rule $ 0\leq \#\{\pr{u}\}\leq 1$ for each $\pr{u}\in \m{\sigma}_{unc}$. Furthermore, we assume the availability of a program $\m{\mathcal{P}}_{w}$ that defines the atoms in $\m{\sigma}_w$ (including $\pr{goal}$) deterministically in terms of $\m{\sigma}_a$ and $\m{\sigma}_i$. Also, let $\m{\mathcal{P}}_{pa}$ be a program that contains a rule $\pr{fail} \m{\leftarrow} \pr{a}\land \lnot \pr{p}$ for each $\pr{a}\in \m{\sigma}_a$, $\pr{p}\in \m{\sigma}_w$ such that $\pr{p}$ is a precondition of $\pr{a}$. With these building blocks, we can easily encode conformant planning as a combined program \[ \left(\m{\mathcal{P}}_{ca}, \m{\mathcal{P}}_w\cup \m{\mathcal{P}}_{pa} \cup \m{\mathcal{P}}_{unc}\cup \{\m{\leftarrow} \pr{goal}\land \lnot \pr{fail}\}\right). \] This program is parameterized by $\m{\sigma}_{c}$. To see that it encodes the conformant planning problem, we notice that stable-unstable models of this program are stable models of $\m{\mathcal{P}}_{ca}$, i.e., sequences of actions. Furthermore, models of the testing program are interpretations of the atoms in $\m{\sigma}_{unc}$ such that, in the corresponding world, either one of the preconditions of the actions is not satisfied or the goal is not reached. That is, models of the testing program amount to showing that the sequence of actions is \emph{not} a conformant plan. The stable-unstable semantics dictates that there can be no such counterexample. In the above, we described $\m{\mathcal{P}}_{w}$ and $\m{\mathcal{P}}_{pa}$ only informally since these components have already been worked out in the literature. More precisely, many classical planning encodings use exactly those components, combining them into a program of the form \[\m{\mathcal{P}}= \m{\mathcal{P}}_{ca}\cup \m{\mathcal{P}}_{w} \cup\m{\mathcal{P}}_{pa}\cup \{\m{\leftarrow} \lnot \pr{goal}. ~~\m{\leftarrow} \pr{fail}.\}. \] These components (or very similar ones) are used, for instance, by \citet{Lifschitz99:iclp}, \citet{LRS01:puui}, and by \citet{BJBDVD14:tplp}.
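As a minimal illustration (a toy sketch with hypothetical atoms: a single action $\pr{a}$ whose precondition $\pr{p}$ holds in every initial state, a single uncertain initial atom $\pr{u}$, and one time step), the building blocks could look as follows:
$$\m{\mathcal{P}}_{ca}=\{0\leq \#\{\pr{a}\}\leq 1.\} \quad \m{\mathcal{P}}_{unc}=\{0\leq \#\{\pr{u}\}\leq 1.\} \quad \m{\mathcal{P}}_{w}=\{\pr{p}.~~ \pr{goal} \m{\leftarrow} \pr{a}.\} \quad \m{\mathcal{P}}_{pa}=\{\pr{fail} \m{\leftarrow} \pr{a}\land \lnot\pr{p}.\}$$
Here the action sequence $\{\pr{a}\}$ is a stable-unstable model of the resulting combined program: no matter which value the testing program guesses for $\pr{u}$, the goal is reached and no precondition fails, so the testing program has no stable model that coincides with $\{\pr{a}\}$ on the shared atoms.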
The reuse of these components illustrates that our encoding of conformant planning stays very close to the existing encodings of classical planning problems in ASP. On the other hand, native conformant planning encodings in ASP are often based on saturation \cite{LRS01:puui}. After applying saturation, it is very hard to spot the original components. \subsection{% Points of No Return: A Generic Problem Combining Logic and Graphs} We now present a generic problem that connects graphs with logic. Let $G=(V,A,s)$ be a directed multi-graph: $V$ is a set of nodes, $s \in V$ is an initial node and $A$ is a set of arcs labeled with Boolean formulas. We use $a:\arc{u}{\phi}{v}$ to denote that $a$ is an arc from $u $ to $v $ labeled with $\phi$. There may be multiple arcs between $u$ and $v$ with different labels. We call a node $v \in V$ a \emph{point of no return} if (i) $G$ contains a path $\arc{s=v_0}{\phi_1}{v_1} \arc{}{\phi_2}{\ldots}\arc{}{\phi_n}{v_n=v}$ such that $\phi_1\land \ldots\land \phi_n$ is satisfiable and (ii) the preceding path in $G$ cannot be extended with a path $\arc{v=v_n}{\phi_{n+1}}{v_{n+1}} \arc{}{\phi_{n+2}}{\ldots} \arc{}{\phi_{n+m}}{v_{n+m}=s}$ such that $\phi_1\land \dots \land \phi_n \land \phi_{n+1}\land \ldots \land\phi_{n+m}$ is satisfiable. Thus, points of no return are nodes $v$ that can be reached from $s$ in a way that makes $s$ unreachable from $v$ (i.e., reaching $s$ back from $v$ would violate a constraint of the path from $s$ to $v$). \begin{proposition} Given a finite labeled graph $G=(V,A,s)$ as above and a node $v \in V$, it is a $\Sigma^P_2$-complete problem to decide if $v$ is a point of no return. \end{proposition} \begin{proof} Membership in $\Sigma^P_2$ is obvious. To show hardness, we present a reduction from \EAQBF. Consider an \EAQBF formula $\exists x_1\cdots\exists x_n \forall y_1\cdots\forall y_m\phi$. This formula is equivalent to \[\exists x_1 \cdots\exists x_n\neg\exists y_1\cdots\exists y_m\neg\phi.\] Now, construct a graph $G$ with nodes $v_0,v_1,$ \ldots, $v_n,v_{n+1},$ \ldots, $v_{n+m+1}$ and the following labeled arcs: $$\begin{array}{lcr} \arc{v_{i-1}}{x_i}{v_{i}} \mbox{ and } \arc{v_{i-1}}{\neg x_i}{v_{i}} & \quad & \mbox{(for $1\leq i\leq n$)},\\ \arc{v_{n+j}}{y_j}{v_{n+j+1}} \mbox{ and } \arc{v_{n+j}}{\neg y_j}{v_{n+j+1}} & & \mbox{(for $1\leq j\leq m$)},\\ \arc{v_n}{\neg\phi}{v_{n+1}} \mbox{ and }\arc{v_{n+m+1}}{\top}{v_0}. \end{array}$$ Observe that, setting $s=v_0$ and $v=v_n$, we have that $v$ is a point of no return if and only if $\exists x_1 \cdots\exists x_n \forall y_1\cdots\forall y_m\phi$ is valid. \end{proof} To model the problem of checking whether a node is a point of no return as a combined program, we assume that each arc is labeled by a \emph{literal} and that there is at most one arc between every two nodes. Our programs generalize easily. To allow for multiple arcs between two nodes, it suffices to introduce explicit identifiers for arcs. To allow more complex labeling formulas, we can introduce Tseitin predicates for subformulas and use standard meta-interpreter approaches to model the truth of such a formula; see for instance \cite[Section 3]{GKS11:tplp}. We use unary predicates $\pr{init}$ and $\pr{ponr}$ to respectively interpret the initial node $s$ and the point of no return $v$. Herbrand functions $\pr{pos}$ and $\pr{neg}$ map atoms (represented as constants) to literals. The predicate $\pr{arc}(X,Y,L)$ holds if there is an arc between nodes $X$ and $Y$ labeled with literal $L$.
In $\m{\mathcal{P}}_g$ (and $\m{\mathcal{P}}_t$), we use predicates $\pr{pick}_g$ (and $\pr{pick}_t$) such that $\pr{pick}_g(X,Y)$ ($\pr{pick}_t(X,Y)$) holds if the arc from $X$ to $Y$ is chosen in the path $v_0\to\dots\to v_n$ (the path $v_n\to\dots\to v_{n+m}$ respectively). The programs contain constraints ensuring, with the help of an additional predicate $\pr{r}_g$ ($\pr{r}_t$), that the selected arcs indeed form paths from $s$ to $v$ (respectively from $v$ to $s$) and that the formulas associated with the respective paths are satisfiable. Thus, $\m{\mathcal{P}}_g$ encodes that there exists a path from $s$ to $v$ and $\m{\mathcal{P}}_t$ encodes that this path can be extended to a cycle back to $s$. As such, the combined program indeed models that $v$ is a point of no return. The entire combined program can be found below. {\small \[\begin{array}{@{}c@{~}c@{}} \m{\mathcal{P}}_g & \m{\mathcal{P}}_t\\ =&=\\ \left\lbrace \begin{array}{l@{~}} 0\leq\# \{\pr{pick}_g(X,Y)\}\leq 1 \m{\leftarrow} \pr{arc}(X,Y,L).\\ \m{\leftarrow} \pr{pick}_g(X,Y)\land \pr{pick}_g(X',Y') \\ \quad \land\, \pr{arc}(X,Y,\pr{pos}(A)) \land \pr{arc}(X',Y',\pr{neg}(A)).\\ \pr{r}_g(X) \leftarrow \pr{init}(X).\\ \pr{r}_g(Y) \leftarrow \pr{r}_g(X)\land \pr{pick}_g(X,Y).\\ \m{\leftarrow} \lnot \pr{r}_g(X)\land \pr{pick}_g(X,Y).\\ \m{\leftarrow} \pr{ponr}(X) \land \lnot \pr{r}_g(X).\\ \m{\leftarrow} \pr{ponr}(X) \land \pr{pick}_g(X, Y).\\ \m{\leftarrow} \pr{pick}_g(X,Y)\land \pr{pick}_g(X,Z)\land Y\neq Z.\\ \m{\leftarrow} \pr{pick}_g(X,Y)\land \pr{pick}_g(Z,Y)\land X\neq Z.\\ \end{array} \right\rbrace & \left\lbrace \begin{array}{l@{~}} 0\leq\# \{\pr{pick}_t(X,Y)\}\leq 1 \m{\leftarrow} \pr{arc}(X,Y,L).\\ \pr{pick}(X,Y) \m{\leftarrow} \pr{pick}_t(X,Y).\\ \pr{pick}(X,Y)\m{\leftarrow} \pr{pick}_g(X,Y).\\ \m{\leftarrow} \pr{pick}(X,Y)\land \pr{pick}(X',Y') \land \\ \quad \pr{arc}(X,Y,\pr{pos}(A)) \land \pr{arc}(X',Y',\pr{neg}(A)).\\ \pr{r}_t(X) \leftarrow \pr{ponr}(X).\\ \pr{r}_t(Y) \leftarrow \pr{r}_t(X)\land \pr{pick}_t(X,Y).\\ \m{\leftarrow} \lnot \pr{r}_t(X)\land \pr{pick}_t(X,Y).\\ \m{\leftarrow} \pr{init}(X) \land \lnot \pr{r}_t(X).\\ \m{\leftarrow} \pr{init}(X) \land \pr{pick}_t(X, Y).\\ \m{\leftarrow} \pr{pick}_t(X,Y)\land \pr{pick}_t(X,Z)\land Y\neq Z.\\ \m{\leftarrow} \pr{pick}_t(X,Y)\land \pr{pick}_t(Z,Y)\land X\neq Z.\\ \end{array} \right\rbrace \end{array} \]} \section{Implementation} \label{sec:impl} Next, we present a prototype implementation of a solver for the stable-unstable semantics. \subsection{Preliminaries: \system{sat-to-sat}} We assume familiarity with the basics of second-order logic (SO). Our implementation is based on a recently introduced solver, called \system{sat-to-sat} \cite{JTT16:aaai}. The \system{sat-to-sat} architecture combines multiple SAT solvers to tackle problems from any level of the PH, essentially acting like a QBF solver \cite{BJT16:bnp}. We do not give details on the inner workings of \system{sat-to-sat}, but rather refer the reader to the original papers. What matters for the current paper is that \citet{BJT16:kr} presented a high-level (second-order) interface to \system{sat-to-sat}. The idea is that in order to obtain a solver for a new paradigm, it suffices to give a second-order theory that \emph{describes} the semantics of the formalism declaratively. Bogaerts et al.\ showed, e.g., how to obtain a solver for (disjunctive) logic programming using this idea.
Following \citet{BJT16:kr}, we describe a logic program by means of predicates $\pr{r}$, $\pr{a}$, $\pr{p}$, $\pr{h}$, $\pr{pb}$ and $\pr{nb}$ with intended interpretation that $\pr{r}(R)$ holds for all rules $R$, $\pr{a}(A)$ holds for all atoms $A$, $\pr{p}(A)$ holds for all parameters, $\pr{h}(R,H)$ means that $H$ is an atom in the head of rule $R$, $\pr{pb}(R,A)$ that $A$ is a positive literal in the body of $R$ and $\pr{nb}(R,B)$ that $B$ is the atom of a negative literal in the body of $R$. With this vocabulary, augmented with a predicate $\pr{i}$ with the intended meaning that $\pr{i}(A)$ holds for all atoms $A$ true in the candidate interpretation, we describe the parameterized stable semantics for disjunctive logic programs with the theory $\m{T}_{SM}$: \begin{small} \begin{equation*} \begin{array}{@{}l@{}} \left\{\begin{array}{l} \forall A: \pr{i}(A)\m{\Rightarrow} \pr{a}(A).\\ \forall R: \pr{r}(R)\m{\Rightarrow} \big((\forall A: \pr{pb}(R,A)\m{\Rightarrow} \pr{i}(A))\land (\forall B: \pr{nb}(R,B)\m{\Rightarrow} \lnot \pr{i}(B)) \m{\Rightarrow} \\ \qquad\qquad\quad~~\, \exists H: \pr{h}(R,H)\land \pr{i}(H)\big). \\ \lnot \exists \pr{i}':\\ \quad (\forall A: \pr{i}'(A)\m{\Rightarrow} \pr{i}(A))\land (\exists A: \pr{i}(A)\land \lnot \pr{i}'(A))\land (\forall A: \pr{p}(A) \m{\Rightarrow} (\pr{i}'(A)\m{\Leftrightarrow} \pr{i}(A)))\,\land \\ \quad \forall R: \pr{r}(R)\m{\Rightarrow} \big((\forall A: \pr{pb}(R,A)\m{\Rightarrow} \pr{i}'(A))\,\land \\ \qquad\qquad\qquad~~\, (\forall B: \pr{nb}(R,B)\m{\Rightarrow} \lnot \pr{i}(B)) \m{\Rightarrow} \exists H: \pr{h}(R,H)\land \pr{i}'(H)\big). \\ \end{array} \right\} \end{array} \end{equation*} \end{small} The first part of this theory expresses that $\pr{i}$ is interpreted as a model of $\m{\mathcal{P}}$: the constraint $\pr{i}(A)\m{\Rightarrow} \pr{a}(A)$ expresses that the interpretation is a subset of the vocabulary and the second constraint expresses that whenever the body of a rule is satisfied in $\pr{i}$, so is at least one of its head atoms. The constraint $\lnot \exists \pr{i}'\dots$ expresses that $\pr{i}$ is $\subseteq$-minimal: there cannot be an interpretation $\pr{i}'\subsetneq \pr{i}$ that agrees with $\pr{i}$ on the parameters and that is a model of the reduct of $\m{\mathcal{P}}$ with respect to $\pr{i}$. In other words, whenever $\pr{i}'$ satisfies all positive literals in the body of a rule $R$ and $\pr{i}$ satisfies all negative literals in the body of $R$, $\pr{i}'$ must also satisfy some atom in the head of $R$. \begin{theorem}[Theorem 4.1 of \cite{BJT16:kr}] \label{thm:sm}\label{thm:stable} Let $\m{\mathcal{P}}$ be a (disjunctive) logic program and $I$ an interpretation that interprets $\{\pr{a},\pr{r},\pr{p},\pr{pb},\pr{nb},$ $\pr{h}\}$ according to $\m{\mathcal{P}}$. Then, $I\models \m{T}_{SM}$ if and only if $\pr{i}^I$ is a parameterized stable model of $\m{\mathcal{P}}$. \end{theorem} From Theorem \ref{thm:sm}, it follows that feeding $\m{T}_{SM}$ to \system{sat-to-sat} results in a solver for disjunctive logic programs. The same theory also works for normal logic programs. \subsection{An Implementation on Top of \system{sat-to-sat}} In order to obtain a solver for our new paradigm in the spirit of \citet{BJT16:kr}, we need to provide a second-order specification of our semantics.
A first observation is that we can reuse the theory $\m{T}_{SM}$ from the previous section, both to enforce that $I$ is a stable model of $\m{\mathcal{P}}_g$ and that there exists no stable model of $\m{\mathcal{P}}_t$ that coincides with $I$ on the shared vocabulary. When translating the definition of stable-unstable models to second-order logic, we obtain the following theory \begin{equation*} \begin{array}{l} \m{T}_{SU} = \left\{\begin{array}{l} \m{T}_{SM}[\pr{r}/\pr{r}_g,\pr{a}/\pr{a}_g,\pr{p}/\pr{p}_g, \pr{h}/\pr{h}_g,\pr{pb}/\pr{pb}_g,\pr{nb}/\pr{nb}_g]. \\ \lnot \exists \pr{i}_t: \m{T}_{SM}[\pr{r}/\pr{r}_t,\pr{a}/\pr{a}_t,\pr{h}/\pr{h}_t, \pr{pb}/\pr{pb}_t,\pr{nb}/\pr{nb}_t,\pr{i}/\pr{i}_t, \pr{p}/\pr{p}_t]\\ \qquad\qquad \land \, (\forall A: \pr{a}_g(A)\land \pr{a}_t(A)\m{\Rightarrow} (\pr{i}(A)\m{\Leftrightarrow} \pr{i}_t(A))). \end{array}\right\}, \end{array} \end{equation*} where $\m{T}_{SM}[\pr{r}/\pr{r}_g]$ abbreviates a second-order theory obtained from $\m{T}_{SM}$ by replacing all free occurrences of $\pr{r}$ by $\pr{r}_g$. \begin{theorem} Let $(\m{\mathcal{P}}_g,\m{\mathcal{P}}_t)$ be a combined logic program and $I$ an interpretation that interprets $\{\pr{a}_g,\pr{r}_g,\pr{p}_g,$ $\pr{pb}_g,\pr{nb}_g,\pr{h}_g\}$ according to $\m{\mathcal{P}}_g$ and $\{\pr{a}_t, \pr{r}_t, \pr{p}_t, \pr{pb}_t, \pr{nb}_t, \pr{h}_t\}$ according to $\m{\mathcal{P}}_t$. Then, $I\models \m{T}_{SU}$ if and only if $\pr{i}^I$ is a stable-unstable model of $(\m{\mathcal{P}}_g,\m{\mathcal{P}}_t)$. \end{theorem} \begin{proof} Theorem \ref{thm:stable} ensures that the first sentence of this theory is equivalent to the condition of $\pr{i}^I$ being a stable model of $\m{\mathcal{P}}_g$. Also, the second sentence states that one cannot have an interpretation $\pr{i}_t$ that coincides with $\pr{i}^I$ on shared atoms (those that are in both $\pr{a}_g$ and $\pr{a}_t$) and is a stable model of $\m{\mathcal{P}}_t$. This is exactly the definition of the stable-unstable semantics. \end{proof} Providing an ASCII representation of $\m{T}_{SU}$ to the second-order interface of \system{sat-to-sat} immediately results in a solver that generates stable-unstable models of a combined logic program. Our implementation, which is available online% \footnote{% \url{http://research.ics.aalto.fi/software/sat/sat-to-sat/so2grounder.shtml}.}, consists only of the second-order theory above and some marshaling (to support ASP-Core-2 format and to exploit the symbol table to identify which atoms from different programs are actually the same). The overall workflow of our tool is as follows. We take, as input, three logic programs: $\m{\mathcal{P}}_g$ (a non-ground generate program), $\m{\mathcal{P}}_t$ (a non-ground test program) and $\m{\mathcal{P}}_i$ (an instance). We then use \system{Gringo} \cite{GST07:lpnmr} to ground $\m{\mathcal{P}}_g \cup \m{\mathcal{P}}_i$ and $\m{\mathcal{P}}_t \cup \m{\mathcal{P}}_i$. Next, we interpret $\pr{a}_x$, $\pr{r}_x$, $\pr{p}_x$, $\pr{pb}_x$, $\pr{nb}_x$ and $\pr{h}_x$ (for $x \in \{g,t\}$) according to the reified representation of the two resulting ground programs. Such an interpretation is fed to \system{sat-to-sat} along with the ASCII representation of $\m{T}_{SU}$; \system{sat-to-sat} uses these to compute stable-unstable models of the original combined program $(\m{\mathcal{P}}_g \cup \m{\mathcal{P}}_i, \m{\mathcal{P}}_t \cup \m{\mathcal{P}}_i)$.
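To make the reified input format concrete, consider (as a small hypothetical illustration, abstracting from the exact ASCII syntax) the program $\{b \m{\leftarrow} a.~~ a \m{\leftarrow} \lnot c.\}$ with parameter $c$, its two rules named $r_1$ and $r_2$. Its reified representation interprets the input predicates by the facts
$$\pr{a}(a),~\pr{a}(b),~\pr{a}(c),~\pr{p}(c),~\pr{r}(r_1),~\pr{h}(r_1,b),~\pr{pb}(r_1,a),~\pr{r}(r_2),~\pr{h}(r_2,a),~\pr{nb}(r_2,c),$$
after which \system{sat-to-sat} searches for an interpretation of $\pr{i}$ satisfying $\m{T}_{SM}$ (respectively $\m{T}_{SU}$).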
The implementation described above is proof-of-concept by nature and we plan to implement this technique natively on top of the \system{clasp} solver \cite{DGGKKOS08:kr,GKKRS15:lpnmr}. In spite of its prototypical nature, the current implementation is based on a state-of-the-art architecture shared by many QBF solvers and thus expected to perform reasonably well. This is especially the case when we go beyond the complexity class $\Sigma^P_2$ in the next section. \section{% Beyond $\Sigma^P_2$ with Normal Logic Programs} \label{sec:beyond} In this section, we show how the ideas of this paper generalize to capture the entire PH. To this end, the definition of a combined logic program is turned into a recursive definition of $k$-combined programs where the parameter $k\geq 1$ reflects the \emph{depth} of the combination. \begin{definition}[$k$-combined program] \begin{compactenum} \item For $k=1$, a \emph{$1$-combined program} is defined as a normal program $\m{\mathcal{P}}$ over a vocabulary $\m{\sigma}$, parameterized by a vocabulary $\tau\subseteq\m{\sigma}$. \item For $k>1$, a \emph{$k$-combined} program is a pair $(\m{\mathcal{P}},\m{\mathcal{C}})$ where $\m{\mathcal{P}}$ is a normal program over a vocabulary $\m{\sigma}$, parameterized by a vocabulary $\tau\subseteq\m{\sigma}$ and $\m{\mathcal{C}}$ is a $(k-1)$-combined program over a vocabulary $\m{\sigma}'$, parameterized by $\m{\sigma}\cap\m{\sigma}'$. \end{compactenum} \end{definition} Note that \emph{combined programs} (Definition \ref{def:combined-program}) directly correspond to $k$-combined programs with $k=2$. Similarly, the semantics of $k$-combined programs also directly generalizes Definition \ref{def:semantics}: \begin{definition}[Stable-unstable models for $k$-combined programs] A stable model $I$ of \m{\mathcal{P}} is also called a \emph{stable-unstable} model of a $1$-combined program $\m{\mathcal{P}}$. Let $(\m{\mathcal{P}},\m{\mathcal{C}})$ be a $k$-combined program with $k>1$ over a vocabulary $\m{\sigma}$, parameterized by $\tau\subseteq\m{\sigma}$, where $\m{\mathcal{C}}$ has vocabulary $\m{\sigma}'$. A $\m{\sigma}$-interpretation $I$ is a \emph{stable-unstable model} of $(\m{\mathcal{P}},\m{\mathcal{C}})$, if \begin{compactenum} \item $I$ is a parameterized stable model of $\m{\mathcal{P}}$ and \item there is no stable-unstable model $J$ of $\m{\mathcal{C}}$ such that $I\cap {\m{\sigma}'}=J\cap {\m{\sigma}}$. \end{compactenum} \end{definition} \begin{example}[Example \ref{ex:small} continued] Consider program $\m{\mathcal{P}}_3=\{e\m{\leftarrow} e.\ d\m{\leftarrow} e.\}$ over vocabulary $\m{\sigma}_3=\{d,e\}$. Program $\m{\mathcal{P}}_3$ has one stable model, namely $\emptyset$. This model is also a stable-unstable model of the $3$-combined program $(\m{\mathcal{P}}_3,(\m{\mathcal{P}}_2,\m{\mathcal{P}}_1))$ since it does not coincide with a stable-unstable model of $(\m{\mathcal{P}}_2,\m{\mathcal{P}}_1)$ on $\m{\sigma}_3\cap \m{\sigma}_2=\{d\}$. \end{example} The complexity of deciding whether a $k$-combined program $(\m{\mathcal{P}},\m{\mathcal{C}})$ has a stable-unstable model depends on the depth $k$ of the combination. \begin{theorem}\label{thm:complex:general} It is $\Sigma^P_k$-complete to decide if a finite $k$-combined program has a stable-unstable model. \end{theorem} \begin{proof}[Proof sketch.] The case $k=1$ follows from the results of \citet{MT99} and Theorem \ref{thm:complex} corresponds to $k=2$. 
Using either one as the base case, it can be proven inductively that the decision problem in question is NP-complete assuming the availability of an oracle for the class $\Sigma^P_{k-1}$, which in our constructions is effectively a $(k-1)$-combined program. Thus, steps in recursion depth match the levels of the PH (in analogy to the number of quantifier alternations in QBFs). \end{proof} \section{Conclusion} \label{sec:concl} In this paper, we propose \emph{combined logic programs} subject to the \emph{stable-unstable semantics} as an alternative paradigm to disjunctive logic programs for programming on the second level of the polynomial hierarchy. We deploy \emph{normal} logic programs as the base syntax for combined programs, but other equally complex classes can be exploited analogously. Our methodology obviates the need for the saturation and meta-interpretation techniques that have previously been used to encode oracles within disjunctive logic programs. The use of the new paradigm is illustrated in terms of application problems, and we also present a proof-of-concept implementation on top of the solver \system{sat-to-sat}. Moreover, we show how combined programs provide a gateway to programming on any level $k$ of the polynomial hierarchy with normal logic programs using the idea of recursive combination to depth $k$. In this sense, our formalism can be seen as a hybrid between QBFs and logic programs, combining desirable features from both.
\section{Introduction} Recently, Graph Neural Networks (GNNs) have attracted increasing attention due to their successful applications to various graph-structured data, such as social networks, chemical structures, and biological gene-protein networks~\cite{Zhou2018,Wu2019}. However, recent works~\cite{Sun2018,Xu2019a,Dai2018} have pointed out that GNNs are vulnerable to adversarial attacks, which can crash safety-critical GNN applications, such as autonomous driving and medical diagnosis~\cite{Wu2019}. To address this issue, numerous works~\cite{Sun2018,Chen2020survey,Jin2020survey} have been proposed to defend against adversarial attacks from the perspectives of data preprocessing~\cite{Wu2019Adversarial}, structure modification~\cite{Wang2019}, adversarial training~\cite{Feng2019}, adversarial detection~\cite{Zhang2019}, etc. However, our experimental study and the evaluation in the existing work~\cite{Jin2020survey} have shown that none of the existing defense methods is superior to the others under all attacks for all datasets with all perturbation sizes. This illustrates the limitation of the existing defense methods in terms of the robust generalization capability. Recently, \cite{Tsipras2019} revealed that the existence of adversarial attacks might originate from the utilization of weakly correlated features, whose influence can be reduced by keeping only the strongly correlated features. This phenomenon motivates us to adopt sparse representations, which are widely utilized in computational neuroscience~\cite{Ahmad2019}, to reduce the effect of the weakly correlated features.
Thus, in this work, we propose a spatio-temporal feature sparsification framework to improve the robustness of GNN models. The spatial feature sparsification (called TopK) in the proposed framework simply keeps the $k$ features with the largest values and sets all the other features to zero. In spirit, TopK is the same as the Dropout regularization technique~\cite{srivastava2014dropout}, except that Dropout randomly drops neurons, while TopK orders the neurons according to their output values and keeps only the neurons with the top $k$ values. Through experimental studies, we identify that TopK can improve the defense performance under four representative adversarial attacks on three typical benchmark datasets with varied perturbation sizes. However, the robustness brought by TopK comes at the expense of the generalization capability. Compared with Dropout, TopK loses the randomness, which sacrifices the generalization capability, as the randomness in Dropout can decompose a complex model into an ensemble of a large number of simpler models. To address this issue, temporal feature sparsification is introduced to alternate the non-zero features (also called active features) in each training epoch. Through the feature alternation, more features can participate in node representation in turn. Thus, the spatial sparsification together with the temporal sparsification (abbreviated as ST-Sparse) behaves similarly to the Dropout regularization technique. Therefore, ST-Sparse might achieve a generalization capability similar to that of Dropout. Moreover, through experimental evaluation, we identify that ST-Sparse can achieve robust generalization in that it can be integrated with the existing defense methods to further improve the model robustness, similar to the integration of Dropout into various deep learning models as a standard regularization technique. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{introduction.pdf} \caption{The illustration of spatio-temporal sparsification, where the vertical rectangular bar associated with each node represents the node's feature vector and the colored/white squares in the bar denote active/inactive features. In the temporal sparsification part, the horizontal red rectangle illustrates the on-and-off activation pattern of temporal sparsity.} \label{fig:ST-sparseGCN_illustration} \end{figure} Fig.~\ref{fig:ST-sparseGCN_illustration} illustrates the basic idea of the proposed ST-Sparse mechanism. The spatial sparsification transforms a dense hidden node vector of a GNN into a sparse high-dimensional vector, where only the top $k$ salient features are activated, as illustrated through the red rectangle at the top left part of Fig.~\ref{fig:ST-sparseGCN_illustration}. The temporal sparsification further sparsifies the active features along the time dimension during the GNN training process. More specifically, the duty cycle of each active feature dimension is sparse, so that no active feature will be intensively used. Note that the temporal sparsification is applied to the feature dimensions instead of the features of individual nodes because, on the one hand, the salient features of an individual node usually concentrate on only a few dimensions, and temporally sparsifying these features may significantly degrade the model performance; on the other hand, the overall distribution over all nodes can better reflect the temporal sparsity.
By balancing the duty cycle of activation among different dimensions, it is possible to avoid the intensive usage of certain dimensions, thereby increasing the model's expressive capability, which in turn increases the robustness of the model. The main contributions of this work are summarized as follows. \begin{itemize} \item From the perspective of spatio-temporal sparsity, we explore the construction of a robust feature space, in which the information propagation in GNNs is less vulnerable to adversarial attacks. \item We provide a novel ST-Sparse mechanism, which utilizes TopK to realize spatial sparsity in a high-dimensional vector space, and adopts attention to balance the activation duty cycles among different dimensions, so as to realize temporal sparsity in the feature space. \item To verify the effectiveness of ST-Sparse, we apply ST-Sparse to the graph convolution network (GCN)~\cite{Kipf2019} (denoted as ST-SparseGCN). Intensive experiments on three benchmark datasets show that ST-SparseGCN can significantly improve the robustness, robust generalization, and ordinary generalization of GCN in terms of classification accuracy. \end{itemize} \section{Related Works}\label{sec:related} {\bf Adversarial attacks on general graphs.} The basic idea of adversarial attacks on graphs is to change the graph topology or feature information to intentionally interfere with the classifier. \cite{Dai2018} studied a non-targeted evasion attack based on reinforcement learning. \cite{Zugner2018} proposed Nettack, a poisoning attack on GCN, which modifies the training data to misclassify a target node. Further, \cite{Zugner2019} used the meta-gradient to solve the min-max problem in attacks during training, and proposed an attack method that reduces the overall classification performance. Besides, \cite{Xu2019} simplified the discrete graph problem by convex relaxation, and thus proposed a gradient-based topology attack. {\bf Defense methods on general graphs.} The existing defense methods can be classified from the perspectives of data preprocessing~\cite{Wu2019Adversarial}, structure modification~\cite{Wang2019}, adversarial training~\cite{Feng2019}, the modification of the objective function~\cite{NIPS2019_9041}, adversarial detection~\cite{Zhang2019}, and hybrid defense~\cite{DBLP:journals/corr/abs-1903-05994}. The proposed ST-SparseGCN model can be regarded as a structure modification method, because it modifies the original GCN structure, as shown in Fig.~\ref{fig:ST-sparseGCN_framework}.
However, the proposed ST-Sparse defense method can also be integrated with other GCN defense models, such as GCN-Jaccard\cite{Wu2019Adversarial} and GCN-SVD\cite{Entezari2020}, which can be regarded as data preprocessing methods. Thus, the integrated models can be classified as hybrid defense models. Although dozens of defense methods on graphs have been proposed, none of them shows robust generalization, as none is superior to the others under all attacks for all datasets with all perturbation sizes~\cite{Jin2020survey}. {\bf Sparsity and Robustness.} The relation between sparsity and robustness has been revealed in the fields of image classification~\cite{Guo2018} and neuroscience~\cite{Ahmad2019}. From the perspective of image classification, \cite{Guo2018} clarified the inherent relation between sparsity and robustness through theoretical analysis and experimental evaluation. \cite{Cosentino2019,Tsipras2019} revealed that the existence of adversarial attacks might originate from the utilization of weakly correlated features, whose influence can be reduced by keeping only the strongly correlated features. This phenomenon also illustrates the necessity of sparsity for reducing the effect of the weakly correlated features. {\bf Difference from the Existing Methods.} Unlike the existing works on GNN robustness, most of which assume certain prior knowledge concerning the attack, we intend to construct a robust feature space that can resist attacks without any prior knowledge of them, which can be called ``black-box defense''. \cite{ZhengICML20} also considered the relation between model robustness and sparsity. However, its sparsity is defined on the graph structure, rather than on the hidden node representations as in our ST-Sparse method. It is worth noting that, since perturbations of both the graph structure and the node features are reflected in the hidden layers on which ST-Sparse operates, ST-Sparse does not have to handle the two perturbation types separately.
\section{Preliminaries}\label{sec:pre} \subsection{Notations}\label{sec:notations} Given an undirected graph $G=(V,E,X)$, where $V=\left\{v_1,v_2,...,v_n\right\}$ is a set of nodes with $|V|=n$, $E \subseteq V\times V$ is a set of edges that can be represented as an adjacency matrix $A\in {{\left\{ 0,1 \right\}}^{n\times n}}$, and $X=\left[ {{x}_{1}},{{x}_{2}},\ldots , {{x}_{n}} \right]^T \in {{\mathbb{R}}^{n \times d}}$ is a feature matrix with $x_i$ denoting the feature vector of node $v_i \in V$. $\text{C}=\left< {{c}_{1}},{{c}_{2}},\ldots ,{{c}_{n}} \right>$ is the class label vector with $c_i$ representing the label of node $v_i$. \subsection{Graph Convolution Networks} In this paper, we focus on GCNs for node classification. In particular, we consider the well-established work \cite{Kipf2019}. As a semi-supervised model, GCN can learn the hidden representation of each node. The hidden vectors of all nodes at layer $l+1$ can be represented recursively by the hidden vectors at layer $l$ as follows. \begin{equation}\label{eq:GCN} {{H}^{\left( l+1 \right)}}=\sigma \left( {{{\tilde{D}}}^{-\frac{1}{2}}}\tilde{A}{{{\tilde{D}}}^{-\frac{1}{2}}}{{H}^{\left( l \right)}}{{W}^{\left( l \right)}} \right) \end{equation} where ${\tilde{A}}={A}+{{I}_{n}}$, $W^{\left( l \right)}\in {\mathbb{R}}^{d^{(l)}\times d^{(l+1)}}$ denotes the learnable weight matrix at layer $l$, $\tilde{D}$ is the diagonal degree matrix with ${{\tilde{D}}_{ii}}=\mathop{\sum }_{j}{{\tilde{A}}_{ij}}$, and $\sigma\left( \cdot \right)$ is an activation function, such as ReLu. Initially, $H^{(0)}=X$. \section{The ST-SparseGCN Framework}\label{sec:model} In the following, we introduce the technical details of the proposed ST-SparseGCN. As shown in Fig.~\ref{fig:ST-sparseGCN_framework}, ST-Sparse can be integrated into the GCN model as an activation layer through replacing the ReLu activation function. The ST-Sparse layer transforms the dense feature $h_i$ of each node $v_i$ into an ST-Sparse feature $s_i$. This transformation can be further decomposed into a spatial sparsification process and a temporal sparsification process. \begin{figure*}[ht] \centering \includegraphics[width=0.7\linewidth]{ST-Sparse.pdf} \caption{The ST-SparseGCN framework.} \label{fig:ST-sparseGCN_framework} \end{figure*} \subsection{The High-dimensional Sparse Space}\label{sec:highDimension} First, we describe the mapping from the dense space to the high-dimensional space, which can be simply realized through replacing the parameter matrix $W^{(l)}$ in Eq. (\ref{eq:GCN}) with a high-dimensional version $W^{(l)}_h$, as shown in Eq. (\ref{eq:highGCN}).
\begin{equation}\label{eq:highGCN} {H^{\left( {l + 1} \right)} = \sigma \left( {{{\tilde D}^{ - \frac{1}{2}}}\tilde A{{\tilde D}^{ - \frac{1}{2}}}H^{\left( l \right)}W_h^{\left( l \right)}} \right)}, \end{equation} where $W_h^{\left( l \right)}\in {\mathbb{R}}^{d^{(l)}\times d_h}$. Compared to $d^{(l+1)}$ (the second dimension of $W^{(l)}$ in Eq. (\ref{eq:GCN})), $d_h$ (the second dimension of $W^{(l)}_h$) is much larger. In Section \ref{sec:para}, we illustrate the need for the high-dimensional space through experimental evaluation, which shows that a low dimension can significantly reduce the performance of the proposed ST-SparseGCN. Thus, the high-dimensional space is one of the key factors for the effectiveness of the proposed ST-SparseGCN. It is worth noting that $d_h$ is the same for all layers except the input layer, i.e., $\forall l\ge 1$, $H^{(l)}\in {\mathbb{R}}^{n\times d_h}$ and $H^{(0)}=X\in{\mathbb{R}}^{n\times d}$. Next, we formally introduce the definition of spatial sparsity as follows. \newtheorem{myDef}{Definition} \begin{myDef}\label{def:spatialSparsity} {Spatial Sparsity.} $\forall v_i\in V$, its high-dimensional feature vector $h_i=<h_{i1},h_{i2},\ldots ,h_{id_h}>$ satisfies the spatial sparsity if $||h_i||_0 \ll d_h$, where $||\cdot||_0$ denotes the $l_0$-norm, i.e., the number of non-zero elements. \end{myDef} Def.~\ref{def:spatialSparsity} implies that the non-zero elements of a spatially sparse vector should be much fewer than the vector dimension. In the following, we adopt $s_i$ to denote the sparse version of $h_i$. Also, $S=[s_1,s_2, \ldots ,s_n]^T$ represents the sparse matrix consisting of the sparse vectors of all nodes. Although spatial sparsity can ensure the feature sparsity of individual nodes, it cannot guarantee that individual features are sparse, i.e., that the number of nodes activated on any given feature is much less than the total number of nodes. For example, in Fig~\ref{fig:ST-sparseGCN_framework}, feature $j$ is not temporally sparse after spatial sparsification because too many nodes activate feature $j$. Through temporal sparsification, the non-zero elements associated with feature $j$ will be gradually reduced. This new type of sparsity can be illustrated through a simple calculation. If $\forall t \in \left[ {1,2, \ldots ,{\rm{T}}} \right]$, where $T$ is the total number of training epochs, and $\forall v_i\in V$, $||s^t_i||_0\le k$, then $||S^t||_0\le n\times k$, where $s^t_i$ and $S^t$ represent $s_i$ and $S$ at epoch $t$, respectively, as there are $n$ nodes in total. Thus, on average, each feature will be on duty (i.e., take non-zero values) for at most $\frac{n\times k}{d_h}$ nodes, because there are $d_h$ features in total. Since $k\ll d_h$ according to Def.~\ref{def:spatialSparsity}, it can be concluded that $\frac{n\times k}{d_h}\ll n$, where $n$ is actually the maximal number of non-zero elements for any feature at epoch $t$. Thus, from the feature's perspective, if the duty cycles (in terms of non-zero elements) of all features are balanced, each individual feature also shows the sparsity phenomenon. The underlying reason for the necessity of the duty-cycle balance lies in that, if a feature is on duty for too many nodes, this feature may show the Matthew effect: the more a feature is used at the current epoch, the more often it will be used in the following epochs. This Matthew effect can be exploited by an adversarial attacker through manipulating the heavily used features.
Thus, it is desirable to introduce temporal sparsity so that the duty cycles of features can be balanced along the training epochs. To formally define temporal sparsity, we introduce $s_{*j}^t$ to denote the usage of the $j$-th feature over all nodes, which is equal to the $j$-th column of $S^t$, i.e., $s_{*j}^t=<s_{1j}^t,s_{2j}^t,\ldots ,s_{nj}^t>$. Based on the above description, the temporal sparsity concerning the $j$-th feature can be formally defined as follows. \begin{myDef}\label{def:temporalSparsity}{Temporal Sparsity.} For $\forall j\in \{1,\cdots, d_h\}$, the vector $<s_{*j}^1,\ldots, s_{*j}^t, \ldots ,s_{*j}^T>$ satisfies the temporal sparsity if $$\lim_{T\to +\infty }\frac{\sum_{t=1}^T ||s_{*j}^t||_0}{T} = \frac{n\times k}{d_h} $$ \end{myDef} \subsection{TopK Based Spatial Sparsification} In our ST-SparseGCN model, the spatial sparsification is implemented through TopK, which simply selects the top $k_{\alpha}=\lfloor \alpha \cdot d_h\rfloor$ features for any $h_i\in H$, where $\alpha\in \left(0,1\right)$ is the spatial sparse ratio. TopK can be formalized as follows. \begin{equation}\label{eq:topK2} s_i=TopK({h_i,k_{\alpha}})\ \text{with}\ s_{ij}= \left\{ {\begin{array}{*{20}{c}} {h_{ij},\qquad j \in {z_i}},\\ {0,\qquad \;\;\; j \notin {z_i}.} \end{array}} \right. \end{equation} where $z_i$ represents the set of indices of the $k_{\alpha}$ largest features of $h_i$. In other words, TopK keeps the values of the top $k_{\alpha}$ features and sets all the other features to zero. Through replacing the activation function in Eq.(\ref{eq:highGCN}) with TopK, we can implement a spatially sparsified GCN, as shown in Eq.(\ref{eq:spatialSparseGCN}). \begin{equation}\label{eq:spatialSparseGCN} S^{(l + 1)} = TopK ({\tilde D}^{-\frac{1}{2}}{\tilde A}{\tilde D}^{-\frac{1}{2}}S^{(l)}W_h^{(l)}, k_{\alpha}), \end{equation} where $S^{(0)}=X$ initially. It is worth noting that the TopK function in Eq.(\ref{eq:spatialSparseGCN}) is a matrix version of the TopK function in Eq.(\ref{eq:topK2}). This matrix version selects the top $k_{\alpha}$ features for each node $v_i$ independently. The spatial sparse ratio $\alpha$ in the TopK function is a hyperparameter to be adjusted. Intuitively, on one hand, a small $\alpha$ implies fewer non-zero features, which might seriously compromise the generalization capability of the proposed model, because the number of possible vectors that can be represented in the high-dimensional space becomes smaller along with a smaller $k_{\alpha}$. On the other hand, a large $\alpha$ may compromise the model robustness. The appropriate value of $\alpha$ will be evaluated in Section \ref{sec:perturb}. {\bf TopK vs. ReLu.} In ST-SparseGCN, the ReLu function in GCN has been replaced by the TopK function. The effect of the replacement can be illustrated through Fig. \ref{fig:rt:a}, where the GCN coupled with TopK and the GCN with ReLu are compared in terms of the ratio of activated neurons during the training process. From Fig. \ref{fig:rt:a}, it can be observed that TopK greatly reduces the number of activated neurons. TopK and ReLu can also be compared through the function curves shown in Fig. \ref{fig:rt:b}, from which it can be observed that it is more difficult for a neuron to be activated through the TopK activation function.
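For illustration, the following is a minimal PyTorch-style sketch of the row-wise TopK activation of Eq.(\ref{eq:topK2}) and Eq.(\ref{eq:spatialSparseGCN}); it is our own simplification, and the function name and tensor shapes are assumptions rather than the released implementation.
\begin{small}
\begin{verbatim}
import torch

def topk_activation(h: torch.Tensor, alpha: float) -> torch.Tensor:
    """Keep the top-k entries of each row of h and zero out the rest.

    h:     dense hidden representations of shape (n_nodes, d_h).
    alpha: spatial sparse ratio in (0, 1); k = floor(alpha * d_h).
    """
    k = max(1, int(alpha * h.size(1)))
    _, idx = torch.topk(h, k, dim=1)   # indices of the k largest features per node
    mask = torch.zeros_like(h)
    mask.scatter_(1, idx, 1.0)         # 1 on selected features, 0 elsewhere
    return h * mask                    # gradients flow only through kept entries
\end{verbatim}
\end{small}
Since the selection enters the computation as a binary mask, gradients flow only through the $k_{\alpha}$ retained entries of each row, which is what makes the spatial sparsification trainable end to end.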
\begin{figure}[ht] \centering \subfigure[The comparison of activated neurons.]{ \includegraphics[width=0.47\linewidth]{relu_topk.pdf} \label{fig:rt:a} } \subfigure[The comparison of function curves.]{ \includegraphics[width=0.47\linewidth]{relu_topk1.pdf} \label{fig:rt:b} } \caption{The comparison of TopK and ReLu.} \label{fig:rt} \end{figure} {\bf The Cost of Robustness.} \cite{xiao2020enhancing} has proved that the computational complexity of TopK is asymptotically $O(N)$, which is the same as ReLu. However, in our experiments, it takes more time for TopK to converge, which might originate from the spatial sparsity: only a small number of neurons can be activated, which implies that the gradient update covers only a small number of neurons in each epoch. Nevertheless, the computing cost of TopK can be reduced through optimized sparse matrix computation. \subsection{Attention Based Temporal Sparsification} At first glance, it seems that temporal sparsity could be realized through applying the TopK function column-wise as follows: $TopK(s^t_{*j},\frac{n\times k}{d_h})$. However, this may reduce the number of non-zero features of certain nodes, which might compromise the generalization capability as discussed previously. Furthermore, it may cause sudden discontinuities in the model output.
To avoid the above issues, we propose an attention-based temporal sparsification mechanism, where at any epoch $t$, for any node $v_i$, its feature $j$ is assigned an attention value $b_{ij}^t$. This attention value is adaptively adjusted according to the historical sparsity information of feature $j$, namely, $||s^{t'}_{*j}||_0$ ($t'\in \{1,2,\cdots, t-1\}$). Then, the adjusted attention value $b_{ij}^t$ is used as a weight on the corresponding feature $j$ of node $v_i$ in the spatially sparsified GCN hidden representation (namely $S^{(l)}$, as defined in Eq.(\ref{eq:spatialSparseGCN})), so that a feature with a larger sparsity value has a reduced chance of being selected by the TopK function. Concretely, in each epoch $t$, the attention mechanism updates the attention value of each node $v_i$'s feature $j$ based on the integration of the historical sparsity $\hat{s}_{ij}^t$ and the current sparsity of the feature (i.e., $||s^t_{*j}||_0$). If the integrated sparsity of a feature is higher than the sparsities of the other features, its attention value $b^t_{ij}$ will be reduced accordingly. Formally, the integrated sparsity of any feature $j$ associated with node $v_i$ is updated as shown in Eq.(\ref{eq:sparsityIntegration}). \begin{equation} \label{eq:sparsityIntegration} {\hat{s}_{ij}^{t + 1} = \hat{s}_{ij}^t + \tau \times ||s^t_{*j}||_0}, \end{equation}where $\hat{s}_{ij}^t$ is the historical sparsity of node $v_i$'s feature $j$ before epoch $t$ and $\tau$ is a hyperparameter that controls the decay rate of the historical information. Initially, $\hat{s}_{ij}^0 = 0$. Based on the integrated sparsity $\hat{s}_{ij}^t$, the attention value $b^t_{ij}$ is updated through a smooth exponential function as shown in Eq.(\ref{eq:attentionUpdate}) \begin{equation}\label{eq:attentionUpdate} b_{ij}^t = \exp(-\gamma\hat{s}_{ij}^t), \end{equation}where $\gamma$ is a hyperparameter. From Eq.(\ref{eq:attentionUpdate}), it can be observed that the larger the integrated sparsity $\hat{s}_{ij}^t$, the smaller the updated attention value $b_{ij}^t$, because the smooth exponential function $\exp(-\gamma\,\cdot)$ is monotonically decreasing. From Eq.(\ref{eq:sparsityIntegration}), it can be observed that the historical sparsity $\hat{s}_{ij}^t$ is actually independent of the node $v_i$, and so is the attention value $b^t_{ij}$, according to Eq.(\ref{eq:attentionUpdate}). Thus, Eq.(\ref{eq:sparsityIntegration}) and Eq.(\ref{eq:attentionUpdate}) need to be computed only once for all nodes, from which we can obtain feature $j$'s historical sparsity and attention values for all nodes, namely vector $\hat{s}_{*j}^t$ and vector $b^t_{*j}$, respectively. From $b^t_{*j}$, $j\in\{1,\cdots, d_h\}$, we can construct the attention mask matrix ${\cal B}^t$ as follows. \begin{equation}\label{eq:attentionMatrix} {\cal B}^t = <b_{*1}^t, \ldots,b_{*j}^t,\ldots,b_{*d_h}^t>, \end{equation} Based on the attention mask matrix, the proposed ST-SparseGCN can be formalized through Eq. (\ref{eq:ST-SparseGcN}). \begin{equation}\label{eq:ST-SparseGcN} \begin{split} S^{(l+1)}=TopK(&\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}\left(\mathcal{B}_{t}^{(l)} \odot S^{(l)}\right) W_{h}^{(l)},k_{\alpha}). \end{split} \end{equation} Initially, $S^{(0)}=X$; moreover, since $\hat{s}_{ij}^{0}=0$, the initial attention mask $\mathcal{B}_{0}^{(l)}$ is an all-one matrix by Eq.(\ref{eq:attentionUpdate}). Eq. (\ref{eq:ST-SparseGcN}) can be described as follows. At epoch $t$, in the $(l+1)$-th layer, the sparse matrix $S^{(l)}$ is first multiplied element-wise by the attention mask matrix $\mathcal{B}_{t}^{(l)}$ to temporally sparsify the feature space, so as to mitigate the Matthew effect. The sparsified matrix is then fed as input into the GCN layer for information propagation among nodes. The GCN output is further spatially sparsified through the TopK activation function. In the end, $S^{(L)}$ is passed to a fully connected layer with the softmax activation function to predict the labels $Y$.
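To make the interplay of Eqs.(\ref{eq:sparsityIntegration})--(\ref{eq:ST-SparseGcN}) concrete, the following is a minimal PyTorch-style sketch of one ST-Sparse layer, reusing \texttt{topk\_activation} from above. It is a simplified sketch under our own naming assumptions: \texttt{A\_hat} stands for the precomputed normalized adjacency $\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}$, and the duty-cycle statistics are accumulated over the layer's input features.
\begin{small}
\begin{verbatim}
import torch

class STSparseLayer(torch.nn.Module):
    """One ST-Sparse layer: attention mask -> propagation -> TopK, as in Eq. (9)."""

    def __init__(self, d_in: int, d_h: int, alpha: float,
                 tau: float = 0.01, gamma: float = 0.001):
        super().__init__()
        self.W = torch.nn.Linear(d_in, d_h, bias=False)  # high-dimensional W_h
        self.alpha, self.tau, self.gamma = alpha, tau, gamma
        # Integrated historical sparsity of each input feature, shared by
        # all nodes (Eq. (6)); zero at epoch 0, so the mask starts all-ones.
        self.register_buffer("s_hat", torch.zeros(d_in))

    def forward(self, A_hat: torch.Tensor, S: torch.Tensor) -> torch.Tensor:
        b = torch.exp(-self.gamma * self.s_hat)          # attention values, Eq. (7)
        out = topk_activation(A_hat @ self.W(S * b),     # mask, propagate, TopK: Eq. (9)
                              self.alpha)
        if self.training:                                # duty-cycle statistics, Eq. (6)
            self.s_hat += self.tau * (S != 0).sum(dim=0).float()
        return out
\end{verbatim}
\end{small}
Consistently with the discussion above, the mask starts as all-ones because $\hat{s}^{0}=0$, and it is shared by all nodes since the duty-cycle statistics are computed per feature dimension.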
\section{Experimental Evaluation} \label{sec:experiment} \subsection{Experimental Settings} {\bf Baselines.} To evaluate the robustness and effectiveness of ST-SparseGCN, experiments are performed on the deep learning framework PyTorch~\cite{Steiner2019} and the GNN extension library PyG~\cite{Fey2019}. The proposed defense model (ST-SparseGCN) is compared with four baselines, three of which are representative graph defense methods, on the task of node-level semi-supervised classification, as follows. \begin{itemize} \item GCN\cite{Kipf2019}: GCN simplifies the graph convolution using only the first-order polynomial, i.e., the immediate neighborhood. By stacking multiple convolutional layers, GCN achieves state-of-the-art performance on clean datasets. \item GCN-Jaccard\cite{Wu2019Adversarial}: GCN-Jaccard utilizes the Jaccard similarity of features to prune perturbed graphs, based on the assumption that connected nodes usually show high feature similarity. \item GCN-SVD\cite{Entezari2020}: GCN-SVD proposes a low-rank representation method, which approximates the original node representation with a low-rank representation, so as to resist adversarial attacks. \item RGCN\cite{Zhu2019}: RGCN aims to defend against adversarial edges by adopting Gaussian distributions as the latent node representations in hidden layers to absorb the negative effects of adversarial edges. \end{itemize} We implement the above baseline methods referring to the implementation in DeepRobust\cite{li2020deeprobust}. {\bf Attacker models.} To validate the defensive ability of our proposed defender, we choose four representative GCN attacker models. \begin{itemize} \item DICE\cite{Waniek2018}: DICE randomly selects node pairs to flip their connectivity (i.e., removing existing edges or connecting non-adjacent nodes). \item Mettack\cite{Zugner2019}: Mettack aims at reducing the overall performance of GNNs via meta learning. We use the attack variant Meta-Self. \item PGD\cite{Xu2019}: PGD is a projected gradient descent topology attack on a pre-defined GNN. \item Min-Max\cite{Xu2019}: Min-Max is a min-max topology attack on a re-trainable GNN. The minimization is optimized using the PGD method, and the maximization aims to constrain the attack loss by retraining $W$. \end{itemize} {\bf Parameter Setting.} The following common parameters are the same for ST-SparseGCN and the baselines. The number of GCN layers is 2, and the number of training epochs is 200. The selected optimizer is Adam~\cite{Kingma2015} with a fixed learning rate of 0.01. The other hyperparameters of the baselines closely follow the benchmark setup, and the hyperparameters of the ST-SparseGCN model are tuned on the validation set to achieve the best robust performance. The parameter sensitivity of ST-SparseGCN will be analyzed in Section \ref{sec:para}. The final results of all experiments are obtained by averaging over 5 repeated runs. Our experiments are performed on an NVIDIA RTX 2080Ti GPU. {\bf Datasets.} ST-SparseGCN is evaluated on three well-known datasets: Cora, Citeseer and Polblogs~\cite{Sen2008}. In the citation networks Cora and Citeseer, nodes represent documents, edges represent citations, and the sparse bag-of-words feature vector associated with each node is the model input; in Polblogs, nodes represent political blogs and edges represent hyperlinks. Table \ref{tab:dataset} summarizes the statistics of the datasets. The same training, test, and validation splits of each dataset are used to fairly evaluate the performance of different models. \begin{table} \caption{Statistics of datasets} \centering \begin{tabular}{lllll} \toprule & Nodes & Edges & Features & Classes \\ \midrule Cora & 2708(1 graph) & 5429 & 1433 & 7 \\ Citeseer & 3327(1 graph) & 4732 & 3703 & 6 \\ Polblogs & 1490(1 graph) & 33430 & 1490 & 2 \\ \bottomrule \end{tabular} \label{tab:dataset} \end{table} \subsection{Classification Performance Evaluation} In order to properly measure the impact of the perturbation, we first evaluate the performance of ST-SparseGCN and all baselines on the clean datasets.
The average accuracy with standard deviation is reported in Table \ref{tab:clean}, which indicates that ST-SparseGCN achieves excellent performance on clean datasets. Compared to the four baselines, the superiority of ST-SparseGCN may come from the generalization capability of the temporal sparsification of the feature space. \begin{table}[ht] \caption{Classification accuracy (\%) on clean datasets} \centering \begin{tabular}{llll} \toprule & Cora & Citeseer & Polblogs \\ \midrule GCN & 81.6$\pm$0.6 & 70.7$\pm$0.8 & 85.9$\pm$0.9 \\ GCN-Jaccard & 78.9$\pm$0.8 & 71.4$\pm$0.7 & 50.4$\pm$0.9 \\ GCN-SVD & 68.4$\pm$0.8 & 59.8$\pm$0.9 & 80.4$\pm$0.4 \\ RGCN & 81.1$\pm$0.6 & 71.4$\pm$0.5 & 85.3$\pm$0.7 \\ ST-SparseGCN & \textbf{82.2$\pm$0.6} & \textbf{72.0$\pm$0.6} & \textbf{89.1$\pm$0.4} \\ \bottomrule \end{tabular} \label{tab:clean} \end{table} \begin{table*}[ht] \caption{Summary of \textbf{mDR}s (in percent) in classification accuracy, relative to GCN on the clean/original graph. Lower is better.} \centering \begin{tabular}{lllllllllllllll} \toprule Dataset & \multicolumn{4}{c}{Cora} & & \multicolumn{4}{c}{Citeseer} \\ \cmidrule{2-5} \cmidrule{7-10} Defender \textbf{/} Attacker & DICE & Mettack & MinMax & PGD & & DICE & Mettack & MinMax & PGD \\ \midrule GCN & 5.28 & 54.13 & 21.90 & 10.09 & & 2.53 & 64.39 & 22.82 & 5.74 \\ GCN\_Jaccard & 6.82 & 38.51 & 13.18 & 17.57 & & 1.12 & 53.08 & 12.16 & 4.13 \\ GCN\_SVD & 25.81 & 50.27 & 61.43 & 13.06 & & 18.57 & 16.61 & 52.12 & 19.14 \\ RGCN & 5.04 & 35.13 & 20.24 & 13.11 & & 1.46 & 61.56 & 11.22 & 10.65 \\ \midrule ST-SparseGCN & \textbf{4.33} & 48.42 & 17.44 & \textbf{7.21} & & 1.92 & 60.74 & 17.96 & 4.73 \\ ST-SparseGCN\_Jaccard & 6.23 & \textbf{29.53} & \textbf{11.05} & 8.53 & & \textbf{0.52} & 45.69 & \textbf{8.20} & \textbf{2.30} \\ ST-SparseGCN\_SVD & 24.87 & 47.02 & 59.13 & 13.40 & & 18.22 & \textbf{15.82} & 47.82 & 19.31 \\ \bottomrule \end{tabular} \label{tab:per} \end{table*} \subsection{Defense Performance Evaluation} \label{sec:perturb} In this section, we evaluate the overall defense performance of the proposed ST-SparseGCN by comparing it with various defense methods under different adversarial attackers and different perturbation sizes. {\bf Perturbation Size.} For each attacker, we increase the perturbation rate from 0 to 0.25 with a step size of 0.05. In general, the defense performance decreases as the perturbation size increases. In order to concisely present the experimental results, we define a new metric to evaluate the defense performance, termed the dropping rate (DR), as shown in Eq.~(\ref{eq:DRmetric}) and sketched in code below: \begin{equation}\label{eq:DRmetric} DR(Acc,\widehat{Acc})=\frac{\widehat{Acc} - Acc}{Acc}, \end{equation} where $\widehat{Acc}$ is the accuracy of GCN on the clean/original graph and $Acc$ is the accuracy of the evaluated defender on the perturbed graph. The dropping rate characterizes the defense performance by jointly measuring the performance degradation caused by the attacker models and the performance remedy provided by the defense methods. The smaller the dropping rate, the better the defense method. We use the mean dropping rate (mDR) to describe the overall defense performance across the different perturbation sizes. {\bf Hybrid Defense.} To illustrate that the proposed ST-SparseGCN defense method can be complementary to existing defense methods, we propose to integrate ST-SparseGCN with two existing defense models, namely GCN\_Jaccard and GCN\_SVD, which improve GCN robustness through data preprocessing. The two integrated defense models are called ST-SparseGCN\_Jaccard and ST-SparseGCN\_SVD, respectively.
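Before turning to the results, the dropping rate of Eq.~(\ref{eq:DRmetric}) can be made concrete with the short sketch below; the function names are ours and purely illustrative.

\begin{verbatim}
def dropping_rate(acc_clean_gcn, acc_defended):
    # DR as written in Eq. (eq:DRmetric): the gap to GCN's clean
    # accuracy, relative to the defender's accuracy under attack.
    return (acc_clean_gcn - acc_defended) / acc_defended

def mean_dropping_rate(acc_clean_gcn, accs_under_attack):
    # mDR: DR averaged over the perturbation rates 0.05, ..., 0.25.
    drs = [dropping_rate(acc_clean_gcn, a) for a in accs_under_attack]
    return sum(drs) / len(drs)
\end{verbatim}

For instance, \texttt{mean\_dropping\_rate(0.816, [0.78, 0.74, 0.70, 0.66, 0.62])} would return the mDR of a hypothetical defender on Cora.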
{\bf Experimental Results.} The experimental results on the Cora and Citeseer datasets are reported in Table~\ref{tab:per}. Due to space limitations, the experimental results on the Polblogs dataset are not included in the table, but are illustrated in Fig.~\ref{fig:polblogs} instead. From Table~\ref{tab:per}, we can make the following observations: (i) the proposed ST-SparseGCN defense model or its variants (ST-SparseGCN\_Jaccard and ST-SparseGCN\_SVD) achieve the best defense performance under various attackers on all datasets, as ST-SparseGCN constructs a robust feature space in each GCN layer; (ii) the hybrid defenders (ST-SparseGCN\_Jaccard and ST-SparseGCN\_SVD) improve defense performance compared with the corresponding baselines (namely GCN\_Jaccard and GCN\_SVD) alone in most cases, which implies that our proposed defender ST-SparseGCN is complementary to the existing defenders; indeed, ST-SparseGCN defends against adversarial attacks from the perspective of sparsification of the feature space, which is orthogonal to most existing defense methods; (iii) none of the existing non-hybrid graph defenders (including ST-SparseGCN alone) performs best under all attackers on all datasets. This phenomenon may originate from the fact that the success of adversarial attacks comes from various aspects of the GCN model. Thus, hybrid defense models may deserve further exploration. Fig.~\ref{fig:polblogs} summarizes the performance under the different attackers for varied perturbation sizes on the Polblogs dataset. The results show that ST-SparseGCN again consistently achieves better performance than all the baselines, which demonstrates the superiority of the proposed ST-SparseGCN model. The experimental results shown in Table~\ref{tab:per} and Fig.~\ref{fig:polblogs} illustrate that our defender improves defense performance under all attackers on all datasets. It is worth noting that ST-SparseGCN does not rely on any prior knowledge of a particular adversarial attack method. The advantage of ST-Sparse might lie in its construction of a robust feature space, which is effective against various adversarial attacks. \begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{polblogs.pdf} \caption{Results of different defenders under different attackers on the Polblogs dataset.} \label{fig:polblogs} \end{figure*} \subsection{ST-Sparse and Dropout} In this section, we compare the generalization performance and robustness of ST-Sparse and Dropout through experiments. Table~\ref{tab:dropout} demonstrates that both Dropout and ST-Sparse can improve the generalization ability of the model, with ST-Sparse performing even better. In terms of robustness, ST-Sparse performs better than Dropout in the face of an attacker. In addition, combining Dropout with ST-Sparse degrades the performance of the model. A possible explanation for this phenomenon is that the random deactivation performed by Dropout interferes with ST-Sparse's ability to preferentially select features.
\begin{table}[ht] \caption{Defense performance (in percent) in classification accuracy with Dropout and ST-Sparse.} \centering \begin{threeparttable} \begin{tabular}{lll} \toprule & GCN & ST-SparseGCN \\ \midrule Clean & 81.5$\pm$0.6 & 82.7$\pm$0.6 \\ +Dropout & 81.7$\pm$0.7 & 81.5$\pm$0.6 \\ +Attacker & 65.6$\pm$0.9 & 69.6$\pm$0.6 \\ +Dropout+Attacker & 66.9$\pm$0.7 & 68.3$\pm$0.8 \\ \bottomrule \end{tabular} \begin{tablenotes} \footnotesize \item[1] The dataset is Cora. The attacker is Mettack. The perturbation size is 0.05. \item[2] The results are averaged over five runs. \end{tablenotes} \end{threeparttable} \label{tab:dropout} \end{table} \subsection{Time Complexity} We conduct several experiments on the dataset--model pairs mentioned above and report the runtime of a whole training procedure of 200 epochs, obtained on a single NVIDIA RTX 2080 Ti (cf. Fig.~\ref{fig:complex}). Thanks to PyG's ability to process sparse data quickly, the time overhead is essentially the same across models. Among them, GCN\_Jaccard takes the most time, because its data preprocessing is very slow. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{timecomplex1.pdf} \caption{Runtime of one complete training procedure (200 epochs) for each model.} \label{fig:complex} \end{figure} \subsection{Ablation Study and Parameter Analysis} \label{sec:para} In this section, we evaluate the marginal effect of the temporal sparsity, the spatial sparsity, and the dimension $d_h$ on the accuracy and robustness of ST-SparseGCN. The performance on clean and perturbed datasets is shown separately. Due to space limitations, we only show the experimental results on the Cora dataset under Mettack with a perturbation size of 0.05. The experiments on the other datasets exhibit similar patterns and are included in the supplementary material. \begin{figure}[ht] \centering \subfigure[ST-sparsity]{ \includegraphics[width=0.45\linewidth]{k_para.pdf} \label{fig:para:a} } \subfigure[Dimensions]{ \includegraphics[width=0.45\linewidth]{dimen.pdf} \label{fig:para:b} } \caption{Results of parameter analysis} \label{fig:para} \end{figure} In the experiments, the extent of the spatial sparsity is controlled by the spatial sparse ratio $\alpha$. Fig.~\ref{fig:para:a} shows the influence of the temporal sparsity and of Mettack as the sparse ratio $\alpha$ increases from $0.02$ to $0.5$, where T-Sparsity and Perturbed denote the temporal sparsity and the perturbation from Mettack, respectively. From Fig.~\ref{fig:para:a}, by comparing the performance of the models with and without temporal sparsity, it can be observed that temporal sparsity effectively improves ST-SparseGCN's classification performance on both the clean graphs and the perturbed graphs. This illustrates the benefits of the temporal sparsity, which not only increases the model's generalization capability (seen in the performance improvement on the clean graphs), but also improves the model's defense performance (seen in the performance improvement on the perturbed graphs). It can also be inferred from Fig.~\ref{fig:para:a} that, when the spatial sparsity ratio $\alpha$ varies from a small value (0.02) to a relatively larger value (0.08), the models both with and without temporal sparsity show significant performance improvement, which illustrates the necessity of spatial sparsity. Moreover, when $\alpha$ is larger than 0.08, the accuracy of ST-SparseGCN remains basically unchanged.
However, a very small $\alpha$ degrades the performance, probably because the number of non-zero features is then insufficient to distinguish the different categories in the node classification task. Fig.~\ref{fig:para:b} illustrates the impact of the dimension $d_h$ on the performance. It can be observed that, on both the clean and the perturbed datasets, the performance drops drastically when $d_h$ is reduced to a small value. On the other hand, once $d_h$ increases to a certain level, the performance remains essentially stable. This illustrates that there exists an appropriate value for $d_h$. The results also illustrate that a high-dimensional feature space enables the GCN model to be more robust against the perturbations incurred by the attackers. \section{Conclusion}\label{sec:conclude} Although GNN models have emerged rapidly, they still suffer from the adversarial attack problem. Unlike current works, which defend against attacks in certain specific scenarios, this paper intends to address the attack problem universally. The proposed ST-Sparse mechanism is similar in spirit to the Dropout regularization technique, as it provides a general adversarial defense layer that can be readily integrated into numerous GNN variants. Meanwhile, ST-Sparse can also ensure both robust generalization and ordinary generalization. To evaluate ST-Sparse's effectiveness, we conducted intensive experiments. The experimental results show that, in the face of four representative attack methods on three representative datasets with different levels of perturbation, ST-SparseGCN outperforms three representative defense methods.
\section{Introduction} Since the seminal work of Suzuki and Yamashita~\cite{DBLP:journals/siamcomp/SuzukiY99}, much research on cooperative mobile robots has aimed at identifying the minimal assumptions (in terms of synchrony, sensing capabilities, environment, etc.) under which basic problems can be solved. A recent survey of the state of the art was proposed by Flocchini et al.~\cite{DBLP:series/lncs/Flocchini19}. Robots are modeled as mathematical points in the 2D Euclidean plane and independently execute their own instance of the same algorithm. In the model we consider, robots are anonymous (\emph{i.e.}, they are indistinguishable from each other), oblivious (\emph{i.e.}, they have no persistent memory of the past), and disoriented (\emph{i.e.}, they do not agree on a common coordinate system). The robots operate in Look-Compute-Move cycles. In each cycle, a robot ``Looks'' at its surroundings and obtains (in its own coordinate system) a snapshot containing the locations of all robots. Based on this visual information, the robot ``Computes'' a destination location (still in its own coordinate system), and then ``Moves'' towards the computed location. Since the robots are identical, they all follow the same deterministic algorithm. The algorithm is oblivious if the computed destination in each cycle depends only on the snapshot obtained in the current cycle (and not on previously stored snapshots). The snapshots obtained by the robots are not consistently oriented in any manner (that is, the robots' local coordinate systems share neither a common direction nor a common chirality\footnote{Chirality denotes the ability to distinguish left from right.}). The execution model significantly impacts the ability to solve collaborative tasks. Three different levels of synchronization have been commonly considered. The strongest model is the fully-synchronous (\FSYNC) model~\cite{DBLP:journals/siamcomp/SuzukiY99}, where each phase of each cycle is performed simultaneously by all robots. The semi-synchronous (\SSYNC) model~\cite{DBLP:journals/siamcomp/SuzukiY99} considers that time is discretized into rounds, and that in each round an arbitrary yet non-empty subset of the robots is active. The robots that are active in a particular round perform exactly one atomic \LOOK-\COMPUTE-\MOVE cycle in that round. The weakest model is the asynchronous (\ASYNC) model~\cite{DBLP:series/synthesis/2012Flocchini,DBLP:journals/tcs/FlocchiniPSW05}, which allows arbitrary delays between the \LOOK, \COMPUTE and \MOVE phases, and the movement itself may take an arbitrary amount of time. It is assumed that the scheduler (seen as an adversary) is fair in the sense that, in each execution, every robot is activated infinitely often. \subsection{Previous works and Motivations} An important shortcoming of the robot model introduced by Suzuki and Yamashita~\cite{DBLP:journals/siamcomp/SuzukiY99} with respect to real-world implementation of mobile robot algorithms is the assumption that both the vision sensors and the actuation motors are perfect. More specifically, the model assumes that robots have an infinite vision range, and can sense the position of other robots relative to theirs with infinite accuracy. Robots are also usually assumed to reach their target with infinite movement precision (with respect to the angle to the target).
Several attempts have been made to make the \OBLOT model more realistic, \emph{e.g.}, by limiting the range of sensors through the limited visibility model~\cite{DBLP:journals/trob/AndoOSY99,DBLP:conf/antsw/GordonEB08,DBLP:conf/antsw/GordonWB04}, by allowing the sensors to miss other robots~\cite{DBLP:conf/sirocco/HeribanT19}, by using inaccurate sensors~\cite{DBLP:journals/siamcomp/CohenP08,DBLP:conf/antsw/GordonEB08,DBLP:conf/antsw/GordonWB04,DBLP:journals/automatica/Martinez09,DBLP:journals/tcs/YamamotoIKIW12}, or by discarding the hypothesis that robots are transparent~\cite{DBLP:conf/cccg/LunaFPSV14,DBLP:journals/tcs/HonoratPT14}. However, many attempts are hindered by the increased complexity of manually proving algorithms in those more complex settings. For instance, to our knowledge, the consequences of error-prone vision have only been studied through very simple problems: \GATHERING and \CONVERGENCE~\cite{DBLP:journals/siamcomp/CohenP08,DBLP:conf/antsw/GordonEB08,DBLP:conf/antsw/GordonWB04,DBLP:journals/automatica/Martinez09,DBLP:journals/tcs/YamamotoIKIW12}. To allow more complex problems to be studied in more realistic settings, it appears necessary to favor a machine-assisted approach. Formal methods encompass a long-standing line of research that is meant to overcome errors of human origin. Unsurprisingly, this mechanized approach to protocol correctness was used in the context of mobile robots~\cite{DBLP:conf/srds/BonnetDPPT14,DBLP:conf/sss/DevismesLPRT12,DBLP:journals/dc/BerardLMPTT16,DBLP:conf/sss/AugerBCTU13,DBLP:conf/sss/MilletPST14,DBLP:journals/ipl/CourtieuRTU15,berard:hal-01238784,DBLP:conf/prima/RubinZMA15,DBLP:conf/fmcad/SangnierSPT17,DBLP:conf/icdcn/BalabonskiPRT18,DBLP:journals/mst/BalabonskiDRTU19,DBLP:conf/netys/BalabonskiCPRTU19,DBLP:journals/fmsd/SangnierSPT20,DBLP:conf/srds/DefagoHTW20}. When robots move freely in a continuous two-dimensional Euclidean space, to the best of our knowledge, the only formal framework available is Pactole\footnote{\url{http://pactole.lri.fr}}. Pactole relies on higher-order logic to certify impossibility results~\cite{DBLP:conf/sss/AugerBCTU13,DBLP:journals/ipl/CourtieuRTU15,DBLP:conf/icdcn/BalabonskiPRT18}, as well as the correctness of algorithms~\cite{DBLP:conf/wdag/CourtieuRTU16,DBLP:journals/mst/BalabonskiDRTU19} in the \FSYNC and \SSYNC models, possibly for an arbitrary number of robots (hence in a scalable manner). Pactole was recently extended by Balabonski~\emph{et al.}~\cite{DBLP:conf/netys/BalabonskiCPRTU19} to handle the \ASYNC model, thanks to its modular design. However, in its current form, Pactole lacks automation; that is, in order to prove a result formally, one still has to write the proof (which is then automatically verified), which requires expertise both in Coq (the language Pactole is based upon) and in the mathematical and logical arguments one should use to complete the proof. On the other hand, model checking and its derivatives (automatic program synthesis, parameterized model checking) hint at more automation, once a suitable model has been defined with the input language of the model checker. In particular, model checking proved useful to find bugs (usually in the \ASYNC setting)~\cite{DBLP:journals/dc/BerardLMPTT16,DBLP:conf/sofl/DoanBO16,DBLP:conf/opodis/Doan0017} and to formally check the correctness of published algorithms~\cite{DBLP:conf/sss/DevismesLPRT12,DBLP:journals/dc/BerardLMPTT16,DBLP:conf/prima/RubinZMA15,DBLP:conf/srds/DefagoHTW20}.
Automatic program synthesis~\cite{DBLP:conf/srds/BonnetDPPT14,DBLP:conf/sss/MilletPST14} was used to automatically obtain algorithms that are ``correct-by-design''. However, those approaches are limited to instances with few robots. Generalizing them to an arbitrary number of robots with similar models is doubtful, as Sangnier \emph{et al.}~\cite{DBLP:journals/fmsd/SangnierSPT20} proved that safety and reachability problems are undecidable in the parameterized case with default models. Another limitation of the above approaches is that they \emph{only} consider cases where mobile robots \emph{evolve in a \underline{discrete} space} (\emph{i.e.}, a graph). This limitation is due to the model used, which closely matches the original execution model by Suzuki and Yamashita~\cite{DBLP:journals/siamcomp/SuzukiY99}. As a computer can only model a finite set of locations, a continuous 2D Euclidean space cannot be expressed in this model. Thus, the only way to obtain automated proofs of correctness in the continuous space context through model checking is to use a more abstract model~\cite{DBLP:conf/wdag/DefagoHTW19,DBLP:conf/srds/DefagoHTW20}, which requires writing additional handwritten theorems to assess its relevance in the original model. Overall, using formal methods for complex algorithms in realistic settings requires a substantial effort that may be out of reach when one simply wants to assess the feasibility of an algorithmic design. Furthermore, these approaches currently only address whether the added constraints enable the construction of counter-examples for a given task, and, to the best of our knowledge, do not address the important issue of performance degradation, or, in the cases where counter-examples do appear, the likelihood of their appearance and their impact. In fact, an overwhelming majority of the research on mobile robotic swarms has focused on proving, under a given set of conditions, whether there exists a counter-example to a given solution proposal for a problem. On the other hand, the practical efficiency of a given algorithm (with respect to real-world criteria such as fuel consumption) was rarely studied by the Distributed Computing community, albeit being of paramount importance to the Robotics community~\cite{DBLP:conf/gecco/AroraMDB19,DBLP:journals/ijrr/YooFS16}. Fuel-constrained robots have been considered in the discrete graph context, for both exploration~\cite{DBLP:conf/arcs/DyniaKS06} and distributed package delivery~\cite{DBLP:conf/algosensors/Chalopin0MPW13}, but, to our knowledge, no study considered the two-dimensional Euclidean space model that was promoted by Suzuki and Yamashita~\cite{DBLP:journals/siamcomp/SuzukiY99}. A possible explanation for this situation is that the more complex the algorithm (or the system settings), the more difficult it becomes to rigorously find the worst possible executions. We investigate another approach: since our goal is to bridge the gap between theoretical mobile robots and actual robotics, we move one step towards robotics and use a very common tool: simulation. First, robot simulators, such as Gazebo\footnote{\texttt{http://gazebosim.org/}}, are industry-standard tools for designing physical robots. Then, simulating mobile robots is not a new idea, and has been tried since the very beginning of mobile robots research~\cite{DBLP:journals/trob/AndoOSY99}.
Our goal is to design and implement a practical simulator for networks of mobile robots that focuses on finding counter-examples and monitoring network behavior, rather than on proving algorithms or providing a visual representation. Our vision is that this tool is especially useful in the early stages of algorithm design to eliminate obviously wrong paths and to detect anomalies. It should not be seen as a replacement for formal tools, but as a replacement for researcher intuition when working on a mobile robot network model or algorithm. As such, the simulator should be easy to use, understand, and modify by any Distributed Computing researcher in order to include any new algorithm or model. It should also be capable of monitoring network behavior and of outputting quantitative data points to assess real-world performance according to a given set of metrics, as well as of enabling comparison with previously proven algorithms in a given setting. We first focus on the known limitations of this approach: we highlight the difficulty of encoding victory and defeat conditions for the computed executions and how it impacts our ability to reliably detect counter-examples, as well as the expected consequences of working in a discretized Euclidean space, such as the impossibility of distinguishing \CONVERGENCE from \GATHERING. \subsection{Our Contribution} In this paper, we design and implement a practical simulator for mobile robotic swarms evolving in a two-dimensional Euclidean space. To circumvent the obvious problem of an infinite number of initial positions, our simulation framework is based on the Monte Carlo method for choosing initial configurations~\cite{MonteCarlo49}. We first benchmark our simulation framework using a well-known problem in the domain: rendezvous. Rendezvous mandates that two robots gather in finite time at the same location, not known beforehand. There exist a number of rendezvous solutions for various settings, and our simulation framework enables a fair quantitative comparison. We choose the fuel metric (\emph{a.k.a.} total traveled distance) under various system conditions: \FSYNC, \SSYNC, and \ASYNC schedulers, with or without rigid motion. We then assess the impact of inaccurate visibility sensors on two milestone algorithms: the Center-of-Gravity convergence algorithm~\cite{DBLP:journals/siamcomp/SuzukiY99} for two robots, and the Geoleader election algorithm~\cite{DBLP:conf/sss/CanepaP07}. It turns out that their behavior is significantly impacted by even small inaccuracies. To address the shortcomings identified by our simulations in the literature, we design a new two-color, fuel-efficient convergence algorithm for the \ASYNC scheduler, and an improved leader election algorithm that is resilient to inaccurate vision. Both proposals are similarly benchmarked with our simulation framework. The rest of the paper is organized as follows. Section~\ref{chap:MonteCarlo} presents the core technicalities underlying our simulation framework, and its limitations, through the problems of \OBLOT \FSYNC \CONVERGENCE and \GEOLEADEL. Section~\ref{chap:performance} demonstrates how the framework can be used for the purpose of performance evaluation, while Section~\ref{chap:realistic} shows how realistic error models can be integrated into the entire evaluation process. Section~\ref{chap:improved} introduces two new algorithms, one for fuel-efficient convergence, and one for leader election with unreliable sensors. Finally, Section~\ref{sec:conclusion} provides concluding remarks.
\section{Monte-Carlo Simulation of Mobile Robots} \label{chap:MonteCarlo} \subsection{Overview of the Framework} Our simulation framework is written from scratch using Python 3, ensuring broad compatibility across execution platforms. Our design goal is to remain as close as possible to the theoretical model of Suzuki and Yamashita~\cite{DBLP:journals/siamcomp/SuzukiY99}, in order to maximize readability and usability by the mobile robot distributed computing community. Each mobile entity is thus encapsulated as an instance of the \texttt{Robot} class. In the case of the basic \OBLOT model, robots have the following properties: \begin{itemize} \item A unique \texttt{name}. \item \texttt{x} and \texttt{y} coordinates in the Euclidean plane. \item A \texttt{snapshot} list of \texttt{Robot}s that contains the visible \texttt{Robot}s. \item A \texttt{target}, which is a 2-tuple of the \texttt{x} and \texttt{y} coordinates of the target destination. \end{itemize} \noindent The \texttt{Robot} class also provides three methods: \begin{itemize} \item The \texttt{LOOK} method uses the network as an input. It creates a list of the visible \texttt{Robot}s in the network and assigns it to \texttt{snapshot}. \item The \texttt{COMPUTE} method uses \texttt{snapshot} to compute and assign \texttt{target}, according to the algorithm we want to evaluate. \item The \texttt{MOVE} method updates \texttt{x} and \texttt{y} according to \texttt{target}. \end{itemize} This is summarized in figure~\ref{fig:robotClass}. \begin{figure}[htb] \centering \includegraphics[width=0.35\linewidth]{Robot.png} \caption{Robot Class} \label{fig:robotClass} \end{figure} Because robots are anonymous, \texttt{name} cannot be used for computing purposes; it is simply a way for the scheduler to reliably monitor the robots in the network. Similarly, robots cannot use \texttt{x} and \texttt{y} directly, as they are disoriented. The simulation consists of two parts: an initializing sequence and a loop. The \emph{initializing sequence} creates a \texttt{network} list, which contains all robots, according to the simulation parameters. To circumvent the problem of the infinite number of possible initial positions, our simulation framework is based on the Monte-Carlo method for choosing initial configurations~\cite{MonteCarlo49}. So, unless otherwise specified, the initial location of each robot is chosen uniformly at random within the bounds of the type used to represent positions. Using the Monte-Carlo method allows us both to minimize biases in the initial parameters, and to arbitrarily increase the precision of the simulation by simply increasing the number of simulations. For each iteration of the \emph{main loop}, a scheduling function is executed once. In the case of \FSYNC, for each loop iteration, all robots in the network simultaneously perform a \texttt{LOOK}, then simultaneously perform a \texttt{COMPUTE}, and then simultaneously perform a \texttt{MOVE}. Using different schedulers, such as \SSYNC or \ASYNC, only requires changing the scheduling function: \SSYNC creates a non-empty list of robots to be activated for a whole cycle, and \ASYNC picks a single robot to be activated for a single phase. The loop terminates whenever a \emph{victory} condition holds, which confirms that the algorithm completed its intended task. In the case where an algorithm may fail, a \emph{defeat} condition can also be used. For practical reasons, the loop has a maximum number of iterations; however, reaching this maximum should not be interpreted as either a failure or a success.
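The skeleton below illustrates this structure in simplified form; it follows the description above but is only a sketch (the \texttt{algorithm} and \texttt{victory} callbacks and the sampling bounds are placeholders), not the full source of the framework.

\begin{verbatim}
import random

class Robot:
    def __init__(self, name, x, y):
        self.name = name          # used by the scheduler only
        self.x, self.y = x, y
        self.snapshot = []
        self.target = (x, y)

    def look(self, network):
        # snapshot of the positions of all (visible) robots
        self.snapshot = [(r.x, r.y) for r in network]

    def compute(self, algorithm):
        self.target = algorithm(self)

    def move(self):
        self.x, self.y = self.target

def fsync_round(network, algorithm):
    # FSYNC: all LOOKs, then all COMPUTEs, then all MOVEs
    for r in network: r.look(network)
    for r in network: r.compute(algorithm)
    for r in network: r.move()

def simulate(n, algorithm, victory, max_iter=10**4):
    # Monte-Carlo initialization of the network
    network = [Robot(i, random.uniform(-1.0, 1.0),
                        random.uniform(-1.0, 1.0)) for i in range(n)]
    for _ in range(max_iter):
        if victory(network):
            return True
        fsync_round(network, algorithm)
    return None   # inconclusive: neither success nor failure
\end{verbatim}

An \SSYNC round would instead draw a non-empty subset of \texttt{network} uniformly at random and run the three phases for that subset only, while an \ASYNC step would advance a single randomly chosen robot by one phase, as detailed in the next subsection.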
\subsection{Scheduling} Modeling the \FSYNC scheduler can trivially be done by performing all \LOOK operations, then all \COMPUTE operations, then all \MOVE operations. For the \ASYNC and \SSYNC schedulers, we rely on randomness to test as many executions as possible. To model the \SSYNC scheduler, for each time step, we choose a non-empty subset of the network uniformly at random and perform a full cycle. To model the \ASYNC scheduler, we choose one robot uniformly at random and perform its next operation\footnote{Note that this model does not explicitly include simultaneous operations: we consider that the output of two simultaneous events $E_1$ and $E_2$ can be either the output of $E_1$ then $E_2$, or the output of $E_2$ then $E_1$.}. In the case of the \ASYNC scheduler, we must also consider what happens if a robot performs a \LOOK operation while another robot is moving. The \OBLOT model usually considers that an adversary can choose the perceived location of the second robot to be anywhere between its initial position and its destination (on a straight line). Modeling this behavior could easily be done by choosing the perceived coordinates in the \LOOK operation uniformly at random between the location and the target of the perceived robot (on a straight line). However, the existing literature on the \ASYNC model shows that the most problematic scenarios appear when the outdated position perceived for a robot is its initial location. With our simulation framework, we also observed that always choosing the initial location when observing a given robot during its \MOVE phase yielded the most adversarial results; so, while our framework is able to simulate both perceptions, we assume this adversarial behavior in the sequel. For all schedulers, our simulation framework supports both the rigid and the non-rigid settings. The rigid setting mandates that a robot that selected a distinct target in the \COMPUTE phase always reaches it in the \MOVE phase. The non-rigid setting partially removes this condition: the robot may be stopped by the scheduler before it reaches the target, but not before it traverses a distance of at least $\delta$, for some $\delta>0$. \subsection{Simulation Conditions} Our framework uses Monte-Carlo simulation for both the initial conditions and the scheduling. This means we can perform an arbitrarily large number of simulations, which in turn induces an arbitrarily more precise simulation; therefore, any criterion on either time, number of iterations, or precision is equivalent. Unless specified otherwise, 4 simulation threads are run in parallel, for one hour, on a modern quad-core CPU, after which results are merged and analyzed. We use the PyPy3 JIT compiler instead of the CPython interpreter, for better performance. \subsection{Comparison with Existing Simulators} We found two noteworthy simulators for mobile robots: Sycamore and JBotSim. \emph{Sycamore} is a Java program focused explicitly on mobile robots. However, it appears to be far more complex to build, use, and modify than our proposal. Moreover, the latest version we could find seems to date back to 2016, and requires versions of Java that are no longer supported. \emph{JBotSim}\footnote{\url{https://jbotsim.io}} is a Java library for simulating distributed networks in general.
While it appears to be able to simulate \OBLOT robots, it is not designed to do so, and one has to dig into the intricacies of the simulator to emulate basic mobile robot settings. We also found a third Java-based simulator, named oblot-sim\footnote{\url{https://github.com/werner291/oblot-sim}}. We are, however, unsure of its provenance and design goals. All three simulators emphasize real-time visualization of executed algorithms through a complete graphical interface. Our proposal focuses on extremely simple quantitative simulation. In its current version, a complete instance of the simulator requires only five separate files, for a total of less than 30KB of code (the sources for JBotSim and Sycamore weigh 3MB and 4.8MB, respectively). We also believe that using Python instead of Java greatly improves portability and ease of understanding, which in turn allows researchers to more easily implement and test unusual settings. In short, our goal is not to visualize executions in real time, but to simulate as many executions as possible in order to process their outcome. \subsection{Limitations of the Simulation} While the approach described in the previous sections may seem sound and simple enough to work with, it faces two distinct problems. As stated previously, our objective with robot simulation is to reliably provide counter-examples whenever they may occur. This requires reliably detecting problematic executions, which is difficult for two reasons. First, success and defeat conditions for most mobile robot algorithms are written in a way that might not be directly usable in a computer simulation. Second, issues predictably arise due to the nature of discretized floating-point numbers, compared to the ``true'' real numbers used in mathematical models. \paragraph{Halting the Simulation: \emph{Victory} and \emph{Defeat} Conditions:} One of the goals of our simulation framework is to find counter-examples for a given algorithm and setting. To do so, we need to simulate the evolution of the network until one of two things happens: \begin{itemize} \item A sufficient condition has been met. This implies that the current execution is successful, and a new simulation with a different initial configuration should begin. This is called a \emph{victory condition}. \item A necessary condition has been violated. This implies that the current execution constitutes a counter-example. This is called a \emph{defeat condition}. \end{itemize} We illustrate the difficulty of defining and using such conditions in practice through the example of one of the most fundamental problems in the context of mobile robots: \GATHERING. The common victory condition for \GATHERING is the following, for two robots $r_1$ and $r_2$: \begin{condition}[Theoretical \GATHERING Victory] \label{cond_rdv} \GATHERING is achieved if and only if, for any pair of robots in the network, the distance between the two robots is eventually always zero. This can be written more formally as $\exists t_0 \in \mathbb{R}_{\ge 0} : \forall t_1 \ge t_0, \forall(r_1,r_2), |r_1r_2|_{t_1} = 0$. \end{condition} In the previous condition, $|r_1r_2|_{t}$ denotes the distance between $r_1$ and $r_2$ at time $t$ in the current execution. However, this particular condition would require the ability for the simulator to infinitely simulate the future of the network, which is obviously impossible.
Moreover, the matching defeat condition is unusable for similar reasons: \[\nexists t_0 \in \mathbb{R}_{\ge 0} : \forall t_1 \ge t_0, \forall(r_1,r_2), |r_1r_2|_{t_1} = 0\] \begin{center} or, equivalently, \end{center} \[\forall t_0 \in \mathbb{R}_{\ge 0} : \exists t_1 \ge t_0, \exists(r_1,r_2), |r_1r_2|_{t_1} \neq 0\] We instead define a more practical defeat condition: \begin{condition}[Practical \GATHERING Defeat]\ \label{defeat} $\exists (t_0,t_1) \in (\mathbb{R}_{\ge 0})^2 : t_1>t_0, inputs(t_0) = inputs(t_1), \exists t \in [t_0,t_1] : \exists (r_1,r_2), |r_1r_2|_{t} \neq 0$ \end{condition} Here, $inputs(t)$ is the set of all input parameters relevant to the algorithm. This is different from the configuration, which would contain \emph{all} parameters of the network at a given point of the execution. This input set is used as a practical way to detect cycles in the execution. For a deterministic algorithm, if all inputs of the algorithm are identical to a previously encountered set of inputs, then a cycle has been found. The input set we use must be chosen such that, for two sets $S_1$ and $S_2$, $S_1(t) = S_2(t) \implies \forall S_1(t+1), \exists S_2(t+1) : S_1(t+1) = S_2(t+1)$. In other words, regardless of the scheduling, two identical sets should not be able to generate different sets. \begin{theorem} For two robots executing a deterministic algorithm, if condition~\ref{defeat} is true, then condition~\ref{cond_rdv} is false. \end{theorem} \begin{proof} For a deterministic algorithm, if condition~\ref{defeat} is true, there exists a scheduling starting from the initial configuration that reaches $inputs(t_0)$ and $inputs(t_1)$. Because $inputs(t_0) = inputs(t_1)$, there exists a cycle containing non-gathered configurations. The adversary scheduler can then repeat this cycle infinitely, and condition~\ref{cond_rdv} is false. \end{proof} \begin{theorem} If the number of input sets is finite, then for two robots executing a deterministic algorithm, if condition~\ref{cond_rdv} is false, then condition~\ref{defeat} is true. \end{theorem} \begin{proof} Any scheduling is infinite. So, if the total number of input sets is finite, then every scheduling contains at least one cycle. Now, if condition~\ref{defeat} is false, then there are no non-gathered cycles, so only gathered cycles can be repeated, and condition~\ref{cond_rdv} is true. The claim follows by contraposition. \end{proof} One may naively want to use a similar reasoning to define a sufficient victory condition: \begin{condition}[Naive \GATHERING Victory] $\exists (t_0,t_1) \in (\mathbb{R}_{\ge 0})^2 : t_1>t_0, inputs(t_0) = inputs(t_1), \forall t \in [t_0,t_1], \forall(r_1,r_2), |r_1r_2|_{t} = 0$ \end{condition} However, this condition ignores the fact that the scheduler may be able to avoid repeating this cycle by carefully choosing the activation order of the robots. A proper condition that is usable regardless of the scheduler is the following: \begin{condition}[Practical \GATHERING Victory] \label{vict_rdv} $\forall(r_1,r_2), \exists t_0 \in \mathbb{R}_{\ge 0} : |r_1r_2|_{t_0} = 0 \land \forall \mathcal{S}, \exists t_1 > t_0 : inputs(t_0) = inputs(t_1), \forall t \in [t_0,t_1], |r_1r_2|_{t} = 0$, where $\mathcal{S}$ ranges over the possible schedulings. In other words, there exists a time after which all robots are stuck in gathered cycles. \end{condition} Analyzing configurations and finding cycles in the execution is not an issue for our simulator.
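In the simulator, this cycle detection boils down to remembering the input sets already encountered and flagging a repetition. The following simplified sketch checks a variant of condition~\ref{defeat} (the \texttt{input\_set} and \texttt{gathered} functions are problem-specific placeholders; the actual check must also handle the victory condition~\ref{vict_rdv}):

\begin{verbatim}
def check_defeat(history, network, input_set, gathered):
    # history: set of input sets already seen in a non-gathered state.
    # Returns True when the same input set is reached twice with a
    # non-gathered state, i.e. a non-gathered cycle is detected.
    if not gathered(network):
        key = input_set(network)   # hashable abstraction of inputs(t)
        if key in history:
            return True            # defeat: non-gathered cycle
        history.add(key)
    return False
\end{verbatim}

For instance, for the oblivious midpoint algorithm discussed below, the input set is empty, so \texttt{input\_set} returns a constant and any repeated non-gathered state immediately triggers the defeat condition.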
The main difficulty lies in our ability to properly model the configuration using the input set. If the set is too restrictive and omits relevant parameters, then we find cycles that do not actually exist. Similarly, a set that is not restrictive enough may hide actual cycles. This depends on both the robot model and the algorithm used to solve the problem. In the case of \RENDEZVOUS or \GATHERING for two robots, the standard algorithm~\cite{DBLP:journals/siamcomp/SuzukiY99} for the \FSYNC scheduler targets the midpoint between the two robots, and is described in algorithm~\ref{algo_rdv}. \begin{algorithm}[H] \caption{Basic \FSYNC \RENDEZVOUS} \label{algo_rdv} \begin{algorithmic} \STATE target[0] = (x + snapshot[0].x)/2 \STATE target[1] = (y + snapshot[0].y)/2 \end{algorithmic} \end{algorithm} In the Euclidean space, the number of configurations appears to be infinite. However, because robots are disoriented, the algorithm uses no information on distances or coordinate systems, so that all configurations are identical from its point of view. The input set is then actually empty. This implies that the algorithm succeeds if and only if the network is gathered after the first activation of both robots. Otherwise, the defeat condition is immediately true for rigid movement. For the sake of providing a second example, let us consider that robots are endowed with weak local multiplicity detection, meaning that they can distinguish a non-gathered configuration from a gathered configuration. This allows us to modify the initial algorithm into algorithm~\ref{algo_rdv2}. \begin{algorithm}[htb] \caption{\FSYNC \RENDEZVOUS with Multiplicity Detection} \label{algo_rdv2} \begin{algorithmic} \IF{$\neg gathered$} \STATE target[0] = (x + snapshot[0].x)/2 \STATE target[1] = (y + snapshot[0].y)/2 \ENDIF \end{algorithmic} \end{algorithm} In this case, the gathered state is a relevant input parameter, and should be included in the input set. Now, all gathered configurations are considered identical, and all non-gathered configurations are considered identical. This means that the robots must still gather after the first activation. However, while this was already considered a cycle with the empty input set, if the robots are now gathered, the input set is different and no cycle has yet been reached. The first cycle is reached after the second activation. If the robots remain gathered, then this is a gathered cycle and should not trigger the defeat condition. However, if for some reason the robots were to separate after the second activation, this would constitute a non-gathered cycle with the first input set, and the defeat condition would be triggered. Using this reasoning, we check our simulator against our two-color \ASYNC algorithm~\cite{DBLP:conf/icdcn/HeribanDT18} and the two-color \SSYNC algorithm from Viglietta~\cite{DBLP:conf/algosensors/Viglietta13}. For the Heriban two-color algorithm, we accurately find no counter-example, and all executions lead to the victory condition in \ASYNC, \SSYNC and \FSYNC. For the Viglietta two-color algorithm, we accurately find no counter-example and all executions lead to the victory condition in \SSYNC and \FSYNC, and we find counter-examples that trigger the defeat condition in \ASYNC. We perform a similar study for a weaker version of \GATHERING, called \CONVERGENCE. The common condition for \CONVERGENCE is the following: \begin{condition}[Theoretical \CONVERGENCE Victory]\label{conv} \CONVERGENCE is achieved if and only if, for any distance $\epsilon$ greater than zero, the distance between any pair of robots is eventually always smaller than $\epsilon$.
This can be written more formally as $\forall \epsilon \in \mathbb{R}_{> 0}, \exists t_0 \in \mathbb{R}_{\ge 0} : \forall t_1 \ge t_0, \forall(r_1,r_2), |r_1r_2|_{t_1} \le \epsilon$. \end{condition} Note that, as expected, \GATHERING implies \CONVERGENCE, but \CONVERGENCE does not imply \GATHERING. In this case, the distance between the two robots is a relevant parameter to check whether or not the problem is solved. However, since it does not change the behavior of the algorithm, it is still not part of the input set. We define the following defeat condition: \begin{condition}[Practical \CONVERGENCE Defeat]\ \label{def_conv} $\exists(r_1,r_2) : \exists (t_0,t_1) \in (\mathbb{R}_{\ge 0})^2 : t_1>t_0 \land inputs(t_0) = inputs(t_1) \land 0 < |r_1r_2|_{t_0} \le |r_1r_2|_{t_1}$ \end{condition} \begin{theorem} For a deterministic algorithm, if condition~\ref{def_conv} is true, then condition~\ref{conv} is false. \end{theorem} \begin{proof} Similarly to \GATHERING, this condition implies a cycle in which the distance does not decrease, so the adversary scheduler can repeat it infinitely and prevent \CONVERGENCE. \end{proof} This does \emph{not} imply that the distance between the two robots must always be strictly decreasing in the general case, as this would be neither a sufficient nor a necessary condition. Because $\epsilon$ can be infinitely small, we cannot choose the ``right'' $\epsilon$ to properly define a victory condition. \paragraph{The Consequences of the Discretized Euclidean Plane:} \label{sssec:NRN} While it is tempting to define a victory condition similar to that of \GATHERING, the question of $\epsilon$ remains. Floating-point numbers are obviously incapable of infinite precision. So, because any number greater than zero is a valid choice, if $\epsilon$ is smaller than the minimum positive number that can be represented in the chosen floating-point precision, it cannot be distinguished from a true zero. This implies that small enough distances between two robots cannot be distinguished from a gathered state. Hence, it is intrinsically impossible to distinguish \CONVERGENCE from actual \GATHERING. Let us modify algorithm~\ref{algo_rdv} so that both robots move towards the midpoint, but only traverse a distance of $\dfrac{|r_1r_2|}{2} - \dfrac{\delta}{2}$ instead of $\dfrac{|r_1r_2|}{2}$. In theory, this algorithm does not lead to \RENDEZVOUS, as the robots reach a distance of $\delta$ after their first activation. However, if $\delta$ is small enough, the precision of floating-point numbers is such that $\dfrac{|r_1r_2|}{2} - \dfrac{\delta}{2}$ and $\dfrac{|r_1r_2|}{2}$ appear identical, and the distance $|r_1r_2|$ appears to be zero. This is essentially a \CONVERGENCE algorithm that is fast enough to be mistaken for a \RENDEZVOUS algorithm. In practice, there is very little that can be done against this sort of behavior, and \uline{conditions for \GATHERING should not be considered reliable.} On the other hand, under different circumstances, the discrete nature of the simulation can instead lead theoretically good executions to fail in practice. Let us consider a network of two robots $r_1$ and $r_2$ such that $r_2$ does not move, and $r_1$ moves to the midpoint. This should trivially lead to \CONVERGENCE. Let us now assume that $r_1.y = r_2.y$, and that $r_1.x$ and $r_2.x$ are such that $r_2.x$ is the smallest float greater than $r_1.x$. This possibly leads to $\dfrac{r_1.x+r_2.x}{2} = r_1.x$, so that $r_1$ stops moving and the defeat condition for \CONVERGENCE is wrongly activated.
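This effect is easy to reproduce in IEEE 754 double precision, for instance with \texttt{math.nextafter} (available since Python 3.9):

\begin{verbatim}
import math

x1 = 1.0
x2 = math.nextafter(x1, math.inf)  # smallest float greater than x1
mid = (x1 + x2) / 2                # true midpoint is not representable
print(mid == x1)                   # True: the midpoint rounds to x1
\end{verbatim}

Here $x_1 + x_2 = 2 + 2^{-52}$ rounds to $2.0$ under round-to-nearest-even, so the computed midpoint equals $x_1$ and the moving robot stays put.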
We test this by setting $r_1.y = r_2.y = 0$, picking $r_1.x$ at random in $[0,1]$, and picking $r_2.x$ at random in $[2,3]$, so that $r_1.x < r_2.x$. In the first case, $r_1$ moves to the midpoint and $r_2$ does not move. This results in approximately 37.5\% of one million attempts wrongly failing \CONVERGENCE. In the second case, $r_2$ moves to the midpoint and $r_1$ does not move. This results in approximately 25.0\% of one million attempts wrongly failing \CONVERGENCE. This asymmetry may be explained by biases in the binary64 approximation. Regardless, this is a real, hard-to-predict problem with a non-negligible chance of occurring, and it requires careful analysis of the counter-examples found. Problems with limited float precision also appear when simulating \GEOLEADEL. \GEOLEADEL is successful if, given a set of robots, each with its own coordinate system, the robots can all deterministically agree on the same robot, called the \robstate{Geoleader}. \GEOLEADEL is known to be impossible in the general case~\cite{DBLP:journals/tcs/DieudonneP12} because of possible symmetries in the network. In practice, this impossibility is circumvented using randomized algorithms to break such symmetries. Let us consider the state-of-the-art algorithm~\ref{algCan3} by Canepa and Gradinariu Potop-Butucaru~\cite{DBLP:conf/sss/CanepaP07} for three robots. \begin{algorithm}[H] \caption{Original \LEADEL Algorithm by Canepa and Gradinariu Potop-Butucaru~\cite{DBLP:conf/sss/CanepaP07} for Three Robots} \label{algCan3} \begin{algorithmic} \STATE Compute the angles between the two other robots \IF{$my\_angle$ is the smallest} \STATE Become \robstate{Leader} \STATE Exit \ELSIF{$my\_angle$ is not the smallest, but the other two are identical} \STATE Become \robstate{Leader} \STATE Exit \ELSIF{All angles are identical} \STATE Perform a Bernoulli trial with a probability of winning of $p = \dfrac{1}{3}$ \IF{Trial won} \STATE Move perpendicular to the opposite side of the triangle, in the opposite direction \ENDIF \ENDIF \end{algorithmic} \end{algorithm} For this particular algorithm, there are three cases: \begin{enumerate} \item The common case, where one angle is greater than the two others. \item A rare case, where two angles are identical and the third one is smaller. \item The rarest case, where all angles are identical. In that case, a Bernoulli trial is required to degrade to the other cases. \end{enumerate} Let us assume a network of three robots, $[r_1,r_2,r_3]$, such that $r_1$ is placed at coordinates $(-0.5,0)$ and $r_2$ at $(0.5,0)$. We show where each case appears in figure~\ref{fig:Lead_theor}. The third case occurs if $r_3$ is at $(0,\pm \dfrac{\sqrt{3}}{2})$, which are noted as points $eq1$ and $eq2$. Positions of $r_3$ that lead to the second case are noted $iso1$, $iso2$, and $iso3$. However, it is \emph{not} possible, using floating-point numbers, to have $x$ such that $x^2 = 3$. It is then impossible, regardless of the quality of the simulation, to place $r_3$ on $eq1$ or $eq2$, despite this being possible in theory. Similarly, an infinitely large number of points mathematically located on the circular arcs of the second case cannot be represented properly using floating-point numbers. To test this, each robot is given a new property, 'Leader', which is a string containing the name of the \robstate{Leader} robot. We perform the simulation and display the results in figure~\ref{fig:Simu1}.
\begin{figure}[htb] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{Lead_theor.png} \captionsetup{justification=centering} \caption{\robstate{Leader} depending on the location of $r_3$. Red, green and blue represent $r_1$, $r_2$ and $r_3$, respectively.} \label{fig:Lead_theor} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{Map_simu_noerr.png} \captionsetup{justification=centering} \caption{Simulation of 3-robot \LEADEL with perfect vision sensors. No isosceles or equilateral point was found.} \label{fig:Simu1} \end{subfigure} \end{figure} As predicted, the fact that real numbers cannot be properly represented in our discrete, floating-point space prevents the simulator from finding the known counter-example in the case of 3-robot \LEADEL. Furthermore, the three circular arcs on which the second case occurs have a combined surface theoretically equal to zero; they are therefore statistically impossible to find using our Monte-Carlo simulation. However, it should be noted that, even in a world of perfect sensors, building an equilateral triangle would require placing the third robot with physically impossible precision. So, while this counter-example exists from a mathematical standpoint, it could never occur in a more realistic setting. Hence, when considering practical robots, this could be considered a minor issue. On the contrary, the use of a discretized Euclidean space could be viewed as a massive advantage over the regular continuous model, which makes the inherently unrealistic hypothesis that robots are able to store and process snapshots of infinite precision. In this approximated context, snapshots have a known maximum size, depending on the chosen precision for the coordinates of the other robots. So, in this context, storing a snapshot for a full cycle becomes a trivial matter, and using the algorithm \textbf{SyncSim} described by Das \emph{et al.}~\cite{DBLP:conf/icdcs/DasFPSY12,DBLP:journals/tcs/0001FPSY16} to simulate an \FSYNC scheduling under an \ASYNC scheduler becomes possible without additional unrealistic hypotheses. As a result, we believe designing algorithms that properly solve problems in the context of a discretized Euclidean space should be a priority, as it would allow mobile robots to only need to function under the \FSYNC scheduler, and would remove the unrealistic requirement of infinite precision. One such algorithm is shown in Section~\ref{chap:improved}. \section{Fuel Efficiency in the Usual Settings} \label{chap:performance} \noindent The overwhelming majority of mobile robots research has focused on proving, under a given set of conditions, whether there exists a counter-example to a given problem. On the other hand, the practical efficiency of a given algorithm (with respect to real-world criteria such as fuel consumption) was rarely studied by the distributed computing community, albeit called for by the robotics community~\cite{DBLP:conf/gecco/AroraMDB19,DBLP:journals/ijrr/YooFS16}. Fuel-constrained robots have been considered in the discrete graph context, for both exploration~\cite{DBLP:conf/arcs/DyniaKS06} and distributed package delivery~\cite{DBLP:conf/algosensors/Chalopin0MPW13}. However, to our knowledge, no study considered the two-dimensional Euclidean space model that was promoted by Suzuki and Yamashita~\cite{DBLP:journals/siamcomp/SuzukiY99}.
A possible explanation for this situation is that the more complex the algorithm (or the system setting), the more difficult it becomes to rigorously find the worst possible execution. \subsection[\textsf{Rendezvous} Algorithms]{\RENDEZVOUS Algorithms} \noindent We first quantify the maximum traveled distance and the average traveled distance for several known \RENDEZVOUS algorithms. We consider the \emph{Center Of Gravity} algorithm~\cite{DBLP:journals/siamcomp/SuzukiY99}, the two-color \ASYNC algorithm (\emph{Her2}) by Heriban et al.~\cite{DBLP:conf/icdcn/HeribanDT18}, the two-color algorithm (\emph{Vig2}) by Viglietta~\cite{DBLP:conf/algosensors/Viglietta13}, which is known to solve \RENDEZVOUS in \SSYNC and \CONVERGENCE in \ASYNC, the three-color algorithm (\emph{Vig3}) by Viglietta~\cite{DBLP:conf/algosensors/Viglietta13}, and the four-color algorithm (\emph{Das4}) by Das \emph{et al.}~\cite{DBLP:conf/icdcs/DasFPSY12,DBLP:journals/tcs/0001FPSY16}. We also investigate the algorithms assuming unreliable compasses by Izumi \emph{et al.}~\cite{DBLP:journals/siamcomp/IzumiSKIDWY12}: the \SSYNC static-error compass algorithm (\emph{Stat \SSYNC}), which, despite its name, works in \ASYNC, the \SSYNC dynamic-error compass algorithm (\emph{Dyn \SSYNC}), which does not work in \ASYNC, and the \ASYNC dynamic-error compass algorithm (\emph{Dyn \ASYNC}). We take advantage of the modularity of our simulator. The \texttt{Robot} class now carries several new properties: \texttt{color}, the color a robot presently displays; \texttt{compass}, the type of compass and error, \emph{i.e.}, 'none', 'static' or 'dynamic'; \texttt{compass\_error}, the maximum error allowed for the compass; and \texttt{compass\_offset}, the current compass error. The color is changed at the end of the \texttt{COMPUTE} method. Depending on the value of \texttt{compass}, \texttt{compass\_offset} is chosen either during the initialization or at the beginning of every \texttt{LOOK} method, as sketched below. Each algorithm is first carefully analyzed on paper to find the worst possible execution. Simulations are then run according to the aforementioned protocols. Due to the limitations described in Section~\ref{sssec:NRN}, we actually assess those protocols for a degraded notion of \CONVERGENCE rather than \GATHERING. The distance traveled is expressed relative to the initial distance between the two robots. In practice, the first robot is always located at $(0,0)$, and the second robot is placed at random on the circle of radius 1 centered at $(0,0)$. Algorithms are tested with no initial pending moves, as arbitrary pending moves would render fuel efficiency mostly impossible to monitor reliably. Results are summarized in Table~\ref{table_res}. The red color denotes cases where the simulation got stuck in non-gathered cycles and had to be manually unstuck; details as to why this happened are provided below. For scale, running 4 instances of \emph{Vig3} for one hour under the \ASYNC scheduler resulted in $\simeq$ 14 million total individual executions.
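As an illustration of this modularity, the following sketch extends the \texttt{Robot} skeleton of Section~\ref{chap:MonteCarlo} with these compass properties; the exact error-drawing logic shown here is a simplifying assumption on our part.

\begin{verbatim}
import random

class CompassRobot(Robot):
    def __init__(self, name, x, y, compass='none', compass_error=0.0):
        super().__init__(name, x, y)
        self.color = 'black'                # updated at the end of COMPUTE
        self.compass = compass              # 'none', 'static' or 'dynamic'
        self.compass_error = compass_error  # maximum allowed error
        self.compass_offset = 0.0
        if compass == 'static':
            # a static error is drawn once, at initialization
            self.compass_offset = random.uniform(-compass_error,
                                                 compass_error)

    def look(self, network):
        if self.compass == 'dynamic':
            # a dynamic error is redrawn at every LOOK
            self.compass_offset = random.uniform(-self.compass_error,
                                                 self.compass_error)
        super().look(network)
\end{verbatim}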
\clearpage \begin{table}[htb] \centering \begin{subfigure}[b]{\linewidth} \centering \captionsetup{justification=centering} \includegraphics[width=\linewidth]{rdv_max.png} \caption{Maximum Traveled Distance \\ Found / Predicted} \end{subfigure} \begin{subfigure}[b]{\linewidth} \centering \captionsetup{justification=centering} \includegraphics[width=\linewidth]{rdv_avg.png} \caption{Average Traveled Distance} \end{subfigure} \caption{Maximum and Average Traveled Distances} \label{table_res} \end{table} While most results match the predictions, our pen and paper analysis missed a worst-case execution for \ASYNC \emph{Vig3}, which was found by the simulator (highlighted in bold in Table~\ref{table_res}). This highlights the difficulty of manually finding the maximum distance even with simple algorithms and settings. It should be noted that rigid motion yields worse results than non-rigid. This is normal because increasing the traveled distance relies on picking a target outside of the $[r_1,r_2]$ segment, and when this is the case, performing the full motion increases the traveled distance more than performing it partially. Thus, unless stated otherwise, all further simulations assume rigid motion. The difference between \SSYNC and \ASYNC with respect to efficiency becomes apparent, as under the \ASYNC scheduler, optimal fuel consumption mandates using four colors, while a simple oblivious algorithm is sufficient in \SSYNC. The algorithms using compasses yield the most interesting results. First, numerous simulations of the \SSYNC static algorithm became stuck. These failures are due to the fact that the sine and cosine operations used in the algorithms tend to accumulate errors, and there is a possibility that a robot moves in a way that should result in an angle of exactly $0$, but instead randomly yields an angle of either $0-\epsilon$ or $0+\epsilon$, where $\epsilon$ is a very small positive number. This in turn results in unsolvable cycles that prevent \CONVERGENCE. As $\epsilon$ was never larger than $10^{-9}$, we chose to prevent this behavior by slightly enlarging the interval of the condition that should be triggered on an angle of zero to an angle in $[-10^{-6},10^{-6}]$. We do the same for all conditions for consistency. So any condition that should be true for angles in $[A,B[$ is now true for angles in $[A-10^{-6},B-10^{-6}[$, in $[A,B]$ now in $[A-10^{-6},B+10^{-6}]$, in $]A,B]$ now in $]A+10^{-6},B+10^{-6}]$ and in $]A,B[$ now in $]A+10^{-6},B-10^{-6}[$ (closed bounds are pushed outwards, open bounds inwards; for instance, a condition on angles in $[0,\pi[$ is now triggered for angles in $[-10^{-6},\pi-10^{-6}[$). Interestingly, this new condition only had a notable impact on the static-error algorithm. Indeed, these errors could be seen as small dynamic random angle errors. Since the static-error algorithm is not designed to be resilient against dynamic errors, it fails whenever they appear. This also demonstrates the resilience of the dynamic-error algorithms. \subsection[\textsf{Convergence} For \textit{n} Robots]{\CONVERGENCE For \textit{n} Robots} \noindent Cohen and Peleg~\cite{DBLP:journals/siamcomp/CohenP05} proved the Center of Gravity (CoG) algorithm solves \CONVERGENCE for $n$ robots under the \ASYNC scheduler. We analyze the fuel consumption of the algorithm under both the \SSYNC and \ASYNC schedulers. Results for the minimum, maximum, and average distance traveled are shown in Table~\ref{NCoG}. We use the sum of the distances to the CoG in the initial configuration as a baseline unit of distance, \emph{i.e.}, the distance traveled in \FSYNC.
\begin{table}[htb] \centering \captionsetup{justification=centering} \includegraphics[scale=0.39]{T2.png} \caption{Traveled Distances for CoG} \label{NCoG} \end{table} It should be noted that, while previous results are based on at least hundreds of thousands of simulations, the increase in simulation complexity meant that, in \ASYNC, for $n=25$, only 31 simulations could be computed within an hour, so they were discarded. Similarly, for $n=50$, no simulation could finish within an hour. Looking at the results, one element immediately jumps out: for $n \geq 3$, the CoG algorithm wastes movements. This is easy to understand: robots move towards the center of gravity, which for 3 or more robots is different from the geometric median (\emph{a.k.a.} the Weber point), which would actually minimize movement. Our tests seem to indicate that aiming for the median instead of the CoG can reduce traveled distance by up to 30\%. However, it is a known result that no general explicit formula for the geometric median exists, although it can be approximated iteratively, \emph{e.g.}, using Weiszfeld's algorithm. As a result, in practice, when trying to minimize traveled distance, \CONVERGENCE for $n$ robots should rely on an approximation of the geometric median rather than the center of gravity. \section{Analyzing Algorithms in Realistic Settings} \label{chap:realistic} \noindent In Section~\ref{chap:performance}, the simulation of inaccurate compasses yielded extremely interesting results. To follow this track, we now focus in this section on the setting where sensors are inaccurate. In more detail, we analyze the Center of Gravity (CoG) algorithm for \RENDEZVOUS in this setting, as well as the \GEOLEADEL algorithm by Canepa and Gradinariu Potop-Butucaru~\cite{DBLP:conf/sss/CanepaP07}. \subsection{Visibility Sensor Errors} \noindent To study the impact of inaccurate sensors, we consider three different models for vision error. For a robot $r_1$ looking at a robot $r_2$ located at $(x,y)$ in the Cartesian coordinate system centered at $r_1$, and at $(r,\theta)$ in the polar coordinate system centered at $r_1$, we define: \begin{itemize} \item The \emph{absolute} error model~\cite{DBLP:journals/automatica/Martinez09} uses a constant value $err$. A first number $R_{err}$ is picked uniformly at random in $[0,err]$, and a second $\theta_{err}$ in $[0,2\pi]$. The perceived position of $r_2$ is then $(x+R_{err} \cos(\theta_{err}), y+R_{err} \sin(\theta_{err}))$. \item The \emph{relative} error model~\cite{DBLP:journals/siamcomp/CohenP08} uses two constants $err_{dist}$ and $err_{angle}$. Two numbers $R_{err}$ and $\theta_{err}$ are picked uniformly at random in $[-err_{dist},err_{dist}]$ and $[-err_{angle},err_{angle}]$. The polar coordinates of $r_2$ are then perceived to be $(r + r \cdot R_{err}, \theta + \theta_{err})$. \item The \emph{absolute-relative} error model is similar to the relative error model, but the perceived polar coordinates are $(r + R_{err}, \theta + \theta_{err})$. \end{itemize} These error models are depicted in Figure~\ref{fig:errors}.
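For concreteness, these three definitions translate directly into the following Python sketch. The function name and signature are ours, for illustration only, and do not correspond to a specific simulator API.
\begin{verbatim}
import math
import random

def perceived_position(x, y, model, err=0.0, err_dist=0.0, err_angle=0.0):
    # Perceived position of r2, located at (x, y) in the Cartesian
    # frame centered on the observing robot r1.
    if model == 'absolute':
        r_err = random.uniform(0, err)
        t_err = random.uniform(0, 2 * math.pi)
        return (x + r_err * math.cos(t_err), y + r_err * math.sin(t_err))
    # The two remaining models perturb the polar coordinates of r2.
    r, theta = math.hypot(x, y), math.atan2(y, x)
    r_err = random.uniform(-err_dist, err_dist)
    t_err = random.uniform(-err_angle, err_angle)
    if model == 'relative':
        r += r * r_err      # distance error proportional to r
    elif model == 'abs-rel':
        r += r_err          # absolute distance error
    theta += t_err
    return (r * math.cos(theta), r * math.sin(theta))
\end{verbatim}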
\begin{figure}[htb] \centering \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=0.6\textwidth]{err_abs.png} \caption{Absolute error} \label{fig:err_abs} \end{subfigure} \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=0.6\textwidth]{err_rel.png} \caption{Relative error} \label{fig:err_rel} \end{subfigure} \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=0.6\textwidth]{err_relabs.png} \caption{Absolute-relative error} \label{fig:err_relabs} \end{subfigure} \captionsetup{justification=centering} \caption{Types of Errors \\ The $r_2$ point is the actual location of robot $r_2$. The red hashed area represents possible detected positions by robot $r_1$.} \label{fig:errors} \end{figure} It should be noted that each model could be used to accurately model the errors of different types of sensors. The absolute error model is interesting because it is simple to compute, requires no change of coordinate system, uses a single parameter, and closely matches the behavior of robots where the \LOOK phase is an abstraction of GPS-type coordinate exchanges~\cite{DBLP:journals/jnw/YaredDIW07}. The two relative models are more complex from a computing perspective, but closely match the use of either computer vision or telemetry sensors. Both carry an angular error matched with either a proportional or an absolute distance error. Which type of distance error is more appropriate would depend on the exact type of sensor. These new error models require adding three properties to the \texttt{Robot} class: \begin{itemize} \item \texttt{LOOK\_error\_type}, a string that defines the type of error and can be either \texttt{'none'}, \texttt{'relative'}, \texttt{'absolute'}, or \texttt{'abs-rel'}. \item \texttt{LOOK\_distance\_error}, a float that matches either $err$ or $err_{dist}$, depending on the type of error. \item \texttt{LOOK\_angle\_error}, a float that matches $err_{angle}$. \end{itemize} Robots then choose the corresponding error (with parameters chosen uniformly at random) when performing their \LOOK operation. \subsection[\textsf{Convergence} for \textit{n}=2 Robots]{\CONVERGENCE for \textit{n}=2} \noindent \CONVERGENCE with vision error using the CoG algorithm has already been studied by Cohen and Peleg~\cite{DBLP:journals/siamcomp/CohenP08}. The error model they considered is identical to our relative error model. Their paper states that \CONVERGENCE with distance error using the CoG algorithm is impossible in the general case. This is, however, only true for $n\geq3$, which the authors omit to mention. In the case $n=2$, it appears to be theoretically impossible to make the algorithm diverge for a distance error smaller than $100\%$, \emph{i.e.}, $err_{dist} = 1$. We can reasonably ignore the case of an error greater than $100\%$, as it would allow a robot to perceive another one directly behind itself. To our knowledge, no formal result exists regarding the angle error. In theory, the maximum angle error is $\pi$. We simulate \CONVERGENCE for $n=2$ robots using the CoG algorithm for the relative error model. The error for each robot is chosen uniformly at random at the beginning of the execution.
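Continuing the sketches above, a \LOOK phase with inaccurate vision could then be written as follows, extending the \texttt{Robot} sketch given earlier and assuming the hypothetical \texttt{perceived\_position} helper from the previous subsection. In this simplified version, the snapshot stores perceived coordinates rather than \texttt{Robot} objects.
\begin{verbatim}
class NoisyRobot(Robot):
    # Sketch: a LOOK phase applying the configured vision error model.
    LOOK_error_type = 'relative'
    LOOK_distance_error = 0.1      # err or err_dist (e.g., 10%)
    LOOK_angle_error = 0.0175      # err_angle (e.g., about 1 degree)

    def LOOK(self, network):
        self.snapshot = []
        for r in network:
            if r is self:
                continue
            # Perceive r's position, relative to self, with error.
            px, py = perceived_position(
                r.x - self.x, r.y - self.y,
                self.LOOK_error_type,
                err=self.LOOK_distance_error,
                err_dist=self.LOOK_distance_error,
                err_angle=self.LOOK_angle_error)
            self.snapshot.append((px, py))   # perceived, not actual
\end{verbatim}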
\begin{figure}[htb] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[scale=0.28]{Figure_1.png} \caption{Maximum Traveled Distance} \label{fig:Max_dist} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[scale=0.28]{Figure_2.png} \caption{Average Traveled Distance} \label{fig:Avg_Dist} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[scale=0.28]{Figure_3.png} \caption{Proportion of Diverging Executions} \label{fig:Fail_exec} \end{subfigure} \captionsetup{justification=centering} \caption{Movement and Divergence of the CoG Algorithm for Two Robots with Inaccurate Visibility Sensors} \label{fig:CoG_Dist_Ang} \end{figure} We must also consider the now possible case of a diverging algorithm. Since the execution is random, any setting should \emph{eventually} converge. However, we must put a reasonable stopping condition in case the execution is clearly diverging. We chose to activate the defeat condition if the distance between the two robots becomes ten times larger than the distance in the initial configuration. Note that the apparent decrease in maximum and average traveled distance for higher angle errors is most likely due to the increase in diverging executions (fewer executions converge, but those that do travel shorter distances). It clearly appears that the angular error has a much greater potential for both preventing \CONVERGENCE and making robots waste fuel. Indeed, when the angular error remains below $3\pi/5$, a distance error of up to 100\% can be tolerated with no performance loss. To give some perspective, the realistic setting of a $10\%$ vision error with a $1^\circ$ angle error yields a maximum traveled distance of 1.221 and an average of 1.036, with no divergent executions out of more than 500 million data points. \pagebreak \subsection{Compass Errors} \noindent In the particular case of the compass-based algorithms of Izumi \emph{et al.}~\cite{DBLP:journals/siamcomp/IzumiSKIDWY12}, \RENDEZVOUS is possible even when the compasses are inaccurate. More specifically, the maximum tolerated errors are $\frac{\pi}{2}$, $\frac{\pi}{4}$ and $\frac{\pi}{6}$ for the static \SSYNC, dynamic \SSYNC, and dynamic \ASYNC algorithms, respectively. In our simulations, we chose static errors for consistency, with values up to $\frac{49\pi}{100}$, $\frac{24\pi}{100}$ and $\frac{16\pi}{100}$, to avoid possible edge cases. Results of maximum and average traveled distances for these algorithms are detailed in Table~\ref{table_comp_err}. \begin{table}[htb] \centering \begin{subfigure}[b]{\linewidth} \centering \captionsetup{justification=centering} \includegraphics[scale=0.35]{Compass_err_max.png} \caption{Maximum Traveled Distance} \end{subfigure} \begin{subfigure}[b]{\linewidth} \centering \captionsetup{justification=centering} \includegraphics[scale=0.35]{Compass_err_avg.png} \caption{Average Traveled Distance} \end{subfigure} \captionsetup{justification=centering} \caption{Maximum and Average Traveled Distances for \RENDEZVOUS \\ with Inaccurate Compasses} \label{table_comp_err} \end{table} We observe that the unreliable compasses are used in a way that makes robots rotate around each other until they are oriented such that one robot moves while the other stays, regardless of the error. However, there are no provisions in these algorithms to limit distance increases during the rotating phases, which explains the results.
Detailed observation shows the distance between the two robots can gradually diverge towards infinity during rotation and then converge to zero in a single cycle. This also revealed a problem for our \CONVERGENCE criterion: robots could converge at rather large coordinates, where the two robots' coordinates are consecutive representable values, but, since the accuracy of floating point numbers decreases as their magnitude increases, the distance between the two robots remained greater than $10^{-10}$ (for instance, near coordinates of magnitude $10^{6}$, consecutive double-precision values are approximately $1.2 \times 10^{-10}$ apart, which already exceeds the threshold). As a result, we modified the criterion to $|r_1r_2|<\max(10^{-10},|Or_1| \times 10^{-10})$, with $O$ the point of coordinates $\{0,0\}$. \subsection[\textsf{Geoleader} \textsf{Election}]{\GEOLEADEL} \noindent Let us now consider the \GEOLEADEL algorithm (Algorithm~\ref{algCan3}) by Canepa and Gradinariu Potop-Butucaru~\cite{DBLP:conf/sss/CanepaP07}, for $n=3$. Looking at our previous results from Section~\ref{sssec:NRN}, we notice that the borders between zones should be an issue for imperfect sensors, as different errors for different robots may lead to robots electing different \robstate{Leader} robots. \begin{figure}[htb] \centering \includegraphics[width=0.6\linewidth]{ex1.png} \caption{Example of \LEADEL Failure Due to Imperfect Vision} \label{fig:ex1} \end{figure} We demonstrate how this phenomenon can occur in Figure~\ref{fig:ex1} for the case of absolute vision error. On top is the actual configuration, where angles $\widehat{r_1r_2r_3}$ and $\widehat{r_2r_1r_3}$ are equal\footnote{Because robots have no chirality, angles cannot reliably be distinguished from their opposite. So, two opposite angles may always be considered equal.}, and angle $\widehat{r_1r_3r_2}$ is smaller than both, so $r_3$ should be elected. The red circle shows the possible perceived positions of $r_3$ by $r_1$ and $r_2$ due to vision error. In the bottom-left case, we show a possible perception by $r_1$ where $r_1$ should be elected \robstate{Leader}, as $\widehat{r_2r_1r_3}$ is now greater than $\widehat{r_1r_2r_3}$. In the bottom-right case, $r_2$ similarly thinks it should be elected. Now, two different robots consider themselves \robstate{Leader} and the election process fails. We now use the absolute model to simulate \GEOLEADEL with $err = 0.001$, for $n=3$. This simulation yields $\simeq 0.1\%$ of errors in total, where two robots compute different \robstate{Leader} robots, and is shown in Figure~\ref{fig:Simu2}. \begin{figure}[htb] \centering \captionsetup{justification=centering} \includegraphics[width=0.675\linewidth]{Map_simu_err_613.png} \caption{Simulation for 3-robot \LEADEL with Absolute Vision Error \\ Yellow points represent configurations where the error generates two different \robstate{Leader} robots.} \label{fig:Simu2} \end{figure} \section[Improved \textsf{Convergence} and \textsf{Leader} \textsf{Election}]{Improved \CONVERGENCE and \LEADEL for Faulty Visibility Sensors} \label{chap:improved} \noindent Following our observations of problematic behaviors in Sections~\ref{chap:performance} and~\ref{chap:realistic}, we provide two new algorithms: a fuel-efficient \CONVERGENCE algorithm for two robots, and a \GEOLEADEL algorithm that is resilient to faulty visibility sensors. \subsection[Fuel Efficient \textsf{Convergence}]{Fuel Efficient \CONVERGENCE} \noindent We provide a new algorithm (Algorithm~\ref{alg:efficient}) for the \ASYNC \CONVERGENCE of two robots.
Our algorithm is a simplified version of the two-color algorithm by Viglietta~\cite{DBLP:conf/algosensors/Viglietta13}, which does \emph{not} solve \GATHERING (while Viglietta's algorithm does solve \GATHERING in \SSYNC). Our algorithm, however, ensures that no target can ever be outside of the segment between the two robots (hence, no moves are wasted), and that there exists a scheduling such that convergence is eventually achieved. It is denoted by \textsc{FEC} (Fuel Efficient \CONVERGENCE) and presented in Figure~\ref{fig:Efficient2}. Our algorithm still uses two colors (\Black and \White): when observing the other robot's color, the observing robot either remains still (the 'Self' target) or goes to the computed midpoint between the two robots (the 'Midpoint' target), possibly switching its color to the opposite one. \begin{figure}[htb] \centering \begin{tikzpicture} \node[blk] (B) {}; \node[wht] (W) [right of=B] {}; \path[->] (B) edge[bend left] node[above,align=center]{\Black$\rightarrow$Self} (W); \path[->] (W) edge[bend left] node[below,align=center]{\White$\rightarrow$Midpoint \\ \Black$\rightarrow$Self} (B); \path[->] (B) edge[out=150,in=210,loop] node[near start,above,align=right]{\White$\rightarrow$Self} (B); \end{tikzpicture} \caption{FEC: Fuel Efficient \CONVERGENCE Algorithm for Two Robots} \label{fig:Efficient2} \end{figure} \begin{algorithm} \caption{FEC: Fuel Efficient \CONVERGENCE Algorithm for Two Robots} \label{alg:efficient} \begin{algorithmic} \IF{me.color = \White} \STATE me.color $\Leftarrow$ \Black \IF{other.color = \White} \STATE me.destination $\Leftarrow$ other.position/2 \ENDIF \ELSIF{me.color = \Black} \IF{other.color = \Black} \STATE me.color $\Leftarrow$ \White \ENDIF \ENDIF \end{algorithmic} \end{algorithm} As a sanity check, we ran this algorithm through our simulator for one hour ($\simeq 30$ million data points) under a randomized \ASYNC scheduler and could not find a single execution where the traveled distance was greater than the initial distance. \begin{theorem} The Fuel Efficient \CONVERGENCE Algorithm (\ref{alg:efficient}) guarantees that the distance traveled for \CONVERGENCE is never greater than the initial distance between the two robots under the \ASYNC scheduler, assuming no pending moves in the initial configuration. \end{theorem} \begin{proof} First, we see that to achieve \CONVERGENCE with an optimal distance, robots should always be moving towards each other. So, for robots to converge using more than the initial distance, it is required that, at some point in the execution, one robot moves \emph{not towards} the other robot. We note that a network of two disoriented robots can be simplified as a line. In that sense, the only movement that can increase the maximum \CONVERGENCE distance is when a robot moves opposite the other robot; in other words, when robots 'switch sides'. Let us now prove that no robot can start moving while the other robot is in its \MOVE phase. Only the $\{\WHITE,\WHITE\}$ snapshot can trigger a \MOVE phase. Since this transition implies a change of color to \BLACK at the end of the \COMPUTE phase, robots that move can only be \BLACK. So, if a robot is moving, it is \BLACK, and the other robot, regardless of color, cannot start moving because its snapshot is different from $\{\WHITE,\WHITE\}$.
Furthermore, because robots switch to \BLACK after moving, and can only switch to \WHITE if the other robot is \BLACK, no robot can execute multiple \MOVE phases in sequence unless the other robot has executed at least a full cycle in between. So a robot cannot move multiple times while the other has pending moves. We now look at what happens after each robot completes at least one full cycle. We assume $r_1$ performs a \LOOK, and $r_2$ performs $k$ cycles before $r_1$ finishes its \MOVE. The distance after $r_1$ finishes its cycle is presented in Table~\ref{tab:movCycles-bis}. \begin{table}[htb] \centering \begin{adjustbox}{max width=\textwidth} \begin{tabular}{|c|c|c|}\hline & $r_1$ has a pending \STAY & $r_1$ has a pending \HALF \\ \hline $r_2$ executes $k$ \STAY & $X$ & $\left[\dfrac{X}{2} , X - \delta \right]$ \\ \hline $r_2$ executes $1$ M2H\footnotemark[2]& $\left[ \dfrac{X}{2} , X - \delta \right]$ & $\left[ 0 , X - 2\delta \right]$ \\ \hline \end{tabular} \end{adjustbox} \caption{Distance after a full cycle of $r_1$ and $k$ full cycles of $r_2$ with an initial distance of $X$} \label{tab:movCycles-bis} \end{table} \footnotetext[2]{As explained above, moving a second time requires at least a full cycle from the other robot.} In the case of simultaneous \HALF, the distance can be reduced down to zero, but robots cannot switch sides. In both other cases where a \MOVE happens, the distance is reduced at most down to half, and robots cannot switch sides. Overall, in no case can the robots move away from one another ('switch sides'), so the total distance traveled never exceeds the initial distance between the two robots. \end{proof} However, while the randomized scheduler we use in the simulator ensures convergence is always achieved, a rapid analysis shows that this algorithm ensures fuel efficiency, but does not actually ensure convergence. In fact, a simple \SSYNC scheduling can infinitely prevent robots from moving. This further highlights that simulations and formal proofs are complementary. We conjecture that fuel-efficient \CONVERGENCE is not actually achievable with two colors, and that algorithms using three colors may even yield fuel-efficient \RENDEZVOUS (not just \CONVERGENCE). We also compare the resilience of this algorithm against vision errors, with the center of gravity algorithm as a baseline, in Figures~\ref{MAX_err} and~\ref{AVG_err}. Our results show that this algorithm is slightly more resilient to vision errors than CoG. \begin{figure}[htb] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{MAX_err.png} \captionsetup{justification=centering} \caption{Maximum Distance Traveled by \textsc{CoG} (top) and \textsc{FEC} (bottom)} \label{MAX_err} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{AVG_err.png} \captionsetup{justification=centering} \caption{Average Distance Traveled by \textsc{CoG} (top) and \textsc{FEC} (bottom)} \label{AVG_err} \end{subfigure} \end{figure} \subsection[Error Resilient \textsf{Geoleader} \textsf{Election}]{Error Resilient \GEOLEADEL} \label{ssec:errLead} \noindent The \GEOLEADEL algorithm by Canepa and Gradinariu Potop-Butucaru~\cite{DBLP:conf/sss/CanepaP07} was \emph{not} designed under the assumption that the visibility sensors could be prone to errors. In this subsection, we use this awareness to create a new, error-resilient version of this algorithm, using our simulation framework.
\paragraph{\textsf{Geoleader} \textsf{Election} for Four Robots} One intuitive way of building a fully resilient algorithm for \LEADEL could be based on robots computing the bounds of the error zones. While this seems feasible for a 3-robot election, it becomes far less trivial for four robots or more. We present the results of a leader election for four robots in the appendix. \paragraph{Proposed Algorithm} In Section~\ref{chap:realistic}, we used the simulation framework to detect failed elections caused by visibility sensor errors. Since mobile robots are able to run any algorithm during their \COMPUTE phase, they can also run the simulation framework to do precisely that. The improved algorithm relies on the knowledge of the vision error model and its upper bounds to simulate random errors in a robot's position and snapshot, and to determine whether there exists a possibility of the other robots electing different \robstate{Leader} robots. Note that knowing with absolute certainty that the election cannot fail (\emph{i.e.}, that the election cannot yield two different \robstate{Leader} robots for two different robots) would require checking the entire surface of possible errors, which is not feasible in practice. So, we assume that robots perform a finite number of trials and decide accordingly. Each robot internally simulates a position error for each robot in its snapshot within the known margins, performs a simulated election for each robot in its snapshot, and checks for discrepancies in the resulting \robstate{Leader} robots. This is repeated with new random errors for a given number of tries, similar to a Monte-Carlo approach. Once a robot believes the election process can succeed, it chooses the \robstate{Leader} as in the original algorithm. Otherwise, it picks a random direction and distance, and performs a \MOVE to ``scramble'' the network. This process repeats until all robots believe the election can succeed. This process is detailed in Algorithm~\ref{algR} (a Python transcription is sketched below). \begin{algorithm} \caption{Reliable \LEADEL algorithm} \label{algR} \begin{algorithmic} \STATE $L = self.$COMPUTE$('LeaderElection')$ \STATE $my\_network = self.snapshot \cup \{self\}$ \STATE $counter = 0$ \WHILE{$counter < nb\_tries$} \FOR{$r_1$ in $my\_network$} \STATE $r_v = $ copy of $r_1$ \STATE Change $r_v.x$ and $r_v.y$ randomly according to error parameters \STATE $r_v.snapshot = my\_network \setminus \{r_1\}$ \FOR{$r_2$ in $r_v.snapshot$} \STATE Change $r_2.x$ and $r_2.y$ randomly according to error parameters \ENDFOR \STATE $L_v = r_v.$COMPUTE$('LeaderElection')$ \IF{$L \neq L_v$} \STATE Move randomly \STATE Exit \ENDIF \ENDFOR \STATE $counter \mathrel{+}= 1$ \ENDWHILE \STATE $L$ is elected \robstate{Leader} \end{algorithmic} \end{algorithm} \noindent We now perform simulations using this algorithm. Each point is classified as follows: \begin{itemize} \item If no robot detects a possible error, it is a valid point. \item If at least one robot has detected a possible error, and decided to move as a result, it is a detected possible error point. \item If no robot moves, but two robots have different \robstate{Leader} robots, it is an undetected error point. \end{itemize} We measure the proportion of undetected error points and possible error points for $nb\_tries$ between 0 and 30. Results are presented in Figure~\ref{fig:perf}.
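For concreteness, the internal Monte-Carlo check of Algorithm~\ref{algR} can be transcribed into the following Python sketch. The helpers \texttt{compute\_leader} and \texttt{perturb} are hypothetical stand-ins for the original election computation and for the error model (here, the absolute model); they are not part of the simulator's actual API.
\begin{verbatim}
import copy
import math
import random

def compute_leader(robots):
    # Hypothetical stand-in for the original election computation
    # (COMPUTE('LeaderElection') in Algorithm algR); returns one
    # robot from 'robots'. Implementation omitted.
    raise NotImplementedError

def perturb(r, err):
    # Absolute error model: radius uniform in [0, err], angle uniform.
    radius = random.uniform(0, err)
    angle = random.uniform(0, 2 * math.pi)
    r.x += radius * math.cos(angle)
    r.y += radius * math.sin(angle)

def election_can_fail(robot, nb_tries, err):
    # Mirrors the while loop of Algorithm algR for one robot.
    network = robot.snapshot + [robot]
    leader = compute_leader(network)          # unperturbed election
    for _ in range(nb_tries):
        for r1 in network:
            r_v = copy.deepcopy(r1)           # virtual, perturbed copy
            perturb(r_v, err)
            r_v.snapshot = [copy.deepcopy(r2)
                            for r2 in network if r2 is not r1]
            for r2 in r_v.snapshot:
                perturb(r2, err)
            # Leaders are compared by name, since copies are new objects.
            if compute_leader(r_v.snapshot + [r_v]).name != leader.name:
                return True    # discrepancy found: move randomly instead
    return False               # no discrepancy: elect the leader
\end{verbatim}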
\begin{figure}[htb] \centering \includegraphics[width=0.7\linewidth]{Perf.png} \caption{Performance of the Error-Resilient Election Algorithm \\ $err = 0.001$} \label{fig:perf} \end{figure} Note that the number of undetected error points, while decreasing, does not reach zero under our testing conditions. Also, using a single internal simulation typically results in a $\sim 80\%$ reduction in the number of undetected error points, and using 10 internal simulations resulted in a $99.5\%$ reduction. The best-suited number of internal simulations depends on the requirements in terms of both election speed and reliability of the obtained solution. Importantly, we notice that, were we to choose an error model and error bounds that capture the possible errors of representing real numbers using limited-precision floats, this particular algorithm, given an arbitrarily large number of random tries (so as to approximate the continuum of $\mathbb{R}^2$), could be made to reliably detect anomalies due to the errors induced by evolving in the continuous plane while only perceiving a discretized plane. Actually, we conjecture that this algorithm can be adapted to allow any algorithm that makes decisions based on robot locations to operate properly in a perceived discretized plane. Furthermore, using this algorithm allows us to reduce the size of a snapshot to a finite, storable amount, and thus to realistically use the \textbf{SyncSim} protocol~\cite{DBLP:journals/tcs/0001FPSY16} and fully simulate the \FSYNC scheduler in \luminous \ASYNC. \section{Conclusion} \label{sec:conclusion} \noindent In this paper, we introduce a modular framework designed to simulate mobile robots for any given setting. We discuss the limitations and constraints of this approach, and use it to compute the maximum distance traveled, or fuel efficiency, of multiple algorithms in several settings, with interesting results. In particular, we note that the algorithm by Izumi \emph{et al.}~\cite{DBLP:journals/siamcomp/IzumiSKIDWY12} can lead to an unbounded increase in distance before eventually gathering. Similarly, the center of gravity algorithm is inherently sub-optimal for $n>2$ robots, and robots should use an algorithm based on the geometric median instead. We then use this framework to simulate inaccurate sensors for mobile robots and verify the behavior of \CONVERGENCE and motion-based \LEADEL under this new model. We also introduce errors in the perception of colors for \luminous robots performing state-of-the-art two-robot \GATHERING. Finally, we designed two new algorithms. The first one is designed to perform two-robot \CONVERGENCE under the \ASYNC scheduler with optimal fuel efficiency. The second algorithm uses the simulator itself to allow robots to solve motion-based \LEADEL with inaccurate sensors. The latter can be adapted to allow decision-making algorithms, such as \LEADEL, to function using discretized snapshots, and thus to use the \textbf{SyncSim} protocol to simulate the \FSYNC scheduler in \luminous \ASYNC. Overall, this framework achieves its planned objective of being both easy to use and able to produce useful results for researchers. As a test, we timed the full implementation and testing, in \FSYNC, \SSYNC and \ASYNC, of the two-color \RENDEZVOUS algorithm by Viglietta: it required less than half an hour, including basic network monitoring and testing.
The source code and instructions for our simulator are provided in the appendix and at the following repository: \url{https://github.com/UberPanda/PyBlot-Sim} \subsection*{Future Work} \noindent As we already discussed, our simulator is modular, to allow its use with any given algorithm and model. So it seems logical that it should, ideally, implement every existing model and test all major algorithms in the literature, such as mutual visibility for opaque robots. Furthermore, while interesting for researchers, our simulator is not a tool for formal proofs. However, one could also argue that, in its current state, we have not proven that the simulator actually simulates mobile robots, even within our degraded hypotheses. We believe that the simulator itself should be formally proven to match the model of mobile robots it claims to simulate. Note that the usefulness of this proof would be limited, as the addition of any new module may require proving the entire simulator again. Finally, our \LEADEL algorithm for errors in vision is able to function in a continuous setting using discretized snapshots. The design philosophy behind this algorithm, using randomized tries to simulate sensor errors, is not specific to the \LEADEL problem, and it could be used for other algorithms that rely on making decisions based on the locations of robots in the network and that are sensitive to errors in perception. Building new algorithms that can use these finite snapshots would allow us to use the \textbf{SyncSim} protocol~\cite{DBLP:journals/tcs/0001FPSY16} and simulate an \FSYNC scheduler in \luminous \ASYNC, which would be a major advantage for resilience to asynchrony. \printbibliography \newpage \section{Introduction} Since the seminal work of Suzuki and Yamashita~\cite{DBLP:journals/siamcomp/SuzukiY99}, much research on cooperative mobile robots has aimed at identifying the minimal assumptions (in terms of synchrony, sensing capabilities, environment, etc.) under which basic problems can be solved. A recent state of the art was proposed by Flocchini \emph{et al.}~\cite{DBLP:series/lncs/Flocchini19}. Robots are modeled as mathematical points in the 2D Euclidean plane and independently execute their own instance of the same algorithm. In the model we consider, robots are anonymous (\emph{i.e.}, they are indistinguishable from each other), oblivious (\emph{i.e.}, they have no persistent memory of the past), and disoriented (\emph{i.e.}, they do not agree on a common coordinate system). The robots operate in Look-Compute-Move cycles. In each cycle, a robot ``Looks'' at its surroundings and obtains (in its own coordinate system) a snapshot containing the locations of all robots. Based on this visual information, the robot ``Computes'' a destination location (still in its own coordinate system), and then ``Moves'' towards the computed location. Since the robots are identical, they all follow the same deterministic algorithm. The algorithm is oblivious if the computed destination in each cycle depends only on the snapshot obtained in the current cycle (and not on stored previous snapshots). The snapshots obtained by the robots are not consistently oriented in any manner (that is, the robots' local coordinate systems share neither a common direction nor a common chirality\footnote{Chirality denotes the ability to distinguish left from right.}). The execution model significantly impacts the ability to solve collaborative tasks. Three different levels of synchronization have been commonly considered.
The strongest model is the fully-synchronous (\FSYNC) model~\cite{DBLP:journals/siamcomp/SuzukiY99}, where each phase of each cycle is performed simultaneously by all robots. The semi-synchronous (\SSYNC) model~\cite{DBLP:journals/siamcomp/SuzukiY99} considers that time is discretized into rounds, and that in each round an arbitrary yet non-empty subset of the robots is active. The robots that are active in a particular round perform exactly one atomic \LOOK-\COMPUTE-\MOVE cycle in that round. The weakest model is the asynchronous (\ASYNC) model~\cite{DBLP:series/synthesis/2012Flocchini,DBLP:journals/tcs/FlocchiniPSW05}, which allows arbitrary delays between the \LOOK, \COMPUTE, and \MOVE phases, and the movement itself may take an arbitrary amount of time. It is assumed that the scheduler (seen as an adversary) is fair in the sense that, in each execution, every robot is activated infinitely often. \subsection{Previous works and Motivations} An important shortcoming of the robot model introduced by Suzuki and Yamashita~\cite{DBLP:journals/siamcomp/SuzukiY99} with respect to real-world implementation of mobile robot algorithms is the assumption that both the vision sensors and the actuation motors are perfect. More specifically, the model assumes that robots have an infinite vision range, and can sense the position of other robots relative to their own with infinite accuracy. Robots are also usually able to reach their target with infinite movement precision (with respect to the angle to the target). Several attempts have been made to make the \OBLOT model more realistic, \emph{e.g.}, by limiting the range of sensors through the limited visibility model~\cite{DBLP:journals/trob/AndoOSY99,DBLP:conf/antsw/GordonEB08,DBLP:conf/antsw/GordonWB04}, by allowing the sensors to miss other robots~\cite{DBLP:conf/sirocco/HeribanT19}, by using inaccurate sensors~\cite{DBLP:journals/siamcomp/CohenP08,DBLP:conf/antsw/GordonEB08,DBLP:conf/antsw/GordonWB04,DBLP:journals/automatica/Martinez09,DBLP:journals/tcs/YamamotoIKIW12}, or by discarding the hypothesis that robots are transparent~\cite{DBLP:conf/cccg/LunaFPSV14,DBLP:journals/tcs/HonoratPT14}. However, many attempts are hindered by the increased complexity of manually proving algorithms in those more complex settings. For instance, to our knowledge, the consequences of error-prone vision have only been studied through very simple problems: \GATHERING and \CONVERGENCE~\cite{DBLP:journals/siamcomp/CohenP08,DBLP:conf/antsw/GordonEB08,DBLP:conf/antsw/GordonWB04,DBLP:journals/automatica/Martinez09,DBLP:journals/tcs/YamamotoIKIW12}. To allow more complex problems to be studied under more realistic settings, it appears necessary to favor a machine-assisted approach. Formal methods encompass a long-lasting path of research that is meant to overcome errors of human origin. Unsurprisingly, this mechanized approach to protocol correctness was used in the context of mobile robots~\cite{DBLP:conf/srds/BonnetDPPT14,DBLP:conf/sss/DevismesLPRT12,DBLP:journals/dc/BerardLMPTT16,DBLP:conf/sss/AugerBCTU13,DBLP:conf/sss/MilletPST14,DBLP:journals/ipl/CourtieuRTU15,berard:hal-01238784,DBLP:conf/prima/RubinZMA15,DBLP:conf/fmcad/SangnierSPT17,DBLP:conf/icdcn/BalabonskiPRT18,DBLP:journals/mst/BalabonskiDRTU19,DBLP:conf/netys/BalabonskiCPRTU19,DBLP:journals/fmsd/SangnierSPT20,DBLP:conf/srds/DefagoHTW20}.
When robots move freely in a continuous two-dimensional Euclidean space, to the best of our knowledge, the only formal framework available is Pactole\footnote{\url{http://pactole.lri.fr}}. Pactole relies on higher-order logic to certify impossibility results~\cite{DBLP:conf/sss/AugerBCTU13,DBLP:journals/ipl/CourtieuRTU15,DBLP:conf/icdcn/BalabonskiPRT18}, as well as the correctness of algorithms~\cite{DBLP:conf/wdag/CourtieuRTU16,DBLP:journals/mst/BalabonskiDRTU19} in the \FSYNC and \SSYNC models, possibly for an arbitrary number of robots (hence in a scalable manner). Pactole was recently extended by Balabonski~\emph{et al.}~\cite{DBLP:conf/netys/BalabonskiCPRTU19} to handle the \ASYNC model, thanks to its modular design. However, in its current form, Pactole lacks automation; that is, in order to prove a result formally, one still has to write the proof (which is then automatically verified), which requires expertise both in Coq (the language Pactole is based upon) and in the mathematical and logical arguments one should use to complete the proof. On the other hand, model checking and its derivatives (automatic program synthesis, parameterized model checking) hint at more automation once a suitable model has been defined with the input language of the model checker. In particular, model checking proved useful to find bugs (usually in the \ASYNC setting)~\cite{DBLP:journals/dc/BerardLMPTT16,DBLP:conf/sofl/DoanBO16,DBLP:conf/opodis/Doan0017} and to formally check the correctness of published algorithms~\cite{DBLP:conf/sss/DevismesLPRT12,DBLP:journals/dc/BerardLMPTT16,DBLP:conf/prima/RubinZMA15,DBLP:conf/srds/DefagoHTW20}. Automatic program synthesis~\cite{DBLP:conf/srds/BonnetDPPT14,DBLP:conf/sss/MilletPST14} was used to automatically obtain algorithms that are ``correct-by-design''. However, those approaches are limited to instances with few robots. Generalizing them to an arbitrary number of robots with similar models is doubtful, as Sangnier \emph{et al.}~\cite{DBLP:journals/fmsd/SangnierSPT20} proved that safety and reachability problems are undecidable in the parameterized case with default models. Another limitation of the above approaches is that they \emph{only} consider cases where mobile robots \emph{evolve in a \underline{discrete} space} (\emph{i.e.}, a graph). This limitation is due to the model used, which closely matches the original execution model by Suzuki and Yamashita~\cite{DBLP:journals/siamcomp/SuzukiY99}. As a computer can only model a finite set of locations, a continuous 2D Euclidean space cannot be expressed in this model. Overall, the only way to obtain automated proofs of correctness in the continuous space context through model checking is to use a more abstract model~\cite{DBLP:conf/wdag/DefagoHTW19,DBLP:conf/srds/DefagoHTW20}, which requires writing additional handwritten theorems to assess its relevance in the original model. More generally, using formal methods for complex algorithms in realistic settings requires a substantial effort that may be out of reach when one simply wants to assess the feasibility of an algorithmic design. Furthermore, these approaches currently only address whether the added constraints enable the construction of counter-examples for a given task, and, to the best of our knowledge, do not address the important issue of performance degradation, or, in the cases where counter-examples do appear, the likelihood of their appearance and their impact.
In fact, an overwhelming majority of the research on mobile robotic swarms has focused on proving, under a given set of conditions, whether there exists a counter-example to a given solution proposal for a problem. On the other hand, the practical efficiency of a given algorithm (with respect to real-world criteria such as fuel consumption) was rarely studied by the Distributed Computing community, albeit being of paramount importance to the Robotics community~\cite{DBLP:conf/gecco/AroraMDB19,DBLP:journals/ijrr/YooFS16}. Fuel-constrained robots have been considered in the discrete graph context, for both exploration~\cite{DBLP:conf/arcs/DyniaKS06} and distributed package delivery~\cite{DBLP:conf/algosensors/Chalopin0MPW13}, but, to our knowledge, no study considered the two-dimensional Euclidean space model that was promoted by Suzuki and Yamashita~\cite{DBLP:journals/siamcomp/SuzukiY99}. A possible explanation for this situation is that the more complex the algorithm (or the system settings), the more difficult it becomes to rigorously find the worst possible executions. We investigate another approach: since our goal is to bridge the gap between theoretical mobile robots and actual robotics, we move one step towards robotics and use a very common tool: simulation. First, robot simulators, such as Gazebo\footnote{\texttt{http://gazebosim.org/}}, are industry-standard tools for designing physical robots. Moreover, simulating mobile robots is not a new idea, and has been tried since the very beginning of mobile robots~\cite{DBLP:journals/trob/AndoOSY99}. Our goal is to design and implement a practical simulator for networks of mobile robots that is focused on finding counter-examples and monitoring network behavior, rather than proving algorithms or providing a visual representation. Our vision is that this tool is especially useful in the early stages of algorithm design to eliminate obviously wrong paths, and to detect anomalies. It should not be seen as a replacement for formal tools, but as a replacement for researcher intuition when working on a mobile robot network model or algorithm. As such, the simulator should be easy to use, understand and modify by any Distributed Computing researcher in order to include any new algorithm or model. It should also be capable of monitoring network behavior and of outputting quantitative data points to assess real-world performance, according to a given set of metrics, as well as enabling comparison with previously proven algorithms in a given setting. We first focus on the known limitations of this approach and highlight the difficulty of encoding victory and defeat conditions for the computed executions, and how it impacts our ability to reliably detect counter-examples, as well as the expected consequences of working in a discretized Euclidean space, such as the impossibility of distinguishing \CONVERGENCE from \GATHERING. \subsection{Our Contribution} In this paper, we design and implement a practical simulator for mobile robotic swarms evolving in a two-dimensional Euclidean space. To circumvent the obvious problem of an infinite number of initial positions, our simulation framework is based on the Monte-Carlo method for choosing initial configurations~\cite{MonteCarlo49}. We first benchmark our simulation framework using a well-known problem in the domain: rendezvous. Rendezvous mandates that two robots gather in finite time at the same location, not known beforehand.
There exist a number of rendezvous solutions for various settings, and our simulation framework enables their fair quantitative comparison. We choose the fuel metric (\emph{a.k.a.} total traveled distance) under various system conditions: \FSYNC, \SSYNC, and \ASYNC schedulers, with or without rigid motion. We then assess the impact of inaccurate visibility sensors on two milestone algorithms: the Center-of-Gravity convergence algorithm~\cite{DBLP:journals/siamcomp/SuzukiY99} for two robots, and the Geoleader election algorithm~\cite{DBLP:conf/sss/CanepaP07}. It turns out that their behavior is significantly impacted by even small inaccuracies. To address the shortcomings identified by our simulations in the literature, we design a new two-color, fuel-efficient, convergence algorithm for the \ASYNC scheduler, and an improved leader election algorithm that is resilient to inaccurate vision. Both proposals are similarly benchmarked with our simulation framework. The rest of the paper is organized as follows. Section~\ref{chap:MonteCarlo} presents the core technicalities underlying our simulation framework, and its limitations, through the problems of \OBLOT \FSYNC \CONVERGENCE and \GEOLEADEL. Section~\ref{chap:performance} demonstrates how the framework can be used for the purpose of performance evaluation, while Section~\ref{chap:realistic} shows how realistic error models can be integrated into the entire evaluation process. Section~\ref{chap:improved} introduces two new algorithms, one for fuel-efficient convergence, and one for leader election with unreliable sensors. Finally, Section~\ref{sec:conclusion} provides concluding remarks. \section{Monte-Carlo Simulation of Mobile Robots} \label{chap:MonteCarlo} \subsection{Overview of the Framework} Our simulation framework is written from scratch using Python 3, ensuring broad compatibility across execution platforms. Our design goal is to remain as close as possible to the theoretical model of Suzuki and Yamashita~\cite{DBLP:journals/siamcomp/SuzukiY99}, in order to maximize readability and usability by the mobile robot distributed computing community. Each mobile entity is thus encapsulated as an instance of the \texttt{Robot} class. In the case of the basic \OBLOT model, robots have the following properties: \begin{itemize} \item A unique \texttt{name}. \item \texttt{x} and \texttt{y} coordinates in the Euclidean plane. \item A \texttt{snapshot} list of \texttt{Robot}s that contains visible \texttt{Robot}s. \item A \texttt{target}, which is a 2-tuple of the \texttt{x} and \texttt{y} coordinates of the target destination. \end{itemize} \noindent The \texttt{Robot} class also provides three methods: \begin{itemize} \item The \texttt{LOOK} method uses the network as an input. It creates a list of the visible \texttt{Robot}s in the network and assigns it to \texttt{snapshot}. \item The \texttt{COMPUTE} method uses \texttt{snapshot} to compute and assign \texttt{target}, according to the algorithm we want to evaluate. \item The \texttt{MOVE} method updates \texttt{x} and \texttt{y} according to \texttt{target}. \end{itemize} This is summarized in Figure~\ref{fig:robotClass}. \begin{figure}[htb] \centering \includegraphics[width=0.35\linewidth]{Robot.png} \caption{Robot Class} \label{fig:robotClass} \end{figure} Because robots are anonymous, \texttt{name} cannot be used for computing purposes, and is simply a way for the scheduler to reliably monitor the robots in the network.
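For illustration, a minimal Python sketch of this basic \OBLOT \texttt{Robot} class is given below. It is a simplification (unlimited visibility, rigid motion), not the simulator's exact code.
\begin{verbatim}
class Robot:
    # Sketch of the basic OBLOT Robot class described above.
    def __init__(self, name, x, y, algorithm):
        self.name = name            # scheduler-side identifier only
        self.x, self.y = x, y       # position in the Euclidean plane
        self.snapshot = []          # visible Robots, filled by LOOK
        self.target = (x, y)        # destination, filled by COMPUTE
        self.algorithm = algorithm  # the algorithm under evaluation

    def LOOK(self, network):
        # Unlimited visibility: every other robot is in the snapshot.
        self.snapshot = [r for r in network if r is not self]

    def COMPUTE(self):
        # Delegate to the algorithm under evaluation.
        self.target = self.algorithm(self)

    def MOVE(self):
        # Rigid motion: the target is always reached.
        self.x, self.y = self.target
\end{verbatim}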
Similarly, robots cannot use \texttt{x} and \texttt{y} directly, as they are disoriented. The simulation consists of two parts: an initializing sequence and a loop. The \emph{initializing sequence} creates a \texttt{network} list, which contains all robots, according to the simulation parameters. To circumvent the problem of the infinite number of possible initial positions, our simulation framework is based on the Monte-Carlo method for choosing initial configurations~\cite{MonteCarlo49}. So, unless otherwise specified, the initial location of each robot is chosen uniformly at random within the bounds of the type used to represent positions. Using the Monte-Carlo method allows us to both minimize biases in the initial parameters, and arbitrarily increase the precision of the simulation by simply increasing the number of simulations. For each iteration of the \emph{main loop}, a scheduling function is executed once. In the case of \FSYNC, for each loop iteration, all robots in the network simultaneously perform a \texttt{LOOK}, then simultaneously perform a \texttt{COMPUTE}, and then simultaneously perform a \texttt{MOVE}. Using different schedulers, such as \SSYNC or \ASYNC, only requires changing the scheduling function: \SSYNC creates a non-empty list of robots to be activated for a whole cycle, and \ASYNC picks a single robot to be activated for a single phase. The loop terminates whenever a \emph{victory} condition holds, which confirms the algorithm completed its intended task. In the case where an algorithm may fail, a \emph{defeat} condition can also be used. For practical reasons, the loop has a maximum number of iterations. However, reaching this maximum should not be interpreted as either a failure or a success. \subsection{Scheduling} Modeling the \FSYNC scheduler can be trivially done by performing all \LOOK operations, then all \COMPUTE operations, then all \MOVE operations. For the \ASYNC and \SSYNC schedulers, we rely on randomness to test as many executions as possible. To model the \SSYNC scheduler, for each time step, we choose a non-empty subset of the network uniformly at random and have it perform a full cycle. To model the \ASYNC scheduler, we choose one robot uniformly at random and perform its next operation\footnote{Note that this model does not explicitly include simultaneous operations: we consider that the output of two simultaneous events $E_1$ and $E_2$ can be either the output of $E_1$ then $E_2$, or the output of $E_2$ then $E_1$.}. In the case of the \ASYNC scheduler, we must also consider what happens if a robot performs a \LOOK operation while another robot is moving. The \OBLOT model usually considers that an adversary can choose the perceived location of the second robot to be anywhere between its initial position and its destination (on a straight line). Modeling this behavior could easily be done by changing the perceived coordinates in the \LOOK operation uniformly at random between the location and the target of the perceived robot (on a straight line). However, the existing literature about the \ASYNC model shows that the most problematic scenarios appear when the outdated position perceived for a robot is its initial location. With our simulation framework, we also observed that always choosing the initial location when observing a given robot while it is in its \MOVE phase yielded the most adversarial results, so, while our framework is able to simulate both perceptions, we assume this adversarial behavior in the sequel.
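Schematically, the three scheduling functions can be sketched as follows. This illustrates the behavior just described rather than the simulator's exact code; in particular, the \texttt{next\_phase} dispatcher is a hypothetical helper standing for the per-robot phase bookkeeping of the \ASYNC case.
\begin{verbatim}
import random

def fsync_round(network):
    # FSYNC: every robot performs each phase simultaneously.
    for r in network:
        r.LOOK(network)
    for r in network:
        r.COMPUTE()
    for r in network:
        r.MOVE()

def ssync_round(network):
    # SSYNC: a non-empty subset, chosen uniformly at random, performs
    # one atomic cycle (rejection sampling preserves uniformity).
    active = []
    while not active:
        active = [r for r in network if random.getrandbits(1)]
    for r in active:
        r.LOOK(network)
    for r in active:
        r.COMPUTE()
    for r in active:
        r.MOVE()

def async_step(network):
    # ASYNC: a single robot, chosen uniformly at random, performs its
    # next phase (LOOK, COMPUTE or MOVE, tracked per robot).
    random.choice(network).next_phase(network)
\end{verbatim}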
For all schedulers, our simulation framework supports both the rigid and the non-rigid settings. The rigid setting mandates that a robot that selected a distinct target during the \COMPUTE phase always reaches it in the \MOVE phase. The non-rigid setting partially removes this condition: the robot may be stopped by the scheduler before it reaches the target, but not before it traverses a distance of at least $\delta$, for some $\delta>0$. \subsection{Simulation Conditions} Our framework uses Monte-Carlo simulation for both the initial conditions and the scheduling. This means we can perform an arbitrarily large number of simulations, which in turn yields arbitrarily more precise results. Therefore, any criterion on either time, number of iterations, or precision is equivalent. Unless specified otherwise, 4 simulation threads are run in parallel, for one hour, on a modern quad-core CPU, after which results are merged and analyzed. We use the PyPy3 JIT compiler instead of the CPython interpreter, for better performance. \subsection{Comparison with Existing Simulators} We found two noteworthy simulators for mobile robots: Sycamore and JBotSim. \emph{Sycamore} is a Java program focused explicitly on mobile robots. However, it appears to be far more complex to build, use and modify than our proposal. Moreover, the latest version we could find seems to date back to 2016, and requires versions of Java that are no longer supported. \emph{JBotSim}\footnote{\url{https://jbotsim.io}} is a Java library for simulating distributed networks in general. While it appears to be able to simulate \OBLOT robots, it is not designed to do so, and one has to dig into the intricacies of the simulator to emulate basic mobile robot settings. We also found a third Java-based simulator, named oblot-sim\footnote{\url{https://github.com/werner291/oblot-sim}}. We are, however, unsure of its provenance and design goals. All three simulators emphasize real-time visualization of executed algorithms through a complete graphical interface. Our proposal focuses on extremely simple quantitative simulation. In its current version, a complete instance of the simulator requires only five separate files for a total of less than 30KB of code (the sources for JBotSim and Sycamore weigh 3MB and 4.8MB, respectively). We also believe that using Python instead of Java greatly improves portability and ease of understanding, which in turn allows researchers to more easily implement and test unusual settings. In short, our goal is not to visualize executions in real time, but to simulate as many executions as possible and process their outcome. \subsection{Limitations of the Simulation} While the initial approach described in the previous sections may seem sound and simple enough to work with, it results in two distinct problems. As stated previously, our objective with robot simulation is to reliably provide counter-examples whenever they may occur. This requires reliably detecting problematic executions, which is difficult for two reasons. First, success and defeat conditions for most mobile robot algorithms are written in a way that might not be directly usable in a computer simulation. Second, issues predictably arise due to the nature of discretized floating point numbers compared to the ``true'' real numbers used in mathematical models.
\paragraph{Halting the Simulation: \emph{Victory} and \emph{Defeat} Conditions:} One of the goals of our simulation framework is to find counter-examples for a given algorithm and setting. To do so, we need to simulate the evolution of the network until one of two things happens: \begin{itemize} \item A sufficient condition has been met. This implies that the current execution is successful, and a new simulation with a different initial configuration should begin. This is called a \emph{victory condition}. \item A necessary condition has been violated. This implies that the current execution constitutes a counter-example. This is called a \emph{defeat condition}. \end{itemize} We illustrate the difficulty of defining and using such conditions in practice through the example of one of the most fundamental problems in the context of mobile robots: \GATHERING. The common victory condition for \GATHERING is the following, for two robots $r_1$ and $r_2$: \begin{condition}[Theoretical \GATHERING Victory] \label{cond_rdv} \GATHERING is achieved if and only if, for any pair of robots in the network, the distance between the two robots is eventually always zero. This can also be written more formally as $\exists t_0 \in \mathbb{R}_{\ge 0} : \forall t_1 \ge t_0, \forall(r_1,r_2), |r_1r_2|_{t_1} = 0$ \end{condition} In the previous condition, $|r_1r_2|_{t}$ denotes the distance between $r_1$ and $r_2$ at time $t$ in the current execution. However, this particular condition would require the ability for the simulator to infinitely simulate the future of the network, which is obviously impossible. Moreover, the matching defeat condition is unusable for similar reasons: \[\nexists t_0 \in \mathbb{R}_{\ge 0} : \forall t_1 \ge t_0, \forall(r_1,r_2), |r_1r_2|_{t_1} = 0\] \begin{center} or \end{center} \[\forall t_0 \in \mathbb{R}_{\ge 0} : \exists t_1 \ge t_0, \exists(r_1,r_2), |r_1r_2|_{t_1} \neq 0\] We instead define a more practical defeat condition: \begin{condition}[Practical \GATHERING Defeat]\ \label{defeat} $\exists (t_0,t_1) \in (\mathbb{R}_{\ge 0})^2 : t_1>t_0 \land inputs(t_0) = inputs(t_1) \land \exists t \in [t_0,t_1], \exists (r_1,r_2) : |r_1r_2|_{t} \neq 0$ \end{condition} Here, $inputs(t)$ is the set of all input parameters relevant to the algorithm. This is different from the configuration, which would contain \emph{all} parameters of the network at a given point of the execution. This input set is used as a practical way to detect cycles in the execution. For a deterministic algorithm, if all inputs of the algorithm are identical to a previously encountered set of inputs, then a cycle has been found. The input set we use must be chosen such that, for two sets $S_1$ and $S_2$, $S_1(t) = S_2(t) \implies \forall S_1(t+1), \exists S_2(t+1) : S_1(t+1) = S_2(t+1)$. In other words, regardless of the scheduling, two identical sets should not be able to generate different sets. \begin{theorem} For two robots executing a deterministic algorithm, if condition~\ref{defeat} is true then condition~\ref{cond_rdv} is false. \end{theorem} \begin{proof} For a deterministic algorithm, if condition~\ref{defeat} is true, there exists a scheduling starting from the initial configuration that reaches $inputs(t_0)$ and $inputs(t_1)$. Because $inputs(t_0) = inputs(t_1)$, there exists a cycle containing non-gathered configurations. The adversary scheduler can then repeat this cycle infinitely, and condition~\ref{cond_rdv} is false.
\end{proof} \begin{theorem} If the number of input sets is finite, then for two robots executing a deterministic algorithm, if condition~\ref{cond_rdv} is false, then condition~\ref{defeat} is true. \end{theorem} \begin{proof} Any scheduling is infinite, so if the total number of input sets is finite, some input set must repeat, and every scheduling contains at least one cycle. Assume condition~\ref{defeat} is false: then no cycle contains a non-gathered configuration, so between any two occurrences of the same input set the robots are always gathered. Since input sets recur infinitely often, the network is eventually always gathered, and condition~\ref{cond_rdv} is true. By contraposition, if condition~\ref{cond_rdv} is false, then condition~\ref{defeat} is true. \end{proof} One may naively want to use a similar reasoning to define a sufficient victory condition: \begin{condition}[Naive \GATHERING Victory] $\exists (t_0,t_1) \in \mathbb{R}_{\ge 0}^2 : t_1>t_0, inputs(t_0) = inputs(t_1), \forall t \in [t_0,t_1], \forall(r_1,r_2) |r_1r_2|_{t} = 0$ \end{condition} However, this condition ignores the fact that the scheduler may be able to avoid repeating this cycle by carefully choosing the activation order of the robots. A proper condition that is usable regardless of the scheduler is the following: \begin{condition}[Practical \GATHERING Victory] \label{vict_rdv} $\forall(r_1,r_2) \exists t_0 \in \mathbb{R}_{\ge 0} : |r_1r_2|_{t_0} = 0 \land \forall \mathcal{S}, \exists t_1 > t_0 : inputs(t_0) = inputs(t_1),\forall t \in [t_0,t_1], |r_1r_2|_{t} = 0$, with $\mathcal{S}$ a scheduling. In other words, there exists a time after which all robots are stuck in gathered cycles. \end{condition} Analyzing configurations and finding cycles in the execution is not an issue for our simulator. The main difficulty lies in our ability to properly model the configuration using the input set. If the set is too small and omits relevant parameters, then we find cycles that do not actually exist. Similarly, a set that includes too many parameters may hide actual cycles. This depends on both the robot model and the algorithm used to solve the problem. In the case of \RENDEZVOUS or \GATHERING for two robots, the standard algorithm~\cite{DBLP:journals/siamcomp/SuzukiY99} for the \FSYNC scheduler targets the midpoint between the two robots and is described in algorithm~\ref{algo_rdv}. \begin{algorithm}[H] \caption{Basic \FSYNC \RENDEZVOUS} \label{algo_rdv} \begin{algorithmic} \STATE target[0] = (x + snapshot[0].x)/2 \STATE target[1] = (y + snapshot[0].y)/2 \end{algorithmic} \end{algorithm} At first sight, the number of configurations in the Euclidean space is infinite. However, because robots are disoriented, the algorithm uses no information on distances or coordinate systems, so all configurations are equivalent from the algorithm's point of view. The input set is then actually empty. This implies that the algorithm succeeds if and only if the network is gathered after the first activation of both robots. Otherwise, the defeat condition is immediately true for rigid movement. For the sake of providing a second example, let us consider that robots are endowed with weak local multiplicity detection, meaning that they can distinguish a non-gathered configuration from a gathered configuration. This allows us to modify the initial algorithm to algorithm~\ref{algo_rdv2}. \begin{algorithm}[htb] \caption{\FSYNC \RENDEZVOUS with Multiplicity Detection} \label{algo_rdv2} \begin{algorithmic} \IF{$\neg gathered$} \STATE target[0] = (x + snapshot[0].x)/2 \STATE target[1] = (y + snapshot[0].y)/2 \ENDIF \end{algorithmic} \end{algorithm} In this case, the gathered state is a relevant input parameter, and should be included in the input set.
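Inside the simulator, checking condition~\ref{defeat} then reduces to simple bookkeeping over the recorded execution. The following minimal Python sketch illustrates the idea; the names and the data layout are illustrative, not the actual API of our framework. It stores one input set per activation and reports a defeat as soon as a repeated input set brackets a non-gathered instant:

\begin{verbatim}
def defeat_detected(history):
    # history: one (input_set, gathered) pair per configuration, in
    # activation order; input sets must be hashable (e.g. frozenset).
    first_seen = {}  # input set -> index of its first occurrence
    for t1, (inputs, _) in enumerate(history):
        t0 = first_seen.setdefault(inputs, t1)
        if t0 < t1 and any(not g for _, g in history[t0:t1 + 1]):
            return True  # repeated inputs bracket a non-gathered instant
    return False

# With the empty input set of algorithm 1, a network that never
# gathers cycles immediately, while an always-gathered one never does.
print(defeat_detected([(frozenset(), False), (frozenset(), False)]))  # True
print(defeat_detected([(frozenset(), True), (frozenset(), True)]))    # False
\end{verbatim}

The input set recorded in such a history is exactly the one discussed above: empty for algorithm~\ref{algo_rdv}, and augmented with the gathered flag for algorithm~\ref{algo_rdv2}.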
Now, all gathered configurations are considered identical, and all non-gathered configurations are considered identical. This means that the robots must still gather after the first activation. However, while this already constituted a cycle with the empty input set, if the robots are now gathered, the input set is different and no cycle has yet been reached. The first cycle is reached after the second activation. If the robots remain gathered, then this is a gathered cycle and should not trigger the defeat condition. However, if for some reason the robots were to separate after the second activation, this would constitute a non-gathered cycle with the first input set, and the defeat condition would be triggered. Using this reasoning, we check our simulator against our two-color \ASYNC algorithm~\cite{DBLP:conf/icdcn/HeribanDT18} and the two-color \SSYNC algorithm from Viglietta~\cite{DBLP:conf/algosensors/Viglietta13}. For the Heriban two-color algorithm, we accurately find no counter-example: all executions lead to the victory condition in \ASYNC, \SSYNC and \FSYNC. For the Viglietta two-color algorithm, we accurately find no counter-example in \SSYNC and \FSYNC, where all executions lead to the victory condition, while we find counter-examples that trigger the defeat condition in \ASYNC. We perform a similar study for a weaker version of \GATHERING, called \CONVERGENCE. The common condition for \CONVERGENCE is the following: \begin{condition}[Theoretical \CONVERGENCE Victory]\label{conv} \CONVERGENCE is achieved if and only if, for any distance $\epsilon$ greater than zero, the distance between any pair of robots is eventually always smaller than $\epsilon$. This can also be written more formally as $\forall \epsilon \in \mathbb{R}_{> 0}, \exists t_0 \in \mathbb{R}_{\ge 0} : \forall t_1 \ge t_0, \forall(r_1,r_2) |r_1r_2|_{t_1} \le \epsilon$ \end{condition} Note that, as expected, \GATHERING implies \CONVERGENCE, but \CONVERGENCE does not imply \GATHERING. In this case, the distance between the two robots is a relevant parameter to check whether or not the problem is solved. However, since it does not change the behavior of the algorithm, it is still not part of the input set. We define the following defeat condition: \begin{condition}[Practical \CONVERGENCE Defeat]\ \label{def_conv} $\exists(r_1,r_2) : \exists (t_0,t_1) \in (\mathbb{R}_{\ge 0})^2 : t_1>t_0 \land inputs(t_0) = inputs(t_1) \land 0 < |r_1r_2|_{t_0} \le |r_1r_2|_{t_1}$ \end{condition} \begin{theorem} For a deterministic algorithm, if condition~\ref{def_conv} is true, then condition~\ref{conv} is false. \end{theorem} \begin{proof} Similarly to \GATHERING, this condition implies a cycle during which the distance does not decrease, so the adversary scheduler can repeat it infinitely and prevent \CONVERGENCE. \end{proof} This does \emph{not} imply that the distance between the two robots must always be strictly decreasing in the general case, as this would be neither a sufficient nor a necessary condition. Because $\epsilon$ can be infinitely small, we cannot choose the ``right'' $\epsilon$ to properly define a victory condition. \paragraph{The Consequences of the Discretized Euclidean Plane:} \label{sssec:NRN} While it is tempting to define a victory condition similar to that of \GATHERING, the question of $\epsilon$ remains. Floating point numbers are obviously incapable of infinite precision.
So, because any number greater than zero is a valid choice, if $\epsilon$ is smaller than the minimum positive number that can be represented in the chosen floating point precision, it cannot be distinguished from a true zero. This implies that small enough distances between two robots cannot be distinguished from a gathered state. So, it is intrinsically impossible to distinguish \CONVERGENCE from actual \GATHERING. Let us modify algorithm~\ref{algo_rdv} so that both robots move towards the midpoint, but only move a distance of $\dfrac{|r_1r_2|}{2} - \dfrac{\delta}{2}$ instead of $\dfrac{|r_1r_2|}{2}$. In theory, this algorithm does not lead to \RENDEZVOUS, as robots reach a distance of $\delta$ after their first activation. However, if $\delta$ is small enough, the precision of floating point numbers is such that $\dfrac{|r_1r_2|}{2} - \dfrac{\delta}{2}$ and $\dfrac{|r_1r_2|}{2}$ appear identical, and the distance $|r_1r_2|$ appears to be zero. This is essentially a \CONVERGENCE algorithm that is fast enough to be mistaken for a \RENDEZVOUS algorithm. In practice, there is very little that can be done against this sort of behavior and \uline{conditions for \GATHERING should not be considered reliable.} On the other hand, under different circumstances, the discrete nature of the simulation can instead lead theoretically good executions to fail in practice. Let us consider a network of two robots $r_1$ and $r_2$ such that $r_2$ does not move, and $r_1$ moves to the midpoint. This should trivially lead to \CONVERGENCE. Let us now assume that $r_1.y = r_2.y$, and that $r_1.x$ and $r_2.x$ are such that $r_2.x$ is the smallest float greater than $r_1.x$. This possibly leads to $\dfrac{r_1.x+r_2.x}{2} = r_1.x$, so $r_1$ stops moving and the defeat condition for \CONVERGENCE is wrongly activated. We test this by setting $r_1.y = r_2.y = 0$, picking $r_1.x$ at random in $[0,1]$ and picking $r_2.x$ at random in $[2,3]$, so that $r_1.x < r_2.x$. In the first case, $r_1$ moves to the midpoint and $r_2$ does not move. This results in approximately 37.5\% of one million attempts wrongly failing \CONVERGENCE. In the second case, $r_2$ moves to the midpoint and $r_1$ does not move. This results in approximately 25.0\% of one million attempts wrongly failing \CONVERGENCE. This asymmetry may be explained by biases in the binary64 approximation. Regardless, this is a real, hard-to-predict problem with a non-negligible chance of happening, which requires careful analysis of found counter-examples. Problems with limited float precision also appear when simulating \GEOLEADEL. \GEOLEADEL is successful if, given a set of robots, each with its own coordinate system, all robots can deterministically agree on the same robot, called the \robstate{Geoleader}. \GEOLEADEL is known to be impossible in the general case~\cite{DBLP:journals/tcs/DieudonneP12} because of possible symmetries in the network. In practice, this impossibility is circumvented using randomized algorithms to break such symmetries. Let us consider the state-of-the-art algorithm~\ref{algCan3} by Canepa and Gradinariu Potop-Butucaru~\cite{DBLP:conf/sss/CanepaP07} for three robots.
\begin{algorithm}[H] \caption{Original \LEADEL Algorithm by Canepa and Gradinariu Potop-Butucaru~\cite{DBLP:conf/sss/CanepaP07} for Three Robots} \label{algCan3} \begin{algorithmic} \STATE Compute the angles between two robots \IF{$my\_angle$ is the smallest} \STATE Become \robstate{Leader} \STATE Exit \ELSIF{$my\_angle$ is not the smallest, but the other two are identical} \STATE Become \robstate{Leader} \STATE Exit \ELSIF{All angles are identical} \STATE Perform a Bernoulli trial with a probability of winning of $p = \dfrac{1}{3}$ \IF{Trial won} \STATE Move perpendicular to the opposite side of the triangle in the opposite direction \ENDIF \ENDIF \end{algorithmic} \end{algorithm} For this particular algorithm, there are three cases: \begin{enumerate} \item The common case, where one angle is greater than the two others. \item A rare case where two angles are identical, and the third one is smaller. \item The rarest case where all angles are identical. In that case, a Bernoulli trial is required to degrade to the other cases. \end{enumerate} Let us assume a network of three robots, $[r_1,r_2,r_3]$, such that $r_1$ is placed at coordinates $(-0.5,0)$, and $r_2$ at $(0.5,0)$. We show where each case appears in figure~\ref{fig:Lead_theor}. The third case occurs if $r_3$ is at $(0,\pm \dfrac{\sqrt{3}}{2})$, which are noted as points $eq1$ and $eq2$. Positions of $r_3$ that lead to the second case are noted as $iso1$, $iso2$, and $iso3$. However, it is \emph{not} possible, using floating point numbers, to have $x$ such that $x^2 = 3$. It is then impossible, regardless of the quality of the simulation, to place $r_3$ on $eq1$ or $eq2$, despite being possible in theory. Similarly, an infinitely large number of points mathematically located on the circular arcs of the second case cannot be represented properly using floating point numbers. To test this, each robot is given a new property 'Leader', which is a string containing the name of the \robstate{Leader} robot. We perform the simulation and display the results in figure~\ref{fig:Simu1}. \begin{figure}[htb] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{Lead_theor.png} \captionsetup{justification=centering} \caption{\robstate{Leader} Depending on the Location of $r_3$. Red, green and blue represent $r_1$, $r_2$ and $r_3$, respectively.} \label{fig:Lead_theor} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{Map_simu_noerr.png} \captionsetup{justification=centering} \caption{Simulation for 3-robot \LEADEL with Perfect Vision Sensors. No isosceles or equilateral point was found.} \label{fig:Simu1} \end{subfigure} \end{figure} As we predicted, the fact that real numbers cannot be properly represented in our discrete, floating point space prevents the simulator from finding the known counter-example in the case of 3-robot \LEADEL. Furthermore, the three circular arcs on which the second case occurs have a combined surface theoretically equal to zero. Therefore, they are statistically impossible to find using our Monte-Carlo simulation. However, it should be noted that, even in a world of perfect sensors, building an equilateral triangle would require placing the third robot with physically impossible precision. So, while this counter-example exists from a mathematical standpoint, it could never occur in a more realistic setting; when considering practical robots, this can be considered a minor issue.
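Both phenomena are easy to reproduce outside of the simulator. The following Python snippet, which is purely illustrative, shows the midpoint collapse described above and the impossibility of representing the equilateral point; the printed results assume the usual IEEE-754 binary64 floats used by Python:

\begin{verbatim}
import math  # math.nextafter requires Python >= 3.9

# 1. Midpoint collapse: when r2.x is the float immediately above r1.x,
#    the computed midpoint can round back to r1.x, freezing the robot.
r1x = 0.1
r2x = math.nextafter(r1x, math.inf)  # smallest float greater than r1x
print((r1x + r2x) / 2 == r1x)        # True: r1 would stop moving

# 2. The equilateral position (0, sqrt(3)/2) of r3 is not representable:
#    squaring the float closest to sqrt(3) does not give back exactly 3.
y = math.sqrt(3.0) / 2               # closest float to sqrt(3)/2
print((2 * y) ** 2 == 3.0)           # False on binary64
\end{verbatim}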
On the contrary, the use of a discretized Euclidean space could be viewed as a massive advantage over the regular, continuous model, which makes the inherently unrealistic hypothesis that robots are able to store and process snapshots of infinite precision. In this approximated context, snapshots have a known maximum size, which depends on the chosen precision for the coordinates of other robots. So, in this context, storing a snapshot for a full cycle becomes a trivial matter, and using the algorithm \textbf{SyncSim} described by Das \emph{et al.}~\cite{DBLP:conf/icdcs/DasFPSY12,DBLP:journals/tcs/0001FPSY16} to simulate an \FSYNC scheduling under an \ASYNC scheduler becomes possible without additional unrealistic hypotheses. As a result, we believe designing algorithms that properly solve problems in the context of a discretized Euclidean space should be a priority, as it would allow mobile robots to function using only the \FSYNC scheduler, and would remove the unrealistic requirement of infinite precision. One such algorithm is shown in Section~\ref{chap:improved}.

\section{Fuel Efficiency in the Usual Settings} \label{chap:performance} \noindent The overwhelming majority of mobile robot research has focused on proving, under a given set of conditions, whether there exists a counter-example to a given problem. On the other hand, the practical efficiency of a given algorithm (with respect to real-world criteria such as fuel consumption) was rarely studied by the distributed computing community, although demanded by the robotics community~\cite{DBLP:conf/gecco/AroraMDB19,DBLP:journals/ijrr/YooFS16}. Fuel-constrained robots have been considered in the discrete graph context, for both exploration~\cite{DBLP:conf/arcs/DyniaKS06} and distributed package delivery~\cite{DBLP:conf/algosensors/Chalopin0MPW13}. However, to our knowledge, no study considered the two-dimensional Euclidean space model that was promoted by Suzuki and Yamashita~\cite{DBLP:journals/siamcomp/SuzukiY99}. A possible explanation for this situation is that the more complex the algorithm (or the system setting), the more difficult it becomes to rigorously find the worst possible execution.

\subsection[\textsf{Rendezvous} Algorithms]{\RENDEZVOUS Algorithms} \noindent We first quantify the maximum traveled distance and the average traveled distance for several known \RENDEZVOUS algorithms. We consider the \emph{Center Of Gravity algorithm}~\cite{DBLP:journals/siamcomp/SuzukiY99}; the two-color \ASYNC algorithm (\emph{Her2}) by Heriban et al.~\cite{DBLP:conf/icdcn/HeribanDT18}; the two-color algorithm (\emph{Vig2}) by Viglietta~\cite{DBLP:conf/algosensors/Viglietta13}, which is known to solve \RENDEZVOUS in \SSYNC, and \CONVERGENCE in \ASYNC; the three-color algorithm (\emph{Vig3}) by Viglietta~\cite{DBLP:conf/algosensors/Viglietta13}; and the four-color algorithm (\emph{Das4}) by Das \emph{et al.}~\cite{DBLP:conf/icdcs/DasFPSY12,DBLP:journals/tcs/0001FPSY16}. We also investigate the algorithms assuming unreliable compasses by Izumi \emph{et al.}~\cite{DBLP:journals/siamcomp/IzumiSKIDWY12}: the \SSYNC static-error compass algorithm (\emph{Stat \SSYNC}), which, despite its name, works in \ASYNC; the \SSYNC dynamic-error compass algorithm (\emph{Dyn \SSYNC}), which does not work in \ASYNC; and the \ASYNC dynamic-error compass algorithm (\emph{Dyn \ASYNC}). We take advantage of the modularity of our simulator.
The \texttt{Robot} class now carries several new properties: \texttt{color}, the color a robot presently displays; \texttt{compass}, the type of compass and error, \emph{i.e.}, 'none', 'static' or 'dynamic'; \texttt{compass\_error}, the maximum error allowed for the compass; and \texttt{compass\_offset}, the current compass error. The color is changed at the end of the \texttt{COMPUTE} method. Depending on the value of \texttt{compass}, \texttt{compass\_offset} is either chosen during the initialization, or at the beginning of every \texttt{LOOK} method. Each algorithm is first carefully analyzed on paper to find the worst possible execution. Simulations are then run according to the aforementioned protocols. Due to limitations described in Section~\ref{sssec:NRN}, we actually assess those protocols for a degraded notion of \CONVERGENCE rather than \GATHERING. The distance traveled is expressed relatively to the initial distance between the two robots. In practice, the first robot is always located at $\{0,0\}$ and the second robot is placed at random on the circle of radius 1 centered on $\{0,0\}$. Algorithms are tested with no initial pending moves, as arbitrary pending moves would render fuel efficiency mostly impossible to reliably monitor. Results are summed up in Table~\ref{table_res}. The red color denotes cases where the simulation was stuck in non-gathered cycles, and had to be manually unstuck. Details as to why this happened are provided below. For scale, running 4 instances of \emph{Vig3} for one hour under the \ASYNC scheduler resulted in $\simeq$ 14 million total individual executions. \clearpage \begin{table}[htb] \centering \begin{subfigure}[b]{\linewidth} \centering \captionsetup{justification=centering} \includegraphics[width=\linewidth]{rdv_max.png} \caption{Maximum Traveled Distance \\ Found / Predicted} \end{subfigure} \begin{subfigure}[b]{\linewidth} \centering \captionsetup{justification=centering} \includegraphics[width=\linewidth]{rdv_avg.png} \caption{Average Traveled Distance} \end{subfigure} \caption{Maximum and Average Traveled Distances} \label{table_res} \end{table} While most results match the predictions, our pen and paper analysis missed a worst-case execution for \ASYNC \emph{Vig3}, which was found by the simulator (highlighted in bold in Table~\ref{table_res}). This highlights the difficulty of manually finding the maximum distance, even with simple algorithms and settings. It should be noted that rigid motion yields worse results than non-rigid motion. This is normal, because increasing the traveled distance relies on picking a target outside of the $[r_1,r_2]$ segment, and when this is the case, performing the full motion increases the traveled distance more than performing it partially. Thus, unless stated otherwise, all further simulations assume rigid motion. The difference between \SSYNC and \ASYNC with respect to efficiency becomes apparent: under the \ASYNC scheduler, optimal fuel consumption mandates using four colors, while a simple oblivious algorithm is sufficient in \SSYNC. The algorithms using compasses yield the most interesting results. First, numerous simulations of the \SSYNC static algorithm became stuck. These failures are due to the fact that the sine and cosine operations used in the algorithms tend to accumulate errors: a robot may move in a way that should result in an angle of exactly 0, but actually randomly yields an angle of either $0-\epsilon$ or $0+\epsilon$, where $\epsilon$ is a very small positive number.
This in turn results in unsolvable cycles that prevent \CONVERGENCE. As $\epsilon$ was never larger than $10^{-9}$, we chose to prevent this behavior by slightly enlarging the interval of the condition that should be triggered on an angle of zero to an angle in $[-10^{-6},10^{-6}]$. We do the same for all conditions, for consistency. So, any condition that should be true for angles in $[A,B[$ is now true for angles in $[A-10^{-6},B-10^{-6}[$; in $[A,B]$, now in $[A-10^{-6},B+10^{-6}]$; in $]A,B]$, now in $]A+10^{-6},B+10^{-6}]$; and in $]A,B[$, now in $]A+10^{-6},B-10^{-6}[$. Interestingly, this new condition only had a notable impact on the static-error algorithm. Indeed, these errors can be seen as small dynamic random angle errors. Since the static-error algorithm is not designed to be resilient against dynamic errors, it fails whenever they appear. This also demonstrates the resilience of the dynamic-error algorithms.

\subsection[\textsf{Convergence} For \textit{n} Robots]{\CONVERGENCE For \textit{n} Robots} \noindent Cohen and Peleg~\cite{DBLP:journals/siamcomp/CohenP05} proved that the Center of Gravity (CoG) algorithm solves \CONVERGENCE for $n$ robots under the \ASYNC scheduler. We analyze the fuel consumption of the algorithm under both the \SSYNC and \ASYNC schedulers. Results for the minimum, maximum, and average distance traveled are shown in Table~\ref{NCoG}. We use the sum of the distances to the CoG in the initial configuration as a baseline unit of distance, \emph{i.e.}, the distance traveled in \FSYNC. \begin{table}[htb] \centering \captionsetup{justification=centering} \includegraphics[scale=0.39]{T2.png} \caption{Traveled Distances for CoG} \label{NCoG} \end{table} It should be noted that, while previous results are based on at least hundreds of thousands of simulations, due to the increase in simulation complexity, in \ASYNC, for $n=25$, only 31 simulations could be computed in under an hour, so they were discarded. Similarly, for $n=50$, no simulation could be finished in under an hour. Looking at the results, one element immediately jumps out: for $n \geq 3$, the CoG algorithm wastes movements. This is easy to understand: robots move towards the center of gravity, which for 3 or more robots is different from the geometric median (\emph{a.k.a.} the Weber point), which would actually minimize movement. Our tests seem to indicate that aiming for the median instead of the CoG can reduce the traveled distance by up to 30\%. However, it is a known result that no explicit formula for the geometric median exists. As a result, in practice, when trying to minimize the traveled distance, \CONVERGENCE for $n$ robots should rely on an approximation of the geometric median rather than the center of gravity.

\section{Analyzing Algorithms in Realistic Settings} \label{chap:realistic} \noindent In Section~\ref{chap:performance}, the simulation of inaccurate compasses yielded extremely interesting results. Following this track, we now focus on the setting where sensors are inaccurate. In more detail, we analyze the Center of Gravity (CoG) algorithm for \RENDEZVOUS in this setting, as well as the \GEOLEADEL algorithm by Canepa and Gradinariu Potop-Butucaru~\cite{DBLP:conf/sss/CanepaP07}.

\subsection{Visibility Sensor Errors} \noindent To study the impact of inaccurate sensors, we consider three different models for vision error.
For a robot $r_1$ looking at a robot $r_2$ located at $(x,y)$ in the Cartesian coordinate system centered at $r_1$, and at $(r,\theta)$ in the polar coordinate system centered at $r_1$, we define: \begin{itemize} \item The \emph{absolute} error model~\cite{DBLP:journals/automatica/Martinez09}, which uses a constant value $err$. A first number $R_{err}$ is picked uniformly at random in $[0,err]$, and a second one, $\theta_{err}$, in $[0,2\pi]$. The perceived position of $r_2$ is then $(x+R_{err} \cos(\theta_{err}),y+R_{err} \sin(\theta_{err}))$. \item The \emph{relative} error model~\cite{DBLP:journals/siamcomp/CohenP08}, which uses two constants $err_{dist}$ and $err_{angle}$. Two numbers $R_{err}$ and $\theta_{err}$ are picked uniformly at random in $[-err_{dist},err_{dist}]$ and $[-err_{angle},err_{angle}]$. The polar coordinates of $r_2$ are then perceived to be $(r + r \cdot R_{err}, \theta + \theta_{err})$. \item The \emph{absolute-relative} error model, which is similar to the relative model, except that the perceived polar coordinates are $(r + R_{err}, \theta + \theta_{err})$. \end{itemize} These error models are depicted in Figure~\ref{fig:errors}. \begin{figure}[htb] \centering \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=0.6\textwidth]{err_abs.png} \caption{Absolute error} \label{fig:err_abs} \end{subfigure} \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=0.6\textwidth]{err_rel.png} \caption{Relative error} \label{fig:err_rel} \end{subfigure} \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=0.6\textwidth]{err_relabs.png} \caption{Absolute-relative error} \label{fig:err_relabs} \end{subfigure} \captionsetup{justification=centering} \caption{Types of Errors \\ The $r_2$ point is the actual location of robot $r_2$. The red hashed area represents possible detected positions by robot $r_1$.} \label{fig:errors} \end{figure} It should be noted that each model could be used to accurately model errors for different types of sensors. The absolute error model is interesting because it is simple to compute, requires no change of coordinate system, uses a single parameter, and closely matches the behavior of robots where the \LOOK phase is an abstraction of GPS-type coordinate exchanges~\cite{DBLP:journals/jnw/YaredDIW07}. The two relative models are more complex from a computing perspective, but closely match the use of either computer vision or telemetry sensors. Both carry an angular error matched with either a proportional or an absolute distance error. Which type of distance error is more appropriate depends on the exact type of sensor. These new error models require adding three properties to the \texttt{Robot} class: \begin{itemize} \item \texttt{LOOK\_error\_type}, a string that defines the type of error and can be either \texttt{'none'}, \texttt{'relative'}, \texttt{'absolute'}, or \texttt{'abs-rel'}. \item \texttt{LOOK\_distance\_error}, a float that matches either $err$ or $err_{dist}$, depending on the type of error. \item \texttt{LOOK\_angle\_error}, a float that matches $err_{angle}$. \end{itemize} Robots then choose the corresponding error (with parameters chosen uniformly at random) when performing their \LOOK operation.

\subsection[\textsf{Convergence} for \textit{n}=2 Robots]{\CONVERGENCE for \textit{n}=2} \noindent \CONVERGENCE with vision error using the CoG algorithm has already been studied by Cohen and Peleg~\cite{DBLP:journals/siamcomp/CohenP08}. The error model they considered is identical to our relative error model.
Their paper states that \CONVERGENCE with distance error using the CoG algorithm is impossible in the general case. This is, however, only true for $n\geq3$, which the authors omit to mention. In the case $n=2$, it appears to be theoretically impossible to make the algorithm diverge for a distance error smaller than $100\%$, or $err = 1$. We can reasonably ignore the case of an error greater than $100\%$, as it would allow a robot to perceive another one directly behind itself. To our knowledge, no formal result exists regarding the angle error. In theory, the maximum angle error is $\pi$. We simulate \CONVERGENCE for $n=2$ robots using the CoG algorithm for the relative error model. The error for each robot is chosen uniformly at random at the beginning of the execution. \begin{figure}[htb] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[scale=0.28]{Figure_1.png} \caption{Maximum Traveled Distance} \label{fig:Max_dist} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[scale=0.28]{Figure_2.png} \caption{Average Traveled Distance} \label{fig:Avg_Dist} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[scale=0.28]{Figure_3.png} \caption{Proportion of Diverging Executions} \label{fig:Fail_exec} \end{subfigure} \captionsetup{justification=centering} \caption{Movement and Divergence of the CoG Algorithm for Two Robots with Inaccurate Visibility Sensors} \label{fig:CoG_Dist_Ang} \end{figure} We must also consider the now possible case of a diverging algorithm. Since the execution is random, any setting should \emph{eventually} converge. However, we must put a reasonable stopping condition in place in case the execution is clearly diverging. We chose to activate the defeat condition if the distance between the two robots becomes ten times larger than the distance in the initial configuration. Note that the apparent decrease in maximum and average traveled distance for higher angle errors is most likely due to the increase of diverging executions (fewer executions converge, but the traveled distance for those is shorter). It appears clearly that the angular error has a much greater potential for both preventing \CONVERGENCE and making robots waste fuel. Indeed, when the angular error remains below $3\pi/5$, a distance error of up to 100\% can be tolerated with no performance loss. To give some perspective, the realistic setting of a $10\%$ vision error with a $1^\circ$ angle error yields a maximum traveled distance of 1.221 and an average of 1.036, with no divergent executions out of more than 500 million data points.

\subsection{Compass Errors} \noindent In the particular case of the compass-based algorithms of Izumi \emph{et al.}~\cite{DBLP:journals/siamcomp/IzumiSKIDWY12}, \RENDEZVOUS is possible even when compasses are inaccurate. More specifically, the maximum tolerated errors are $\frac{\pi}{2}$, $\frac{\pi}{4}$ and $\frac{\pi}{6}$ for the static \SSYNC, dynamic \SSYNC, and dynamic \ASYNC algorithms, respectively. In our simulation, we chose static errors, for consistency, with values up to $\frac{49\pi}{100}$, $\frac{24\pi}{100}$ and $\frac{16\pi}{100}$, to avoid possible edge cases. Results of maximum and average traveled distances for these algorithms are detailed in Table~\ref{table_comp_err}.
\begin{table}[htb] \centering \begin{subfigure}[b]{\linewidth} \centering \captionsetup{justification=centering} \includegraphics[scale=0.35]{Compass_err_max.png} \caption{Maximum Traveled Distance} \end{subfigure} \begin{subfigure}[b]{\linewidth} \centering \captionsetup{justification=centering} \includegraphics[scale=0.35]{Compass_err_avg.png} \caption{Average Traveled Distance} \end{subfigure} \captionsetup{justification=centering} \caption{Maximum and Average Traveled Distances for \RENDEZVOUS \\ with Inaccurate Compasses} \label{table_comp_err} \end{table} We observe that the unreliable compasses are used in a way that makes robots rotate around each other until they are oriented such that one robot moves while the other stays, regardless of the error. However, there are no provisions in these algorithms to limit distance increases during the rotating phases, which explains the results. Detailed observation shows that the distance between the two robots can gradually diverge towards infinity during rotation, and then converge to zero in a single cycle. This also revealed a problem for our \CONVERGENCE criterion: robots could converge at rather large coordinates, such that their coordinates are consecutive floating point numbers; but, since the absolute precision of floating point numbers decreases as their magnitude increases, the distance between the two robots remained greater than $10^{-10}$. As a result, we modified the criterion to $|r_1r_2|<\max(10^{-10},|Or_1| \cdot 10^{-10})$, with $O$ the point of coordinates $\{0,0\}$.

\subsection[\textsf{Geoleader} \textsf{Election}]{\GEOLEADEL} \noindent Let us now consider the \GEOLEADEL algorithm~\ref{algCan3} by Canepa and Gradinariu Potop-Butucaru~\cite{DBLP:conf/sss/CanepaP07}, for $n=3$. Looking at our previous results from Section~\ref{sssec:NRN}, we notice that the borders between each zone should be an issue for imperfect sensors, as different errors for different robots may lead to robots electing different \robstate{Leader} robots. \begin{figure}[htb] \centering \includegraphics[width=0.6\linewidth]{ex1.png} \caption{Example of \LEADEL Failure Due to Imperfect Vision} \label{fig:ex1} \end{figure} We demonstrate how this phenomenon can occur in Figure~\ref{fig:ex1} for the case of absolute vision error. On top is the actual configuration, where angles $\widehat{r_1r_2r_3}$ and $\widehat{r_2r_1r_3}$ are equal\footnote{Because robots have no chirality, angles cannot reliably be distinguished from their opposite. So, two opposite angles may always be considered equal.}, and angle $\widehat{r_1r_3r_2}$ is smaller than both, so $r_3$ should be elected. The red circle shows the possible perceived positions of $r_3$ by $r_1$ and $r_2$ due to vision error. In the bottom left case, we show a possible perception by $r_1$ in which $r_1$ should be elected \robstate{Leader}, as $\widehat{r_2r_1r_3}$ is now greater than $\widehat{r_1r_2r_3}$. On the lower right, $r_2$ similarly thinks it should be elected. Now, two different robots consider themselves \robstate{Leader}, and the election process fails. We now use the absolute model to simulate \GEOLEADEL with $err = 0.001$, for $n=3$. This simulation yields $\simeq 0.1\%$ of errors in total, where two robots compute different \robstate{Leader} robots, and is shown in figure~\ref{fig:Simu2}.
\begin{figure}[htb] \centering \captionsetup{justification=centering} \includegraphics[width=0.675\linewidth]{Map_simu_err_613.png} \caption{Simulation for 3-robot \LEADEL with Absolute Vision Error \\ Yellow points represent configurations where the error generates two different \robstate{Leader} robots.} \label{fig:Simu2} \end{figure}

\section[Improved \textsf{Convergence} and \textsf{Leader} \textsf{Election}]{Improved \CONVERGENCE and \LEADEL for Faulty Visibility Sensors} \label{chap:improved} \noindent Following our observations of problematic behaviors in Sections~\ref{chap:performance} and~\ref{chap:realistic}, we provide two new algorithms: a fuel-efficient \CONVERGENCE algorithm for two robots, and a \GEOLEADEL algorithm that is resilient to faulty visibility sensors.

\subsection[Fuel Efficient \textsf{Convergence}]{Fuel Efficient \CONVERGENCE} \noindent We provide a new algorithm (Algorithm~\ref{alg:efficient}) for the \ASYNC \CONVERGENCE of two robots. Our algorithm is a simplified version of the two-color algorithm by Viglietta~\cite{DBLP:conf/algosensors/Viglietta13}; it does \emph{not} solve \GATHERING (while Viglietta's algorithm does solve \GATHERING in \SSYNC). Our algorithm, however, ensures that no target can ever be outside of the segment between the two robots, so that no move is wasted, and that there exists a scheduling such that convergence is eventually achieved. It is denoted by \textsc{FEC} (Fuel Efficient \CONVERGENCE) and presented in Figure~\ref{fig:Efficient2}. Our algorithm still uses two colors (\Black and \White): when observing the other robot's color, the observing robot either remains still (the 'Self' target) or goes to the computed midpoint between the two robots (the 'Midpoint' target), possibly switching its color to the opposite one. \begin{figure}[htb] \centering \begin{tikzpicture} \node[blk] (B) {}; \node[wht] (W) [right of=B] {}; \path[->] (B) edge[bend left] node[above,align=center]{\Black$\rightarrow$Self} (W); \path[->] (W) edge[bend left] node[below,align=center]{\White$\rightarrow$Midpoint \\ \Black$\rightarrow$Self} (B); \path[->] (B) edge[out=150,in=210,loop] node[near start,above,align=right]{\White$\rightarrow$Self} (B); \end{tikzpicture} \caption{FEC: Fuel Efficient \CONVERGENCE Algorithm for Two Robots} \label{fig:Efficient2} \end{figure} \begin{algorithm} \caption{FEC: Fuel Efficient \CONVERGENCE Algorithm for Two Robots} \label{alg:efficient} \begin{algorithmic} \STATE \IF{me.color = \White} \STATE me.color $\Leftarrow$ \Black \IF{other.color = \White} \STATE me.destination $\Leftarrow$ other.position/2 \ENDIF \ELSIF{me.color = \Black} \IF{other.color = \Black} \STATE me.color $\Leftarrow$ \White \ENDIF \ENDIF \end{algorithmic} \end{algorithm} As a sanity check, we ran this algorithm through our simulator for one hour ($\simeq 30$ million data points) under a randomized \ASYNC scheduler and could not find a single execution where the traveled distance was greater than the initial distance. \begin{theorem} The Fuel Efficient \CONVERGENCE Algorithm (\ref{alg:efficient}) guarantees that the distance traveled for \CONVERGENCE is never greater than the initial distance between the two robots under the \ASYNC scheduler, assuming no pending moves in the initial configuration. \end{theorem} \begin{proof} First, we see that to achieve \CONVERGENCE with an optimal distance, robots should always be moving towards each other.
So, for robots to converge using more than the initial distance, it is required that, at some point in the execution, one robot moves \emph{not towards} the other robot. We note that a network of two disoriented robots can be simplified as a line. In that sense, the only movement that can increase the maximum \CONVERGENCE distance is a movement away from the other robot; in other words, when robots ``switch sides''. Let us now prove that no robot can start moving while the other robot is in its \MOVE phase. Only the $\{$\White,\White$\}$ snapshot can trigger a \MOVE phase. Since this transition implies a change of color to \Black at the end of the \COMPUTE phase, robots that move can only be \Black. So, if a robot is moving, it is \Black, and the other robot, regardless of color, cannot start moving because its snapshot is different from $\{$\White,\White$\}$. Furthermore, because robots switch to \Black after moving, and can only switch back to \White if the other robot is \Black, no robot can execute multiple \MOVE phases in sequence unless the other robot has executed at least a full cycle in between. So a robot cannot move multiple times while the other has pending moves. We now look at what happens after each robot completes at least one full cycle. We assume $r_1$ performs a \LOOK, and $r_2$ performs $k$ cycles before $r_1$ finishes its \MOVE. The distance after $r_1$ finishes its cycle is presented in Table~\ref{tab:movCycles-bis}. \begin{table}[htb] \centering \begin{adjustbox}{max width=\textwidth} \begin{tabular}{|c|c|c|}\hline & $r_1$ has a pending \STAY & $r_1$ has a pending \HALF \\ \hline $r_2$ executes $k$ \STAY & $X$ & $\left[\dfrac{X}{2} , X - \delta \right]$ \\ \hline $r_2$ executes $1$ M2H\footnotemark[2]& $\left[ \dfrac{X}{2} , X - \delta \right]$ & $\left[ 0 , X - 2\delta \right]$ \\ \hline \end{tabular} \end{adjustbox} \caption{Distance after a full cycle of $r_1$ and $k$ full cycles of $r_2$ with an initial distance of $X$} \label{tab:movCycles-bis} \end{table} \footnotetext[2]{As explained above, moving a second time requires at least a full cycle from the other robot.} In the case of simultaneous \HALF, the distance can be reduced down to zero, but robots cannot switch sides. In both other cases where a \MOVE happens, the distance is reduced at most down to half, and robots cannot switch sides. Overall, in no case can a robot move away from the other one, so the maximum distance traveled is always at most the initial distance between the two robots. \end{proof} However, while the randomized scheduler we use in the simulator ensures that convergence is always achieved, a rapid analysis of the algorithm shows that it ensures fuel efficiency, but does not actually ensure convergence. In fact, a simple \SSYNC scheduling can infinitely prevent robots from moving. This further highlights that simulations and formal proofs are complementary. We conjecture that fuel-efficient \CONVERGENCE is not actually possible with two colors, and that algorithms using three colors may even yield fuel-efficient \RENDEZVOUS (not just \CONVERGENCE). We also compare the resilience of this algorithm against vision errors, with the center of gravity algorithm as a baseline, in Figures~\ref{MAX_err} and~\ref{AVG_err}. Our results show that this algorithm is slightly more resilient to vision errors than CoG.
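For concreteness, the \COMPUTE rule of \textsc{FEC} (Figure~\ref{fig:Efficient2}) fits in a few lines of Python. The sketch below uses illustrative names rather than the exact API of our framework, and absolute coordinates for readability (in the actual model, each robot computes in its own coordinate system, in which it sits at the origin):

\begin{verbatim}
WHITE, BLACK = "white", "black"

class Robot:
    def __init__(self, position, color):
        self.position, self.color = position, color

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def fec_compute(me, other):
    # Returns the MOVE target; staying at me.position means 'Self'.
    target = me.position
    if me.color == WHITE:
        me.color = BLACK                 # White always turns Black
        if other.color == WHITE:
            target = midpoint(me.position, other.position)
    elif other.color == BLACK:           # me is Black, other is Black
        me.color = WHITE                 # reset to White, stay in place
    return target

r1, r2 = Robot((0.0, 0.0), WHITE), Robot((1.0, 0.0), WHITE)
print(fec_compute(r1, r2))  # (0.5, 0.0): r1 heads to the midpoint
\end{verbatim}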
\begin{figure}[htb] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{MAX_err.png} \captionsetup{justification=centering} \caption{Maximum Distance Traveled by \textsc{CoG} (top) and \textsc{FEC} (bottom)} \label{MAX_err} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{AVG_err.png} \captionsetup{justification=centering} \caption{Average Distance Traveled by \textsc{CoG} (top) and \textsc{FEC} (bottom)} \label{AVG_err} \end{subfigure} \end{figure}

\subsection[Error Resilient \textsf{Geoleader} \textsf{Election}]{Error Resilient \GEOLEADEL} \label{ssec:errLead} \noindent The \GEOLEADEL algorithm by Canepa and Gradinariu Potop-Butucaru~\cite{DBLP:conf/sss/CanepaP07} was \emph{not} designed under the assumption that the visibility sensors could be prone to errors. In this subsection, we use this awareness to create a new, error-resilient version of this algorithm, using our simulation framework. \paragraph{\textsf{Geoleader} \textsf{Election} for Four Robots} One intuitive way of building a fully resilient algorithm for \LEADEL could be based on robots computing the bounds of the error zone. While this seems feasible for a 3-robot election, it becomes far less trivial for four robots or more. We present the results of a leader election for four robots in the appendix. \paragraph{Proposed Algorithm} In Section~\ref{chap:realistic}, we used the simulation framework to detect failed elections caused by visibility sensor errors. Since mobile robots are able to run any algorithm during their \COMPUTE phase, they can also run the simulation framework to do precisely that. The improved algorithm relies on the knowledge of the vision error model and its upper bounds to simulate random errors in a robot's position and snapshot, and to determine whether there exists a possibility of the other robots electing different \robstate{Leader} robots. Note that absolutely knowing that the election cannot fail (\emph{i.e.}, that the election cannot yield two different \robstate{Leader} robots for two different robots) would require checking the entire surface of possible errors, which is not feasible in practice. So, we assume that robots perform a finite number of trials and decide accordingly. Each robot internally simulates a position error for each robot in its snapshot within the known margins, performs a simulated election for each robot in its snapshot, and checks for discrepancies in the resulting \robstate{Leader} robots. This is repeated with new random errors for a given number of tries, similarly to a Monte-Carlo approach. Once a robot believes the election process can succeed, it chooses the \robstate{Leader} as in the original algorithm. Otherwise, it picks a random direction and distance, and performs a \MOVE to ``scramble'' the network. This repeats until all robots believe the election can succeed. The procedure is detailed in Algorithm~\ref{algR}.
\begin{algorithm} \caption{Reliable \LEADEL algorithm} \label{algR} \begin{algorithmic} \STATE $L = self.$COMPUTE$('LeaderElection')$ \STATE $my\_network = self.snapshot \cup self$ \STATE $counter = 0$ \WHILE{$counter < nb\_tries$} \FOR{$r_1$ in $my\_network$} \STATE $r_v = r_1$ \STATE Change $r_v.x$ and $r_v.y$ randomly according to error parameters \STATE $r_v.snapshot = my\_network \setminus \{r_1\}$ \FOR{$r_2$ in $r_v.snapshot$} \STATE Change $r_2.x$ and $r_2.y$ randomly according to error parameters \ENDFOR \STATE $L_v = r_v.$COMPUTE$('LeaderElection')$ \IF{$L \neq L_v$} \STATE Move randomly \STATE Exit \ENDIF \ENDFOR \STATE $counter \mathrel{+}= 1$ \ENDWHILE \STATE $L$ is elected \robstate{Leader} \end{algorithmic} \end{algorithm} \noindent We now perform simulations using this algorithm. Each point is sorted according to the following: \begin{itemize} \item If no robot detects a possible error, it is a valid point. \item If at least one robot has detected a possible error, and decided to move as a result, it is a detected possible error point. \item If no robot moves, but two robots have different \robstate{Leader} robots, it is an undetected error point. \end{itemize} We measure the proportion of undetected error and possible error points for $nb\_tries$ between 0 and 30. Results are presented in Figure~\ref{fig:perf}. \begin{figure}[htb] \centering \includegraphics[width=0.7\linewidth]{Perf.png} \caption{Performance of the Error-Resilient Election Algorithm \\ $err = 0.001$} \label{fig:perf} \end{figure} Note that the number of undetected error points, while decreasing, does not reach zero under our testing conditions. Also, using a single internal simulation typically results in an $\sim 80\%$ reduction in the number of undetected error points, while using 10 internal simulations resulted in a reduction of $99.5\%$. The best-suited number of internal simulations depends on the requirements in terms of both speed of the leader election process and reliability of the obtained solution. Importantly, we notice that, were we to choose an error model and error bounds that model the possible errors of representing real numbers with limited-precision floats, then this particular algorithm, given an arbitrarily large number of random tries, could be made to reliably detect anomalies due to the errors induced by evolving in the continuous plane while only perceiving a discretized plane. We actually conjecture that this algorithm can be adapted to allow any algorithm that makes decisions based on robot locations to operate properly in a perceived discretized plane. Furthermore, using this algorithm allows us to reduce the size of a snapshot to a finite, storable amount, so as to realistically use the \textbf{SyncSim} protocol~\cite{DBLP:journals/tcs/0001FPSY16} and fully simulate the \FSYNC scheduler in \luminous \ASYNC.

\section{Conclusion} \label{sec:conclusion} \noindent In this paper, we introduce a modular framework designed to simulate mobile robots for any given setting. We discuss the limitations and constraints of this approach, and use it to compute the maximum distance traveled, or fuel efficiency, of multiple algorithms in several settings, with interesting results. In particular, we note that the algorithm by Izumi \emph{et al.}~\cite{DBLP:journals/siamcomp/IzumiSKIDWY12} can lead to an unbounded increase in distance before eventually gathering.
Similarly, the center of gravity algorithm is inherently sub-optimal for $n>2$ robots, and robots should use an algorithm based on the geometric median instead. We then use this framework to simulate inaccurate sensors for mobile robots and verify the behavior of \CONVERGENCE and motion-based \LEADEL under this new model. We also introduce errors in the perception of colors for \luminous robots performing state-of-the-art two-robot \GATHERING. Finally, we design two new algorithms. The first performs two-robot \CONVERGENCE under the \ASYNC scheduler with optimal fuel efficiency. The second uses the simulator itself to allow robots to solve motion-based \LEADEL with inaccurate sensors. The latter can be adapted to allow decision-making algorithms, such as \LEADEL, to function using discretized snapshots, and thus to use the \textbf{SyncSim} protocol to simulate the \FSYNC scheduler in \luminous \ASYNC. Overall, this framework achieves its planned objective of being both easy to use and able to produce useful results for researchers. As a test, we timed the full implementation and testing, in \FSYNC, \SSYNC and \ASYNC, of the two-color \RENDEZVOUS algorithm by Viglietta: it required less than half an hour, including basic network monitoring and testing. The source code and instructions for our simulator are provided in the appendix and at the following repository: \url{https://github.com/UberPanda/PyBlot-Sim}

\subsection*{Future Work} \noindent As we already discussed, our simulator is modular, to allow for the use of any given algorithm and model. So it seems logical that it should, ideally, implement every existing model and test all major algorithms in the literature, such as mutual visibility for opaque robots. Furthermore, while interesting for researchers, our simulator is not a tool for formal proofs. However, one could also argue that, in its current state, we have not proven that the simulator actually simulates mobile robots, even within our degraded hypotheses. We believe that the simulator itself should be formally proven to match the model of mobile robots it claims to simulate. Note that the usefulness of such a proof would be limited, as the addition of any new module might require proving the entire simulator again. Finally, our \LEADEL algorithm for errors in vision is able to function in a continuous setting using discretized snapshots. The design philosophy behind this algorithm, using randomized tries to simulate sensor errors, is not specific to the \LEADEL problem, and could be used for other algorithms that rely on making decisions based on the locations of robots in the network and that are sensitive to errors in perception. Building new algorithms that can use these finite snapshots would allow us to use the \textbf{SyncSim} protocol~\cite{DBLP:journals/tcs/0001FPSY16} and simulate an \FSYNC scheduler in \luminous \ASYNC, which would be a major advantage for resilience to asynchrony. \printbibliography
1,108,101,566,072
arxiv
\section{Introduction} It is well known that the canonical quantization of general relativity yields the Wheeler-DeWitt equation $\cite{dew,har}$. This equation leads to static state of the universe as well as the problem of time $\cite{kuch,ish,ash,and,mer,marletto,barbour}$. To overcome this problem a solution was suggested by Page and Wootters (PaW) $\cite{pag,woo}$. By considering quantum entanglement, a static system can be described as an evolving universe by the view of internal observers. An hypothetical external observer may describe clock system and the rest of the universe as a whole system in a stationary state. This system will be evolving from the view of internal observers that test correlation between the clock and the rest $\cite{pag,woo,pagg,gam,per,rov}$. Thus, entanglement between subsystems provide the possibility to describe time as an emergent property of the subsystems of the universe. For an experimental illustration refer to $\cite{more}.$ In this paper, we apply PaW mechanism to the near horizon of the black hole to study the time evolution of the black hole's interior. We investigate this mechanism within two different paradigms, $ER=EPR$ $\cite{mal}$ and firewalls $\cite{amps}$. The complementarity view of black holes has been threatened by firewall concept. $ER=EPR$ conjecture in preserving complementarity $\cite{mal}$ does not comply with AMPS which proposes firewall at the horizon of black hole to avoid APMS's paradox $\cite{amps}$ . AMPS has argued that considering the complementarity there is a contradiction in accepting all three following assumptions at once: 1) an evaporating black hole preserves quantum information without destroying it (unitarity), 2) the event horizon of black hole is not unusual for an in-falling observer crossing it, 3) an observer staying outside the black hole works with relativistic effective quantum field theory. AMPS considers the late radiation $B$ of an old black hole (emitted half of its radiation away \cite{old}) as maximally entangled with its early radiation $R_{B}$. Assumptions 1 and 3 require the $B$ to be entangled with a subsystem of $R$, and on the other hand, the assumption 2 leads to entanglement between $B$ and a subsystem of interior of the black hole. This violates the monogamy of quantum entanglement \cite{mon1,mon2}. It asserts that if two quantum systems are maximally entangled, non of them can be entangled with a third system. To overcome this puzzle, AMPS argue that there is only one singularity at firewall and no interior of black hole exists \cite{amps,fire11}. One of the solutions to overcome AMPS's paradox without violation of equivalence principle near the horizon is the $ER=EPR$ conjecture. The $ER$ bridge from one hand and $EPR$ pair on the other hand have a relation by $ER=EPR$ \cite{mal}. This means that $ER$ bridge is created by $EPR$ correlation in the microstates of two entangled black holes. This result is based on the works \cite{B1},\cite{B2}. To explain more, the $EPR$ correlated quantum system is nothing but a weakly coupled Einstein gravity description. In other words, the $ER$ bridge is a highly quantum object. There are some speculations that for every singlet state there exists a quantum bridge of this type. For more discussion of AMPS's paradox and another solution for it, refer to $\cite{preskill}$. In this paper, we study the black hole's near horizon features and PaW mechanism briefly in section 2. 
In the third section, the dynamical evolution of the black hole's interior is studied within $ER=EPR$ paradigm using Wheeler-DeWitt equation near the horizon of the black hole. This is repeated in the section 4, concerning firewall at the horizon of black hole. At the end we have a conclusion section. \section{Black hole's near horizon features and PaW mechanism} In the black hole formation and evaporation process, the unitarity of S-matrix is an important fact. We assume that $B$ is an outgoing Hawking mode in the near horizon zone of a black hole. The unitarity of S-matrix imposes that the mode $B$ at near horizon be pure for a newly constructed black hole, otherwise, it has to be purified as a whole Hawking radiation, emitted partly at near horizon, entangled with the other part at far distance, for an old black hole. In the later case, the exact purification of the $B$ mode is associated to degrees of freedom of the black hole. Before considering the evaporation and radiation of an old black hole\footnote{Old black hole is known as a black hole which has radiated more than half of its initial entropy in the Page time. One can consider an old black hole by the collapse of some pure state and then evaporation of it into Hawking radiation which can be divided into early and late parts as follows $|\Psi>= \sum_{i}|\psi_{i}>_{E}\otimes |i>_{L}$. }, the entropy of which is smaller than the entropy of the radiation that it has already emitted, there is a so called AMPS paradox. Here the entropy means von Neumann entropy. This entropy can be written for two quantum systems as follows \begin{equation} S_{AB}=-tr(\rho_{AB} ln\rho_{AB}) \end{equation} where $\rho_{AB}$ is density matrix for quantum mechanical systems $A$ and $B$. The amount of von Neumann entropy $S_{A}$ ($S_{B}$) is considered by \begin{equation} S_{A}=-tr(\rho_{A} ln\rho_{A}) \end{equation} \begin{equation} S_{B}=-tr(\rho_{B} ln\rho_{B}) \end{equation} which is derived by tracing over states $B$ ($A$) in density matrix $\rho_{AB}$. When $A$ and $B$ are maximally entangled (not pure) then $S_{A}=S_{B}=1$ and $S_{AB}=0$. On the other hand when $S_{A}=S_{B}=0$, then there is no any entanglement between $A$ and $B$ (pure). To consider the AMPS paradox let's do as follows. For an exterior rest observer, the outgoing near horizon Hawking mode $B$ has the entropy $S_{B}\simeq 1$ which indicates that it is not pure. However, this mode can be purified by the early emitted Hawking radiation. If we denote $R_{B}$ for the early radiation, then the von Neumann entropy $S_{BR_{B}}$ is exponentially small, namely $S_{BR_{B}}\ll1$. If we indicate the interior mode of black hole by $A$, then for an in-falling observer, realizing the vacuum, the mode $B$ has to be purified by $A$. In other words, $S_{BA}\ll1$. On the other hand the sub-additivity theorem implies\cite{fire11} \begin{equation} S_{B}\leqslant S_{BA}+S_{BR_{B}} \end{equation} which is violated by the simultaneous imposition of the results $S_{BR_{B}}\ll1$ and $S_{BA}\ll1$. Thus, in order to revalidate this theorem, the statement of {\it entanglement monogamy} is introduced, which allows each state to be entangled with one and only one other state \cite{mon2}. To overcome above paradox, AMPS suggested the existence of firewall at the horizon which is created by breaking of entanglement between $B$ and $A$. The monogamy of entanglement does not allow entanglement among three parties. 
In AMPS's suggestion, one of the entanglements breaks down, which leads to the creation of a firewall at the horizon. This violates the equivalence principle of general relativity near the horizon. Regarding these properties of the black hole, which lead to the ``frozen vacuum'' \cite{bosso}, in the next section we will consider the Wheeler-DeWitt equation in the near horizon of the black hole to ascribe a typical time evolution to the quantum states inside the black hole. Before describing our argument, we review the PaW approach, which is necessary for our discussion, as follows. \begin{itemize} \item The universe is timeless, \begin{equation}\label{wdequation} H|\psi>=0, \end{equation} where $|\psi> \in \mathcal{H}$ is a zero-energy eigenstate of its Hamiltonian $H$. \item The Hamiltonian includes at least one good clock. This means that a clock system $H_{c}$, with a large number of distinguishable states, interacts weakly (or does not interact at all) with the rest of the universe $H_{r}$. This leads to a Hamiltonian system with a tensor product structure of its state space, $\mathcal{H} = \mathcal{H}_{c}\otimes \mathcal{H}_{r}$, such that the non-interacting property holds: \begin{equation}\label{H} H=H_{c}\otimes I_{r}+I_{c}\otimes H_{r}, \end{equation} where $I_{c}$ and $I_{r}$ are the unit operators on the respective subsystems. \item The clock and the rest of the universe are entangled. This feature allows an apparent dynamical evolution of the rest of the universe in terms of the clock, without any evolution at the level of the universe at all. To explain it in more detail, one assumes that the state of the universe is $|\psi>$; then $|\psi (t)>_{c}$ and $|\psi (t)>_{r}$ are the states of the clock system and of the rest of the universe, respectively. By projecting $|\psi>$ on the clock states $|\psi (t)>_{c}$, and considering $|\psi (t)>_{c}= e^{-iH_{c}t/\hslash}|\psi (0)>_{c}$, one gets the vectors \begin{equation}\label{pi} |\psi (t)>_{r}:=_{c}<\psi(t)|\psi>= e^{-iH_{r}t/\hslash}|\psi (0)>_{r}. \end{equation} This indicates the proper evolution of the subsystem $r$ under the action of its local Hamiltonian $H_{r}$. Although the system globally appears to be static, its subsystems exhibit correlations which represent an apparent dynamical evolution. In fact, this is called evolution without evolution. \end{itemize} \section{Time evolution of the interior of a black hole and $ER=EPR$} In this section, we investigate the time evolution of an old black hole's interior according to the Wheeler-DeWitt equation, by considering the $ER=EPR$ conjecture. The left hand side of $ER=EPR$ is the Einstein-Rosen bridge and the right hand side of it is the $EPR$ entangled pair. There are similarities between the entangled $EPR$ pair and the Einstein-Rosen bridge. To show that, suppose a large number of particles, divided into two groups of entangled Bell pairs. Each group is collapsed to make a single black hole. Now there are two entangled black holes, which can be connected by an $ER$ bridge. In other words, two entangled black holes ($EPR$ pairs) can play the role of an $ER$ bridge. It is important to mention that this relation is established on a particular manifold, and maybe it cannot be applied in every spacetime. However, some physicists take the radical position that these two parts are linked even for a single entangled pair \cite{mal}. For our goal in this section and in the whole of this paper, since we have constrained our consideration to black holes, the symbolic equation $ER=EPR$ is applicable.
It is important to note that both sides of the $ER=EPR$ conjecture have the ``no superluminal signals'' and ``no creation by LOCC'' features. There is no violation of locality in either the entanglement or the Einstein-Rosen bridge part of the equation; in other words, there are no superluminal signals in the $ER$ bridge or in the $EPR$ Bell pairs. The other feature, no creation by LOCC, states that by local operations and classical communication (LOCC) one cannot increase or create the entanglement of either part. In other words, Alice, with her entangled pair, cannot create or increase the entanglement by doing local operations and sending information by classical communication. The same situation also holds for the Einstein-Rosen bridge part: for two distant black holes with no Einstein-Rosen bridge, there does not seem to be any way to create a bridge between them without preexisting bridges. One of the applications of the $ER=EPR$ conjecture is to overcome the AMPS paradox. For an old black hole, the interior and exterior states of the near horizon are denoted by $A$ and $B$. The early Hawking radiation's state is $R_{B}$. The states $B$ and $A$ are entangled, and on the other hand the states $B$ are also entangled with the early Hawking radiation $R_{B}$. As we mentioned in the previous section, $B$ cannot be entangled both with $A$ and with $R_{B}$ (monogamy of entanglement). To overcome this paradox, $ER=EPR$ can be applied here by mapping the interior states $A$, through the $ER$ bridge, to the early Hawking radiation $R_{B}$. Therefore the monogamy of entanglement is not violated, because the interior states $A$ and the early Hawking radiation are identified ($A=R_{B}$). With these descriptions, the horizon of the black hole is not a special region for an in-falling observer, and he/she can cross the horizon without experiencing any particular event (without confronting a firewall). However, by applying $ER=EPR$ not only to the quantum vacuum $A$ and $B$ states and the Hawking radiation $R_{B}$, but also to the excited states of the vacuum, we confront a particular vacuum near the horizon of the black hole, which is called the frozen vacuum. To understand the essence of this vacuum and its relation to the $ER=EPR$ conjecture, suppose two observers for an old black hole: an in-falling observer Alice and a static observer Bob. We denote the quantum states of $A$, $B$ and $R_{B}$ by $|n>_{\tilde b}$, $|n>_{b}$ and $|n>_{R_{B}}$, respectively. Now we want to consider the vacuum when it is excited and to observe its influence in the $ER=EPR$ paradigm. In doing so, suppose an old black hole and write a thermally entangled state of $bR_{B}$, without normalization factors, as follows \begin{equation}\label{pointerpremeasurement} |\psi>_{pR_{B}b}=|i>_{p}\otimes\sum_{n=0}^{\infty}|n>_{b}|n>_{R_{B}} \end{equation} Here $|i>_{p}$ is the state of a pointer which has not interacted with any of the subsystems yet. By using the $ER=EPR$ conjecture, which here reads $A=R_{B}$, one can apply the following map \begin{equation} |j>_{R_{B}}\rightarrow |j>_{\tilde b},\qquad j=0,1,2,\ldots \end{equation} If we assume the black hole is billions of light years in size, then the curvature is negligible in the near-horizon region, and in this vicinity one would not expect any violation of the semiclassical approach or of the equivalence principle. To complete the premeasurement, we suppose the pointer $p$ measures the states $R_{B}$.
So equation (\ref{pointerpremeasurement}) becomes \begin{equation}\label{pre2} |\psi>_{pR_{B}b}=\sum_{n=0}^{\infty}|n>_{b}|n>_{R_{B}}|n>_{p} \end{equation} A realistic system cannot be separated from its environment; here the pointer can play the role of the environment for the radiation states $R_{B}$. If one traces over the states $B$, the rest, $pR_{B}$, is a mixed state and is not pure. Therefore, no map from $R_{B}$ to the states $A$ can give the vacuum state of the near-horizon zone, $|0>_{b\tilde b}$. So, if we include the environment $p$ for the Hawking radiation states $R_{B}$, the donkey map becomes \begin{equation} |j>_{R_{B}}|j>_{p}\rightarrow |j>_{\tilde b},\qquad j=0,1,2,\ldots \end{equation} The in-falling vacuum is proportional to \begin{equation}\label{vacuumapp} |0>_{b\tilde b} \propto\sum_{n=0}^{\infty}x^{n}|n>_{b}|n>_{\tilde b}, \end{equation} where we suppressed the normalization factor. Now suppose the pointer $p$ measures $b$ instead of $R_{B}$. This gives the same result as equation (\ref{pre2}). In addition, assume Bob, a static observer one light year from the near-horizon zone, is aware of this measurement and then disappears. Nine years later a clueless Alice, a freely falling observer, is going to experience the near-horizon vacuum. In her journey she will not notice anything special about the near-horizon vacuum because, from her knowledge of black hole physics, she knows that the near-horizon vacuum $B$ can be purified by the states $A$, which are identified with $R_{B}$. Alice was aware of $R_{B}$ before starting her journey into the vacuum, and hence she was aware of $A$, too. Therefore she enjoys her journey and will not see anything except the in-falling vacuum. However, if Alice becomes aware of Bob's knowledge she will confront a contradiction. In other words, if Bob meets Alice and shares the $p$ measurement of the vacuum $B$, then Alice, in purifying $A(=R_{B})$ with $B$, confronts a contradiction, because in this situation $B$ is not purified by $A$ from Alice's view. To avoid this contradiction, Alice must always experience the in-falling vacuum (\ref{vacuumapp}), and she cannot see any vacuum excited by the pointers, the environment, or even by herself. Hence, the near-horizon vacuum is a special vacuum which is called the ``frozen vacuum''. As we mentioned before, we want to construct the Wheeler-DeWitt equation in the near-horizon zone. We recognized that the near-horizon zone is in the frozen vacuum. To construct the Wheeler-DeWitt equation in this vicinity we use the Page and Wootters approach \cite{pag}, which we reviewed in section 2. Now it is time to apply the Wheeler-DeWitt equation to the near-horizon zone. In doing so, we start from the vacuum state of the near horizon, i.e. the frozen vacuum. According to Bousso, the in-falling vacuum state, without normalization factors, is as follows \cite{bosso} \begin{equation} |0>_{b\tilde{b}}=\sum_{n=0}^{\infty}x^{n}|n>_{b}|n>_{\tilde b}, \end{equation} where $|n>_{b}$ and $|n>_{\tilde b}$ are the quantum states outside and inside the black hole horizon, from the in-falling observer's point of view, and the coefficient $x=e^{-\beta \omega/2}$ is of order one for modes with Killing frequency of the order of the Hawking temperature. This particular vacuum state is called the ``frozen vacuum''. The observer in this vacuum state, near the horizon, is unable to observe any particle, whereas an inertial observer at rest far from gravity would be able to observe particles on top of her/his vacuum state. In other words, it leads to a violation of the equivalence principle.
This vacuum state is the only state that exists near the horizon when one is in the $ER=EPR$ paradigm. It turns out that while the $ER=EPR$ conjecture tries to save the monogamy principle in black hole physics, at the same time it leads to a violation of the equivalence principle (through the frozen vacuum rather than the firewall). These explanations have far-reaching implications for our next arguments. Now, we consider the frozen vacuum state as the universe state. Since there is only one vacuum state near the horizon in the $ER=EPR$ case, namely the frozen vacuum state, it is the only state that can be described as the universe state. The local Hamiltonians for the subsystems $c$ and $r$, defined by relation (\ref{H}), are given by $H_{b}$ and $H_{\tilde b}$, respectively, as \begin{equation}\label{Hb} H_{b}=\sum_{n=0}^{\infty}x^{-n}|n>_{bb}<n|, \end{equation} \begin{equation}\label{Hbb} H_{\tilde b}=-\sum_{n=0}^{\infty}x^{-n}|n>_{\tilde b\tilde b}<n|, \end{equation} where $H_{b}$ and $H_{\tilde b}$ denote the local Hamiltonians outside (clock system) and inside (rest of the universe) the black hole horizon, from the in-falling observer's point of view, respectively. Now, by using equations (\ref{wdequation}), (\ref{H}), (\ref{Hb}) and (\ref{Hbb}), one can obtain the following equation \begin{equation} \left(\sum_{n=0}^{\infty}x^{-n}|n>_{bb}<n|\otimes I_{\tilde{b}} -I_{b}\otimes\sum_{n=0}^{\infty}x^{-n}|n>_{\tilde b\tilde b}<n|\right)|0>_{b\tilde{b}}=0 \end{equation} where we take the vacuum state $|0>_{b\tilde{b}}$ as the universe state. This is the Wheeler-DeWitt equation for this model of the system. Note that, as a whole, the constraint $H|\psi>=0$ is compatible with current approaches to quantum gravity. In other words, it can be interpreted as the Wheeler-DeWitt equation in a closed universe \cite{dew}. However, it can also be regarded as the first set of sufficient conditions for a timeless approach to time in quantum gravity. Now, the in-falling observer is equipped with the Hamiltonian $H_{b}$ and also knows the universe state from his knowledge of black hole theory. This knowledge includes the $ER=EPR$ conjecture, which identifies the interior $A$ of the black hole with the outside distant Hawking radiation $R_{B}$ ($A=R_{B}$, the so-called donkey map). This map also includes the interaction of $R_{B}$ with anything outside, even the observer itself. Whatever happens to the Hawking radiation $R_{B}$, the frozen vacuum for the in-falling observer does not change, and so this observer is still unable to observe any particle. For example, the observer can read the Hawking radiation $R_{B}$ and then use the donkey map as follows \begin{equation} |n>_{R_{B}}\rightarrow |n>_{\tilde b}, \end{equation} for the quantum states $n=0, 1, 2, 3, \ldots$. Therefore, the observer, by the knowledge of $|n>_{R_{B}}$, can recognize $|n>_{\tilde b}$, and so construct the frozen vacuum state $|0>_{b\tilde b}$ without falling into the interior of the black hole. For more discussion refer to \cite{bosso}.
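Both the constraint and the PaW evolution of section 2 can be checked on a truncation of the above formulas to $N$ levels. The following sketch (our illustration; the values of $N$, $x$, $t$ and the initial clock state are arbitrary choices) verifies $H|0>_{b\tilde{b}}=0$ and reproduces the conditional evolution (\ref{pi}) of the interior subsystem:

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

N, x, hbar = 12, 0.5, 1.0
n = np.arange(N)
Hb = np.diag(x ** (-n.astype(float)))    # H_b  =  sum_n x^{-n} |n><n|
Hbt = -np.diag(x ** (-n.astype(float)))  # H_bt = -sum_n x^{-n} |n><n|
I = np.eye(N)

vac = np.zeros(N * N)
vac[(N + 1) * n] = x ** n                # |0> = sum_n x^n |n>_b |n>_bt
vac /= np.linalg.norm(vac)

H = np.kron(Hb, I) + np.kron(I, Hbt)
print(np.linalg.norm(H @ vac))           # ~0: Wheeler-DeWitt constraint holds

t = 0.7
psi0_b = np.ones(N) / np.sqrt(N)         # arbitrary initial clock state
psi_t_b = expm(-1j * Hb * t / hbar) @ psi0_b
M = vac.reshape(N, N)                    # coefficient matrix of |0>_{b bt}
cond = psi_t_b.conj() @ M                # _b<psi(t)|0>_{b bt}
ref = expm(-1j * Hbt * t / hbar) @ (psi0_b.conj() @ M)
print(np.linalg.norm(cond - ref))        # ~0: evolution without evolution
\end{verbatim}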
To know the proper time evolution of the interior of the black hole by the PaW approach, without falling into it, the exterior observer can use her/his own subsystem state \begin{equation} |\psi (t)>_{b}=e^{-iH_{b}t/\hslash}|\psi (0)>_{ b}, \end{equation} where $|\psi (0)>_{ b}$ is the initial state of the subsystem $b$, and then use equation (\ref{pi}) to derive the proper time evolution of the interior of the black hole as follows \begin{equation} |\psi (t)>_{\tilde b}:=_{b}<\psi(t)|0>_{b\tilde b}= e^{-iH_{\tilde b}t/\hslash}|\psi (0)>_{\tilde b}, \end{equation} where $|\psi (0)>_{\tilde b}=_{ b}<\psi(0)|0>_{b\tilde b}$ is the initial state of the subsystem $\tilde b$. We conclude that the observer who has access to the Hawking radiation $R_{B}$ has access to the interior of the black hole, too, without falling into it. Therefore, he can also access the Hamiltonian (\ref{Hbb}) near the horizon while staying outside of it. With these interpretations, he has the ability to make a measurement globally through $H$, because of his simultaneous access to the Hamiltonians $H_{b}$ and $H_{\tilde b}$. By considering the whole system, the observer will recognize it as a static system, but by considering its disjoint subsystems as a clock-rest system, he will recognize it as a dynamical system. \section{Time evolution of the interior of a black hole and firewalls} In this section, we investigate the time evolution of the interior of the black hole in the presence of the firewall that AMPS have suggested for solving the AMPS paradox. Therefore, we first discuss the firewall in a little more detail. \subsection{Firewall} AMPS have argued that for a black hole which has radiated more than half of its initial entropy by the Page time, a firewall is created at the horizon, where the in-falling observer burns up \cite{amps}. This is in contradiction with both the equivalence principle and the postulates of black hole complementarity \cite{leny}. AMPS claim that the firewall is formed within the scrambling time, which is much less than the Page time. However, according to the more gradual picture of firewall formation in \cite{leny}, this is not a correct picture. For more explanation, consider an old black hole with early Hawking radiation $R$, the outside of the horizon $B$, and the interior of the black hole $A$. For an old black hole, $B$ is entangled with the Hawking radiation $R_{B}$. On the other hand, for the in-falling observer the interior $A$ and the outside $B$ are entangled. Now, suppose that Alice, as the in-falling observer, measures the state of $R_{B}$ and then falls into the black hole. She has recognized the state of $R_{B}$ and, in her journey into the black hole, can measure the state of $B$. As long as $B$ is entangled with $R_{B}$, by the monogamy of entanglement she must not find entanglement between $B$ and $A$. To overcome this paradox, AMPS argue that the entanglement between $A$ and $B$ breaks down for Alice. This leads to a firewall at the horizon within the scrambling time. According to \cite{leny}, this is not a correct picture, because the high degree of entanglement between $B$ and $R_{B}$ does not occur suddenly. The firewall is not a part of the horizon but only an extension of the singularity. The separation of the singularity from the horizon is a gradual function of time, and at the Page time this separation goes to zero. At this time there is no horizon at all and the singularity of the black hole is located at the position of the horizon.
So, an in-falling observer terminates at the horizon (the singularity of the black hole). The story is different for a young black hole: in the case of young and large black holes, the in-falling observer survives passing through the horizon. \subsection{Time evolution of the inside of the black hole} Now, we investigate the time evolution of the black hole's inside from the viewpoint of an in-falling observer outside the black hole, near the horizon, in the presence of a firewall. According to all of the above considerations about the PaW approach in section 2, we choose the near-horizon vacuum state as the universe state, \begin{equation}\label{a1} |\psi>_{b\tilde b}=\frac{1}{\sqrt{2}}(|1>_{b}|1>_{\tilde b}+|0>_{b}|0>_{\tilde b}), \end{equation} which is identified by imposing the Wheeler-DeWitt equation, $H|\psi>_{b\tilde b}=0$. Now, we need local Hamiltonians for the subsystems $c$ and $r$, which are $H_{b}$ and $H_{\tilde b}$, respectively, as the clock subsystem and the rest of the universe, obeying relation (\ref{H}) and given by \begin{equation}\label{a2} H_{b}=|1>_{bb}<0|-|0>_{bb}<1|, \end{equation} \begin{equation}\label{a3} H_{\tilde b}=|1>_{\tilde b\tilde b}<0|-|0>_{\tilde b\tilde b}<1|, \end{equation} where $H_{b}$ denotes the local Hamiltonian for the outside region near the horizon and $H_{\tilde b}$ that for the interior region of the horizon. The observer is equipped with the local Hamiltonian (\ref{a2}) near the horizon of the black hole. For a young and large black hole, the in-falling observer, without any concern about the existence of a firewall, can measure the proper time evolution of the black hole's interior. In doing so, what she needs is the following measurement \begin{equation}\label{fire} |\psi (t)>_{\tilde b}:=_{b}<\psi(t)|\psi>_{b\tilde b}= e^{-iH_{\tilde b}t/\hslash}|\psi (0)>_{\tilde b}, \end{equation} where $|\psi (0)>_{\tilde b}=_{b}<\psi(0)|\psi>_{b\tilde b}$ and $|\psi (t)>_{b}=e^{-iH_{b}t/\hslash}|\psi (0)>_{b}$, and $|\psi (0)>_{b}$ is the initial state of the subsystem $b$. Therefore, the correlation between $_{b}<\psi(t)|$ and the universe state $|\psi>_{b\tilde b}$, which comes from the entanglement between the subsystems, provides the possibility for the observer to measure the proper time evolution of the black hole's interior. In the case of an old black hole, the observer is again equipped with the local Hamiltonian (\ref{a2}) near the horizon of the black hole. If the in-falling observer does not make any measurement on the early Hawking radiation $R_{B}$, there will be no detectable difference between young and old black holes for her, and she will not encounter any firewall at the horizon. Therefore, she is able to measure the evolution of the subsystem $\tilde b$ by equation (\ref{fire}), using the correlation between subsystems that mimics the presence of dynamical evolution. On the other hand, suppose that the in-falling observer first makes a measurement on the early Hawking radiation and then, near the horizon, makes a measurement on $B$. If she finds $R_{B}$ and $B$ maximally entangled, then she will confront a firewall at the horizon, which comes from the breaking down of the entanglement between $A$ and $B$. Therefore, in the absence of entanglement between $A$ and $B$ she will not be able to measure the dynamical evolution of the subsystem $\tilde b$ by equation (\ref{fire}). In other words, we can conclude that she does not recognize any evolution inside the black hole.
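A minimal numerical check (ours) that the universe state (\ref{a1}) indeed obeys the Wheeler-DeWitt constraint with the local Hamiltonians (\ref{a2}) and (\ref{a3}) reads:

\begin{verbatim}
import numpy as np

Hb = np.array([[0, -1],           # |1><0| - |0><1| in the {|0>, |1>} basis
               [1,  0]])
Hbt = Hb.copy()
I = np.eye(2)

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|0>|0> + |1>|1>)/sqrt(2)
H = np.kron(Hb, I) + np.kron(I, Hbt)
print(H @ psi)                    # the zero vector: H|psi> = 0
\end{verbatim}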
The conclusion that no evolution is recognized inside the black hole is very close to the approaches claiming that the lack of entanglement between the two sides of the horizon leads to the non-existence of the entire space-time behind the firewall \cite{non1, non2, non3}. \section{Conclusion} Although there is a frozen vacuum in the near-horizon region, one can construct the Wheeler-DeWitt equation there and study the dynamical evolution of the system. If one accepts the $ER=EPR$ conjecture, then the time evolution of the interior of the horizon can be accessed by an in-falling observer before crossing the horizon. The outside observer of the black hole can make measurements on the early Hawking radiation and then, by the help of the $ER=EPR$ conjecture and the map $A=R_{B}$ (donkey map), can access the interior states. Next, the observer can construct the Wheeler-DeWitt operator (Hamiltonian) near the horizon, let it operate on the frozen vacuum as the universe state with zero energy, and determine the local Hamiltonians for outside and inside the black hole horizon. Finally, the observer is able to obtain the time evolution of the interior states of the black hole by using the outside subsystem and the frozen vacuum state. If the observer is in the firewall paradigm, she/he will confront two cases. For a young black hole, the observer is equipped with his local Hamiltonian near the horizon of the black hole; for a young and large black hole, the in-falling observer can describe the proper time evolution of the black hole's interior without any concern about the existence of a firewall. In the case of an old black hole, if the observer does not make any observation on the early Hawking radiation, she/he cannot distinguish between old and young black holes, and so repeats the same calculation of the young black hole for the old one. But if the observer makes an observation on the early Hawking radiation, then she/he will confront a firewall and there is no time evolution on the other side of the horizon. \section{Acknowledgments} This work is based upon research funded by the Iran National Science Foundation (INSF) under project No. 99033073.
\section{Introduction}\ \\ Woodward's time-frequency correlation function or radar ambiguity function \cite{ieee.Wo,ieee.Wi,ieee.AT}, as defined by $$ A(u)(x,y)=\int_{-\infty}^{+\infty}u\left(t+\frac{x}{2}\right)\overline{u\left(t-\frac{x}{2}\right)} e^{-2i\pi yt}\mbox{d}t $$ plays a central role in evaluating the ability of a transmitted radar waveform $\mbox{Re}\,\bigl(u(t)e^{i\omega_0 t}\bigr)$ to distinguish targets that are separated by range delay $x$ and Doppler frequency $y$. Ideally, one would like $A(u)$ to be a Dirac mass at $(0,0)$, but this desideratum is not achievable because of the ``\emph{ambiguity uncertainty principle}'', that is the constraint $$ \iint_{{\mathbb{R}}^2}|A(u)(x,y)|^2\mbox{d}x\,\mbox{d}y=A(u)(0,0)^2=\left(\int_{\mathbb{R}}|u(t)|^2\mbox{d}t\right)^2. $$ As $A(u)$ is continuous when $u$ is a signal of finite energy, it follows that $A(u)$ cannot vanish in a neighborhood of $(0,0)$. Since ideal behavior is not achievable, it becomes important to determine how closely one can come to the ideal situation. A major attempt in that direction is due to Price and Hofstetter \cite{ieee.PH}, who considered the quantity $$ V(E)=\iint_E|A(u)(x,y)|^2\mbox{d}x\,\mbox{d}y $$ where $E$ is a measurable subset of ${\mathbb{R}}^2$. It should however be mentioned that, for their results to be significant, one has to go outside the class of signals of finite energy, as $V(E)$ is supposed to have a limit $V_0$ when $E$ shrinks to $\{(0,0)\}$. Indeed, when $u$ has finite energy, from the continuity of $A(u)$ one gets that $$ V(E)\leq \mbox{area}(E)\sup_{(x,y)\in E}|A(u)(x,y)|^2\to \mbox{area}(\{(0,0)\})|A(u)(0,0)|^2=0. $$ Nevertheless, it is possible to define $A(u)$ when $u$ is a Schwartz distribution, and the assumption in \cite[Section II]{ieee.PH} is that $A(u)\in L^2_{\mathrm{loc}}({\mathbb{R}}^2\setminus\{(0,0)\})$. In this paper, we restrict our attention to signals $u$ of finite energy and we try to determine more precisely the neighborhood of $(0,0)$ on which $A(u)$ does not vanish. To do so, we prove a new form of uncertainty principle, showing that there is an exclusion relation between the function $u$ having moments and its ambiguity function being $0$ near $(0,0)$. This result is inspired by a recent result of Luo and Zhang concerning the Fourier transform. They prove that if a real non-negative valued function is supported in $[0,+\infty)$, then there is an uncertainty principle relating its moments and the first zero of its Fourier transform. It turns out that the ambiguity function, when restricted to a given direction, is always the Fourier transform of a non-negative function, but the support condition of Luo and Zhang is not valid. We thus start by removing that condition from their uncertainty principle, which can be done at little expense apart from some numerical constants. This then allows us to obtain a zero-free region for the ambiguity function $A(u)$ when $u$ and its Fourier transform have $L^2$-moments. This region turns out to be a square when one considers dispersions. The article is divided into two sections. The first one is devoted to the extension of Luo and Zhang's uncertainty principle. In the second section, we recall how the fractional Fourier transform allows one to see the restriction of the ambiguity function to a given direction as the Fourier transform of a non-negative function. We then apply the results of the first section to obtain zero-free regions of the ambiguity function.
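As a quick numerical illustration of the definition (ours; it plays no role in the proofs), the following sketch evaluates $A(u)$ for the unit-energy Gaussian $u(t)=2^{1/4}e^{-\pi t^2}$, for which one computes in closed form $A(u)(x,y)=e^{-\pi(x^2+y^2)/2}$; in particular $A(u)(0,0)=\norm{u}_2^2=1$ and this ambiguity function never vanishes:

\begin{verbatim}
import numpy as np

t = np.linspace(-8, 8, 4001)
dt = t[1] - t[0]
u = lambda s: 2 ** 0.25 * np.exp(-np.pi * s ** 2)

def ambiguity(x, y):
    # Riemann-sum approximation of A(u)(x, y)
    return np.sum(u(t + x / 2) * np.conj(u(t - x / 2))
                  * np.exp(-2j * np.pi * y * t)) * dt

for x, y in [(0.0, 0.0), (0.5, 0.0), (0.5, 0.7)]:
    print(x, y, abs(ambiguity(x, y)),
          np.exp(-np.pi * (x ** 2 + y ** 2) / 2))  # closed form, for comparison
\end{verbatim}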
\section{Zero-free regions of the Fourier transform} \begin{notation} For $1\leq p<\infty$ we define $L^p({\mathbb{R}})$ as the space of measurable functions such that $$ \norm{f}_p^p:=\int_{\mathbb{R}}|f(t)|^p\,\mbox{d}t<+\infty. $$ For $u\in L^1({\mathbb{R}})\cap L^2({\mathbb{R}})$ we define the Fourier transform as $$ {\mathcal F} u(\xi)=\widehat{u}(\xi)=\int_{{\mathbb{R}}}u(t)e^{-2i\pi\xi t}\,\mbox{d}t,\quad\xi\in{\mathbb{R}} $$ and then extend it to $L^2({\mathbb{R}})$ in the usual way. \end{notation} \begin{theorem} \label{ieee:th1}\ \\ For every $q>0$ there exists $\kappa_q>0$ such that, if $u\in L^1({\mathbb{R}})$ is a non-negative function such that $u\not=0$, then $$ \inf\{\xi>0\,:\ \widehat{u}(\pm\xi)=0\}^q\inf_{t_0\in{\mathbb{R}}}\bigl\||t-t_0|^qu(t)\bigr\|_1\geq\kappa_q\norm{u}_1. $$ \end{theorem} \begin{remark}\ \\ --- One may take $\kappa_q=\frac{1}{c(2\pi)^q}$ where $c$ is the smallest constant in Equation \eqref{ieee:eq3} below.\\ --- A similar result has been proved in \cite{ieee.LZ} but with the extra assumption that $u$ be supported in $[0,+\infty)$. The constant $\kappa_q$ is then explicitly known and better than the one above. \end{remark} \begin{proof}[Proof of Theorem \ref{ieee:th1}] The proof is very similar to that of Luo and Zhang. The idea is that near $0$, $e^{ix}\sim\cos x\sim 1-x^2/2$, while for $x$ big enough $1-x^2/2\ll\cos x$. Indeed, $\mbox{Re}\,e^{ix}=\cos x\geq 1-x^2/2$ for all $x$, thus $\displaystyle\mbox{Re}\,\widehat{u}(\xi)\geq\int_{\mathbb{R}}\bigl(1-(2\pi t\xi)^2/2\bigr)u(t)\,\mbox{d}t$. It follows that if $\widehat{u}(\xi)=0$, then $\mbox{Re}\,\widehat{u}(\xi)=0$, thus $$ \xi^2\norm{t^2u(t)}_1\geq\frac{1}{2\pi^2}\norm{u(t)}_1. $$ The constant $\frac{1}{2\pi^2}$ can be improved when $u$ is supported in $[0,+\infty)$ using a refined version of the inequality $\cos x\geq 1-x^2/2$ ({\it see} \cite[Proposition 1.2]{ieee.LZ}), and the result can further be extended by replacing $\norm{t^2u}_1$ by other moments $\norm{|t|^qu}_1$. Our substitute to \cite[Proposition 1.2]{ieee.LZ} is the following~: \medskip \noindent{\bf Fact.} {\sl for every $q>0$, there exist $a,c>0$ such that, for all $x\in{\mathbb{R}}$,} \begin{equation} \label{ieee:eq3} a\cos x\geq 1-c|x|^q. \end{equation} \medskip It then follows that, if $\xi$ is such that $\widehat{u}(\xi)=0$, then \begin{eqnarray*} 0&=&a\mbox{Re}\,\widehat{u}(\xi) =\int u(t)a\cos 2\pi t\xi\,\mbox{d}t \geq\int u(t)\Bigl(1-c(2\pi|\xi|)^q|t|^q\Bigr)\,\mbox{d}t\\ &=&\norm{u}_1-c(2\pi|\xi|)^q\|\,|t|^qu\,\|_1. \end{eqnarray*} Now, as $\widehat{u}$ is continuous, we may take $\xi=\pm\tau$ with $\tau=\inf\{\xi>0\,:\ \widehat{u}(\pm\xi)=0\}$, and get that $$ |\tau|^q\,\bigl\|\,|t|^qu\,\bigr\|_1\geq\frac{1}{c(2\pi)^q}\bigl\|u\bigr\|_1 $$ and applying this to the translate $u_{t_0}(t)=u(t-t_0)$ we get the desired result. \end{proof} \begin{proof}[Proof of \eqref{ieee:eq3}] The fact is trivial as long as one does not look for best constants. Indeed, one may take $a=2$ and $c$ such that $1-c\abs{\frac{\pi}{3}}^q=-2$, {\it i.e.} $c=3\left(\frac{3}{\pi}\right)^q$. In this case, for $\displaystyle 0\leq|x|\leq\frac{\pi}{3}$, $1-c\abs{x}^q\leq 1\leq 2\cos x$, and for $|x|>\frac{\pi}{3}$, $1-c\abs{x}^q\leq -2\leq 2\cos x$. A slightly more refined argument, taking $a=1+\eta$ and $x_0=\arccos(1+\eta)^{-1}$, gives $c=\displaystyle\frac{2+\eta}{\bigl(\arccos(1+\eta)^{-1}\bigr)^q}$, which may then be minimized over $\eta>0$, for instance numerically as in the following sketch.
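(The short computation below is our numerical illustration of this minimization; it is not part of the proof.)

\begin{verbatim}
import numpy as np

# minimize c(eta) = (2 + eta) / arccos((1 + eta)^{-1})^q over eta > 0
for q in [3, 4, 5, 6]:
    eta = np.linspace(1e-4, 20, 200001)
    c = (2 + eta) / np.arccos(1 / (1 + eta)) ** q
    i = np.argmin(c)
    print(q, "a =", round(1 + eta[i], 2), "c =", round(c[i], 3))
\end{verbatim}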
Some resulting values are given in the following table~: \begin{center} \begin{tabular}{c|cccc} $q$&3&4&5&6\\ \hline $a$&3.26&3.94&4.61&5.27\\ \hline $c$&2.134&1.656&1.241&0.908\\ \end{tabular} \end{center} The constant $c$ may be slightly improved by a more refined argument. More precisely, at the point $x_0$ above, $1-c|x|^q$ and $a\cos x$ are still far apart. Optimizing over all parameters is however difficult. We would like to mention that, even for $q=2$, the estimate $\cos x\geq 1-x^2/2$ is not best possible for our needs. Indeed, choosing $a=1$, which is the smallest possible value for $a$ when $q\leq 2$, may not lead to the best choice of $c$. For instance, a computer plot will convince the reader that $$ 1.02\cos x\geq 1-0.52x^{3/2}\quad,\quad 1.1\cos x\geq 1-0.42x^2. $$ However, for $q\leq 1$, $a=1$ allows for the best constant $c$ since $1-cx^q$ is then concave on $]0,+\infty)$. The best constant $c$ can then be computed as follows~: the equation $\cos x=1-cx^q$ has to have a solution $x_0$ in $[\pi/2,\pi]$ for which $\sin x_0=cq|x_0|^{q-1}$. It then follows that $x_0$ is the unique solution in $[\pi/2,\pi]$ of $$ \cos x+\frac{1}{q}x\sin x=1\quad\mbox{and then }c=\frac{\sin x_0}{q x_0^{q-1}}. $$ This equation has a solution since, for $x=\pi/2$, $\cos x+\frac{1}{q}x\sin x=\frac{\pi}{2q}>1$, while for $x=\pi$, $\cos x+\frac{1}{q}x\sin x=-1<1$, and $\varphi(x)=\cos x+\frac{1}{q}x\sin x$ is continuous. The solution is unique since $\varphi'(x)=\left(\frac{1}{q}-1\right)\sin x+\frac{1}{q}x\cos x$ is made of two pieces, $\varphi_1(x)=\left(\frac{1}{q}-1\right)\sin x$ and $\varphi_2(x)=\frac{1}{q}x\cos x$. The first one, $\varphi_1$, is non-negative and decreasing on $[\pi/2,\pi]$, while the second one is negative and decreasing on $[\pi/2,\pi]$. As $\varphi'(\pi/2)\geq0$ and $\varphi'(\pi)\leq0$, there exists $x_1$ such that $\varphi'(x_1)=0$, and $\varphi'(x)>0$ for $\pi/2<x<x_1$ while $\varphi'(x)<0$ for $x_1<x<\pi$. It follows that the solution of $\varphi(x)=1$ in $[\pi/2,\pi]$ is unique. Finally, note also that for $q\sim 0$, $c\sim 2$ and for $q\sim 1$, $c\sim 0.73$. \end{proof} To conclude this section, let us give a first application of this result. We ask whether a translate $f_a(t)=f(t-a)$ of $f$ can be orthogonal to $f$. But $$ 0=\int_{\mathbb{R}} f(t)\overline{f(t-a)}\,\mbox{d}t=\int_{\mathbb{R}} |\widehat{f}(\xi)|^2e^{2i\pi a\xi}\,\mbox{d}\xi={\mathcal F}[|\widehat{f}|^2](-a). $$ Similarly, if the modulation $f^{(\omega)}(t)=e^{2i\pi\omega t}f(t)$ of $f$ is orthogonal to $f$, then ${\mathcal F}[|f|^2](\omega)=0$. From Theorem \ref{ieee:th1}, we get the following: \begin{corollary}\ \\ Let $q>0$ and $\kappa_q$ be the constant of Theorem \ref{ieee:th1}. Let $f\in L^2({\mathbb{R}})$. \noindent--- Assume that $(1+|\xi|)^{q/2}\widehat{f}\in L^2$. Then for $f$ and its translate $f_a$ to be orthogonal, it is necessary that $$ |a|^q \inf_{t_0\in{\mathbb{R}}}\|\,|t-t_0|^{q/2}\widehat{f}(t)\|_2^2\geq \kappa_q\norm{f}_2^2. $$ --- Assume that $(1+|t|)^{q/2}f\in L^2$. Then for $f$ and its modulation $f^{(\omega)}$ to be orthogonal, it is necessary that $$ |\omega|^q \inf_{t_0\in{\mathbb{R}}}\|\,|t-t_0|^{q/2}f(t)\|_2^2\geq \kappa_q\norm{f}_2^2. $$ \end{corollary} \section{Zero-free regions of the ambiguity function} \subsection{Fractional Fourier transforms}\ \\[3pt] For $\alpha\in{\mathbb{R}}\setminus\pi{\mathbb{Z}}$, let $c_\alpha=\displaystyle\frac{\exp i\left(\frac{\alpha}{2}-\frac{\pi}{4}\right)}{\sqrt{|\sin\alpha|}}$ be a square root of $1-i\cot\alpha$.
For $f\in L^1({\mathbb{R}})$ and $\alpha\notin\pi{\mathbb{Z}}$, define $$ {\mathcal F}_\alpha f(\xi)=c_\alpha e^{-i\pi\xi^2\cot\alpha}\int_{\mathbb{R}} f(t)e^{-i\pi t^2\cot\alpha}e^{-2i\pi t\xi/\sin\alpha}\mbox{d}t =c_\alpha e^{-i\pi\xi^2\cot\alpha}{\mathcal F}[f(t)e^{-i\pi t^2\cot\alpha}](\xi/\sin\alpha) $$ while for $k\in{\mathbb{Z}}$, ${\mathcal F}_{2k\pi} f=f$ and ${\mathcal F}_{(2k+1)\pi}f(\xi)=f(-\xi)$. This transformation has the following properties~: \begin{enumerate} \item\label{ieee:prop:fa1} $\displaystyle\int_{\mathbb{R}}{\mathcal F}_\alpha f(\xi)\overline{{\mathcal F}_\alpha g(\xi)}\mbox{d}\xi=\int_{\mathbb{R}} f(t)\overline{g(t)}\mbox{d}t$, which allows one to extend ${\mathcal F}_\alpha$ from $L^1({\mathbb{R}})\cap L^2({\mathbb{R}})$ to $L^2({\mathbb{R}})$ as a unitary operator on $L^2({\mathbb{R}})$; \item\label{ieee:prop:fa2} ${\mathcal F}_\alpha{\mathcal F}_\beta={\mathcal F}_{\alpha+\beta}$; \item\label{ieee:prop:fa3} if $f_a(t)=f(t-a)$ then $$ {\mathcal F}_\alpha f_a(\xi)={\mathcal F}_\alpha f(\xi+a\cos\alpha)e^{-i\pi a^2\cos\alpha\sin\alpha-2i\pi a\xi\sin\alpha}; $$ \item\label{ieee:prop:fa4} if $f_\omega(t)=e^{-2i\pi\omega t}f(t)$ then $$ {\mathcal F}_\alpha f_\omega(\xi)={\mathcal F}_\alpha f(\xi+\omega\sin\alpha)e^{i\pi\omega^2\cos\alpha\sin\alpha+2i\pi\omega\xi\sin\alpha}; $$ \item\label{ieee:prop:fa5} if $f\in L^2$ is such that $tf\in L^2$ then $$ {\mathcal F}_\alpha[tf](\xi)=\xi{\mathcal F}_\alpha f(\xi)\cos\alpha+i[{\mathcal F}_\alpha f]'(\xi)\sin\alpha. $$ \end{enumerate} Let us recall that the ambiguity function of $u\in L^2({\mathbb{R}})$ is defined by $$ A(u)(x,y)=\int_{\mathbb{R}} u\left(t+\frac{x}{2}\right)\overline{u\left(t-\frac{x}{2}\right)} e^{-2i\pi ty}\mbox{d}t. $$ The following properties are well known \cite{ieee.Al,ieee.AT,ieee.Wi}~: \begin{enumerate} \item\label{ieee:prop:amb1} $A(u)\in L^2({\mathbb{R}}^2)$ with $\norm{A(u)}_{L^2({\mathbb{R}}^2)}=\norm{u}_2^2$, and $A(u)$ is continuous; \item\label{ieee:prop:amb2} $A(u)(0,0)=\norm{u}_2^2$, where it is maximal; \item\label{ieee:prop:amb3} $A(u)(-x,-y)=\overline{A(u)(x,y)}$; \item\label{ieee:prop:amb4} $A({\mathcal F}_\alpha u)(x,y)=A(u)(x\cos\alpha-y\sin\alpha,x\sin\alpha+y\cos\alpha)$. \end{enumerate} The last property was proved in \cite{ieee.Wi} when the fractional Fourier transform is defined in terms of Hermite polynomials, and in \cite{ieee.Al} with the above definition of the fractional Fourier transform. \subsection{Zero-free regions}\ \\ Noticing that $A(u)$ is a Fourier transform, and in particular that $A(u)(0,y)={\mathcal F}[|u|^2](y)$, we get from Property \ref{ieee:prop:amb4} of the ambiguity function that $$ A(u)(-y\sin\alpha,y\cos\alpha)={\mathcal F}[|{\mathcal F}_\alpha u|^2](y). $$ \begin{definition}\ \\ Let us define, for $\theta\in]0,\pi[$, $$ \tau_\theta=\inf\{t>0~:A(u)(t\cos\theta,t\sin\theta)=0\mbox{ or }A(u)(-t\cos\theta,-t\sin\theta)=0\}. $$ \end{definition}\ \\ Let $q>0$ and $\kappa_q$ be given by Theorem \ref{ieee:th1}. Applying this theorem, we obtain \begin{equation} \label{ieee.eq.tau1} \tau_\theta^q\inf\limits_{t_0\in{\mathbb{R}}}\norm{|t-t_0|^q|{\mathcal F}_{\theta-\pi/2} u|^2}_1\geq\kappa_q \norm{|{\mathcal F}_{\theta-\pi/2} u|^2}_1=\kappa_q\norm{u}_2^2.
\end{equation} Let us now show that, in the case $q=2$, a more precise result can be obtained: if $u\in L^2$ is such that $tu\in L^2$ and $t\widehat{u}\in L^2$, then from Property \ref{ieee:prop:fa5} of the fractional Fourier transform \begin{eqnarray*} \norm{t{\mathcal F}_\alpha u}_2&=&\norm{tu(t)\cos\alpha-iu'\sin\alpha}_2 \leq\norm{tu(t)}_2|\cos\alpha|+\norm{u'}_2|\sin\alpha|\\ &=&\norm{tu(t)}_2|\cos\alpha|+\norm{\xi\widehat{u}(\xi)}_2|\sin\alpha|. \end{eqnarray*} In particular, the ambiguity function $A(u)$ of $u$ has no zero in the region $$ \left\{(t\sin\alpha,-t\cos\alpha)\,: 0<\alpha<\pi,\ |t|\leq\frac{\sqrt{2}\norm{u}_2}{2\pi\bigl(\norm{tu(t)}_2|\cos\alpha|+\norm{\xi\widehat{u}(\xi)}_2|\sin\alpha|\bigr)} \right\}. $$ This region is a rhombus with endpoints $$ \left(\pm\frac{\sqrt{2}\norm{u}_2}{2\pi\bigl\|\xi\widehat{u}(\xi)\bigr\|_2},0\right)\quad\mbox{and}\quad \left(0,\pm\frac{\sqrt{2}\norm{u}_2}{2\pi\bigl\|tu(t)\bigr\|_2}\right). $$ Further, changing $u(t)$ into $u(t-a)e^{i\omega t}$ leaves the modulus of $A(u)$ unchanged, and so the zero-free regions of $A(u)$ are unchanged as well. We have thus proved \begin{theorem}\ \\ Let $u\in L^2({\mathbb{R}})$ be such that $tu(t)\in L^2({\mathbb{R}})$ and $\xi\widehat{u}(\xi)\in L^2({\mathbb{R}})$. Then the ambiguity function $A(u)$ of $u$ has no zero in the convex hull of the four points $$ \left(\pm\frac{\sqrt{2}\norm{u}_2}{2\pi\inf_{\omega\in{\mathbb{R}}}\bigl\||\xi-\omega|\widehat{u}(\xi)\bigr\|_2},0\right)\quad\mbox{and}\quad \left(0,\pm\frac{\sqrt{2}\norm{u}_2}{2\pi\inf_{a\in{\mathbb{R}}}\bigl\||t-a|u(t)\bigr\|_2}\right). $$ \end{theorem} The area of that rhombus is $$ \frac{\frac{1}{\pi^2}\norm{u}_2^2}{\inf_{a\in{\mathbb{R}}}\bigl\||t-a|u(t)\bigr\|_2\inf_{\omega\in{\mathbb{R}}}\bigl\||\xi-\omega|\widehat{u}(\xi)\bigr\|_2} \leq \frac{4}{\pi} $$ according to Heisenberg's uncertainty principle. Note also that the numerical constant $\frac{\sqrt{2}}{2\pi}$ can be improved to $\frac{1}{2\pi\sqrt{0.42}}\simeq 0.245$ using the inequality $1.1\cos x\geq 1-0.42x^2$ instead of $\cos x\geq 1-x^2/2$.
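The improved inequality and the resulting constant are easily checked numerically; the following sketch (ours) verifies the bound on a grid (for $|x|>10$ the inequality is obvious since then $0.42x^2>2.1$):

\begin{verbatim}
import numpy as np

x = np.linspace(0, 10, 1000001)
print(np.min(1.1 * np.cos(x) - (1 - 0.42 * x ** 2)))  # > 0, tight near x ~ 1.24
print(1 / (2 * np.pi * np.sqrt(0.42)))                # ~0.2456 > sqrt(2)/(2 pi)
\end{verbatim}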
\section{\label{sec:introduction} Introduction} For a long time LaMnO$_3$ was regarded as a prototypical example of the parent (or undoped) manganites, where the strong Jahn-Teller distortion was believed to coexist with the (layered) A-type antiferromagnetic (AFM) state.\cite{WollanKoehler,Goodenough,Kanamori,Matsumoto,KugelKhomskii} The origin of this AFM state was one of the most disputed points about one decade ago, right after the new wave of interest in the phenomenon of colossal magnetoresistance in the manganite compounds had emerged.\cite{Hamada,PickettSingh,PRL96,Sawada,Shiina,Maezono} Despite many differences in details, all theories of that period of time seemed to agree that the Jahn-Teller effect plays an important role in the alternating population of the $3x^2$$-$$r^2$ and $3y^2$$-$$r^2$ orbitals (Fig. \ref{fig.intro}), which is primarily responsible for the directional anisotropy of the interatomic magnetic interactions underlying the A-type AFM phase. \begin{figure}[h!] \begin{center} \resizebox{12cm}{!}{\includegraphics{figure1.eps}} \end{center} \caption{ (Color online) (a): experimental phase diagram of $R$MnO$_3$ versus temperature and ionic radius of rare-earth elements (from ref. \protect\citen{Tachibana}). Magnetic phases are denoted as paramagnetic (P), A-type AFM (A), spiral AFM (S), incommensurate (IC), and E-type AFM (E). (b) and (c): spin arrangement in the orthorhombic ${\bf ab}$-plane, which takes place in the AFM phases of the A- and E-type, respectively. (d): alternating $3x^2$$-$$r^2$ and $3y^2$$-$$r^2$ orbitals and main magnetic interactions in the ${\bf ab}$-plane, which are responsible for the relative stability of the A- and E-states.} \label{fig.intro} \end{figure} Indeed, simple considerations for the superexchange (SE) interactions suggest that the alternating (antiferro) ordering of the $3x^2$$-$$r^2$ and $3y^2$$-$$r^2$ orbitals in the orthorhombic ${\bf ab}$-plane leads to the ferromagnetic (FM or F) coupling, while the stacking (ferro) orbital ordering in the ${\bf c}$-direction is responsible for the weak AFM coupling.\cite{Goodenough2,Kanamori2,KugelKhomskii} The main surprise came later, when it was found that after replacing La by smaller rare-earth elements ($R$), which \textit{systematically increases} all kinds of the lattice distortions (including the Jahn-Teller one), the orthorhombic $R$MnO$_3$ compounds undergo a change of the magnetic ground state (Fig. \ref{fig.intro}).\cite{Tachibana} Briefly, the least distorted LaMnO$_3$ forms the A-type AFM structure. The opposite-end compounds (starting from HoMnO$_3$) form the so-called E-type (zigzag) AFM structure. In the intermediate region, the magnetic structure is incommensurate and keeps some features of both the A- and E-type AFM phases. The appearance of the E-type AFM structure, which \textit{breaks the inversion symmetry} in an otherwise centrosymmetric crystal environment, is particularly interesting. It can hardly be understood in terms of the nearest-neighbor (NN) SE interactions alone, because such a mechanism would inevitably imply a change of the orbital state and operate against the large energy gain associated with the Jahn-Teller distortion. Therefore, it seems that a more realistic scenario should involve some longer range interactions. At the purely phenomenological level, the competition between the A- and E-type AFM phases in the ${\bf ab}$-plane can be rationalized in terms of the following interaction parameters and trends (Fig.
\ref{fig.intro}): \begin{itemize} \item[$\bullet$] the NN interaction $J^\parallel_1$, which, depending on its sign, favors either the FM or the bipartite AFM arrangement; \item[$\bullet$] the 3rd-neighbor AFM interaction $J_3$, which couples all 3rd-neighbor spins antiferromagnetically, as required for the E-type AFM structure. Therefore, $J_3$ should be an indispensable ingredient of the model analysis. As we will see below, the main details of the magnetic phase diagram of $R$MnO$_3$ depend on the competition between $J^\parallel_1$ and $J_3$. If considered alone, the 3rd-neighbor AFM interactions would favor the formation of an infinitely degenerate group of states, including two zigzag AFM structures propagating along the orthorhombic ${\bf a}$- and ${\bf b}$-axes. The experimentally observed E-type AFM structure is the one among them which propagates along the ${\bf a}$-axis, with the spins antiferromagnetically coupled along the ${\bf b}$-axis; \item[$\bullet$] the 2nd-neighbor AFM interaction $J^{\bf b}_2$, which lifts the degeneracy and together with $J_3$ determines the direction of propagation and the periodicity of the E-type AFM phase. The combination of $J^{\bf b}_2$ and $J_3$ appears to be sufficient to bind the directions of spins in each of the orbital sublattices, which are denoted as $3x^2$$-$$r^2$ or $3y^2$$-$$r^2$ in Fig. \ref{fig.intro}. \end{itemize} Loosely speaking, if the ferromagnetic $J^\parallel_1$ dominates over $J^{\bf b}_2$ and $J_3$, the magnetic ground state will be of the A-type. On the other hand, if the longer range interactions dominate, the magnetic ground state will tend to be of the E-type (a minimal numerical illustration of this competition is sketched at the end of this section). The last ingredient, which stabilizes the E-type AFM phase, is the small difference between the parameters $J^\parallel_1$ acting in the FM and AFM bonds, which can be caused by either the exchange striction or the orbital ordering effects. This difference is necessary in order to stabilize the directions of spins in the two orbital sublattices relative to each other. The purpose of this work is to show that all these features are in fact closely related to the crystal distortion and the type of the orbital ordering realized in the orthorhombic $R$MnO$_3$ compounds. We use the same strategy as in the previous work devoted to BiMnO$_3$.\cite{BiMnO3} First, we derive an effective low-energy model for the Mn($3d$) bands and extract the parameters of this model from first-principles electronic structure calculations based on the linear-muffin-tin-orbital (LMTO) method.\cite{LMTO} Then, we solve this model in the Hartree-Fock approximation and analyze the behavior of interatomic magnetic interactions and the total energies. The existence of the long-range magnetic interactions in LaMnO$_3$ was previously considered in ref. \citen{springer}, in the context of the local stability of the A-type AFM state with respect to other magnetic states. In the present work, we will further consolidate this idea and argue that it constitutes the basis for understanding the magnetic properties of all undoped manganites. The paper is organized as follows. In the next two sections we briefly discuss the main details of the experimental crystal structure (Sec. \ref{sec:structure}) and the electronic structure in the local-density approximation (LDA, Sec. \ref{sec:estruc}). The construction of the model Hamiltonian for the Mn($3d$) bands is considered in Sec. \ref{sec:model} and the strategy employed for the analysis of this Hamiltonian is briefly reviewed in Sec. \ref{sec:dataanalysis}. The behavior of interatomic magnetic interactions is discussed in Sec. \ref{sec:exchange}. Sec. \ref{sec:TEnegry} is devoted to a comparison with the experimental data. In particular, we will consider the behavior of the correlation energies and the magnetic polarization of the oxygen sites, which is typically missing in the low-energy model. Finally, a brief summary is given in Sec. \ref{sec:summary}.
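As the promised illustration of the phenomenological competition (our toy sketch, with made-up couplings, not the parameters derived below), one can compare the classical energies $E = -\frac{1}{2} \sum_{{\bf RR}'} J_{{\bf RR}'} {\bf e}_{\bf R} \cdot {\bf e}_{{\bf R}'}$ (so that $J>0$ means FM coupling) of the in-plane FM (A-type) and zigzag (E-type) patterns on a periodic square lattice of Mn sites:

\begin{verbatim}
import numpy as np

L = 8          # linear size of the periodic lattice (multiple of 4)

def energy(spins, J1, J2b, J3):
    # classical Heisenberg energy per site; each bond is counted once
    E = 0.0
    for dx, dy, J in [(1, 0, J1), (0, 1, J1),   # NN bonds (J1)
                      (1, 1, J2b),              # 2nd neighbor along b (J2b)
                      (2, 0, J3), (0, 2, J3)]:  # 3rd neighbors (J3)
        E -= J * np.sum(spins * np.roll(np.roll(spins, -dx, 0), -dy, 1))
    return E / L ** 2

x, y = np.meshgrid(range(L), range(L), indexing="ij")
A_type = np.ones((L, L))                        # in-plane FM
E_type = np.where((x + y) % 4 < 2, 1.0, -1.0)   # up-up-down-down zigzags

J1, J2b = 1.0, -0.2                             # FM J1, weak AFM J2b (toy values)
for J3 in [-0.1, -0.3, -0.5]:                   # increasingly AFM J3
    print(J3, energy(A_type, J1, J2b, J3), energy(E_type, J1, J2b, J3))
# the E-type becomes favorable once J1 + J2b + 2*J3 < 0, i.e. when the
# longer range AFM couplings outweigh the FM nearest-neighbor coupling
\end{verbatim}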
\section{\label{sec:structure} Crystal Structure} All considered compounds crystallize in the highly distorted orthorhombic structure. The space group is $D^{16}_{2h}$ in Sch\"{o}nflies notations (No. 62 in the International Tables). The primitive cell has four formula units. The crystal structure itself and its implications for the magnetic properties of LaMnO$_3$ have been discussed in much detail in previous publications.\cite{Hamada,PickettSingh,PRL96} Some crystal structure parameters are summarized in Table \ref{tab:structure}. \begin{table}[tb] \caption{Crystal structure parameters of $R$MnO$_3$ compounds. $a$, $b$, and $c$ are the orthorhombic lattice constants, Mn-O are the interatomic distances, and $\angle$Mn-O-Mn are the bond angles (the first line is the angle in the ${\bf c}$-direction and the second line is the angle in the ${\bf ab}$-plane). All data are taken at room temperature except for LaMnO$_3$, corresponding to 4.2 K.} \label{tab:structure} \begin{tabular}{lccccc} \hline & LaMnO$_3$\protect\cite{Elemans} & PrMnO$_3$\protect\cite{Alonso} & NdMnO$_3$\protect\cite{Mori} & TbMnO$_3$\protect\cite{Blasco} & HoMnO$_3$\protect\cite{Munoz} \\ \hline $a$ (\AA) & 5.532 & 5.449 & 5.416 & 5.302 & 5.257 \\ $b$ & 5.742 & 5.813 & 5.849 & 5.856 & 5.835 \\ $c$ & 7.668 & 7.586 & 7.543 & 7.401 & 7.361 \\ Mn-O (\AA) & 1.906 & 1.909 & 1.905 & 1.889 & 1.905 \\ & 1.959 & 1.953 & 1.951 & 1.946 & 1.943 \\ & 2.188 & 2.210 & 2.227 & 2.243 & 2.222 \\ $\angle$Mn-O-Mn ($^\circ$) & 157 & 152 & 150 & 144 & 142 \\ & 154 & 151 & 149 & 146 & 144 \\ \hline \end{tabular} \end{table} It also includes the references to the experimental lattice parameters which have been used in the calculations. Generally, the crystal distortion in $R$MnO$_3$ tends to increase in the direction La$\rightarrow$Pr$\rightarrow$Nd$\rightarrow$Tb$\rightarrow$Ho. For example, such a tendency is clearly seen for the $b/a$ and $b/c$ ratios as well as for the Mn-O-Mn angles. On the other hand, the Jahn-Teller distortion is not monotonous and takes its maximum in TbMnO$_3$. For example, the ratio of the maximal and minimal Mn-O bondlengths is $1.187$ in TbMnO$_3$ (in comparison with $1.148$ in the least distorted LaMnO$_3$), and only $1.166$ in HoMnO$_3$, which follows it. This structural anomaly is directly related to the anomaly of the crystal-field (CF) splitting, which will be discussed in Sec. \ref{sec:model}. \section{\label{sec:estruc} Electronic Structure in the Local-Density Approximation} An example of the LDA band structure as obtained in the LMTO calculations for LaMnO$_3$ and HoMnO$_3$ is shown in Fig. \ref{fig.DOS}. \begin{figure}[h!] \begin{center} \resizebox{7cm}{!}{\includegraphics{figure2a.eps}} \resizebox{7cm}{!}{\includegraphics{figure2b.eps}} \end{center} \caption{(Color online) Total and partial densities of states as obtained in the local-density approximation for LaMnO$_3$ (left) and HoMnO$_3$ (right). The shaded area shows the contributions of the manganese $3d$ states. Other symbols show the positions of the main bands.
The Fermi level is at zero energy.} \label{fig.DOS} \end{figure} The LMTO basis, which was used for the valence part of the spectrum, typically included the Mn($3d4sp$), $R$($5d6sp$), and O($2sp$) states. The $R$($4f$) states were treated as (non-spin-polarized) core states. The atomic sphere radii were determined in two steps. First, we perform the LMTO calculations for the nominal composition, which includes 4 Mn, 4 $R$, and 12 O atoms, and find the atomic radii from the charge neutrality condition inside the spheres. Then, in order to better fill the unit cell volume and reduce the overlap between the atomic spheres, we add 12 to 16 empty spheres with the $1s2p$-basis. Typically, such a procedure guarantees good agreement with more accurate full-potential calculations. The electronic structure near the Fermi level is mainly formed by the Mn($3d$) states. There is also a considerable weight of the Mn($3d$) states in the oxygen band. Due to the strong crystal-field (CF) effects in the MnO$_6$ octahedra, the electronic structure near the Fermi level splits into the ``pseudocubic'' Mn($e_g$) and Mn($t_{2g}$) bands. The Jahn-Teller distortion further splits the Mn($e_g$) band into two subbands lying at around 1 and 3 eV (Fig. \ref{fig.ek}). \begin{figure}[tb] \begin{center} \resizebox{!}{5cm}{\includegraphics{figure3a.eps}} \resizebox{!}{5cm}{\includegraphics{figure3b.eps}} \end{center} \caption{(Color online) LDA energy bands for LaMnO$_3$ (left) and HoMnO$_3$ (right) as obtained in the original electronic structure calculations using the LMTO method and after the tight-binding (TB) parametrization using the downfolding method. Twelve low-lying bands spreading from around $-1.0$ to $0.4$ eV are the ``$t_{2g}$ bands'' and the next eight bands are the ``$e_g$'' bands. Notations of the high-symmetry points of the Brillouin zone are taken from ref. \protect\citen{BradlayCracknell}.} \label{fig.ek} \end{figure} In NdMnO$_3$, TbMnO$_3$, and HoMnO$_3$, these subbands are separated by an energy gap, whereas in the least distorted LaMnO$_3$ and PrMnO$_3$ there is a small overlap between them. In the majority of the considered compounds, there is also a small overlap between the upper Mn($e_g$) and $R$($5d$) bands. An exception is HoMnO$_3$, where these bands are separated by a small energy gap. \section{\label{sec:model} Construction and Parameters of the Model Hamiltonian} Our next goal is the construction of an effective model Hamiltonian for the Mn($3d$) bands located near the Fermi level. For these purposes we use the method proposed in ref. \citen{PRB06a}.
Many details can be found in the review article.\cite{rev08} The model itself is specified as follows: \begin{equation} \hat{\cal{H}}= \sum_{{\bf R}{\bf R}'} \sum_{\alpha_1 \alpha_2} t_{{\bf R}{\bf R}'}^{\alpha_1 \alpha_2}\hat{c}^\dagger_{{\bf R}\alpha_1}\hat{c}^{\phantom{\dagger}}_{{\bf R}'\alpha_2} + \frac{1}{2} \sum_{\bf R} \sum_{ \{ \alpha \} } U^{\bf R}_{\alpha_1 \alpha_2 \alpha_3 \alpha_4} \hat{c}^\dagger_{{\bf R}\alpha_1} \hat{c}^\dagger_{{\bf R}\alpha_3} \hat{c}^{\phantom{\dagger}}_{{\bf R}\alpha_2} \hat{c}^{\phantom{\dagger}}_{{\bf R}\alpha_4}, \label{eqn:Hmanybody} \end{equation} where $\hat{c}^\dagger_{{\bf R}\alpha}$ ($\hat{c}_{{\bf R}\alpha}$) creates (annihilates) an electron in the Wannier orbital $\tilde{W}_{\bf R}^\alpha$ centered at the Mn site ${\bf R}$, and $\alpha$ is a joint index, incorporating the spin ($s$$=$ $\uparrow$ or $\downarrow$) and orbital ($m$$=$ $xy$, $yz$, $z^2$, $zx$, or $x^2$$-$$y^2$) degrees of freedom. The one-electron Hamiltonian $\hat{t}_{{\bf R}{\bf R}'}$$= $$\| t_{{\bf R}{\bf R}'}^{\alpha_1 \alpha_2} \|$ consists of two parts: the site-diagonal elements (${\bf R}$$=$${\bf R}'$) describe the crystal-field effects, whereas the off-diagonal elements (${\bf R}$$\neq$${\bf R}'$) stand for the transfer integrals, describing the kinetic energy of electrons. They are derived from the LDA band structure by using the formal downfolding method, which is totally equivalent to the use of the Wannier basis in the projector-operator method.\cite{PRB07} The comparison between the original LDA bands and the ones obtained in the downfolding method is shown in Fig. \ref{fig.ek}. In LaMnO$_3$, the agreement is nearly perfect for the Mn($t_{2g}$) and most of the Mn($e_g$) bands located in the low-energy part of the spectrum. In this region, the original electronic structure of the LMTO method is well reproduced after the downfolding. Since the upper Mn($e_g$) bands overlap with the La($5d$) bands, it is virtually impossible to reproduce all details of the electronic structure in the minimal model (\ref{eqn:Hmanybody}) limited to the five Wannier orbitals centered at each Mn site. In this sense, the electronic structure obtained in the downfolding method is only an approximation to the original LDA band structure. A similar situation occurs in PrMnO$_3$, NdMnO$_3$, and TbMnO$_3$. In HoMnO$_3$, all Mn($3d$) bands are separated from the Ho($5d$) ones and are well reproduced by the downfolding method. The one-electron parameters in real space are obtained after the Fourier transformation. Since we do not consider here the relativistic spin-orbit interaction, the matrix elements $t_{{\bf R}{\bf R}'}^{\alpha_1 \alpha_2}$ are diagonal with respect to the spin indices: i.e., $t_{{\bf R}{\bf R}'}^{\alpha_1 \alpha_2}$$=$$t_{{\bf R}{\bf R}'}^{m_1 m_2} \delta_{s_1 s_2}$. Then, the site-diagonal part of $\hat{t}_{{\bf R}{\bf R}'}$$=$$\| t_{{\bf R}{\bf R}'}^{m_1 m_2} \|$ describes the CF effects. For example, the CF splitting is obtained after the diagonalization of $\hat{t}_{{\bf R}{\bf R}}$. It is particularly strong for the $e_g$ levels, being of the order of 1.5 eV (Fig. \ref{fig.CF}), and increases with the increase of the crystal distortion. As was pointed out in Sec. \ref{sec:structure}, some decrease of the $e_g$-level splitting in HoMnO$_3$ in comparison with TbMnO$_3$ is related to the decrease of the Jahn-Teller distortion. \begin{figure}[h!] \centering \noindent \resizebox{9cm}{!}{\includegraphics{figure4.eps}} \caption{\label{fig.CF} Crystal-field splitting. Three low-lying levels are of the ``$t_{2g}$''-type and the next two levels are of the ``$e_g$''-type.} \end{figure}
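As a toy illustration of this diagonalization step (with made-up numbers, not the actual downfolded matrices), the $e_g$ block of $\hat{t}_{{\bf R}{\bf R}}$ with a hypothetical Jahn-Teller-like off-diagonal coupling may be sketched as follows:

\begin{verbatim}
import numpy as np

delta, g = 0.3, 0.7                  # eV; hypothetical e_g parameters
t_RR = np.array([[delta, g],         # 2x2 block in the {z^2, x^2-y^2} basis
                 [g, -delta]])
levels, orbitals = np.linalg.eigh(t_RR)
print(levels[1] - levels[0])         # splitting 2*sqrt(delta^2+g^2) ~ 1.5 eV
print(orbitals[:, 0])                # mixing coefficients of the lowest orbital
\end{verbatim}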
For all considered compounds, the CF splitting is caused by the difference in the Mn($3d$)-O($2p$) hybridization in different Mn-O bonds, which, after the elimination of the O($2p$) states, gives rise to the site-diagonal elements of the model Hamiltonian. The effect of the nonsphericity of the Madelung potential, which plays a crucial role in the $t_{2g}$ compounds,\cite{MochizukiImada,PRB06b} is relatively small for the $e_g$ systems. For example, in HoMnO$_3$ it changes the $e_g$-level splitting by less than 3\%. The directions of the CF splitting alternate on the perovskite lattice according to the $D_{2h}^{16}$ space group. The corresponding distribution of the $e_g$-electron densities (or the orbital ordering) is shown in Fig. \ref{fig.OrbitalOrdering}.\cite{remark7} As will be discussed in Sec. \ref{sec:dataanalysis}, this orbital ordering is directly responsible for the behavior of not only the NN but also the longer range magnetic interactions. \begin{figure}[h!] \centering \noindent \resizebox{9cm}{!}{\includegraphics{figure5.eps}} \caption{\label{fig.OrbitalOrdering} Orbital ordering in LaMnO$_3$ derived from the crystal-field $e_g$ orbitals of the downfolded Hamiltonian (more specifically, the distribution of the electron density corresponding to the lowest $e_g$ level in Fig. \protect\ref{fig.CF}).\protect\cite{remark7} Oxygen atoms are shown by small spheres. The vectors ${\bf a}$, ${\bf b}$, and ${\bf c}$ show the directions of the orthorhombic axes. Other symbols show the interatomic magnetic interactions in and between the planes, which are related to the given orbital ordering.} \end{figure} Because of the complexity of the transfer integrals, it is rather difficult to discuss the behavior of individual matrix elements of $\| t^{m_1 m_2}_{{\bf RR}'} \|$. Nevertheless, some useful information can be obtained from the analysis of \textit{averaged} parameters $$ \bar{t}_{{\bf RR}'}(d) = \left( \sum_{m_1 m_2} t^{m_1 m_2}_{{\bf RR}'} t^{m_2 m_1}_{{\bf R}'{\bf R}} \right)^{1/2}, $$ where $d$ is the distance between the Mn sites ${\bf R}$ and ${\bf R}'$. All transfer integrals are well localized and practically restricted to the nearest neighbors at around 4\AA~(Fig. \ref{fig.transfer}). \begin{figure}[tb] \begin{center} \resizebox{8cm}{!}{\includegraphics{figure6.eps}} \end{center} \caption{\label{fig.transfer} (Color online) Distance-dependence of the averaged transfer integrals, $\bar{t}_{{\bf RR}'}(d) = \left( \sum_{m_1 m_2} t^{m_1 m_2}_{{\bf RR}'} t^{m_2 m_1}_{{\bf R}'{\bf R}} \right)^{1/2}$.} \end{figure} Already between the next-nearest neighbors, the transfer integrals are considerably smaller. Generally, $\bar{t}_{{\bf RR}'}$ are larger for the least distorted LaMnO$_3$ and smaller for the more distorted HoMnO$_3$. The screened Coulomb interactions $U^{\bf R}_{\alpha_1 \alpha_2 \alpha_3 \alpha_4}$ have the usual dependence on the spin indices: $U^{\bf R}_{\alpha_1 \alpha_2 \alpha_3 \alpha_4}$$=$$U^{\bf R}_{m_1 m_2 m_3 m_4} \delta_{s_1 s_2} \delta_{s_3 s_4}$. Generally, the matrix $\hat{U}^{\bf R}$$=$$\| U^{\bf R}_{m_1 m_2 m_3 m_4} \|$ can depend on the site index ${\bf R}$. The intersite matrix elements of $\hat{U}$ are considerably smaller.\cite{PRB06a} The matrix $\hat{U}^{\bf R}$ itself has been computed in two steps.\cite{PRB06a,rev08}
The screened Coulomb interactions $U^{\bf R}_{\alpha_1 \alpha_2 \alpha_3 \alpha_4}$ have the usual dependence on the spin indices: $U^{\bf R}_{\alpha_1 \alpha_2 \alpha_3 \alpha_4}$$=$$U^{\bf R}_{m_1 m_2 m_3 m_4} \delta_{s_1 s_2} \delta_{s_3 s_4}$. Generally, the matrix $\hat{U}^{\bf R}$$=$$\| U^{\bf R}_{m_1 m_2 m_3 m_4} \|$ can depend on the site-index ${\bf R}$. The intersite matrix elements of $\hat{U}$ are considerably smaller.\cite{PRB06a} The matrix $\hat{U}^{\bf R}$ itself has been computed in two steps \cite{PRB06a,rev08}. First, we perform the conventional constrained LDA ($c$LDA) calculations, and derive parameters of on-site Coulomb and exchange interactions between pseudoatomic Mn($3d$) orbitals. These parameters are typically rather large because they do not include the so-called self-screening effects caused \textit{by the same $3d$ electrons}, which participate in the formation of other bands due to the hybridization \cite{rev08}. The major contribution comes from the O($2p$) band, which has a large weight of the Mn($3d$) states (Fig. \ref{fig.DOS}). This channel of screening can be efficiently taken into account in the random-phase approximation (RPA) by starting from the interaction parameters obtained in $c$LDA and assuming that the latter already include all other channels of screening.\cite{PRB06a} All RPA calculations have been performed by starting from the LDA band structure. Nevertheless, in order to simulate the electronic structure close to the saturated (ferromagnetic) state, we used different Fermi levels for the majority ($\uparrow$-) and minority ($\downarrow$-) spin states. Namely, it was assumed that the Mn($3d$) band is empty for the $\downarrow$-spin channel and accommodates all 16 electrons (per one primitive unit) for the $\uparrow$-spin channel. Meanwhile, we get rid of the unphysical metallic screening by switching off all contributions to the RPA polarization function, which are associated with the transitions within the Mn($3d$) band.\cite{rev08} Then, at each Mn site we obtain the $5$$\times$$5$$\times$$5$$\times$$5$ matrix $\hat{U}^{\bf R}$ of the screened Coulomb interactions. Since the RPA screening incorporates some effects of the local environment in the solid, the symmetry of such matrices differs from the spherical one.\cite{rev08} Nevertheless, just for the explanatory purposes, we fit each matrix in terms of three parameters, which specify interactions between the $3d$-electrons in the spherical environment: the Coulomb repulsion $U$$=$$F^0$, the intraatomic exchange coupling $J$$=$$(F^2$$+$$F^4)/14$, and the ``nonsphericity'' $B$$=$$(9F^2$$-$$5F^4)/441$, where $F^0$, $F^2$, and $F^4$ are the radial Slater integrals. These parameters have the following meaning: $U$ is responsible for the charge stability of a certain atomic configuration, while $J$ and $B$ are responsible for the first and second Hund rule, respectively. The results of such a fitting are shown in Table \ref{tab:UJB}. \begin{table}[tb] \caption{Results of fitting of the effective Coulomb interactions in terms of three atomic parameters: the Coulomb repulsion $U$, the exchange coupling $J$, and the nonsphericity $B$. All energies are measured in eV.} \label{tab:UJB} \begin{tabular}{cccc} \hline compound & $U$ & $J$ & $B$ \\ \hline LaMnO$_3$ & $2.15$ & $0.85$ & $0.09$ \\ PrMnO$_3$ & $2.07$ & $0.85$ & $0.09$ \\ NdMnO$_3$ & $2.11$ & $0.85$ & $0.09$ \\ TbMnO$_3$ & $2.24$ & $0.86$ & $0.09$ \\ HoMnO$_3$ & $2.16$ & $0.85$ & $0.09$ \\ \hline \end{tabular} \end{table} One can clearly see that the Coulomb repulsion $U$ appears to be relatively small due to the self-screening effects, while $J$ and $B$ are much closer to the atomic limit.
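Since the relations $U$$=$$F^0$, $J$$=$$(F^2$$+$$F^4)/14$, and $B$$=$$(9F^2$$-$$5F^4)/441$ are linear in the Slater integrals, they can be inverted; the following minimal Python sketch (our own illustration) recovers $F^2$ and $F^4$ from the fitted parameters of Table \ref{tab:UJB}:
\begin{verbatim}
# Invert U = F0, J = (F2 + F4)/14, B = (9*F2 - 5*F4)/441 for the
# radial Slater integrals (all energies in eV).  Illustrative only.
def slater_integrals(U, J, B):
    F0 = U
    F2 = 5.0 * J + 31.5 * B    # from 14*F2 = 70*J + 441*B
    F4 = 14.0 * J - F2
    return F0, F2, F4

# LaMnO3 entry of the table: U = 2.15, J = 0.85, B = 0.09
F0, F2, F4 = slater_integrals(2.15, 0.85, 0.09)
print(F0, F2, F4)   # 2.15, 7.085, 4.815
print(F4 / F2)      # ~0.68, close to the commonly assumed atomic
                    # ratio F4/F2 ~ 0.63 for 3d electrons
\end{verbatim}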
The model (\ref{eqn:Hmanybody}) does not explicitly include the oxygen states. This could be a serious problem in the case of manganites, which are known to be close to the charge-transfer regime.\cite{MizokawaFujimori} On the other hand, it is well known that in many cases a good semi-quantitative description of the magnetic properties of manganites can be achieved already in a minimal model comprising only the Mn($e_g$) bands.\cite{springer} We will pursue the same point of view and concentrate on the behavior of the Mn($3d$) bands. The magnetic polarization of the oxygen states will be considered in Sec. \ref{sec:TEnegry}, where it will also be argued that this effect is partially compensated by correlation interactions in the Mn($3d$) band beyond the Hartree-Fock approximation. \section{\label{sec:dataanalysis} Solution and Analysis of the Model} The model Hamiltonian (\ref{eqn:Hmanybody}) was solved in the Hartree-Fock (HF) approximation.\cite{BiMnO3,PRB06b,rev08} After the solution for each magnetic state, the total energy changes corresponding to infinitesimal rotations of the spin magnetic moments near this state were mapped onto the Heisenberg model:\cite{JHeisenberg,TRN} $$ E_{\rm Heis} = -\frac{1}{2} \sum_{{\bf RR}'} J_{{\bf RR}'} {\bf e}_{\bf R} \cdot {\bf e}_{{\bf R}'}, $$ where ${\bf e}_{\bf R}$ is the direction of the magnetic moment at the site ${\bf R}$. The parameters $\{ J_{{\bf RR}'} \}$ can be expressed through the one-electron (retarded) Green function, $\hat{\cal G}^s_{{\bf RR}'}(\omega)$, and the spin-dependent part of the one-electron potential, $\Delta \hat{\cal V}_{\bf R}$, obtained from the self-consistent solution of the HF equations. For some applications, it is convenient to consider $J_{{\bf RR}'}$ as a function of the band filling: \begin{equation} J_{{\bf RR}'}(\omega) = \int_{-\infty}^{\omega} d \omega' \mathcal{J}_{{\bf RR}'}(\omega'), \label{eqn:exchange} \end{equation} where \begin{equation} \mathcal{J}_{{\bf RR}'}(\omega') = \frac{1}{2 \pi} {\rm Im} {\rm Tr}_L \left\{ \hat{\cal G}_{{\bf RR}'}^\uparrow (\omega') \Delta \hat{\cal V}_{{\bf R}'} \hat{\cal G}_{{\bf R}'{\bf R}}^\downarrow (\omega') \Delta \hat{\cal V}_{\bf R} \right\} \label{eqn:integrant} \end{equation} and ${\rm Tr}_L$ is the trace over the orbital indices. In order to obtain the observable parameters, $J_{{\bf RR}'}(\omega)$ should be taken at the Fermi energy $\varepsilon_{\rm F}$: $J_{{\bf RR}'} \equiv J_{{\bf RR}'}(\varepsilon_{\rm F})$. Some details of this procedure can be found in the review article\cite{rev08} as well as in the recent publication devoted to BiMnO$_3$.\cite{BiMnO3}
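To illustrate the mapping, the Heisenberg energy can be evaluated directly for collinear spin configurations. A minimal Python sketch of ours, which keeps only the two NN couplings of the perovskite lattice (with hypothetical values, not our computed parameters) and unit vectors ${\bf e}_{\bf R}$, reproduces the familiar energy differences of the F-, A-, C-, and G-type states:
\begin{verbatim}
# Heisenberg energy per site, E = -1/2 sum_{R'} J_{RR'} e_R . e_R',
# for collinear states on the perovskite lattice, keeping only the
# nearest-neighbor couplings J1par (4 bonds in the ab-plane) and
# J1perp (2 bonds along c).  Illustrative sketch; units are meV.
def e_per_site(J1par, J1perp, inplane_sign, outplane_sign):
    # sign = +1 for FM bonds (e_R . e_R' = +1), -1 for AFM bonds
    return -0.5 * (4 * J1par * inplane_sign + 2 * J1perp * outplane_sign)

states = {"F": (+1, +1), "A": (+1, -1), "C": (-1, +1), "G": (-1, -1)}
J1par, J1perp = 2.0, -1.0   # hypothetical: FM in plane, AFM along c
eF = e_per_site(J1par, J1perp, *states["F"])
for s, signs in states.items():
    print(s, e_per_site(J1par, J1perp, *signs) - eF)
# E(A)-E(F) = 2*J1perp < 0 here, so the A-type state is lowest for
# FM in-plane and AFM inter-plane couplings.
\end{verbatim}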
\section{\label{sec:exchange}Electronic Structure and Behavior of Interatomic Magnetic Interactions} A typical example of the densities of states obtained in the HF calculations for the FM and several AFM phases of LaMnO$_3$ is shown in Fig. \ref{fig.HFDOS}. \begin{figure}[h!] \centering \noindent \resizebox{14cm}{!}{\includegraphics{figure7.eps}} \caption{\label{fig.HFDOS} (Color online) Densities of states obtained in the Hartree-Fock calculations for the ferromagnetic (F), A- and E-type antiferromagnetic phases of LaMnO$_3$. The Fermi level is at zero energy (shown by dash-dotted line). Other symbols show the positions of the main bands. Different spin states are indicated by the arrows.} \end{figure} Even in LaMnO$_3$, which is the least distorted compound, the small value of $U$, obtained in the combined $c$LDA+RPA approach, appears to be sufficient to open the gap in the $e_g$ band, so that all magnetic phases, including the FM one, become insulating. As expected, the increase of the number of the AFM bonds associated with the change of the magnetic state in the direction FM$\rightarrow$A-type AFM$\rightarrow$E-type AFM results in the narrowing of all bands. Thus, the opening of the band gap is considerably facilitated by the interplay of the crystal distortion with the AFM arrangement of spins. For example, even a small Jahn-Teller distortion appears to be sufficient to open the gap in the quasi-two-dimensional FM planes of the A-phase.\cite{JKPS,GorkovKresin} A similar situation is expected for the quasi-one-dimensional spin chains in the case of the E-phase.\cite{Hotta} In other compounds, with the increase of the crystal distortion the bandwidths will additionally decrease. In other respects, the position of the main bands is similar to the one displayed in Fig. \ref{fig.HFDOS}. The distance-dependence of interatomic magnetic interactions $J_{{\bf RR}'}$ is shown in Fig. \ref{fig.exchange1}. \begin{figure}[h!] \centering \noindent \resizebox{8cm}{!}{\includegraphics{figure8.eps}} \caption{\label{fig.exchange1} (Color online) Distance-dependence of interatomic magnetic interactions, as obtained in the Hartree-Fock calculations for the ferromagnetic state. The interactions, which mainly contribute to the stability of the A- and E-type AFM phases, are shown in groups. The notations of these interactions are explained in Fig. \protect\ref{fig.OrbitalOrdering}.} \end{figure} One can clearly distinguish four types of interactions, which mainly contribute to the magnetic properties of $R$MnO$_3$: the NN interaction in the orthorhombic ${\bf ab}$-plane, $J_1^\parallel$, which strongly depends on the crystal distortion; the NN AFM interaction along the ${\bf c}$-axis, $J_1^\perp$; the 2nd-neighbor interaction in the ${\bf ab}$-plane, $J_2^{\bf b}$, which operates along the orthorhombic ${\bf b}$-axis; and the 3rd-neighbor AFM interaction in the ${\bf ab}$-plane, $J_3$, which operates only between those Mn-sites whose occupied $e_g$ orbitals are pointed towards each other (see Fig. \ref{fig.OrbitalOrdering}). Other interactions are considerably weaker. Particularly, the 2nd-neighbor interactions along the ${\bf a}$-axis as well as the 3rd-neighbor interactions in the direction perpendicular to the occupied $e_g$ orbitals are small and can be neglected. The details of the behavior of the main magnetic interactions are shown in Fig. \ref{fig.element2}. \begin{figure}[h!] \centering \noindent \resizebox{8cm}{!}{\includegraphics{figure9.eps}} \caption{\label{fig.element2} (Color online) The behavior of the main interatomic magnetic interactions for the $R$MnO$_3$ compounds, as obtained in the Hartree-Fock calculations for the FM state: the nearest-neighbor interaction in the ${\bf ab}$-plane, $J_1^\parallel$ (a); the nearest-neighbor interaction between the planes, $J_1^\perp$ (b); and the longer range interactions in the ${\bf ab}$-plane, $J_2^{\bf b}$ and $J_3$ (c and d, respectively). The notations of the magnetic interactions are explained in Fig. \protect\ref{fig.OrbitalOrdering}.} \end{figure} The interaction $J_1^\parallel$ appears to be the most affected by the crystal distortion. When the crystal distortion increases in the direction La$\rightarrow$Pr$\rightarrow$Nd$\rightarrow$Tb$\rightarrow$Ho, $J_1^\parallel$ gradually decreases and changes the sign at around Pr-Nd. Thus, the NN coupling in the ${\bf ab}$-plane is FM at the beginning of the series and becomes AFM at the end of it.
At the phenomenological level, such a behavior can be related to the change of the orbital ordering in the Mn-O-Mn bond (Fig. \ref{fig.OrbitalOrderingMnOMn}). \begin{figure}[h!] \centering \noindent \resizebox{6cm}{!}{\includegraphics{figure10a.eps}} \resizebox{6cm}{!}{\includegraphics{figure10b.eps}} \caption{\label{fig.OrbitalOrderingMnOMn} (Color online) Fragment of the orbital ordering in the plane formed by the single Mn-O-Mn bond in the case of LaMnO$_3$ (left) and HoMnO$_3$ (right).} \end{figure} In LaMnO$_3$, the Mn-O-Mn angle is closer to 180$^\circ$ (Table \ref{tab:structure}). Therefore, the arrangement of the occupied $e_g$-orbitals at the neighboring Mn-sites is nearly ``antiferromagnetic'',\cite{remark1} which according to the Goodenough-Kanamori rules should correspond to the FM coupling between the spins.\cite{Goodenough2,Kanamori2,KugelKhomskii} In HoMnO$_3$, the deviation of the Mn-O-Mn angle from 180$^\circ$ is substantially larger. Therefore, the ``antiferromagnetic orbital ordering'' is strongly distorted so that the spin coupling can become AFM. Nevertheless, as we will see below, such a phenomenological interpretation is strongly affected by other details of the electronic structure, and particularly by the hybridization between the $t_{2g}$ and $e_g$ states, which is caused by the crystal distortion. Other magnetic interactions also depend on the crystal distortion. However, the distortion does not change the character of these interactions, and $J_1^\perp$, $J_2^{\bf b}$ and $J_3$ are AFM for all considered compounds. The most striking result of the present calculations is the existence of relatively strong longer range AFM interactions $J_2^{\bf b}$ and $J_3$. The appearance of $J_3$ is expected for the given type of the orbital ordering (Figs. \ref{fig.intro} and \ref{fig.OrbitalOrdering}). It operates between such 3rd neighbor sites ${\bf R}$ and ${\bf R}'$ in the ${\bf ab}$-plane, whose occupied $e_g$ orbitals are directed towards each other, and is mediated by the intermediate site, whose occupied $e_g$ orbital is nearly orthogonal to the bond $\langle {\bf RR}' \rangle$. Although the direct transfer integrals between such sites ${\bf R}$ and ${\bf R}'$ are small (Fig. \ref{fig.transfer}; note that the distance between 3rd neighbors in the ${\bf ab}$-plane is about 8 \AA), the on-site Coulomb repulsion $U$ is also relatively small (Table \ref{tab:UJB}). Therefore, the longer range AFM interactions, which are mediated by unoccupied $e_g$ orbitals of intermediate Mn-sites, have the same origin as the SE interactions, operating in the charge-transfer insulators via the oxygen states \cite{Oguchi,ZaanenSawatzky,PRB98}, and the mechanism itself can be called the ``super-superexchange''. Another 3rd-neighbor interaction, operating between Mn-sites in the ${\bf ab}$-plane whose occupied $e_g$ orbitals are nearly orthogonal to the bond connecting these sites, is negligibly small. A similar situation occurs in the low-temperature monoclinic phase of BiMnO$_3$.\cite{BiMnO3} The main difference is that the orbital ordering realized in BiMnO$_3$ is different from the one which takes place in the orthorhombic compounds. Therefore, the long-range AFM interactions in BiMnO$_3$ will tend to stabilize another magnetic state, which is also different from the E-state. The mechanism responsible for the appearance of the relatively strong interaction $J_2^{\bf b}$ is not so straightforward.
Nevertheless, as we will show below, some useful information can be gained from the analysis of the band-filling dependence of the 2nd-neighbor interactions in the ${\bf ab}$-plane. Fig. \ref{fig.Jband1} shows the behavior of the NN magnetic interactions as a function of the band filling. \begin{figure}[h!] \centering \noindent \resizebox{12cm}{!}{\includegraphics{figure11.eps}} \caption{\label{fig.Jband1} (Color online) Band-filling dependence of the nearest-neighbor magnetic interactions in the ${\bf ab}$-plane ($J^\parallel_1$) and between the planes ($J^\perp_1$). The magnetic interactions were calculated in the FM state for LaMnO$_3$ (left) and HoMnO$_3$ (right). Upper panel shows the behavior of the integrand (\protect\ref{eqn:integrant}), while the lower panel shows the exchange coupling (\protect\ref{eqn:exchange}). The Fermi level is at zero energy (shown by dash-dotted line). The positions of the $t_{2g}$- and $e_g$-bands are indicated by symbols.} \end{figure} Somewhat unexpectedly, the NN interactions in LaMnO$_3$ are mainly formed by the $t_{2g}$-band. Particularly, the values of both $J^\parallel_1$ and $J^\perp_1$ are well reproduced already after integration over the $t_{2g}$-band extending from $-3.5$ eV to $-2.0$ eV. The distribution of $\mathcal{J}_{{\bf RR}'}$ in the region of the occupied $e_g$-band is antisymmetric. Therefore, there is a strong cancelation of contributions to $J_{{\bf RR}'}$ coming from the bottom and the top of the occupied $e_g$-band, so that the total integral (\ref{eqn:exchange}) over the $e_g$-band practically vanishes. In this sense, our explanation for the A-type AFM order in LaMnO$_3$ is rather different from the one adopted in the model calculations,\cite{PRL96,Maezono,Shiina} which typically do not consider the rotations of the MnO$_6$ octahedra. According to the present calculations, the behavior of the NN magnetic interactions in LaMnO$_3$ is mainly related to the hybridization between the atomic $t_{2g}$- and $e_g$-orbitals, which is induced by these rotations. Without the hybridization, all contributions of the half-filled $t_{2g}$-band to the NN magnetic interactions are expected to be antiferromagnetic.\cite{Maezono,Shiina} Our analysis shows that the hybridization can easily change the character of these interactions. The $t_{2g}$-$e_g$ hybridization becomes even stronger in the more distorted HoMnO$_3$, so that the contributions of the $t_{2g}$-band become \textit{ferromagnetic} both for $J^\parallel_1$ and $J^\perp_1$. On the contrary, all contributions of the $e_g$-band to the NN interactions are antiferromagnetic. Therefore, the $e_g$-band is totally responsible for the AFM character of NN magnetic interactions in the case of HoMnO$_3$. The behavior of 2nd-neighbor interactions in the ${\bf ab}$-plane as a function of the band filling is shown in Fig. \ref{fig.Jband2}. \begin{figure}[h!] \centering \noindent \resizebox{12cm}{!}{\includegraphics{figure12.eps}} \caption{\label{fig.Jband2} (Color online) Band-filling dependence of the second-neighbor magnetic interactions in the ${\bf ab}$-plane. The magnetic interactions were calculated in the FM state for LaMnO$_3$ (left) and HoMnO$_3$ (right). Upper panel shows the behavior of the integrand (\protect\ref{eqn:integrant}), while the lower panel shows the exchange coupling (\protect\ref{eqn:exchange}). The notations of the magnetic interactions are explained in Fig. \protect\ref{fig.OrbitalOrdering}. The Fermi level is at zero energy (shown by dash-dotted line).
The positions of the $t_{2g}$- and $e_g$-bands are indicated by symbols.} \end{figure} Generally, the integrand $\mathcal{J}_{{\bf RR}'}(\omega)$ oscillates in sign. Moreover, as the distance between the lattice centers ${\bf R}$ and ${\bf R}'$ increases, the number of such oscillations also increases. This property can be rigorously proven for the tight-binding bands, assuming that all transfer integrals (or ``hoppings'') are restricted by the nearest neighbors. Then, the number of nodes of $\mathcal{J}_{{\bf RR}'}(\omega)$ becomes proportional to the minimal number of hops, which are required in order to reach the center ${\bf R}'$ starting from the center ${\bf R}$.\cite{Heine1,Heine2} Thus, $\mathcal{J}_{{\bf RR}'}(\omega)$ is expected to have more nodes for the 2nd-neighbor interactions in comparison with the NN ones, as is clearly seen from the comparison of Figs. \ref{fig.Jband1} and \ref{fig.Jband2}. Nevertheless, the lattice distortion and orbital ordering effects can cause some violation of these simple tight-binding rules. Let us consider the behavior of $\mathcal{J}_{{\bf RR}'}(\omega)$ in the region of the $e_g$-band, where $\mathcal{J}^\parallel_1(\omega)$ has only one node, which is qualitatively consistent with the tight-binding rules. Then, $\mathcal{J}^{\bf a}_2(\omega)$ has two nodes, which is again consistent with the tight-binding rules. Such a behavior is responsible for the strong cancelation of positive and negative contributions to $J^{\bf a}_2$ in the process of integration over $\omega$ and readily explains the fact that the final values of $J^{\bf a}_2$ are relatively small for all considered compounds. However, the $\omega$-dependence of $\mathcal{J}^{\bf b}_2$ appears to be strongly deformed. In the region of the $e_g$-band it has only one node. Therefore, the strong cancelation, which took place for $J^{\bf a}_2$, does not occur for $J^{\bf b}_2$. This leads to the strong anisotropy of the 2nd-neighbor interactions in the ${\bf ab}$-plane, $|J^{\bf b}_2| \gg |J^{\bf a}_2|$, which plays a vital role in the formation of the E-type AFM structure. Particularly, it readily explains why the FM zigzag chains in the observed E-type AFM structure propagate along the ${\bf a}$-direction and are antiferromagnetically coupled along the ${\bf b}$-axis (and not vice versa). Thus, the behavior of the main magnetic interactions replicates the gradual change of the crystal distortion. The form of both NN and long-range magnetic interactions is closely related to the orbital ordering realized in the distorted orthorhombic structure. Particularly, the crystal distortion explains \begin{itemize} \item[$\bullet$] the gradual change of $J^\parallel_1$ from FM in the case of LaMnO$_3$ to AFM at the end of the series. Near the point of the FM-AFM crossover, $J^\parallel_1$ is small and the magnetic ground state is mainly controlled by the longer range interactions. \item[$\bullet$] the existence of the longer range AFM interactions $J_2^{\bf b}$ and $J_3$, which bind the spin magnetic moments within each orbital sublattice, and determine both the direction of propagation and the periodicity of the E-phase. \end{itemize} Nevertheless, there should be an additional mechanism responsible for the relative orientation of spin magnetic moments in the two orbital sublattices, which are marked as $3x^2$$-$$r^2$ and $3y^2$$-$$r^2$ in Fig. \ref{fig.intro}.
Since each spin in the E-type AFM structure participates in the formation of two FM and two AFM bonds with the nearest neighbors in the ${\bf ab}$-plane, some difference between the parameters $J^\parallel_1$ acting in the FM and AFM bonds is required in order to fix the directions of spins in the two orbital sublattices relative to each other.\cite{remark8} Such a modulation of the parameters $J^\parallel_1$ can be caused by several mechanisms. Generally, once the symmetry is broken by the AFM spin order, orbital and lattice degrees of freedom will tend to adjust to this symmetry change. One mechanism is purely electronic and related to the small deformation of the orbital ordering in the AFM phase. For example, in BiMnO$_3$ such a mechanism facilitates the formation of the $\uparrow \downarrow \downarrow \uparrow$ AFM structure, which breaks the inversion symmetry.\cite{BiMnO3} Nevertheless, in $R$MnO$_3$ the situation appears to be different. For all considered compounds, the NN interactions calculated in the E-phase satisfy the following condition: $J^\parallel_1 (\uparrow \uparrow) < J^\parallel_1 (\uparrow \downarrow)$, where the notations $\uparrow \uparrow$ and $\uparrow \downarrow$ refer to the FM and AFM bonds, respectively (Fig. \ref{fig.J1an}). \begin{figure}[h!] \centering \noindent \resizebox{10cm}{!}{\includegraphics{figure13.eps}} \caption{\label{fig.J1an} (Color online) Nearest-neighbor magnetic interactions in the ${\bf ab}$-plane of the E-type antiferromagnetic phase. The magnetic coupling in the FM and AFM bonds is denoted as $\uparrow \uparrow$ and $\uparrow \downarrow$, respectively.} \end{figure} Thus, as far as the NN interactions are concerned, the E-phase appears to be unstable with respect to the spin rotations of two orbital sublattices relative to each other.\cite{remark2} Apparently, such a situation is realized in the intermediate region, corresponding to the IC- and S-states in Fig. \ref{fig.intro}. Nevertheless, in order to stabilize the E-phase, we need another mechanism, which enforces the inequality $J^\parallel_1 (\uparrow \uparrow) > J^\parallel_1 (\uparrow \downarrow)$. Such a mechanism does exist and is related to the atomic displacements, which further minimize the total energy of the system via magneto-elastic interactions.\cite{Wang,Picozzi07} Although we do not consider it in the present work, from rather general properties of the double exchange and SE interactions,\cite{remark5} it is reasonable to expect that the AFM character of $J^\parallel_1 (\uparrow \downarrow)$ can be enforced by the conditions, which further \textit{enhance} the transfer integrals in the AFM bond.\cite{PRL99} This can be achieved by either \textit{shrinking} the Mn-Mn bond or \textit{increasing} the Mn-O-Mn angle. The opposite distortions, which are relevant to $J^\parallel_1 (\uparrow \uparrow)$, will favor the FM coupling. \section{\label{sec:TEnegry}Total Energies and Comparison with the Experimental Data} In this section we consider the quantitative aspects of the problem. Particularly, we investigate whether the experimental phase diagram shown in Fig. \ref{fig.intro} can be reproduced by the low-energy model (\ref{eqn:Hmanybody}) for the Mn($3d$) bands and, if not, which ingredients are missing in the model. We begin with the total energy calculations for the model (\ref{eqn:Hmanybody}) in the HF approximation (Fig. \ref{fig.TotalE}). \begin{figure}[h!]
\centering \noindent \resizebox{14cm}{!}{\includegraphics{figure14.eps}} \caption{\label{fig.TotalE} (Color online) Total energies of different AFM states obtained for the model (\protect\ref{eqn:Hmanybody}) in the Hartree-Fock approximation. All energies are measured relative to the FM states. The notations of the AFM states are standard for the manganites (see, for example, ref. \protect\citen{Hamada}). Apart from the A- and E-states, the C-state corresponds to the FM chains propagating along the ${\bf c}$-axis, which are antiferromagnetically coupled in the ${\bf ab}$-plane, and the G-state corresponds to the AFM coupling between all six nearest neighbors.} \end{figure} In LaMnO$_3$, the lowest energy corresponds to the A-type AFM state, in agreement with the experiment. However, the next E-type AFM state is separated from the A-state by only 1.1 meV per one formula unit. In PrMnO$_3$ and NdMnO$_3$, the energy of the E-type AFM state appears to be lower than the one of the A-state, although experimentally both of these compounds are A-type antiferromagnets (Fig. \ref{fig.intro}). Finally, for TbMnO$_3$ and HoMnO$_3$, the model (\ref{eqn:Hmanybody}) yields the G-type AFM ground state, where all NN spins are coupled antiferromagnetically. Thus, although the model (\ref{eqn:Hmanybody}) predicts the change of the magnetic ground state, it clearly overestimates the tendencies towards the antiferromagnetism, so that the transition from the A- to E-type AFM state is expected in the wrong place (around PrMnO$_3$ and NdMnO$_3$ instead of HoMnO$_3$). The correlation interactions beyond the HF approximation will additionally stabilize the AFM states\cite{PRB06b} and only worsen the agreement with the experimental data. Therefore, before considering the correlation effects, one should find some mechanism, which works in the opposite direction and additionally stabilizes the FM interactions. Such a mechanism can be related to the magnetic polarization of the oxygen sites.\cite{WeiKu,Mazurenko} Although the model (\ref{eqn:Hmanybody}) is designed for the Mn($3d$) bands, the Wannier functions, which constitute the basis of the low-energy model (\ref{eqn:Hmanybody}), may have some tails spreading to the oxygen and other atomic sites. The weight of these tails in the Wannier functions is proportional to the weight of the O($2p$)-states in the total density of states for the Mn($3d$) bands (Fig. \ref{fig.DOS}). In the case of the FM alignment of the Mn-spins, these tails will lead to some finite polarization at the intermediate oxygen sites (Fig. \ref{fig.Cartoon}). \begin{figure}[h!] \centering \noindent \resizebox{7cm}{!}{\includegraphics{figure15.eps}} \caption{\label{fig.Cartoon} (Color online) Polarization of the oxygen sites caused by the tails of the Wannier functions centered at the manganese sites. In the perovskite structure, each oxygen site is located near the midpoint between two manganese sites. Then, in the case of the FM alignment, the tails from the Mn-sites have the same direction of spins, yielding the net magnetic moment also at the oxygen sites. In the case of the AFM arrangement, these tails cancel each other and the oxygen atoms remain nonmagnetic.} \end{figure} Since the intraatomic exchange coupling $J_{\rm O}$ associated with the oxygen atoms is exceptionally large,\cite{Mazurenko,MazinSingh,NJP08} even small polarization can lead to a substantial energy gain.
This contribution is missing in the model (\ref{eqn:Hmanybody}), where the form of the Coulomb and exchange interactions is assumed to be the same as in the limit of isolated Mn-atoms. In the case of the AFM alignment, the tails of the Wannier functions cancel each other and the net magnetic polarization at the oxygen sites is zero. Below we present quantitative estimates of this effect for HoMnO$_3$. By expanding the Wannier functions over the original LMTO basis functions,\cite{PRB06a,PRB06b} one can find the distribution of the magnetic moments over all sites of the perovskite lattice in different magnetic structures. The obtained values of the magnetic moments at the oxygen sites, $M_{\rm O}$, are given in Table \ref{tab:oxygenP}. \begin{table}[tb] \caption{Magnetic polarization of the oxygen sites in different magnetic states of HoMnO$_3$ (namely, the absolute values of the magnetic moments at the oxygen sites in $\mu_{\rm B}$). The first value was derived from the model analysis for the isolated Mn($3d$) bands, while the second value (shown in the parentheses) was obtained in the LSDA calculations, which also take into account the polarization of the O($2p$) band. O$_{\bf ab}$ and O$_{\bf c}$ denote the oxygen sites located in the ${\bf ab}$-plane and between the planes, respectively. Two lines in the case of the E-phase stand for the polarization in the FM (first line) and AFM (second line) Mn-O-Mn bonds. The finite polarization in some AFM Mn-O-Mn bonds is related to the oxygen displacements from the midpoint positions in the $D^{16}_{2h}$ structure.} \label{tab:oxygenP} \begin{tabular}{ccc} \hline phase & O$_{\bf ab}$ & O$_{\bf c}$ \\ \hline F & $0.26~(0.11)$ & $0.23~(0.04)$ \\ A & $0.25~(0.09)$ & $0~(0)$ \\ C & $0.08~(0.02)$ & $0.20~(0.03)$ \\ G & $0.07~(0.03)$ & $0~(0)$ \\ E & $\begin{array}{c} 0.24~(0.07) \\ 0.09~(0.01) \\ \end{array}$ & $0~(0)$ \\ \hline \end{tabular} \end{table} The parameters $J_{\rm O}$ can be derived from the LMTO calculations in the local-spin-density approximation (LSDA).\cite{remark3} It yields $J_{\rm O}=$ 2.1 and 2.2 eV for the oxygen sites located in the ${\bf ab}$-plane and between the planes, respectively. Then, the energy gain, caused by the polarization of the oxygen sites, can be estimated from the formula $\Delta E_{\rm O} = -\frac{1}{4} J_{\rm O} M_{\rm O}^2$ (with subsequent summation over all oxygen sites in the formula unit), which yields $\Delta E_{\rm O} =$ $-$$102$, $-$$63$, $-$$29$, $-$$5$, and $-$$33$ meV for the states F, A, C, G, and E, respectively.
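These numbers follow directly from the tabulated moments. A short Python sketch of ours, assuming two in-plane and one inter-plane oxygen per formula unit (and, for the E-phase, one in-plane oxygen in a FM and one in an AFM bond), reproduces the quoted values to within a few meV; the residual deviations reflect the rounding of $M_{\rm O}$ in Table \ref{tab:oxygenP}:
\begin{verbatim}
# Energy gain from oxygen polarization, dE = -(1/4)*J_O*M_O^2,
# summed over the oxygen sites of one formula unit, assuming
# 2 x O_ab (J_O = 2.1 eV) and 1 x O_c (J_O = 2.2 eV).
J_ab, J_c = 2.1, 2.2                   # eV
M = {                                  # model moments from the table
    "F": ([0.26, 0.26], 0.23),
    "A": ([0.25, 0.25], 0.0),
    "C": ([0.08, 0.08], 0.20),
    "G": ([0.07, 0.07], 0.0),
    "E": ([0.24, 0.09], 0.0),          # one FM and one AFM bond
}
for phase, (m_ab, m_c) in M.items():
    dE = -0.25 * (J_ab * sum(m**2 for m in m_ab) + J_c * m_c**2)
    print(phase, round(1000 * dE))     # meV: -100, -66, -29, -5, -34
\end{verbatim}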
The effect is clearly too big. For example, by combining these values with the total energies shown in Fig. \ref{fig.TotalE}, we would arrive at the FM ground state, which again contradicts the experimental data. Then, what is missing? One effect is related to the polarization of the O($2p$)-band, which is not explicitly included in the model (\ref{eqn:Hmanybody}). It is true that since the O($2p$)-band is filled, it does not contribute to the total magnetic moment. However, it can contribute to the local moments, which cancel each other after the summation over the unit cell. Particularly, the polarization of the oxygen states in the O($2p$)-band appears to be the opposite to the one in the Mn($3d$)-band, as follows from the form of the Mn($3d$)-O($2p$) hybridization.\cite{remark6} This effect is clearly seen by comparing the moments obtained for the isolated Mn($3d$)-bands with results of the all-electron calculations, which take into account the contributions of the O($2p$)-band (Table \ref{tab:oxygenP}). Indeed, the O($2p$)-band substantially reduces the values of the magnetic moments associated with the oxygen sites (by a factor of two or more). Therefore, $\Delta E_{\rm O}$ will be also reduced. For example, by using the LSDA values for $M_{\rm O}$ (Table \ref{tab:oxygenP}), we find that $\Delta E_{\rm O}$ is reduced to $-13$, $-9$, $-1$, $-4$, and $-11$ meV per one formula unit for the states F, A, C, G, and E, respectively. By combining these $\Delta E_{\rm O}$ with the total energies shown in Fig. \ref{fig.TotalE}, we readily obtain that the E-type AFM structure is realized as the ground state, in agreement with the experiment. The new values of the total energies, measured relative to the FM state, are $-15$, $-15$, $-27$, and $-33$ meV per one formula unit for the states A, C, G, and E, respectively. Another factor, which strongly affects the relative stability of different magnetic states, is the correlation interactions beyond the HF approximation. In order to estimate the energies of these correlation interactions, we tried three perturbative techniques starting from the HF solutions for each magnetic state. One is the random-phase approximation (RPA), which takes into account the lowest-order polarization processes, involving the excitation and subsequent deexcitation of an electron-hole pair.\cite{Pines,BarthHedin,Ferdi02} For these purposes, the RPA expression for the correlation energy has been adopted for the model calculations.\cite{remark9} Another method is the second order perturbation theory for the correlation interactions,\cite{rev08,PRB06b,JETP07,remark10} and the third one is the $T$-matrix method,\cite{JETP07,Kanamori3} which takes into account higher-order effects. Results of these calculations for HoMnO$_3$ are shown in Table \ref{tab:CorrelationE}. \begin{table}[tb] \caption{Correlation energies for several AFM states of HoMnO$_3$ measured in meV per one formula unit relative to the FM state. The correlation energies have been computed in the random-phase approximation (RPA), the second-order perturbation theory, and the $T$-matrix method starting from the Hartree-Fock approximation for each magnetic state.} \label{tab:CorrelationE} \begin{tabular}{lcccc} \hline method & A & C & G & E \\ \hline RPA & $-4.9$ & $-19.5$ & $-24.7$ & $-14.1$ \\ 2nd order & $-6.7$ & $-14.9$ & $-17.9$ & $-10.3$ \\ $T$-matrix & $-4.6$ & $-9.8$ & $-11.7$ & $-7.6$ \\ \hline \end{tabular} \end{table} Since the on-site Coulomb repulsion $U$ is relatively small, all three methods provide a rather consistent explanation for the behavior of the correlation energies, which tend to stabilize the AFM states relative to the FM one. The energy gain increases with the number of the AFM bonds in the direction F$\rightarrow$A$\rightarrow$E$\rightarrow$C$\rightarrow$G. Thus, the correlation interactions act against the magnetic polarization of the oxygen sites and again tend to destabilize the E-state relative to the G-state.
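This balance can be made explicit by naively adding the correlation energies of Table \ref{tab:CorrelationE} to the total energies quoted above (HF plus oxygen polarization; all numbers relative to the FM state). A short Python sketch of ours:
\begin{verbatim}
# Relative energies (meV per formula unit): HF + oxygen polarization
E_HF_O = {"A": -15, "C": -15, "G": -27, "E": -33}
E_corr = {                       # correlation energies of the table
    "RPA":       {"A": -4.9, "C": -19.5, "G": -24.7, "E": -14.1},
    "2nd order": {"A": -6.7, "C": -14.9, "G": -17.9, "E": -10.3},
    "T-matrix":  {"A": -4.6, "C": -9.8,  "G": -11.7, "E": -7.6},
}
for method, corr in E_corr.items():
    total = {s: E_HF_O[s] + corr[s] for s in E_HF_O}
    print(method, min(total, key=total.get), total)
# RPA and 2nd order yield the G-state as ground state, while the
# T-matrix method yields the E-state (-40.6 vs -38.7 meV for G).
\end{verbatim}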
The situation is rather fragile, and whether the E-state is realized as the ground state of HoMnO$_3$ depends on the delicate balance of these two effects and also on the approximations employed for the correlation energy. For example, RPA and the second-order perturbation theory seem to overestimate the correlation energy of the G-state and make the E-state unstable. On the other hand, the E-state, which breaks the orthorhombic $D_{2h}^{16}$ symmetry, should be additionally stabilized through the lattice relaxation. \section{\label{sec:summary}Summary and Conclusions} On the basis of first-principles electronic structure calculations, we propose a microscopic model for the behavior of interatomic magnetic interactions in the series of orthorhombic manganites $R$MnO$_3$ ($R$$=$ La, Pr, Nd, Tb, and Ho), which explains the phase transition from the A-type AFM state to the E-state with the increase of the lattice distortion. Our picture is clearly different from the ones proposed in the previous studies. In fact, several authors emphasized the importance of the 2nd-neighbor interactions $J^{\bf a}_2$ and $J^{\bf b}_2$ in the orthorhombic ${\bf ab}$-plane. For example, Kimura \textit{et al.}\cite{Kimura} considered the superexchange processes mediated by the O($2p$) orbitals in the distorted perovskite structure and argued that they can be responsible for the AFM interaction $J^{\bf b}_2$ and weakly FM interaction $J^{\bf a}_2$. Other authors\cite{Picozzi06,Xiang} performed the mapping of the total energies derived from the first-principles electronic structure calculations onto the Heisenberg model and argued that under certain conditions $J^{\bf a}_2$ and $J^{\bf b}_2$ become comparable with $J^\parallel_1$. However, such a mapping crucially depends on the form of the \textit{a priori} postulated model, where the lack of some interactions (such as $J_3$) can lead to an incomplete picture. In this sense, our approach to the problem is more consistent. \begin{itemize} \item[$\bullet$] It does not make any \textit{a priori} assumptions about the form of the Heisenberg model. \item[$\bullet$] It goes beyond the conventional superexchange processes and takes into account other contributions to interatomic magnetic interactions.\cite{TRN} \end{itemize} Particularly, the contributions associated with the ``super-superexchange'' processes in the regime of relatively small on-site Coulomb interactions give rise to the 3rd-neighbor coupling $J_3$, which was overlooked in the previous studies.\cite{remark4} According to our point of view, $J_3$ is one of the key players, which triggers the transition to the E-type AFM state in orthorhombic manganites. \begin{itemize} \item[$\bullet$] The existence of $J_3$ is directly related to the form of the orbital ordering. \item[$\bullet$] $J_3$ is responsible for the AFM coupling between 3rd-neighbor spins in the ${\bf ab}$-plane, which is realized in the E-phase (Fig. \ref{fig.intro}). \end{itemize} Since the longer range AFM interactions seem to be the intrinsic property of all undoped manganites, these interactions should be seen in the experiment, for example, in inelastic neutron scattering. We expect the longer range interactions to take place even in LaMnO$_3$. Although it has an A-type AFM ground state, the longer range interactions participate as the precursors of the E-phase, which is finally realized in the more distorted compounds. The neutron-scattering measurements on LaMnO$_3$ are available today.
Nevertheless, the experimental data are typically interpreted only in terms of the NN interactions.\cite{Hirota,Moussa} Definitely, the problem deserves further analysis. Particularly, it would be interesting to reinterpret the experimental data by permitting the longer range interactions, particularly $J^{\bf b}_2$ and $J_3$. This point was already emphasized in ref. \citen{springer}. It is possible that the longer range interactions are not particularly strong in LaMnO$_3$, which has the highest N\'{e}el temperature (${\rm T_N}$, Fig. \ref{fig.intro}) and where the NN interactions clearly dominate. From this point of view, it would be more interesting to consider two other A-type AFM systems, PrMnO$_3$ and NdMnO$_3$, which have smaller ${\rm T_N}$ and where the relative contribution of the longer range interactions to the magnon spectra is expected to be stronger. Although the proposed model is able to unveil the microscopic origin of the magnetic phase transition, the quantitative agreement with the experimental data crucially depends on the combination of the following three factors: \begin{itemize} \item[$\bullet$] the correlation effects beyond the HF approximation; \item[$\bullet$] the magnetic polarization of the oxygen sites; \item[$\bullet$] the lattice relaxation in the E-phase, which breaks the inversion symmetry and gives rise to the multiferroic behavior. \end{itemize} The detailed analysis of these effects presents an interesting and important problem for future investigations. \section*{Acknowledgment} I am grateful to Zlata Pchelkina for valuable discussions and the help with preparation of Figs. \ref{fig.OrbitalOrdering} and \ref{fig.OrbitalOrderingMnOMn}. The work is partly supported by Grant-in-Aid for Scientific Research in Priority Area ``Anomalous Quantum Materials'' and Grant-in-Aid for Scientific Research (C) No. 20540337 from the Ministry of Education, Culture, Sport, Science and Technology of Japan.
\section{Introduction} Tensorized Chebyshev interpolation underlies various algorithms for computational problems in high dimensions. The Chebyshev interpolation of a function $f$ is the more beneficial the higher the cost of evaluating $f$ itself is. The computational cost of obtaining the coefficients of the Chebyshev interpolation scales directly with the cost of evaluating $f$. For computationally challenging high-dimensional problems, these costs become a bottleneck for the implementation of the interpolation. In these situations it is crucial to use the least number of nodal points possible to achieve a pre-specified accuracy. One valuable application is the quantification of parameter uncertainty for high-dimensional integrals that require Monte-Carlo simulations. Here, computationally expensive integrals have to be evaluated for a large set of different parameters. At this point interpolation in the parameter space promises to be highly beneficial as shown in \cite{GassGlauMahlstedtMair2016}. In this paper, we provide an improved error bound for the Chebyshev interpolation of analytic functions. \cite{SauterSchwab2004} derive an error bound for the tensorized Chebyshev interpolation. Their proof relies on a method for error estimation for analytic integrands from \cite{davis1975interpolation}. In \cite{GassGlauMahlstedtMair2016} the result of \cite{SauterSchwab2004} has been slightly improved. The error bound is connected to the radius $\varrho$ of a Bernstein ellipse, and in the one-dimensional case \cite{Trefethen2013} presents a different approach which goes back to \cite{Bernstein1912}. In \cite{borm2010efficient} error bounds are presented for the case when the derivatives of the function $f$ are bounded. In this paper we assume $f$ to be analytic. We iteratively extend the one-dimensional result shown in \cite{Trefethen2013} to the multivariate case by induction over the dimension. The resulting nested structure of the proof reaches a certain complexity and therefore requires more space than the proof in \cite{SauterSchwab2004}. Finally, we present the new error bound as a combination of this result with the result from \cite{SauterSchwab2004} and \cite{GassGlauMahlstedtMair2016}. We furthermore discuss examples that show a significant improvement of the new error bound. In Section \ref{sec-main_result}, we present the main mathematical result and discuss it. Section \ref{sec-Proofs} provides the proof and, finally, Section \ref{sec-conclusion} concludes. \section{Main result}\label{sec-main_result} In this section, we provide our main result, the improved error bound for the multivariate Chebyshev interpolation. The main result in Theorem \ref{Asymptotic_error_decay_multidim_combined} is a combination of two error bounds. On the one hand, we use an extension of the result of \cite{SauterSchwab2004} as shown in \cite{GassGlauMahlstedtMair2016}. On the other hand, we extend the one-dimensional result presented in \cite{Trefethen2013} iteratively to the multivariate case. We consider the tensor-based extension of Chebyshev polynomial interpolation of functions $f:\mathcal{X}\rightarrow\mathbb{R}$, $\mathcal{X}=[\underline{x}_{1},\overline{x}_1]\times\ldots \times[\underline{x}_D,\overline{x}_D]\subset\mathbb{R}^D$, as in e.g. \cite{SauterSchwab2004}. For notational ease we introduce the polynomials for $\mathcal{X}=[-1,1]^D$ with the obvious extension to general hyperrectangles by the appropriate linear transforms.
Let $\overline{N}:=(N_1,\ldots,N_D)$ with $N_i \in\mathds N_0$ for $i=1,\ldots,D$. The interpolation with $\prod_{i=1}^D (N_{i}+1)$ summands is given by \begin{equation} I_{\overline{N}}(f)(x) := \sum_{j\in J} c_jT_j(x), \end{equation} where the function variable $x=(x_1,\dots, x_D)'\in [-1,1]^D$ and the summation index $j$ is a multiindex ranging over $J:=\{(j_1,\dots, j_D)\in\mathds N_0^D: j_i\le N_i\,\text{for }i=1,\ldots,D\}$. For $j=(j_1,\dots, j_D)\in J$, the basis functions are defined as $T_{j}(x_1,\dots,x_D) = \prod_{i=1}^D T_{j_i}(x_i)$ and the coefficients are given by \begin{equation} \label{def:Chebycj} c_j = \Big( \prod_{i=1}^D \frac{2^{\1_{\{0<j_i<N_i\}}}}{N_i}\Big)\sum_{k_1=0}^{N_1}{}^{''}\ldots\sum_{k_D=0}^{N_D}{}^{''} f(x^{(k_1,\dots,k_D)})\prod_{i=1}^D \cos\left(j_i\pi\frac{k_i}{N_i}\right), \end{equation} where $\sum{}^{''}$ indicates that the first and last summand are halved and the Chebyshev nodes $x^k$ for multiindex $k=(k_1,\dots,k_D)\in J$ are given by $x^k = (x_{k_1},\dots,x_{k_D})$ with the univariate Chebyshev nodes $x_{k_i}=\cos\left(\pi\frac{k_i}{N_i}\right)$ for $k_i=0,\ldots,N_i$ and $i=1,\ldots,D$. For a hyperrectangle $\mathcal{X}\subset\mathbb{R}^D$ and parameter vector $\varrho\in(1,\infty)^D$, we define the \textit{generalized Bernstein ellipse} by \begin{align}\label{eq-genB} B(\mathcal{X},\varrho):=B([\underline{x}_1,\olp[1]],\varrho_1)\times\ldots\times B([\underline{x}_D,\olp[D]],\varrho_D ), \end{align} where $B([\underline{x},\olp],\varrho):=\tau_{[\underline{x},\olp]}\circ B([-1,1],\varrho)$, with the transform $\tau_{[\underline{x},\olp]}\big(\Re(x)\big):=\overline{x} + \frac{\underline{x}-\olp}{2}\big(1-\Re(x)\big)$ and $\tau_{[\underline{x},\olp]}\big(\Im(x)\big):= \frac{\olp-\underline{x}}{2}\Im(x)$ for all $x\in\mathds C$ and Bernstein ellipses $B([-1,1],\varrho_i)$ for $i=1,\ldots,D$. \begin{theorem}\label{Asymptotic_error_decay_multidim_combined} Let $f:\mathcal{X}\rightarrow\mathbb{R}$ have an analytic extension to some generalized Bernstein ellipse $B(\mathcal{X},\varrho)$ for some parameter vector $\varrho\in (1,\infty)^D$ with $\max_{x\in B(\mathcal{X},\varrho)}|f(x)|\le V<\infty$. Then \begin{align*} \max_{x\in\mathcal{X}}\big|f(x)& - I_{\overline{N}}(f)(x)\big|\le\min\{a(\varrho,N,D),b(\varrho,N,D)\}, \end{align*} where, denoting by $S_D$ the symmetric group on $D$ elements, \begin{align*} a(\varrho,N,D)&=\min_{\sigma\in S_D}\sum_{i=1}^D 4V\frac{\varrho_{\sigma(i)}^{-N_i}}{\varrho_{\sigma(i)}-1} + \sum_{k=2}^D 4V\frac{\varrho_{\sigma(k)}^{-N_k}}{\varrho_{\sigma(k)}-1}\cdot 2^{k-1} \frac{(k-1) + 2^{k-1}-1}{\prod_{j=1}^{k-1}(1-\frac{1}{\varrho_{\sigma(j)}})},\\ b(\varrho,N,D)&=2^{\frac{D}{2}+1}\cdot V \cdot\left(\sum_{i=1}^D\varrho_i^{-2N_i}\prod_{j=1}^D\frac{1}{1-\varrho_j^{-2}}\right)^{\frac{1}{2}}. \end{align*} \end{theorem} \begin{proof} The bound $\max_{x\in\mathcal{X}}\big|f(x) - I_{\overline{N}}(f)(x)\big|\le b(\varrho,N,D)$ follows from \cite[Theorem 2]{GassGlauMahlstedtMair2016} as extension of \cite{SauterSchwab2004}. We show $\max_{x\in\mathcal{X}}\big|f(x) - I_{\overline{N}}(f)(x)\big|\le a(\varrho,N,D)$ in Section \ref{sec-Proofs} in Proposition \ref{Asymptotic_error_decay_multidim_new_permu}. Combining both results yields the assertion of the theorem. \end{proof}
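Before turning to the examples, we remark that the interpolation $I_{\overline{N}}(f)$ with coefficients \eqref{def:Chebycj} is straightforward to implement. The following minimal Python sketch (our own illustration for $D=2$ on $[-1,1]^2$; the test function is arbitrary) computes the coefficients and evaluates the interpolant:
\begin{verbatim}
import numpy as np

def cheb_nodes(N):
    # Chebyshev nodes x_k = cos(pi*k/N), k = 0,...,N
    return np.cos(np.pi * np.arange(N + 1) / N)

def cheb_coeffs_2d(f, N1, N2):
    # Coefficients c_j for D = 2; the first and last summands of each
    # sum are halved (the double-prime convention).
    x1, x2 = cheb_nodes(N1), cheb_nodes(N2)
    F = f(x1[:, None], x2[None, :])
    w1 = np.ones(N1 + 1); w1[[0, -1]] = 0.5
    w2 = np.ones(N2 + 1); w2[[0, -1]] = 0.5
    k1, k2 = np.arange(N1 + 1), np.arange(N2 + 1)
    C = np.empty((N1 + 1, N2 + 1))
    for j1 in range(N1 + 1):
        for j2 in range(N2 + 1):
            p1 = (2.0 if 0 < j1 < N1 else 1.0) / N1
            p2 = (2.0 if 0 < j2 < N2 else 1.0) / N2
            c1 = w1 * np.cos(j1 * np.pi * k1 / N1)
            c2 = w2 * np.cos(j2 * np.pi * k2 / N2)
            C[j1, j2] = p1 * p2 * (c1 @ F @ c2)
    return C

def cheb_eval_2d(C, y1, y2):
    # Evaluate I_N(f)(y) = sum_j c_j T_{j1}(y1) T_{j2}(y2)
    T1 = np.cos(np.arange(C.shape[0]) * np.arccos(y1))
    T2 = np.cos(np.arange(C.shape[1]) * np.arccos(y2))
    return T1 @ C @ T2

f = lambda u, v: np.exp(u) * np.cos(v)   # analytic test function
C = cheb_coeffs_2d(f, 10, 10)
print(abs(cheb_eval_2d(C, 0.3, -0.7) - f(0.3, -0.7)))  # ~1e-10
\end{verbatim}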
The examples below show that $\min\{a(\varrho,N,D),b(\varrho,N,D)\}$ improves both error bounds $a(\varrho,N,D)$ and $b(\varrho,N,D)$. Noticing that both bounds are scaled with the factor $V$, we set $V=1$; moreover, we choose $D=2$. \begin{example}\label{Example_1} For $\varrho_1=2.3$ and $\varrho_2=1.8$, and $N_1=N_2=10$, we have $b(\varrho,N,D)=0.0018$ and $a(\varrho,N,D)=0.0066$. Therefore, in this example the error bound $b(\varrho,N,D)$ is sharper. \end{example} \begin{example}\label{Example_2} If we slightly change the setting from Example \ref{Example_1} to $\varrho_1=2.3$ and $\varrho_2=2.5$, and $N_1=N_2=10$, then the resulting error bounds are $b(\varrho,N,D)=0.0017$ and $a(\varrho,N,D)=0.0011$ and thus, the latter is the sharper error bound. \end{example} As shown in Examples \ref{Example_1} and \ref{Example_2}, slight changes in the domain of analyticity and, thus, the radii of the Bernstein ellipses, may reverse the order of $a(\varrho,N,D)$ and $b(\varrho,N,D)$. Figure \ref{fig:Same_Rho} displays both error bounds $a(\varrho,N,D)$ and $b(\varrho,N,D)$ for varying $\varrho$ with $\varrho_1=\varrho_2$, $N_1=N_2=10$. We observe that both error bounds intersect at $\varrho_1=\varrho_2\approx2.800882$. For smaller values of $\varrho$, the sharper error bound is $b(\varrho,N,D)$, whereas for higher values $a(\varrho,N,D)$ is sharper. \begin{figure}[htb!] \includegraphics[width=0.8\textwidth, center]{Comparison_same_rho.eps} \caption{Comparison of the error bounds $b(\varrho,N,D)$ (blue, dashed) and $a(\varrho,N,D)$ (red) by setting $\varrho_1=\varrho_2$ and $N_1=N_2=10$. At $\varrho_1=\varrho_2\approx2.800882$ both error bounds intersect.} \label{fig:Same_Rho} \end{figure} So far, the examples indicate that for a smaller radius of the Bernstein ellipse, $b(\varrho,N,D)$ tends to be the better error bound and that for higher radii of the Bernstein ellipses or for strongly differing radii, $a(\varrho,N,D)$ tends to be the sharper error bound. Our last example shows a situation where, thanks to Theorem \ref{Asymptotic_error_decay_multidim_combined}, fewer nodes are required to guarantee a pre-specified accuracy. \begin{example}\label{Example_3} Let the radii of the Bernstein ellipse be $\varrho_1=2.95$ and $\varrho_2=9.8$. Assuming $V=1$, we are interested in achieving an accuracy of $\varepsilon\le 2\cdot 10^{-4}$. To achieve $b(\varrho,N,D)\le\varepsilon$, we have to set $N_1=11$ and $N_2=5$. For achieving $a(\varrho,N,D)\le\varepsilon$, we have to set $N_1=8$ and $N_2=4$. Instead of the $72=(11+1)\cdot(5+1)$ nodal points required when applying the error bound $b(\varrho,N,D)$, we only need to use $45=(8+1)\cdot(4+1)$ nodal points when applying the error bound $a(\varrho,N,D)$. \end{example} Example \ref{Example_3} highlights the potential of using fewer nodal points to achieve a desired accuracy by comparing both error bounds. Especially when the evaluation of the interpolated function at the nodal points is challenging, this reduces the computational costs noticeably. This particularly arises for Chebyshev interpolation combined with Monte-Carlo simulation for high-dimensional parametric integration as shown in \cite{GassGlauMahlstedtMair2016}. Summarizing, Theorem \ref{Asymptotic_error_decay_multidim_combined} improves the error bounds $a(\varrho,N,D)$ and $b(\varrho,N,D)$ significantly. \section{Proofs}\label{sec-Proofs} In the following, we present our approach to derive the error bound $a(\varrho,N,D)$ in Theorem \ref{Asymptotic_error_decay_multidim_combined}.
Whereas in the proof of \cite[Lemma 7.3.3]{SauterSchwab2004} an orthonormal system of appropriately scaled Chebyshev polynomials has been used and each $\varrho_i$ is weighted equally, we will now extend the one-dimensional result in \cite[Theorem 8.2]{Trefethen2013} by induction over the dimension $D$. In each iteration step the interpolation in one additional variable is added consecutively. \begin{proposition}\label{Asymptotic_error_decay_multidim_new_permu} Let $f:\mathcal{X}\rightarrow\mathbb{R}$ have an analytic extension to some generalized Bernstein ellipse $B(\mathcal{X},\varrho)$ for some parameter vector $\varrho\in (1,\infty)^D$ with\\ $\max_{x\in B(\mathcal{X},\varrho)}|f(x)|\le V<\infty$. Then \begin{align*} \max_{x\in\mathcal{X}}\big|f(x) - &I_{\overline{N}}(f)(x)\big| \\ &\le\min_{\sigma\in S_D}\sum_{i=1}^D 4V\frac{\varrho_{\sigma(i)}^{-N_i}}{\varrho_{\sigma(i)}-1} + \sum_{k=2}^D 4V\frac{\varrho_{\sigma(k)}^{-N_k}}{\varrho_{\sigma(k)}-1}\cdot 2^{k-1} \frac{(k-1) + 2^{k-1}-1}{\prod_{j=1}^{k-1}(1-\frac{1}{\varrho_{\sigma(j)}})}, \end{align*} where $S_D$ denotes the symmetric group on $D$ elements. \end{proposition} \begin{proof} We show the statement for an arbitrary $\sigma\in S_D$ and for ease of notation we use $\sigma(i)=i$ for $i=1,\ldots,D$. Obviously, we can iteratively interpolate in the variables in such a way that the error bound is minimized by choosing the corresponding $\sigma\in S_D$. We prove the assertion of the proposition via induction over the dimension $D$ of the parameter domain. We assume the function $f$ is analytic in $[-1,1]^D$ and is analytically extendable to the open Bernstein ellipse $B([-1,1]^D,\varrho)$. For $D=1$ and $\mathcal{X}=[-1,1]$ the proof of the assertion is presented in \cite[Theorem 8.2]{Trefethen2013}. The generalization of the assertion to the case of a general parameter interval $\mathcal{X}\subset \mathds R$ is elementary and follows from a linear transformation as described in \cite[Proof of Theorem 2.2]{GassGlauMahlstedtMair2016}. The key idea of the proof is to use the triangle inequality to estimate the interpolation error in $D+1$ components via the interpolation error in the $(D+1)$-th component of the original function and the interpolation in the $(D+1)$-th component of the function that has already been interpolated in the first $D$ components. Hereby, in both cases the issue is basically reduced to a one-dimensional interpolation and the known theory from \cite[Theorem 8.2]{Trefethen2013} can be applied. The crucial step is to derive the bound, on the corresponding Bernstein ellipse, for the function already interpolated in $D$ components. Let us now assume the assertion is proven for dimensions $1,\ldots,D$. Let $\mathcal{X}^{D+1}:=[\underline{x}_1,\olp[1]]\times\ldots\times[\underline{x}_{D+1},\olp[D+1]]$ and let $f:\mathcal{X}^{D+1}\rightarrow\mathds R$ have an analytic extension to the generalized Bernstein ellipse $B(\mathcal{X}^{D+1},\varrho^{D+1})$ for some parameter vector $\varrho^{D+1}\in (1,\infty)^{D+1}$ and let $\max_{x\in B(\mathcal{X}^{D+1},\varrho^{D+1})}|f(x)|\le V$. To set up notation, we write $x_1^D=(x_1,\ldots,x_D)$ and define in the following the Chebyshev interpolation operators. For interpolation only in the $i$-th component with $N$ Chebyshev points, \begin{align*} I_N^i(f)(x_1^{D+1}):=I_N(f(x_1,\ldots,x_{i-1},\cdot,x_{i+1},\ldots,x_{D+1}))(x_i).
\end{align*} Analogously, interpolation only in the $j$ components $k_1,\ldots,k_j$ with $N_{k_1},\ldots,N_{k_j}$ Chebyshev points is denoted by \begin{align*} I_{N_{k_1},\ldots,N_{k_j}}^{k_1,\ldots,k_j}(f)(x_1^{D+1}):=I_{N_{k_j}}^{k_j}\circ\ldots\circ I_{N_{k_1}}^{k_1}(f)(x_1^{D+1}), \end{align*} and finally, the interpolation in all $D+1$ components with $N_1,\ldots, N_{D+1}$ Chebyshev points is \begin{align*} I_{N_1,\ldots, N_{D+1}}(f)(x_1^{D+1}):=I_{N_{D+1}}^{D+1}\circ\ldots\circ I_{N_{1}}^{1}(f)(x_1^{D+1}). \end{align*} In the following the norm $|\cdot|$ denotes the $\infty$-norm on $[-1,1]^{D+1}$. We are interested in the interpolation error \begin{align*} &|f(x_1^{D+1}) - I_{N_1,\ldots, N_{D+1}}(f)(x_1^{D+1})|\\ &\quad\quad\quad\le|f(x_1^{D+1}) - I^{D+1}_{N_{D+1}}(f)(x_1^{D+1})|+|I^{D+1}_{N_{D+1}}(f)(x_1^{D+1})-I_{N_1,\ldots, N_{D+1}}(f)(x_1^{D+1})|. \end{align*} We first show that the first part, which corresponds to a one-dimensional interpolation, is bounded by (cf. \cite[Theorem 8.2]{Trefethen2013}) \begin{align} |f(x_1^{D+1}) - I^{D+1}_{N_{D+1}}(f)(x_1^{D+1})|\le 4V\frac{\varrho_{D+1}^{-N_{D+1}}}{\varrho_{D+1}-1}.\label{result_1D_d1} \end{align} In order to derive \eqref{result_1D_d1}, we have to show that the coefficients of the Chebyshev polynomial interpolation are bounded. Following \cite{Trefethen2013}, the coefficient $a_{k_{D+1}}$, which still depends on $x_1^D$, is defined as \begin{align*} a_{k_{D+1}}:=\frac{2^{\mathbbm{1}_{k_{D+1}>0}}}{\pi} \int_{-1}^1\frac{f(x_1^{D+1})T_{k_{D+1}}(x_{D+1})}{\sqrt{1-x_{D+1}^2}}dx_{D+1}. \end{align*} By using the same transformation as in the proof of \cite[Theorem 8.1]{Trefethen2013}, just adapted to the multidimensional setting, i.e. \begin{align*} x_i&=\frac{z_i+z_i^{-1}}{2},\quad i=1,\ldots,D+1,\\ F(z_1,\ldots,z_{D+1})&=F(z_1^{-1},\ldots,z_{D+1}^{-1})=f(x_1,\ldots, x_{D+1}), \end{align*} we obtain for the coefficient $a_{k_{D+1}}$ the estimate \begin{align*} |a_{k_{D+1}}|=\left|\frac{2^{-\mathbbm{1}_{k_{D+1}=0}}}{\pi i}\int_{|z_{D+1}|=\varrho_{D+1}}z_{D+1}^{-1-k_{D+1}}F(z_1,\ldots,z_{D+1})dz_{D+1}\right|. \end{align*} Here, we use that $F$ is bounded by the same constant as $f$, which is given by assumption, $\sup_{x\in B([-1,1]^{D+1},\varrho)}|f(x)|\le V$. Therefore, analogously to \cite[Theorem 8.1]{Trefethen2013}, this leads to \begin{align} |a_{k_{D+1}}|\le 2\varrho_{D+1}^{-k_{D+1}}V.\label{Coefficient_Ddim} \end{align} This estimate can be used to derive \eqref{result_1D_d1} applying \cite[Theorem 8.2]{Trefethen2013}. For the second part we use \begin{align*} |I^{D+1}_{N_{D+1}}(f)(x_1^{D+1})-I_{N_1,\ldots, N_{D+1}}(f)(x_1^{D+1})|=|I^{D+1}_{N_{D+1}}(f-I_{N_1,\ldots, N_{D}}^{1,\ldots,D}(f)(x_1^{D+1}))(x_1^{D+1})|. \end{align*} At this point we again apply the triangle inequality and obtain \begin{align} |I^{D+1}_{N_{D+1}}&(f-I_{N_1,\ldots, N_{D}}^{1,\ldots,D}(f)(x_1^{D+1}))(x_1^{D+1})|\notag\\ &\le|I^{D+1}_{N_{D+1}}(f-I_{N_1,\ldots, N_{D}}^{1,\ldots,D}(f)(x_1^{D+1}))(x_1^{D+1})\label{Induktion_Step1}-(f-I_{N_1,\ldots, N_{D}}^{1,\ldots,D}(f)(x_1^{D+1}))|\\ &\quad+|(f-I_{N_1,\ldots, N_{D}}^{1,\ldots,D}(f)(x_1^{D+1}))|\notag. \end{align} The term \eqref{Induktion_Step1} is basically an interpolation in the $(D+1)$-th component of the function $(f-I_{N_1,\ldots, N_{D}}^{1,\ldots,D}(f)(x_1^{D+1}))$. An upper bound $\mathcal{M}(D)$ for this function is given in Lemma \ref{Lemma_Upper_Bound}.
With this bound we can estimate the error of interpolating $(f(x_1^{D+1})-I_{N_1,\ldots, N_{D}}^{1,\ldots,D}(f)(x_1^{D+1}))$ in the component $D+1$, \begin{align*} |I^{D+1}_{N_{D+1}}&(f-I_{N_1,\ldots, N_{D}}^{1,\ldots,D}(f)(x_1^{D+1}))(x_1^{D+1})- (f-I_{N_1,\ldots, N_{D}}^{1,\ldots,D}(f)(x_1^{D+1}))|\\ &\le 4\mathcal{M}(D)\frac{\varrho_{D+1}^{-N_{D+1}}}{\varrho_{D+1}-1}. \end{align*} The term $|(f(x_1^{D+1})-I_{N_1,\ldots, N_{D}}^{1,\ldots,D}(f)(x_1^{D+1}))|$ is the interpolation error in $D$ dimensions, which by our induction hypothesis is bounded by some constant $B(D)>0$ depending on $D$, i.e. \begin{align}|(f-I_{N_1,\ldots, N_{D}}^{1,\ldots,D}(f)(x_1^{D+1}))|\le B(D),\quad B(D)>0.\label{Old_error} \end{align} Collecting all parts, we obtain for the error of our interpolation in $D+1$ components \begin{align*} |f(x_1^{D+1}) - I_{N_1,\ldots, N_{D+1}}(f)(x_1^{D+1})|\le4V\frac{\varrho_{D+1}^{-N_{D+1}}}{\varrho_{D+1}-1}+B(D)+4\mathcal{M}(D)\frac{\varrho_{D+1}^{-N_{D+1}}}{\varrho_{D+1}-1}. \end{align*} Finally, if we start with $D=1$ and apply the presented procedure step-wise, we get via straightforward induction \begin{align*} &B(D)=\sum_{i=1}^D 4V\frac{\varrho_i^{-N_i}}{\varrho_i-1} + \sum_{k=2}^D 4\mathcal{M}(k-1)\frac{\varrho_k^{-N_k}}{\varrho_k-1}. \end{align*} Naturally, we can further estimate the error by using $\frac{s_i}{\varrho_i}<1$ and, respectively, $(1-\frac{s_i}{\varrho_i})<1$ in the numerator, \begin{align*} B(D)&\le \sum_{i=1}^D 4V\frac{\varrho_i^{-N_i}}{\varrho_i-1} + \sum_{k=2}^D 4V\frac{\varrho_k^{-N_k}}{\varrho_k-1}\cdot 2^{k-1} \frac{(k-1) + 2^{k-1}-1}{\prod_{j=1}^{k-1}(1-\frac{s_j}{\varrho_j})}. \end{align*} Recalling the definition $s_i=1+\epsilon$ with $\epsilon\in(0,\min_{j=1,\ldots,D}\varrho_j -1)$, this bound holds for any such $\epsilon$ and therefore also in the limit $\epsilon\to 0$: \begin{align*} B(D)\le&\lim_{\epsilon\to 0}\sum_{i=1}^D 4V\frac{\varrho_i^{-N_i}}{\varrho_i-1} + \sum_{k=2}^D 4V\frac{\varrho_k^{-N_k}}{\varrho_k-1}\cdot 2^{k-1} \frac{(k-1) + 2^{k-1}-1}{\prod_{j=1}^{k-1}(1-\frac{1+\epsilon}{\varrho_j})}\\ =&\sum_{i=1}^D 4V\frac{\varrho_i^{-N_i}}{\varrho_i-1} + \sum_{k=2}^D 4V\frac{\varrho_k^{-N_k}}{\varrho_k-1}\cdot 2^{k-1} \frac{(k-1) + 2^{k-1}-1}{\prod_{j=1}^{k-1}(1-\frac{1}{\varrho_j})}. \end{align*} \end{proof} In the following lemmata, we use the notation $x_1^M=(x_1,\ldots,x_M)$ and the convention $\frac{N}{0}=\infty,\ N\in\mathbb{N}^{+}$. \begin{lemma} \label{Lemma_Trefethen_Polynom_1D} Let $\mathcal{X}\ni x_1^M \mapsto f(x_1^M)$ be a real-valued function that has an analytic extension to some generalized Bernstein ellipse $B(\mathcal{X},\varrho)$ for some parameter vector $\varrho\in (1,\infty)^{M}$.\\ Then the Chebyshev polynomial interpolation $I_N^1(f)(x_1^M)$ is given by \begin{align} I_N^1(f)(x_1^M)&=\sum_{k=0}^{N}a_k(x_2^M) T_k(x_1)+\sum_{k=N+1}^{\infty} a_k(x_2^M) T_{m(k,N)}(x_1),\label{Cheby_Series_Error1} \end{align} where $m(k,N)=|((k+N-1)\bmod 2N)-(N-1)|$ and $a_k(x_2^M)=\frac{2^{\mathbbm{1}_{k>0}}}{\pi}\int_{-1}^1 f(x_1^M)\frac{T_k(x_1)}{\sqrt{1-x_1^2}}dx_1$. \end{lemma} \begin{proof} Following \cite[Equation (4.9)]{Trefethen2013}, from aliasing properties of Chebyshev polynomials it follows that \begin{align*} f(x_1^M)-I_N^1(f)(x_1^M)=\sum_{k=N+1}^{\infty} a_k(x_2^M) (T_k(x_1)-T_{m(k,N)}(x_1)).
In the following lemmata, we use the notation $x_1^M=(x_1,\ldots,x_M)$ and the convention $\frac{N}{0}=\infty,\ N\in\mathbb{N}^{+}$.
\begin{lemma} \label{Lemma_Trefethen_Polynom_1D} Let $\mathcal{X}\ni x_1^M \mapsto f(x_1^M)$ be a real valued function that has an analytic extension to some generalized Bernstein ellipse $B(\mathcal{X},\varrho)$ for some parameter vector $\varrho\in (1,\infty)^{M}$.\\ Then the Chebyshev polynomial interpolation $I_N^1(f)(x_1^M)$ is given by
\begin{align} I_N^1(f)(x_1^M)&=\sum_{k=0}^{N}a_k(x_2^M) T_k(x_1)+\sum_{k=N+1}^{\infty} a_k(x_2^M) T_{m(k,N)}(x_1),\label{Cheby_Series_Error1} \end{align}
where $m(k,N)=|(k+N-1)\ (\mathrm{mod}\ 2N)-(N-1)|$ and $a_k(x_2^M)=\frac{2^{\mathbbm{1}_{k>0}}}{\pi}\int_{-1}^1 f(x_1^M)\frac{T_k(x_1)}{\sqrt{1-x_1^2}}dx_1$. \end{lemma}
\begin{proof} Following \cite[Equation (4.9)]{Trefethen2013}, the aliasing properties of Chebyshev polynomials yield
\begin{align*} f(x_1^M)-I_N^1(f)(x_1^M)=\sum_{k=N+1}^{\infty} a_k(x_2^M) (T_k(x_1)-T_{m(k,N)}(x_1)). \end{align*}
By writing the Chebyshev series for $f(x_1^M)$, see \cite{Trefethen2013}, we get
\begin{align*} &\sum_{k=0}^{\infty}a_k(x_2^M) T_k(x_1)-I_N^1(f)(x_1^M)=\sum_{k=N+1}^{\infty} a_k(x_2^M) (T_k(x_1)-T_{m(k,N)}(x_1)), \end{align*}
and rearranging terms yields \eqref{Cheby_Series_Error1}. \end{proof}
\begin{lemma}\label{Lemma_Upper_Bound} Let $\mathcal{X}\ni x_1^{D+1} \mapsto f(x_1^{D+1})$ be a real valued function that has an analytic extension to some generalized Bernstein ellipse $B(\mathcal{X},\varrho)$ for some parameter vector $\varrho\in (1,\infty)^{D+1}$. Then
\begin{align*} &\sup_{x_{D+1}\in B([-1,1],\varrho_{D+1})}|f(x_1^{D+1})-I_{N_1,\ldots, N_{D}}^{1,\ldots,D}(f)(x_1^{D+1})|\le \mathcal{M}(D)\\ &:=2^DV \frac{\sum_{i=1}^D \left(\frac{s_i}{\varrho_i}\right)^{N_i+1}+\sum_{\sigma\in\{0,1\}^D \setminus\{0\}^D}\prod_{\delta:\sigma_{\delta}=0}\left(1-\left(\frac{s_{\delta}}{\varrho_{\delta}}\right)^{N_{\delta}+1}\right) \prod_{\delta:\sigma_{\delta}=1}\left(\frac{s_{\delta}}{\varrho_{\delta}}\right)^{N_{\delta}+1}}{\prod_{j=1}^D(1-\frac{s_j}{\varrho_j})}. \end{align*} \end{lemma}
\begin{proof} Starting with
\begin{align*} \sup_{x_{D+1}\in B([-1,1],\varrho_{D+1})}|f(x_1^{D+1})-I_{N_1,\ldots, N_{D}}^{1,\ldots,D}(f)(x_1^{D+1})|, \end{align*}
we express the interpolation of $f$ in $D$ components as in Lemma \ref{Lemma_Interpolation_D_Components},
\begin{align*} \sup_{x_{D+1}\in B([-1,1],\varrho_{D+1})}\bigg|f(x_1^{D+1}) -\sum_{\sigma\in\{0,1\}^D}\sum_{\delta=1}^D\sum_{k_{\delta}=(N_{\delta}+1)\cdot\sigma_{\delta}}^{\frac{N_{\delta}}{1-\sigma_{\delta}}}I(k_1^{D},x_{D+1})\tau(k_1^{D},\sigma_1^{D},x_1^{D})\bigg|. \end{align*}
Following \cite{Trefethen2013}, and as used in Lemma \ref{Lemma_Trefethen_Polynom_1D}, we can express $f$ in the following way,
\begin{align*} f(x_1^{D+1})=\sum_{\delta=1}^D \sum_{k_{\delta}=0}^{\infty}I(k_1^{D},x_{D+1})\tau(k_1^{D},\sigma_1^{D}=0,x_1^{D}), \end{align*}
leading to
\begin{align*} \sup_{x_{D+1}\in B([-1,1],\varrho_{D+1})}&|f(x_1^{D+1})-I_{N_1,\ldots, N_{D}}^{1,\ldots,D}(f)(x_1^{D+1})|\\ &=\sup_{x_{D+1}\in B([-1,1],\varrho_{D+1})}\bigg|\sum_{\delta=1}^D \sum_{k_{\delta}=0}^{\infty}I(k_1^{D},x_{D+1})\tau(k_1^{D},\sigma_1^{D}=0,x_1^{D})\\ &\quad-\sum_{\sigma\in\{0,1\}^D}\sum_{\delta=1}^D\sum_{k_{\delta}=(N_{\delta}+1)\cdot\sigma_{\delta}}^{\frac{N_{\delta}}{1-\sigma_{\delta}}}I(k_1^{D},x_{D+1})\tau(k_1^{D},\sigma_1^{D},x_1^{D})\bigg|. \end{align*}
In the next step, we split off the part with $\sigma=\{0\}^D$ from the second sum, absorb it into the series representation of $f$, and apply the triangle inequality.
\begin{align*} &\sup_{x_{D+1}\in B([-1,1],\varrho_{D+1})}|f(x_1^{D+1})-I_{N_1,\ldots, N_{D}}^{1,\ldots,D}(f)(x_1^{D+1})|\\ &=\sup_{x_{D+1}\in B([-1,1],\varrho_{D+1})}\bigg|\sum_{i=1}^D \left(\sum_{k_i=N_i+1}^{\infty}\sum_{j=1,j\neq i}^D\sum_{k_{j}=0}^{\infty}I(k_1^{D},x_{D+1})\tau(k_1^{D},\sigma_1^{D}=0,x_1^{D})\right) \\ &\quad-\sum_{\sigma\in\{0,1\}^D \setminus\{0\}^D}\sum_{\delta=1}^D\sum_{k_{\delta}=(N_{\delta}+1)\cdot\sigma_{\delta}}^{\frac{N_{\delta}}{1-\sigma_{\delta}}}I(k_1^{D},x_{D+1})\tau(k_1^{D},\sigma_1^{D},x_1^{D})\bigg|\\ &\le \sup_{x_{D+1}\in B([-1,1],\varrho_{D+1})}\bigg|\sum_{i=1}^D \left(\sum_{k_i=N_i+1}^{\infty}\sum_{j=1,j\neq i}^D\sum_{k_{j}=0}^{\infty}I(k_1^{D},x_{D+1})\tau(k_1^{D},\sigma_1^{D}=0,x_1^{D})\right)\bigg|\\ &\quad+\bigg|\sum_{\sigma\in\{0,1\}^D \setminus\{0\}^D}\sum_{\delta=1}^D\sum_{k_{\delta}=(N_{\delta}+1)\cdot\sigma_{\delta}}^{\frac{N_{\delta}}{1-\sigma_{\delta}}}I(k_1^{D},x_{D+1})\tau(k_1^{D},\sigma_1^{D},x_1^{D})\bigg|. \end{align*}
To estimate the supremum, we first need estimates for $|I(k_1^{D},x_{D+1})|$ and $|\tau(k_1^{D},\sigma_1^{D},x_1^{D})|$. We have
\begin{align*} |I(k_1^{D},&x_{D+1})|=\bigg|\prod_{i=1}^D\frac{2^{\mathbbm{1}_{k_i>0}}}{\pi}\int_{[-1,1]^D}f(x_1^{D+1})\prod_{j=1}^D\frac{T_{k_j}(x_j)}{\sqrt{1-x_j^2}}d(x_1^D)\bigg|\\ =&\bigg|\prod_{i=2}^D\frac{2^{\mathbbm{1}_{k_i>0}}}{\pi}\int_{[-1,1]^{D-1}}\frac{2^{\mathbbm{1}_{k_1>0}}}{\pi}\int_{-1}^1f\frac{T_{k_1}(x_1)}{\sqrt{1-x_1^2}}d(x_1)\prod_{j=2}^D\frac{T_{k_j}(x_j)}{\sqrt{1-x_j^2}}d(x_2^D)\bigg|. \end{align*}
Analogously to the derivation of the estimate \eqref{Coefficient_Ddim}, we can bound the integral with respect to $x_1$ as $\left|\frac{2^{\mathbbm{1}_{k_1>0}}}{\pi}\int_{-1}^1f\frac{T_{k_1}(x_1)}{\sqrt{1-x_1^2}}dx_1\right|\le 2V\varrho_1^{-k_1}$. The remaining $(D-1)$-dimensional integral can be estimated in a similar way as $D-1$ one-dimensional integrals with $V=1$. Altogether, this results in the following estimate for $|I(k_1^{D},x_{D+1})|$,
\begin{align*} |I(k_1^{D},x_{D+1})|\le 2^D V \prod_{i=1}^D\varrho_{i}^{-k_i}. \end{align*}
For $|\tau(k_1^{D},\sigma_1^{D},x_1^{D})|$, we make use of Bernstein's inequality, using that the norm of each Chebyshev polynomial is bounded by 1 on $[-1,1]$. For each $i=1,\ldots,D$ we choose a Bernstein ellipse with radius $s_i$ such that $1<s_i<\varrho_i$. Here, we define $s_i=1+\epsilon$, and this yields for $x:\ x_i\in B([-1,1],s_i),\ i=1,\ldots,D$,
\begin{align*} |\tau(k_1^{D},\sigma_1^{D},x_1^{D})|=\prod_{\delta:\sigma_{\delta}=0}|T_{k_{\delta}}(x_{\delta})|\prod_{\delta:\sigma_{\delta}=1}|T_{m_{\delta}(k_{\delta})}(x_{\delta})|\le\prod_{\delta:\sigma_{\delta}=0}s_{\delta}^{k_{\delta}}\prod_{\delta:\sigma_{\delta}=1}s_{\delta}^{m_{\delta}(k_{\delta})}. \end{align*}
By definition, it holds that $m_{\delta}(k_{\delta})\le k_{\delta}$. This leads to
\begin{align*} |\tau(k_1^{D},\sigma_1^{D},x_1^{D})|\le&\prod_{i=1}^D s_i^{k_i}. \end{align*}
Using both estimates leads to
\begin{align*} \sup_{x_{D+1}\in B([-1,1],\varrho_{D+1})}&|f(x_1^{D+1})-I_{N_1,\ldots, N_{D}}^{1,\ldots,D}(f)(x_1^{D+1})|\\ &\le \sup_{x_{D+1}\in B([-1,1],\varrho_{D+1})}\bigg|\sum_{i=1}^D \left(\sum_{k_i=N_i+1}^{\infty}\sum_{j=1,j\neq i}^D\sum_{k_{j}=0}^{\infty}2^DV\prod_{l=1}^D\left(\frac{s_l}{\varrho_l}\right)^{k_l} \right)\bigg|\\ &\quad\quad +\bigg|\sum_{\sigma\in\{0,1\}^D \setminus\{0\}^D}\sum_{\delta=1}^D\sum_{k_{\delta}=(N_{\delta}+1)\cdot\sigma_{\delta}}^{\frac{N_{\delta}}{1-\sigma_{\delta}}}2^DV\prod_{l=1}^D\left(\frac{s_l}{\varrho_l}\right)^{k_l}\bigg|.
\end{align*}
Since $s_i<\varrho_i$, we can apply the convergence results for the geometric series. This leads to
\begin{align*} &\sup_{x_{D+1}\in B([-1,1],\varrho_{D+1})}|f(x_1^{D+1})-I_{N_1,\ldots, N_{D}}^{1,\ldots,D}(f)(x_1^{D+1})|\\ &\quad\quad\quad\quad\le\mathcal{M}(D):= \sup_{x_{D+1}\in B([-1,1],\varrho_{D+1})} \bigg|2^DV\sum_{i=1}^D\frac{\left(\frac{s_i}{\varrho_i}\right)^{N_i+1}}{\prod_{j=1}^D(1-\frac{s_j}{\varrho_j})}\bigg|\\ &\quad\quad\quad\quad\quad +\bigg|2^DV \sum_{\sigma\in\{0,1\}^D \setminus\{0\}^D} \frac{\prod_{\delta:\sigma_{\delta}=0}\left(1-\left(\frac{s_{\delta}}{\varrho_{\delta}}\right)^{N_{\delta}+1}\right) \prod_{\delta:\sigma_{\delta}=1}\left(\frac{s_{\delta}}{\varrho_{\delta}}\right)^{N_{\delta}+1}}{\prod_{j=1}^D(1-\frac{s_j}{\varrho_j})}\bigg|\\ &=2^DV \frac{\sum_{i=1}^D \left(\frac{s_i}{\varrho_i}\right)^{N_i+1}+\sum_{\sigma\in\{0,1\}^D \setminus\{0\}^D}\prod_{\delta:\sigma_{\delta}=0}\left(1-\left(\frac{s_{\delta}}{\varrho_{\delta}}\right)^{N_{\delta}+1}\right) \prod_{\delta:\sigma_{\delta}=1}\left(\frac{s_{\delta}}{\varrho_{\delta}}\right)^{N_{\delta}+1}}{\prod_{j=1}^D(1-\frac{s_j}{\varrho_j})}. \end{align*} \end{proof}
\begin{lemma} \label{Lemma_Interpolation_D_Components} Let $\mathcal{X}\ni x_1^M \mapsto f(x_1^M)$ be a real valued function that has an analytic extension to some generalized Bernstein ellipse $B(\mathcal{X},\varrho)$ for some parameter vector $\varrho\in (1,\infty)^{M}$. For $D\le M$ let
\begin{align*} I(k_1^{D},x_{D+1}^{M})&=\prod_{i=1}^D\frac{2^{\mathbbm{1}_{k_i>0}}}{\pi}\int_{[-1,1]^D}f(x_1^{M})\prod_{j=1}^D\frac{T_{k_j}(x_j)}{\sqrt{1-x_j^2}}d(x_1,\ldots,x_D),\\ \tau(k_1^{D},\sigma_1^{D},x_1^{D})&=\prod_{\delta:\sigma_{\delta}=0}T_{k_{\delta}}(x_{\delta})\prod_{\delta:\sigma_{\delta}=1}T_{m_{\delta}(k_{\delta})}(x_{\delta}). \end{align*}
Then the interpolation of $f(x_1^{M})$ in $D$ components is given by
\begin{align*} I_{N_1,\ldots,N_D}^{1,\ldots,D}(f)(x_1^M)=\sum_{\sigma\in\{0,1\}^D}\sum_{\delta=1}^D\sum_{k_{\delta}=(N_{\delta}+1)\cdot\sigma_{\delta}}^{\frac{N_{\delta}}{1-\sigma_{\delta}}}I(k_1^{D},x_{D+1}^{M})\tau(k_1^{D},\sigma_1^{D},x_1^{D}). \end{align*} \end{lemma}
\begin{proof} We prove this lemma by induction over the dimension $D$. For $D=1$, it follows from Lemma \ref{Lemma_Trefethen_Polynom_1D} that
\begin{align*} I_{N_1}^{1}(f)(x_1^M)=&\sum_{k_1=0}^{N_1}\frac{2^{\mathbbm{1}_{k_1>0}}}{\pi}\int_{[-1,1]}f(x_1^{M})\frac{T_{k_1}(x_1)}{\sqrt{1-x_1^2}}dx_1 T_{k_1}(x_1)\\ &\quad\quad+\sum_{k_1=N_1+1}^{\infty}\frac{2^{\mathbbm{1}_{k_1>0}}}{\pi}\int_{[-1,1]}f(x_1^{M})\frac{T_{k_1}(x_1)}{\sqrt{1-x_1^2}}dx_1 T_{m_1}(x_1). \end{align*}
In the notation introduced above, this reads for $D=1$
\begin{align*} I_{N_1}^{1}(f)(x_1^M)=\sum_{\sigma\in\{0,1\}}\sum_{\delta=1}^1\sum_{k_{\delta}=(N_{\delta}+1)\cdot\sigma_{\delta}}^{\frac{N_{\delta}}{1-\sigma_{\delta}}}I(k_1^1,x_2^M)\tau(k_1^1,\sigma_1^1,x_1^1). \end{align*}
For the induction step from $D-1$ to $D$, we assume that the interpolation in $D-1$ components is given by
\begin{align*} I_{N_1,\ldots,N_{D-1}}^{1,\ldots,{D-1}}(f)(x_1^M)=\sum_{\sigma\in\{0,1\}^{D-1}}\sum_{\delta=1}^{D-1}\sum_{k_{\delta}=(N_{\delta}+1)\cdot\sigma_{\delta}}^{\frac{N_{\delta}}{1-\sigma_{\delta}}}I(k_1^{D-1},x_{D}^M)\tau(k_1^{D-1},\sigma_1^{D-1},x_1^{D-1}). \end{align*}
For the interpolation in $D$ components we make use of
\begin{align*} I^{1,\ldots,D}_{N_1,\ldots, N_{D}}(f)(x_1^{M})=I_{N_{D}}^{D}\circ\ldots\circ I_{N_{1}}^{1}(f)(x_1^{M})=I_{N_{D}}^{D}\circ I_{N_1,\ldots,N_{D-1}}^{1,\ldots,{D-1}}(f)(x_1^M).
\end{align*}
As in the case $D=1$, we apply \cite[p.~27]{Trefethen2013}, which leads to
\begin{align*} I^{1,\ldots,D}_{N_1,\ldots, N_{D}}(f)(x_1^{M})=&\sum_{k_D=0}^{N_D}\frac{2^{\mathbbm{1}_{k_D>0}}}{\pi}\int_{-1}^1 I_{N_1,\ldots,N_{D-1}}^{1,\ldots,{D-1}}(f)(x_1^M) \frac{T_{k_D}(x_D)}{\sqrt{1-x_D^2}}dx_D T_{k_D}(x_D)\\ +&\sum_{k_D=N_D+1}^{\infty}\frac{2^{\mathbbm{1}_{k_D>0}}}{\pi}\int_{-1}^1 I_{N_1,\ldots,N_{D-1}}^{1,\ldots,{D-1}}(f)(x_1^M) \frac{T_{k_D}(x_D)}{\sqrt{1-x_D^2}}dx_D T_{m_D}(x_D). \end{align*}
By the induction hypothesis and the definitions of $I(k_1^{D-1},x_D^M)$ and $\tau(k_1^{D-1},\sigma_1^{D-1},x_1^{D-1})$, we obtain
\begin{align*} &\int_{-1}^1 I_{N_1,\ldots,N_{D-1}}^{1,\ldots,{D-1}}(f)(x_1^M) \frac{T_{k_D}(x_D)}{\sqrt{1-x_D^2}}dx_D\\ &\quad=\int_{[-1,1]} \sum_{\sigma\in\{0,1\}^{D-1}}\sum_{\delta=1}^{D-1}\sum_{k_{\delta}=(N_{\delta}+1)\cdot\sigma_{\delta}}^{\frac{N_{\delta}}{1-\sigma_{\delta}}}I(k_1^{D-1},x_D^M)\tau(k_1^{D-1},\sigma_1^{D-1},x_1^{D-1}) \frac{T_{k_D}(x_D)}{\sqrt{1-x_D^2}}dx_D\\ &\quad=\int_{[-1,1]} \sum_{\sigma\in\{0,1\}^{D-1}}\sum_{\delta=1}^{D-1}\sum_{k_{\delta}=(N_{\delta}+1)\cdot\sigma_{\delta}}^{\frac{N_{\delta}}{1-\sigma_{\delta}}}\prod_{i=1}^{D-1}\frac{2^{\mathbbm{1}_{k_i>0}}}{\pi}\\ &\quad\quad\quad \int_{[-1,1]^{D-1}}f(x_1^M)\prod_{j=1}^{D-1}\frac{T_{k_j}(x_j)}{\sqrt{1-x_j^2}}d(x_1,\ldots,x_{D-1})\frac{T_{k_D}(x_D)}{\sqrt{1-x_D^2}}dx_D. \end{align*}
Rearranging terms yields
\begin{align*} I^{1,\ldots,D}_{N_1,\ldots, N_{D}}(f)(x_1^{M})=&\sum_{k_D=0}^{N_D}\sum_{\sigma\in\{0,1\}^{D-1}}\sum_{\delta=1}^{D-1}\sum_{k_{\delta}=(N_{\delta}+1)\cdot\sigma_{\delta}}^{\frac{N_{\delta}}{1-\sigma_{\delta}}}\prod_{i=1}^{D}\frac{2^{\mathbbm{1}_{k_i>0}}}{\pi}\int_{[-1,1]^{D}}f(x_1^M)\\ &\prod_{j=1}^D\frac{T_{k_j}(x_j)}{\sqrt{1-x_j^2}}d(x_1^{D})\prod_{\delta:\sigma_{\delta}=0}T_{k_{\delta}}(x_{\delta})\prod_{\delta:\sigma_{\delta}=1}T_{m_{\delta}(k_{\delta})}(x_{\delta}) T_{k_D}(x_D)\\ &+\sum_{k_D=N_D+1}^{\infty}\sum_{\sigma\in\{0,1\}^{D-1}}\sum_{\delta=1}^{D-1}\sum_{k_{\delta}=(N_{\delta}+1)\cdot\sigma_{\delta}}^{\frac{N_{\delta}}{1-\sigma_{\delta}}}\prod_{i=1}^{D}\frac{2^{\mathbbm{1}_{k_i>0}}}{\pi}\int_{[-1,1]^{D}}f(x_1^M)\\ &\prod_{j=1}^D\frac{T_{k_j}(x_j)}{\sqrt{1-x_j^2}}d(x_1^{D})\prod_{\delta:\sigma_{\delta}=0}T_{k_{\delta}}(x_{\delta})\prod_{\delta:\sigma_{\delta}=1}T_{m_{\delta}(k_{\delta})}(x_{\delta}) T_{m_D}(x_D). \end{align*}
This can be expressed as
\begin{align} &I_{N_1,\ldots,N_D}^{1,\ldots,D}(f)(x_1^M)=\sum_{\sigma\in\{0,1\}^D}\sum_{\delta=1}^D\sum_{k_{\delta}=(N_{\delta}+1)\cdot\sigma_{\delta}}^{\frac{N_{\delta}}{1-\sigma_{\delta}}}I(k_1^D,x_{D+1}^M)\tau(k_1^D,\sigma_1^D,x_1^D). \end{align} \end{proof}
\section{Conclusion}\label{sec-conclusion}
In this article, we have provided an enhanced error bound for tensorized Chebyshev polynomial interpolation in Theorem \ref{Asymptotic_error_decay_multidim_combined} and have shown several examples. Example \ref{Example_3} highlights the effect of the improved error bound: fewer interpolation nodes are required to guarantee a pre-specified accuracy. This significantly reduces the computational time, especially if the evaluation of the function $f$ at the nodal points is time-consuming.
\bibliographystyle{chicago}
\section{Introduction} \label{introduction}
Gauge theories on noncommutative space are relevant to the quantization of D-branes in background $B_{\mu\nu}$ fields\cite{Ala98JHEP02003,Mic98JHEP02008,She99PLB119,Mal9908134,Sei9908142}. The effect of noncommutativity appears in the momentum-space vertices in the form of the Moyal bracket phase. To derive this factor, the authors of Ref.\cite{Big9908056} consider a dipole in a magnetic field {\bf B}. In the limit of a strong magnetic field ({\bf B}$\to \infty$), this dipole is frozen into the lowest Landau level (LLL). Interactions of such dipoles include the Moyal bracket phase factor $e^{i p \wedge q}$ with $p \wedge q = \epsilon^{ij} p_i q_j / B$. However, in this process, the dipole in a strong magnetic field turns out to be a Galilean particle of mass $M$ because the kinetic terms are neglected. It is important to note that the noncommutativity comes just from the presence of the magnetic field. The ``strong-field'' condition merely simplifies the calculation and is not essential for the noncommutativity. In this sense it is valuable to study a field theory both in the presence of a magnetic field and in coordinate space. Actually, one can derive the factor $e^{-i \tau \wedge \tau'}$ from the operation ${T}_{\boldsymbol{\tau}}{T}_{\boldsymbol{\tau}'} {T}_{-\boldsymbol{\tau}} {T}_{-\boldsymbol{\tau}'}$ of the magnetic translation operator ${T}_{\boldsymbol{\tau}}$\cite{Gir9907002}. This means that when an electron travels around a parallelogram generated by ${T}_{\boldsymbol{\tau}}{T}_{\boldsymbol{\tau}'} {T}_{-\boldsymbol{\tau}} {T}_{-\boldsymbol{\tau}'}$, it picks up a phase $\phi = 2 \pi \Phi/\Phi_0 = B \hat {\rm\bf k} \cdot (\boldsymbol{\tau} \times \boldsymbol{\tau}' ) \equiv \tau \wedge \tau'$, where $\Phi$ is the magnetic flux through the parallelogram and $\Phi_0 = hc/e$ is the flux quantum. The only difference between the momentum and coordinate spaces is the phase factor: in momentum space it is proportional to $1/B$, while in coordinate space it is proportional to $B$. This must be so for the correct dimensions to be recovered. If we introduce the Green's function in a magnetic field in coordinate space, this noncommutative situation shows up clearly. For example, the one-particle Green's function for a free particle is\cite{Vei92NPB715}
\begin{equation} G_\beta^{free}({\rm\bf r}_2, {\rm\bf r}_1) = { m \over {2 \pi \beta}} e^{-m r_{12}^2/2 \beta} , \label{green-function} \end{equation}
whereas the one in the LLL is given by\cite{Kim93PRD4839}
\begin{equation} G_\beta^B({\rm\bf r}_2, {\rm\bf r}_1) = { {m \omega_c} \over \pi } e^{-\beta \omega_c} \exp \left [ - {{m \omega_c} \over 2} \left \{ r_{21}^2 + 2 i \epsilon \hat {\rm\bf k} \cdot ({\rm\bf r}_2 \times {\rm\bf r}_1 ) \right \} \right ], \label{green-LLL} \end{equation}
with $\omega_c = e |{\rm\bf B} |/2 mc$ and $\epsilon = B/|{\rm\bf B} |$. We note that $G_\beta^{free}$ is symmetric under $({\rm\bf r}_2, {\rm\bf r}_1) \to ({\rm\bf r}_1, {\rm\bf r}_2)$. But $G_\beta^B$ no longer carries such a symmetry because of the presence of the phase factor. The phase factor originates from a subtle, combined property of translations and gauge transformations in the presence of a magnetic field ({\it i.e.}, the magnetic translation). In this paper, we study the nonrelativistic anyonic model as a model of (2+1)D field theory on a noncommutative geometry. This model has already been introduced to study fractional statistics\cite{Wil90} and anyonic physics\cite{Vei92NPB715,Kim93PRD4839}.
Now we reconsider this model to explore its hidden noncommutative property. We can regard it as a simple model which exhibits the noncommutativity. Actually, the noncommutativity appears in the matrix $M$ in the form of the antisymmetric submatrix ($d_{ij}$).
\section{Anyonic Model in a Magnetic Field} \label{model}
We start with the Lagrangian for an ideal gas of fractional particles (nonrelativistic anyons) in a magnetic field (${\rm\bf B} = B \hat {\rm\bf k}$)\cite{Wil90}
\begin{equation} {\cal L} = \sum_{i=1}^{N} \left [ { m \over 2} \dot {\rm \bf x}_i^2 + q \dot {\rm\bf x}_i \cdot {\rm\bf A}_i + q \left \{ -{\rm\bf a}_0({\rm\bf x}_i) + \dot {\rm\bf x}_i \cdot {\rm\bf a}_i \right \} \right ] + {1 \over {2 \alpha}} \int d^2 x \epsilon_{\rho \sigma \tau} {a}_\rho \partial_\sigma {a}_\tau , \label{lagrangian} \end{equation}
where ${\rm\bf x}_i$ is the $i$th particle coordinate, $q$ (charge $= -e$), ${\rm\bf A}_i= ( - {B \over 2} y_i, { B \over 2} x_i, 0 ) $ with $\nabla_i \times {\rm\bf A}_i = {\rm\bf B}$, ${\rm\bf a}_i$ (statistical gauge potential) and ${{\rm\bf a}}_0$ (scalar potential). The first term is the kinetic term for nonrelativistic particles. The second one is their interaction with a magnetic field. The third term is their coupling with the statistical gauge potential. The last term is the Chern-Simons term, which associates with each particle a fictitious flux $\alpha q$. $\alpha$ plays the role of the statistical parameter. The anyons are considered as identical particles (fermions or hard-core bosons) carrying the flux $\alpha q$. Later on, we will need to introduce a harmonic potential term ($- \sum_{i=1}^N {1 \over 2} m \omega^2 r_i^2$) to regularize the divergences\cite{Vei92NPB715,Kim93PRD4839}. Our model (\ref{lagrangian}) is then very similar to (3) of Ref.\cite{Big9908056}. The difference is that in our case all particles carry the same charge $q = -e$, whereas Bigatti and Susskind considered a dipole with a harmonic interaction between the charges in order to make contact with string theory. Furthermore, their way of deriving the Moyal phase factor is artificial. Here, in contrast, we include this factor in the Green's function without any vertex correction. Using this Green's function, we study the anyons on the noncommutative geometry. After some calculation, one finds the corresponding Hamiltonian
\begin{equation} {\cal H} = { 1\over {2 m} } \sum_{i=1}^N \left ( \boldsymbol{\pi}_i + {e \over c} {\rm\bf a}_i \right )^2, \label{hamiltonian} \end{equation}
where the mechanical momentum $\boldsymbol{\pi}_i$ is given by
\begin{equation} \boldsymbol{\pi}_i = {\rm\bf p}_i + { e \over c} {\rm\bf A}_i \label{momentum} \end{equation}
with the canonical momentum ${\rm\bf p}_i$. The statistical gauge potential ${\rm\bf a}_i$ takes the form
\begin{equation} {\rm\bf a}_i = - \alpha { e \over c} \sum_{i \ne j}^N {{ \hat {\rm\bf k} \times {\rm\bf r}_{ij}} \over {r_{ij}^2} } \label{gaugepotential} \end{equation}
which satisfies the Coulomb gauge condition ${\nabla}_i \cdot {\rm\bf a}_i =0$ and $\nabla_i \times {\rm\bf a}_i \equiv b\hat {\rm\bf k}$. Here we set $\hbar = 1$, and $\hat {\rm\bf k}$ is the unit vector perpendicular to the plane. Let us see how the noncommutativity arises from the presence of a magnetic field. In the absence of a magnetic field, the commutator of the momentum $\boldsymbol{\pi}_i$ is given by
\begin{equation} \left [ {\pi}_i^x, {\pi}_i^y \right ]_{B=0} = 0.
\label{commutator} \end{equation}
But in the presence of a magnetic field the commutator leads to
\begin{equation} \left [ {\pi}_i^x, {\pi}_i^y \right ]_{B \ne 0} = i { e \over c} B \delta_{ij}. \label{commutatorB} \end{equation}
This is an easy way to obtain a noncommutative geometry in the plane. The Schr\"odinger equation for the $N$-anyon system is
\begin{equation} H \Psi({\rm\bf r}_1, \ldots , {\rm\bf r}_N ) = E \Psi({\rm\bf r}_1, \ldots, {\rm\bf r}_N ). \label{schrodinger} \end{equation}
We now treat the $\alpha$- and $\alpha^2$-anyonic interactions in (\ref{hamiltonian}) as perturbations of the Hamiltonian $H^0$:
\begin{eqnarray} H&=&H^{0}+ \Delta H , \label{H} \\ H^{0}&=& \sum_{i=1}^N { {\boldsymbol{\pi}}_i^2 \over 2m} , \label{H0} \\ \Delta H &=& \sum_{i=1}^N {e \over 2mc} \left \{ \left ( {\rm\bf p}_i + {e \over c} {\rm\bf A}_{i} \right ) \cdot {\rm\bf a}_{i}+ {\rm\bf a}_{i} \cdot \left ( {\rm\bf p}_{i} + {e \over c} {\rm\bf A}_{i} \right ) +{e \over c} {\rm\bf a}_{i} \cdot {\rm\bf a}_{i} \right \} . \label{DeltaH} \end{eqnarray}
Here $H^{0}$ describes $N$ particles (bosons or fermions) moving in the uniform magnetic field. As it stands, the model with ${\cal H}^0$ is important in its own right. In particular, the study of (2+1)D nonrelativistic fermions in the presence of a magnetic field is relevant to the fractional quantum Hall effect (FQHE)\cite{Gir9907002}. Such a system has a further connection with the (1+1)D $c=1$ string model\cite{Iso92PLB143,Cap93PLB100}. That is, a system of (2+1)D nonrelativistic fermions in the LLL is dual to a boundary system of (1+1)D nonrelativistic fermions (the $c=1$ string model). This is very similar to the AdS$_3$/CFT correspondence in the sense of the bulk/boundary dynamics\cite{Myu99PRD044028}. There exists a conceptual difficulty in carrying out the perturbation near $\alpha =0$. Because of the singular nature of the $\alpha^{2}$-interaction
\begin{equation} { \alpha^2 \over 2m} \sum_{i=1}^N \left \{ 2 \sum_{i<j}^N {1 \over r_{ij}^2}+ \sum_{i \ne k,l(k\ne l)}^N {{( {\rm\bf k} \times {\rm\bf r}_{ik}) \cdot ( {\rm\bf k} \times {\rm\bf r}_{il}) } \over {r_{ik}^2 r_{il}^2} } \right \} \label{singular} \end{equation}
and the fact that the wave function does not vanish when any two bosonic particles approach each other ($r_{ij} \to 0$), a naive perturbation would lead to an infinite energy shift\cite{Gir9907002,Vei92NPB715}. In order to overcome this difficulty, we use an improved perturbation technique. The singular nature of the interaction forces the true wave function to vanish as $r_{ij}\to 0$. Hence we redefine the $N$-body wave function as $ \Psi ( {\rm\bf r}_1,\ldots, {\rm\bf r}_N) = \prod_{i<j} r_{ij}^\gamma \tilde \Psi ( {\rm\bf r}_1,\ldots, {\rm\bf r}_N) $. One can easily show that all divergent terms in (\ref{singular}) disappear if $\gamma$ is equal to $| \alpha |$. It is worth noting that the prefactor $\prod_{i<j} r_{ij}^{|\alpha|}$ can be interpreted as a factor which optimizes the dynamical short-range avoidance between anyons. The resulting equation is $ (H^{0}+ \Delta \tilde H ) \tilde \Psi = E \tilde \Psi $. Here the perturbed Hamiltonian $\Delta \tilde H$ is given by
\begin{equation} \Delta \tilde H = \sum_{i<j}^N \left \{ i {\alpha \over m} { {\hat{\rm\bf k} \times {\rm\bf r}_{ij}} \over {r_{ij}^2}} \cdot ( \boldsymbol{\partial}_i- \boldsymbol{\partial}_j) - { {| \alpha |} \over m} { {\rm\bf r}_{ij} \over { r_{ij}^2}} \cdot ( \boldsymbol{\partial}_i- \boldsymbol{\partial}_j) + \alpha \epsilon \omega_c \right \} .
\label{delta-H} \end{equation}
The interaction terms in (\ref{delta-H}) are two-body interactions, in contrast to (\ref{DeltaH}), where three-body interactions are present. This approach is based on the quantum-mechanical framework at first order in $\alpha$. For second- and higher-order corrections it is no longer useful; rather, it is appropriate to use quantum field theory.
\section{Second Quantized Formalism} \label{formalism}
In order to compute perturbatively the thermodynamic potential $\Omega$ in the grand canonical ensemble, we introduce a second quantized formalism (finite-temperature quantum field theory)\cite{Gir9907002,Vei92NPB715}. This formalism is also very useful for representing the magnetic translation symmetry. The thermodynamic potential is given by
\begin{equation} \Omega =- \beta PV =- \ln {\rm Tr} e^{- \beta ( {\cal H} - \mu {\cal N} )} , \label{OmegaPV} \end{equation}
where ${\cal H}$ and ${\cal N}$ stand respectively for the second quantized Hamiltonian and the number operator of anyons, and $\mu$ is the chemical potential [$z = \exp ( \beta \mu )$]. In terms of a second quantized field $\psi$, the second quantized Hamiltonian ${\cal H}$ takes the form
\begin{equation} {\cal H} = {\cal H}^{0}+ {1 \over 2} \int d {\rm\bf r}_1 d {\rm\bf r}_2 \psi^\dag ( {\rm\bf r}_1) \psi^\dag ( {\rm\bf r}_2) {\cal V} ( {\rm\bf r}_1- {\rm\bf r}_2) \psi ( {\rm\bf r}_2) \psi ( {\rm\bf r}_1) \label{calH} \end{equation}
with
\begin{equation} {\cal H}^{0} = { 1\over 2m} \psi^\dag ( {\rm\bf p} + {e \over c} {\rm\bf A} )^2 \psi. \label{calH0} \end{equation}
Here ${\cal V}$ is the anyonic interaction given by
\begin{eqnarray} {\cal V} ( {\rm\bf r}_{1}- {\rm\bf r}_{2}) &=& {\cal V} ( {\rm\bf r}_{1}, {\rm\bf r}_{2}) + {\cal V} ( {\rm\bf r}_{2}, {\rm\bf r}_{1}) , \nonumber \\ {\cal V} ( {\rm\bf r}_{1}, {\rm\bf r}_{2}) &=& {{i \alpha \hat{\rm\bf k} \times {\rm\bf r}_{12}} \over {m r_{12}^2}} \cdot \boldsymbol{\partial}_1 - {{| \alpha | {\rm\bf r}_{12}} \over {m r_{12}^2}} \cdot \boldsymbol{\partial}_1 + { {\alpha \epsilon \omega_c} \over 2 }. \label{calV} \end{eqnarray}
The first term (the anyonic vertex) in (\ref{calV}) measures the energy change in the nonzero angular momentum sector and shows up in the virial coefficients only in the presence of a magnetic field. The second (the short-range improved vertex) comes from optimizing the dynamical short-range avoidance between anyons and measures the energy change in the zero angular momentum sector. The last (the constant vertex) couples the statistical parameter $\alpha$ to the magnetic field and plays a crucial role in the cancellation of divergences. It is important to note that the $| \alpha |$ term is not Hermitian; it is complex, as is the anyonic vertex. We therefore construct the simple Hermitian vertex
\begin{equation} {\cal V}^{H}( {\rm\bf r}_{1}, {\rm\bf r}_{2}) = {1 \over 2} \left \{ {\cal V} ( {\rm\bf r}_{1}, {\rm\bf r}_{2}) + {\cal V}^{\dag}( {\rm\bf r}_{1}, {\rm\bf r}_{2})\right \} . \nonumber \end{equation}
The explicit form of this vertex is given by
\begin{equation} {\cal V}^{H}( {\rm\bf r}_{1}, {\rm\bf r}_{2}) = { {i \alpha \hat{\rm\bf k} \times {\rm\bf r}_{12}} \over {m r_{12}^{2}}} \cdot \boldsymbol{\partial}_1 + {\pi \over m} | \alpha | \delta ( {\rm\bf r}_{12}) + { {\alpha \epsilon \omega_{c}} \over 2}.
\label{calVH} \end{equation}
The equivalence of the $| \alpha |$-term in ${\cal V} ( {\rm\bf r}_{1}, {\rm\bf r}_{2})$ and the $| \alpha |$-term in ${\cal V}^{H}( {\rm\bf r}_{1}, {\rm\bf r}_{2})$ has been confirmed at second order in $\alpha$\cite{Vei92NPB715}. We treat $\alpha$ and $| \alpha |$ as small parameters and expand the thermodynamic potential $\Omega$ perturbatively:
\begin{equation} \Omega = \Omega_0 - \sum_{i=1}^\infty (-1)^i \int_0^\beta d \beta_1 \int_0^{\beta_1} d \beta_2 \cdots \int_0^{\beta_{i-1}} d \beta_i \langle {\cal V} ( \beta_{1}) {\cal V} ( \beta_{2}) \cdots {\cal V} ( \beta_i) \rangle^{c} , \label{Omega} \end{equation}
where
\begin{eqnarray} \Omega_0 &=& \pm \ln {\rm Tr} e^{-\beta ({\cal H}_0 - \mu {\cal N})}, \label{Omega0} \\ {\cal V}( \beta_1) &=& {1 \over 2} \int d {\rm\bf r}_1 d {\rm\bf r}_{2} \psi^{\dag}( {\rm\bf r}_1, \beta_1) \psi^{\dag}( {\rm\bf r}_2, \beta_1) {\cal V} ( {\rm\bf r}_1- {\rm\bf r}_2) \psi ( {\rm\bf r}_2,\beta_1) \psi ( {\rm\bf r}_1, \beta_1) . \label{calVbeta1} \end{eqnarray}
${\cal V}( \beta_{1})$ is the two-anyon interaction built from the thermal second quantized field
\begin{equation} \psi( {\rm\bf r}_{1}, \beta_{1}) = e^{\beta_{1}({\cal H}_0 - \mu {\cal N})} \psi ( {\rm\bf r}_{1}) e^{-\beta_{1}( {\cal H}_0 - \mu {\cal N})} . \label{2ndfield} \end{equation}
The upper index $c$ in (\ref{Omega}) means that one omits any disconnected diagram when applying Wick's theorem. Wick's theorem is implemented through the one-particle thermal Green's function $[ \psi^{\dag}( {\rm\bf r}_{1}, \beta_{1}) \psi ( {\rm\bf r}_{2}, \beta_{2})]$. The thermal propagator in a power series of $z$ is then
\begin{equation} [ \psi^{\dag}( {\rm\bf r}_{1}, \beta_{1}) \psi ( {\rm\bf r}_2, \beta_2)] = \sum_{s=1,s=0}^\infty (\pm)^{s+1} z^{s-\beta_{12}/ \beta } \langle {\rm\bf r}_2| e^{-(s\beta - \beta_{12}) H} |{\rm\bf r}_1 \rangle , \label{thermal} \end{equation}
where $\beta_{12}= \beta_{1}- \beta_{2}$ and $H$ is the one-particle Hamiltonian. Here $\pm$ refers to the Bose/Fermi cases. When $\beta_{12} \ge 0$, the sum over $s$ starts at $s=1$, whereas when $\beta_{12}<0$, it starts at $s=0$. In the lowest Landau level ($B \to \infty$), the one-particle Green's function at temperature $s \beta - \beta_{12}$ is
\begin{eqnarray} G_{s\beta - \beta_{12}}^{B}( {\rm\bf r}_{2}, {\rm\bf r}_{1})&=& \langle {\rm\bf r}_{2}| e^{-(s\beta - \beta_{12}) H_{\rm LLL }}| {\rm\bf r}_{1}\rangle \nonumber \\ &=& {{m \omega_{c}} \over \pi } e^{-(s\beta - \beta_{12}) \omega_{c}} \exp \left [ - {{m \omega_{c}} \over 2} \left \{ r_{21}^2+2i\epsilon \hat {\rm\bf k} \cdot ( {\rm\bf r}_{2} \times {\rm\bf r}_{1}) \right \} \right ] . \label{greentemp} \end{eqnarray}
We consider here the statistical mechanics of a gas of anyons in a strong magnetic field and in the thermodynamic limit. A naive perturbative calculation of the thermodynamic potential would consist in working directly with (\ref{greentemp}). However, the free-particle nature and the phase factor in the exponent of (\ref{greentemp}) lead to an unwanted result (a divergent quantity). A good regularization procedure, obtained by adding an extra potential term of confining nature, should be introduced to resolve this problem. For simplicity, we introduce a harmonic regulator to give an unambiguous meaning to all diagrams (Figs.\ref{first-order}-\ref{second-cluster}). This amounts to adding to (\ref{H0}) a term $\sum_{i=1}^{N}{1 \over 2}m \omega^{2} r_i^{2}$, and the thermodynamic limit is understood as $\omega \to 0$.
The one-particle Green's function at temperature $s \beta$ in a constant magnetic field with a harmonic regulator reads
\begin{eqnarray} G_{s \beta }^{\rm full} ( {\rm\bf r}_{2}, {\rm\bf r}_{1}) &=& {{m \omega_t} \over {2 \pi \sinh s \beta \omega_t }} \exp \left [ - { {m \omega_t} \over {2 \sinh s \beta \omega_t}} \left \{ ( \cosh s \beta \omega_c) r_{12}^2 \right . \right . \nonumber \\ && ~~~~~ +( \cosh s \beta \omega_t- \cosh s \beta \omega_c) (r_1^2+r_2^2) + (2i \epsilon \sinh s \beta \omega_c) \hat{\rm\bf k} \cdot ( {\rm\bf r}_{2} \times {\rm\bf r}_{1}) \big \} \Big ], \label{greenfull} \end{eqnarray}
where $\omega_t= \sqrt { \omega_c^2+\omega^2}$. In the lowest Landau level, the regularized one-particle Green's function at temperature $s \beta$ is obtained by taking the limits $\omega_c \to \infty$ and $\omega \to 0$:
\begin{equation} G_{s \beta } ( {\rm\bf r}_{1}, {\rm\bf r}_{2}) = { {m \omega_c} \over \pi} a_s e^{-s\beta \omega_c} \exp \left [ - { {m \omega_c} \over 2} a_s \left \{ r_{12}^2+2 i \epsilon \hat{\rm\bf k} \cdot ( {\rm\bf r}_{1} \times {\rm\bf r}_{2}) \right \} - b_s (r_1^2+r_2^2) \right ] , \label{greensbeta} \end{equation}
with
\begin{eqnarray} \hspace*{-2pt} a_s &=&1 + { \omega^2 \over { 2 \omega_c^2}}(1-s \beta \omega_c) - { \omega^4 \over { 8 \omega_c^4}} \left \{ 1+s \beta \omega_c- (s \beta \omega_c)^2 \right \} + { \omega^6 \over {16 \omega_c^6}} \left \{ 1 + s \beta \omega_c- {1 \over 3} (s \beta \omega_c)^3 \right \} + \cdots , \nonumber \\ \hspace*{-2pt} b_s&=& {{ms \beta \omega^2} \over 4} \left [ 1 + { \omega^2 \over {4 \omega_c^2}}(1-s \beta \omega_c) - { \omega^4 \over { 8 \omega_c^4}} \left \{ 1- {1 \over 3} (s \beta \omega_c)^2 \right \} \right ] + \cdots . \nonumber \end{eqnarray}
It is sufficient to keep terms up to order $\omega^6$ to obtain finite results. Care has to be taken regarding the overall normalization: at a given power $s$ of $z$, one has to multiply the harmonic result by $s$ in order to recover the large volume (area) limit $V \to \infty$.
\section{Moyal Phase Factor and Green's Function} \label{moyalphase}
Hereafter we choose $e/c = 1$. The relevant symmetries of the unperturbed system (\ref{calH0}) are the translational and rotational ones. Here we are mainly concerned with the translational symmetry. The Hamiltonian ${\cal H}^0$ is invariant under a cocycle transformation, which is defined through its action on the field operator as \cite{Bur91PA281}
\begin{equation} {\rm U}_{\boldsymbol{\tau}} \psi({\rm\bf r}, t) {\rm U}_{\boldsymbol{\tau}}^{-1} = \exp ( i {\rm\bf A}({\rm\bf r}) \cdot {\boldsymbol{\tau}}) \psi({\rm\bf r} - {\boldsymbol{\tau}}, t ) \equiv T_{\boldsymbol{\tau}} \psi({\rm\bf r}, t ), \label{cocycle} \end{equation}
where ${\rm U}_{\boldsymbol{\tau}}$ is the unitary operator representing the translation in the second quantized Fock space. The above is also the defining equation of the magnetic translation operator $T_{\boldsymbol{\tau}}$. From (\ref{cocycle}), the generator ${\rm\bf G}^c$ of the cocycle transformation is derived as
\begin{equation} T_{\boldsymbol{\tau}} \psi({\rm\bf r}, t) = \exp(-i {\boldsymbol{\tau}} \cdot {\rm\bf G}^c({\rm\bf r})) \psi({\rm\bf r}, t), ~ {\rm\bf G}^c({\rm\bf r}) = {\rm\bf p} - {\rm\bf A}({\rm\bf r}). \label{translation} \end{equation}
Note from (\ref{calH0}) that ${\rm\bf G}^c$ should be compared with the mechanical momentum $\boldsymbol{\pi} = {\rm\bf p} + {\rm\bf A} $.
For our purpose let us introduce the complex coordinates
\begin{equation} z = \sqrt{{{|{\rm\bf B}|} \over 2}} ( x + i y), ~ \bar z = \sqrt{{{| {\rm\bf B} |} \over 2}} ( x - i y). \label{coordinate} \end{equation}
Then the Hamiltonian differential operator ($H = \boldsymbol{\pi}^2/2m$) can be rewritten as
\begin{equation} H = 2 \omega_c \left \{ - { \partial^2 \over {\partial z \partial \bar z}} - {\epsilon \over 2} \left ( z { \partial \over {\partial z}} - { \partial \over {\partial \bar z}} \bar z \right ) + { 1\over 4} z \bar z \right \}. \label{h-diff} \end{equation}
Further, one can define two sets of annihilation and creation operators as
\begin{eqnarray} {\rm G}_x^c + i {\rm G}_y^c &=& -i \sqrt{2{|{\rm\bf B}|}} \left ( {\partial \over {\partial \bar z}} + { \epsilon \over 2} z \right ) \equiv -i \sqrt{2 {|{\rm\bf B}|}} b, \label{xpy} \\ ({\rm G}_x^c + i {\rm G}_y^c )^\dag &=& {\rm G}_x^c - i {\rm G}_y^c = -i \sqrt{2{|{\rm\bf B}|}} \left ( {\partial \over {\partial z}} - { \epsilon \over 2} \bar z \right ) \equiv i \sqrt{2 {|{\rm\bf B}|}} b^\dag, \label{xmy} \\ \pi_x - i \pi_y &=& -i \sqrt{2{|{\rm\bf B}|}} \left ( {\partial \over {\partial z}} + { \epsilon \over 2} \bar z \right ) \equiv -i \sqrt{2 {|{\rm\bf B}|}} a, \label{xcpyc} \\ (\pi_x - i \pi_y)^\dag &=& \pi_x + i \pi_y = -i \sqrt{2{|{\rm\bf B}|}} \left ( {\partial \over {\partial \bar z}} - { \epsilon \over 2} z \right ) \equiv i \sqrt{2 {|{\rm\bf B}|}} a^\dag. \label{xcmyc} \end{eqnarray}
Here $a$ ($a^\dag$) is an annihilation (creation) operator which mixes the Landau levels. On the other hand, $b$ ($b^\dag$) is an annihilation (creation) operator within each Landau level. The commutation relations are given by
\begin{equation} [a, a^\dag] =\epsilon, ~ [ b, b^\dag ] = \epsilon, \label{a-commutator} \end{equation}
with all other commutators vanishing. The Hamiltonian operator in (\ref{h-diff}) can be expressed in terms of these operators as
\begin{equation} H = 2 \omega_c ( a^\dag a + {1 \over 2} ). \label{h-a} \end{equation}
The generator of the cocycle transformation in (\ref{translation}) takes the following form:
\begin{eqnarray} &&T_{\boldsymbol{\tau}} \psi(z, \bar z, t) = \exp \left \{ \sqrt{{{| {\rm\bf B} | } \over 2 }} ( \tau b^\dag - \tau^* b ) \right \} \psi(z, \bar z, t), \label{gen-cocycle} \\ && \tau = \left ( \tau_x + i \tau_y \right ), ~ \tau^* = \left ( \tau_x - i \tau_y \right ). \label{TTstar} \end{eqnarray}
We also have $T_{\boldsymbol{\tau}}T_{\boldsymbol{\tau}'}T_{-\boldsymbol{\tau}}T_{-\boldsymbol{\tau}'} = e^{-i \tau \wedge \tau'} $ with $\tau \wedge \tau' = { 1 \over l^2 } \hat {\rm\bf k} \cdot ( {\tau} \times {\tau}' )$. Here $l$ is the magnetic length, defined by $1/l^2 = \epsilon |{\rm\bf B}| = B$. This is a familiar feature of the group of translations in a magnetic field, because ${\tau} \wedge {\tau}'$ is exactly the Moyal phase generated by the flux through the parallelogram spanned by $\boldsymbol{\tau}$ and $\boldsymbol{\tau}'$. Hence the $T$'s form a ray representation of the magnetic translation group. In fact, $T_{\boldsymbol{\tau}}$ translates the particle a distance $\hat {\rm\bf k} \times {\boldsymbol{\tau}}$. This means that different magnetic translations do not commute; that is, $T_{\boldsymbol{\tau}}T_{{\boldsymbol{\tau}}'} = e^{-i \tau \wedge \tau'} T_{{\boldsymbol{\tau}}'}T_{\boldsymbol{\tau}}$.
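This ray-representation phase is easy to check numerically. The following Python sketch is an illustrative aside (the values of $\boldsymbol{\tau}$, $\boldsymbol{\tau}'$ and the field strength are arbitrary choices; we assume $\epsilon=+1$, i.e.\ $B>0$, and truncate the Fock space generated by $b^\dag$): it builds $T_{\boldsymbol{\tau}}$ from (\ref{gen-cocycle}) and confirms $T_{\boldsymbol{\tau}}T_{\boldsymbol{\tau}'}T_{-\boldsymbol{\tau}}T_{-\boldsymbol{\tau}'}=e^{-i\tau\wedge\tau'}$ up to truncation error.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

n = 80                                  # Fock-space truncation (eps = +1 assumed)
bdag = np.diag(np.sqrt(np.arange(1.0, n)), -1)
b = bdag.conj().T                       # [b, b^dag] = 1 on the retained states

B = 1.0                                 # field strength, so 1/l^2 = B
def T(tau):                             # magnetic translation, Eq. (gen-cocycle)
    alpha = np.sqrt(B / 2.0) * tau      # tau = tau_x + i*tau_y
    return expm(alpha * bdag - np.conj(alpha) * b)

tau, taup = 0.4 + 0.1j, -0.2 + 0.3j
W = T(tau) @ T(taup) @ T(-tau) @ T(-taup)

wedge = B * (tau.real * taup.imag - tau.imag * taup.real)  # tau ^ tau'
print(W[0, 0], np.exp(-1j * wedge))     # the two numbers agree to high accuracy
\end{verbatim}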
How does the Green's function accommodate this phase factor? Introducing the flux $\Phi$ enclosed in the parallelogram and the flux quantum $\Phi_0$, these take the forms
\begin{eqnarray} \Phi &=& {\rm\bf B} \cdot ( {\rm\bf r}_2 \times {\rm\bf r}_1 ), \label{flux} \\ \Phi_0 &=& {{2 \pi \hbar c} \over { e}} = { 2 \pi} \label{flux-quanta} \end{eqnarray}
with $\hbar={e \over c} = 1$. Then the phase $\phi= 2 \pi \Phi / \Phi_0$ leads to $B \hat {\rm\bf k} \cdot ({\rm\bf r}_2 \times {\rm\bf r}_1) = { 1 \over l^2} \hat {\rm\bf k} \cdot ({\rm\bf r}_2 \times {\rm\bf r}_1) \equiv r_2 \wedge r_1$. Finally, the Green's function in a magnetic field is given by
\begin{equation} G_\beta^B({\rm\bf r}_2, {\rm\bf r}_1) = {{ m \omega_c} \over \pi} e^{-\beta \omega_c} \exp \left ( - {{ m \omega_c r_{21}^2 } \over 2 } - i {{r_2 \wedge r_1} \over 2 } \right ). \label{green-mag} \end{equation}
The Moyal phase factor $r_2 \wedge r_1$ in coordinate space thus appears correctly in the Green's function. The factor of $1/2$ can be understood from the relation $T_{\boldsymbol{\tau}}T_{\boldsymbol{\tau}'} = T_{\boldsymbol{\tau}+\boldsymbol{\tau}'} e^{ -i \tau \wedge \tau' /2 } $. If one uses this Green's function to calculate any physical quantity, the effect of noncommutativity is taken into account automatically. This can be done solely through the Green's function, without manipulating vertices as in string theories\cite{Big9908056}.
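As a simple consistency check, one can verify numerically that the Moyal phase is precisely what makes (\ref{green-mag}) reproduce itself under convolution, $\int d^2 r\, G^B_{\beta_1}({\rm\bf r}_2, {\rm\bf r})\, G^B_{\beta_2}({\rm\bf r}, {\rm\bf r}_1) = G^B_{\beta_1 + \beta_2}({\rm\bf r}_2, {\rm\bf r}_1)$, as expected of a propagator projected onto the LLL; dropping the phase destroys this property. The following Python sketch is an illustration only, with arbitrarily chosen points and temperatures, in units $\hbar = e/c = m = \omega_c = 1$ and with $\epsilon = +1$.
\begin{verbatim}
import numpy as np

m, wc, eps = 1.0, 1.0, 1.0               # illustrative units

def G(b, x2, y2, x1, y1):                # Eq. (green-mag); r2 ^ r1 = 2*m*wc*k.(r2 x r1)
    d2 = (x2 - x1)**2 + (y2 - y1)**2
    cross = x2 * y1 - y2 * x1            # k . (r2 x r1)
    return (m * wc / np.pi) * np.exp(-b * wc) \
        * np.exp(-m * wc * d2 / 2.0 - 1j * eps * m * wc * cross)

t = np.linspace(-8.0, 8.0, 321)          # quadrature grid for the intermediate point
h = t[1] - t[0]
X, Y = np.meshgrid(t, t)

(x2, y2), (x1, y1) = (0.7, -0.3), (-0.4, 0.5)
b1, b2 = 0.2, 0.5
lhs = np.sum(G(b1, x2, y2, X, Y) * G(b2, X, Y, x1, y1)) * h * h
print(lhs, G(b1 + b2, x2, y2, x1, y1))   # equal; omitting 'cross' breaks the identity
\end{verbatim}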
\section{Structure of Perturbation Theory} \label{structure}
Here we observe the effect of the Moyal phase factors on the calculation of thermodynamic quantities. For simplicity, we choose the Hermitian point vertex ${\cal V}^H({\rm\bf r}_1, {\rm\bf r}_2) = {\pi \over m} |\alpha | \delta({\rm\bf r}_{12})$. This vertex minimizes the divergence problem in higher-order corrections and represents the average anyonic effect.
\subsection{First-order calculation} \label{firstorder}
We are now in a position to derive the first-order correction to $\Omega_0$ in (\ref{Omega0}). One has to consider the two diagrams in Fig.\ref{first-order}. They correspond to the two possible contractions in the Wick expansion
\begin{eqnarray} \Omega_1 &=& \sum_{s,t \ge 1} (\pm z)^{s+t} \int_0^\beta d \beta_1 \int d{\rm\bf r}_1 d{\rm\bf r}_2 {\cal V}^H({\rm\bf r}_1, {\rm\bf r}_2) \nonumber \\ &&\times \left \{ G_{s\beta}({\rm\bf r}_1, {\rm\bf r}_1) G_{t\beta}({\rm\bf r}_2, {\rm\bf r}_2) \pm G_{s\beta}({\rm\bf r}_1, {\rm\bf r}_2) G_{t\beta}({\rm\bf r}_2, {\rm\bf r}_1) \right \}. \label{omega1} \end{eqnarray}
\begin{center} \begin{figure} \epsfig{file=first-order.eps, height=7.0cm, clip=} \caption{ The first-order diagrams. The solid lines (dashed lines) denote the thermal propagators of (\ref{thermal}) (the vertices). The $\pm$ signs refer to the Bose/Fermi cases. The arrow ($\to$) represents the direction of propagation and $\times$ the point of interaction. } \label{first-order} \end{figure} \end{center}
\noindent The first term corresponds to the two-tadpole diagram and the second is the conventional diagram. With the point vertex ${\pi \over m} |\alpha | \delta({\rm\bf r}_{12})$, the two terms lead to the same expression. Hence, in the first-order correction, the phase factors never contribute to the thermodynamic potential $\Omega_1$. In the large-$x$ limit (strong magnetic field and low temperature), $\Omega_1$ takes the form
\begin{equation} \Omega_1 = | \alpha | {V \over \lambda^2} {{1\pm 1} \over 2} 4 x^2 \left [ {{\pm ze^{-x}} \over {1 \mp z e^{-x}}} \right ]^2 , \label{large-limit} \end{equation}
where $\lambda^2 = 2 \pi \beta / m$ ($\lambda$ being the thermal wavelength), $x = \beta \omega_c$, and $\pm$ refer to the Bose/Fermi cases. The equation of state is
\begin{equation} \beta p V = { V \over \lambda^2} \left [ \pm 2 x \ln(1 \pm \nu_\pm) + 2(1\pm 1) | \alpha | x^2 \nu_\pm^2 \right ], \label{state} \end{equation}
where the filling fraction coefficients $\nu_\pm$ are given by
\begin{equation} \nu_\pm = { N \over V} \Bigg / \left ( { { e B} \over c } \right ) = { {\rho \lambda^2} \over {2 x} } = {{ z e^{-x} } \over { 1 \mp z e^{-x}}}. \label{filling} \end{equation}
\subsection{Second-order calculation} \label{secondorder}
We now consider the effect of the Moyal phase factors on the second-order calculation. Using Wick's theorem, one obtains the twenty connected diagrams in Fig.\ref{third-cluster} and Fig.\ref{second-cluster}.
\begin{center} \begin{figure} \epsfig{file=third-cluster.eps, height=10.0cm, clip=} \caption{ The sixteen second-order diagrams which contribute to the third cluster coefficient. } \label{third-cluster} \end{figure} \end{center}
\begin{center} \begin{figure} \epsfig{file=second-cluster.eps, height=7.0cm, clip=} \caption{ The four second-order diagrams which contribute to the second and third cluster coefficients. } \label{second-cluster} \end{figure} \end{center}
\noindent Each graph with the Hermitian vertex can be computed easily using the regularized Green's function (\ref{greensbeta}). We start with the two-tadpole diagrams.

(1) Diagrams with two tadpoles\\ We consider the diagram of Fig.\ref{third-cluster}(m). Applying the Feynman rules, we have
\begin{eqnarray} \Omega_2^{\rm Fig.\ref{third-cluster}(m)} &=& \sum_{s,t, u\ge 1; v \ge 0} (\pm z)^{s+t+u+v} \int_0^\beta d \beta_1 \int_0^{\beta_1} d \beta_2 \int \left ( \prod_{i=1}^4 d{\rm\bf r}_i \right ) {\cal V}^H ({\rm\bf r}_1,{\rm\bf r}_2) {\cal V}^H ({\rm\bf r}_3,{\rm\bf r}_4) \nonumber \\ && \times G_{s \beta}({\rm\bf r}_2,{\rm\bf r}_2) G_{u \beta}({\rm\bf r}_4,{\rm\bf r}_4) G_{v\beta+\beta_{12}}({\rm\bf r}_1,{\rm\bf r}_3) G_{t\beta-\beta_{12}}({\rm\bf r}_3,{\rm\bf r}_1). \label{omega2m} \end{eqnarray}
Using the representation (\ref{greensbeta}), this reads
\begin{eqnarray} \Omega_2^{\rm Fig.\ref{third-cluster}(m)} &=& \sum_{s,t, u\ge 1; v \ge 0} (\pm ze^{-x})^{s+t+u+v} \int_0^\beta d \beta_1 \int_0^{\beta_1} d \beta_2 \int \left ( \prod_{i=1}^4 d{\rm\bf r}_i \right ) a_s a_t a_u a_v \left ( {\omega_c \over \pi} \right )^4 \nonumber \\ && \times \pi^2 | \alpha |^2 \delta({\rm\bf r}_{12}) \delta({\rm\bf r}_{34}) \exp\left \{ -2 b_s r_2^2 - 2 b_u r_4^2 - {\omega_c \over 2} a_v \left ( r_{13}^2 + 2 i \epsilon \hat {\rm\bf k} \cdot ({\rm\bf r}_1 \times {\rm\bf r}_3 ) \right ) \right . \nonumber \\ && \left . - b_v ( r_1^2 + r_3^2 ) - {\omega_c \over 2} a_t \left ( r_{31}^2 + 2 i \epsilon \hat {\rm\bf k} \cdot ({\rm\bf r}_3 \times {\rm\bf r}_1 ) \right ) -b_t ( r_3^2 + r_1^2 ) \right \}, \label{omega2ma} \end{eqnarray}
where $a_v$ and $b_v$ mean $a_{v\beta + \beta_{12}}$ and $b_{v\beta + \beta_{12}}$, while $a_t$ and $b_t$ denote $a_{t\beta - \beta_{12}}$ and $b_{t\beta - \beta_{12}}$. Here we take $m=1$ for simplicity.
After the integration over the tadpole coordinates ${\rm\bf r}_2$ and ${\rm\bf r}_4$, the integrand takes the following form:
\begin{equation} \exp\left\{ - \sum_{i,j=1,3} c_{ij} {\rm\bf r}_i \cdot {\rm\bf r}_j - \sum_{i,j=1,3} d_{ij} \hat {\rm\bf k} \cdot ( {\rm\bf r}_i \times {\rm\bf r}_j )\right \}, \label{integrand} \end{equation}
where
\begin{eqnarray} c_{11} &=& {\omega_c \over 2} (a_v + a_t) + b_v + b_t + 2 b_s , \nonumber \\ c_{33} &=& {\omega_c \over 2} (a_v + a_t) + b_v + b_t + 2 b_u, \nonumber \\ c_{13} &=& -{\omega_c \over 2} (a_v + a_t), \nonumber \\ d_{13} &=& - i \epsilon {\omega_c \over 2} (a_v - a_t). \nonumber \end{eqnarray}
During the calculation, we encounter the matrix $M_4$,
\begin{equation} M_4 = \left ( \begin{array}{cccc} c_{11} & c_{13} & 0 & d_{13} \\ c_{13} & c_{33} & -d_{13} & 0\\ 0 & -d_{13} & c_{11} & c_{13} \\ d_{13} & 0 & c_{13} & c_{33} \end{array} \right ). \label{expM} \end{equation}
The free-particle nature is contained in the $c_{ij}$, whereas the $d_{ij}$ include the Moyal phase factors. Performing the Gaussian integral over ${\rm\bf r}_1$ and ${\rm\bf r}_3$ leads to $\pi^2/ \sqrt{\det M_4}$. One finds that the determinant of $M_4$ is a perfect square:
\begin{equation} \det M_4 = \left ( c_{11} c_{33} - c_{13}^2 - d_{13}^2 \right )^2. \label{det} \end{equation}
Here one finds
\begin{equation} c_{13}^2 + d_{13}^2 = \omega_c^2 a_v a_t, \label{relation} \end{equation}
which means that the Moyal phase factor ($d_{13}^2$) contributes to the thermodynamic potential with the opposite sign relative to the free-particle nature ($c_{13}^2$). This is because $d_{13}$ is purely imaginary. Then, $\Omega_2^{\rm Fig.\ref{third-cluster}(m)}$ takes the form
\begin{eqnarray} \Omega_2^{\rm Fig.\ref{third-cluster}(m)} &=& | \alpha | ^2 \omega_c^4 \sum_{s,t, u\ge 1; v \ge 0} (\pm ze^{-x})^{s+t+u+v} \int_0^\beta d \beta_1 \int_0^{\beta_1} d \beta_2 a_s a_t a_u a_v \nonumber \\ && \times {1 \over { ( A + 2 b_u) ( A + 2 b_s ) - B } }, \label{omega2mb} \end{eqnarray}
where
\begin{equation} A = { \omega_c \over 2} ( a_v + a_t ) + b_v + b_t , ~ B = \omega_c^2 a_v a_t. \nonumber \end{equation}
Integrating over the temperatures $(\beta_1, \beta_2)$ and then summing over $s,t,u$ starting at 1 and over $v$ starting at 0, one finds the final contribution in the large-$x$ limit,
\begin{equation} \Omega_2^{\rm Fig.\ref{third-cluster}(m)} = | \alpha |^2 \left [ {{ z^3 x^3 V} \over \lambda^2 } \right ]. \label{omega2mc} \end{equation}
The remaining three diagrams of Figs.\ref{third-cluster}(n)-(p) contribute to $\Omega_2$ in the same form as (\ref{omega2mc}).
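The perfect-square structure of (\ref{det}) can be confirmed independently by a short computer-algebra check. The following Python/SymPy sketch (a verification aside, with the entries of $M_4$ treated as free symbols) shows that $\det M_4 - (c_{11}c_{33}-c_{13}^2-d_{13}^2)^2$ vanishes identically.
\begin{verbatim}
import sympy as sp

c11, c33, c13, d13 = sp.symbols('c11 c33 c13 d13')

M4 = sp.Matrix([[c11,  c13,   0,   d13],
                [c13,  c33, -d13,   0 ],
                [ 0,  -d13,  c11,  c13],
                [d13,   0,   c13,  c33]])

# prints 0, confirming Eq. (det)
print(sp.simplify(M4.det() - (c11*c33 - c13**2 - d13**2)**2))
\end{verbatim}
The same block structure, with the symmetric part $c_{ij}$ and the antisymmetric part $d_{ij}$ arranged as in (\ref{M8}), underlies $M_6$ and $M_8$ below, which is why their determinants are perfect squares as well.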
(2) Diagrams with one tadpole\\ The diagrams in Figs.\ref{third-cluster}(e)-\ref{third-cluster}(l) have one tadpole. If there is a tadpole, one has to perform the integral over the tadpole coordinate first. The remaining Gaussian integration then involves a $6 \times 6$ matrix. Consider the diagram of Fig.\ref{third-cluster}(e). Applying the Feynman rules, the second-order correction to $\Omega_0$ is given by
\begin{eqnarray} \Omega_2^{\rm Fig.\ref{third-cluster}(e)} &=& \sum_{s,t, u\ge 1; v \ge 0} (\pm z)^{s+t+u+v} \int_0^\beta d \beta_1 \int_0^{\beta_1} d \beta_2 \int \left ( \prod_{i=1}^4 d{\rm\bf r}_i \right ) {\cal V}^H ({\rm\bf r}_1,{\rm\bf r}_2) {\cal V}^H ({\rm\bf r}_3,{\rm\bf r}_4) \nonumber \\ && \times G_{s \beta}({\rm\bf r}_2,{\rm\bf r}_2) G_{v\beta+\beta_{12}}({\rm\bf r}_1,{\rm\bf r}_3) G_{u \beta}({\rm\bf r}_3,{\rm\bf r}_4) G_{t\beta-\beta_{12}}({\rm\bf r}_4,{\rm\bf r}_1). \label{omega2e} \end{eqnarray}
Using the representation (\ref{greensbeta}), the above reads
\begin{eqnarray} \Omega_2^{\rm Fig.\ref{third-cluster}(e)} &=& \sum_{s,t, u\ge 1; v \ge 0} (\pm ze^{-x})^{s+t+u+v} \int_0^\beta d \beta_1 \int_0^{\beta_1} d \beta_2 \int \left ( \prod_{i=1}^4 d{\rm\bf r}_i \right ) a_s a_t a_u a_v \left ( {\omega_c \over \pi} \right )^4 \nonumber \\ && \times \pi^2 | \alpha |^2 \delta({\rm\bf r}_{12}) \delta({\rm\bf r}_{34}) \exp\left \{ -2 b_s r_2^2 - {\omega_c \over 2} a_v \left ( r_{13}^2 + 2 i \epsilon \hat {\rm\bf k} \cdot ({\rm\bf r}_1 \times {\rm\bf r}_3 ) \right ) - b_v ( r_1^2 + r_3^2 ) \right . \nonumber \\ && ~~~~~~~~ - {\omega_c \over 2} a_u \left ( r_{34}^2 + 2 i \epsilon \hat {\rm\bf k} \cdot ({\rm\bf r}_3 \times {\rm\bf r}_4 ) \right ) -b_u ( r_3^2 + r_4^2 ) \nonumber \\ &&~~~~~~~~ \left . - {\omega_c \over 2} a_t \left ( r_{41}^2 + 2 i \epsilon \hat {\rm\bf k} \cdot ({\rm\bf r}_4 \times {\rm\bf r}_1 ) \right ) - b_t ( r_4^2 + r_1^2 ) \right \}. \label{omega2ea} \end{eqnarray}
It is easy to show that this leads to the same result as $\Omega_2^{\rm Fig.\ref{third-cluster}(m)}$. In order to investigate the nontriviality of this diagram in connection with the Moyal phase factors, we include the constant vertex $\alpha \epsilon \omega_c /2 $ of (\ref{calVH}). After the integration over the tadpole coordinate ${\rm\bf r}_2$, we perform the Gaussian integration over ${\rm\bf r}_1, {\rm\bf r}_3$ and ${\rm\bf r}_4$. One obtains $\pi^3/\sqrt{\det M_6}$, where the matrix $M_6$ is given by
\begin{equation} M_6 = \left ( \begin{array}{cccccc} c_{11} & c_{13} & c_{14} & 0 & d_{13} & d_{14} \\ c_{13} & c_{33} & c_{34} & -d_{13} & 0 & d_{34} \\ c_{14} & c_{34} & c_{44} & -d_{14} & -d_{34} & 0 \\ 0 & -d_{13} & -d_{14} & c_{11} & c_{13} & c_{14} \\ d_{13} & 0 & -d_{34} & c_{13} & c_{33} & c_{34} \\ d_{14} & d_{34} & 0 & c_{14} & c_{34} & c_{44} \end{array} \right ) \label{M6} \end{equation}
with
\begin{eqnarray} c_{11} &=& { \omega_c \over 2} ( a_v + a_t ) + b_v + b_t , \nonumber \\ c_{33} &=& { \omega_c \over 2} ( a_u + a_v ) + b_u + b_v , \nonumber \\ c_{44} &=& { \omega_c \over 2} ( a_t + a_u ) + b_t + b_u , \nonumber \\ c_{13} &=& -{ \omega_c \over 2} a_v , \nonumber \\ c_{14} &=& -{ \omega_c \over 2} a_t , \nonumber \\ c_{34} &=& -{ \omega_c \over 2} a_u , \nonumber \\ d_{13} &=& i \epsilon { \omega_c \over 2} a_v = -i \epsilon c_{13}, \nonumber \\ d_{14} &=& - i \epsilon { \omega_c \over 2} a_t = i \epsilon c_{14}, \nonumber \\ d_{34} &=& i \epsilon { \omega_c \over 2} a_u = -i \epsilon c_{34}. \nonumber \end{eqnarray}
Again, one finds that the determinant of $M_6$ becomes a perfect square:
\begin{eqnarray} \det M_6 &=& \left\{ \det \left ( \begin{array}{ccc} c_{11} & c_{13} & c_{14} \\ c_{13} & c_{33} & c_{34} \\ c_{14} & c_{34} & c_{44} \end{array} \right ) -c_{11} d_{34}^2 -c_{33} d_{14}^2 -c_{44} d_{13}^2 \right . \nonumber \\ &&~~~~~~ +2 c_{13} d_{34} d_{14} -2 c_{14} d_{34} d_{13} + 2 c_{34} d_{14} d_{13} \Bigg \}^2. \label{detM1} \end{eqnarray}
Finally, $\sqrt{\det M_6}$ takes the form
\begin{eqnarray} \sqrt{\det M_6} &=& c_{11}c_{33}c_{44} - c_{11} \left ( c_{34}^2 + d_{34}^2 \right ) - c_{33} \left ( c_{14}^2 + d_{14}^2 \right ) - c_{44} \left ( c_{13}^2 + d_{13}^2 \right ) \nonumber \\ && + 2 c_{34} \left ( c_{13}c_{14} + d_{13} d_{14} \right ) + 2 d_{34} \left ( c_{13}c_{14} - d_{13} d_{14} \right ).
\label{sqrtM} \end{eqnarray}
Here we observe that the Moyal phase factors ($d_{ij}^2$) contribute to $\Omega_2$ with exactly the opposite sign relative to the free-particle nature ($c_{ij}^2$). The last term can be regarded as a composite term.

(3) Diagrams without tadpole\\ The diagrams in Figs.\ref{third-cluster}(a)-(d) and Figs.\ref{second-cluster}(a)-(d) belong to this category. In these cases one finds that the integrand can be cast into the form
\begin{equation} \exp \left \{ -\sum_{i,j=1}^4 c_{ij} {\rm\bf r}_i \cdot {\rm\bf r}_j -\sum_{i,j=1}^4 d_{ij} \hat{\rm\bf k} \cdot ({\rm\bf r}_i \times {\rm\bf r}_j ) \right \}, \label{inregrand1} \end{equation}
where the $c_{ij}$ involve the free-particle nature, whereas the $d_{ij}$ contain the Moyal phase factors. Performing the Gaussian integration over ${\rm\bf r}_1,{\rm\bf r}_2,{\rm\bf r}_3,{\rm\bf r}_4$, one obtains $\pi^4/\sqrt{\det M_8}$ with the $8\times 8$ matrix
\begin{equation} M_8 = \left ( \begin{array}{cc} c_{ij} & d_{ij} \\ -d_{ij} & c_{ij} \end{array} \right ). \label{M8} \end{equation}
Its determinant becomes a perfect square of the form $\det M_8 = (\det P + Q)^2$ with $\det P = \det (c_{ij})$ and $ Q = Q(c_{ij}, d_{ij})$\cite{Kim93PRD4839}. After some manipulation, $\det M_8$ can be brought to a form similar to (\ref{sqrtM}).
\section{Discussions} \label{discussion}
In this paper, we have studied the effect of the Moyal phase factors on the thermodynamic potential using the anyonic model in the presence of a magnetic field. In this case, we use the coordinate-space Green's function, which includes the Moyal phase factor, without manipulating the vertices. It turns out that the Moyal phase factors contribute to the thermodynamic potential $\Omega$ with the opposite sign relative to the free-particle nature. The Moyal phase factors are encoded in the antisymmetric submatrix ($d_{ij}$), whereas the free-particle properties are encoded in the symmetric submatrix ($c_{ij}, i \ne j$). The diagonal elements of $c_{ij}$ reflect the regularization scheme. In connection with string theory, we compare our model with the case of Bigatti and Susskind\cite{Big9908056}. They introduced a dipole with two opposite charges and a harmonic interaction (${k \over 2} r_{12}^2$) between them in the presence of a strong magnetic field\cite{She9901080}. They also neglected the kinetic terms and introduced the interaction potential $V({\rm\bf r}_1) = \lambda \delta({\rm\bf r}_1)$ by hand to extract the Moyal phase factor. At the quantum level, they derived the Moyal bracket phase $e^{i p \wedge q}$ as a vertex correction in momentum space. Here we use $N$ particles with the same charge $q = -e$ and a harmonic regulating potential (${1 \over 2} \sum_{i=1}^N k r_i^2$). Furthermore, we use the Hermitian point vertex (${\cal V}^H = {\pi \over m} | \alpha | \delta ( {\rm\bf r}_{12})$) to study the higher-order corrections. The Moyal phase factors are then included in the Green's function, and thus no vertex correction is needed. Although our model does not seem to connect directly with string theory, it clearly exhibits the noncommutative effect on the thermodynamic potential.
\section*{Acknowledgement}
This work was supported by the Brain Korea 21 Program, Ministry of Education, Project No. D-0025.
\section{Introduction} \label{introduction}
Vast available spectrum resources have made millimeter-wave (mm-wave) communications a key technology for the development of 5G wireless systems \cite{ref47}. Experimental measurements show that mm-wave communication links can provide multi-gigabit rates with low latency \cite{ref48}. However, mm-wave signals suffer from large propagation loss and are more prone to blockage than conventional micro-wave signals \cite{ref6}. In order to mitigate the high path-loss of mm-wave signals, large antenna arrays at the transmitter and at the receiver are necessary for directional beamforming \cite{ref7}. For example, Huawei has already launched its massive MIMO technology in China, which adopts large-scale antenna arrays that are able to move both vertically and horizontally for 3D beamforming. This technology is able to achieve 1.4 Gbps in the 3.5 GHz band by adopting 40 MHz channels \cite{ref49}. Fortunately, due to the small wavelength of mm-wave signals and advances in radio frequency (RF) circuits, hundreds of antennas can be placed in a few square centimeters, resulting in mm-wave massive multiple-input multiple-output (MIMO) \cite{ref8}, \cite{ref9}. Among the main design problems for the massive MIMO technology is the power consumption of analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) associated with each RF chain. Specifically, due to the large bandwidths in mm-wave MIMO, the sampling rates of the ADCs and DACs have to increase, which results in high-resolution, high-speed ADCs and DACs that consume huge amounts of energy and are costly \cite{ref11}, \cite{ref12}, \cite{ref13}. Since for standard deployments an RF chain is connected to each antenna, this issue limits the number of antennas at the transmitter and the receiver.
\subsection{Low-Resolution DACs and ADCs}
Generally, there are two approaches to reduce the power consumption of ADCs/DACs. In the first approach, the sampling rate of the ADCs and DACs is reduced by making a number of parallel low-speed converters act as a high-speed one \cite{ref50}. However, this typically results in a mismatch in gain, timing, and voltage among the sub-levels of the parallel ADCs/DACs, which leads to error floors in link performance \cite{ref11}, \cite{ref14}, \cite{ref15}. In the second approach, instead of reducing the sampling rate of the ADCs/DACs, their resolutions are reduced. Since the power consumption of ADCs/DACs grows exponentially with the resolution \cite{ref15}, ADCs/DACs with a resolution of 1 bit have considerably lower power consumption compared to typical high-resolution ADCs/DACs with 10-12 bits \cite{ref14}. Furthermore, since 1-bit ADCs/DACs do not need automatic gain control, they can also significantly reduce the system complexity \cite{ref11}, \cite{ref18}. However, due to the low resolution of these ADCs/DACs, drastic non-linearities are added to the system, and consequently, the shapes of the transmitted/received waveforms cannot be preserved at the transmitter/receiver \cite{ref17}. Therefore, it is important to investigate how 1-bit ADCs/DACs influence the capacity of massive MIMO systems. Motivated by this, in this paper, we investigate the capacity of additive white Gaussian noise (AWGN) massive MIMO systems with 1-bit ADCs and DACs at the receiver and at the transmitter.
\subsection{Related Works}
One of the earliest works on the problem of finding the capacity of MIMO systems with low-resolution ADCs is \cite{ref18}.
In \cite{ref18}, the capacity of a MIMO channel with 1-bit ADCs at the receiver is studied at low signal-to-noise ratios (SNRs) under the assumption of perfect channel state information (CSI) at the receiver (CSIR). It is shown that, at low SNRs, the capacity is smaller by a factor of $2/\pi$, which corresponds to a gap of -1.96 dB in $E_b/N_0$, as compared to a system with infinite resolution. In \cite{ref21}, the results in \cite{ref18} have been extended to the case where the additive Gaussian noise is mutually correlated across the receive antennas. The Bussgang decomposition has been used in \cite{ref21} to model the corresponding MIMO channel, and it has been demonstrated that certain conditions on the channel and the noise covariance matrices result in a lower performance loss compared to the case of uncorrelated noise. In \cite{ref44} and \cite{ref45}, achievable rates have been presented for MIMO systems with 1-bit ADCs at the receiver and full-precision DACs at the transmitter, where the antenna outputs are processed by analog combiners, and full CSI is available both at the transmitter and the receiver. In \cite{ref23}, it has been shown that at high SNRs, under the assumption of perfect CSIT and CSIR, the capacity of MIMO systems with high-resolution inputs and 1-bit quantized outputs is lower bounded by the rank of the channel matrix. On the other hand, in \cite{ref11}, when the channel matrix has full row rank, a tight upper bound on the capacity has been derived for MIMO systems at finite SNRs. In \cite{ref11}, the results of \cite{ref23} have been extended, where under the assumption of full CSIT and CSIR, the capacity of MIMO systems with 1-bit quantization at the receiver only has been derived for infinite SNR. In \cite{ref24}, an achievable rate has been derived for a massive MIMO system with high-resolution DACs at the transmitter and 1-bit ADCs at the receiver under the assumption of imperfect CSIT, where the wideband frequency-selective channel is estimated by a linear low-complexity algorithm. The authors in \cite{ref25} investigated the achievable rate of a massive MIMO system where low-resolution ADCs are used at the receiver only and perfect CSIT is available, and it has been shown that the performance loss caused by using low-resolution ADCs can be compensated by increasing the number of antennas at the receiver. In \cite{ref19}, the authors analyze the mutual information of a MIMO system with high-resolution DACs at the transmitter and 1-bit ADCs at the receiver, where CSI is available neither at the transmitter (CSIT) nor at the receiver. Achievable rates in \cite{ref19} are provided only for the quantized single-input single-output (SISO) channel, where on-off QPSK signaling is shown to be capacity achieving. In \cite{ref20}, the mutual information of a MIMO system with high-resolution DACs at the transmitter and 1-bit ADCs at the receiver is derived in the low-SNR regime under the assumption of no CSIT/CSIR. Achievable rates in \cite{ref20} are provided only for the SIMO channel and only in the asymptotic case of low SNRs. \subsection{Main Contributions} In prior works, the assumption of perfect CSI has been leveraged for deriving the achievable rates and/or capacity results for MIMO systems\footnote{The achievable rate in \cite{ref19} is for SISO and in \cite{ref20} for SIMO, but in the asymptotic low-SNR regime only.}.
Although different algorithms have been proposed for estimating the channel while 1-bit ADCs are used at the receiver, they are associated with large estimation errors and high computational complexity, and they require extremely long training sequences \cite{ref33}\nocite{ref34}\nocite{ref35}\nocite{ref36}\nocite{ref37}\nocite{ref38}\nocite{ref39}-\cite{ref40}. Therefore, the assumption of having full CSI when 1-bit ADCs/DACs are present is unrealistic. To the best of our knowledge, this is the first work on MIMO systems with 1-bit DACs/ADCs at the transmitter and at the receiver that adopts the practical assumption of noisy 1-bit CSI at the transmitter and/or at the receiver. In this paper, we investigate two system models of massive MIMO: one where the number of transmit antennas, denoted by $M$, is very large and goes to infinity, and one where the number of receive antennas, denoted by $N$, is very large and goes to infinity. The results are derived for complex-valued AWGN MIMO channels. The derived results show that the capacity of the considered massive MIMO system is $2N$ and $2M$ bits per channel use when $N$ is fixed and $M\to\infty$ and when $M$ is fixed and $N\to\infty$ hold, respectively. These coincide with the respective capacities with full CSI at both transmitter and receiver. In both cases, we show that the derived capacities can be achieved with noisy 1-bit CSI at the transmitter-end or at the receiver-end, and without any CSI at the other end. Moreover, we show that the capacity can be achieved in one channel use without employing channel coding, which results in a latency of one channel use. Therefore, massive MIMO systems with 1-bit DACs/ADCs may be a practical approach for achieving ultra reliable low latency communication (URLLC). This paper is organized as follows. In Section \Rmnum {2}, the system model of a MIMO system with 1-bit quantization at both the transmitter and the receiver is described. In Section \Rmnum 3, the capacities of massive MIMO systems with 1-bit quantization at both the transmitter and the receiver are presented. Finally, conclusions are drawn in Section \Rmnum 4. \section{System Model} We consider a MIMO system comprised of $M$ transmit and $N$ receive antennas. Each antenna element is equipped with two\footnote{The transmitter requires a 1-bit DAC and a 1-bit ADC in order to transmit information symbols and receive pilot symbols in a Time Division Duplex (TDD) manner, respectively. In contrast, the receiver requires a 1-bit ADC and a 1-bit DAC in order to receive information symbols and transmit pilot symbols in a TDD manner, respectively.} 1-bit quantizers, a 1-bit ADC and a 1-bit DAC, that quantize the received and transmitted signals, respectively, as shown in Fig. \ref{fig.1}. Let $\bb x\in \mathcal X^M$ denote the complex-valued $M \times 1$ transmit vector after the 1-bit quantization at the transmitter. We assume that each element of $\bb x$ has unit energy. Let $P$ be the total transmit power and let $\bb y$ denote the $N\times 1$ received vector before the 1-bit quantization at the receiver. Then, $\bb y$ is given by \begin{equation}\label{eq.1} \bb y=\sqrt{\frac{P}{M}}\bb H \bb x+\bb w, \end{equation} where $\bb H$ denotes the $N\times M$ MIMO complex-valued channel matrix and $\bb w$ is the $N \times 1$ complex-valued Gaussian noise vector with independent and identically distributed (i.i.d.) entries having zero mean and unit variance. Following standard Rayleigh fading, the channel matrix $\bb H$ is also assumed to have complex-valued Gaussian i.i.d. entries with zero mean and unit variance.
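For concreteness, the input--output relation \eqref{eq.1}, followed by the element-wise 1-bit quantization defined next, can be simulated in a few lines. The following sketch is purely illustrative (the antenna numbers $M$, $N$ and the power $P$ are arbitrary assumptions, not values used in the analysis):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, N, P = 64, 4, 1.0   # illustrative antenna numbers and transmit power

# i.i.d. CN(0,1) channel and noise, as assumed in the system model
H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
w = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

def csign(v):
    """Element-wise 1-bit quantizer: signs of the real and imaginary parts."""
    return np.where(v.real >= 0, 1.0, -1.0) + 1j * np.where(v.imag >= 0, 1.0, -1.0)

x = rng.choice(np.array([1, -1, 1j, -1j]), size=M)  # unit-energy transmit symbols
y = np.sqrt(P / M) * H @ x + w                      # unquantized receive vector
z = csign(y)                                        # 1-bit quantized observation
\end{verbatim}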
The vector $\bb y$ undergoes 1-bit quantization at the receiver, yielding the $N\times 1$ quantized received vector $\bb z\in\{1+j,1-j,-1+j,-1-j\}^N$, given by \begin{equation}\label{eq.2} \bb z=\textrm {sign}(\bb y)=\textrm {sign}\left(\sqrt{\frac{P}{M}}\bb H\bb x+\bb w\right), \end{equation} where \begin{equation}\label{eq.2.1} \textrm{sign}(a+jb)= \left\{ \begin{array}{rl} 1+j & \mbox{if } a\geq 0 \textrm{ and } b\geq 0 \\ -1+j & \mbox{if } a<0 \textrm{ and } b\geq 0\\ 1-j & \mbox{if } a\geq 0 \textrm{ and } b<0\\ -1-j & \mbox{if } a< 0 \textrm{ and } b<0.\\ \end{array} \right. \end{equation} \begin{figure}[t] \centering \includegraphics[width=15cm]{figjr4} \caption{An $M \times N$ MIMO system with 1-bit quantization at the transmitter and the receiver}\label{fig.1} \end{figure} Due to the 1-bit ADCs/DACs at both the receiver and the transmitter, the transmitter and the receiver can have access only to 1-bit CSI corrupted by noise. More precisely, let $\bb G$ denote the noisy 1-bit estimate of the channel matrix $\bb H$, which is obtained by $K$ pilot transmissions per antenna, given by \begin{equation}\label{eq.3} \bb G=\textrm {sign}\left(\sum_{k=1}^K\textrm {sign}\left( \sqrt{P_p}\,\bb H +\bb W_k\right)\right), \end{equation} where $\bb W_k$ is an $N\times M$ noise matrix with i.i.d. zero-mean unit-variance complex-valued Gaussian elements, and $P_p$ is the pilot power. In practice, assuming channel reciprocity, the noisy 1-bit CSI channel matrix $\mathbf G$ can be obtained at the transmitter (receiver) by sending $K$ orthogonal training symbols per antenna from the receiver (transmitter) to the transmitter (receiver) in a TDD fashion. The transmitter (receiver) then collects the $K$ 1-bit quantized pilot signals received on each of its antennas, makes a 1-bit estimate of the channel based on a majority rule, and thereby obtains $\bb G$ in \eqref{eq.3}. The capacity of the considered MIMO system with 1-bit quantized inputs and outputs, and noisy 1-bit CSI, is given as \begin{equation}\label{eq.4} C=\underset{\mm p(\bb x|\bb G)}{\text{max}}\hspace{4mm}\mm I(\bb z;\bb x|\bb G), \end{equation} where $\mm p(\bb x|\bb G)$ is the probability mass function (PMF) of the transmit signal $\bb x\in\mathcal X^M$ given the available 1-bit noisy CSI matrix $\bb G$. In this paper, we derive the capacity in \eqref{eq.4} by focusing on the massive MIMO regime, where the number of either transmit or receive antennas goes to infinity, i.e., either $M\to\infty$ and $N<\infty$ is fixed or $N\to\infty$ and $M<\infty$ is fixed. To this end, we assume that $P>0$ and $P_p>0$ hold. \section{Capacity} In this section, the capacities of massive MIMO systems with 1-bit quantized inputs and outputs, and noisy 1-bit CSI are presented for the cases when the massive antenna array is at the transmit side and at the receive side. \subsection{Massive Antenna Array At The Transmit-Side} The following theorem establishes the capacity when $M\to\infty$ and $N$ is fixed. \begin{theorem}\label{theo.4} The capacity of the massive MIMO system with 1-bit quantized inputs and outputs, and noisy 1-bit CSI satisfies the limit \begin{equation}\label{eq.cmimo_complex} \underset{M\rightarrow\infty}{\text{$\lim$}}\hspace{2.5mm}C=2N, \end{equation} where $N<\infty$ is fixed.
The capacity can be achieved in one channel use by a scheme that uses the noisy 1-bit CSI available at the transmitter only, neglecting any CSI at the receiver. \end{theorem} \begin{proof} See Appendix \ref{app.1} for the proof. \end{proof} We now present a scheme that is asymptotically able to communicate at this rate with vanishing probability of error by using the noisy 1-bit CSI only at the transmitter. \textbf{Achievability Scheme 1:} In order to communicate $2N$ bits per channel use, the transmitter constructs a codebook comprised of $2^{2N}$ codewords $\bb s_i\in\mathcal S^{N}$, for $i=1,\cdots,2^{2N}$, where $\mathcal S= \{-1-j,-1+j,1-j,1+j\}$. Without loss of generality, assume that codeword $\bb s$ is selected to be transmitted in the considered channel use. Let $s_{n}^R\in\{-1,1\}$ and $s_{n}^I\in\{-1,1\}$ denote the real-valued and imaginary-valued parts of the $n$-th complex-valued element of $\bb s$, respectively, for $n=1,2,...,N$. Next, assume that $\bb G$, given by \eqref{eq.3}, is known at the transmitter. Let $g_{nm}^R\in\{-1,1\}$ and $ g^I_{nm}\in\{-1,1\}$ denote the real-valued and imaginary-valued parts of the $(n,m)$ element of $\bb G$. The transmit vector $\bb x\in\mathcal X^M$, where $\mathcal X=\{-1,1,-j,j\}$, is formed from $\bb s$ and $\bb G$ as follows. The vector $\bb x$ is divided into $N$ parts, each comprised of $M/N$ consecutive complex-valued symbols. Let $x_m^R\in\{-1,0,1\}$ and $x_m^I\in\{-1,0,1\}$ denote the real-valued and imaginary-valued parts of the $m$-th complex-valued element of $\bb x$. Assume that $x_m^R$ and $x_m^I$ belong to the $n$-th group of transmit antennas, for $n=1,2,...,N$. Then, $x_m^R$ and $x_m^I$ are constructed as \begin{align}\label{aa1} \begin{bmatrix} x^R_m \\ x^I_m \end{bmatrix} = \frac{1}{2} \begin{bmatrix} g^R_{nm} & g^I_{nm} \\ -g^I_{nm} & g^R_{nm} \end{bmatrix} \begin{bmatrix} s^R_n \\ s^I_n \end{bmatrix} . \end{align} Note that $x_m^R$ and $x_m^I$, constructed using (\ref{aa1}), are such that either $x_m^R$ or $x_m^I$ is always zero while the other element is always $1$ or $-1$. Thereby, in a given channel use, the $m$-th transmit antenna is always silent on either the real-valued or the imaginary-valued channel/carrier. Next, $x_m^R$ and $x_m^I$ are amplified by $\sqrt{\frac{P}{M}}$, and then transmitted in one channel use from the $m$-th transmit antenna over the real-valued and imaginary-valued channels, respectively, for $m=1,...,M$. The receiver receives $\bb z$ and then decides that $\bb z$ has been the transmitted codeword in the given channel use. As a result, an error happens at the receiver if $\bb z\neq\bb s$ occurs. In Appendix \ref{app.1}, we prove that the error rate goes to zero as $M\to\infty$ for fixed $N<\infty$. Achievability Scheme 1 works by splitting the $M\to\infty$ antennas at the transmitter into $N$ groups of antennas, where each group is comprised of $M/N\to\infty$ antennas. Then, the $n$-th group of transmit antennas, for $n=1,2,...,N$, uses the 1-bit CSI vector, obtained from the pilots sent by the $n$-th receive antenna, to beamform towards the direction of the $n$-th receive antenna. Thereby, for $M/N\to\infty$, the beamformed signal from the $n$-th group of transmit antennas is received amplified at the $n$-th receive antenna and completely attenuated at all other receive antennas.
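The per-antenna mapping in (\ref{aa1}) is simple enough to state as code. The following sketch is an illustration under the assumption that $M$ is divisible by $N$ (the function name is ours):
\begin{verbatim}
import numpy as np

def scheme1_transmit(s, G):
    """Map a QPSK codeword s (length N) to the transmit vector x (length M)
    using the 1-bit CSI matrix G (N x M), following the 2x2 mapping in (aa1)."""
    N, M = G.shape
    L = M // N                          # antennas per group, assumed integer
    x = np.zeros(M, dtype=complex)
    for m in range(M):
        n = m // L                      # receive antenna this group serves
        gR, gI = np.sign(G[n, m].real), np.sign(G[n, m].imag)
        sR, sI = np.sign(s[n].real), np.sign(s[n].imag)
        xR = 0.5 * (gR * sR + gI * sI)  # one of xR, xI is 0,
        xI = 0.5 * (-gI * sR + gR * sI) # the other is +1 or -1
        x[m] = xR + 1j * xI
    return x
\end{verbatim}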
Hence, Achievability Scheme 1 resolves the considered MIMO system into $N$ parallel complex-valued Gaussian channels with 1-bit quantized inputs and outputs, where each parallel channel has an SNR that satisfies $\mm{SNR}\to \infty$ as $M/N\to\infty$. As a result, instantaneous error-free decoding is feasible. \subsection{Massive Antenna Array At The Receive-Side} We now turn to the opposite case, when the massive antenna array is at the receiver's side. The following theorem provides the capacity of this massive MIMO system for the asymptotic case when $N\to\infty$ holds and $M$ is fixed. \begin{theorem}\label{theo.5} The capacity of a massive MIMO system with 1-bit quantized inputs and outputs, and noisy 1-bit CSI satisfies the limit \begin{equation}\label{eq.cmimo_c} \underset{N \rightarrow\infty}{\text{$\lim$}}\hspace{2.5mm}C=2M, \end{equation} where $M<\infty$ is fixed. The capacity can be achieved in one channel use by a scheme that uses the noisy 1-bit CSI available at the receiver only, neglecting any CSI at the transmitter. \end{theorem} \begin{proof} Please refer to Appendix \ref{app.2} for the proof. \end{proof} \textbf{Achievability Scheme 2:} In order to communicate $2M$ bits per channel use, the transmitter constructs a codebook comprised of $2^{2M}$ codewords $\bb s_i\in\mathcal S^{M}$, for $i=1,\cdots,2^{2M}$, where $\mathcal S= \{-1-j,-1+j,1-j,1+j\}$. Without loss of generality, assume that codeword $\bb s$ is selected to be transmitted in the considered channel use. Then, the transmit vector $\bb x\in\mathcal X^M$, where $\mathcal X=\frac{1}{\sqrt 2}\mathcal S$, is constructed as \begin{align} \bb x =\frac{1}{\sqrt{2}}\bb s. \end{align} Next, the elements of $\bb x$ are amplified by $\sqrt{P/M}$ and then transmitted in one channel use from the $M$ transmit antennas. The receiver receives $\bb z$. Assume that $\bb G$, given by \eqref{eq.3}, is known at the receiver. Then, the receiver forms the estimate $\bb{\hat s}$ element-wise as \begin{align}\label{eq_dec-s} \hat s_{m}^R= \mm{sign}\left(\sum_{n=1}^{N} g_{nm}^R z_{n}^R\right),\qquad \hat s_{m}^I= \mm{sign}\left(\sum_{n=1}^{N} g_{nm}^I z_{n}^I\right),\qquad m=1,...,M, \end{align} and decides that $\bb{\hat s}$ has been the transmitted codeword. As a result, an error happens at the receiver if $\bb{\hat s}\neq\bb s$. In Appendix \ref{app.2}, we prove that the error rate goes to zero as $N\to\infty$ when $M<\infty$ is fixed. Achievability Scheme 2 works by the receiver using its $N\to\infty$ antennas to steer its reception towards the $M$ directions characterized by the $M$ column vectors of the 1-bit CSI matrix $\bb G$, respectively. On a given direction, the receiver receives a complex-valued symbol on each of its $N$ receive antennas, which can be either equal or not equal to the actual transmit symbol. For $N\to\infty$ and $M<\infty$ being fixed, the number of received symbols which are equal to the actual transmit symbol on a given direction is always larger than the number of received symbols which are not equal to the transmit symbol, leading to an error-free transmission.
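The decoding rule \eqref{eq_dec-s} is a per-direction majority vote and can be sketched in a few lines. As before, this is an illustrative sketch (the function name is ours); ties in the vote, which occur with vanishing probability, are ignored:
\begin{verbatim}
import numpy as np

def scheme2_decode(z, G):
    """Estimate the QPSK codeword from the 1-bit outputs z (length N) using
    the 1-bit CSI G (N x M): each real/imaginary symbol part is recovered by
    a majority vote of the matching 1-bit CSI entries across the N antennas."""
    s_hat_R = np.sign(G.real.T @ z.real)   # vote on the real parts
    s_hat_I = np.sign(G.imag.T @ z.imag)   # vote on the imaginary parts
    return s_hat_R + 1j * s_hat_I
\end{verbatim}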
\subsection{Observations} In the following, we provide some observations regarding the derived results. Given the 1-bit quantizer at the receiver, it is immediate to conclude that the capacity in \eqref{eq.cmimo_complex} is upper bounded by $2N$ (bits/channel use), independent of the level of quantization at the transmitter and of the CSI knowledge. In other words, this capacity result holds also when full-precision symbols are transmitted, as well as when full-precision CSI is present at both the transmitter and the receiver. Similarly, given the 1-bit quantizer at the transmit side, it is immediate to conclude that the capacity in \eqref{eq.cmimo_c} is upper bounded by $2M$ (bits/channel use), independent of the level of quantization at the receiver and of the CSI knowledge. For the case when $M\to\infty$ and $N<\infty$ is fixed ($N\to\infty$ and $M<\infty$ is fixed), the number of pilot symbols needed for 1-bit CSI estimation in order to achieve the capacity in \eqref{eq.cmimo_complex} (in \eqref{eq.cmimo_c}) scales with the number of receive antennas $N$ (transmit antennas $M$). It is interesting to see that the capacity limits in Theorems~\ref{theo.4} and \ref{theo.5} hold for any fixed $P>0$ and $P_p>0$. Specifically, the relation between $P$ and $M$ in the error-rate of Achievability Scheme 1, given in \eqref{q7}, is of the form $PM$. Hence, for any fixed $P>0$ and $M\to\infty$, $PM\to\infty$ holds and the error-rate of Achievability Scheme~1 goes to zero as $PM\to\infty$. On the other hand, the relation between $P_p$ and $M$ in the error-rate of Achievability Scheme 1, given in \eqref{q7}, can be written in the form $M(1-2p_\epsilon)$, where $p_\epsilon$ is the probability of having an incorrect 1-bit CSI estimate on a single channel, which depends on the pilot power $P_p$. Thereby, $P_p$, via $p_\epsilon$, reduces the effective number of transmit antennas from $M$ to $M(1-2p_\epsilon)$. However, since $p_\epsilon<1/2$ holds for any fixed $P_p>0$, see \eqref{er2} for $K=1$, the effective number of transmit antennas $M(1-2p_\epsilon)$ still satisfies $M(1-2p_\epsilon)\to\infty$, which in turn results in an error-rate that goes to zero as $M\to\infty$. A similar analysis holds also for Achievability Scheme~2. Since each receive/transmit antenna receives/transmits its information without requiring coordination with the other receive/transmit antennas, Achievability Scheme 1/2 can also be applied to multi-user massive MIMO, i.e., to a MIMO system where the receive/transmit antennas are implemented on non-cooperating distinct devices. Finally, it is interesting to note that Achievability Schemes 1 and 2 do not require channel coding at the transmitter. As a result, the latency that Achievability Schemes 1 and 2 achieve is one channel use. Hence, massive MIMO systems with 1-bit ADCs and 1-bit DACs may be a practical approach for achieving URLLC. \section{Conclusion} In this paper, we presented the capacities of the complex-valued AWGN massive MIMO system when the inputs, outputs, and noisy CSI are quantized to one bit of information, for the cases when $M\to\infty$ and $N$ is fixed, and $N\to\infty$ and $M$ is fixed. We showed that the capacities of the considered MIMO systems are $2N$ and $2M$ (bits per channel use) when $M\to\infty$ with $N$ fixed and $N\to\infty$ with $M$ fixed, respectively. In both cases, we showed that the capacity can be achieved in one channel use without using channel coding, which results in a latency of one channel use. Moreover, the derived capacities can be achieved with noisy 1-bit CSI at the transmitter-end or at the receiver-end, and without any CSI at the other end. \begin{appendices} \newpage \section{Proof of Theorem \ref{theo.4}}\label{app.1} \subsection{Converse} For the considered MIMO system, we have \begin{equation}\label{eq.36.01} \mm{I}(\bb x;\bb z|\bb G)=\mm{H}(\bb z|\bb G)-\mm{H}(\bb z|\bb x,\bb G)\leq\mm{H}(\bb z|\bb G)\leq 2N, \end{equation} where the last inequality follows due to the 1-bit quantized outputs. Hence, the capacity of this system cannot be larger than $2N$.
In the following, we prove that \eqref{eq.36.01} is asymptotically achievable when $M\to\infty$ and $N<\infty$ is fixed. \subsection{Achievability} Using Achievability Scheme 1, the transmitter sends $2N$ bits of information in each channel use. Hence, the rate of Achievability Scheme 1 is $2N$ bits per channel use. Now, we are only left to prove that the symbols received at the receiver can be decoded with a probability of error that vanishes when $M\to\infty$ and $N<\infty$ is fixed. Assume that $\bb s$ is transmitted and $\bb z$ received. Then, the probability of error can be bounded as \begin{align} \mm{P}_{\mm e}&= {\rm Pr}\{\bb z\neq \bb s\}\leq \sum_{n=1}^{N}\left(\mm{Pr}(z_{n}^R\neq s_{n}^R)+\mm{Pr}(z_{n}^I\neq s_{n}^I)\right) \label{eq.204a}\\ &= 2N \mm{Pr}\{z_{1}^R\neq s_{1}^R\} \label{eq.204b}, \end{align} where \eqref{eq.204a} follows from the union bound and \eqref{eq.204b} follows from symmetry, i.e., since $\mm{Pr}\{z_{n}^R\neq s_{n}^R\}=\mm{Pr}\{z_{k}^R\neq s_{k}^R\}=\mm{Pr}\{z_{n}^I\neq s_{n}^I\}=\mm{Pr}\{z_{k}^I\neq s_{k}^I\}$ holds. In the following, we derive a simplified expression for $\mm{Pr}\left\{z^R_1\neq s^R_1\right\}$. From \eqref{eq.2}, we can obtain the real-valued and imaginary-valued parts of the quantized received symbol at the first receive antenna, $z_{1}^R$ and $z_{1}^I$, as \begin{align} z_{1}^R&=\textrm{sign}\left(\sqrt{\frac{P}{M}}\sum_{m=1}^{M}\left(h_{1m}^Rx_{m}^R-h_{1m}^I x_{m}^I\right)+w_1^R\right) \label{aa1.1} ,\\ z_{1}^I&=\textrm{sign}\left(\sqrt{\frac{P}{M}}\sum_{m=1}^{M}\left(h_{1m}^Rx_{m}^I+h_{1m}^I x_{m}^R\right)+w_1^I\right). \label{aa1.12} \end{align} Only the transmit symbols from transmit antennas $m=1,2,...,M/N$ are intended for the first receive antenna, and the symbols coming from all other antennas act as interference. Having this in mind, (\ref{aa1.1}) and (\ref{aa1.12}) can be written as \begin{align} z_{1}^R&=\textrm{sign}\left(\sqrt{\frac{P}{M}}\left( \sum_{m=1}^{M/N} (h_{1m}^R x_{m}^R-h_{1m}^I x_{m}^I) +v^R_1\right)+w_1^R \right)\label{aa1.1-1} ,\\ z_{1}^I&=\textrm{sign}\left( \sqrt{\frac{P}{M}} \left(\sum_{m=1}^{M/N} (h_{1m}^R x_{m}^I+h_{1m}^I x_{m}^R) +v^I_1 \right)+w_1^I\right), \label{aa1.12-1} \end{align} where \begin{align} v_{1}^R&= \sum_{m=M/N+1}^{M}\left(h_{1m}^R x_{m}^R-h_{1m}^I x_{m}^I\right) \label{aab1.1} ,\\ v_{1}^I&= \sum_{m=M/N+1}^{M}\left(h_{1m}^R x_{m}^I+h_{1m}^I x_{m}^R\right) \label{aab1.12} \end{align} are the interference terms at the first receive antenna. Since $\sqrt{\frac{P}{M}}v_{1}^R$ and $\sqrt{\frac{P}{M}}v_{1}^I$ are zero-mean Gaussian distributed with variance $\frac{P}{2}\frac{N-1}{N}$, we can write \eqref{aa1.1-1} and \eqref{aa1.12-1} equivalently as \begin{align} z_{1}^R&=\textrm{sign}\left(\sqrt{\frac{P}{M}} \sum_{m=1}^{M/N} (h_{1m}^R x_{m}^R-h_{1m}^I x_{m}^I) + \hat w_1^R \right)\label{aa1.1-1.1} ,\\ z_{1}^I&=\textrm{sign}\left( \sqrt{\frac{P}{M}} \sum_{m=1}^{M/N} (h_{1m}^R x_{m}^I+h_{1m}^I x_{m}^R) + \hat w_1^I\right), \label{aa1.12-1.1} \end{align} where $\hat w_1^R$ and $\hat w_1^I$ are independent zero-mean additive white Gaussian noises, both with variance \begin{align}\label{eq_var1} \sigma_{\hat w}^2= \frac{P}{2}\frac{N-1}{N}+\frac{1}{2}= \frac{P(N-1)+N}{2N}.
\end{align} Now, for clarity of presentation, for a given $m$, we represent $h_{1m}^R x_{m}^R-h_{1m}^I x_{m}^I$ and $h_{1m}^R x_{m}^I+h_{1m}^I x_{m}^R$ in \eqref{aa1.1-1.1} and \eqref{aa1.12-1.1}, respectively, in matrix form as \begin{align}\label{eq_max} \begin{bmatrix} h_{1m}^R x_{m}^R-h_{1m}^I x_{m}^I \\ h_{1m}^R x_{m}^I+h_{1m}^I x_{m}^R \end{bmatrix} = \begin{bmatrix} h^R_{1m} & -h^I_{1m} \\ h^I_{1m} & h^R_{1m} \end{bmatrix} \begin{bmatrix} x^R_m \\ x^I_m \end{bmatrix}. \end{align} By inserting $x^R_m$ and $x^I_m $ from (\ref{aa1}) into (\ref{eq_max}), we obtain \begin{align}\label{eq_max1} \begin{bmatrix} h_{1m}^R x_{m}^R-h_{1m}^I x_{m}^I \\ h_{1m}^R x_{m}^I+h_{1m}^I x_{m}^R \end{bmatrix} =& \frac{1}{2} \begin{bmatrix} h^R_{1m} & -h^I_{1m} \\ h^I_{1m} & h^R_{1m} \end{bmatrix} \begin{bmatrix} g^R_{1m} & g^I_{1m} \\ -g^I_{1m} & g^R_{1m} \end{bmatrix} \begin{bmatrix} s^R_1 \\ s^I_1 \end{bmatrix} \nonumber\\ =&\frac{1}{2}\begin{bmatrix} g^R_{1m} h^R_{1m}+g^I_{1m} h^I_{1m} & g^I_{1m} h^R_{1m}-g^R_{1m} h^I_{1m} \\ -g^I_{1m} h^R_{1m}+g^R_{1m} h^I_{1m} & g^R_{1m} h^R_{1m}+g^I_{1m} h^I_{1m} \end{bmatrix} \begin{bmatrix} s^R_1 \\ s^I_1 \end{bmatrix}. \end{align} Now, there are four cases depending on whether $g^R_{1m}=g^I_{1m}$ or $g^R_{1m}=-g^I_{1m}$ holds, and depending on whether $s^R_1=s^I_1$ or $s^R_1=-s^I_1$ holds. If $g^R_{1m}=g^I_{1m}$ and $s^R_1=s^I_1$ hold, or if $g^R_{1m}=-g^I_{1m}$ and $s^R_1=-s^I_1$ hold, \eqref{eq_max1} simplifies to \begin{align}\label{eq_max2} \begin{bmatrix} h_{1m}^R x_{m}^R-h_{1m}^I x_{m}^I \\ h_{1m}^R x_{m}^I+h_{1m}^I x_{m}^R \end{bmatrix} = \begin{bmatrix} s^R_1 g^R_{1m} h^R_{1m} \\ s^I_1 g^I_{1m} h^I_{1m} \end{bmatrix}. \end{align} If $g^R_{1m}=g^I_{1m}$ and $s^R_1=-s^I_1$ hold, or if $g^R_{1m}=-g^I_{1m}$ and $s^R_1=s^I_1$ hold, \eqref{eq_max1} simplifies to \begin{align}\label{eq_max2.1} \begin{bmatrix} h_{1m}^R x_{m}^R-h_{1m}^I x_{m}^I \\ h_{1m}^R x_{m}^I+h_{1m}^I x_{m}^R \end{bmatrix} = \begin{bmatrix} s^R_1 g^I_{1m} h^I_{1m} \\ s^I_1 g^R_{1m} h^R_{1m} \end{bmatrix}. \end{align} On the other hand, we have the following, depending on whether the estimation is correct or not \begin{align}\label{eq.eq} g^\alpha_{1m} h^\alpha_{1m} =\left\{ \begin{array}{rl} |h^\alpha_{1m}| & \textrm{ if } h^\alpha_{1m} \textrm{ is correctly estimated}\\ -|h^\alpha_{1m}| & \textrm{ if } h^\alpha_{1m} \textrm{ is incorrectly estimated} \end{array} \right.\quad \alpha\in\{R,I\}. \end{align} Using \eqref{eq.eq}, we can write $h_{1m}^R x_{m}^R-h_{1m}^I x_{m}^I $ in \eqref{eq_max1} equivalently as \begin{align} h_{1m}^R x_{m}^R-h_{1m}^I x_{m}^I =\left\{ \begin{array}{rl} s_1^R|\hat h^R_{1m}| & \textrm{ if } h^R_{1m} \textrm{ is correctly estimated and } x_m^I=0 \\ & \textrm{ or } h^I_{1m} \textrm{ is correctly estimated and } x_m^R=0, \\ -s_1^R|\hat h^R_{1m}| & \textrm{ if } h^R_{1m} \textrm{ is incorrectly estimated and } x_m^I=0 \\ & \textrm{ or } h^I_{1m} \textrm{ is incorrectly estimated and } x_m^R=0, \\ \end{array} \right. \end{align} where $\hat h^R_{1m}$ is a zero-mean real-valued Gaussian random variable with variance $1/2$. Similarly, we can write $h_{1m}^R x_{m}^I+h_{1m}^I x_{m}^R $ in \eqref{eq_max1} equivalently as \begin{align} h_{1m}^R x_{m}^I+h_{1m}^I x_{m}^R =\left\{ \begin{array}{cc} s_1^I|\hat h^I_{1m}| & \textrm{ if } h^R_{1m} \textrm{ is correctly estimated and } x_m^R=0 \\ & \textrm{ or } h^I_{1m} \textrm{ is correctly estimated and } x_m^I=0, \\ -s_1^I|\hat h^I_{1m}| & \textrm{ if } h^R_{1m} \textrm{ is incorrectly estimated and } x_m^R=0 \\ & \textrm{ or } h^I_{1m} \textrm{ is incorrectly estimated and } x_m^I=0, \\ \end{array} \right. \end{align} where $\hat h^I_{1m}$ is a zero-mean real-valued Gaussian random variable with variance $1/2$. Without loss of generality, assume that there are $K^R$ incorrect estimates that influence $ h_{1m}^R x_{m}^R-h_{1m}^I x_{m}^I $ and $K^I$ incorrect estimates that influence $ h_{1m}^R x_{m}^I+h_{1m}^I x_{m}^R $. Then, we can write (\ref{aa1.1-1.1}) and (\ref{aa1.12-1.1}) equivalently as \begin{align} z_{1}^R&=\textrm{sign}\left(\sqrt{\frac{P}{M}} s_1^R \sum_{m=K^R+1}^{M/N } |\hat h_m^R| -\sqrt{\frac{P}{M}} s_1^R \sum_{m=1}^{K^R} |\hat h_m^R|+ \hat w_1^R\right) \label{aa1.1-5-1a} ,\\ z_{1}^I&=\textrm{sign}\left(\sqrt{\frac{P}{M}} s_1^I\sum_{m=K^I+1}^{M/N} |\hat h_m^I|-\sqrt{\frac{P}{M}} s_1^I\sum_{m=1}^{K^I} |\hat h_m^I| + \hat w_1^I\right). \label{aa1.12-5-1a} \end{align} Since the received real-valued symbol at the first antenna, $z_1^R$, is given by (\ref{aa1.1-5-1a}), $\mm{Pr}\{z_{1}^R\neq s_{1}^R\}$ is given by \begin{equation}\label{q1} \mm{Pr}\{z_{1}^R\neq s_{1}^R\}= \mm{Pr}\left( \textrm{sign}\left(\sqrt{\frac{P}{M}} s_1^R \sum_{m=K^R+1}^{M/N } |\hat h_m^R| -\sqrt{\frac{P}{M}} s_1^R \sum_{m=1}^{K^R} |\hat h_m^R|+ \hat w_1^R\right)\neq s_1^R\right). \end{equation} Setting $L=M/N$, \eqref{q1} can be written as \begin{align}\label{q3} &\mm{Pr}\{z_{1}^R\neq s_{1}^R\} = \sum_{k=0}^L \mm{Pr}\left(K^R=k\right) \int_{\hat h_1^R}\cdots\int_{\hat h_L^R}\nonumber\\ &\mm{Pr}\left( \textrm{sign}\left(\sqrt{\frac{P}{M}} s_1^R \sum_{m=k+1}^{L } |\hat h_m^R| -\sqrt{\frac{P}{M}} s_1^R \sum_{m=1}^{k} |\hat h_m^R|+ \hat w_1^R\right)\neq s_1^R \right) \prod_{m=1}^{L}f(\hat h_m^R)\mm d\hat h_m^R\nonumber\\ &=\sum_{k=0}^L \mm{Pr}\left(K^R=k\right) \int_{\hat h_1^R}\cdots\int_{\hat h_L^R}Q\left(\frac{\sqrt{\frac{P}{M}}\left( \sum\limits_{m=k+1}^{L } |\hat h_m^R| - \sum\limits_{m=1}^{k} |\hat h_m^R|\right)}{\sqrt{\frac{P(N-1)+N}{2N}}} \right) \prod_{m=1}^{L}f(\hat h_m^R)\mm d\hat h_m^R. \end{align} In \eqref{q3}, $\mm{Pr}\left(K^R=k\right)$ is the probability of receiving $k$ incorrect 1-bit CSI estimates that affect $z_1^R$, which can be found as \begin{equation}\label{q4} \mm{Pr}\left(K^R=k\right)=\binom{L}{k}p_{\epsilon}^{k}\left(1-p_{\epsilon}\right)^{L-k}, \end{equation} where $p_{\epsilon}$ is given in \eqref{er2} and has been derived in Appendix \ref{app3}. Due to the law of large numbers, as $L\to\infty$, we have the following asymptotic equality \begin{align}\label{q6} & \frac{1}{L} \left( \sum\limits_{m=k+1}^{L } |\hat h_m^R| - \sum\limits_{m=1}^{k} |\hat h_m^R|\right) \to \frac{1}{L} (L-k) E\{|\hat h_m^R|\}-\frac{1}{L} k E\{|\hat h_m^R|\}\nonumber\\ & = \frac{L-2k}{L} E\{|\hat h_m^R|\},\textrm{ as } L\to\infty . \end{align} As a result of \eqref{q6}, \eqref{q3} for $L\to\infty$ simplifies to \begin{align}\label{q7} \mm{Pr}\{z_{1}^R\neq s_{1}^R\}&\to\sum_{k=0}^L \mm{Pr}\left(K^R=k\right)\nonumber\\ &\times\int_{\hat h_1^R}\cdots\int_{\hat h_L^R}Q\left(\frac{\sqrt{\frac{P}{M}} (L-2k) E\{|\hat h_m^R|\}}{\sqrt{\frac{P(N-1)+N}{2N}}} \right)\prod_{m=1}^{L}f(\hat h_m^R)\mm d\hat h_m^R\nonumber\\ &= \sum_{k=0}^L \mm{Pr}\left(K^R=k\right) Q\left(\frac{\sqrt{\frac{P}{M}} (L-2k) E\{|\hat h_m^R|\}}{\sqrt{\frac{P(N-1)+N}{2N}}} \right) .
\end{align} Since $k\geq 0$, \eqref{q7} can be upper bounded for $L\to\infty$ as follows \begin{align}\label{q8} \mm{Pr}\{z_{1}^R\neq s_{1}^R\}&\to \sum_{k=0}^{L/2} \mm{Pr}\left(K^R=k\right) Q\left(\frac{\sqrt{\frac{P}{M}}(L-2k) E\{|\hat h_m^R|\}}{\sqrt{\frac{P(N-1)+N}{2N}}} \right)\nonumber\\ &+\sum_{k=L/2+1}^{L} \mm{Pr}\left(K^R=k\right) Q\left(\frac{\sqrt{\frac{P}{M}}(L-2k) E\{|\hat h_m^R|\}}{\sqrt{\frac{P(N-1)+N}{2N}}} \right)\nonumber\\ &\overset{(a)}{\leq} \sum_{k=0}^{L/2} \mm{Pr}\left(K^R=k\right)Q\left(\frac{\sqrt{\frac{P}{M}}(L-2k) E\{|\hat h_m^R|\}}{\sqrt{\frac{P(N-1)+N}{2N}}} \right)\nonumber\\ &+\sum_{k=L/2+1}^{L} \mm{Pr}\left(K^R=k\right)\nonumber\\ &\overset{(b)}{\leq} \sum_{k=0}^{L/2} \mm{Pr}\left(K^R=k\right)e^{-\beta^2(L-2k)^2/2}+\sum_{k=L/2+1}^{L} \mm{Pr}\left(K^R=k\right), \end{align} where $(a)$ follows since $Q(x)\leq 1$, $\forall x$, $(b)$ follows since for $x\geq 0$ \begin{equation}\label{q8.1} Q(x)\leq \frac{1}{2}e^{-x^2/2} \end{equation} holds, and $\beta$ is given by \begin{equation}\label{q9} \beta= \frac{\sqrt{\frac{P}{M}} E\{|\hat h_m^R|\}}{\sqrt{\frac{P(N-1)+N}{2N}}}. \end{equation} Now, by substituting \eqref{q4} into \eqref{q8}, we obtain for $L\to\infty$ \begin{align}\label{q10} \mm{Pr}\left\{z^R_1\neq s^R_1\right\}&\leq \sum_{k=0}^{L/2} \binom{L}{k} p_{\epsilon}^k(1-p_{\epsilon})^{L-k}e^{-\beta^2(L-2k)^2/2}+\sum_{k=L/2+1}^{L} \binom{L}{k} p_{\epsilon}^k(1-p_{\epsilon})^{L-k}\nonumber\\ &\overset{(c)}{\leq} \overbrace{\sum_{k=0}^{L/2}\mm{Pr}\left(K^R=k\right)e^{-\beta^2(L-2k)^2/2} }^{\mathcal O_1}+\overbrace{\sum_{k=L/2+1}^{L} 2^Lp_{\epsilon}^{k}(1-p_{\epsilon})^{L-k}}^{\mathcal O_2}, \end{align} where $p_{\epsilon}$ is given in \eqref{er2} and $(c)$ follows from $\binom{L}{k}\leq 2^L$. We can upper bound $\mathcal O_2$ as \begin{align}\label{q10.2} \mathcal O_2&= \sum_{k=L/2+1}^{L} \binom{L}{k} p_{\epsilon}^k(1-p_{\epsilon})^{L-k}< 2^L \frac{L}{2}\mm{max}\left\{ p_{\epsilon}^k(1-p_{\epsilon})^{L-k}\right\}< 2^L \frac{L}{2}p_{\epsilon}^{L/2}(1-p_{\epsilon})^{L/2}\nonumber\\ &=\frac{L}{2}\left(2^2 p_{\epsilon}(1-p_{\epsilon})\right)^{L/2} \to 0, \end{align} since $4 p_{\epsilon}(1-p_{\epsilon})<1$ for $p_{\epsilon}<1/2$. To upper bound $\mathcal O_1$, we consider two cases. Let $\mathcal E_1$ and $\mathcal E_2$ be defined as \begin{align} \mathcal E_1=\left\{k: 0\leq k\leq L/2; \; k/L\nrightarrow \frac{1}{2} \textrm{ as }L\to\infty\right\}, \end{align} \begin{align} \mathcal E_2=\left\{k: 0\leq k\leq L/2; \; k/L\to \frac{1}{2} \textrm{ as }L\to\infty\right\}. \end{align} Then, using \eqref{q4}, $\mathcal O_1$ is upper bounded as \begin{align}\label{q12} \mathcal O_1&\leq\sum_{k\in \mathcal E_1} \mm{Pr}\left(K^R=k\right)e^{-\beta^2(L-2k)^2/2}+\sum_{k\in \mathcal E_2} 2^{L}p_{\epsilon}^k(1-p_{\epsilon})^{L-k}e^{-\beta^2(L-2k)^2/2}. \end{align} Now, the first sum in \eqref{q12} is upper bounded as \begin{align}\label{q12s} \sum_{k\in \mathcal E_1} \mm{Pr}\left(K^R=k\right)e^{-\beta^2(L-2k)^2/2} \leq\sum_{k\in \mathcal E_1}e^{-\beta^2L^2(1-2\frac{k}{L})^2/2}\to 0 \hspace{4mm}\mm{as}\hspace{4mm}L\to\infty.
\end{align} On the other hand, the second sum in \eqref{q12} is upper bounded as \begin{align}\label{q13} &\sum_{k\in \mathcal E_2} 2^{L}p_{\epsilon}^k(1-p_{\epsilon})^{L-k}e^{-\beta^2(L-2k)^2/2} = \sum_{k\in \mathcal E_2} 2^{L}p_{\epsilon}^{L\frac{k}{L}}(1-p_{\epsilon})^{L(1-k/L)}e^{-\beta^2L^2(1-2k/L)^2/2} \nonumber\\ &\to \sum_{k\in \mathcal E_2} 2^{L}p_{\epsilon}^{L\frac{1}{2}}(1-p_{\epsilon})^{L(1-1/2)}e^{-\beta^2L^2(1-1)^2/2} \nonumber\\ &= |\mathcal E_2| (2^2 p_{\epsilon} (1-p_{\epsilon}))^{L/2} \nonumber\\ &\leq \frac{L}{2} (2^2 p_{\epsilon} (1-p_{\epsilon}))^{L/2} \to 0 \end{align} since $4 p_{\epsilon}(1-p_{\epsilon})<1$ for $p_{\epsilon}<1/2$. Combining \eqref{q10.2}, \eqref{q12}, \eqref{q12s}, and \eqref{q13}, we obtain $\mm{Pr}\left\{z^R_1\neq s^R_1\right\}\to 0$ as $L\to\infty$, which completes the proof. \section{Proof of Theorem \ref{theo.5}}\label{app.2} \subsection{Converse} For the considered 1-bit quantized MIMO system, we have \begin{equation}\label{eq10.0--1} \mm I(\bb x;\bb z|\bb G)=\mm H(\bb x|\bb G)-\mm H(\bb x|\bb z,\bb G)\leq\mm H(\bb x|\bb G)\leq 2M, \end{equation} where the last inequality follows due to the 1-bit quantized inputs. Hence, the capacity of this system cannot be larger than $2M$. In the following, we prove that \eqref{eq10.0--1} is asymptotically achievable when $N\to\infty$ and $M<\infty$ is fixed. \subsection{Achievability} Using Achievability Scheme 2, the transmitter sends $2M$ bits of information in each channel use. Hence, the rate of Achievability Scheme 2 is $2M$ bits per channel use. Now, we are only left to prove that the symbols received at the receiver can be decoded with a probability of error that vanishes when $N\to\infty$ and $M<\infty$ is fixed. Assume that $\bb s$ is transmitted and $\bb z$ received. From $\bb z$, we find $\bb{\hat s}$ using \eqref{eq_dec-s}. Then, the probability of error can be bounded as \begin{align} \mm{P}_{\mm e}&= {\rm Pr}\{\bb{\hat s} \neq \bb s\}\leq \sum_{m=1}^{M}\left(\mm{Pr}\{\hat s_{m}^R\neq s_{m}^R\}+\mm{Pr}\{\hat s_{m}^I\neq s_{m}^I\}\right) \label{eq.204a--1}\\ &= 2M \mm{Pr}\{\hat s_{1}^R\neq s_{1}^R\}, \label{eq.204b--1} \end{align} where \eqref{eq.204a--1} follows from the union bound and \eqref{eq.204b--1} follows from symmetry. Now, $\mm{Pr}\{\hat s_{1}^R\neq s_{1}^R\}$ is given by \begin{align}\label{eq26} &\mm{Pr}\{\hat s_{1}^R\neq s_{1}^R\}= \mm{Pr}\left\{\mm{sign}\left(\sum_{n=1}^{N} g_{n1}^R z_{n}^R\right)\neq s_{1}^R\right\}\nonumber\\ &=\mm{Pr}\left(\mm{sign}\left(\sum_{n=1}^{N} g_{n1}^R\mm{sign}\left[\sqrt{\frac{P}{2M}}\sum_{m=1}^{M}(h_{nm}^R s_{m}^R-h_{nm}^I s_{m}^I)+w_n^R\right]\right)\neq s_{1}^R\right)\nonumber\\ &=\mm{Pr}\Bigg(\mm{sign}\Bigg(\sum_{n=1}^{N} g_{n1}^R\mm{sign}\Bigg[\sqrt{\frac{P}{2M}} h_{n1}^R s_{1}^R \nonumber\\ &\qquad\qquad\qquad +\sqrt{\frac{P}{2M}}\sum_{m= 2}^{M}h_{nm}^R s_{m}^R -\sqrt{\frac{P}{2M}}\sum_{m= 1}^{M} h_{nm}^I s_{m}^I +w_n^R\Bigg]\Bigg)\neq s_{1}^R\Bigg). \end{align} The estimate of a given $h_{n1}^R$ is correct with probability $1-p_{\epsilon}$, in which case $g_{n1}^R=\mm{sign}(h_{n1}^R)$ holds, and is incorrect with probability $p_{\epsilon}$, in which case $g_{n1}^R=-\mm{sign}(h_{n1}^R)$ holds. Without loss of generality, assume that $h_{n1}^R$, for $n=1,2,...,N-j$, are correctly estimated and $h_{n1}^R$, for $n=N-j+1,...,N$, are incorrectly estimated, where $j$ is an RV that takes values from $0$ to $N$.
Then, \eqref{eq26} can be written as \begin{align}\label{ea14} & \mm{Pr}\{\hat s_{1}^R\neq s_{1}^R\} =\sum_{j=0}^{N}p_{\epsilon}^j(1-p_{\epsilon})^{N-j}\binom{N}{j} \nonumber\\ &\times\mm{Pr}\Bigg\{\mm{sign}\Bigg(\sum_{n=1}^{N-j}\mm{sign}(h_{n1}^R) \mm{sign}\Bigg[\sqrt{\frac{P}{2M}} \left(h_{n1}^R s_{1}^R +\sum_{m= 2}^{M}h_{nm}^R s_{m}^R -\sum_{m= 1}^{M} h_{nm}^I s_{m}^I\right)+w^R_n\Bigg] \nonumber\\ &-\sum_{n=N-j+1}^{N}\mm{sign}(h_{n1}^R)\nonumber\\ &\times\mm{sign}\Bigg[\sqrt{\frac{P}{2M}} \left(h_{n1}^R s_{1}^R +\sum_{m= 2}^{M}h_{nm}^R s_{m}^R -\sum_{m= 1}^{M} h_{nm}^I s_{m}^I \right)+w^R_n\Bigg]\Bigg)\neq s_{1}^R\Bigg\}. \end{align} Since $\mm{sign}(a)\mm{sign}(b)=\mm{sign}(ab)$ holds, we can write \eqref{ea14} as \begin{align}\label{ea15} & \mm{Pr}\{\hat s_{1}^R\neq s_{1}^R\} =\sum_{j=0}^{N}\binom{N}{j}p_{\epsilon}^j(1-p_{\epsilon})^{N-j} \nonumber\\ &\times\mm{Pr}\Bigg\{\mm{sign}\Bigg(\sum_{n=1}^{N-j} \mm{sign}\Bigg[\sqrt{\frac{P}{2M}} |h_{n1}^R| s_{1}^R \nonumber\\ &+\mm{sign}(h_{n1}^R) \left\{\sqrt{\frac{P}{2M}}\left(\sum_{m= 2}^{M}h_{nm}^R s_{m}^R - \sum_{m= 1}^{M} h_{nm}^I s_{m}^I\right)+ w^R_n\right\}\Bigg] \nonumber\\ &-\sum_{n=N-j+1}^{N} \mm{sign}\Bigg[\sqrt{\frac{P}{2M}} |h_{n1}^R| s_{1}^R \nonumber\\ &+\mm{sign}(h_{n1}^R)\left\{ \sqrt{\frac{P}{2M}}\left(\sum_{m= 2}^{M}h_{nm}^R s_{m}^R -\sum_{m= 1}^{M} h_{nm}^I s_{m}^I\right) + w^R_n\right\}\Bigg]\Bigg)\neq s_{1}^R\Bigg\} \nonumber\\ % & = \sum_{j=0}^{N}\binom{N}{j}p_{\epsilon}^j(1-p_{\epsilon})^{N-j}\mm{Pr}\Bigg\{\mm{sign}\Bigg(\sum_{n=1}^{N-j} \mm{sign}\Bigg[\sqrt{\frac{P}{2M}} |h_{n1}^R| s_{1}^R + \hat w_n^R\Bigg]\nonumber\\ &-\sum_{n=N-j+1}^{N} \mm{sign}\Bigg[\sqrt{\frac{P}{2M}} |h_{n1}^R| s_{1}^R + \hat w^R_n\Bigg]\Bigg)\neq s_{1}^R\Bigg\}, \end{align} where \begin{align}\label{ea15a} \hat w_n^R= \mm{sign}(h_{n1}^R)\left\{\sqrt{\frac{P}{2M}}\sum_{m= 2}^{M}h_{nm}^R s_{m}^R - \sqrt{\frac{P}{2M}}\sum_{m= 1}^{M} h_{nm}^I s_{m}^I+ w_n^R\right\} \end{align} is a zero-mean Gaussian RV with variance \begin{align}\label{ea15b} \sigma^2_{\hat w}= \frac{P}{2M} \frac{2M-1}{2} + \frac{1}{2} . \end{align} Since $\mm{sign}(a)=\mm{sign}(a b)$, for any $b>0$, we can write \eqref{ea15} as \begin{align}\label{ea15-1} \mm{Pr}\{\hat s_{1}^R\neq s_{1}^R\} &= \sum_{j=0}^{N}\binom{N}{j}p_{\epsilon}^j(1-p_{\epsilon})^{N-j} \mm{Pr}\Bigg\{\mm{sign}\Bigg(\frac{1}{N}\sum_{n=1}^{N-j} \mm{sign}\Bigg[\sqrt{\frac{P}{2M}} |h_{n1}^R| s_{1}^R + \hat w_n^R\Bigg]\nonumber\\ &-\frac{1}{N} \sum_{n=N-j+1}^{N} \mm{sign}\Bigg[\sqrt{\frac{P}{2M}} |h_{n1}^R| s_{1}^R + \hat w_n^R\Bigg]\Bigg)\neq s_{1}^R\Bigg\}. \end{align} Now, the sums in \eqref{ea15-1} for $N\to\infty$ can be written as \begin{align} \frac{1}{N} \sum_{n=1}^{N-j} \mm{sign}\Bigg[\sqrt{\frac{P}{2M}} |h_{n1}^R| s_{1}^R + \hat w_n^R\Bigg] &=\frac{N-j}{N}\frac{1}{N-j} \sum_{n=1}^{N-j} \mm{sign}\Bigg[\sqrt{\frac{P}{2M}} |h_{n1}^R| s_{1}^R + \hat w_n^R\Bigg]\nonumber\\ &\to (1-\alpha_j) E\left\{\mm{sign}\Bigg[\sqrt{\frac{P}{2M}} |h_{n1}^R| s_{1}^R + \hat w_n^R\Bigg]\right\} , \end{align} \begin{align} \frac{1}{N} \sum_{n=N-j+1}^{N} \mm{sign}\Bigg[\sqrt{\frac{P}{2M}} |h_{n1}^R| s_{1}^R + \hat w_n^R\Bigg] &=\frac{j}{N}\frac{1}{j} \sum_{n=N-j+1}^{N} \mm{sign}\Bigg[\sqrt{\frac{P}{2M}} |h_{n1}^R| s_{1}^R + \hat w_n^R\Bigg]\nonumber\\ &\to \alpha_j E\left\{\mm{sign}\Bigg[\sqrt{\frac{P}{2M}} |h_{n1}^R| s_{1}^R + \hat w_n^R\Bigg]\right\} , \end{align} where \begin{align}\label{eq.ex3} \alpha_j=\lim_{N\to\infty} \frac{j}{N}.
\end{align} Hence, we can write \eqref{ea15-1} for $N\to\infty$ as \begin{align}\label{ea15-2} \mm{Pr}\{\hat s_{1}^R\neq s_{1}^R\} &\to \sum_{j=0}^{N}\binom{N}{j}p_{\epsilon}^j(1-p_{\epsilon})^{N-j}\nonumber\\ &\mm{Pr}\Bigg\{\mm{sign}\Bigg((1-\alpha_j) E\left\{\mm{sign}\Bigg[\sqrt{\frac{P}{2M}} |h_{n1}^R| s_{1}^R + \hat w_n^R\Bigg]\right\}\nonumber\\ &-\alpha_j E\left\{\mm{sign}\Bigg[\sqrt{\frac{P}{2M}} |h_{n1}^R| s_{1}^R + \hat w_n^R\Bigg]\right\} \Bigg) \neq s_{1}^R\Bigg\} \nonumber\\ % &=\sum_{j=0}^{N}\binom{N}{j}p_{\epsilon}^j(1-p_{\epsilon})^{N-j}\nonumber\\ &\times \mm{Pr}\left\{\mm{sign}\Bigg(E\left\{\mm{sign}\Bigg[\sqrt{\frac{P}{2M}} |h_{n1}^R| s_{1}^R + \hat w_n^R\Bigg]\right\} (1-2\alpha_j ) \Bigg) \neq s_{1}^R\right\} \nonumber\\ % &=\sum_{j=0}^{N}\binom{N}{j}p_{\epsilon}^j(1-p_{\epsilon})^{N-j} \mm{Pr}\left\{\mm{sign}\bigg(s_{1}^R (1-2\alpha_j ) \bigg) \neq s_{1}^R\right\} \nonumber\\ % &=\sum_{j=0}^{N}\binom{N}{j}p_{\epsilon}^j(1-p_{\epsilon})^{N-j} \mm{Pr}\left\{\mm{sign} (1-2\alpha_j ) \neq 1\right\} \nonumber\\ % &\overset{(a)}{=}\sum_{j=\frac{N}{2}}^{N}\binom{N}{j}p_{\epsilon}^j(1-p_{\epsilon})^{N-j}\overset{(b)}{\leq}\frac{N}{2}\binom{N}{j}p_{\epsilon}^{N/2}(1-p_{\epsilon})^{N/2}\nonumber\\ &\overset{(c)}{\leq}\frac{N}{2}2^N p_{\epsilon}^{N/2}(1-p_{\epsilon})^{N/2}=\frac{N}{2}\left(4p_{\epsilon}(1-p_{\epsilon})\right)^{N/2}\overset{(e)}{\to} 0, \textrm{ as } N\to\infty, \end{align} where $(a)$ holds since $\mm{Pr}\left\{\mm{sign} (1-2\alpha_j ) \neq 1\right\}=0$ for $j<\frac{N}{2}$ and $\mm{Pr}\left\{\mm{sign} (1-2\alpha_j ) \neq 1\right\}=1$ for $j=\frac{N}{2},\dots,N$; $(b)$ holds since $p_{\epsilon}<\frac{1}{2}$; $(c)$ holds since $\binom{N}{j}<2^N$; and $(e)$ holds since $4p_{\epsilon}(1-p_{\epsilon})<1$. \section{Probability of Erroneous 1-bit CSI Estimation}\label{app3} In the following, we find the probability of receiving an erroneous 1-bit CSI estimate from $K$ pilot transmissions over the real-valued or imaginary-valued parts of the channel, when the symbol $x=\sqrt{P_p}$ is transmitted from one of the transmit/receive antennas. Since the distributions of the real-valued and imaginary-valued parts of the channel are identical, the following derivation holds for both the real-valued and imaginary-valued 1-bit CSI estimates. The probability of receiving an erroneous 1-bit CSI estimate $z_n$, where $z_n\neq\mm{sign}(h_{mn})$ holds, by employing a single pilot transmission over the real-valued or imaginary-valued channel $h_{mn}$ is given by \begin{equation}\label{er1} \begin{split} &\mm{Pr}\left(z_n\neq\mm{sign}(h_{mn})\right)= \mm{Pr}\left(\mm{sign}(h_{mn})\neq\mm{sign}\left(\sqrt{P_p}h_{mn}+n_n\right)\right)\\ &= \int_{-\infty}^{+\infty}\mm{Pr}\left(\mm{sign}(h_{mn})\neq\mm{sign}\left(\sqrt{P_p} h_{mn}+n_n\right)\big|h_{mn}\right)f_{h_{mn}}( h_{mn})\mm dh_{mn}\\ & =\int_{-\infty}^{+\infty}Q\left(\sqrt{P_p}| h_{mn}|\right)f_{h_{mn}}( h_{mn})\mm dh_{mn} =\frac{1}{2\pi}\int_{-\infty}^{+\infty}\int_{\sqrt{P_p}|h_{mn}|}^{+\infty}e^{-\frac{u^2+ h^2_{mn}}{2}}\mm du\mm dh_{mn}\\ &=\frac{1}{\pi}\int_{0}^{+\infty}\int_{\sqrt{P_p} h_{mn}}^{+\infty}e^{-\frac{u^2+ h^2_{mn}}{2}}\mm du\mm dh_{mn}=\frac{1}{\pi}\int_{\tan^{-1}\left(\sqrt{P_p}\right)}^{\frac{\pi}{2}}\int_{0}^{+\infty}e^{-\frac{r^2}{2}}r\mm dr\mm d\theta\\ &=\frac{1}{2}-\frac{\tan^{-1}\left(\sqrt{P_p}\right)}{\pi}.
\end{split} \end{equation} Now, when the estimate of the same channel is acquired from $K$ pilot symbols, such that the final 1-bit CSI is decided based on a majority rule, an erroneous 1-bit CSI estimate of the channel occurs when half or more than half of the $K$ pilot symbols are erroneously estimated. Since the probability of an erroneous 1-bit CSI estimate from one pilot symbol was found in \eqref{er1}, the probability of receiving an erroneous 1-bit CSI estimate from $K$ pilot symbols based on the majority rule is given by \begin{equation}\label{er2} p_{\epsilon}=\sum_{j=\frac{K}{2}}^{K}\binom{K}{j}\left(\frac{1}{2}-\frac{\arctan\left(\sqrt{P_p}\right)}{\pi}\right)^j\left(\frac{1}{2}+\frac{\arctan\left(\sqrt{P_p}\right)}{\pi}\right)^{K-j}. \end{equation} \end{appendices}
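As a numerical sanity check of the above analysis, the closed-form single-pilot error probability in \eqref{er1} and the decoding rule \eqref{eq_dec-s} of Achievability Scheme 2 can both be exercised by a short Monte Carlo experiment. The following sketch is illustrative only (the antenna numbers, powers, and trial count are arbitrary assumptions):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
M, N, P, Pp, trials = 4, 512, 1.0, 2.0, 500

def csign(v):
    return np.where(v.real >= 0, 1.0, -1.0) + 1j * np.where(v.imag >= 0, 1.0, -1.0)

def cn(*shape):
    """i.i.d. CN(0,1) samples."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

codeword_errors, flip_rates = 0, []
for _ in range(trials):
    H = cn(N, M)
    G = csign(np.sqrt(Pp) * H + cn(N, M))          # 1-bit CSI with K = 1 pilot
    flip_rates.append(np.mean(G.real != np.sign(H.real)))
    s = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=M)
    z = csign(np.sqrt(P / M) * H @ (s / np.sqrt(2)) + cn(N))
    s_hat = np.sign(G.real.T @ z.real) + 1j * np.sign(G.imag.T @ z.imag)
    codeword_errors += np.any(s_hat != s)

print("empirical single-pilot flip rate:", np.mean(flip_rates))
print("closed form (er1):", 0.5 - np.arctan(np.sqrt(Pp)) / np.pi)
print("codeword error rate:", codeword_errors / trials)  # shrinks as N grows
\end{verbatim}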
\section{Introduction} Our world has become vastly dependent on information technology. Millions of devices communicate every second in cyberspace. We are now connected to one another to a degree that seemed inconceivable just 20 years ago, but with so many connections, we are confronted with just as much, if not more, vulnerability; Hubbard \& Seiersen, in their book \emph{How to Measure Anything in Cybersecurity Risk} \cite{Hubbard2016}, lay out the case that the global attack surface is increasing from at least four perspectives: (1) the number of people on the Internet, (2) the number of online resources that each person is consuming, (3) the vulnerabilities that come with those people and resources, and (4) the risk from building online services on top of one another, which could result in a ``breach cascade". Modern industrial systems and critical infrastructure should account for cyber threats, especially when corporate organizations, national economies, and public safety have been put at risk \cite{Newhouse2017,Blythe2013}. As a result of these omnipresent risks, laws and standards have been created, such as NERC CIP-003-7, IEC 62443, and ISO 27001, that attempt to mitigate the risks posed by cyberattacks. These international standards acknowledge that among the weakest points in our security are not the technologies themselves, but the \emph{people} behind the technologies. People often lack the knowledge and training to avoid even the simplest of hacker schemes, such as social engineering or phishing attacks. Despite these requirements and all the best efforts of IT Security departments, organizations are still only as strong as the employees who regularly engage with the technologies. People are the front line of defense against cyber threats---in particular, the engineers, managers, and other staff who interact directly with information systems and their security measures on a daily basis. Therefore, these international standards require that such personnel are well-trained, that they are aware of the risks, and that they are prepared and have the resources to mitigate those risks. Unfortunately, this is as far as mandates, requirements, and laws can go; companies have to interpret and implement them, as well as measure whether the training yields the desired results. If there is an attack, and it is not averted because employees did not receive the right training, then it will have already been too late. The question is, how do we train companies' most valuable resource, their employees? And how do we ensure that the training has been efficient and cost-effective? National institutions exist that are dedicated to the sole purpose of education (the Department of Education, USA; Bundesministerium f{\"u}r Bildung und Forschung, Germany; etc.). But what this work needs to take into consideration is that the people in need of education are (1) living in the Age of Information, a world different from anything that has ever existed before; and (2) not children being prepared for the workplace, but adults who are already \emph{in} the workplace. The old paradigms are falling out of favor, and it would be prudent to take advantage of the resources that are available, i.e., computers and the Internet. Security training needs to be close to the context employees work in. Therefore, our training content was based on interviews with security experts working in the company itself. Training also needs to be cost-effective and scale well, which is why we selected an online platform.
In addition, understanding computers to be the main method for delivery (as opposed to lectures or in-person presentations), the media should take full advantage of the properties of a computer. Given Mayer's extensive work on learning with multimedia, this paper will explore a new method, which shows promise for training employees effectively by reducing cognitive load \cite{Mayer2003,Mayer2002}. Our method uses a vignette modality so that participants have to further engage with the comic in a way that implements situated learning \cite{Brown1988}. Furthermore, we evaluate the claims made by the developers of the comic-creation platform, especially with regard to the time and skills that are required to create a comic.\footnote{The following will be used interchangeably: comic, story, interactive stories, graphical vignettes, etc.} Our requirements for providing an online educational tool for cybersecurity topics are: \begin{itemize}[leftmargin=+.48in] \item[ {\bf REQ1:}] Trainees shall be entertained during the course of the training \item[ {\bf REQ2:}] The comics shall not be perceived as disturbing or even cause stress during game play \item[ {\bf REQ3:}] Trainees shall recognize the context and feel compelled to show they know the right answer \item[ {\bf REQ4:}] Trainees shall understand how to use the comics intuitively \end{itemize} In short, we present and evaluate a method for developing educational cybersecurity comics and then using them to train employees in a company. We present the method of development and the subsequent evaluation, which was conducted as part of a capture the flag (CTF) event, with 20 employees of a major industry player. Although, in general, the comic approach did not fulfill our requirements as part of a CTF, we think that the lessons learned can and should be shared with the scientific community. We close the paper with the participants' reasoning, critical discussion of the results, and practical advice. \section{Background} Learning theories have come a long way since Socrates; but most of the empirically valid work has been done after 1960 \cite{Glaser1965TeachingII}. This paper is concerned with how to design instructional material, and how it can be applied to the company and other institutions like it. Wilson \& Cole (\citeyear{Wilson1996CognitiveModels}) wrote a chapter on the progress of Instructional Design, as understood through the lens of cognitive models: from the 1960's, with Behavioral psychology; through Information Processing psychology in the 1970's and 1980's; to the 1990's and the present, which emphasizes the construction of knowledge and the role of social mediation. The state of the art is now considered to be "Constructivism," which has been the predominant learning theory over the past three decades \cite{Fosnot1996CONSTRUCTIVISM:Learning}. Constructivism presupposes two main tenets: (1) knowledge is a construction that comes from an active interaction with the world, and (2) that it is inherently an adaptive process that is not necessarily concerned with the ontological nature of reality. Building on these facts, we next consider the visceral experience of the learner. Sweller (\citeyear{Sweller1994CognitiveDesign}) considered the factors that make learning more or less difficult; especially when two subjects contain similar amounts of information, but one requires much more effort to understand, while the other seems to be readily absorbed.
He found that subject matter can be presented in such a way that the information enters several channels and thus diminishes the amount of strain, or load, put on "cognition." This paper attempts to address human learning at the psychological level; therefore, it looks at the substrates of learning, i.e., the media (and multimedia) that pass information to the learner. This paper combines the multimedia theory of learning and a Constructivist approach so that learners will be shown information that they can readily comprehend, then engage with to form even stronger memories about what to do in various cybersecurity situations. We address the two Constructivist tenets by (1) having our participants make active choices in the stories they interact with and (2) utilizing fictional stories that express larger truths about cybersecurity. Additionally, learners will form conceptions about the consequences of their decisions, something that is missing from many training sessions, and is found to be missing in education generally. \section{Related Work} Comics have been used before to teach about complex issues, such as training soldiers on matters of military leadership \cite{gordon2006}. These comics consisted of four panels, the first three setting up a problem to be solved or a situation that may be encountered in the field.
The last panel then was empty, and participants would fill in a response. Finally, there was a forum that included previous responses so that these could be discussed. Our work does not utilize forums, and the interaction is multiple choice, as opposed to a short answer response. This allows for more immediate feedback and for a variety of outcomes to be explored. Security training has often been perceived as uninteresting and even boring. Therefore, several researchers have been developing serious games to combine entertainment and training in this field. \textit{CyberCIEGE}~\cite{irvine2005cyberciege} is a role playing video game, where players act as an information security decision maker of a company. Players have to minimize risk, while continuing to work. \textit{PlayingSafe}~\cite{newbould2009playing} consists of multiple choice questions about cybersecurity, using the typical mechanics of a board game. \textit{SEAG}~\cite{olanrewajusocial} also uses multiple choice questions, and in addition players have to match cybersecurity terms with respective pictures. Our work differs since none of these approaches use comic-based vignettes based on real industry concerns. And in terms of generating comics, Microsoft has done some very interesting work on automatically generated comics based on cursory language analysis \cite{Kurlander1996}. Their goals were not to produce training material, but to visually represent chat environments to enrich the user experience and make electronic chat rooms more dynamic and interesting. While this technology may be incorporated into future renditions of our work, all of our comics were generated "by hand." The case for using such a platform as Comic-BEE to create comics is laid out in \cite{Ledbetter2016CySComYouth}. The stated goals and objectives therein were to introduce young people to cybersecurity content in a fun and engaging way, in an effort to encourage them to pursue cybersecurity career paths. Among the reasons they cite are that (1) educational comics can be appealing on multiple levels, including engaging and increasing interest in readers; and (2) young people are not being sufficiently exposed to cybersecurity concepts, even while they are ever more surrounded by technology. What we would like to do is take these concepts into an industrial workplace to test whether the claims of interest and engagement hold up for a more mature audience. \section{Creating our Interactive Graphical Stories} To begin this study, the authors sought a platform that was designed for creating educational comics in a quick and easy way. Learning material was then generated that was relevant to the working environment of the target participants. The information to create the training material was gathered from interviews with security experts from the industry. The gathered information was consolidated and concentrated into 5 main topic areas, each of which would become its own comic. The developed comics were deployed along with a survey to gather qualitative data about how the comics were received by the participants.
\subsection{The Comic Development Platform} \label{sec:comicplatform} The comic platform is a tool developed by Secure Decisions, a division of Applied Visions, Inc., which has been sponsored by the Science and Technology division of Homeland Security in the United States.\footnote{Contract \#HHSP233201600057C, retrieved from https://govtribe.com/vendor/applied-visions-inc-northport-ny} This platform was chosen as it is aligned with the goals of this work and also allows the implementation of the requirements defined in this paper. The Comic-BEE team was contacted and they provided the requisite credentials for creating comics on their site. The platform was evaluated with a first attempt to create a small sample comic. The goal was also to learn the various commands and features of the platform. The following sub-sections describe the process for creating a comic, divided according to the four areas for creating comics within the platform itself: Plan Lesson, Write Script, Layout Storyboard, and Create Final Comic. \subsubsection{Plan Lesson} The comic creator has to configure the project for learning objectives; either the creator's own, or those from the NICE Cybersecurity Workforce Framework~\cite{Newhouse2017}. The comic creator also chooses whether the scores will be saved to the Comic-BEE servers (wherein choices are captured, along with all the relevant data for scoring the comics); or whether readers will be shown their results, in which case the scores are not saved to a server. Optionally, if the creator chooses that scores will be captured, the creator can choose whether to prompt readers with a phrase of their choice; for example, ``Please tell us why this was your choice~:)''. The comic creator also has to define the topic, select what sort of audience will be expected, what kind of learning environment the readers will be in, the knowledge readers should take away, prior knowledge, and the readers' expertise level (before and after exploring the comic). There is also a space here where any known limitations of readers can be enumerated. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/pmstoryboard.png} \captionof{figure}{Black and White working space in \textbf{Layout Storyboard}} \label{fig:pmstoryboard} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/pmcfc.png} \captionof{figure}{Full-color working space in \textbf{Create Final Comic}} \label{fig:pmcfc} \end{figure} \subsubsection{Write Script} \label{subsec:writescript} In this section, scenes and choices are entered and organized. There are options to create scenes based on the Learning Objectives or Real Life Scenarios. If the creator selects either of them, a new scene is automatically created that the creator can Edit, Duplicate, or Delete. Here, the scene is named; designated as Start, Normal, or Ending (different from the Real Life Scenario prompt); the script is written; the question posed; and choices enumerated and rated. Choices can be rated at five levels of expertise, from Apprentice to Journeyman, and from 1 to 5 for quality, Worst to Best. It is here, in the choices, where specific learning objectives are assigned. A minimum of one choice is required for Start and Normal scenes, and Ending scenes do not have choices; the maximum number of choices can exceed 20; though, given the literature on the optimal number of item choices \cite{Vyas2008,Rodriguez2005,Nwadinigwe2013}, three was found to be sufficient.
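To make this scene-and-choice structure concrete, the following is a minimal sketch of how such a branching comic could be represented in code. It is purely illustrative: the type names and fields are our own assumptions and do not reflect Comic-BEE's internal data model. \begin{verbatim}
# Illustrative sketch only; not Comic-BEE's actual data model.
from dataclasses import dataclass, field
from enum import Enum

class Kind(Enum):
    START = "Start"
    NORMAL = "Normal"
    ENDING = "Ending"  # Ending scenes have no choices

@dataclass
class Choice:
    text: str
    next_scene: str  # name of the scene this choice leads to
    expertise: int   # 1..5, Apprentice .. Journeyman
    quality: int     # 1..5, Worst .. Best

@dataclass
class Scene:
    name: str
    kind: Kind
    script: str
    question: str = ""
    choices: list = field(default_factory=list)

def validate(scenes):
    """Check the constraints described above: Start and Normal scenes
    need at least one choice; Ending scenes must have none."""
    for s in scenes.values():
        if s.kind is Kind.ENDING and s.choices:
            raise ValueError(f"Ending scene {s.name!r} has choices")
        if s.kind is not Kind.ENDING and not s.choices:
            raise ValueError(f"Scene {s.name!r} needs a choice")
        for c in s.choices:
            if c.next_scene not in scenes:
                raise ValueError(f"Unknown scene {c.next_scene!r}")
\end{verbatim}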
\subsubsection{Layout Storyboard} \label{subsec:cpstoryboard} The storyboard section is the first in which the creator can arrange characters and assets in panels and storyboards. Scenes can have anywhere from 1 to 6 panels, with 8 different arrangements to choose from. After having selected a panel layout, the creator is given a working space to add any text, items, or characters to each panel individually; to aid in this, the scripts from the Write Script section are shown alongside the working space. Characters can be sitting or standing, and be facing left, forward, or right. Everything is in black and white, see Figure~\ref{fig:pmstoryboard}. After having populated all of the panels and scenes, the creator can move on to create the final comic. \subsubsection{Create Final Comic} \label{subsec:CFC} Here, the creator has a working space, just as before, but this time there is a button for replacing the characters and items with their full-color counterparts, see~\autoref{fig:pmcfc}. Characters can further be customized to have up to ten different expressions, seven arm positions, four directions to face, and four leg positions. Items and characters can be stretched, rotated, and mirrored, and they are arranged according to layers within the working space. To add variety, background colors can be selected, various shapes are available, and creators can even upload their own SVG files to be included in their projects. \subsection{Collecting Learning Material from Experts} Experts on information security, organizational procedures, and resource allocation were consulted for this work. To begin developing comics, first it was necessary to find topics deemed important by the experts. We used a questionnaire of three items during phone interviews to find out common problems: \begin{enumerate} \item[Q1] {What are among the most common problems encountered?} \item[Q2] {What are the situations that surround the most common problems? What other choices can be made (either right or wrong)?} \item[Q3] {What is the context of events that occur before or after common problems (do not have to be directly related to the problems themselves), e.g., workload, time-frame, other tasks\ldots{} } \end{enumerate} The questionnaire was designed to allow the respondent to elucidate all aspects of insecure scenarios, including the proximal and ultimate causes of the problems, the situations that may surround problems, and the contexts surrounding problems that may (seemingly) have little or nothing to do with the problems themselves. \subsection{Developing Learning Material} \label{sec:developingscripts} The final list comprised five different learning topics. In order to create a comic with more branching paths, a second expert (Expert 2) was consulted for creating new ideas that elaborated on the aforementioned topics. Several methods were tried for collecting and organizing ideas and paths, but the best method proved to be a visual representation that displayed all the choices, such as a whiteboard, which was used to create the second comic. \subsection{The Comic Creation Process} After some trial and error, the following six phases ended up being the most effective method for creating complete and coherent comics.
\begin{enumerate} \item \textbf{Design}: Collect ideas from experts and colleagues, and turn them into viable scenes and choices that could be used in a storyboard. \item \textbf{Storyboard}: Scenes and choices are organized into a coherent story, especially so that they can be entered into the comic platform. \item \textbf{Comic Platform - Write Script}: The design and storyboard are now taken from the ``page'' and put online into the comic platform. \item \textbf{Comic Platform - Storyboard}: The story was arranged into a storyboard. For a full description see Sub-section~\ref{subsec:cpstoryboard}. \item \textbf{Comic Platform - Final Draft}: The final draft of the comic was created. For a full description, see Sub-section~\ref{subsec:CFC}. \item \textbf{Feedback on Final Draft}: After the comic was fully realized, it would be sent out for feedback once more, and that feedback would be taken to revise the comic one last time before it would be ready to be used. \end{enumerate} \subsection{Developing the Comics} \label{sec:comicdev} The five topic ideas were all made into their own dedicated comic, each covering a single topic: Backup and Restore (B\&R), Principle of Least Privilege (PLP), Password Management (PassM), Where to Share Data (WSD), and Patch Management (PatchM). Altogether, about 60 hours were spent working directly on the comics; indirectly, the process took approximately 8 weeks. For a breakdown of the amount of time spent working directly on each comic by phase, see \autoref{tab:comictime}. \input{tables/time.tex} After completing each phase of the comic creation process, the comics were sent to experts for feedback, and the responses were received up to a week later. Each round of feedback consisted of the following five feedback questions for readers: \begin{enumerate}[] \item How realistic are the scenes and choices? \item Is anything missing? \item Is anything wrong? \item Do you think they could work as teaching material? (e.g., in training) \item What do you think, in general? \end{enumerate} \label{fiveqs} \section{Evaluation} This section describes the context of the experiment, the participants, and our results. \subsection{Setting and Sampling} Upon completing the comics, the next step was to distribute them among employees of the company, who would evaluate them. Though this group of employees would be informed that any information they provided would be voluntary and anonymous, the authors also preferred that the participants' attention be engaged and that they have an incentive to complete the comics. This is why they were not sent out in emails, as was done for the feedback sessions, where it was likely that they would be ignored. Instead, it was considered better if the comics could be implemented alongside another training, giving the authors a so-called ``captive audience''. Such an occasion was identified in one of the company's many Capture the Flag (CTF) events. The comics were added to the CTF as an exercise in which points are earned by finding the ``correct'' story path leading to a flag. The event lasted two days; on the first day, there were 10 teams of two and one individual participating in the event. On the second day, there were 5 participants. The participants included Web Developers, personnel from Research and Development, Software Testers, and Project Managers. The participants' ages ranged from their 20's to their 50's (see \autoref{tab:pilotparticipants}). Capture the Flag events are, by their nature, competitive and fast-paced.
Given the structure of the CTF event, participants were given the incentive to complete the comics (and all the other events) for points, and to do so as quickly as possible. Participants were provided with a virtual machine that contained all of the tasks and objectives; this also allowed the moderators of the event to monitor their progress. Once an objective was successfully completed, participants were given a random string of digits; these ``flags'' could be exchanged for points. As is typical in a CTF event, the team with the most points wins the event and is given a small prize. As the comics were not initially part of the official CTF event, they had to be modified to include flags on the scenes that were considered to be ``correct''. \subsection{Results} Before the CTF, feedback was mostly positive; the following remarks were typical: ``Very realistic'', ``Try mentioning company policy'', ``Nothing wrong'', ``It could give an overall idea why backup is important'', and ``It is good''. This gives an indication that Requirement 3 is at least partially fulfilled. There was one instance of a colleague who found the comics to be indecipherable. He mentioned that he did not know which way to read the comics and even that he did not like comics. This point was noted because it was unexpected that someone would not like comics. However, his feedback does not affect any of our target requirements. Additional comments that recommended changes were also taken into account, and the corresponding necessary changes were made. Approximately 20 people participated in the CTF events and viewed the comics. On the first CTF day, the comics were given to the participants at the beginning of the event in the morning, as a sort of ``warm-up'' exercise. Participants needed to solve the comics in order to be able to start with the CTF challenges. This caused some negative feedback from the participants and some obvious stress, clearly indicating that Requirement 2 is not fulfilled. After the CTF event, which lasted 8 hours, a 10-minute feedback session was held. The same five feedback questions were asked as had been asked of the offline participants (see the listing in section~\ref{fiveqs}). Additionally, participants were told that any feedback they provided would be voluntary. Since distress in the feedback by the participants was noticeable (due to not being anonymous), the feedback strategy was changed for the second CTF day. For this second session, which took place the following day, the participants were asked to respond to the questions on a slip of paper. Additionally, they were asked to answer the following demographic questions: gender, age, whether they have read comics before, which kind, whether they like comics, and the field they work in. On the first day, the authors noted that the participants found the comics ``Okay'', but that the images themselves were largely ignored, as they were perceived to be unnecessary. One participant said that he would ``just read the answers and make his decision'', three other participants agreed with him, and many other participants seemed to concur. Another participant said that ``[i]t is normally the case that the visuals actually add to the learning experience''. Another participant added, ``In this case it seemed that the pictures were entirely unnecessary, a waste. We were able to skip right over them to read the responses and make our selections without looking at them. What if the answers were also pictures?
Because it took me out of the context to be looking at pictures then make a text selection''. This feedback makes clear that Requirements 3 and 4 are not fulfilled. When asked to volunteer information about their past experience with comics, about 3 of the remaining 10 participants indicated that they had read comics before; yet when the authors asked who was familiar with the Asterix series, all of the participants seemed to be familiar with it. The participant feedback for the second CTF day is shown in \autoref{tab:pilotparticipants}. Here we get a glimpse of the failed Requirements 1 and 4. At the first CTF event, the participants clearly gave feedback that the comics were perceived as disturbing the CTF game, as they consumed time to answer but did not add to the CTF fun. \input{tables/pilotparticipants.tex} \section{Conclusions} This study was conducted to (1) evaluate an online comic-creation platform and (2) analyze user perceptions of the resultant interactive graphical stories. These stories were implemented for security awareness as part of a capture the flag (CTF) challenge. While the comic creation process was tedious, it was mostly prolonged by the feedback process. The authors thought this could be streamlined, and in a conversation with the platform provider, they decided to implement a feature that would allow for feedback to be given on the platform itself. Since publication, this new feature has been implemented in an update to the platform. As for the CTF and feedback, we gained numerous valuable insights. First, the participants perceived the content of the comics to be realistic. Second, stories or vignettes in the form of comics were also considered a good idea for communicating complex ideas and for use as training material. The participants perceived the following elements as needing improvement: the images and stories were not seen as essential in the stressful situation (while under the time pressure of the CTF) and were therefore largely ignored. In particular, during a hacking competition, people tried to apply the hacking mindset to the comics and just tried all paths and possible answers without reading or understanding them. Their only goal was to have the right answer before the other teams; understanding the story was not the main goal. Thus, one of our main results is that the comics were not well-received, not because of the comics or the content itself, but rather because of the context in which they were implemented, namely a CTF event. Based on these insights we have identified the following recommendations: \begin{itemize} \item to save time, feedback can be limited to the initial and final stages of the content creation process \item comics, interactive or otherwise, should not be implemented during time-dependent (sensitive) activities such as during a CTF event \item it must be ensured that comics are used for training in non-stressful environments, such as during a break or other leisure time, to encourage a relaxed and even playful mood \item while they can be about serious matters, the comics can also be fun; we hypothesise that by being fun, they remain in the memory of the participants for a longer time \end{itemize} Scientific conferences largely present successful research. Unfortunately, the investigative journey presented in this work resulted in our requirements and goal not being achieved.
While these were not the results we were anticipating, it is all the more reason that we are glad to have this work published, so that future researchers can learn from our experiences. In particular, we consider the lessons learned to be important so that they can be taken into account when using comics as a form of IT security awareness training. \section*{Acknowledgements} The authors would like to thank all participants of the experiment for their time and their valuable insights and suggestions. The authors would also like to thank the providers of the comic platform. \bibliographystyle{IEEEtran} \footnotesize
\section{Introduction} Distributed computing by mobile computing entities has attracted much attention in the past two decades and many distributed system models have been considered, for example, the {\em autonomous mobile robot system}~\cite{SY99} modeled after cheap hardware robots with very weak capabilities, the {\em population protocol model}~\cite{AADFP06} motivated by delay tolerant networks, the {\em programmable particle model}~\cite{DDGRSS14} inspired by movement of amoebae, and the {\em metamorphic robotic system}~\cite{DSY04a,DSY04b} considering modular robots. The computational power of these distributed systems has been investigated in distributed computing theory and many fundamental problems have been proposed that require a degree of agreement among the mobile computing entities. Typical problems are {\em leader election}~\cite{DFSBRS15,DPV10}, which requires the entities to agree on a single entity, {\em gathering}~\cite{CFPS12,FPSW05,SY99}, which requires the entities to gather at a point not known a priori, and {\em shape formation}~\cite{DFSVY20,DSY04a,FYOKY15,SY99,YS10,YY13} (also called the transformability problem), which requires the entities to form a specified shape. These results are considered as theoretical foundations in several related areas like ad-hoc networks, sensor networks, robotics, molecular computing, chemical reaction circuits, and so on. In this paper, we focus on the autonomous mobile robot system. Let $\mathcal{R} = \{r_1, r_2, \ldots, r_n\}$ be a set of $n$ robots. Each robot is an {\em anonymous} (indistinguishable) point moving in the 2D space. The robots are {\em silent} (communication-less) and do not have access to a global coordinate system. A robot's behavior is a repetition of {\em Look-Compute-Move} cycles: in the {\em Look} phase, it observes the positions of the other robots within its visibility range; in the {\em Compute} phase, it computes its next position and a continuous route to the next position with a common deterministic algorithm; in the {\em Move} phase, it moves to the computed position along the computed route. The essential properties of mobile robot systems are the visibility range, obliviousness, and the timing and activation models. A robot $r_i$ is equipped with its own local coordinate system $\mathcal{Z}_i$, which is a right-handed $x$-$y$ coordinate system. The origin of $\mathcal{Z}_i$ is always the current position of $r_i$, while the unit distance and the directions and orientations of the $x$ and $y$ axes are arbitrary. The observation at $r_i$ is a snapshot in $\mathcal{Z}_i$ containing no additional information other than the positions of the robots. If the visibility range of a robot is unlimited, it can observe all robots; otherwise, it can observe only the robots within its visibility range. In {\em Compute}, when the input to the common algorithm is the snapshot taken in the preceding Look, we say the robots are {\em oblivious}. When the input includes past observations and computations, we say the robots are {\em non-oblivious}. In {\em Move}, the movement of a robot is {\em rigid} when the robot always reaches the next position, and {\em non-rigid} when the robot may stop en route after moving a minimum distance $\delta$ (in $\mathcal{Z}_0$); if the length of the route to the destination is smaller than $\delta$, the destination is reached.
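For example, with illustrative numbers only: if $\delta = 1$ (in $\mathcal{Z}_0$) and the computed route has length $3$, a non-rigid Move may end anywhere on the route after the robot has traveled a distance of at least $1$, while a route of length at most $1$ is always traversed to its end; under rigid movement, the robot reaches the destination in both cases.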
Three different types of timing and activation models have been proposed: In the {\em fully-synchronous model} (FSYNC), at each discrete time $t=0, 1, 2, \ldots$, all robots execute a Look-Compute-Move cycle with each of the Look, Compute, and Move completely synchronized. In the {\em semi-synchronous model} (SSYNC), at each discrete time $t=0, 1, 2, \ldots$, a non-empty subset of robots are activated and execute a Look-Compute-Move cycle with each of the Look, Compute, and Move completely synchronized. For fairness, we assume that each robot executes infinitely many cycles. In the {\em asynchronous model} (ASYNC), the robots do not have a common notion of time and the length of each cycle is arbitrary but finite. We also assume fairness in ASYNC. The main difference between SSYNC (thus, FSYNC) and ASYNC is that in SSYNC, all robots simultaneously take a snapshot in Look, while in ASYNC a robot may observe moving robots although the robot cannot recognize which robots are moving. The effect of obliviousness, asynchrony, and visibility on the computational power of autonomous mobile robot systems has been extensively investigated~\cite{DPV10,FPSW08,FYOKY15,SY99,YS10}. Since the only output by the oblivious robots is their geometric positions, a fundamental problem is the {\em pattern formation problem}, that requires the robots to form a target pattern from an initial configuration. Existing literature~\cite{FYOKY15,SY99,YS10}\footnote{ An erratum of \cite{FYOKY15} is available at \cite{FYOKY17}.} showed that the initial symmetry among the anonymous robots determines the set of formable patterns, irrespective of obliviousness and asynchrony. The only exception is the point formation problem of two robots, also called the {\em rendezvous problem}. In fact, Suzuki and Yamashita have shown that the rendezvous problem is solved by oblivious FSYNC robots, but cannot be solved by oblivious SSYNC robots~\cite{SY99}. In other words, the rendezvous problem demonstrates the difference between FSYNC and SSYNC (thus, ASYNC). These results consider the robots with unlimited visibility. Yamauchi and Yamashita have shown that limited visibility substantially shrinks the set of formable patterns by oblivious ASYNC robots because the robots do not know their global symmetry~\cite{YY13}. The robots can overcome the limits by distributed coordination or additional capabilities. Di Luna et al. have shown that a constant number of oblivious ASYNC robots can simulate a single non-oblivious ASYNC robot by encoding the memory contents to the geometric positions of the robots~\cite{DFSV18}. Das et al. have shown that oblivious ASYNC robots {\em with lights} can simulate oblivious SSYNC robots~\cite{DFPSY16}. A {\em luminous robot} is equipped with a light whose color is changed in every Look-Compute-Move cycle at the end of Compute and observed by other robots. The authors showed that luminous ASYNC robots with a constant number of colors can simulate an algorithm $\mathcal{A}$ designed for oblivious SSYNC robots. They presented a {\em synchronizer} that makes an activated robot accept or reject the current cycle so that the snapshot of an accepted cycle does not contain any moving robot. In an accepted cycle, the robot changes the color of its light to ``moving'' and moves to the next position computed by $\mathcal{A}$. The synchronizer guarantees fairness by making all robots wait with the ``waiting'' color after it accepts a cycle until all the other robots accept a cycle. 
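To fix intuition, the following is a purely schematic sketch (not the actual algorithm of~\cite{DFPSY16}) of how one robot's cycle could be organized around such colors; the acceptance test, which is the heart of the original synchronizer, is abstracted here into a predicate \texttt{cycle\_is\_safe}, and all names are our own. \begin{verbatim}
# Schematic sketch of the color-based synchronizer idea;
# not the actual algorithm of Das et al.
from enum import Enum

class Color(Enum):
    READY = "ready"      # eligible to accept a new cycle
    MOVING = "moving"    # announced: this robot is about to move
    WAITING = "waiting"  # done moving; waiting for all others to accept

def synchronized_cycle(snapshot, my_color, cycle_is_safe, compute_move):
    """One Look-Compute-Move cycle under a color-based synchronizer.
    `snapshot` is the list of (position, color) pairs observed in Look;
    `cycle_is_safe` abstracts the accept/reject test (e.g., no observed
    robot is colored MOVING); `compute_move` is the simulated SSYNC
    algorithm. Returns the new color and a destination (or None)."""
    colors = [c for (_, c) in snapshot]
    if my_color is Color.READY and cycle_is_safe(colors):
        # Accept the cycle: announce the move and execute the algorithm.
        return Color.MOVING, compute_move([p for (p, _) in snapshot])
    if my_color is Color.MOVING:
        # The move was completed in the accepted cycle; now wait.
        return Color.WAITING, None
    if my_color is Color.WAITING and all(c is not Color.READY for c in colors):
        # All visible robots have accepted a cycle; start a new round.
        return Color.READY, None
    # Reject the current cycle: keep the color and do not move.
    return my_color, None
\end{verbatim}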
All these techniques are heavily based on the fact that robots have unlimited visibility. In this paper, we investigate synchronization by oblivious ASYNC robots with limited visibility and we make some fundamental contributions. We start with a formal definition of simulation by mobile robots. A {\em configuration} is the set of positions of the robots in $\mathcal{Z}_0$ and an {\em execution} of algorithm $\mathcal{A}$ from an initial configuration $I$ is an infinite sequence of configurations. In SSYNC, an execution is a sequence of configurations $C_0(=I), C_1, C_2, \ldots$, where $C_t$ is the configuration at time $t$. An ASYNC execution is the sequence of configurations $C_{t_0}(=I), C_{t_1}, C_{t_2}, \ldots$, where each $t_i$ is a time instant at which at least one robot takes a snapshot and $t_i < t_{i+1}$ for all $i=0,1,2,\ldots$. Then, the {\em footprint} of a robot is the sequence of the positions of the robot in each configuration. We say that two executions $E$ and $E'$ (possibly in different timing and activation models) are {\em similar} when the footprints and local observations at the robots are identical. We then present a sufficient condition for an ASYNC execution to have a similar SSYNC execution, and we also show that the condition is necessary with probability $1$ under a randomized ASYNC adversary. The randomized impossibility result is novel, based on a Borel probability measure space for non-rigid movement and asynchronous observations, and it provides a stronger argument than a worst-case (deterministic) analysis. Our condition not only requires snapshots of static robots but also considers a chain of concurrent observations, which cannot be treated separately in an SSYNC execution. The transitive closure with respect to the concurrent observations forms an equivalence relation and the cycles of the ASYNC execution are decomposed into equivalence classes. We then introduce a ``happened-before'' relation among the equivalence classes based on the local happened-before relation at a single robot or a pair of visible robots. Our condition also requires the happened-before relation to form a directed acyclic graph, so that we can construct a similar SSYNC execution by applying the equivalence classes one by one in the order given by one of their topological sorts. We then present a synchronizer for oblivious ASYNC luminous robots to simulate an execution of an algorithm for oblivious (non-luminous) SSYNC robots satisfying those conditions, as well as some limitations of synchronizers that make use of visible lights. \noindent{\bf Related work.~} Existing literature established a rich class of distributed problems for mobile robot systems. The pattern formation problem~\cite{SY99} is one of the most important static problems, that is, the robots stop moving once they reach a terminal configuration. Suzuki and Yamashita showed the oblivious FSYNC robots can solve the rendezvous problem, while the oblivious SSYNC robots cannot~\cite{SY99}. Flocchini et al. further discussed the rendezvous problem to show the power of lights. A robot with {\em externally visible light} can change but cannot see the color of its own light, while the other robots can observe it. A robot with {\em internally visible light} can change and see the color of its own light, while the other robots cannot observe it. They showed that the ASYNC robots with externally visible lights can solve the rendezvous problem, while the SSYNC robots with internally visible lights cannot~\cite{FSVY16}.
To demonstrate the computational power of the luminous robots, many dynamic problems have been proposed. Das et al. proposed the {\em oscillating points} problem, which requires the robots to alternately come closer and go farther from each other~\cite{DFPSY16}. This problem shows the difference between luminous ASYNC robots and (non-luminous) FSYNC robots. Flocchini et al. examined the power of internally visible lights and that of externally visible lights in FSYNC and SSYNC with a variety of static problems such as {\em triangle rotation}, {\em center of gravity expansion}, and dynamic problems such as the {\em perpetual center of gravity expansion} and {\em shrinking rotation}~\cite{FSW19}. Synchronization was first presented in \cite{DFPSY16} to overcome the limit of the ASYNC robots with unlimited visibility. In this paper, we further investigate synchronization to demonstrate the difference between ASYNC and SSYNC with limited visibility. \noindent{\bf Organization.~} We provide detailed definitions of ASYNC and SSYNC executions and the similarity between two executions in Section~\ref{sec:preliminary}. We then provide a sufficient condition for an ASYNC execution to have a similar SSYNC execution, and investigate its necessity under a randomized ASYNC adversary in Section~\ref{sec:condition}. Section~\ref{sec:synchronizer} provides a synchronizer algorithm for oblivious luminous ASYNC robots that satisfy the necessary and sufficient conditions. We conclude our paper in Section~\ref{sec:conclusion}. \section{Preliminary} \label{sec:preliminary} We investigate a system $\mathcal{R}$ of $n$ anonymous oblivious mobile robots $\{r_1, r_2, \ldots, r_n \}$ in the 2D space. We use $r_i$ just for notation. We assume that at most one robot can occupy any given position at any time.\footnote{ We can remove this assumption with multiplicity detection capability.} We consider SSYNC and ASYNC as the {\em semi-synchronous scheduler} $\cal{SSYNC}$ and the {\em asynchronous scheduler} $\cal{ASYNC}$, respectively. We regard a scheduler as a set of schedules that it can produce. Consider an infinite execution $E$ of a deterministic algorithm $\mathcal{A}$ from an initial configuration $I$ under $\cal{ASYNC}$. Independently of $\mathcal{A}$ and $I$,\footnote{ Thus, $\cal{ASYNC}$ produces any schedule including the worst-case schedule for $\mathcal{A}$ and $I$.} $\cal{ASYNC}$ nondeterministically produces a schedule $\Omega$, which specifies for each $r_i$ when it is activated and executes Look-Compute-Move cycles. Formally, $\Omega$ is a set of schedules $\Omega_i$ for each robot $r_i$, where $\Omega_i$ is an infinite sequence of Look-Compute-Move cycles. The $j$th cycle $\omega_i(j)$ of $\Omega_i$ is denoted by a triple $(o_i(j), s_i(j), f_i(j))$, where $o_i(j)$, $s_i(j)$, and $f_i(j)$ are the time instants at which $r_i$ takes a snapshot in the Look, and starts and ends the Move, respectively. We assume that the time interval assigned to $\omega_i(j)$ is $[o_i(j), f_i(j)]$, and $o_i(j) < s_i(j) < f_i(j) < o_i(j+1)$ for all $j=1, 2, \ldots$. Schedule $\Omega$ is {\em fair} in the sense that each $\Omega_i$ satisfies that, for any $t \in \mathbf{R}^+$, there is a $j \in \mathbf{N}$ such that $o_i(j) > t$, where $\mathbf{R}^+$ and $\mathbf{N}$ are the sets of positive real numbers and non-negative integers, respectively. The execution is not uniquely determined by $I$, $\mathcal{A}$, and $\Omega$, due to the only source of non-determinism, namely the non-rigid movement of the robots.
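To make the notation concrete, here is a small illustrative schedule fragment; the time values are chosen arbitrarily. A schedule $\Omega_1$ for robot $r_1$ may begin with \[ \omega_1(1) = (1, 2, 4), \quad \omega_1(2) = (5, 5.5, 7), \quad \omega_1(3) = (8, 10, 11), \ldots, \] where, for example, $r_1$ takes its second snapshot at $o_1(2) = 5$, starts its Move at $s_1(2) = 5.5$, and ends it at $f_1(2) = 7 < o_1(3) = 8$. Any such sequence satisfying $o_1(j) < s_1(j) < f_1(j) < o_1(j+1)$ for all $j$, with activations beyond every finite time, is a valid fair schedule for $r_1$.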
The set of possible executions of $\mathcal{R}$ given $I$, $\mathcal{A}$, and $\Omega$ is denoted by $\mathcal{E}(\Omega, \mathcal{A}, I)$. The visibility range of each robot is the unit distance of the global coordinate system $\mathcal{Z}_0$.\footnote{The common visibility range does not imply a common unit distance among the robots.} The snapshot $P_i(j)$ taken by $r_i$ at $o_i(j)$ in $\omega_i(j)$ is the set of positions of robots in $\mathcal{Z}_i$ visible from $r_i$ at $o_i(j)$. $P_i(j)$ always contains its origin, because it is the position of $r_i$ in $\mathcal{Z}_i$. The number of robots visible from $r_i$ at $o_i(j)$ is denoted by $|P_i(j)|$. Let $P_i(E) = \{P_i(j) \mid j \in {\mathbf N} \}$ and $P(E) = \{P_i(E) \mid r_i \in \mathcal{R}\}$. The position of $r_i$ at $o_i(j)$ in the global coordinate system $\mathcal{Z}_0$ is denoted by $\pi_i(j)$. Note that $r_i$ cannot recognize its position $\pi_i(j)$. The {\em footprint} of $r_i$ in $E$ is $\Pi_i(E) = \{\pi_i(j) \mid j \in {\mathbf N} \}$ and let $\Pi(E) = \{\Pi_i(E) \mid r_i \in \mathcal{R}\}$ be the set of footprints of all robots of $\mathcal{R}$. Since the system is not rigid, $\pi_i(j+1)$ is not uniquely determined by $\pi_i(j)$, $P_i(j)$, $\mathcal{Z}_i$, $\mathcal{A}$, and $I$. Let $E \in \mathcal{E}(\Omega, \mathcal{A}, I)$ and $\tilde E \in \mathcal{E}(\tilde \Omega, \mathcal{A}, I)$. If the system is rigid, $\Pi(E) = \Pi(\tilde E)$ if $P(E) = P(\tilde E)$. Otherwise, for some $\Omega$, $\mathcal{A}$, and $I$, there are executions $E, \tilde E \in \mathcal{E}(\Omega, \mathcal{A}, I)$ such that $P(E) = P(\tilde E)$ and $\Pi(E) \neq \Pi(\tilde E)$. \begin{ex} When $n=1$, no matter where $r_1$ goes, $P_1(j) = \{(0,0)\}$. When $n=2$, suppose that $r_1$ and $r_2$ are initially at $(0,0)$ and $(0,1)$, respectively, and synchronously move in parallel along the $x$-axis of $\mathcal{Z}_0$ at the same speed. As long as their tracks are truncated at the same $x$-coordinate, independently of where they are truncated, $P_i(j) = P_i(j')$ for all $i \in \{1,2\}$ and $j, j' \in \mathbf{N}$. \end{ex} We say two executions $E \in \mathcal{E}(\Omega, \mathcal{A}, I)$ and $\tilde E \in \mathcal{E}(\tilde \Omega, \mathcal{A}, I)$ are {\em similar}, denoted by $E \sim \tilde E$, if $P(E) = P(\tilde E)$ and $\Pi(E) = \Pi(\tilde E)$. Without loss of generality, we assume that $\cal{SSYNC}$ produces a schedule $\Omega$ such that every cycle $\omega_i(j)$ has a form $(t, t+1/4, t+3/4)$ for some $t \in \mathbf{N}$. \section{ASYNC execution with a similar SSYNC execution} \label{sec:condition} We provide a necessary and sufficient condition for an ASYNC execution $E \in \mathcal{E}(\Omega, \mathcal{A}, I)$ to have an SSYNC execution $\tilde E \in \mathcal{E}(\tilde \Omega, \mathcal{A}, I)$ such that $P(E) = P(\tilde E)$ and $\Pi(E) = \Pi (\tilde E)$ for some $\tilde \Omega \in \cal{SSYNC}$. \subsection{Sufficiency} In an SSYNC execution, at each discrete time $t=0, 1, 2, \ldots$ the activated robots execute a Look-Compute-Move cycle with each of the three phases completely synchronized. From this definition, we directly obtain the following three assumptions on an ASYNC execution for it to have a corresponding SSYNC execution. (i) No robot observes other robots moving. (ii) When two robots observe each other, they are activated at the same time in the corresponding SSYNC execution. (iii) Due to limited visibility, the above ``mutually observed'' relationship must be considered transitively. We formally describe these assumptions.
Let $S_i(j)$ be the set of robots visible from $r_i$ at time $o_i(j)$. Then, $|S_i(j)| = |P_i(j)|$ and $r_i \in S_i(j)$. Recall that $r_i$ cannot recognize the correspondence between $S_i(j)$ and $P_i(j)$. We say that $E$ is stationary, if every snapshot $P_i(j)$ in $E$ is ``stationary'' in the sense that $r_i$ does not observe another robot $r_{i'}$ in its move phase. Formally, $E \in \mathcal{E}(\Omega, \mathcal{A}, I)$ is {\em stationary} if $o_i(j) \not\in (s_{i'}(j'), f_{i'}(j'))$ holds for any pair of cycles $\omega_i(j)$ and $\omega_{i'}(j') \in \Omega$ such that $r_{i'} \in S_i(j)$.\footnote{We exclude $s_{i'}(j')$ and $f_{i'}(j')$, because $r_{i'}$ is not moving at these time instants.} \begin{assumption} \label{ass1} We assume that $E$ is stationary. \end{assumption} Let $\omega_i(j)$ and $\omega_{i'}(j') \in \Omega$ be two cycles such that $i \neq i'$ and $o_i(j) \leq o_{i'}(j')$. If $[o_i(j), f_i(j)] \cap [o_{i'}(j'), f_{i'}(j')] \neq \emptyset$ and $r_i \in S_{i'}(j')$, we say that $\omega_i(j)$ and $\omega_{i'}(j')$ {\em overlap each other}. We say $\omega_i(j)$ and $\omega_{i'}(j')$ are {\em concurrent}, denoted by $\omega_i(j) \parallel \omega_{i'}(j')$, if one of the following conditions holds: \begin{enumerate} \item $i = i'$ and $j = j'$. \item $i \neq i'$, $o_i(j) \in (f_{i'}(j'-1), o_{i'}(j')]$, $o_{i'}(j') \in [o_i(j), s_i(j)]$, and $r_{i'} \in S_i(j)$ (thus, $r_i \in S_{i'}(j')$). \item $i \neq i'$, $o_{i'}(j') \in (f_i(j-1), o_i(j)]$, $o_i(j) \in [o_{i'}(j'), s_{i'}(j')]$, and $r_i \in S_{i'}(j')$ (thus, $r_{i'} \in S_i(j)$). \end{enumerate} The concurrency relation $\parallel$ is symmetric and reflexive, but is not always transitive. By definition, we have the following proposition. \begin{proposition} \label{prop:1} For any $i$, $j$, and $j'$, $\omega_i(j) \parallel \omega_i(j')$ if and only if $j=j'$. \end{proposition} If $i \neq i'$ and $\omega_i(j)$ and $\omega_{i'}(j')$ are concurrent, then they overlap each other. Moreover, both $r_i$ and $r_{i'}$ observe each other in $\omega_i(j)$ and $\omega_{i'}(j')$, respectively. If $\omega_{i}(j)$ and $\omega_{i'}(j')$ overlap each other, the two robots do not always observe each other. However, at least one of them observes the other. We say that $E$ is {\em pairwise aligned}, if two cycles $\omega_i(j)$ and $\omega_{i'}(j')$ that overlap each other are always concurrent. \begin{assumption} \label{ass2} We assume that $E$ is pairwise aligned. \end{assumption} Let $\stackrel{*}{\parallel}$ be the transitive closure of $\parallel$, which is an equivalence relation on $\Omega$. We abuse the term so that $\omega_i(j)$ and $\omega_{i'}(j')$ are {\em concurrent} if $\omega_i(j) \stackrel{*}{\parallel} \omega_{i'}(j')$. Since $\parallel$ is not always transitive, $\parallel \neq \stackrel{*}{\parallel}$ may hold. Let $\{\Omega_{0}, \Omega_{1}, \Omega_{2}, \ldots\}$ be the partition of $\Omega$ into equivalence classes with respect to $\stackrel{*}{\parallel}$. Intuitively, the cycles in each $\Omega_k$ must be executed at the same time in the corresponding SSYNC execution. However, because $\stackrel{*}{\parallel}$ is transitive, we need to consider observations, i.e., $S_i(j)$ and $S_{i'}(j')$ for each $\omega_i(j), \omega_{i'}(j') \in \Omega_k$. Let $dist(p,q)$ denote the Euclidean distance between two points $p$ and $q$ in $\mathcal{Z}_0$.
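The following small example, added for illustration with arbitrarily chosen positions, shows why $\parallel$ need not be transitive. Suppose $n = 3$ and robots $r_1$, $r_2$, and $r_3$ take their snapshots while located at $(0,0)$, $(1,0)$, and $(2,0)$ in $\mathcal{Z}_0$, respectively, so that $r_1$ and $r_2$ see each other, $r_2$ and $r_3$ see each other, but $dist(\pi_1(j_1), \pi_3(j_3)) = 2 > 1$ and hence $r_3 \not\in S_1(j_1)$. If the cycles $\omega_1(j_1)$, $\omega_2(j_2)$, and $\omega_3(j_3)$ satisfy the timing conditions above pairwise, then $\omega_1(j_1) \parallel \omega_2(j_2)$ and $\omega_2(j_2) \parallel \omega_3(j_3)$, while $\omega_1(j_1) \not\parallel \omega_3(j_3)$; nevertheless $\omega_1(j_1) \stackrel{*}{\parallel} \omega_3(j_3)$, so all three cycles belong to the same equivalence class.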
We say that $E$ is {\em consistent}, if the following conditions hold for any pair of cycles $\omega_i(j)$ and $\omega_{i'}(j')$ such that $\omega_i(j) \stackrel{*}{\parallel} \omega_{i'}(j')$: \begin{enumerate} \item $r_{i'} \in S_i(j)$ if and only if $r_i \in S_{i'}(j')$. \item If $r_{i'} \in S_i(j)$, or equivalently $r_i \in S_{i'}(j')$, $\omega_i(j) \parallel \omega_{i'}(j')$. \item If $r_{i'} \not\in S_i(j)$, or equivalently $r_i \not\in S_{i'}(j')$, $dist(\pi_i(j), \pi_{i'}(j')) > 1$. \end{enumerate} \begin{assumption} \label{ass3} We assume that $E$ is consistent. \end{assumption} The following proposition is an extension of Proposition~\ref{prop:1}. \begin{proposition} \label{prop:2} Suppose that $E$ is stationary, pairwise aligned, and consistent. For any $i$, $j$, and $j'$, $\omega_{i}(j) \stackrel{*}{\parallel} \omega_{i}(j')$ if and only if $j=j'$. \end{proposition} \begin{proof} If $j = j'$, $\omega_i(j) \stackrel{*}{\parallel} \omega_i(j')$ for any $i$. Otherwise, suppose that $\omega_i(j) \stackrel{*}{\parallel} \omega_i(j')$ holds. Since $r_i \in S_i(j')$ for any $i$ and $j'$, $\omega_i(j) \parallel \omega_i(j')$ by the consistency, which is a contradiction by Proposition~\ref{prop:1}. Thus, if $j \neq j'$, then $\omega_i(j) \not\stackrel{*}{\parallel} \omega_i(j')$. \qed \end{proof} Let $\omega_i(j), \omega_{i'}(j') \in \Omega$ be two cycles. We say that $\omega_i(j)$ happens {\em immediately before} $\omega_{i'}(j')$, denoted by $\omega_i(j) \rightarrow \omega_{i'}(j')$, if one of the following conditions holds: \begin{enumerate} \item $i' = i$ and $j' = j+1$. \item $i' \neq i$, $r_i \in S_{i'}(j')$ and $o_{i'}(j') \in (f_i(j), s_i(j+1)]$. \item $i' \neq i$, $r_{i'} \in S_i(j)$ and $f_{i'}(j'-1) < o_i(j) < f_i(j) < o_{i'}(j')$. \end{enumerate} We may call $\rightarrow$ a ``happened-before'' relation on $\Omega$, because $\omega_i(j) \rightarrow \omega_{i'}(j')$ denotes the fact that $\omega_i(j)$ happens before $\omega_{i'}(j')$. Thus, $\rightarrow$ is neither reflexive nor symmetric. It is worth emphasizing that in general, $\rightarrow$ is not transitive either, because, like $\parallel$, it is defined based on the visibility relation between robots. Note that if $\omega_i(j) \rightarrow \omega_{i'}(j')$, then $\omega_i(j) \not\parallel \omega_{i'}(j')$, because they do not overlap each other. Note also that either $r_i \in S_{i'}(j')$ or $r_{i'} \in S_i(j)$ holds, but generally not both. \begin{proposition} \label{prop:3} Suppose that $E$ is stationary, pairwise aligned, and consistent. No equivalence class $\Omega_k$ of $\Omega$ contains cycles $\omega_i(j)$ and $\omega_{i'}(j')$ such that $\omega_i(j) \rightarrow \omega_{i'}(j')$. \end{proposition} \begin{proof} Let $\omega_i(j)$ and $\omega_{i'}(j')$ be any cycles in $\Omega_k$, i.e., $\omega_i(j) \stackrel{*}{\parallel} \omega_{i'}(j')$. We assume that $\omega_i(j) \rightarrow \omega_{i'}(j')$. If $i' = i$, then $j' = j$ by Proposition~\ref{prop:2}, which is a contradiction. Thus, $i' \neq i$. Since $\omega_i(j) \rightarrow \omega_{i'}(j')$, either $r_i \in S_{i'}(j')$ or $r_{i'} \in S_i(j)$ holds. This implies $\omega_i(j) \parallel \omega_{i'}(j')$ by the consistency. This is a contradiction. \qed \end{proof} Let $\Omega_k$ and $\Omega_{k'}$ be two equivalence classes of $\Omega$. If there are cycles $\omega_i(j) \in \Omega_k$ and $\omega_{i'}(j') \in \Omega_{k'}$ such that $\omega_i(j) \rightarrow \omega_{i'}(j')$, we use the notation $\Omega_k \Rightarrow \Omega_{k'}$.
By Proposition~\ref{prop:3}, the binary relation $\Rightarrow$ is not reflexive. We say that $E$ is {\em serializable} if the infinite graph $\mathcal{G} = (\{\Omega_0, \Omega_1, \Omega_2, \ldots\}, \Rightarrow)$ is acyclic. \begin{assumption} \label{ass4} We assume that $E$ is serializable. \end{assumption} While $\rightarrow$ can be considered as a ``happened-before'' relation on cycles, it is not adequate to consider $\Rightarrow$ as a ``happened-before'' relation on the equivalence classes. There can be cycles $\omega_i(j) \in \Omega_k$ and $\omega_{i'}(j') \in \Omega_{k'}$ such that $f_{i'}(j') < o_i(j)$ even if $\Omega_k \Rightarrow \Omega_{k'}$. In fact, $\Omega_k \Rightarrow \Omega_{k'}$ and $\Omega_k \Leftarrow \Omega_{k'}$ may hold at the same time. There is a stationary, pairwise aligned, and consistent execution which is not serializable. The following proposition is clear by definition. \begin{proposition} \label{prop:4} Suppose that $E$ is stationary, pairwise aligned, consistent, and serializable. If $\omega_i(j)$ and $\omega_i(j')$ are in an equivalence class $\Omega_k$ for some $i$, $j$, and $j'$, then $j=j'$. That is, each $\Omega_k$ contains at most one cycle for every robot $r_i$. \end{proposition} Let $E \in \mathcal{E}(\Omega, \mathcal{A}, I)$ be an execution of algorithm $\mathcal{A}$ from initial configuration $I$ under a schedule $\Omega \in \cal{ASYNC}$ and assume that $E$ is stationary, pairwise aligned, consistent, and serializable. Let $\mathcal{G} = (\{\Omega_0, \Omega_1, \Omega_2, \ldots\}, \Rightarrow)$ be the acyclic graph obtained from $E$. Without loss of generality, let $T = (\Omega_0, \Omega_1, \Omega_2, \ldots)$ be a topological sort of $\mathcal{G}$. Let $A_k = \{r_i \mid \omega_i(j) \in \Omega_k \ \text{for some $j$}\}$, keeping in mind that $\omega_i(j)$ is unique for each $r_i \in A_k$ by Proposition~\ref{prop:4}. We construct a schedule $\tilde{\Omega} \in \cal{SSYNC}$ from $T$. Intuitively, $\tilde{\Omega}$ activates all robots in $A_k$ at time $k$ for all $k \in \mathbf{N}$. Formally, $\tilde{\Omega} = \{\tilde{\omega}_i(j) \mid i=1,2,\ldots,n, \ j \in \mathbf{N} \}$, where $\tilde{\omega}_i(j) = (\tilde{o}_i(j), \tilde{s}_i(j), \tilde{f}_i(j)) = (k, k+1/4, k+3/4)$ if $\omega_i(j) \in \Omega_k$ for $k \in \mathbf{N}$. Obviously, $\tilde{\Omega}$ is well-defined and $\tilde{\Omega} \in \cal{SSYNC}$. Consider $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$. Let $\tilde{S}_i(j)$ and $\tilde{P}_i(j)$ be the set of robots visible from $r_i$ at $\tilde{o}_i(j)$ and the snapshot that $r_i$ takes at $\tilde{o}_i(j)$, respectively. Then, $P(\tilde{E}) = \{\tilde{P}_i(j) \mid i = 1,2, \ldots, n, \ \text{and} \ j \in \mathbf{N}\}$. Let $\tilde{\pi}_i(j)$ be the position of $r_i$ in $\mathcal{Z}_0$ at $\tilde{o}_i(j)$. Then, $\Pi(\tilde{E}) = \{\tilde{\pi}_i(j) \mid i = 1, 2, \ldots, n, \ \text{and} \ j \in {\mathbf{N}}\}$. Finally, we need to examine each pair of cycles $\omega_i(j)$ and $\omega_{i'}(j')$ assigned to different discrete times in $\tilde{\Omega}$. Thus, $\omega_i(j) \not\stackrel{*}{\parallel} \omega_{i'}(j')$. Consider the case where $\omega_i(j) \in \Omega_k$ and $\omega_{i'}(j') \in \Omega_{k''}$ for $k < k''$ and $r_{i'}$ executes no cycle during $\Omega_{k}, \Omega_{k+1}, \ldots, \Omega_{k''-1}$. Thus, $r_{i'}$ does not move during the time period $[k, k''-1]$ in $\tilde{\Omega}$. If $r_i$ observes $r_{i'}$ in $\omega_i(j)$, $\omega_i(j)$ and $\omega_{i'}(j')$ do not overlap each other.
Otherwise, $\Omega_k$ would contain $\omega_{i'}(j')$. If $r_i$ does not observe $r_{i'}$ in $\omega_i(j)$, $dist(\pi_i(j), \pi_{i'}(j')) > 1$ must hold. Otherwise, $r_{i'}$ would need to move during the time period $[k, k''-1]$ in $\tilde{\Omega}$. We describe this situation with the notion of naturality. For any $\omega_i(j)$ and $i'(\neq i)$, there is a $j'$ such that $k' \leq k < k''$, where $\omega_i(j) \in \Omega_k$, $\omega_{i'}(j'-1) \in \Omega_{k'}$, and $\omega_{i'}(j') \in \Omega_{k''}$.\footnote{We consider $k' = -1$ if $j' = 1$, since $\omega_{i'}(0)$ is not defined.} A topological sort $T = (\Omega_0, \Omega_1, \Omega_2, \ldots)$ is {\em natural} if the following conditions hold for any pair of such cycles $\omega_i(j)$ and $\omega_{i'}(j')$: \begin{enumerate} \item Suppose that $r_{i'} \in S_i(j)$. Thus $\omega_i(j) $ and $\omega_{i'}(j')$ do not overlap each other. Then $o_i(j) < o_{i'}(j')$. \item Suppose that $r_{i'} \not\in S_i(j)$. Then $dist(\pi_i(j), \pi_{i'}(j')) > 1$. \end{enumerate} We say that $E$ is {\em natural} if $E$ has a natural topological sort. \begin{assumption} \label{ass5} We assume that $E$ is natural. \end{assumption} \begin{theorem} \label{theorem:sufficiency} If an execution $E \in \mathcal{E}(\Omega, \mathcal{A}, I)$ satisfies Assumptions~\ref{ass1}, \ref{ass2}, \ref{ass3}, \ref{ass4}, and \ref{ass5}, there is an execution $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$ for some $\tilde{\Omega} \in \cal{SSYNC}$ such that $E \sim \tilde{E}$, i.e., $P(E) = P(\tilde{E})$ and $\Pi(E) = \Pi(\tilde{E})$. \end{theorem} \begin{proof} We construct an execution $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$ as follows: if $r_i$ moves in $\omega_i(j)$ from $\pi_i(j)$ to $\pi_i(j+1)$ in $E$, we move $r_i$ in $\tilde{\omega}_i(j)$ from $\pi_i(j)$ to $\pi_i(j+1)$ to construct $\tilde{E}$ so that it satisfies the conditions. We guarantee its feasibility by showing that $P_i(j) = \tilde{P}_i(j)$ and $\pi_i(j) = \tilde{\pi}_i(j)$. Indeed, if $P_i(j) = \tilde{P}_i(j)$, $\pi_i(j) = \tilde{\pi}_i(j)$, and $r_i$ can move from $\pi_i(j)$ to $\pi_i(j+1)$ during $\omega_i(j)$ in $E$, then it can move to the same position in $\tilde{\omega}_i(j)$. Suppose that a topological sort $T = (\Omega_0, \Omega_1, \Omega_2, \ldots)$ is natural. It suffices to show the following claim for all $k=0, 1, 2, \ldots$: for any robot $r_i \in A_k$, $S_i(j) = \tilde{S}_i(j)$, $P_i(j) = \tilde{P}_i(j)$, $\pi_i(j) = \tilde{\pi}_i(j)$, and $\pi_i(j+1) = \tilde{\pi}_i(j+1)$ hold. The proof is by induction on $k$. \noindent{\bf Base case ($k=0$).~} Let $r_i \in A_0$ be any robot. Then, $\omega_i(j) \in \Omega_0$ holds for some $j$ and obviously $j=1$. We show that $S_i(1) = \tilde{S}_i(1)$. Recall that $\tilde{o}_i(1) = 0$. Let $r_{i'}$ be any robot. Suppose that $r_{i'} \in A_0$, i.e., $\omega_{i'}(1) \in \Omega_0$ and $\omega_i(1) \stackrel{*}{\parallel} \omega_{i'}(1)$. If $r_{i'} \in S_i(1)$, then $\omega_i(1) \parallel \omega_{i'}(1)$ by the consistency, which implies $r_{i'} \in \tilde{S}_i(1)$. Otherwise, i.e., $r_{i'} \not\in S_i(1)$, $r_i \not\in S_{i'}(1)$, and thus $dist(\pi_i(1), \pi_{i'}(1)) > 1$ by the consistency. Thus, $r_{i'} \not\in \tilde{S}_i(1)$. Next, suppose that $r_{i'} \not\in A_0$. Then, $\omega_{i'}(1) \in \Omega_{k'}$ for some $k' > 0$. If $r_{i'} \in S_i(1)$, then $o_i(1) < o_{i'}(1)$ by the naturality. Since $r_i$ and $r_{i'}$ do not move until $o_i(1)$, $r_{i'} \in \tilde{S}_i(1)$. Otherwise, i.e., $r_{i'} \not\in S_i(1)$, $dist(\pi_i(1), \pi_{i'}(1)) > 1$ by the naturality.
Thus, $r_{i'} \not\in \tilde{S}_i(1)$. Since $E$ and $\tilde{E}$ start with the same initial configuration $I$, for any $r_i \in A_0$, $\pi_i(1) = \tilde{\pi}_i(1)$, and hence $P_i(1) = \tilde{P}_i(1)$ since $S_i(1) = \tilde{S}_i(1)$. Since $\pi_i(2)$ is reachable from $\pi_i(1)$ by algorithm $\mathcal{A}$ when the snapshot is $P_i(1)$, $\tilde{\pi}_i(2)(= \pi_i(2))$ is reachable from $\tilde{\pi}_i(1)(= \pi_i(1))$ by algorithm $\mathcal{A}$ when the snapshot is $\tilde{P}_i(1)(= P_i(1))$. Thus, to construct $\tilde{E}$, we move $r_i$ to $\tilde{\pi}_i(2)(= \pi_i(2))$ from $\tilde{\pi}_i(1)(= \pi_i(1))$ in $\tilde{\omega}_i(1)$. \noindent{\bf Induction step.~} Assume that the claim holds for all $0 \leq k < K$; we show that the claim holds for $k=K$. Let $r_i \in A_K$ be any robot such that $\omega_i(j) \in \Omega_K$. If $\omega_i(j-1) \in \Omega_k$, then $k < K$ and $\pi_i(j) = \tilde{\pi}_i(j)$ by the induction hypothesis. Indeed, for all $i'$ and $j'$, if $\omega_{i'}(j') \in \Omega_{k}$ for some $k \leq K$, $\pi_{i'}(j') = \tilde{\pi}_{i'}(j')$ by the induction hypothesis. Consider any robot $r_{i'} \in S_i(j)$. Since $E$ is stationary, $r_{i'}$ is not in its Move phase at $o_i(j)$. Suppose $o_i(j) \in (f_{i'}(j'), o_{i'}(j'+1))$ for some $j'$. Then, $\omega_{i'}(j') \rightarrow \omega_i(j)$ and $\Omega_{K'} \Rightarrow \Omega_K$, where $\omega_{i'}(j') \in \Omega_{K'}$ for some $K' < K$. Since (i) $\pi_i(j) = \tilde{\pi}_i(j)$, (ii) $\pi_{i'}(j'+1) = \tilde{\pi}_{i'}(j'+1)$ because $K' < K$, and (iii) the position of $r_{i'}$ (in $\mathcal{Z}_0$) at $o_i(j)$ is $\pi_{i'}(j'+1)$, we have $r_{i'} \in \tilde{S}_i(j)$ and $p_{i'} = \tilde{p}_{i'}$, where $p_{i'}$ and $\tilde{p}_{i'}$ are the positions of $r_{i'}$ (in $\mathcal{Z}_i$) in $P_i(j)$ and $\tilde{P}_i(j)$, respectively. Here we use the fact that either $\omega_i(j) \parallel \omega_{i'}(j'+1)$ or $\omega_i(j) \rightarrow \omega_{i'}(j'+1)$ holds, and $K \leq K''$ holds, where $\omega_{i'}(j'+1) \in \Omega_{K''}$. Suppose otherwise that $o_i(j) \in [o_{i'}(j'), s_{i'}(j')]$ for some $j'$. Then $\omega_i(j) \parallel \omega_{i'}(j')$ and $\omega_{i'}(j') \in \Omega_K$. Let $\omega_i(j-1) \in \Omega_k$ and $\omega_{i'}(j'-1) \in \Omega_{k'}$. Then, $k, k' < K$. Thus, $\pi_{i'}(j') = \tilde{\pi}_{i'}(j')$ by the induction hypothesis. Since $\pi_{i'}(j')$ is the position of $r_{i'}$ at $o_i(j)$ and $\pi_i(j) = \tilde{\pi}_i(j)$, we have $r_{i'} \in \tilde{S}_i(j)$ and $p_{i'} = \tilde{p}_{i'}$. Finally, consider any robot $r_{i'} \not\in S_i(j)$ to confirm $S_i(j) = \tilde{S}_i(j)$. If there is a $j'$ such that $\omega_i(j) \stackrel{*}{\parallel} \omega_{i'}(j')$, then $dist(\pi_i(j), \pi_{i'}(j')) > 1$ by the consistency. This implies that $r_{i'} \not\in \tilde{S}_i(j)$ by the induction hypothesis. Otherwise, $\omega_i(j) \not\stackrel{*}{\parallel} \omega_{i'}(\ell)$ for all $\ell \in \mathbf{N}$. Let $j'$ be an integer such that $K' < K < K''$, where $\omega_i(j) \in \Omega_K$, $\omega_{i'}(j'-1) \in \Omega_{K'}$, and $\omega_{i'}(j') \in \Omega_{K''}$. By the naturality, $dist(\pi_i(j), \pi_{i'}(j')) > 1$, which implies that $r_{i'} \not\in \tilde{S}_i(j)$ by the induction hypothesis. Since $\pi_i(j) = \tilde{\pi}_i(j)$, the positions of the robots in $P_i(j)$ and $\tilde{P}_i(j)$ (both in $\mathcal{Z}_i$) are the same, i.e., $P_i(j) = \tilde{P}_i(j)$. For the same reason as in the base case, to construct $\tilde{E}$, we move $r_i$ to $\tilde{\pi}_i(j+1) (=\pi_i(j+1))$ from $\tilde{\pi}_i(j) (= \pi_i(j))$ in $\tilde{\omega}_i(j)$.
\qed \end{proof} \subsection{Necessity} The conjunction of the conditions in Assumptions~\ref{ass1}, \ref{ass2}, \ref{ass3}, \ref{ass4}, and \ref{ass5} is a sufficient condition for an ASYNC execution $E \in \mathcal{E}(\Omega, \mathcal{A}, I)$ to have a similar SSYNC execution for some $\tilde{\Omega} \in \cal{SSYNC}$. However, it is not necessary in general. Suppose that $\mathcal{A}$ does not move any robot. Then, the robot system stays at its initial configuration $I$ forever, and every execution $E$ has a similar SSYNC execution $\tilde{E}$, regardless of whether or not $E$ satisfies each of the five assumptions. We will show the necessity of the five assumptions assuming a randomized adversary that determines non-rigid movement and when moving robots are observed. Let $\tau_i(j)$ in $\mathcal{Z}_0$ be the route of $r_i$ computed by $\mathcal{A}$ in $\omega_i(j)$ given snapshot $P_i(j)$ in $\mathcal{Z}_i$ as input. We assume that $\tau_i(j)$ is a simple curve such that $|\tau_i(j)| > \delta$, where a curve is said to be {\em simple} if it does not intersect itself. Recall that $r_i$ never stops en route if $|\tau_i(j)| \leq \delta$. Otherwise, $r_i$ travels an arbitrary initial part $\hat{\tau}_i(j)$ of $\tau_i(j)$ such that $|\hat{\tau}_i(j)| \geq \delta$ at an arbitrary (possibly variable) speed during Move in $\omega_i(j)$. Thus, another robot $r_{i'}$ can observe $r_i$ at any position $y$ in $\hat{\tau}_i(j)$ (thus, $\tau_i(j)$) at $o_{i'}(j')$ if $r_i \in S_{i'}(j')$ and $o_{i'}(j') \in (s_i(j), f_i(j))$. Intuitively, we assume that $\hat{\tau}_i(j)$ satisfying $|\hat{\tau}_i(j)| \geq \delta$ is chosen ``uniformly at random'' and that $y$ is chosen ``uniformly at random'' from $\tau_i(j)$ (provided that $\hat{\tau}_i(j) = \tau_i(j)$). Then, we show that if an execution $E \in \mathcal{E}(\Omega, \mathcal{A}, I)$ has a similar execution $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$ for some $\tilde{\Omega} \in \cal{SSYNC}$, then $E$ satisfies each of the five conditions in Assumptions~\ref{ass1}, \ref{ass2}, \ref{ass3}, \ref{ass4}, and \ref{ass5} with ``probability'' $1$. \subsubsection{Stationarity} For a fixed algorithm $\mathcal{A}$ and initial configuration $I$, we show that the stationarity is necessary. To make the argument simple, we assume that the system is rigid. Then, $E \in \mathcal{E}(\Omega, \mathcal{A}, I)$ is uniquely determined by $\Omega \in \cal{ASYNC}$ if $E$ is stationary. Consider any schedule $\Omega \in \cal{ASYNC}$ that contains a unique pair of cycles $\omega_i(j)$ and $\omega_{i'}(j')$ such that $o_i(j) \in (s_{i'}(j'), f_{i'}(j'))$. Let $\tau_{i'}(j')$ be the route (in $\mathcal{Z}_0$) that $\mathcal{A}$ at $r_{i'}$ computes given $P_{i'}(j')$ (in $\mathcal{Z}_{i'}$). Then, $E \in \mathcal{E}(\Omega, \mathcal{A}, I)$ is uniquely determined by the position $y \in \tau_{i'}(j')$ of $r_{i'}$ at $o_i(j)$ (regardless of whether or not $y$ is visible from $r_i$ at $o_i(j)$). We normalize $y$ by $z = |\hat{\tau}_{i'}(j')|/|\tau_{i'}(j')| \in [0,1]$, where $\hat{\tau}_{i'}(j')$ is the prefix of $\tau_{i'}(j')$ up to (and including) $y$. Then, there is a one-to-one correspondence between $y \in \tau_{i'}(j')$ and $z \in [0,1]$, because $\tau_{i'}(j')$ is a simple curve. We use the notation $E(z)$ to emphasize that $E$ is determined by $z$. We use a Borel measurable space $([0,1], \mathcal{B}([0,1]))$, where $\mathcal{B}([0,1])$ is the Borel $\sigma$-algebra on $[0,1]$, i.e., the smallest $\sigma$-algebra containing all open intervals in $[0,1]$.
Then, the probability measure $\lambda(\cdot)$ for each element of $\mathcal{B}([0,1])$ is the (1-dimensional) Lebesgue probability measure on $[0,1]$, which satisfies, for all intervals $(a,b)$ where $0 \leq a < b \leq 1$, $\lambda((a,b)) = b-a$. Thus, $[a,a]$ and $[a,a] \cup [b,b]$ are $\lambda$-null sets. In the following, we consider a probability space $([0,1], \mathcal{B}([0,1]), \lambda)$. Let $\mathcal{Y} = \{y \in \tau_{i'}(j') \mid dist(y, \pi_i(j)) \leq 1\}$ be the set of positions $y$ in $\tau_{i'}(j')$ visible from $r_i$ at $o_i(j)$, and let $\mathcal{D}$ be the set of $z$'s corresponding to the $y$'s in $\mathcal{Y}$. Then, $E(z)$ is not stationary if and only if $z \in \mathcal{D}$. Since $\tau_{i'}(j')$ is continuous, $\mathcal{D} \in \mathcal{B}([0,1])$. We assume that the probability that $z \in \mathcal{D}$ is $\lambda(\mathcal{D})$. Thus $\lambda(\mathcal{D}) = 0$ means that, with probability $1$, $r_{i'}$ is not visible from $r_i$ at $o_i(j)$, and hence the stationarity is not violated (since we assume that the value of $z$ is chosen uniformly at random from $[0,1]$). Here and in the rest of this section, whenever we say that an execution violates a condition, we assume that the condition is violated with positive probability. That is, we assume here that $\lambda(\mathcal{D}) > 0$; the violation occurs with a positive probability. \begin{lemma} \label{lemma1} Suppose that $\lambda(\mathcal{D}) > 0$. Then, $E(z) \not\sim \tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$ for any $\tilde{\Omega} \in \cal{SSYNC}$ $\lambda$-almost everywhere on $\mathcal{D}$, i.e., $E(z)$ does not have a similar SSYNC execution $\tilde{E}$ for all $z \in \mathcal{D}$, except for a countable number of exceptions. \end{lemma} \begin{proof} Clearly, $E(z) \not\sim E(z')$ if $z \neq z'$ provided $z, z' \in \mathcal{D}$. Thus, $\{E(z) \mid z \in \mathcal{D}\}$ is uncountable since $\lambda(\mathcal{D}) > 0$. On the other hand, $\cal{SSYNC}$ is a countable set. Since the system is rigid, $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$ is uniquely determined by $\tilde{\Omega} \in \cal{SSYNC}$ (because $\mathcal{A}$ and $I$ are fixed), so $\{\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I) \mid \tilde{\Omega} \in \cal{SSYNC}\}$ is countable. Thus, only for a countable number of executions $E(z)$, there are similar SSYNC executions $\tilde{E}$. \qed \end{proof} The above lemma states the following: If the probability that $r_{i'}$ is visible from $r_i$ at $o_i(j)$ is not $0$, then the probability that $E$ has a similar SSYNC execution $\tilde{E}$ is $0$, under the condition that $r_{i'}$ is indeed visible from $r_i$ at $o_i(j)$ and the stationarity is indeed violated. We extend Lemma~\ref{lemma1} to the case in which $\Omega$ contains more than one pair of cycles $\omega_i(j)$ and $\omega_{i'}(j')$ such that $o_i(j) \in (s_{i'}(j'), f_{i'}(j'))$. We order the pairs $W_h = (\omega_i(j), \omega_{i'}(j'))$ of cycles in the increasing order of $o_i(j)$, where a tie is resolved arbitrarily. Using the concepts and notation above, let $y_h$ and $z_h$ be the position of $r_{i'}$ in $\tau_{i'}(j')$ at $o_i(j)$ and its normalization, respectively. Let $\mathcal{D}_h$ be the set of $z_h$'s such that $y_h$ is visible from $\pi_i(j)$, i.e., $r_{i'}$ is visible from $r_i$ at $o_i(j)$. Note that $\mathcal{D}_h$ depends on some of $z_1, z_2, \ldots, z_{h-1}$ in general.
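The single-pair setup above admits a simple computational illustration (a hypothetical sketch outside the paper's formalism; the polyline route representation and the function names are our assumptions): the route $\tau_{i'}(j')$ is taken to be a polyline, the uniformly chosen $z$ is mapped to the observed position $y$, and $\lambda(\mathcal{D})$ is estimated by Monte Carlo sampling.
\begin{verbatim}
import random
import math

def point_on_route(route, z):
    # Map the normalized parameter z in [0,1] to the point at arc
    # length z * |route| on a polyline route (a list of (x, y)
    # vertices).  This mirrors the one-to-one correspondence between
    # y on tau and z in [0,1] for a simple curve.
    lengths = [math.dist(route[k], route[k + 1])
               for k in range(len(route) - 1)]
    target = z * sum(lengths)
    for (p, q), seg in zip(zip(route, route[1:]), lengths):
        if target <= seg:
            t = target / seg if seg > 0 else 0.0
            return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
        target -= seg
    return route[-1]

def estimate_lambda_D(route, observer, trials=100000):
    # Monte Carlo estimate of lambda(D): the probability that a
    # uniformly chosen z places the moving robot within visibility
    # range 1 of the observing robot at o_i(j).
    hits = sum(math.dist(point_on_route(route, random.random()),
                         observer) <= 1.0
               for _ in range(trials))
    return hits / trials
\end{verbatim}
A positive estimate indicates that the stationarity is violated with positive probability, which is the situation treated by Lemma~\ref{lemma1}.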
Since the system is rigid, $E \in \mathcal{E}(\Omega, \mathcal{A}, I)$ is uniquely determined by $Z = (z_1, z_2, \ldots)$. We use the notation $E(Z)$ to emphasize this fact. Suppose that $\lambda(\mathcal{D}_h) > 0$ in $E(Z)$ when $z_i = a_i$ for $i = 1, 2, \ldots , h-1$. By assumption, the value of $z_h$ is chosen uniformly at random from $[0,1]$. If $z_h$ takes a value in $\mathcal{D}_h$, then by Lemma~\ref{lemma1}, regardless of the values chosen for $z_{h+1}, z_{h+2}, \ldots$, $E(Z) \not\sim \tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$ for any $\tilde{\Omega} \in \cal{SSYNC}$ $\lambda$-almost everywhere on $\mathcal{D}_h$, i.e., $E(Z)$ does not have a similar SSYNC execution $\tilde{E}$ for all $z_h \in \mathcal{D}_h$, except for a countable number of exceptions. Thus, if $E$ violates the stationarity, it is unlikely that $E$ has a similar SSYNC execution (even for a rigid system). In the following, we therefore assume that $E$ is stationary, and investigate the non-rigid system. \subsubsection{Pairwise alignment} To treat the non-rigid system, we use an infinite product measure space $([0,1]^{\infty}, \mathcal{B}^{\infty}([0,1]), \lambda^{\infty})$. Here $[0,1]^{\infty}$ is the Cartesian product of a countable infinity of copies of $[0,1]$. The family of events $\mathcal{B}^{\infty}([0,1])$ is the Cartesian product of a countable infinity of copies of the Borel $\sigma$-algebra $\mathcal{B}([0,1])$, which is the $\sigma$-algebra generated by all sets in $[0,1]^{\infty}$ represented by a finite union of cylinders, where a cylinder is a set in $[0,1]^{\infty}$ of the form $B_1 \times B_2 \times \cdots \times B_n \times \prod_{i = n+1}^{\infty} [0,1]$ for some natural number $n$ and $B_i \in \mathcal{B}([0,1])$ for all $i = 1, 2, \ldots, n$. Finally, $\lambda^{\infty}(\cdot)$ is the product measure on $([0,1]^{\infty}, \mathcal{B}^{\infty}([0,1]))$, which is defined by $\lambda^{\infty}(B) = \prod_{i = 1}^{\infty} \lambda(B_i)$, for all $B = B_1 \times B_2 \times \cdots \in \mathcal{B}^{\infty}([0,1])$.\footnote{ We can construct $\lambda^{\infty}$ by the Kolmogorov extension theorem in the same manner as Theorem 2.4.4 of the book~\cite{T11} by Tao.} Let $\Phi$ be a property (i.e., predicate) on $[0,1]^{\infty}$ and let $\Gamma = \{X \mid \Phi(X) \ \text{is true}\} \in \mathcal{B}^{\infty}([0,1])$. If $\mathcal{D} \setminus \Gamma$ is a $\lambda^{\infty}$-null set for $\mathcal{D} \in \mathcal{B}^{\infty}([0,1])$, we say that $\Phi$ holds {\em $\lambda^{\infty}$-almost everywhere} on $\mathcal{D}$, which means that $\Phi$ holds with probability $1$ under $\lambda^{\infty}$ on $\mathcal{D}$. We associate a random variable $y_m$ with the $m$th dimension of $[0,1]^{\infty}$. If in $\mathcal{D} \in \mathcal{B}^{\infty}([0,1])$, there is a finite set $\{i_1, i_2, \ldots, i_k\} \subset {\mathbf N}$ such that $y_{i_k}$ is uniquely determined by $y_{i_1}, y_{i_2}, \ldots, y_{i_{k-1}}$, then $\lambda^{\infty}(\mathcal{D}) = 0$. Let $\Omega \in \cal{ASYNC}$ be any schedule and consider $E \in \mathcal{E}(\Omega, \mathcal{A}, I)$. Suppose that in cycle $\omega_i(j)$, a robot $r_i$, which is located at $\pi_i(j)$ (in $\mathcal{Z}_0$) at time $o_i(j)$, takes a snapshot $P_i(j)$ (in $\mathcal{Z}_i$), computes a route $\tau_i(j)$ (in $\mathcal{Z}_0$) by $\mathcal{A}$, and moves along $\tau_i(j)$ in its Move. Since the system is not rigid, it may stop en route after tracing an initial part $\hat{\tau}_i(j)$ of $\tau_i(j)$ such that $|\hat{\tau}_i(j)| \geq \delta$.
Then, the end point of $\hat{\tau}_i(j)$ is the position $\pi_i(j+1)$ of $r_i$ when $\omega_i(j+1)$ starts. Thus, $\hat{\tau}_i(j)$ may affect the rest of the execution, including $\pi_i(j+1)$ and $\tau_i(j+1)$. The initial part $\hat{\tau}_i(j)$ can be denoted by a real number $z_i(j) \in [0,1]$, i.e., $z_i(j)$ represents $\hat{\tau}_i(j)$ such that $|\hat{\tau}_i(j)| = |\tau_i(j)| z_i(j) - \delta(z_i(j)-1)$ (so that $z_i(j) = 0$ corresponds to $|\hat{\tau}_i(j)| = \delta$ and $z_i(j) = 1$ to $|\hat{\tau}_i(j)| = |\tau_i(j)|$). Here $z_i(j)$ is well-defined, since $\tau_i(j)$ is a simple curve and $|\tau_i(j)| > \delta$. Since the system is non-rigid, any value $z_i(j) \in [0,1]$ can occur, as assumed. More precisely, each of the $z_i(j)$'s takes a value in $[0,1]$ uniformly at random, and the probability that $z_i(j) \in B$ is $\lambda(B)$ for any $B \in \mathcal{B}([0,1])$. We order the $z_i(j)$'s in the increasing order of $o_i(j)$, where a tie is broken arbitrarily, and fix the ordering. By associating the $m$th $z_i(j)$ with the $m$th variable $z^{(m)}$, we identify $Z = \{z_i(j) \mid r_i \in \mathcal{R}, j \in {\mathbf N}\}$ with an infinite vector $(z^{(1)}, z^{(2)}, \ldots) \in [0,1]^{\infty}$. Then, $E$ is determined uniquely by $Z \in [0,1]^{\infty}$ since $E$ is stationary. We use the notation $E(Z)$ to emphasize that $E$ is determined by $Z$. It is easy to observe that $Z = Z'$ if and only if $E(Z) \sim E(Z')$. Suppose that $E(Z) \in \mathcal{E}(\Omega, \mathcal{A}, I)$ contains a unique triple of cycles $\omega_i(j)$, $\omega_{i'}(j')$, and $\omega_{i'}(j'+1)$ such that $\omega_i(j)$ and $\omega_{i'}(j'+\ell)$ overlap each other and $r_i$ and $r_{i'}$ are mutually visible for $\ell = 0,1$. Thus, $E(Z)$ does not satisfy the pairwise alignment condition. Let $\mathcal{D}$ be the set of $Z$'s such that $E(Z)$ does not satisfy the pairwise alignment condition. We assume $\lambda^{\infty}(\mathcal{D}) > 0$ for the same reason mentioned immediately above Lemma~\ref{lemma1}. \begin{lemma} \label{lemma2} Suppose that $\lambda^{\infty}(\mathcal{D}) > 0$. Then, $E(Z)$ does not have a similar execution $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$ for any $\tilde{\Omega} \in \cal{SSYNC}$ $\lambda^{\infty}$-almost everywhere on $\mathcal{D}$. \end{lemma} \begin{proof} Let $\Gamma$ be the set of vectors $Z \in \mathcal{D}$ such that $E(Z)$ has a similar SSYNC execution $\tilde{E}$. We show that $\lambda^{\infty}(\Gamma) = 0$. Suppose that $E(Z)$ has a similar SSYNC execution $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$ for some $\tilde{\Omega} \in \cal{SSYNC}$. Let $k = \tilde{o}_i(j)$, $k' = \tilde{o}_{i'}(j')$, and $k'' = \tilde{o}_{i'}(j'+1)$, where $k' < k''$. We have the following four cases: (i) $k' < k \leq k''$, (ii) $k' \leq k < k''$, (iii) $k' < k'' < k$, and (iv) $k < k' < k''$. \noindent{\bf When $k' < k \leq k''$.~} At $o_i(j)$, $r_i$ is at $\pi_i(j)$ and $r_{i'}$ is at $\pi_{i'}(j')$ in $E(Z)$, and at $k = \tilde{o}_i(j)$, $r_i$ is at $\tilde{\pi}_i(j) (=\pi_i(j))$ and $r_{i'}$ is at $\tilde{\pi}_{i'}(j'+1) (= \pi_{i'}(j'+1))$ in $\tilde{E}$. Since $P_i(j) = \tilde{P}_i(j)$ must hold while $\pi_{i'}(j') \neq \pi_{i'}(j'+1)$ (because $r_{i'}$ moves at least distance $\delta$ and $\tau_{i'}(j')$ is simple), $E(Z)$ does not have a similar SSYNC execution $\tilde{E}$. \noindent{\bf When $k' \leq k < k''$.~} By the argument above, $E(Z)$ does not have a similar SSYNC execution $\tilde{E}$ unless $k'=k$. Let $\ell \geq 1$ be the minimum integer such that $\tilde{o}_i(j+\ell) \geq k''$.
At $o_{i'}(j'+1)$, $r_i$ is at $\pi_i(j)$ and $r_{i'}$ is at $\pi_{i'}(j'+1)$ in $E(Z)$, and at $k'' = \tilde{o}_{i'}(j'+1)$, $r_i$ is at $\tilde{\pi}_i(j+\ell) (= \pi_i(j+\ell))$ and $r_{i'}$ is at $\tilde{\pi}_{i'}(j'+1) (= \pi_{i'}(j'+1))$ in $\tilde{E}$. Since $P_{i'}(j'+1) = \tilde{P}_{i'}(j'+1)$, $\pi_i(j) = \pi_i(j+\ell)$. That is, $z_i(j+\ell-1)$ is uniquely determined by $\pi_i(j+\ell-1)$. (If there is no such $z_i(j+\ell-1)$, $E(Z)$ does not have a similar SSYNC execution $\tilde{E}$.) Thus, $\lambda^{\infty}(\Gamma) = 0$. \noindent{\bf When $k' < k'' < k$.~} Let $\ell$ be the minimum integer such that $\tilde{o}_{i'}(j'+ 1 + \ell) \geq k$. By the same argument as above, we have $\pi_{i'}(j') = \pi_{i'}(j'+1+\ell)$. That is, $z_{i'}(j' + \ell)$ must be uniquely determined by $\pi_{i'}(j'+\ell)$. (If there is no such $z_{i'}(j'+\ell)$, $E(Z)$ does not have a similar SSYNC execution $\tilde{E}$.) Thus, $\lambda^{\infty}(\Gamma) = 0$. \noindent{\bf When $k < k' < k''$.~} By the same argument as above, we have $\lambda^{\infty}(\Gamma) = 0$. This completes the proof. \qed \end{proof} In order for $Z$ to be in $\mathcal{D}$, $E(Z)$ needs to satisfy both $r_{i'} \in S_i(j)$ and $r_i \in S_{i'}(j'+1)$, or equivalently, $dist(\pi_i(j), \pi_{i'}(j')) \leq 1$ and $dist(\pi_i(j), \pi_{i'}(j'+1)) \leq 1$. However, unlike the proof for stationarity, $\lambda^{\infty}(\mathcal{D}) > 0$ does not follow in general, since there may be a pair of an algorithm $\mathcal{A}$ and an initial configuration $I$ such that for any $Z \in [0,1]^{\infty}$, in $E(Z)$, either $r_{i'} \not\in S_i(j)$ or $r_i \not\in S_{i'}(j'+1)$ holds. If $\lambda^{\infty}(\mathcal{D}) = 0$, $E(Z)$ satisfies the pairwise alignment condition with probability $1$, and we do not consider $\mathcal{E}(\Omega, \mathcal{A}, I)$ as an instance that does not satisfy the pairwise alignment condition, even though there is a $Z \in [0,1]^{\infty}$ such that $r_{i'} \in S_i(j)$ and $r_i \in S_{i'}(j'+1)$ in $E(Z)$. Note that the same claim as Lemma~\ref{lemma2} holds for $E(Z)$ such that there is more than one triple $\omega_i(j)$, $\omega_{i'}(j')$, and $\omega_{i'}(j'+1)$ that violates the pairwise alignment condition. Suppose that $E(Z) \in \mathcal{E}(\Omega, \mathcal{A}, I)$ contains a pair of cycles $\omega_i(j)$ and $\omega_{i'}(j')$ such that $\omega_i(j) \parallel \omega_{i'}(j')$. Let $\mathcal{D}$ be the set of $Z$'s such that $E(Z)$ satisfies this condition. \begin{proposition} \label{prop6} Suppose that $\lambda^{\infty}(\mathcal{D}) > 0$. If $E(Z)$ has a similar execution $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$ for some $\tilde{\Omega} \in \cal{SSYNC}$, then $\tilde{o}_i(j) = \tilde{o}_{i'}(j')$ $\lambda^{\infty}$-almost everywhere on $\mathcal{D}$. \end{proposition} \begin{proof} Let $\Gamma$ be the set of vectors $Z \in \mathcal{D}$ such that there is a $\tilde{\Omega} \in \cal{SSYNC}$ satisfying \begin{enumerate} \item $\tilde{o}_i(j) \neq \tilde{o}_{i'}(j')$, and \item $E(Z) \sim \tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$. \end{enumerate} Then, by the same argument as in the proof of Lemma~\ref{lemma2}, we have $\lambda^{\infty}(\Gamma) = 0$.
\qed \end{proof} \subsubsection{Consistency} Throughout this section, we assume that $E(Z) \in \mathcal{E}(\Omega, \mathcal{A}, I)$ contains a pair of cycles $\omega_i(j)$ and $\omega_{i'}(j')$ such that $\omega_i(j) \stackrel{*}{\parallel} \omega_{i'}(j')$, and that $E(Z)$ has a similar execution $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$ for some $\tilde{\Omega} \in \cal{SSYNC}$. Let $\mathcal{D}$ be the set of $Z$'s such that $E(Z)$ satisfies this condition. \begin{lemma} \label{lemma3} Suppose that $\lambda^{\infty}(\mathcal{D}) > 0$. If $E(Z) \in \mathcal{E}(\Omega, \mathcal{A}, I)$, which contains a pair of cycles $\omega_i(j)$ and $\omega_{i'}(j')$ such that $\omega_i(j) \stackrel{*}{\parallel} \omega_{i'}(j')$, has a similar execution $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$ for some $\tilde{\Omega} \in \cal{SSYNC}$, then $\tilde{o}_i(j) = \tilde{o}_{i'}(j')$ $\lambda^{\infty}$-almost everywhere on $\mathcal{D}$. \end{lemma} \begin{proof} Since $\omega_i(j) \stackrel{*}{\parallel} \omega_{i'}(j')$, there are cycles $\omega_{i_{\ell}}(j_{\ell})$ ($\ell = 0, 1, \ldots, m$) such that $(i_0, j_0) = (i,j)$ and $(i_m, j_m) = (i', j')$, and $\omega_{i_{\ell-1}}(j_{\ell-1}) \parallel \omega_{i_{\ell}}(j_{\ell})$ for all $\ell = 1, 2, \ldots, m$. By Proposition~\ref{prop6}, if $E(Z)$ has a similar SSYNC execution $\tilde{E}$, then, for $\ell = 1, 2, \ldots, m$, $\tilde{o}_{i_{\ell-1}}(j_{\ell-1}) = \tilde{o}_{i_{\ell}}(j_{\ell})$ $\lambda^{\infty}$-almost everywhere on $\mathcal{D}$, which implies that $\tilde{o}_i(j) = \tilde{o}_{i'}(j')$ $\lambda^{\infty}$-almost everywhere on $\mathcal{D}$. \qed \end{proof} \begin{lemma} \label{lemma4} Suppose that $\lambda^{\infty}(\mathcal{D}) > 0$. If $E(Z)$ has a similar execution $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$ for some $\tilde{\Omega} \in \cal{SSYNC}$, then $r_{i'} \in S_i(j)$ if and only if $r_i \in S_{i'}(j')$, $\lambda^{\infty}$-almost everywhere on $\mathcal{D}$. \end{lemma} \begin{proof} Since $\lambda^{\infty}(\mathcal{D}) > 0$, $E(Z)$ has a similar execution $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$ for some $\tilde{\Omega} \in \cal{SSYNC}$ such that $\tilde{o}_i(j) = \tilde{o}_{i'}(j')$ $\lambda^{\infty}$-almost everywhere on $\mathcal{D}$ by Lemma~\ref{lemma3}. Suppose that $o_{i'}(j') < o_i(j)$. Let $\omega_{i'}(j'+ \ell'-1)$ (resp. $\omega_i(j-\ell-1)$) be the cycle of $r_{i'}$ (resp. $r_i$) such that $\pi_{i'}(j'+\ell')$ (resp. $\pi_i(j-\ell)$) is the position of $r_{i'}$ (resp. $r_i$) at $o_i(j)$ (resp. $o_{i'}(j')$). Since $\pi_i(j) = \tilde{\pi}_i(j)$, $\pi_i(j-\ell) = \tilde{\pi}_i(j-\ell)$, $\pi_{i'}(j') = \tilde{\pi}_{i'}(j')$, and $\pi_{i'}(j'+\ell') = \tilde{\pi}_{i'}(j'+\ell')$, we have $\pi_i(j) = \tilde{\pi}_i(j-\ell)$ and $\pi_{i'}(j') = \tilde{\pi}_{i'}(j' + \ell')$, provided that $\tilde{o}_i(j) = \tilde{o}_{i'}(j')$. Obviously, the set of $Z \in \mathcal{D}$ satisfying $\pi_i(j) = \pi_i(j-\ell)$ (resp. $\pi_{i'}(j') = \pi_{i'}(j' + \ell')$) is a $\lambda^{\infty}$-null set when $\ell > 0$ (resp. $\ell' > 0$). Thus, $r_{i'} \in S_i(j)$ if and only if $r_i \in S_{i'}(j')$ $\lambda^{\infty}$-almost everywhere on $\mathcal{D}$. The case $o_{i'}(j') > o_i(j)$ is analogous and the case $o_{i'}(j') = o_i(j)$ is trivial. \qed \end{proof} By the proof of the above lemma, we have the following corollary. \begin{corollary} \label{corl2} Suppose that $\lambda^{\infty}(\mathcal{D}) > 0$.
If $E(Z)$ has a similar execution $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$ for some $\tilde{\Omega} \in \cal{SSYNC}$, and $r_{i'} \in S_i(j)$ and $r_i \in S_{i'}(j')$ hold, then $\omega_i(j) \parallel \omega_{i'}(j')$ $\lambda^{\infty}$-almost everywhere on $\mathcal{D}$. \end{corollary} \begin{lemma} \label{lemma5} Suppose that $\lambda^{\infty}(\mathcal{D}) > 0$. If $E(Z)$ has a similar execution $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$ for some $\tilde{\Omega} \in \cal{SSYNC}$, and $r_{i'} \not\in S_i(j)$ and $r_i \not\in S_{i'}(j')$ hold, then $dist(\pi_i(j), \pi_{i'}(j')) > 1$ $\lambda^{\infty}$-almost everywhere on $\mathcal{D}$. \end{lemma} \begin{proof} Since $\lambda^{\infty}(\mathcal{D}) > 0$, $E(Z)$ has a similar execution $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$ for some $\tilde{\Omega} \in \cal{SSYNC}$, such that $\tilde{o}_i(j) = \tilde{o}_{i'}(j')$ $\lambda^{\infty}$-almost everywhere on $\mathcal{D}$ by Lemma~\ref{lemma3}. If $dist(\pi_i(j), \pi_{i'}(j')) \leq 1$, then, since $\pi_i(j) = \tilde{\pi}_i(j)$, $\pi_{i'}(j') = \tilde{\pi}_{i'}(j')$, and $\tilde{o}_i(j) = \tilde{o}_{i'}(j')$ $\lambda^{\infty}$-almost everywhere on $\mathcal{D}$, there is a robot $r_{i''}$ and a cycle $\omega_{i''}(j'')$ such that $r_{i''}$ is at position $\pi_{i'}(j')$ at time $o_i(j)$, since $P_i(j) = \tilde{P}_i(j)$. Such an event does not occur $\lambda^{\infty}$-almost everywhere on $\mathcal{D}$. Thus, $dist(\pi_i(j), \pi_{i'}(j')) > 1$ $\lambda^{\infty}$-almost everywhere on $\mathcal{D}$. \qed \end{proof} \subsubsection{Serializability} \begin{lemma} \label{lemma6} If $E(Z) \in \mathcal{E}(\Omega, \mathcal{A}, I)$, which contains a pair of cycles $\omega_i(j)$ and $\omega_{i'}(j')$ such that $\omega_i(j) \rightarrow \omega_{i'}(j')$, has a similar execution $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$ for some $\tilde{\Omega} \in \cal{SSYNC}$, then $\tilde{o}_i(j) < \tilde{o}_{i'}(j')$ $\lambda^{\infty}$-almost everywhere on $\mathcal{D}$. \end{lemma} \begin{proof} If $i = i'$, since $j < j'$, $\tilde{o}_i(j) < \tilde{o}_{i'}(j')$ by definition. Suppose that $i \neq i'$. Then, $r_i \in S_{i'}(j')$ and $o_{i'}(j') \in (f_i(j), o_i(j+1))$. Then, by a similar argument to that in the proof of Lemma~\ref{lemma2}, $E(Z)$ does not have a similar execution $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$ for any $\tilde{\Omega} \in \cal{SSYNC}$ such that $\tilde{o}_i(j) > \tilde{o}_{i'}(j')$, $\lambda^{\infty}$-almost everywhere on $\mathcal{D}$. \qed \end{proof} The following corollary immediately holds by Lemma~\ref{lemma3} and Lemma~\ref{lemma6}. \begin{corollary} If $E(Z)$ does not satisfy the serializability, then $E(Z)$ does not have a similar execution $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$ for any $\tilde{\Omega} \in \cal{SSYNC}$ $\lambda^{\infty}$-almost everywhere on $\mathcal{D}$. \end{corollary} \subsubsection{Naturality} Recall that $\mathcal{G} = (\{\Omega_0, \Omega_1, \ldots\}, \Rightarrow)$ and the set $\mathcal{T}$ of topological sorts of $\mathcal{G}$ are determined by execution $E(Z) \in \mathcal{E}(\Omega, \mathcal{A}, I)$. We sometimes associate $E(Z)$ with $\mathcal{G}$ and $\mathcal{T}$ to emphasize that they are uniquely determined by $E(Z)$. Let $\mathcal{TS}(E(Z)) (\subset \cal{SSYNC})$ be the set of schedules constructed from topological sorts in $\mathcal{T}(E(Z))$.
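As an illustration (a hypothetical sketch under the assumption that finitely many equivalence classes and the $\Rightarrow$ edges are given explicitly; the names are ours), a topological sort of $\mathcal{G}$, and hence the round structure of one schedule in $\mathcal{TS}(E(Z))$, can be computed with Kahn's algorithm:
\begin{verbatim}
from collections import deque

def topological_sort(classes, edges):
    # One topological sort of G = ({Omega_0, Omega_1, ...}, =>),
    # computed with Kahn's algorithm.  `classes` is a list of class
    # ids and `edges` maps a class to the classes it precedes under
    # =>.  The k-th class in the returned order gives the cycles that
    # fire simultaneously in round k of the SSYNC schedule.
    indegree = {c: 0 for c in classes}
    for c in classes:
        for d in edges.get(c, ()):
            indegree[d] += 1
    queue = deque(c for c in classes if indegree[c] == 0)
    order = []
    while queue:
        c = queue.popleft()
        order.append(c)
        for d in edges.get(c, ()):
            indegree[d] -= 1
            if indegree[d] == 0:
                queue.append(d)
    if len(order) != len(classes):
        raise ValueError("=> has a cycle: the execution is not serializable")
    return order
\end{verbatim}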
By Lemma~\ref{lemma3} and Lemma~\ref{lemma6}, if $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$ is similar to $E(Z) \in \mathcal{E}(\Omega, \mathcal{A}, I)$, then $\tilde{\Omega} \in \mathcal{TS}(E(Z))$ $\lambda^{\infty}$-almost everywhere on $\mathcal{D}$, provided that $\lambda^{\infty}(\mathcal{D}) > 0$. Here $\mathcal{D}$ is the set of $U \in [0,1]^{\infty}$ such that $\mathcal{G}(E(U)) = \mathcal{G}(E(Z))$. Consider any schedule $\tilde{\Omega} \in \mathcal{TS}(E(Z))$. Suppose that there is a pair of cycles $\omega_i(j)$ and $\omega_{i'}(j')$ such that $k' \leq k < k''$, where, under $\tilde{\Omega}$, $\tilde{o}_i(j) = k$, $\tilde{o}_{i'}(j'-1) = k'$, and $\tilde{o}_{i'}(j') = k''$. Obviously, $\omega_i(j) \not\stackrel{*}{\parallel} \omega_{i'}(j')$. First assume that $r_{i'} \in S_i(j)$ in $E(Z)$, and let $\mathcal{D'} \subseteq \mathcal{D}$ be the set of $Z \in \mathcal{D}$ such that $E(Z)$ satisfies this condition. \begin{lemma} \label{lemma7} Suppose that $\lambda^{\infty}(\mathcal{D'}) > 0$. If $E(Z)$ has a similar execution $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$, then $o_i(j) < o_{i'}(j')$ $\lambda^{\infty}$-almost everywhere on $\mathcal{D'}$. \end{lemma} \begin{proof} By definition, $r_{i'}$ is at position $\tilde{\pi}_{i'}(j')$ at time $\tilde{o}_i(j) = k$ in $\tilde{E}$. Since $\omega_i(j) \not\stackrel{*}{\parallel} \omega_{i'}(j')$, $o_i(j) \neq o_{i'}(j')$. Suppose that $o_i(j) > o_{i'}(j')$. There is a cycle $\omega_{i'}(j'+\ell')$ such that $r_{i'}$ is at $\pi_{i'}(j' + \ell')$ at $o_i(j)$ for some $\ell' \geq 1$. Since $\pi_i(j) = \tilde{\pi}_i(j)$, $P_i(j) = \tilde{P}_i(j)$, and $r_{i'} \in S_i(j)$, there is a robot $r_{i''}$ (which may be $r_{i'}$) and a cycle $\omega_{i''}(j'')$ such that $\tilde{\pi}_{i''}(j'') = \pi_{i'}(j' + \ell')$. Thus, $E(Z)$ does not have a similar execution $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$ $\lambda^{\infty}$-almost everywhere on $\mathcal{D'}$. \qed \end{proof} Next assume that $r_{i'} \not\in S_i(j)$ in $E(Z)$, and let $\mathcal{D}'' \subseteq \mathcal{D}$ be the set of $Z \in \mathcal{D}$ such that $E(Z)$ satisfies this condition. \begin{lemma} \label{lemma8} Suppose that $\lambda^{\infty}(\mathcal{D''}) > 0$. If $E(Z)$ has a similar execution $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$, then $dist(\pi_i(j), \pi_{i'}(j')) > 1$ $\lambda^{\infty}$-almost everywhere on $\mathcal{D''}$. \end{lemma} \begin{proof} Suppose that $dist(\pi_i(j), \pi_{i'}(j')) \leq 1$. Since $r_{i'}$ is at $\tilde{\pi}_{i'}(j')$ at $\tilde{o}_i(j) = k$ in $\tilde{E}$, $\tilde{\pi}_{i'}(j') = \pi_{i'}(j')$, and $\tilde{\pi}_i(j) = \pi_i(j)$, we have $r_{i'} \in \tilde{S}_i(j)$. Then, there is a robot $r_{i''}$ for some $i'' (\neq i)$ and a cycle $\omega_{i''}(j'')$ such that $r_{i''}$ is at $\pi_{i'}(j')$ at $o_i(j)$ in $E(Z)$, since $\tilde{P}_i(j) = P_i(j)$. Thus, $E(Z)$ does not have a similar execution $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$ $\lambda^{\infty}$-almost everywhere on $\mathcal{D''}$. \qed \end{proof} Consequently, we have the following theorem. \begin{theorem} \label{theorem:necessity} Each of the five properties, stationarity, pairwise alignment, consistency, serializability, and naturality, is necessary for an execution $E(Z) \in \mathcal{E}(\Omega, \mathcal{A}, I)$ to have a similar execution $\tilde{E} \in \mathcal{E}(\tilde{\Omega}, \mathcal{A}, I)$ for some schedule $\tilde{\Omega} \in \cal{SSYNC}$ with probability $1$.
\end{theorem} \section{Luminous synchronizer for ASYNC robots} \label{sec:synchronizer} In this section, we present a synchronizer for oblivious luminous ASYNC robots that produces ASYNC executions satisfying Assumptions~\ref{ass1}, \ref{ass2}, \ref{ass3}, \ref{ass4}, and \ref{ass5}. When the robots are not equipped with lights, a robot cannot recognize which robots are moving. We compensate for this weak capability with a single light at each robot.\footnote{ Remember that the color of a light is changed at the end of a Compute, and it is kept until the end of the Compute of the next cycle.} Let $C$ be the set of colors that a light can take. Each light initially takes Black ($Bk \in C$). When robot $r_i$ takes a snapshot $Q_i$ at time $t$, $Q_i$ is the set of pairs $(p_{i'}, c_{i'})$ for each $r_{i'}$ visible from $r_i$ at $t$, where $p_{i'}$ is the position of $r_{i'}$ at $t$ in $\mathcal{Z}_i$ and $c_{i'}$ is the color of $r_{i'}$'s light at $t$. Let $P(Q_i) = \{p_{i'} \mid (p_{i'}, c_{i'}) \in Q_i\}$, and $C(Q_i) = \{c_{i'} \mid (p_{i'}, c_{i'}) \in Q_i\}$. That is, $P(Q_i)$ is the set of positions occupied by robots visible from $r_i$ in $\mathcal{Z}_i$, and $C(Q_i)$ is the set of colors visible from $r_i$. Recall that $r_i$ is aware of its color, i.e., the color of its light $c_i$, since $p_i=(0,0)$ and the robots occupy distinct points. For an initial configuration $I$ of the system of non-luminous robots, let $\hat{I} = \{(p, Bk) \mid p \in I\}$ be an initial configuration of the system of luminous robots. By definition, $P(\hat{I}) = I$ and $C(\hat{I}) = \{Bk\}$. We now define a luminous synchronizer $\mathcal{S}$ on a robot $r_i$ under any schedule $\Omega \in \cal{ASYNC}$. Given any algorithm $\mathcal{A}$ and initial configuration $I$ for non-luminous robots, the initial configuration for $\mathcal{S}$ is the corresponding configuration $\hat{I}$. Luminous synchronizer $\mathcal{S}$ on $r_i$ inhibits $r_i$'s motion {\em on the fly} in some cycles so that the resulting execution has a similar SSYNC execution. Precisely, $\mathcal{S}$ works as follows: In a cycle $\omega = (o, s, f)$ of robot $r$, $r$ takes a snapshot $Q$ at time $o$ in Look. In Compute, depending on $Q$, $\mathcal{S}$ on $r$ first decides whether or not it ``accepts'' $\omega$, and then computes a move route $\tau$. If it accepts $\omega$, $\tau$ is the one that $\mathcal{A}$ computes given $P(Q)$; otherwise, if it ``rejects'' $\omega$, $\tau$ is the point $(0,0)$. Finally, it decides a color $c \in C$ and the color of $r$'s light is changed to $c$ at the end of Compute. Thus, the color $c$ is visible from other robots at $s$ and thereafter. In Move, $r$ traces $\tau$ but it may stop en route after moving distance $\delta$. The set of executions $F$ of $\mathcal{S}$ for $\mathcal{A}$ and $I$ under schedule $\Omega$ is denoted by $\mathcal{E}(\Omega, \mathcal{S}(\mathcal{A}), \hat{I})$. Let $\Lambda$ be the set of cycles accepted by $\mathcal{S}$ in $F$. Note that $\Lambda$ depends on $F$. From $F$ (and $\Lambda$), we can construct an execution $\check{F} \in \mathcal{E}(\Lambda, \mathcal{A}, I)$ for non-luminous robots by first extracting the behaviors of the robots for cycles in $\Lambda$ and then ignoring the colors of lights. Since the next position is computed from $P(Q)$ (not from $Q$) and the robots do not change their positions in rejected cycles, indeed $\check{F} \in \mathcal{E}(\Lambda, \mathcal{A}, I)$.
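The Compute step of $\mathcal{S}$ described above can be summarized by the following sketch (hypothetical: the acceptance rule, the color rule, and the algorithm are passed in as functions, since the generic synchronizer $\mathcal{S}$ is left unspecified here):
\begin{verbatim}
def synchronizer_compute(snapshot_Q, accept, choose_color, algorithm_A):
    # One Compute step of a luminous synchronizer S wrapping an
    # algorithm A for non-luminous robots.  snapshot_Q is the set of
    # (position, color) pairs taken in Look; accept decides from Q
    # whether the current cycle is accepted; algorithm_A maps the set
    # of positions P(Q) to a route.  Returns the route to trace in
    # Move and the new color of the light.
    P = frozenset(p for (p, c) in snapshot_Q)   # P(Q): positions only
    if accept(snapshot_Q):
        route = algorithm_A(P)                  # route computed by A
    else:
        route = [(0.0, 0.0)]                    # rejected: stay put
    return route, choose_color(snapshot_Q)
\end{verbatim}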
We say that luminous synchronizer $\mathcal{S}$ is {\em correct} if the following conditions hold for any $\Omega$, $\mathcal{A}$, $I$, and $F \in \mathcal{E}(\Omega, \mathcal{S}(\mathcal{A}), \hat{I})$. \begin{enumerate} \item $\Lambda$ is fair. \item $\check{F}$ satisfies Assumptions~\ref{ass1}, \ref{ass2}, \ref{ass3}, \ref{ass4}, and \ref{ass5}, which implies that $\check{F} \in \mathcal{E}(\Lambda, \mathcal{A}, I)$ has a similar execution $\tilde{E} \in \mathcal{E}(\tilde{\Lambda}, \mathcal{A}, I)$ for some $\tilde{\Lambda} \in \cal{SSYNC}$. \end{enumerate} \subsection{Limit of color-based synchronizer} The {\em visibility graph} of a configuration of the robots consists of a set of vertices corresponding to the robots and a set of edges between any pair of robots within distance $1$ (in $\mathcal{Z}_0$). A {\em visibility preserving} algorithm guarantees that the visibility graph does not change in any execution. Formally, an execution $E \in \mathcal{E}(\Omega, \mathcal{A}, I)$ for non-luminous robots is {\em visibility preserving} if the following condition holds: For any $r_i$ and $r_{i'}$, and for any time $t \in {\mathbf R}^+$, $dist(p_i(t), p_{i'}(t)) \leq 1$ if and only if $dist (p_i(0), p_{i'}(0)) \leq 1$, where $p_j(u)$ is the position of $r_j$ at time $u$ in $\mathcal{Z}_0$. We say that an algorithm $\mathcal{A}$ is {\em visibility preserving}, if every execution $E \in \mathcal{E}(\Omega, \mathcal{A}, I)$ is visibility preserving for any $\Omega \in \cal{SSYNC}$ and $I$. Let $Q_i(j) = \{(p_{i'}, c_{i'}) \mid r_{i'} \in S_i(j) \}$ be the snapshot taken by a robot $r_i$ at $o_i(j)$ in $\omega_i(j)$. Clearly, $p_i = (0,0)$. Let $X_i(j) = \{ c_{i'} \mid i' \neq i, r_{i'} \in S_i(j)\}$. In general, a luminous synchronizer on $r_i$ can use the full information on $Q_i(j)$ to decide whether it accepts $\omega_i(j)$ or not. A luminous synchronizer $\mathcal{S}$ is {\em color-based} if it uses only $c_i$ and $X_i(j)$ for the selection. A color-based synchronizer $\mathcal{S}$ is {\em greedy} if it accepts $\omega_i(j)$ if and only if $C(Q_i(j)) = \{Bk\}$, i.e., $c_i = Bk$ and $X_i(j)$ is either $\{Bk\}$ or $\emptyset$. We show that greedy synchronizers are not powerful enough in general. \begin{lemma} \label{lemma:removal} There exists a rigid system of five luminous robots such that for any greedy synchronizer $\mathcal{S}$, there is a triple $(\Omega, \mathcal{A}, I)$ such that, for some execution $F \in \mathcal{E}(\Omega, \mathcal{S}(\mathcal{A}), \hat{I})$, $\check{F}$ is not consistent. \end{lemma} \begin{proof} Consider a system of five luminous robots. We illustrate an initial part of an execution $F \in \mathcal{E}(\Omega, \mathcal{S}(\mathcal{A}), \hat{I})$. Initially, $r_1, r_2, \ldots, r_5$ are at $(0,0)$, $(0,1)$, $(1,1)$, $(1,0)$, and $(2,0)$, respectively in $\mathcal{Z}_0$. Thus, $\hat{I} = \{((0,0), Bk), ((0,1), Bk), ((1,1), Bk), ((1,0), Bk), ((2, 0), Bk)\}$. Since the visibility range is $1$, $r_2$ and $r_4$ are visible from $r_1$, but $r_5$ and $r_3$ are not visible from $r_1$. The first cycles in $\Omega$ are $\omega_1(1) = (0, 3/4, 1)$, $\omega_2(1) = (1/2, 5/4, 3/2)$, $\omega_3(1) = (1, 7/4, 2)$, $\omega_4(1) = (3/2, 9/4, 5/2)$, and $\omega_5(1) = (3, 15/4, 4)$. Recall that the initial color of light is $Bk$. Since $\mathcal{S}$ is greedy, it accepts $\omega_1(1)$, $\omega_2(1)$, and $\omega_3(1)$. Suppose that $\mathcal{A}$ moves $r_1$ to $(0, 3/4)$ in $\omega_1(1)$. Then, at time $3/2$, $r_1$ is not visible from $r_4$, as the check below confirms.
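(The following is a quick numerical check of the visibility claims in this proof; a hypothetical sketch, with names of our choosing.)
\begin{verbatim}
import math

robots = {1: (0.0, 0.0), 2: (0.0, 1.0), 3: (1.0, 1.0),
          4: (1.0, 0.0), 5: (2.0, 0.0)}

def visible(p, q):
    return math.dist(p, q) <= 1.0     # visibility range is 1

# Initially, r_2 and r_4 are visible from r_1; r_3 and r_5 are not.
assert visible(robots[1], robots[2]) and visible(robots[1], robots[4])
assert not visible(robots[1], robots[3])
assert not visible(robots[1], robots[5])

# After A moves r_1 to (0, 3/4), r_1 is not visible from r_4:
# dist((0, 3/4), (1, 0)) = sqrt(1 + 9/16) = 5/4 > 1.
assert not visible((0.0, 0.75), robots[4])
\end{verbatim}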
Thus, all robots visible from $r_4$ still have color $Bk$, and $\mathcal{S}$ on $r_4$ accepts $\omega_4(1)$. Observe that $\omega_1(1) \stackrel{*}{\parallel} \omega_4(1)$. Then, $\check{F}$ is not consistent regardless of the rest of $F$, because $r_4 \in S_1(1)$ but $r_1 \not\in S_4(1)$. \qed \end{proof} We extend Lemma~\ref{lemma:removal} to color-based synchronizers. A fully synchronous scheduler $\cal{FSYNC}$ produces a schedule $\Omega$ such that $\omega_i(j) = (j-1, j-3/4, j-1/4)$ for all $i$ and $j$. Thus, $\cal{FSYNC} \subset \cal{SSYNC}$ in the sense that if $\Omega \in \cal{FSYNC}$, then $\Omega \in \cal{SSYNC}$ holds. \begin{theorem} \label{theorem:removal} There exists a rigid system of five luminous robots such that for any color-based synchronizer $\mathcal{S}$, there is a triple $(\Omega, \mathcal{A}, I)$ such that, for some execution $F \in \mathcal{E}(\Omega, \mathcal{S}(\mathcal{A}), \hat{I})$, $\check{F}$ is not consistent. \end{theorem} \begin{proof} Remember the execution $F \in \mathcal{E}(\Omega, \mathcal{S}(\mathcal{A}), \hat{I})$ for five luminous robots in the proof of Lemma~\ref{lemma:removal}. The counterexample relies on the assumptions that $\mathcal{S}$ is greedy and that the initial color of each robot is $Bk$. We show that there is a $\Lambda$ which changes the configuration to one in which $\mathcal{S}$ on each robot accepts the current cycle. Consider an execution $F \in \mathcal{E}(\Lambda, \mathcal{S}(\mathcal{A}), \hat{I})$, where $\Lambda \in \cal{FSYNC}$. Then, all lights have the same color $c_j$ when the robots simultaneously start their $j$th cycles for any $j \in {\mathbf N}$, since $\mathcal{S}$ is color-based. Hence, there is a $j_0$ such that each robot $r_i$ accepts $\omega_i(j_0)$ since $\mathcal{S}$ is fair. That is, $\mathcal{S}$ on each robot $r_i$ accepts $\omega_i(j_0)$, since it accepts a cycle when $c = c_{j_0-1}$ and $X=\{c_{j_0-1}\}$. We assume without loss of generality that $\omega_i(j_0)$ is the first cycle accepted by $\mathcal{S}$. Now the configuration is $\{(p, c_{j_0-1}) \mid (p, Bk) \in \hat{I} \}$ immediately before the $j_0$th cycle starts at time $j_0-1$. We replace $\omega_i(j_0)$ for each robot $r_i$ with \begin{itemize} \item $\omega_1(j_0) = (j_0-1, (j_0-1) + 3/4, (j_0-1)+1)$, \item $\omega_2(j_0) = ((j_0-1) + 1/2, (j_0-1) + 5/4, (j_0-1) + 3/2)$, \item $\omega_3(j_0) = ((j_0-1) + 1, (j_0-1) + 7/4, (j_0-1) + 2)$, \item $\omega_4(j_0) = ((j_0-1) + 3/2, (j_0-1) + 9/4, (j_0-1) + 5/2)$, and \item $\omega_5(j_0) = ((j_0-1) + 3, (j_0-1) + 15/4, (j_0-1) + 4)$. \end{itemize} Then, the same argument as in the proof of Lemma~\ref{lemma:removal} concludes the theorem. \qed \end{proof} The above example shows that there exists no color-based synchronizer that works correctly if the algorithm is not visibility preserving. \subsection{Color-based synchronizer for vicinity preserving algorithms} Lemma~\ref{lemma:removal} and Theorem~\ref{theorem:removal} demonstrate that there is no color-based synchronizer for an arbitrary algorithm. Moreover, it is difficult for oblivious luminous robots to satisfy the second condition of naturality, because it requires remembering the positions of other robots. In this section, we consider algorithms with more restricted changes in the visibility graph and propose a color-based synchronizer for such algorithms.
An execution $E \in \mathcal{E}(\Omega, \mathcal{A}, I)$ for non-luminous robots is {\em vicinity preserving}, if the following condition holds: For any $r_i$, $r_{i'}$, and for any time $t, t' \in {\mathbf R}^+$, $dist(p_i(t), p_{i'}(t')) \leq 1$ if and only if $dist (p_i(0), p_{i'}(0)) \leq 1$. In other words, $r_i$ has to stay in the vicinity of its initial position. We say that an algorithm $\mathcal{A}$ is {\em vicinity preserving}, if every execution $E \in \mathcal{E}(\Omega, \mathcal{A}, I)$ is vicinity preserving for any $\Omega \in \cal{SSYNC}$ and $I$. In this section, we propose a color-based synchronizer $\mathcal{S}_{VP}$ that uses a set $C = \{Bk, R, B, G, W\}$ of colors and show its correctness provided that $\mathcal{A}$ is vicinity preserving and designed for non-luminous SSYNC robots. We describe $\mathcal{S}_{VP}$ as a finite-state machine with a state set $C$, an input alphabet $2^C$, and an output alphabet $\{\text{accept}, \text{reject} \}$. When $\mathcal{S}_{VP}$ is executed on $r_i$, the state of $r_i$ is the color (of the light) of $r_i$ and the input is the set $X$ of colors of the robots visible from $r_i$, excluding $r_i$'s color. Table~\ref{table:S-ST} shows the transition function and the output function of $\mathcal{S}_{VP}$, where \begin{itemize} \item $\exists c$ means any $X$ such that $c \in X$, \item $\forall(c_1, c_2, \ldots, c_k)$ means any $X$ such that $X \subseteq \{c_1, c_2, \ldots, c_k\}$, and \item $\exists c \ \wedge \ \forall(c_1, c_2, \ldots, c_k)$ means any $X$ such that $c \in X$ and $X \subseteq \{c_1, c_2, \ldots, c_k\}$. \end{itemize} The initial state of each robot is $Bk$. We assume, without loss of generality, that the visibility graph of the initial configuration $I$ is connected. Since $\mathcal{A}$ is vicinity preserving, the visibility graph never changes; let $S_i$ be the set of neighbors of $r_i$ in the visibility graph defined by $I$. \begin{table}[t] \centering \caption{Finite-state machine $\mathcal{S}_{VP}$.} \begin{tabular}{|c|l|c|c|} \hline Current state & Input & Next state & Output \\ \hline \hline Bk & $\forall (Bk,B,W)$ & $R$ & accept \\ \cline{2-4} & $\exists R \wedge \forall (Bk,R,B,W)$ & $W$ & reject \\ \hline R & $\forall (R,B,W)$ & $B$ & reject \\ \hline B & $\forall (B,G)$ & $G$ & reject \\ \hline G & $\forall (Bk,G)$ & $Bk$ & reject \\ \hline W & $\forall (B,W)$ & $Bk$ & reject \\ \hline \end{tabular} \label{table:S-ST} \end{table} Robot $r_i$ is waiting when its state is $Bk$ (Black) and moving when its state is $R$ (Red). It rejects the current cycle when its state is $Bk$ and it observes another robot in state $R$, in which case it changes its state to $W$ (White). It changes its state from $W$ to $Bk$ when it does not observe any robot in state $R$. Robot $r_i$ has finished moving when its state is $B$ (Blue), and it changes its state to $G$ (Green) when the states of the robots in $S_i$ are $B$ or $G$. Finally, it changes its state to $Bk$ when the states of the robots in $S_i$ are $G$ or $Bk$. Let $F \in \mathcal{E}(\Omega, \mathcal{S}_{VP}(\mathcal{A}), \hat{I})$ be any execution of $\mathcal{S}_{VP}$ for a vicinity preserving algorithm $\mathcal{A}$, and let $\Lambda \subseteq \Omega$ be the set of accepted cycles (which depends on $F$). Then, $\check{F} \in \mathcal{E}(\Lambda, \mathcal{A}, I)$ is an execution of non-luminous robots. We will show that $\check{F}$ has a corresponding SSYNC execution for non-luminous robots.
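Table~\ref{table:S-ST} translates directly into code. The following is a minimal sketch; the fallback behavior when no row of the table applies (keep the state and reject) is our assumption, since the table leaves such inputs unspecified:
\begin{verbatim}
def svp_step(state, X):
    # Transition and output functions of the finite-state machine
    # S_VP of Table 1.  `state` is the robot's current color and `X`
    # is the set of colors of the visible robots, excluding its own.
    # The operator <= on sets is subset inclusion, i.e., forall(...).
    if state == "Bk":
        if X <= {"Bk", "B", "W"}:                     # no R, no G
            return "R", "accept"
        if "R" in X and X <= {"Bk", "R", "B", "W"}:   # exists R
            return "W", "reject"
    elif state == "R" and X <= {"R", "B", "W"}:
        return "B", "reject"
    elif state == "B" and X <= {"B", "G"}:
        return "G", "reject"
    elif state == "G" and X <= {"Bk", "G"}:
        return "Bk", "reject"
    elif state == "W" and X <= {"B", "W"}:
        return "Bk", "reject"
    return state, "reject"   # assumed: otherwise keep state, reject
\end{verbatim}
For instance, \verb|svp_step("Bk", {"Bk"})| returns \verb|("R", "accept")|, matching the first row of the table.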
We demonstrate that (i) $\Lambda$ is fair, and (ii) $\check{F}$ satisfies stationarity, pairwise alignment, consistency, serializability, and naturality. \begin{lemma} \label{lemma:fairness} $\Lambda$ is fair. \end{lemma} \begin{proof} We show that every robot reaches state $R$ infinitely many times in $F$. We first regard the three states $Bk$, $W$, and $R$ as a virtual state $Y$. Every robot starts with state $Y$. When the state of a robot $r_i$ is $Y$ (i.e., $R$) and the state of each robot $r_{i'} \in S_i$ is either $Y$ (i.e., $R$ or $W$) or $B$, it can change its state to $B$. When the state of $r_i$ is $G$ and the state of each robot $r_{i'} \in S_i$ is either $G$ or $Y$ (i.e., $Bk$), it can change its state to $Y$ (i.e., $Bk$). For the time being, we assume that if the state of $r_i$ is $Bk$, then it will eventually change its state to $R$. We show that every robot $r_i$ reaches state $Y$ infinitely many times in $F$. This implies that it reaches state $R$ infinitely many times. Then, we can conclude that $\Lambda$ is fair, because $r_i$ changes its color to $R$ in $\omega_i(j)$ if and only if $\omega_i(j)$ is accepted by $\mathcal{S}_{VP}$. Consider any robot $r_i$. Let $\Psi_i = (\psi_i(1), \psi_i(2), \ldots) \subseteq \Omega_i$, where $\psi_i(j)$ is the cycle of $r_i$ in which it changes its state for the $j$th time. The state of $r_i$ changes from state $Y$ to $B$ in $\psi_i(1)$, from state $B$ to $G$ in $\psi_i(2)$, from state $G$ to $Y$ in $\psi_i(3)$, and so on. For each $C \in \{Y, B, G\}$, $C^{(k)}$ denotes the state $C$ taken by $r_i$ for the $k$th time. Thus, the state of $r_i$ changes from state $Y^{(1)}$ to $B^{(1)}$ in $\psi_i(1)$, from state $B^{(1)}$ to $G^{(1)}$ in $\psi_i(2)$, from state $G^{(1)}$ to $Y^{(2)}$ in $\psi_i(3)$, and so on. Independently of $i$, in $\psi_i(j)$, $r_i$ changes its state from $c(j)$ to $c(j+1)$, where $c(j) = Y^{(\lfloor j/3 \rfloor +1)}$ if $j \equiv 1 \pmod{3}$, $c(j) = B^{(\lfloor j/3 \rfloor +1)}$ if $j \equiv 2 \pmod{3}$, and $c(j) = G^{(\lfloor j/3 \rfloor)}$ if $j \equiv 0 \pmod{3}$. Let $\psi_i(j) = (o_i(j), s_i(j), f_i(j))$. Then, $r_i$ takes a snapshot at $o_i(j)$ and changes the color of its light (i.e., its state) by $s_i(j)$. The new color becomes visible from other robots at $s_i(j)$. Thus, the state of $r_i$ at $o_i(j)$ is $c(j)$ and is $c(j+1)$ at $s_i(j)$. For any robot $r_{\ell}$, let $\sigma_{\ell}(t)$ be the state of $r_{\ell}$ at time $t \in {\mathbf R}^+$. Thus, $\sigma_i(o_i(j)) = c(j)$ and $\sigma_i(s_i(j)) = c(j+1)$. We first claim that for any $j \in {\mathbf N}$, $\sigma_{i'}(o_i(j))$ is either $c(j)$ or $c(j+1)$ for any robot $r_{i'} \in S_i$. The proof is by induction on $j$. When $j=1$, $\sigma_i(o_i(1)) = c(1) = Y^{(1)}$. Suppose that $\sigma_{i'}(o_i(1)) = c(j')$ for some $j'\geq 3$. Then $r_{i'}$ changes its state from $B^{(1)}$ to $G^{(1)}$ in $\psi_{i'}(2)$ and $o_{i'}(2) < o_i(1)$. It is a contradiction, because $r_i \in S_{i'}$ and $\sigma_i(o_{i'}(2)) = Y^{(1)}$. Thus, $\sigma_{i'}(o_i(1))$ is either $c(1) (= Y^{(1)})$ or $c(2) (= B^{(1)})$. Provided that $\sigma_{i'}(o_i(j))$ is either $c(j)$ or $c(j+1)$, we show that $\sigma_{i'}(o_i(j+1))$ is either $c(j+1)$ or $c(j+2)$. By definition, if $\sigma_{i'}(o_i(j+1)) = c(\ell)$, then $\ell \geq j$. If $\ell = j$, then $r_i$ cannot change its state from $c(j+1)$ to $c(j+2)$ in $\psi_i(j+1)$. Thus, $\ell \geq j+1$. Suppose that $\ell \geq j+3$. In the time interval $[o_i(j), s_i(j))$, the state of $r_i$ is still $c(j)$.
During this interval, the state of $r_{i'}$ is either $c(j)$ or $c(j+1)$. Thus, by the same argument as the base case for $\psi_{i'}(j+2)$, a contradiction is derived. To show that $\Psi_i$ is an infinite sequence, we assume that it is a finite sequence and derive a contradiction. Let $h$ be the length of $\Psi_i$, i.e., $\psi_i(h)$ is the last cycle of $\Psi_i$. By the claim above, $\Psi_k$ is finite for any $k$. Without loss of generality, we assume that $\Psi_i$ is the shortest one. By the claim, the length of $\Psi_{i'}$ is either $h$ or $h+1$, if $r_{i'} \in S_i$. Let $t^*$ be a time instant by which $r_i$ and all $r_{i'} \in S_i$ have finished their last state-changing cycles. Since $\Omega$ is fair, there is a cycle $\omega = (o,s,f) \in \Omega_i$ such that $t^* < o$. The state of $r_i$ is $c(h+1)$ at $o$, and the state of each $r_{i'} \in S_i$ is either $c(h+1)$ or $c(h+2)$ at $o$. Thus, the state of $r_i$ changes in $\omega$, and hence $\omega \in \Psi_i$. It is a contradiction. We next show that, for all $k \in {\mathbf N}$, if the state of a robot $r_i$ is $Bk$ in $Y^{(k)}$, then it will eventually change its state to $R$. The proof is by induction on $k$. \noindent{\bf Base Case (when $k=1$):}~ Let $r_{i'} \in S_i$. The state of $r_{i'}$ is either $Y^{(1)}$ or $B^{(1)}$ (and not $G^{(1)}$) as long as the state of $r_i$ is $Y^{(1)}$ (i.e., either $Bk$, $R$, or $W$). Recall the notation $\omega_{i}(j) = (o_i(j), s_i(j), f_i(j))$ for all $i$ and $j$. By $\mathcal{S}_{VP}$, if $\sigma_i(o_i(j)) = Bk$, then it changes its state to either $W$ or $R$ in $\omega_i(j)$. It changes its state to $W$ if there is an $r_{i'} \in S_i$ such that $\sigma_{i'}(o_i(j)) = R$; otherwise, it changes its state to $R$. Suppose that $\sigma_i(o_i(j)) = R$. If $\sigma_{i'}(o_i(j)) \in \{R, B, W\}$ for each $r_{i'} \in S_i$, then, since the state of $r_i$ is $Y^{(1)}$, $r_i$ changes its state to $B$ in $\omega_i(j)$. Otherwise, if there is an $r_{i'} \in S_i$ such that $\sigma_{i'}(o_i(j)) = Bk$, by the observation above, $r_{i'}$ changes its state to $R$ or $W$ in $\omega_{i'}(j')$, where $j'$ satisfies $o_{i'}(j'-1) < o_i(j) \leq o_{i'}(j')$. Furthermore, if $r_{i'}$ changes its state to $W$, it maintains that state as long as the state of $r_i$ is $R$. Thus, $r_i$ eventually changes its state to $B$. We show that $r_i$ whose state is $Bk$ will eventually change its state to $R$ after repeating the loop between $Bk$ and $W$ a finite number of times. Suppose that $\sigma_i(o_i(j)) = Bk$, and let $R_i(j)$ be the set of $r_{i'} \in S_i$ such that $\sigma_{i'}(o_i(j)) = R$. Robot $r_i$ changes its state to $W$ in $\omega_i(j)$ if and only if $R_i(j) \neq \emptyset$. Since $|S_i| < n$, $r_i$ will eventually change its state to $R$ after repeating the loop at most $n-1$ times, since any robot $r_{i'} \in S_i$ with state $B$ will never return to $Bk$ as long as the state of $r_i$ is $Y^{(1)}$. Finally, we show that $r_i$ whose state is $W$ will eventually change its state to $Bk$. Suppose that $\sigma_i(o_i(j)) = W$. If $\sigma_{i'}(o_i(j)) \in \{B, W\}$ for each $r_{i'} \in S_i$, then $r_i$ changes its state to $Bk$ in $\omega_i(j)$. Otherwise, if there is an $r_{i'} \in S_i$ such that $\sigma_{i'}(o_i(j)) \in \{Bk, R\}$, it does not change its state in $\omega_i(j)$. If $\sigma_{i'}(o_i(j)) = R$, then $r_{i'}$ will eventually change its state to $B$.
If $\sigma_{i'}(o_i(j)) = Bk$, then $r_{i'}$ changes its state to $W$ in $\omega_{i'}(j')$, where $j'$ satisfies $o_{i'}(j') < o_i(j) \leq o_{i'}(j'+1)$. Thus, there are $r_k \in S_i \cup \{r_i\}$ and $\ell \in {\mathbf N}$ with $o_i(j) < o_k(\ell)$ such that $r_k$ changes its state from $W$ to $Bk$ in $\omega_k(\ell)$. Since the total number of times that robots other than $r_i$ repeat the loops is bounded by $(n-1)^2$, $r_i$ will eventually change its state from $W$ to $Bk$. This completes the proof of the base case. \noindent{\bf Induction Step:}~ Suppose that $\sigma_i(o_i(j)) = G^{(k-1)}$ and $\sigma_{i'}(o_i(j)) \in \{Bk, G^{(k-1)}\}$ for all $r_{i'} \in S_i$. Then, $r_i$ changes its state from $G^{(k-1)}$ to $Bk$ in $\omega_i(j)$. The state of $r_i$ remains $Bk$ as long as there is an $r_{i'} \in S_i$ whose state is $G^{(k-1)}$. We show that $r_{i'}$ will eventually change its state from $G^{(k-1)}$ to $Bk$. If $r_{i'}$ cannot change its state from $G^{(k-1)}$, then there is an $r_{i''} \in S_{i'}$ whose state is neither $Bk$ nor $G^{(k-1)}$. If the state of $r_{i''}$ is $B^{(k-1)}$, then it will eventually change its state to $G^{(k-1)}$. If it is $W$ or $R$, we can derive a contradiction, because $r_{i''}$ cannot change its state to $W$ or $R$ while the state of $r_{i'}$ is $G^{(k-1)}$. Thus, we can apply the proof for the base case to complete the induction step. \qed \end{proof} \begin{lemma} \label{lemma:stationary} $\check{F}$ is stationary. \end{lemma} \begin{proof} A robot $r_i$ can move in $\omega_i(j)$ only if the cycle is accepted. If $\omega_i(j)$ is accepted, the state of $r_i$ is $R$ during the interval $[s_i(j), f_i(j)]$. If a cycle $\omega_{i'}(j')$ of a robot $r_{i'} \in S_i$ satisfies that $\sigma_i(o_{i'}(j')) = R$, then $\omega_{i'}(j')$ is rejected. Thus, $\omega_{i'}(j')$ is accepted only if $o_{i'}(j') \not\in [s_i(j), f_i(j)]$. \qed \end{proof} \begin{lemma} \label{lemma:aligned} $\check{F}$ is pairwise aligned. \end{lemma} \begin{proof} Suppose that $\check{F}$ is not pairwise aligned. In $\Lambda$, there are cycles $\omega_i(j)$, $\omega_{i'}(j')$, and $\omega_{i'}(j'+ \ell)$ satisfying the following conditions: \begin{itemize} \item $r_{i'} \in S_i$ (thus, $r_i \in S_{i'}$), \item $\omega_{i}(j)$ and $\omega_{i'}(j')$ overlap each other, \item $\omega_i(j)$ and $\omega_{i'}(j'+ \ell)$ overlap each other, and \item $\omega_{i'}(j'+1), \omega_{i'}(j'+2), \ldots, \omega_{i'}(j'+ \ell-1)$ are not accepted (hence, they are not elements of $\Lambda$). \end{itemize} Since $\check{F}$ is stationary, we have \begin{equation*} o_{i'}(j') \leq o_i(j) < s_{i'}(j') < f_{i'}(j') < o_{i'}(j' + \ell ) < s_i(j). \end{equation*} Then, for some $0 < k < \ell$, there is a cycle $\omega_{i'}(j' + k)$ where $r_{i'}$ changes its state from $B$ to $G$. It is a contradiction since $\sigma_i(o_{i'}(j'+k)) = Bk$. \qed \end{proof} Recall that $\Psi_i = (\psi_i(1), \psi_i(2), \ldots) \subseteq \Omega_i$, where $\psi_i(j)$ is the cycle of $r_i$ in which it changes its state for the $j$th time. Synchronizer $\mathcal{S}_{VP}$ accepts every cycle $\psi_i \in \Psi_i$ in which $r_i$ changes its state from $Bk$ to $R$ and accepts no other cycles. \begin{proposition} \label{prop:} Suppose that $\mathcal{A}$ is vicinity preserving. Then, if every pair of cycles $\psi_i(j)$ and $\psi_{i'}(j')$ in $\Lambda$ such that (in $\check{F}$) $r_{i'} \in S_i$ and $\psi_i(j) \stackrel{*}{\parallel} \psi_{i'}(j')$ satisfies $\psi_i(j) \parallel \psi_{i'}(j')$, then $\check{F}$ is consistent.
\end{proposition} \begin{proof} Since $\mathcal{A}$ is vicinity preserving, $r_{i'} \in S_i$ if and only if $r_i \in S_{i'}$, and $dist(\pi_i(j), \pi_{i'}(j')) > 1$ if $r_{i'} \not\in S_i$. \qed \end{proof} \begin{lemma} \label{lemma:consistent} $\check{F}$ is consistent. \end{lemma} \begin{proof} Assume that there are two cycles $\psi_i(j)$ and $\psi_{i'}(j')$ in $\Lambda$ such that (in $\check{F}$) $\psi_i(j) \stackrel{*}{\parallel} \psi_{i'}(j')$, $r_i \in S_{i'}$, and $\psi_i(j) \not\parallel \psi_{i'}(j')$, to derive a contradiction. Let $\psi_i(j) = (o_i(j), s_i(j), f_i(j))$. Since $\psi_i(j) \stackrel{*}{\parallel} \psi_{i'}(j')$, there are cycles $\psi_{i_h}(j_h)$ such that $\psi_{i_h}(j_h) \parallel \psi_{i_{h+1}}(j_{h+1})$ for all $h = 1, 2, \ldots, \ell-1$, where $(i_1, j_1) = (i,j)$ and $(i_{\ell}, j_{\ell}) = (i', j')$. Note that in cycle $\psi_{i_h}(j_h)$, $r_{i_h}$ changes its state from $Bk$ to $R$. Recall that in the proof of Lemma~\ref{lemma:fairness}, we showed that $\sigma_{i'}(o_i(j))$ is either $c(j)$ or $c(j+1)$. Then in cycles $\psi_{i_1}(j_1)$ and $\psi_{i_2}(j_2)$, $r_{i_1}$ and $r_{i_2}$ change their states to the same state $R$ for the $k$th time for some $k$, since $\psi_{i_1}(j_1) \parallel \psi_{i_2}(j_2)$. Thus, in $\psi_{i_1}(j_1)$ and $\psi_{i_{\ell}}(j_{\ell})$, $r_{i_1}$ and $r_{i_{\ell}}$ change their states to $R$ for the $k$th time. Since $r_{i_{\ell}} \in S_{i_1}$ and $\psi_{i_1}(j_1) \not\parallel \psi_{i_{\ell}}(j_{\ell})$, we may assume that $f_{i_1}(j_1) < o_{i_{\ell}}(j_{\ell})$ by the stationarity. Since $\psi_{i_{\ell}}(j_{\ell}) \in \Lambda$, $\sigma_{i_1}(o_{i_{\ell}}(j_{\ell})) \neq R$, which means that there is a cycle $\psi_{i_1}(h_1)$ for some $h_1 > j_1$, in which $r_{i_1}$ changes its state from $R$ to $B$. Obviously, $s_{i_1}(h_1) < o_{i_{\ell}}(j_{\ell})$. Consider $c_{\ell} = \sigma_{i_{\ell}}(o_{i_1}(h_1))$. By definition, $c_{\ell} \in \{W, R, B \}$. Observe that $c_{\ell} = R$ means that $c_{\ell} = R^{(k)}$ and $c_{\ell} = B$ means that $c_{\ell} = B^{(k)}$. Thus, $c_{\ell} = W$, because $r_{i_{\ell}}$ changes its state to $R^{(k)}$ in $\psi_{i_{\ell}}(j_{\ell})$. Then, there is a cycle $\psi_{i_{\ell}}(h_{\ell})$ such that $o_{i_1}(h_1) < o_{i_{\ell}}(h_{\ell})$ and $h_{\ell} < j_{\ell}$, and $r_{i_{\ell}}$ changes its state from $W$ to $Bk$ in $\psi_{i_{\ell}}(h_{\ell})$. Consider $c_{\ell-1} = \sigma_{i_{\ell-1}}(o_{i_{\ell}}(h_{\ell}))$. By the same argument as above, $c_{\ell-1} = W$. Thus, $c_h = W$ for all $1 \leq h \leq \ell$. It is a contradiction, since it would mean that $r_{i_1}$ takes state $W$ (in $Y^{(k)}$) after it has changed its state from $R^{(k)}$ to $B$ in $\psi_{i_1}(h_1)$. \qed \end{proof} \begin{lemma} \label{lemma:serializable} $\check{F}$ is serializable. \end{lemma} \begin{proof} Let $\mathcal{G} = (\{\Lambda_0, \Lambda_1, \ldots\}, \Rightarrow)$, where $\Lambda_0, \Lambda_1, \ldots$ are the equivalence classes of $\Lambda$ with respect to $\stackrel{*}{\parallel}$. If two cycles $\omega_i(j)$ and $\omega_{i'}(j')$ are in the same class $\Lambda_{m}$, i.e., $\omega_i(j) \stackrel{*}{\parallel} \omega_{i'}(j')$, then, as we showed in the proof of Lemma~\ref{lemma:consistent}, $r_i$ and $r_{i'}$ change their states to the same state $R^{(k)}$ for some $k$. Suppose that $\omega_i(j) \rightarrow \omega_{i'}(j')$. Then, in $\omega_i(j)$, $r_i$ changes its state to $R^{(k)}$ for some $k$, and in $\omega_{i'}(j')$, $r_{i'}$ changes its state to $R^{(k')}$ for some $k'$. By definition, $\sigma_i(o_{i'}(j')) \neq R$, which implies $k < k'$.
If $\check{F}$ is not serializable, there is a loop in $\mathcal{G}$, which is a contradiction. \qed \end{proof} \begin{lemma} \label{lemma:natural} Suppose that $\mathcal{A}$ is vicinity preserving. Then $\check{F}$ is natural. \end{lemma} \begin{proof} Let $T = (\Lambda_0, \Lambda_1, \Lambda_2, \ldots)$ be any topological sort of $\mathcal{G}$. Consider any pair of cycles $\psi_i(j)$ and $\psi_{i'}(j')$ in $\Lambda$ such that (in $\check{F}$) $k' \leq k < k''$, where $\psi_i(j) \in \Lambda_k$, $\psi_{i'}(j'-1) \in \Lambda_{k'}$, and $\psi_{i'}(j') \in \Lambda_{k''}$. (We assume that $k'=-1$ when $j'=1$ for consistency.) If $r_{i'} \not\in S_i(j)$, then $dist(\pi_i(j), \pi_{i'}(j')) > 1$ since $\mathcal{A}$ is vicinity preserving. Suppose that $r_{i'} \in S_i(j)$. If $o_i(j) \geq o_{i'}(j')$, since $\omega_i(j) \not\parallel \omega_{i'}(j')$, $\omega_{i'}(j') \rightarrow \omega_i(j)$, which is a contradiction since $k < k''$. Thus, $o_i(j) < o_{i'}(j')$. \qed \end{proof} By Lemmas~\ref{lemma:fairness}, \ref{lemma:stationary}, \ref{lemma:aligned}, \ref{lemma:consistent}, \ref{lemma:serializable}, and \ref{lemma:natural}, we have the following theorem. \begin{theorem} \label{theorem:SST} For any vicinity preserving algorithm $\mathcal{A}$ for non-luminous SSYNC mobile robots, the color-based synchronizer $\mathcal{S}_{VP}$ is correct. \end{theorem} \section{Conclusion} \label{sec:conclusion} In this paper, we investigated synchronization by ASYNC robots with limited visibility. We started with a sufficient condition for an ASYNC execution to have a similar SSYNC execution. Our condition consists of stationarity, pairwise alignment, consistency, serializability, and naturality on the timing of Look-Compute-Move cycles and the visibility relation among the robots. We then showed the necessity of the five properties under a randomized adversary that selects non-rigid movement and asynchronous observations of the robots. Our randomized impossibility argument is a novel and stronger technique than a worst-case (deterministic) analysis. Finally, we presented a color-based synchronizer for luminous ASYNC robots, together with the limits of color-based synchronizers. We showed that there exists an algorithm for which no color-based synchronizer can guarantee the five properties, if the algorithm is not visibility preserving. Then, we provided a color-based synchronizer that, for a given vicinity preserving algorithm $\mathcal{A}$, produces an ASYNC execution that satisfies the five properties. Thus, luminous ASYNC robots can simulate vicinity preserving algorithms designed for (non-luminous) SSYNC robots. There are important open problems about a necessary and sufficient condition for an algorithm to have a luminous synchronizer. The requirement of our color-based synchronizer is that the algorithm $\mathcal{A}$ is vicinity preserving. It is open whether there exists a color-based synchronizer, or a general luminous synchronizer, that works for visibility preserving algorithms. \section*{Acknowledgment} The authors would like to thank Prof. Toshio Nakata for his valuable comments on the infinite product of a Borel probability measure space and Prof. Giovanni Viglietta for valuable discussions at the University of Ottawa. \bibliographystyle{plain}
\section{Introduction} An $n$-fold category is a higher and wider categorical structure obtained by $n$ applications of the internal category construction. In this paper we study the homotopy theory of $n$-fold categories. Our main result is Theorem \ref{maintheoremsummary}. Namely, we have constructed a cofibrantly generated model structure on the category of small $n$-fold categories in which an $n$-fold functor is a weak equivalence if and only if its nerve is a diagonal weak equivalence. This model structure is Quillen equivalent to the usual model structure on the category of simplicial sets, and hence also topological spaces. Our main tools are model category theory, the $n$-fold nerve, and an $n$-fold Grothendieck construction for multisimplicial sets. Notions of nerve and versions of the Grothendieck construction are very prominent in homotopy theory and higher category theory, as we now explain. The Thomason model structure on $\mathbf{Cat}$ is also often present, at least implicitly. The Grothendieck nerve of a category and the Grothendieck construction for functors are fundamental tools in homotopy theory. Theorems A and B of Quillen \cite{quillenI}, and Thomason's theorem \cite{thomasonhocolimit} on Grothendieck constructions as models for certain homotopy colimits, are still regularly applied decades after their creation. Functors with nerves that are weak equivalences of simplicial sets feature prominently in these theorems. Such functors form the weak equivalences of Thomason's model structure on {\bf Cat} \cite{thomasonCat}, which is Quillen equivalent to {\bf SSet}. Earlier, Illusie \cite{illusieII} proved that the nerve and the Grothendieck construction are homotopy inverses. Although the nerve and the Grothendieck construction are not adjoints\footnote{In fact, the Grothendieck construction is not even homotopy equivalent to $c$, the left adjoint to the nerve, as follows. For any simplicial set $X$, let $\Delta/X$ denote the Grothendieck construction on $X$. Then $N(\Delta/\partial\Delta[3])$ is homotopy equivalent to $\partial\Delta[3]$ by Illusie's result. On the other hand, $Nc\partial\Delta[3]=Nc\Delta[3]=\Delta[3]$, since $cX$ only depends on 0-,1-, and 2-simplices. Clearly, $\partial\Delta[3]$ and $\Delta[3]$ are not homotopy equivalent, so the Grothendieck construction is not naturally homotopy equivalent to $c$. }, the equivalence of homotopy categories can be realized by adjoint functors \cite{fritschlatch1}, \cite{fritschlatch2}, \cite{thomasonCat}. Related results on homotopy inverses are found in \cite{latchuniqueness}, \cite{lee}, and \cite{waldhausen}. More recently, Cisinski \cite{cisinskiasterisque} has proved two conjectures of Grothendieck concerning this circle of ideas (see also \cite{jardinesummary}). On the other hand, notions of nerve play an important role in various definitions of $n$-category \cite{leinstersurvey}, namely the definitions of Simpson \cite{simpsonmodel}, Street \cite{street}, and Tamsamani \cite{tamsamani}, as well as in the theory of quasi-categories developed by Joyal \cite{joyalNotes}, \cite{joyalVolume1}, \cite{joyalVolume2}, and also Lurie \cite{lurieStableInfinity}, \cite{lurieHigherToposTheory}. For notions of nerve for bicategories, see for example work of Duskin and Lack-Paoli \cite{duskinI}, \cite{duskinII}, \cite{lackpaoli}, and for left adjoints to singular functors in general also \cite{gabrielzisman} and \cite{kellyenriched}. 
Fully faithful cellular nerves have been developed for higher categories in \cite{bergercellular}, together with characterizations of their essential images. Nerve theorems can be established in a very general context, as proved by Leinster and Weber in \cite{leinsternerves} and \cite{weber}, and discussed in \cite{ncategorycafenerves}. As an example, Kock proves in \cite{kocktrees} a nerve theorem for polynomial endofunctors in terms of trees. Model category techniques are only becoming more important in the theory of {\it higher} categories. They have been used to prove that, in a precise sense, simplicial categories, Segal categories, complete Segal spaces, and quasi-categories are all equivalent models for $(\infty,1)$-categories \cite{bergnersurvey}, \cite{bergnerthreemodels}, \cite{bergnersimplicialcategories}, \cite{joyaltierneyquasisegal}, \cite{rezkhomotopytheory}, and \cite{toenaxiomatization}. In other directions, although the cellular nerve of \cite{bergercellular} does not transfer a model structure from cellular sets to $\omega$-categories, it is proved in \cite{bergercellular} that the homotopy category of cellular sets is equivalent to the homotopy category of $\omega$-categories. For this, a Quillen equivalence between {\it cellular spaces} and {\it simplicial $\omega$-categories} is constructed. There is also the work of Simpson and Pellissier \cite{pellissier}, \cite{simpsonmodel}, and \cite{simpsonhigher}, developing model structures on $n$-categories for the purpose of $n$-stacks, and also a model structure for $(\infty, n)$-categories. In low dimensions several model structures have already been investigated. On {\bf Cat}, there is the categorical structure of Joyal-Tierney \cite{joyaltierney}, \cite{rezkcat}, as well as the topological structure of Thomason \cite{thomasonCat}, \cite{cisinskiThomasonFix}. A model structure on pro-objects in {\bf Cat} appeared in \cite{golasinski}, \cite{golasinskiprotranslation}, \cite{golasinskipro}. The articles \cite{heggiehomotopycofibrations}, \cite{heggietensorproduct}, and \cite{heggiehomotopycolimits} are closely related to the Thomason structure and the Thomason homotopy colimit theorem. More recently, the Thomason structure on {\bf Cat} was proved in Theorem 5.2.12 of \cite{cisinskiasterisque} in the context of Grothendieck test categories and fundamental localizers. The homotopy categories of spaces and categories are proved equivalent in \cite{hoyo} without using model categories. On {\bf 2-Cat} there is the categorical structure of \cite{lack2Cat} and \cite{lackBiCat}, as well as the Thomason structure of \cite{worytkiewicz2Cat}. Model structures on {\bf 2FoldCat} have been studied in \cite{fiorepaolipronk1} in great detail. The homotopy theory of 2-fold categories is very rich, since there are numerous ways to view 2-fold categories: as internal categories in {\bf Cat}, as certain simplicial objects in {\bf Cat}, or as algebras over a 2-monad. In \cite{fiorepaolipronk1}, a model structure is associated to each point of view, and these model structures are compared. However, there is another way to view 2-fold categories not treated in \cite{fiorepaolipronk1}, namely as certain bisimplicial sets. There is a natural notion of fully faithful double nerve, which associates to a 2-fold category a bisimplicial set. An obvious question is: does there exist a Thomason-like model structure on {\bf 2FoldCat} that is Quillen equivalent to some model structure on bisimplicial sets via the double nerve?
Unfortunately, the left adjoint to double nerve is homotopically poorly behaved as it extends the left adjoint $c$ to ordinary nerve, which is itself poorly behaved. So any attempt at a model structure must address this issue. Fritsch, Latch, and Thomason \cite{fritschlatch1}, \cite{fritschlatch2}, \cite{thomasonCat} noticed that the composite of $c$ with second barycentric subdivision $\Sd^2$ is much better behaved than $c$ alone. In fact, Thomason used the adjunction $c\Sd^2 \dashv \Ex^2N$ to construct his model structure on {\bf Cat}. This adjunction is a Quillen equivalence, as the right adjoint preserves weak equivalences and fibrations by definition, and the unit and counit are natural weak equivalences. Following this lead, we move to simplicial sets via $\delta^\ast$ (restriction to the diagonal) in order to correct the homotopy type of double categorification using $\Sd^2$. Moreover, our method of proof works for $n$-fold categories as well, so we shift our focus from $2$-fold categories to general $n$-fold categories. In this paper, we construct a cofibrantly generated model structure on {\bf nFoldCat} using the fully faithful $n$-fold nerve, via the adjunction below, \begin{equation} \label{mainadjunctionintro} \xymatrix@C=4pc{\mathbf{SSet} \ar@{}[r]|{\perp} \ar@/^1pc/[r]^-{\Sd^2} & \ar@/^1pc/[l]^-{\Ex^2} \mathbf{SSet} \ar@{}[r]|{\perp} \ar@/^1pc/[r]^-{\delta_!} & \ar@/^1pc/[l]^-{\delta^\ast} \mathbf{SSet^n} \ar@{}[r]|{\perp} \ar@/^1pc/[r]^-{c^n} & \ar@/^1pc/[l]^-{N^n} \mathbf{nFoldCat}} \end{equation} and prove that the unit and counit are weak equivalences. Our method is to apply Kan's Lemma on Transfer of Structure. First we prove Thomason's classical theorem in Theorem \ref{CatCase}, and then use this proof as a basis for the general $n$-fold case in Theorem \ref{MainModelStructure}. We also introduce an $n$-fold Grothendieck construction in Definition \ref{nfoldGrothendieck}, prove that it is homotopy inverse to the $n$-fold nerve in Theorems \ref{rhowe} and \ref{lambdawe}, and conclude in Proposition \ref{unitcounitwe} that the unit and counit of the adjunction (\ref{mainadjunctionintro}) are natural weak equivalences. The articles \cite{fritschlatch1} and \cite{fritschlatch2} proved in a different way that the unit and counit of the classical Thomason adjunction $\mathbf{SSet}\dashv\mathbf{Cat}$ are natural weak equivalences. Recent interest in $n$-fold categories has focused on the $n=2$ case. In many cases, this interest stems from the fact that 2-fold categories provide a good context for incorporating two types of morphisms, and this is useful for applications. For example, between rings there are ring homomorphisms and bimodules, between topological spaces there are continuous maps and parametrized spectra as in \cite{maysigurdsson}, between manifolds there are smooth maps and cobordisms, and so on. In this direction, see for example \cite{grandisdouble1}, \cite{fiore1}, \cite{fiore2}, \cite{mortondouble}, \cite{shulmanonquillenfunctors}, \cite{shulmanframed}. Classical work on 2-fold categories, originally introduced by Ehresmann as {\it double categories}, includes \cite{ehresmannone}, \cite{ehresmanntwo}, \cite{ehresmannthree}, \cite{ehresmannfour}, \cite{ehresmann2}, \cite{ehresmann}. The theory of double categories is now flourishing, with many contributions by Brown-Mosa, Grandis-Par\'e, Dawson-Par\'e-Pronk, Dawson-Par\'e, Fiore-Paoli-Pronk, Shulman, and many others. 
To mention only a few examples, we have \cite{brownmosa99}, \cite{grandisdouble1}, \cite{grandisdouble2}, \cite{grandisdouble3}, \cite{grandisdouble4}, \cite{dawsonparepronkpaths}, \cite{dawsonparefreedouble}, \cite{fiorepaolipronk1}, \cite{shulmanonquillenfunctors}, and \cite{shulmanframed}. There has also been interest in general $n$-fold categories from various points of view. Connected homotopy $(n+1)$-types are modelled by $n$-fold categories internal to the category of groups in \cite{lodayfinitelymany}, as summarized in the survey paper \cite{paoliinternalstructures}. Edge symmetric $n$-fold categories have been studied by Brown, Higgins, and others for many years now, for example \cite{brownhigginsgroupoidscrossedcomplexes}, \cite{brownhigginsgroupoidscubicalTcomplexes}, \cite{brownhigginscubes}, and \cite{brownhigginstensor}. There are also the more recent {\it symmetric weak cubical categories} of \cite{grandiscospans3} and \cite{grandiscospans1}. The homotopy theory of cubical sets has been studied in \cite{jardineCubical}. The present article is the first to consider a Thomason structure on the category of $n$-fold categories. Our paper is organized as follows. Section \ref{nfoldcategories} recalls $n$-fold categories, introduces the $n$-fold nerve $N^n$ and its left adjoint $n$-fold categorification $c^n$, and describes how $c^n$ interacts with $\delta_!$, the left adjoint to precomposition with the diagonal. In Section \ref{barycentric} we recall barycentric subdivision, including explicit descriptions of $\Sd^2\Lambda^k[m]$, $\Sd^2\partial\Delta[m]$, and $\Sd^2\Delta[m]$. More importantly, we present a decomposition of the poset $\bfP\Sd \Delta[m]$ into the union of three posets $\Comp$, $\Cen$, and $\Out$ in Proposition \ref{upcloseddecomposition}, as pictured in Figure \ref{subdivisionfigure} for $m=2$ and $k=1$. Though Section \ref{barycentric} may appear technical, the statements become clear after a brief look at the example in Figure \ref{subdivisionfigure}. This section is the basis for the verification of the pushout axiom \ref{KanCorollaryiv} of Corollary \ref{KanCorollary}, completed in the proofs of Theorems \ref{CatCase} and \ref{MainModelStructure}. Sections \ref{retractionsection} and \ref{pushoutsection} make further preparations for the verification of the pushout axiom. Proposition \ref{deformationretract} gives a deformation retraction of $|N(\Comp \cup \Cen)|$ to part of its boundary, see Figure \ref{subdivisionfigure}. This deformation retraction finds application in equation \eqref{QPinclusion}. The highlights of Section \ref{pushoutsection} are Proposition \ref{nervecommuteswithpushout} and Corollary \ref{nervecommuteswithcolimitdecomposition} on the commutation of nerve with certain colimits of posets. Proposition \ref{nervecommuteswithpushout} on commutation of nerve with certain pushouts finds application in equation \eqref{QPinclusion}. Other highlights of Section \ref{pushoutsection} are Proposition \ref{colimitdecomposition}, Proposition \ref{simplicial_colimitdecomposition}, and Corollary \ref{cor:specific_colimit_decompositions} on the expression of certain posets (respectively their nerves) as a colimit of two ordinals (respectively two standard simplices). Section \ref{Thomasonsection} pulls these results together and quickly proves the classical Thomason theorem. Section \ref{sectionnfolddecompositions} proves the $n$-fold versions of the results in Sections \ref{barycentric}, \ref{retractionsection}, and \ref{pushoutsection}. 
The $n$-fold version of Proposition \ref{colimitdecomposition} on colimit decompositions of certain posets is Proposition \ref{colimitdecompositionnfold}. The $n$-fold version of Corollary \ref{nervecommuteswithcolimitdecomposition} on the commutation of nerve with certain colimits of posets is Proposition \ref{diagonaldecomposition}. The $n$-fold version of the deformation retraction in Proposition \ref{deformationretract} is Corollary \ref{nfolddeformationretract}. The $n$-fold version of Proposition \ref{nervecommuteswithpushout} on commutation of nerve with certain pushouts is Proposition \ref{nfoldnervecommuteswithpushout}. Proposition \ref{PushoutDescription} displays a calculation of a pushout of double categories, and the diagonal of its nerve is characterized in Proposition \ref{pushoutsimplexdescription}. Section \ref{Thomasonnfoldsection} pulls together the results of Section \ref{sectionnfolddecompositions} to prove the Thomason structure on {\bf nFoldCat} in Theorem \ref{MainModelStructure}. In the last section of the paper, Section \ref{unitcounitsection}, we introduce a Grothendieck construction for multisimplicial sets and prove that it is a homotopy inverse for $n$-fold nerve in Theorems \ref{rhowe} and \ref{lambdawe}. As a consequence, we have in Proposition \ref{unitcounitwe} that the unit and counit are weak equivalences. We have also included an appendix on the Multisimplicial Eilenberg-Zilber Lemma. {\bf Acknowledgments:} Thomas M. Fiore and Simona Paoli thank the Centre de Recerca Matem\`{a}tica in Bellaterra (Barcelona) for its generous hospitality, as it provided a fantastic working environment and numerous inspiring talks. The CRM Research Program on Higher Categories and Homotopy Theory in 2007-2008 was a great inspiration to us both. We are indebted to Myles Tierney for suggesting to use the weak equivalence $\xymatrix@1{N(\Delta/X) \ar[r] & X}$ and the Weak Equivalence Extension Theorem \ref{weakequivalenceextension} of Joyal-Tierney \cite{joyaltierneysimplicial} in our proof that the unit and counit of (\ref{nfoldcatadjunction}) are weak equivalences. We also thank Andr\'e Joyal and Myles Tierney for explaining aspects of Chapter 6 of their book \cite{joyaltierneysimplicial}. We thank Denis-Charles Cisinski for explaining to us his proof that the unit and counit are weak equivalences in the Thomason structure on $\mathbf{Cat}$, as this informed our Section \ref{unitcounitsection}. We also thank Dorette Pronk for several conversations related to this project. We also express our gratitude to an anonymous referee who made many excellent suggestions. Thomas M.~Fiore was supported at the University of Chicago by NSF Grant DMS-0501208. At the Universitat Aut\`{o}noma de Barcelona he was supported by grant SB2006-0085 of the Spanish Ministerio de Educaci\'{o}n y Ciencia under the Programa Nacional de ayudas para la movilidad de profesores de universidad e investigadores espa$\tilde{\text{n}}$oles y extranjeros. Simona Paoli was supported by Australian Postdoctoral Fellowship DP0558598 at Macquarie University. Both authors also thank the Fields Institute for its financial support, as this project began at the 2007 Thematic Program on Geometric Applications of Homotopy Theory at the Fields Institute. 
\section{$n$-Fold Categories} \label{nfoldcategories} In this section we quickly recall the inductive definition of $n$-fold category, present an equivalent combinatorial definition of $n$-fold category, discuss completeness and cocompleteness of $\mathbf{nFoldCat}$, introduce the $n$-fold nerve $N^n$, prove the existence of its left adjoint $c^n$, and recall the adjunction $\delta_! \dashv \delta^\ast$. \begin{defn} \label{defn:nfold_category_inductive} A {\it small $n$-fold category} $\mathbb{D}=(\mathbb{D}_0,\mathbb{D}_1)$ is a category object in the category of small $(n-1)$-fold categories. In detail, $\mathbb{D}_0$ and $\mathbb{D}_1$ are $(n-1)$-fold categories equipped with $(n-1)$-fold functors $$\xymatrix@C=3pc{\mathbb{D}_1 \times_{\mathbb{D}_0} \mathbb{D}_1 \ar[r]^-\circ & \mathbb{D}_1 \ar@/^1pc/[r]^s \ar@/_1pc/[r]_t & \ar[l]|{\lr{u}} \mathbb{D}_0 }$$ that satisfy the usual axioms of a category. We denote the category of $n$-fold categories by $\mathbf{nFoldCat}$. \end{defn} Since we will always deal with small $n$-fold categories, we leave off the adjective ``small''. Also, all of our $n$-fold categories are strict. The following equivalent combinatorial definition of $n$-fold category is more explicit than the inductive definition. The combinatorial definition will only be needed in a few places, so the reader may skip it if it appears too technical for one's taste. \begin{defn} \label{defn:nfold_category_combinatorial} The data for an {\it $n$-fold category $\bbD$} are \begin{enumerate} \item \label{nsets} sets $\bbD_{\epsilon}$, one for each $\epsilon \in \{0,1\}^n$, \item \label{nsourcetarget} for every $1 \leq i \leq n$ and $\epsilon'\in \{0,1\}^n$ with $\epsilon_i'=1$ we have {\it source} and {\it target} functions $$\xymatrix{s^i, t^i \co \bbD_{\epsilon'} \ar[r] & \bbD_\epsilon}$$ where $\epsilon\in \{0,1\}^n$ satisfies $\epsilon_i=0$ and $\epsilon_j=\epsilon_j'$ for all $j \neq i$ (for ease of notation we do not include $\epsilon'$ in the notation for $s^i$ and $t^i$, despite the ambiguity), \item \label{nunit} for every $1 \leq i \leq n$ and $\epsilon, \epsilon'\in \{0,1\}^n $ with $\epsilon_i=0$, $\epsilon_i'=1$, and $\epsilon_j=\epsilon_j'$ for all $j \neq i$, we have a {\it unit} $\xymatrix@1{u^i \co \bbD_\epsilon \ar[r] & \bbD_{\epsilon'}}$, \item for every $1 \leq i \leq n$ and $\epsilon, \epsilon'\in \{0,1\}^n $ with $\epsilon_i=0$, $\epsilon_i'=1$, and $\epsilon_j=\epsilon_j'$ for all $j \neq i$, we have a {\it composition} $$\xymatrix{\bbD_{\epsilon'} \times_{\bbD_{\epsilon}} \bbD_{\epsilon'} \ar[r]^-{\circ^i} & \bbD_{\epsilon'}}.$$ \end{enumerate} To form an {\it $n$-fold category}, these data are required to satisfy the following axioms. \begin{enumerate} \item \label{nsourcetargetcompatibility} {\it Compatibility of source and target:} for all $1 \leq i \leq n$ and all $1 \leq j \leq n$, $$s^i s^j = s^j s^i$$ $$t^i t^j = t^j t^i$$ $$s^i t^j = t^j s^i$$ whenever these composites are defined. \item \label{nunitcompatibility} {\it Compatibility of units with units:} for all $1 \leq i \leq n$ and all $1 \leq j \leq n$, $$u^iu^j=u^ju^i$$ whenever these composites are defined. \item \label{nunitsourcetargetcompatibility} {\it Compatibility of units with source and target}: for all $1 \leq i \leq n$ and all $1 \leq j \leq n$, $$s^iu^j=u^js^i$$ $$t^iu^j=u^jt^i$$ whenever these composites are defined.
\item {\it Categorical structure:} for every $1 \leq i \leq n$ and $\epsilon, \epsilon'\in \{0,1\}^n $ with $\epsilon_i=0$, $\epsilon_i'=1$, and $\epsilon_j=\epsilon_j'$ for all $j \neq i$, the diagram in $\mathbf{Set}$ $$\xymatrix@C=3pc{\mathbb{D}_{\epsilon'} \times_{\mathbb{D}_\epsilon} \mathbb{D}_{\epsilon'} \ar[r]^-{\circ^i} & \mathbb{D}_{\epsilon'} \ar@/^1pc/[r]^{s^i} \ar@/_1pc/[r]_{t^i} & \ar[l]|{\lr{u^i}} \mathbb{D}_\epsilon }$$ is a category. \item \label{nspecificinterchangelaw} {\it Interchange law:} For every $i \neq j$ and every $\epsilon \in \{0,1\}^n$ with $\epsilon_i=1=\epsilon_j$, the compositions $\circ^i$ and $\circ^j$ can be interchanged, that is, if $w,x,y,z\in \bbD_\epsilon$, and $$t^i(w)=s^i(x), \; \; t^i(y)=s^i(z)$$ $$t^j(w)=s^j(y), \; \; t^j(x)=s^j(z),$$ $$\xymatrix@R=1pc@C=1pc{& & \\ & \ar[r]^i \ar[d]_j & \\ & &} \;\;\;\;\; \xymatrix{\ar@{-}[r] \ar@{-}[d] \ar@{}[dr]|w & \ar@{-}[r] \ar@{-}[d] \ar@{}[dr]|x & \ar@{-}[d] \\ \ar@{-}[r] \ar@{-}[d] \ar@{}[dr]|y & \ar@{-}[r] \ar@{-}[d] \ar@{}[dr]|z & \ar@{-}[d] \\ \ar@{-}[r] & \ar@{-}[r] & }$$ then $(z \circ^i y) \circ^j (x \circ^i w)=(z \circ^j x) \circ^i (y \circ^j w)$. \end{enumerate} We define $\vert \epsilon \vert$ to be the number of 1's in $\epsilon$, that is \begin{equation*} \vert \epsilon \vert:=\vert \{1\leq i \leq n \mid \epsilon_i=1\} \vert =\sum_{i=1}^n \epsilon_i. \end{equation*} If $k=\vert \epsilon \vert$, an element of $\bbD_\epsilon$ is called a {\it $k$-cube}. \end{defn} \begin{rmk} If $\bbD_\epsilon=\bbD_{\epsilon'}$ for all $\epsilon, \epsilon' \in \{0,1\}^n$ with $\vert \epsilon \vert = \vert \epsilon' \vert$, then the data \ref{nsets}, \ref{nsourcetarget}, \ref{nunit} satisfying axioms \ref{nsourcetargetcompatibility}, \ref{nunitcompatibility}, \ref{nunitsourcetargetcompatibility} are an $n$-truncated {\it cubical complex} in the sense of Section 1 of \cite{brownhigginscubes}. Compositions and the interchange law are also similar. The situation of \cite{brownhigginscubes} is {\it edge symmetric} in the sense that $\bbD_\epsilon=\bbD_{\epsilon'}$ for all $\epsilon, \epsilon' \in \{0,1\}^n$ with $\vert \epsilon \vert = \vert \epsilon' \vert$, and the $\vert \epsilon \vert$ compositions on $\bbD_\epsilon$ coincide with the $\vert \epsilon' \vert$ compositions on $\bbD_{\epsilon'}$. In the present article we study the non-edge-symmetric case, in the sense that we do {\it not} require $\bbD_\epsilon$ and $\bbD_{\epsilon'}$ to coincide when $\vert \epsilon \vert = \vert \epsilon' \vert$, and hence, the $\vert \epsilon \vert$ compositions on $\bbD_\epsilon$ are not required to be the same as the $\vert \epsilon' \vert$ compositions on $\bbD_{\epsilon'}$. \end{rmk} \begin{rmk} The generalized interchange law follows from the interchange law in \ref{nspecificinterchangelaw}. For example, if we have eight compatible 3-dimensional cubes arranged as a 3-dimensional cube, then all possible ways of composing these eight cubes down to one cube are the same. \end{rmk} \begin{prop} The inductive notion of $n$-fold category in Definition \ref{defn:nfold_category_inductive} is equivalent to the combinatorial notion of $n$-fold category in Definition \ref{defn:nfold_category_combinatorial} in the strongest possible sense: the categories of such are equivalent. \end{prop} \begin{pf} For $n=1$ the categories are clearly the same. Suppose the proposition holds for $n-1$ and call the categories $\mathbf{(n-1)FoldCat(ind)}$ and $\mathbf{(n-1)FoldCat(comb)}$.
Then internal categories in $\mathbf{(n-1)FoldCat(ind)}$ are equivalent to internal categories in $\mathbf{(n-1)FoldCat(comb)}$, while internal categories in $\mathbf{(n-1)FoldCat(comb)}$ are the same as $\mathbf{nFoldCat(comb)}$. \end{pf} A 2-fold category, that is, a category object in {\bf Cat}, is precisely a {\it double category} in the sense of Ehresmann. A double category consists of a set $\bbD_{00}$ of {\it objects}, a set $\bbD_{01}$ of {\it horizontal morphisms}, a set $\bbD_{10}$ of {\it vertical morphisms}, and a set $\bbD_{11}$ of {\it squares} equipped with various sources, targets, and associative and unital compositions satisfying the interchange law. Several homotopy theories for double categories were considered in \cite{fiorepaolipronk1}. \begin{examp} There are various standard examples of double categories. To any category, one can associate the double category of commutative squares. Any 2-category can be viewed as a double category with trivial vertical morphisms or as a double category with trivial horizontal morphisms. To any 2-category, one can also associate the double category of {\it quintets}: a square is a square of morphisms inscribed with a 2-cell in a given direction. \end{examp} \begin{examp} In nature, one often finds {\it pseudo double categories}. These are like double categories, except one direction is a bicategory rather than a 2-category (see \cite{grandisdouble1} for a more precise definition). For example, one may consider 1-manifolds, 2-cobordisms, smooth maps, and appropriate squares. Another example is rings, bimodules, ring maps, and twisted equivariant maps. For these examples and more, see \cite{grandisdouble1}, \cite{fiore2}, and other articles on double categories listed in the introduction. \end{examp} \begin{examp} Any $n$-category is an $n$-fold category in numerous ways, just like a 2-category can be considered as a double category in several ways. \end{examp} An important method of constructing $n$-fold categories from $n$ ordinary categories is the external product, which is compatible with the external product of simplicial sets. This was called the {\it square product} on page 251 of \cite{ehresmannone}. \begin{defn} \label{externalproduct} If $\bfC_1, \ldots, \bfC_n$ are small categories, then the {\it external product} $\bfC_1 \boxtimes \cdots \boxtimes \bfC_n$ is an $n$-fold category with object set $\Obj \bfC_1 \times \cdots \times \Obj \bfC_n$. Morphisms in the $i$-th direction are $n$-tuples $(f_1, \dots, f_n)$ of morphisms in $\bfC_1 \times \cdots \times \bfC_n$ where all but the $i$-th entry are identities. Squares in the $ij$-plane are $n$-tuples where all entries are identities except the $i$-th and $j$-th entries, and so on. An $n$-cube is an $n$-tuple of morphisms, possibly all non-identity morphisms. \end{defn} \begin{prop} The category $\mathbf{nFoldCat}$ is locally finitely presentable. \end{prop} \begin{pf} We prove this by induction. The category $\mathbf{Cat}$ of small categories is known to be locally finitely presentable (see for example \cite{gabrielulmer}). Assume $\mathbf{(n-1)FoldCat}$ is locally finitely presentable. The category $\mathbf{nFoldCat}$ is the category of models in $\mathbf{(n-1)FoldCat}$ for a sketch with finite diagrams. Since $\mathbf{(n-1)FoldCat}$ is locally finitely presentable, we conclude from Proposition 1.53 of \cite{adamekrosicky1994} that $\mathbf{nFoldCat}$ is also locally finitely presentable. \end{pf} \begin{prop} The category $\mathbf{nFoldCat}$ is complete and cocomplete. 
\end{prop} \begin{pf} Completeness follows quickly, because $\mathbf{nFoldCat}$ is a category of algebras. For example, the adjunction between $n$-fold graphs and $n$-fold categories is monadic by the Beck Monadicity Theorem. This means that the algebras for the induced monad are precisely the $n$-fold categories. The category $\mathbf{nFoldCat}$ is cocomplete because $\mathbf{nFoldCat}$ is locally finitely presentable. \end{pf} The colimits of certain $k$-fold subcategories are the $k$-fold subcategories of the colimit. To prove this, we introduce some notation. \begin{notation} Let $\leq$ denote the componentwise partial order on $\{0,1\}^n$, so that $\epsilon \leq \overline{k}$ if and only if $\epsilon_i \leq \overline{k}_i$ for all $1 \leq i \leq n$, and let $\overline{k}\in \{0,1\}^n$ with $k=|\overline{k}|$. The forgetful functor $$\xymatrix{U_{\overline{k}}\co \mathbf{nFoldCat} \ar[r] & \mathbf{kFoldCat}}$$ assigns to an $n$-fold category $\bbD$ the $k$-fold category consisting of those sets $\bbD_\epsilon$ with $\epsilon \leq \overline{k}$ and all the source, target, and identity maps of $\bbD$ between them. If we picture $\bbD$ as an $n$-cube with $\bbD_\epsilon$'s at the vertices and source, target, identity maps on the edges, then the $k$-fold subcategory $U_{\overline{k}}(\bbD)$ is a $k$-face of this $n$-cube. For example, if $n=2$ and $k=1$, then $U_{\overline{k}}(\bbD)$ is either the horizontal or vertical subcategory of the double category $\bbD$. \end{notation} \begin{prop} \label{prop:forgetful_admits_right_adjoint} The forgetful functor $\xymatrix@1{U_{\overline{k}}\co \mathbf{nFoldCat} \ar[r] & \mathbf{kFoldCat}}$ admits a right adjoint $R_{\overline{k}}$, and thus preserves colimits: for any functor $F$ into $\mathbf{nFoldCat}$ we have $$U_{\overline{k}}\left( \colim F \right) = \colim U_{\overline{k}} F.$$ \end{prop} \begin{pf} For a $k$-fold category $\bbE$, the $n$-fold category $R_{\overline{k}}\bbE$ has $U_{\overline{k}}R_{\overline{k}}\bbE=\bbE$, in particular the objects of $R_{\overline{k}}\bbE$ are the same as the objects of $\bbE$. The other cubes are defined inductively. If $k_i=0$, then $R_{\overline{k}}\bbE$ has a unique morphism (1-cube) in direction $i$ between any two objects. Suppose the $j$-cubes of $R_{\overline{k}}\bbE$ have already been defined, that is $\left(R_{\overline{k}}\bbE\right)_\epsilon$ has been defined for all $\epsilon$ with $|\epsilon|=j$. For any $\epsilon$ with $|\epsilon|=j+1$ and $\epsilon \nleq \overline{k}$, there is a unique element of $\left(R_{\overline{k}}\bbE\right)_\epsilon$ for each boundary of $j$-cubes. The natural bijection $$\mathbf{kFoldCat}(U_{\overline{k}}\bbD,\bbE)\cong \mathbf{nFoldCat}(\bbD,R_{\overline{k}} \bbE)$$ is given by uniquely extending $k$-fold functors defined on $U_{\overline{k}}\bbD$ to $n$-fold functors into $R_{\overline{k}} \bbE$. \end{pf} We next introduce the $n$-fold nerve functor, prove that it admits a left adjoint, and also prove that an $n$-fold natural transformation gives rise to a simplicial homotopy after pulling back along the diagonal. \begin{defn} The {\it $n$-fold nerve} of an $n$-fold category $\bbD$ is the multisimplicial set $N^n\bbD$ with $\overline{p}$-simplices $$(N^n\bbD)_{\overline{p}}:=Hom_{\mathbf{nFoldCat}}([p_1] \boxtimes \cdots \boxtimes [p_n],\bbD).$$ A $\overline{p}$-simplex is a $\overline{p}$-array of composable $n$-cubes. \end{defn} \begin{rmk} The $n$-fold nerve is the same as iterating the nerve construction for internal categories $n$ times.
\end{rmk} \begin{examp} The $n$-fold nerve is compatible with external products: $N^n(\bfC_1 \boxtimes \cdots \boxtimes \bfC_n)=N\bfC_1 \boxtimes \cdots \boxtimes N\bfC_n$. In particular, $$N^n([m_1]\boxtimes \cdots \boxtimes [m_n])=\Delta[m_1] \boxtimes \cdots \boxtimes \Delta[m_n]=\Delta[m_1, \ldots, m_n].$$ \end{examp} \begin{prop} The functor $\xymatrix@1{N^n\co \mathbf{nFoldCat} \ar[r] & \mathbf{SSet^n}}$ is fully faithful. \end{prop} \begin{pf} We proceed by induction. For $n=1$, the usual nerve functor is fully faithful. Consider now $n>1$, and suppose $$\xymatrix@1{N^{n-1}\co \mathbf{(n-1)FoldCat} \ar[r] & \mathbf{SSet^{n-1}}}$$ is fully faithful. We have a factorization $$\xymatrix{\Cat(\mathbf{(n-1)FoldCat}) \ar@/^2pc/[rr]^{N^n} \ar[r]_-N & \left[ \Delta^{\op}, \mathbf{(n-1)FoldCat} \right] \ar[r]_-{N^{n-1}_*} & \left[ \Delta^{\op}, \mathbf{SSet^{n-1}} \right], }$$ where the brackets mean functor category. The functor $N$ is faithful, as $(NF)_0$ and $(NF)_1$ are $F_0$ and $F_1$. It is also full, for if $\xymatrix@1{F'\co N\bbD \ar[r] & N\bbE}$, then $F'_0$ and $F'_1$ form an $n$-fold functor with nerve $F'$ (compatibility of $F'$ with the inclusions $\xymatrix{e_{i,i+1}\co [1] \ar[r] & [m]}$ determines $F'_m$ from $F'_0$ and $F'_1$). The functor $N^{n-1}_*$ is faithful, since it is faithful at every degree by hypothesis. If $\xymatrix@1{(G_m')_m \co (N^{n-1}\bbD_m)_m \ar[r] & (N^{n-1}\bbE_m)_m}$ is a morphism in $\left[ \Delta^{\op}, \mathbf{SSet^{n-1}} \right]$, there exist $(n-1)$-fold functors $G_m$ such that $N^{n-1}G_m=G_m'$, and these are compatible with the structure maps for $(\bbD_m)_m$ and $(\bbE_m)_m$ by the faithfulness of $N^{n-1}$. So $N^{n-1}_*$ is also full. Finally, $N^n=N^{n-1}_* \circ N$ is a composite of fully faithful functors. This proposition can also be proved using the Nerve Theorem 4.10 of \cite{weber}. For a direct proof in the case $n=2$, see \cite{fiorepaolifundamental}. \end{pf} \begin{prop} The $n$-fold nerve functor $N^n$ admits a left adjoint $c^n$ called fundamental $n$-fold category or $n$-fold categorification. \end{prop} \begin{pf} The functor $N^n$ is defined as the singular functor associated to an inclusion. Since $\mathbf{nFoldCat}$ is cocomplete, a left adjoint to $N^n$ is obtained by left Kan extending along the Yoneda embedding. This is Kan's lemma about singular-realization adjunctions. \end{pf} \begin{examp} \label{examp:categorification_and_external_products} If $X_1, \ldots, X_n$ are simplicial sets, then $$c^n(X_1\boxtimes \cdots \boxtimes X_n)=cX_1\boxtimes \cdots \boxtimes cX_n$$ where $c$ is ordinary categorification. The symbol $\boxtimes$ on the left means external product of simplicial sets, and the symbol $\boxtimes$ on the right means external product of categories as in Definition \ref{externalproduct}. For a proof in the case $n=2$, see \cite{fiorepaolifundamental}. \end{examp} Since the nerve of a natural transformation is a simplicial homotopy, we expect the diagonal of the $n$-fold nerve of an $n$-fold natural transformation to be a simplicial homotopy. \begin{defn} \label{defn_nfold_nat_transf} An {\it $n$-fold natural transformation} $\xymatrix@1{\alpha \co F \ar@{=>}[r] & G }$ between $n$-fold functors $\xymatrix@1{F,G\co \bbD \ar[r] & \bbE }$ is an $n$-fold functor $$\xymatrix{\alpha\co \bbD \times [1]^{\boxtimes n} \ar[r] & \bbE}$$ such that $\alpha\vert_{\bbD \times \{0\}}$ is $F$ and $\alpha\vert_{\bbD \times \{1\}}$ is $G$.
\end{defn} Essentially, an $n$-fold natural transformation associates to an object an $n$-cube whose source corner is that object, to a morphism in direction $i$ a square in direction $ij$ for all $j\neq i$ in $1 \leq j \leq n$, to an $ij$-square a 3-cube in direction $ijk$ for all $k \neq i,j$ in $1 \leq k \leq n$, etc., and these are appropriately functorial, natural, and compatible. \begin{examp} \label{examp:n_naturaltransfs_yield_an_nfold_naturaltransf} If $\xymatrix@1{\alpha_i\co\bfC_i \times [1] \ar[r] & \bfC_i' }$ are ordinary natural transformations between ordinary functors for $1 \leq i \leq n$, then $\alpha_1 \boxtimes \cdots \boxtimes \alpha_n$ is an $n$-fold natural transformation because of the isomorphism $$(\bfC_1 \times [1]) \boxtimes \cdots \boxtimes (\bfC_n \times [1]) \cong (\bfC_1 \boxtimes \cdots \boxtimes \bfC_n) \times ([1] \boxtimes \cdots \boxtimes [1]).$$ \end{examp} \begin{prop} \label{nfoldnat_gives_simplicial_homotopy} Let $\xymatrix@1{\alpha \co \bbD \times [1]^{\boxtimes n} \ar[r] & \bbE}$ be an $n$-fold natural transformation. Then $(\delta^*N^n\alpha)\circ (1_{\delta^*N^n \bbD} \times d)$ is a simplicial homotopy from $\delta^*(N^n \alpha\vert_{\bbD \times \{0\}})$ to $\delta^*(N^n \alpha\vert_{\bbD \times \{1\}})$. \end{prop} \begin{pf} We have the diagonal of the $n$-fold nerve of $\alpha$ $$\xymatrix@1@C=4pc{\delta^*(N^n \bbD) \times \delta^*(N^n[1]^{\boxtimes n}) \ar[r]^-{\delta^*N^n \alpha} & \delta^*N^n \bbE},$$ which we then precompose with $1_{\delta^*N^n \bbD}\times d$ to get $$\xymatrix@1@C=4pc{(\delta^*N^n \bbD) \times \Delta[1] \ar[r]^-{1_{\delta^*N^n \bbD} \times d} & \delta^*(N^n \bbD) \times \Delta[1]^{\times n} \ar[r]^-{\delta^*N^n \alpha} & \delta^*N^n \bbE}.$$ \end{pf} Lastly, we consider the behavior of $c^n$ on the image of the left adjoint $\delta_!$. The diagonal functor $$\xymatrix{\delta \co\Delta \ar[r] & \Delta^n}$$ $$[m] \mapsto ([m],\ldots, [m])$$ induces $\xymatrix@1{\delta^\ast\co \mathbf{SSet^n} \ar[r] & \mathbf{SSet}}$ by precomposition. The functor $\delta^\ast$ admits both a left and right adjoint by Kan extension. The left adjoint $\delta_!$ is uniquely characterized by two properties: \begin{enumerate} \item $\delta_!(\Delta[m])=\Delta[m,\ldots,m]$, \item $\delta_!$ preserves colimits. \end{enumerate} Thus, \begin{equation*} \delta_!X=\delta_!(\underset{\Delta[m] \rightarrow X}{\colim} \Delta[m])=\underset{\Delta[m] \rightarrow X}{\colim}\delta_!\Delta[m]=\underset{\Delta[m] \rightarrow X}{\colim} \Delta[m,\ldots,m] \end{equation*} where the colimit is over the simplex category of the simplicial set $X$. Further, since $c^n$ preserves colimits, we have \begin{equation*} c^n\delta_!X=\underset{\Delta[m] \rightarrow X}{\colim} c^n\Delta[m,\ldots,m]=\underset{\Delta[m] \rightarrow X}{\colim} [m]\boxtimes \cdots \boxtimes [m]. \end{equation*} Clearly, $c^n \delta_! \Delta[m]=[m] \boxtimes \cdots \boxtimes [m]$. The calculation of $c^n \delta_! \Sd^2 \Delta[m]$ and $c^n \delta_! \Sd^2 \Lambda^k[m]$ is not as simple, because external product does not commute with colimits. We will give a general procedure for calculating the $n$-fold categorification of nerves of certain posets in Section \ref{sectionnfolddecompositions}.
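In small cases, however, the colimit can be evaluated directly. For instance (a simple illustrative computation, included only for orientation), take $X=\Delta[1]\amalg_{\Delta[0]}\Delta[1]$, two 1-simplices glued at a vertex. Since $\delta_!$ and $c^n$ are left adjoints, they preserve this pushout, so $$c^n\delta_!\left(\Delta[1]\amalg_{\Delta[0]}\Delta[1]\right)\cong [1]^{\boxtimes n} \amalg_{[0]^{\boxtimes n}} [1]^{\boxtimes n},$$ that is, two $n$-cubes glued along a single corner object.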
\section{Barycentric Subdivision and Decomposition of $\bfP\Sd\Delta[m]$}\label{barycentric} The adjunction \begin{equation} \xymatrix@C=4pc{\mathbf{SSet} \ar@{}[r]|{\perp} \ar@/^1pc/[r]^{\Sd} & \ar@/^1pc/[l]^{\Ex} \mathbf{SSet}} \end{equation} between barycentric subdivision $\Sd$ and Kan's functor $\Ex$ is crucial to Thomason's transfer of the model structure from {\bf SSet} to {\bf Cat}. We will need a good understanding of subdivision for the Thomason structure on $\mathbf{nFoldCat}$ as well, so we recall it in this section. Explicit descriptions of certain subsimplices of the double subdivisions $\Sd^2\Lambda^k[m]$, $\Sd^2\partial\Delta[m]$, and $\Sd^2\Delta[m]$ will be especially useful later. In Proposition \ref{upcloseddecomposition}, we present a decomposition of the poset $\bfP\Sd\Delta[m]$, which is pictured in Figure \ref{subdivisionfigure} for the case $m=2$ and $k=1$. The nerve of the poset $\bfP\Sd\Delta[m]$ is of course $\Sd^2\Delta[m]$. This decomposition allows us to describe a deformation retraction of part of $|\Sd^2\Delta[m]|$ in a very controlled way (Proposition \ref{deformationretract}). In particular, each $m$-subsimplex is deformation retracted onto one of its faces. This allows us to do a deformation retraction of the $n$-fold categorifications as well in Corollary \ref{nfolddeformationretract}. These preparations are essential for verifying the pushout axiom in Kan's Lemma on Transfer of Model Structures. We begin now with our recollection of barycentric subdivision. The simplicial set $\Sd\Delta[m]$ is the nerve of the poset $\bfP\Delta[m]$ of non-degenerate simplices of $\Delta[m]$. The ordering is the face relation. Recall that the poset $\bfP\Delta[m]$ is isomorphic to the poset of nonempty subsets of $[m]$ ordered by inclusion. Thus a $q$-simplex $v$ of $\Sd\Delta[m]$ is a tuple $(v_0, \dots, v_q)$ of nonempty subsets of $[m]$ such that $v_i$ is a subset of $v_{i+1}$ for all $0\leq i\leq q-1$. For example, the tuple \begin{equation} \label{vexample} (\{0\},\{0,2\},\{0,1,2,3\}) \end{equation} is a 2-simplex of $\Sd\Delta[3]$. A $p$-simplex $u$ is a {\it face} of a $q$-simplex $v$ in $\Sd\Delta[m]$ if and only if \begin{equation} \label{facerelation} \{u_0,\dots, u_p\} \subseteq \{v_0,\dots, v_q\}. \end{equation} For example, the 1-simplex \begin{equation} \label{uexample} (\{0\},\{0,1,2,3\}) \end{equation} is a face of the 2-simplex in equation \eqref{vexample}. A face that is a 0-simplex is called a {\it vertex}. The vertices of $v$ are written simply as $v_0, \dots, v_q$. A $q$-simplex $v$ of $\Sd\Delta[m]$ is non-degenerate if and only if all $v_i$ are distinct. The simplices in equations (\ref{vexample}) and (\ref{uexample}) are both non-degenerate. The barycentric subdivision of a general simplicial set $K$ is defined in terms of the barycentric subdivisions $\Sd\Delta[m]$ that we have just recalled. \begin{defn} The {\it barycentric subdivision} of a simplicial set $K$ is $$\underset{\Delta[n] \rightarrow K}{\colim} \Sd\Delta[n]$$ where the colimit is indexed over the category of simplices of $K$. \end{defn} The right adjoint to $\Sd$ is the $\Ex$ functor of Kan, and is defined in level $m$ by $$(\Ex X)_m=\mathbf{SSet}(\Sd \Delta[m], X).$$ As pointed out on page 311 of \cite{thomasonCat}, there is a particularly simple description of $\Sd K$ whenever $K$ is a classical simplicial complex each of whose simplices has a linearly ordered vertex set compatible with face inclusion. In this case, $\Sd K$ is the nerve of the poset $\bfP K$ of non-degenerate simplices of $K$.
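As the smallest example of this description (recorded here only for orientation), the poset $\bfP\Delta[1]$ is $$\xymatrix{\{0\} \ar[r] & \{0,1\} & \{1\} \ar[l]}$$ so $\Sd\Delta[1]=N\bfP\Delta[1]$ consists of two $1$-simplices joined at the barycenter vertex $\{0,1\}$.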
The cases $K=\Sd \Delta[m], \Lambda^k[m], \Sd \Lambda^k[m], \partial \Delta[m],$ and $\Sd \partial \Delta[m]$ are of particular interest to us. We first consider the case $K=\Sd \Delta[m]$ in order to describe the simplicial set $\Sd^2\Delta[m]$. This is the nerve of the poset $\bfP\Sd\Delta[m]$ of non-degenerate simplices of $\Sd\Delta[m]$. A $q$-simplex of $\Sd^2\Delta[m]$ is a sequence $V=(V_0,\dots, V_q)$ where each $V_i=(v_0^i, \dots, v^i_{r_i})$ is a non-degenerate simplex of $\Sd\Delta[m]$ and $V_{i-1} \subseteq V_i$. For example, \begin{equation} \label{Vexample} \left(\;\;(\{01\}),\;\;(\{0\},\{01\}),\;\;(\{0\},\{01\},\{012\})\;\; \right) \end{equation} is a 2-simplex in $\Sd^2 \Delta[2]$. A $p$-simplex $U$ is a face of a $q$-simplex $V$ in $\Sd^2\Delta[m]$ if and only if \begin{equation} \label{OrderOnDoubleSubdivision} \{U_0,\dots, U_p\} \subseteq \{V_0,\dots, V_q\}. \end{equation} For example, the 1-simplex \begin{equation} \label{Uexample} \left(\;\;(\{01\}),\;\;(\{0\},\{01\},\{012\})\;\; \right) \end{equation} is a subsimplex of the 2-simplex in equation \eqref{Vexample}. The vertices of $V$ are $V_0,\dots, V_q$. A $q$-simplex $V$ of $\Sd^2\Delta[m]$ is non-degenerate if and only if all $V_i$ are distinct. The simplices in equations (\ref{Vexample}) and (\ref{Uexample}) are both non-degenerate. Figure \ref{subdivisionfigure} displays the poset $\bfP\Sd\Delta[m]$, the nerve of which is $\Sd^2\Delta[m]$. \begin{figure} \begin{center} \def\objectstyle{\scriptstyle} \xy 0;/r.21pc/: (0,0)*{\halo{02}}="1"; (80,0)*{\halo{2}}="2"; (-80,0)*{\halo{0}}="3"; (40,0)*{\halo{2,02}}="4"; (-40,0)*{\halo{0,02}}="5"; (0,50)*{\halo{012}}="6"; (-60,34.5)*{\halo{0,01}}="7"; (60,34.5)*{\halo{2,12}}="8"; (-35,40)*{\halo{0,01,012}}="9"; (35,40)*{\halo{2,12,012}}="10"; (-30,25)*{\halo{0,012}}="11"; (30,25)*{\halo{2,012}}="12"; (-25,15)*{\halo{0,02,012}}="13"; (25,15)*{\halo{2,02,012}}="14"; (0,20)*{\halo{02,012}}="15"; (22,55)*{\halo{12,012}}="16"; (-22,55)*{\halo{01,012}}="17"; (40,69)*{\halo{12}}="18"; (-40,69)*{\halo{01}}="19"; (15,86)*{\halo{1,12,012}}="20"; (-15,86)*{\halo{1,01,012}}="21"; (0,90)*{\halo{1,012}}="22"; (-20,103.5)*{\halo{1,01}}="23"; (20,103.5)*{\halo{1,12}}="24"; (0,138)*{\halo{1}}="25"; {\ar@{->}"3";"5"}; {\ar@{->}"3";"13"}; {\ar@{->}"3";"11"}; {\ar@{->}"3";"9"}; {\ar@*{[|1.5pt]}"3";"7"}; {\ar@{->}"2";"4"}; {\ar@{->}"2";"14"}; {\ar@{->}"2";"12"}; {\ar@{->}"2";"10"}; {\ar@*{[|1.5pt]}"2";"8"}; {\ar@{.>}"1";"5"}; {\ar@{.>}"1";"13"}; {\ar@{.>}"1";"15"}; {\ar@{.>}"1";"14"}; {\ar@{.>}"1";"4"}; {\ar@{.>}"6";"22"}; {\ar@{.>}"6";"20"}; {\ar@{.>}"6";"16"}; {\ar@{.>}"6";"10"}; {\ar@{.>}"6";"12"}; {\ar@{.>}"6";"14"}; {\ar@{.>}"6";"15"}; {\ar@{.>}"6";"13"}; {\ar@{.>}"6";"11"}; {\ar@{.>}"6";"9"}; {\ar@{.>}"6";"17"}; {\ar@{.>}"6";"21"}; {\ar@{.>}"15";"13"}; {\ar@{.>}"15";"14"}; {\ar@{->}"11";"13"}; {\ar@{->}"11";"9"}; {\ar@{->}"12";"10"}; {\ar@{->}"12";"14"}; {\ar@{->}"7";"9"}; {\ar@{->}"8";"10"}; {\ar@*{[|1.5pt]}"19";"7"}; {\ar@{->}"19";"9"}; {\ar@{->}"19";"17"}; {\ar@{->}"19";"21"}; {\ar@*{[|1.5pt]}"19";"23"}; {\ar@*{[|1.5pt]}"18";"8"}; {\ar@{->}"18";"10"}; {\ar@{->}"18";"16"}; {\ar@{->}"18";"20"}; {\ar@*{[|1.5pt]}"18";"24"}; {\ar@{->}"23";"21"}; {\ar@{->}"24";"20"}; {\ar@{->}"17";"9"}; {\ar@{->}"17";"21"}; {\ar@{->}"16";"20"}; {\ar@{->}"16";"10"}; {\ar@*{[|1.5pt]}"25";"23"}; {\ar@{->}"25";"21"}; {\ar@{->}"25";"22"}; {\ar@{->}"25";"20"}; {\ar@*{[|1.5pt]}"25";"24"}; {\ar@{->}"22";"20"}; {\ar@{->}"22";"21"}; {\ar@{->}"5";"13"}; {\ar@{->}"4";"14"}; \endxy \end{center} \caption{Decomposition of 
the poset $\bfP\Sd\Delta[2]$. The dark arrows form the poset $\bfP\Sd\Lambda^1[2]$, while its up-closure $\Out$ consists of all solid arrows. The poset $\Cen$ consists of all the triangles emanating from 012; these triangles all have two dotted sides emanating from 012. The poset $\Comp$ consists of the four triangles at the bottom emanating from 02; these four triangles each have two dotted sides emanating from 02. The geometric realization of all triangles with at least two dotted edges, namely $|N(\Comp \cup \Cen)|$, is topologically deformation retracted onto the solid part of its boundary.} \label{subdivisionfigure} \end{figure} Next we consider $K=\Lambda^k[m]$ in order to describe $\Sd \Lambda^k[m]$ as the nerve of the poset $\bfP\Lambda^k[m]$ of non-degenerate simplices of $\Lambda^k[m]$. The simplicial set $\Lambda^k[m]$ is the smallest simplicial subset of $\Delta[m]$ which contains all non-degenerate simplices of $\Delta[m]$ except the sole $m$-simplex $1_{[m]}$ and the $(m-1)$-face opposite the vertex $\{k\}$. The $n$-simplices of $\Lambda^k[m]$ are \begin{equation} \label{simplicesofhorn} (\Lambda^k[m])_n=\{\xymatrix@1{f:[n] \ar[r] & [m]}| \, \im f \nsupseteq [m]\backslash \{k\} \}. \end{equation} A $q$-simplex $(v_0, \dots, v_q)$ of $\Sd \Delta[m]$ is in $\Sd \Lambda^k[m]$ if and only if each $v_i$ is a face of $\Lambda^k[m]$. More explicitly, $(v_0, \dots, v_q)$ is in $\Sd \Lambda^k[m]$ if and only if $|v_q| \leq m$ and in case of equality $k \in v_q$. This follows from equation \eqref{simplicesofhorn}. Similarly, a $q$-simplex $V$ in $\Sd^2 \Delta[m]$ is in $\Sd^2 \Lambda^k[m]$ if and only if all $v^i_j$ are faces of $\Lambda^k[m]$. This is the case if and only if for all $0 \leq i \leq q$, $|v^i_{r_i}|\leq m$ and in case of equality $k \in v^i_{r_i}$. This, in turn, is the case if and only if $|v^q_{r_q}|\leq m$ and in case of equality $k \in v^q_{r_q}$. See again Figure \ref{subdivisionfigure}. Lastly, we similarly describe $\Sd \partial \Delta[m]$ and $\Sd^2 \partial \Delta[m]$. The simplicial set $\partial \Delta[m]$ is the simplicial subset of $\Delta[m]$ obtained by removing the sole $m$-simplex $1_{[m]}$. A $q$-simplex $(v_0, \dots, v_q)$ of $\Sd \Delta[m]$ is in $\Sd \partial \Delta[m]$ if and only if $v_q\neq \{0,1, \dots, m\}$. A $q$-simplex $V$ of $\Sd^2 \Delta[m]$ is in $\Sd^2 \partial \Delta[m]$ if and only if $v^i_{r_i}\neq\{0,1, \dots, m\}$ for all $0 \leq i \leq q$, which is the case if and only if $v^q_{r_q}\neq\{0,1, \dots, m\}$. See again Figure \ref{subdivisionfigure}. \begin{rmk} \label{gluingofsimplices} Also of interest to us is the way that the non-degenerate $m$-simplices of $\Sd^2 \Delta[m]$ are glued together along their $(m-1)$-subsimplices. In the following, let $V=(V_0, \dots, V_m)$ be a non-degenerate $m$-simplex of $\Sd^2 \Delta[m]$. Each $V_i=(v_0^i, \dots, v^i_{r_i})$ is then a distinct non-degenerate simplex of $\Sd\Delta[m]$. See Figure \ref{subdivisionfigure} for intuition. \begin{enumerate} \item \label{gluingofsimplicesi} Then $r_i=i$, $|V_i|=i+1$, and hence also $v^m_m=\{0,1, \dots, m\}$. \item \label{gluingofsimplicesii} If $v^{m-1}_{m-1} \neq\{0,1, \dots, m\}$, then the $m$-th face $(V_0, \dots, V_{m-1})$ of $V$ is not shared with any other non-degenerate $m$-simplex $V'$ of $\Sd^2 \Delta[m]$.
\\ {\it Proof:} If $v^{m-1}_{m-1} \neq\{0,1, \dots, m\}$, then the $(m-1)$-simplex $(V_0, \dots, V_{m-1})$ lies in $\Sd^2 \partial \Delta[m]$ by the description of $\Sd^2 \partial \Delta[m]$ above, and hence does not lie in any other non-degenerate $m$-simplex $V'$ of $\Sd^2\Delta[m]$. \item \label{gluingofsimplicesiii} If $v^{m-1}_{m-1}=\{0,1, \dots, m\}$, then the $m$-th face $(V_0, \dots, V_{m-1})$ of $V$ is shared with one other non-degenerate $m$-simplex $V'$ of $\Sd^2 \Delta[m]$. \\ {\it Proof:} If $v^{m-1}_{m-1}=\{0,1, \dots, m\}$, then there exists a unique $0\leq i \leq m-1$ with $v_i^{m-1} \backslash v_{i-1}^{m-1}=\{a,a'\}$ with $a \neq a'$ (since the sequence $v_0^{m-1},v_1^{m-1}, \dots, v_{m-1}^{m-1}=\{0,1, \dots, m\}$ is strictly ascending). Here we define $v^{m-1}_{i-1}=\emptyset$ whenever $i=0$. Thus, the $(m-1)$-simplex $(V_0, \dots, V_{m-1})$ is also a face of the non-degenerate $m$-simplex $V'$ where $$V'_\ell=V_\ell \text{\hspace{.5in} for $0 \leq \ell \leq m-1$}$$ $$V'_m=(v_0^{m-1}, \dots, v^{m-1}_{i-1}, v^{m-1}_{i-1} \cup \{a'\}, v^{m-1}_i, \dots , v^{m-1}_{m-1}),$$ where we also have $$V_m=(v_0^{m-1}, \dots, v^{m-1}_{i-1}, v^{m-1}_{i-1} \cup \{a\}, v^{m-1}_i, \dots , v^{m-1}_{m-1}).$$ \item \label{gluingofsimplicesiv} If $0\leq j \leq m-1$, then $V$ shares its $j$-th face $(\dots, \hat{V}_j,\dots, V_m)$ with one other non-degenerate $m$-simplex $V'$ of $\Sd^2 \Delta[m]$. \\ {\it Proof:} Since $|V_i|=i+1$, we have $V_{j+1} \backslash V_{j-1}=\{v,v'\}$ with $v\neq v'$ (we define $V_{j-1}=\emptyset$ whenever $j=0$). Then $(\dots, \hat{V}_j,\dots, V_m)$ is shared by the two non-degenerate $m$-simplices $$V=(V_0, \dots, V_{j-1}, V_{j-1} \cup \{v\}, V_{j+1}, \dots, V_m)$$ $$V'=(V_0, \dots, V_{j-1}, V_{j-1} \cup \{v'\}, V_{j+1}, \dots, V_m)$$ and no others. \end{enumerate} \end{rmk} After this brief discussion of how the non-degenerate $m$-simplices of $\Sd^2\Delta[m]$ are glued together, we turn to some comments about the relationships between the second subdivisions of $\Lambda^k[m]$, $\partial\Delta[m]$, and $\Delta[m]$. Since the counit $\xymatrix@1{cN \ar@{=>}[r] & 1_{\mathbf{Cat}} }$ is a natural isomorphism\footnote{The nerve functor is fully faithful, so the counit is a natural isomorphism by IV.3.1 of \cite{maclaneworking}}, the categories $c\Sd^2\Lambda^k[m]$, $c\Sd^2\partial\Delta[m]$, and $c\Sd^2\Delta[m]$ are respectively the posets $\bfP\Sd\Lambda^k[m]$, $\bfP\Sd\partial\Delta[m]$, and $\bfP\Sd\Delta[m]$ of non-degenerate simplices. Moreover, the induced functors $$\xymatrix@1{c\Sd^2\Lambda^k[m] \ar[r] & c\Sd^2\Delta[m]} \hspace{.5in} \xymatrix@1{c\Sd^2\partial\Delta[m] \ar[r] & c\Sd^2\Delta[m] }$$ are simply the poset inclusions $$\xymatrix@1{\bfP\Sd\Lambda^k[m] \ar[r] & \bfP\Sd\Delta[m]} \hspace{.5in} \xymatrix@1{\bfP\Sd\partial\Delta[m] \ar[r] & \bfP\Sd\Delta[m] }.$$ The down-closure of $\bfP\Sd\Lambda^k[m]$ in $\bfP\Sd\Delta[m]$ is easily described. \begin{prop} \label{Lambdadownclosed} The subposet $\bfP\Sd\Lambda^k[m]$ of $\bfP\Sd\Delta[m]$ is down-closed. \end{prop} \begin{pf} A $q$-simplex $(v_0, \dots, v_q)$ of $\Sd \Delta[m]$ is in $\Sd \Lambda^k[m]$ if and only if $|v_q| \leq m$ and in case of equality $k \in v_q$. If $(v_0, \dots, v_q)$ has this property, then so do all of its subsimplices. \end{pf} The rest of this section is dedicated to a decomposition of $\bfP\Sd\Delta[m]$ into the union of three up-closed subposets: $\Comp$, $\Cen$, and $\Out$. 
This culminates in Proposition \ref{upcloseddecomposition}, and will be used in the construction of the retraction in Section \ref{retractionsection} as well as the transfer proofs in Sections \ref{Thomasonsection} and \ref{Thomasonnfoldsection}. The reader is encouraged to compare with Figure \ref{subdivisionfigure} throughout. We begin by describing these posets. The poset $\Out$ is the up-closure of $\bfP\Sd\Lambda^k[m]$ in $\bfP\Sd\Delta[m]$. Although $\Out$ depends on $k$ and $m$, we omit these letters from the notation for readability. \begin{prop} \label{outer} Let $\Out$ denote the smallest up-closed subposet of $\bfP\Sd\Delta[m]$ which contains $\bfP\Sd\Lambda^k[m]$. \begin{enumerate} \item \label{outeri} The subposet $\Out$ consists of those $(v_0, \dots, v_q)\in \bfP \Sd\Delta[m]$ such that there exists a $(u_0, \dots, u_p) \in \bfP\Sd \Lambda^k[m]$ with $$\{u_0,\dots, u_p\} \subseteq \{v_0,\dots, v_q\}.$$ In particular, $(v_0, \dots, v_q)\in \bfP \Sd\Delta[m]$ is in $\Out$ if and only if some $v_i$ satisfies $|v_i|\leq m$ and in case of equality $k \in v_i$. \item \label{outerii} Define a functor $\xymatrix@1{r\co \Out \ar[r] & \bfP\Sd\Lambda^k[m]}$ by $r(v_0, \dots, v_q):=(u_0, \dots, u_p)$ where $(u_0, \dots, u_p)$ is the maximal subset $$\{u_0,\dots, u_p\} \subseteq \{v_0,\dots, v_q\}$$ that is in $\bfP\Sd \Lambda^k[m]$. Let $\xymatrix@1{\text{\rm inc} \co \bfP\Sd \Lambda^k[m] \ar[r] & \Out}$ be the inclusion. Then $r\circ \text{\rm inc} = 1_{\bfP\Sd \Lambda^k[m]}$ and there is a natural transformation $\xymatrix@1{\alpha \co \text{\rm inc}\circ r \ar@{=>}[r] & 1_{\Out}}$ which is the identity morphism on objects of $\bfP\Sd \Lambda^k[m]$. Consequently, $\vert \bfP\Sd \Lambda^k[m] \vert$ is a deformation retract of $\vert \Out \vert$. See Figure \ref{subdivisionfigure} for a geometric picture. \end{enumerate} \end{prop} \begin{pf} \begin{enumerate} \item An element of $\bfP\Sd\Delta[m]$ is in the up-closure of $\bfP\Sd\Lambda^k[m]$ if and only if it lies above some element of $\bfP\Sd\Lambda^k[m]$, and the order is the face relation as in equation \eqref{facerelation}. For the last part, we use the observation that $(u_0, \dots, u_p) \in \bfP\Sd \Lambda^k[m]$ if and only if $|u_p|\leq m$ and in the case of equality $k \in u_p$, as in the discussion after \eqref{simplicesofhorn}, and also the fact that $(u_j)\leq(u_0, \dots, u_p)$. \item For $(v_0, \dots, v_q) \in \Out$, we define $\alpha(v_0, \dots, v_q)$ to be the unique arrow in $\Out$ from $r(v_0, \dots, v_q)$ to $(v_0, \dots, v_q)$. Naturality diagrams must commute, since $\Out$ is a poset. The rest is clear. \end{enumerate} \end{pf} The following trivial remark will be of use later. \begin{rmk} \label{Deltafreecomposites} Since $\bfP\Sd\Lambda^k[m]$ is down-closed by Proposition \ref{Lambdadownclosed}, any morphism of $\bfP\Sd\Delta[m]$ that ends in $\bfP\Sd\Lambda^k[m]$ must also be contained in $\bfP\Sd\Lambda^k[m]$. Since $\Out$ is the up-closure of the poset $\bfP\Sd\Lambda^k[m]$ in $\bfP\Sd\Delta[m]$, any morphism that begins in $\bfP\Sd\Lambda^k[m]$ ends in $\Out$. \end{rmk} We can similarly characterize the up-closure $\Cen$ of $(\{0,1,\dots,m\})$ in $\bfP\Sd\Delta[m]$. We call a non-degenerate $m$-simplex of $\Sd^2\Delta[m]$ a {\it central $m$-simplex} if it has $(\{0,1, \dots, m\})$ as its $0$-th vertex. 
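For instance, when $m=2$, the central $2$-simplices are precisely the twelve triangles emanating from the barycenter $012$ in Figure \ref{subdivisionfigure}.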
\begin{prop} \label{center} The smallest up-closed subposet $\Cen$ of $\bfP\Sd\Delta[m]$ which contains $(\{0,1,\dots,m\})$ consists of those $(v_0, \dots, v_q)\in \bfP \Sd\Delta[m]$ such that $v_q=\{0,1,\dots,m\}$. The nerve $N\Cen$ consists of all central $m$-simplices of $\Sd^2\Delta[m]$ and all their faces. A $q$-simplex $(V_0,\dots, V_q)$ of $\Sd^2\Delta[m]$ is in $N\Cen$ if and only if $v^i_{r_i}= \{0,1,\dots,m\}$ for all $0\leq i \leq q$. \end{prop} For example, the 2-simplex \begin{equation} \label{central2simplex} \left(\;\; (\{012\}), \;\; (\{01\},\{012\}), \;\; (\{0\},\{01\},\{012\}) \;\; \right) \end{equation} is a central 2-simplex of $\Sd^2 \Delta[2]$ and the 1-simplex \begin{equation} \label{faceofcentral2simplex} \left( \;\; (\{01\},\{012\}), \;\; (\{0\},\{01\},\{012\}) \;\; \right) \end{equation} is in $N\Cen$, as it is a face of the 2-simplex in equation \eqref{central2simplex}. A glance at Figure \ref{subdivisionfigure} makes all of this apparent. \begin{rmk} We need to understand more thoroughly the way the central $m$-simplices are glued together in $N\Cen$. Suppose $V$ is a central $m$-simplex, so that $v^i_i=\{0,1, \dots, m\}$ for all $0 \leq i \leq m$ by Proposition \ref{center}. From the description of $V'$ in Remark \ref{gluingofsimplices} \ref{gluingofsimplicesiii} and \ref{gluingofsimplicesiv}, and also Proposition \ref{center} again, we see for $j=1, \dots, m$ that the neighboring non-degenerate $m$-simplex $V'$ containing the $(m-1)$-face $(V_0, \dots, \hat{V}_j, \dots)$ of $V$ is also central. The face $(V_1, \dots, V_m)$ of $V$ opposite $V_0=(\{0,1,\dots,m\})$ is not shared with any other central $m$-simplex, since every central $m$-simplex has $\{0, \dots, m\}$ as its $0$-th vertex. Thus, each central $m$-simplex $V$ shares exactly $m$ of its $(m-1)$-faces with other central $m$-simplices. A glance at Figure \ref{subdivisionfigure} shows that the central simplices fit together to form a $2$-ball. More generally, the central $m$-simplices of $\Sd^2\Delta[m]$ fit together to form an $m$-ball with center vertex $\{0, \dots, m\}$. \end{rmk} There is still one last piece of $\bfP\Sd\Delta[m]$ that we discuss, namely $\Comp$. \begin{prop} \label{upclosedcomp} Let $0 \leq k \leq m$. The smallest up-closed subposet $\Comp$ of $\bfP\Sd\Delta[m]$ that contains the object $(\{0,1, \dots, \hat{k}, \dots, m\})$ consists of those $(v_0, \dots, v_q)\in \bfP\Sd\Delta[m]$ with $$\{0,1, \dots, \hat{k}, \dots, m\}\in \{v_0, \dots, v_q\}.$$ \end{prop} We describe how the non-degenerate $m$-simplices of $N\Comp$ are glued together in terms of collections $C^\ell$ of non-degenerate $m$-simplices. A non-degenerate $m$-simplex $V \in N_m\bfP\Sd\Delta[m]$ is in $N_m\Comp$ if and only if each $V_0, \dots, V_m$ is in $\Comp$, and this is the case if and only if $V_0=(\{0, \dots, \hat{k}, \dots, m\})$ (recall $|V_i|=i+1$ and Proposition \ref{upclosedcomp}). For $1 \leq \ell \leq m$, we let $C^\ell$ denote the set of those non-degenerate $m$-simplices $V$ in $N_m\Comp$ which have their first $\ell$ vertices $V_0, \dots, V_{\ell-1}$ on the $k$-th face of $|\Delta[m]|$. A non-degenerate $m$-simplex $V \in N_m \Comp$ is in $C^\ell$ if and only if $v^i_i=\{0, \dots, \hat{k}, \dots, m\}$ for all $0\leq i \leq \ell-1$ and $v^i_i=\{0, \dots, m\}$ for all $\ell \leq i \leq m$. \begin{prop} \label{compgluing} Let $V \in C^\ell$. Then the $j$-th face of $V$ is shared with some other $V' \in C^\ell$ if and only if $j \neq 0, \ell -1, \ell$.
\end{prop} \begin{pf} By Remark \ref{gluingofsimplices} we know exactly which other non-degenerate $m$-simplex $V'$ shares the $j$-th face of $V$. So, for each $\ell$ and $j$ we only need to check whether or not $V'$ is in $C^\ell$. Let $V \in C^\ell$. {\bf Cases} $1 \leq \ell \leq m$ and $j=0$. \\ For all $U \in C^\ell$, we have $U_0=(\{0, \dots, \hat{k}, \dots, m\})=V_0,$ so we conclude from the description of $V'$ in Remark \ref{gluingofsimplices} \ref{gluingofsimplicesiv} that $V'$ is not in $C^\ell$. {\bf Case} $\ell=m$ and $j=m-1$. \\ In this case, $v^{m-1}_{m-1}=\{0, \dots, \hat{k}, \dots, m\}$ and $v^m_m=\{0,1,\dots,m\}$. By Remark \ref{gluingofsimplices} \ref{gluingofsimplicesiv}, the $(m-1)$-st face of $V$ is shared with the $V'$ which agrees with $V$ everywhere except in $V_{m-1}$, where we have $(v')^{m-1}_{m-1}=\{0,\dots,m\}$ instead of $v^{m-1}_{m-1}=\{0,\dots,\hat{k},\dots,m\}$. But this $V'$ is not an element of $C^m$. {\bf Case} $\ell=m$ and $j=m$. \\ In this case, $v^{m-1}_{m-1}=\{0, \dots, \hat{k}, \dots, m\}\neq\{0,1, \dots, m\}$, so we are in the situation of Remark \ref{gluingofsimplices} \ref{gluingofsimplicesii}. The $m$-th face $(V_0, \dots, V_{m-1})$ does not lie in any other non-degenerate $m$-simplex $V'$, let alone in a $V'$ in $C^m$. {\bf Case} $\ell=m$ and $j\neq0,m-1,m$. \\ By Remark \ref{gluingofsimplices} \ref{gluingofsimplicesiv}, the $j$-th face is shared with the $V'$ that agrees with $V$ in $V_0$, $V_{m-1}$, and $V_m$, so that $V' \in C^m$. At this point we conclude from the above cases that if $\ell=m$, the $j$-th face of $V\in C^m$ is shared with another $V'\in C^m$ if and only if $j\neq 0,m-1,m$. {\bf Cases} $1 \leq \ell \leq m-1$ and $j=\ell-1$. \\ The $(\ell-1)$-st face of $V$ is shared with that $V'$ which agrees with $V$ everywhere except in $V_{\ell-1}$, where we have $(v')^{\ell-1}_{\ell-1}=\{0,\dots,m\}$ instead of $v^{\ell-1}_{\ell-1}=\{0, \dots, \hat{k},\dots,m\}$. Hence $V'$ is not in $C^\ell$. {\bf Cases} $1 \leq \ell \leq m-1$ and $j=\ell$. \\ Similarly, the $\ell$-th face of $V$ is shared with that $V'$ which agrees with $V$ everywhere except in $V_\ell$, where we have $(v')^{\ell}_{\ell}=\{0,\dots,\hat{k},\dots,m\}$ instead of $v^{\ell}_{\ell}=\{0,\dots,m\}$. Hence $V'$ is not in $C^\ell$. {\bf Cases} $1 \leq \ell \leq m-1$ and $j\neq0,\ell-1,\ell$. \\ Then the $j$-th face is shared with a $V'$ that agrees with $V$ in $V_0$, $V_{\ell-1}$, and $V_\ell$, so that $V' \in C^\ell$. We conclude that the $j$-th face of $V\in C^\ell$ is shared with some other $V'\in C^\ell$ if and only if $j\neq 0,\ell-1,\ell$. \end{pf} \begin{prop} \label{upcloseddecomposition} Let $0 \leq k \leq m$. Recall that $\Comp$, $\Cen$, and $\Out$ denote the up-closure in $\bfP\Sd\Delta[m]$ of $(\{0,1, \dots, \hat{k},\dots,m\})$, $(\{0,1,\dots,m\})$, and $\bfP \Sd \Lambda^k[m]$ respectively. Then the poset $\bfP\Sd\Delta[m]=c\Sd^2\Delta[m]$ is the union of these three up-closed subposets: $$\bfP\Sd\Delta[m]=\Comp\cup\Cen\cup\Out.$$ The partial order on $\bfP\Sd\Delta[m]$ is given in (\ref{OrderOnDoubleSubdivision}). \end{prop} \section{Deformation Retraction of $|N(\Comp \cup \Cen)|$}\label{retractionsection} In this section we construct a retraction of $|N(\Comp \cup \Cen)|$ to that part of its boundary which lies in $\Out$. As stated in Proposition \ref{deformationretract}, each stage of the retraction is part of a deformation retraction, and is thus a homotopy equivalence. The retraction is done in such a way that we can adapt it later to the $n$-fold case.
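To fix ideas before the general argument, consider $m=2$ and $k=1$ in Figure \ref{subdivisionfigure}: $N\Comp$ consists of the four bottom triangles emanating from $02$. The collection $C^2$ consists of the two outer triangles, each of which has an entire edge on the face of $\Delta[2]$ opposite the vertex $1$, while $C^1$ consists of the two inner triangles containing the vertex $02,012$.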
We first treat the retraction of $|N\Comp|$ in detail. \begin{prop} \label{deformationretract1} Let $C^m, C^{m-1}, \dots, C^1$ be the collections of non-degenerate $m$-simplices of $N\Comp$ defined in Section \ref{barycentric}. Then there is an $m$-stage retraction of $|N\Comp|$ onto $|N(\Comp\cap(\Cen\cup\Out))|$ which retracts the individual simplices of $C^m, C^{m-1}, \dots, C^1$ to subcomplexes of their boundaries. Further, each retraction of each simplex is part of a deformation retraction. \end{prop} \begin{pf} As an illustration, we first prove the case $m=1$ and $k=0$. The poset $\bfP\Sd\Delta[1]$ is $$\xymatrix{\mathbf{(\{0\})} \ar[r] & (\{0\},\{01\}) & (\{01\}) \ar@{.>}[l] \ar@{.>}[r] & (\{1\},\{01\}) & (\{1\}) \ar@{.>}[l]_-f}$$ and $\bfP\Sd\Lambda^0[1]$ consists only of the object $(\{0\})$. Of the nontrivial morphisms in $\bfP\Sd\Delta[1]$, the only one in $\Out$ is the solid one on the far left. The poset $\Cen$ consists of the two middle morphisms, emanating from $(\{01\})$. The only morphism in $\Comp$ is the one labelled $f$. The intersection $\Comp\cap(\Cen\cup\Out)$ is the vertex $(\{1\},\{01\})$, which is the target of $f$. Clearly, after geometrically realizing, the interval $|f|$ can be deformation retracted to the vertex $(\{1\},\{01\})$. The case $m=1$ with $k=1$ is exactly the same. In fact, $k$ does not matter, since the simplices no longer have a direction after geometric realization. The case $m=2$ and $k=1$ can be similarly observed in Figure \ref{subdivisionfigure}. For general $m \in \mathbb{N}$, we construct a {\it topological} retraction in $m$ steps, starting with Step 0. In Step 0 we retract those non-degenerate $m$-simplices of $N_m\Comp$ which have an entire $(m-1)$-face on the $k$-th face of $\Delta[m]$, \ie in Step 0 we retract the elements of $C^m$. Generally, in Step $\ell$ we retract those non-degenerate $m$-simplices of $N_m\Comp$ which have exactly $\ell$ vertices on the $k$-th face of $\Delta[m]$, \ie in Step $\ell$ we retract the elements of $C^{m-\ell}$. We describe Step $m-\ell$ in detail for $2 \leq \ell \leq m$. We retract each $V \in C^\ell$ to $$(V_0, \dots, \hat{V}_{\ell-1},V_\ell, \dots) \cup (V_1, \dots, V_m)$$ in such a way that for each $j\neq 0, \ell-1, \ell$ the $j$-th face $$(V_0, \dots, \hat{V}_j, \dots,V_{\ell-1},V_\ell, \dots)$$ is retracted {\it within itself} to its subcomplex $$(V_0, \dots, \hat{V}_j, \dots, \hat{V}_{\ell-1},V_\ell, \dots) \cup(\hat{V}_0, \dots, \hat{V}_j, \dots, V_{\ell-1},V_\ell, \dots).$$ We can do this to all $V\in C^\ell$ {\it simultaneously} because the prescription agrees on the overlaps: $V$ shares the face $(V_0, \dots, \hat{V}_j, \dots, V_{\ell-1},V_\ell, \dots)$ with only one other non-degenerate $m$-simplex $V'\in C^\ell$, and $V'$ differs from $V$ only in $V_j'$ by Proposition \ref{compgluing}. This procedure is done for Step 0 up to and including Step $m-2$. After Step $m-2$, the only remaining non-degenerate $m$-simplices in $N_m\Comp$ are those which have only the first vertex (\ie only $V_0$) on the $k$-th face of $\Delta[m]$. This is the set $C^1$. Every $V\in C^1$ has $$V_0=(\{0, \dots, \hat{k}, \dots, m\})$$ $$V_1=(\{0, \dots, \hat{k}, \dots, m\}, \{0, \dots, m \}),$$ so all $V\in C^1$ intersect in this edge. In Step $m-1$, we retract each $V\in C^1$ to $(V_1, \dots, V_m)$ in such a way that for $j\neq 0,1$ we retract the $j$-th face of $V$ to $(V_1, \dots, \hat{V}_j, \dots)$, and further we retract the 1-simplex $(V_0,V_1)$ to the vertex $V_1$.
We can do this simultaneously to all $V \in C^1$, as the procedure agrees in overlaps by Proposition \ref{compgluing}, and the observation about $(V_0,V_1)$ we made above. For each $V\in C^1$, the 0th face $(V_1, \dots, V_m)$ is also the 0th face of a non-degenerate $m$-simplex $U$ not in $N_m\Comp$, namely $$U_0=(\{0, \dots, m\})$$ $$U_j=V_j \text{ for } j \geq 1$$ by Remark \ref{gluingofsimplices} \ref{gluingofsimplicesiv}. The simplex $U$ is even central. Thus, $(V_1, \dots, V_m)$ is in the intersection $|N(\Comp\cap(\Cen\cup\Out))|$ and we have succeeded in retracting $|N\Comp|$ to $|N(\Comp\cap(\Cen\cup\Out))|$ in such a way that each non-degenerate $m$-simplex is retracted within itself. Further, each retraction is part of a deformation retraction. \end{pf} \begin{prop} \label{deformationretract2} There is a multi-stage retraction of $|N\Cen|$ onto $|N(\Cen\cap\Out)|$ which retracts each non-degenerate $m$-simplex to a subcomplex of its boundary. Further, this retraction is part of a deformation retraction. \end{prop} \begin{pf} We describe how this works for the case $m=2$ pictured in Figure \ref{subdivisionfigure}. The poset $\Cen$ consists of all the central triangles emanating from 012. These have two dotted sides emanating from 012. The intersection $\Cen\cap\Out$ consists of the indicated solid lines on those triangles and their vertices (the two triangles at the bottom have no solid lines). To topologically deformation retract $|N\Cen|$ onto $|N(\Cen\cap\Out)|$, we first deformation retract the vertical, downward pointing edge 012 - 02,012 by pulling the vertex 02,012 up to 012 while at the same time deforming the left bottom triangle to the edge 012 - 0,02,012 and the right bottom triangle to the edge 012 - 2,02,012. Then we consecutively deform each of the left triangles emanating from 012 to its solid edge and the edge of the next one, holding the vertex 012 fixed. We deform the left triangles in this manner all the way until we reach the vertically pointing edge 012 - 1,012. Similarly, we consecutively deform each of the right triangles emanating from 012 to its solid edge and the edge of the next one, holding the vertex 012 fixed. We deform the right triangles in this manner all the way until we reach the vertically pointing edge 012 - 1,012. Finally, we deformation retract the last remaining edge 012 - 1,012 up to the vertex 1, 012, and we are finished. It is possible to describe this in arbitrary dimensions, although it gets rather technical, as we have already seen in Proposition \ref{deformationretract1}. \end{pf} \begin{prop} \label{deformationretract} There is a multi-stage retraction of $|N(\Comp \cup \Cen)|$ to $|N((\Comp \cup \Cen)\cap\Out)|$ which retracts each non-degenerate $m$-simplex to a subcomplex of its boundary. Further, each retraction of each simplex is part of a deformation retraction. See Figure \ref{subdivisionfigure}. \end{prop} \begin{pf} This follows from Proposition \ref{deformationretract1} and Proposition \ref{deformationretract2}. \end{pf} \section{Nerve, Pushouts, and Colimit Decompositions of Subposets of $\bfP\Sd\Delta[m]$}\label{pushoutsection} In this section we prove that the nerve is compatible with certain colimits and express posets satisfying a chain condition as a colimit of two finite ordinals, in a way compatible with nerve.
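In general, however, the nerve does not commute with pushouts in $\mathbf{Cat}$: gluing the target of one arrow to the source of another creates a free composite which the pushout of the nerves cannot see. For instance (a standard example, recorded here only for orientation), identifying the target of $[1]$ with the source of a second copy of $[1]$ along $[0]$ yields the free category on two composable arrows, which is the ordinal $[2]$. Hence $N([1] \coprod_{[0]} [1]) \cong \Delta[2]$, whereas $N[1] \coprod_{N[0]} N[1]=\Delta[1] \coprod_{\Delta[0]} \Delta[1]$ has no non-degenerate 2-simplex. The up-closure hypotheses of Proposition \ref{nervecommuteswithpushout} below rule out precisely such free composites.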
The somewhat technical results of this section are crucial for the verification of the pushout axiom in the proof of the Thomason model structure on $\mathbf{Cat}$ and $\mathbf{nFoldCat}$ in Sections \ref{Thomasonsection} and \ref{Thomasonnfoldsection}. The results of this section will have $n$-fold versions in Section \ref{sectionnfolddecompositions}. We begin by proving that the nerve preserves certain pushouts in Proposition \ref{nervecommuteswithpushout}. The question of commutation of nerve with certain pushouts is an old one, and has been studied in Section 5 of \cite{fritschlatch2}. The next task is to express posets satisfying a chain condition as a colimit of two finite ordinals $[m-1]$ and $[m]$ in Proposition \ref{colimitdecomposition}, and similarly express their nerves as a colimit of $\Delta[m-1]$ and $\Delta[m]$ in Proposition \ref{simplicial_colimitdecomposition}. As a consequence, the nerve functor preserves these colimits in Corollary \ref{nervecommuteswithcolimitdecomposition}. The combinatorial proof that our posets of interest, namely $\bfP\Sd \Delta[m]$, $\Cen$, $\Out$, $\Comp$, $\Comp \cup \Cen$, $\bfP\Sd\Lambda^k[m]$, and $\Out \cap (\Comp \cup \Cen)$, satisfy the chain conditions is found in Remark \ref{remarkonpaths} and Proposition \ref{satisfyhypothesis}. Corollary \ref{cor:specific_colimit_decompositions} summarizes the nerve commutation for the decompositions of the posets of interest. \begin{prop} \label{nervecommuteswithpushout} Suppose $\bfQ$, $\bfR$, and $\bfS$ are categories, and $\bfS$ is a full subcategory of $\bfQ$ and $\bfR$ such that \begin{enumerate} \item \label{nervecommuteswithpushouti} If $\xymatrix@1{f:x \ar[r] & y }$ is a morphism in $\bfQ$ and $x \in \bfS$, then $y \in \bfS$, \item \label{nervecommuteswithpushoutii} If $\xymatrix@1{f:x \ar[r] & y }$ is a morphism in $\bfR$ and $x \in \bfS$, then $y \in \bfS$. \end{enumerate} Then the nerve of the pushout is the pushout of the nerves. \begin{equation} N(\bfQ \coprod_\bfS \bfR) \cong N\bfQ \coprod_{N\bfS} N\bfR \end{equation} \end{prop} \begin{pf} First we claim that there are no free composites in $\bfQ \coprod_\bfS \bfR$. Suppose $f$ is a morphism in $\bfQ$ and $g$ is a morphism in $\bfR$ and that these are composable in the pushout $\bfQ \coprod_\bfS \bfR$. $$\xymatrix{w \ar[r]^f & x \ar[r]^g & y }$$ Then $x \in \Obj \bfQ \cap \Obj \bfR = \Obj \bfS$, so $y \in \bfS$ by hypothesis \ref{nervecommuteswithpushoutii}. Since $\bfS$ is full, $g$ is a morphism of $\bfS$. Then $g \circ f$ is a morphism in $\bfQ$ and is not free. The other case, $f$ in $\bfR$ and $g$ in $\bfQ$, is exactly the same. Thus the pushout $\bfQ \coprod_\bfS \bfR$ has no free composites. Let $(f_1, \dots, f_p)$ be a $p$-simplex in $N(\bfQ \coprod_\bfS \bfR)$. Then each $f_j$ is a morphism in $\bfQ$ or $\bfR$, as there are no free composites. Further, by repeated application of the argument above, if $f_1$ is in $\bfQ$ then every $f_j$ is in $\bfQ$. Similarly, if $f_1$ is in $\bfR$ then every $f_j$ is in $\bfR$. Thus we have a morphism $\xymatrix{N(\bfQ \coprod_\bfS \bfR) \ar[r] & N\bfQ \coprod_{N\bfS} N\bfR}$. Its inverse is the canonical morphism $\xymatrix{N\bfQ \coprod_{N\bfS} N\bfR \ar[r] & N(\bfQ \coprod_\bfS \bfR) }$. \end{pf} \begin{prop} \label{nervecommuteswithpushouthypothesisverification} The full subcategory $(\Comp \cup \Cen)\cap\Out$ of the categories $\Comp \cup \Cen$ and $\Out$ satisfies \ref{nervecommuteswithpushouti} and \ref{nervecommuteswithpushoutii} of Proposition \ref{nervecommuteswithpushout}.
\end{prop} \begin{pf} Since $\Comp$ and $\Cen$ are up-closed, the union $\Comp \cup \Cen$ is up-closed, as is its intersection with the up-closed poset $\Out$. Hence conditions \ref{nervecommuteswithpushouti} and \ref{nervecommuteswithpushoutii} of Proposition \ref{nervecommuteswithpushout} follow. \end{pf} \begin{prop} \label{colimitdecomposition} Let $\bfT$ be a poset and $m \geq 1$ a positive integer such that the following hold. \begin{enumerate} \item \label{colimitdecompositioni} Any linearly ordered subposet $U=\{U_0 < U_1 < \cdots < U_p \}$ of $\bfT$ with $|U|\leq m+1$ is contained in a linearly ordered subposet $V$ of $\bfT$ with $m+1$ distinct elements. \item \label{colimitdecompositionii} Suppose $x$ and $y$ are in $\bfT$ and $x \leq y$. If $V$ and $V'$ are linearly ordered subposets of $\mathbf{T}$ with exactly $m+1$ elements, and both $V$ and $V'$ contain $x$ and $y$, then there exist linearly ordered subposets $W^0,W^1, \dots, W^k$ of $\bfT$ such that \begin{enumerate} \item $W^0=V$ \item $W^k=V'$ \item For all $0 \leq j \leq k$, the linearly ordered poset $W^j$ has exactly $m+1$ elements \item For all $0 \leq j \leq k$, we have $x \in W^j$ and $y \in W^j$ \item For all $0 \leq j \leq k-1$, the poset $W^j \cap W^{j+1}$ has $m$ elements. \end{enumerate} \item \label{colimitdecompositioniii} If $m=1$, we further assume that there are no linearly ordered subposets with 3 or more elements, that is, there are no nontrivial composites $x < y < z$. Whenever $m=1$, hypothesis \ref{colimitdecompositionii} is vacuous. \end{enumerate} Let $\bfJ$ denote the poset of linearly ordered subposets $U$ of $\bfT$ with exactly $m$ or $m+1$ elements. Then $\bfT$ is the colimit of the functor $$\xymatrix{F:\bfJ \ar[r] & \mathbf{Cat}}$$ $$\xymatrix{U \ar@{|->}[r] & U.}$$ The components of the universal cocone $\xymatrix@1{\pi:F \ar@{=>}[r] & \Delta_\bfT}$ are the inclusions $\xymatrix@1{F(U) \ar[r] & \bfT}$. \end{prop} \begin{pf} Suppose $\bfS\in\mathbf{Cat}$ and $\xymatrix@1{\alpha:F \ar@{=>}[r] & \Delta_\bfS}$ is a natural transformation. We define a functor $\xymatrix@1{G:\bfT \ar[r] & \bfS}$ as follows. Let $x$ and $y$ be elements of $\bfT$ and suppose $x \leq y$. By hypothesis \ref{colimitdecompositioni}, there is a linearly ordered subposet $V$ of $\bfT$ which contains $x$ and $y$ and has exactly $m+1$ elements. We define $G(x\leq y):=\alpha_V(x\leq y)$. We claim $G$ is well defined. If $V'$ is another linearly ordered subposet of $\mathbf{T}$ which contains $x$ and $y$ and has exactly $m+1$ elements, then we have a sequence $W^0, \dots, W^k$ as in hypothesis \ref{colimitdecompositionii}, and the naturality diagrams below. $$\xymatrix@C=5pc{W^i \ar[r]^-{\alpha_{W^i}} & \bfS \ar@{=}[d] \\ W^i \cap W^{i+1} \ar[r]^-{\alpha_{W^i \cap W^{i+1}}} \ar[u] \ar[d] & \bfS \ar@{=}[d] \\ W^{i+1} \ar[r]_-{\alpha_{W^{i+1}}} & \bfS }$$ Thus we have a string of equalities $$\alpha_{W^0}(x\leq y)=\alpha_{W^1}(x\leq y)=\cdots =\alpha_{W^k}(x\leq y),$$ and we conclude $\alpha_V(x\leq y)=\alpha_{V'}(x\leq y)$ so that $G(x\leq y)$ is well defined. The assignment $G$ is a functor, as follows. It preserves identities because each $\alpha_V$ does. If $m=1$, then there are no nontrivial composites by hypothesis \ref{colimitdecompositioniii}, so $G$ vacuously preserves all compositions. If $m \geq 2$ and the elements $x < y < z$ are in $\mathbf{T}$, then there exists a $V$ containing all three of $x$, $y$, and $z$. The functor $\alpha_V$ preserves this composition, so $G$ does also.
By construction, for each linearly ordered subposet $V$ of $\bfT$ with $m+1$ elements we have $\alpha_V=G\circ \pi_V$. Further, $G$ is the unique such functor, since such posets $V$ cover $\bfT$ by hypothesis \ref{colimitdecompositioni}. Lastly we claim that $\alpha_U=G \circ \pi_U$ for any linearly ordered subposet $U$ of $\bfT$ with $m$ elements. By hypothesis \ref{colimitdecompositioni} there exists a linearly ordered subposet $V$ of $\bfT$ with $m+1$ elements such that $U\subseteq V$. If $i$ denotes the inclusion of $U$ into $V$, by naturality of $\alpha$ and $\pi$ we have $$\alpha_U=\alpha_V \circ i = G\circ \pi_V \circ i =G \circ \pi_U.$$ \end{pf} \begin{prop} \label{simplicial_colimitdecomposition} Let $\bfT$ be a poset and $m \geq 1$ a positive integer such that the following hold. \begin{enumerate} \item \label{simplicial_colimitdecompositioni} Any linearly ordered subposet $U=\{U_0 < U_1 < \cdots < U_p \}$ of $\bfT$ is contained in a linearly ordered subposet $V$ of $\bfT$ with $m+1$ distinct elements, in particular, any linearly ordered subposet of $\bfT$ has at most $m+1$ elements. \item \label{simplicial_colimitdecompositionii} Suppose $x_0<x_1<\cdots<x_{\ell}$ are in $\bfT$ and $\ell \leq m$. If $V$ and $V'$ are linearly ordered subposets of $\mathbf{T}$ with exactly $m+1$ elements, and both $V$ and $V'$ contain $x_0<x_1<\cdots<x_{\ell}$, then there exist linearly ordered subposets $W^0,W^1, \dots, W^k$ of $\bfT$ such that \begin{enumerate} \item $W^0=V$ \item $W^k=V'$ \item For all $0 \leq j \leq k$, the linearly ordered poset $W^j$ has exactly $m+1$ elements \item For all $0 \leq j \leq k$, the elements $x_0<x_1<\cdots<x_{\ell}$ are all in $W^j$ \item For all $0 \leq j \leq k-1$, the poset $W^j \cap W^{j+1}$ has exactly $m$ distinct elements. \end{enumerate} \end{enumerate} As in Proposition \ref{colimitdecomposition}, let $\bfJ$ denote the poset of linearly ordered subposets $U$ of $\bfT$ with exactly $m$ or $m+1$ elements, let $F$ be the functor $$\xymatrix{F:\bfJ \ar[r] & \mathbf{Cat}}$$ $$\xymatrix{U \ar@{|->}[r] & U,}$$ and $\pi$ the universal cocone $\xymatrix@1{\pi:F \ar@{=>}[r] & \Delta_\bfT}.$ The components of $\pi$ are the inclusions $\xymatrix@1{F(U) \ar[r] & \bfT}$. Then $N\bfT$ is the colimit of the functor $$\xymatrix{NF:\bfJ \ar[r] & \mathbf{SSet}}$$ $$\xymatrix{U \ar@{|->}[r] & NFU}$$ and $\xymatrix@1{N\pi:NF \ar@{=>}[r] & \Delta_{N\bfT}}$ is its universal cocone. \end{prop} \begin{pf} The principle of the proof is similar to the direct proof of Proposition \ref{colimitdecomposition}. Suppose $S\in\mathbf{SSet}$ and $\xymatrix@1{\alpha:NF \ar@{=>}[r] & \Delta_S}$ is a natural transformation. We induce a morphism of simplicial sets $\xymatrix@1{G:N\bfT \ar[r] & S}$ by defining $G$ on the $m$-skeleton as follows. Let $\Delta_{m}$ denote the full subcategory of $\Delta$ on the objects $[0], [1], \dots, [m]$ and let $\xymatrix@1{\text{tr}_{m} \co \mathbf{SSet} \ar[r] & \mathbf{Set}^{\Delta_{m}^\text{op}}}$ denote the $m$-th truncation functor. The truncation $\text{tr}_{m} N\bfT$ is a union of the truncated simplicial subsets $\text{tr}_{m}NV$ for $V \in \bfJ$ with $\vert V \vert=m+1$, since $\bfT$ is a union of such $V$. We define $$\xymatrix{G_{m}\vert_{\text{tr}_{m}NV} \co \text{tr}_{m}NV \ar[r] & \text{tr}_{m}S}$$ simply as $\text{tr}_{m}\alpha_{V}$. 
The morphism $G_m$ is well-defined, for if $0\leq\ell \leq m$ and $x \in (\text{tr}_{m}NV)_\ell$ and $x \in (\text{tr}_{m}NV')_\ell$ with $\vert V \vert=m+1=\vert V' \vert$, then $V$ and $V'$ can be connected by a sequence $W^0,W^1, \dots, W^k$ of $(m+1)$-element linearly ordered subsets of $\bfT$ that all contain the linearly ordered subposet $x$ and satisfy the properties in hypothesis \ref{simplicial_colimitdecompositionii}. By a naturality argument as in the proof of Proposition \ref{colimitdecomposition}, we have a string of equalities $$\alpha_{W^0}(x)=\alpha_{W^1}(x)=\cdots =\alpha_{W^k}(x),$$ and we conclude $\alpha_{V}(x)=\alpha_{V'}(x)$ so that $G_m(x)$ is well defined. By definition $\Delta_{G_m} \circ \text{tr}_m N\pi=\text{tr}_m \alpha$. We may extend this to non-truncated simplicial sets using the following observation: if $\bfC$ is a category in which composable chains of morphisms have at most $m$ morphisms, and $\text{sk}_m$ is the left adjoint to $\text{tr}_m$, then the counit inclusion $$\xymatrix@1{\text{sk}_m\text{tr}_m(N\bfC) \ar[r] & N \bfC}$$ is the identity. Thus $G_m$ extends to $\xymatrix@1{G\co N\bfT \ar[r] & S}$ and $\Delta_{G} \circ N\pi=\alpha$. Lastly, the morphism $G$ is unique, since the simplicial subsets $NV$ for $\vert V \vert=m+1$ cover $N\bfT$ by hypothesis \ref{simplicial_colimitdecompositioni}. \end{pf} \begin{cor} \label{nervecommuteswithcolimitdecomposition} Under the hypotheses of Proposition \ref{simplicial_colimitdecomposition}, the nerve functor commutes with the colimit of $F$. \end{cor} Since $\Sd^2 \Delta[m]$ geometrically realizes to a {\it connected} simplicial complex that is a union of non-degenerate $m$-simplices, it is clear that we can move from any non-degenerate $m$-simplex $V$ of $\Sd^2 \Delta[m]$ to any other $V'$ by a chain of non-degenerate $m$-simplices in which consecutive ones share an $(m-1)$-subsimplex. However, if $x$ and $y$ are two vertices contained in both $V$ and $V'$, it is not clear that a chain can be chosen from $V$ to $V'$ in which all non-degenerate $m$-simplices contain both $x$ and $y$. The following extended remark explains how to choose such a chain. \begin{rmk} \label{remarkonpaths} Our next task is to prepare for the proof of Proposition \ref{satisfyhypothesis}, which says that the posets $\bfP\Sd \Delta[m]$, $\Cen$, $\Out$, $\Comp$, and $\Comp \cup \Cen$ satisfy the hypotheses of Proposition \ref{simplicial_colimitdecomposition} for $m$, and the posets $\bfP\Sd\Lambda^k[m]$ and $\Out \cap (\Comp \cup \Cen)$ satisfy the hypotheses of Proposition \ref{simplicial_colimitdecomposition} for $m-1$. Building on Remark \ref{gluingofsimplices}, we describe a way of moving from a non-degenerate $m$-simplex $V$ of $\Sd^2 \Delta[m]$ to another non-degenerate $m$-simplex $V'$ of $\Sd^2 \Delta[m]$ via a chain of non-degenerate $m$-simplices, in which consecutive $m$-simplices overlap in an $(m-1)$-simplex, and each non-degenerate $m$-simplex in the chain contains specified vertices $x_0<x_1<\cdots<x_{\ell}$ contained in both $V$ and $V'$. Observe that the respective elements $x_0, x_1, \dots, x_\ell$ are in the same respective positions in $V$ and $V'$, for if they were in different respective positions, we would arrive at a linearly ordered subposet of length greater than $m+1$, a contradiction. We first prove the analogous statement about moving from $V$ to $V'$ for $\Sd\Delta[m]$. The non-degenerate $m$-simplices of $\Sd\Delta[m]$ are in bijective correspondence with the permutations of $\{0,1, \dots, m\}$.
Namely, the simplex $v=(v_0, \dots, v_m)$ corresponds to $a_0, \dots, a_m$ where $a_i=v_i \backslash v_{i-1}$ (with the convention $v_{-1}=\emptyset$, so that $a_0=v_0$). For example, $(\{1\},\{1,2\}, \{0,1,2\})$ corresponds to $1,2,0$. Swapping $a_i$ and $a_{i+1}$ gives rise to a non-degenerate $m$-simplex $w$ which shares an $(m-1)$-subsimplex with $v$, that is, $v$ and $w$ differ only in the $i$-th spot: $v_i \neq w_i$. Since transpositions generate the symmetric group, we can move from any non-degenerate $m$-simplex of $\Sd\Delta[m]$ to any other by a sequence of moves in which we only change one vertex at a time. Suppose $v$ and $v'$ are the same at spots $s_0 < s_1 < \cdots< s_\ell$, that is, $v_{s_i}=v_{s_i}'$ for $0\leq i \leq \ell$. Then, using transpositions, we can traverse from $v$ to $v'$ through a chain $w^1, \dots, w^k$ of non-degenerate $m$-simplices of $\Sd\Delta[m]$, each of which is equal to $v_{s_0}, v_{s_1}, \dots, v_{s_\ell}$ in spots $s_0, s_1, \dots, s_\ell$. Indeed, this corresponds to the embedding of symmetric groups \begin{equation*} \Sym(v_{s_0})\times \left( \prod_{i=1}^\ell \Sym(v_{s_i} \backslash v_{s_{i-1}}) \right) \times \Sym(\{0,\dots,m\}\backslash v_{s_\ell}) \xymatrix@1{ \ar[r] & } \Sym(\{0,\dots,m\}) \end{equation*} and generation by the relevant transpositions. Similar, but more involved, arguments allow us to navigate the non-degenerate $m$-simplices of $\Sd^2\Delta[m]$. For a {\it fixed} non-degenerate $m$-simplex $V_m=(v_0^{m},\dots,v_m^m)$ of $\Sd\Delta[m]$, the non-degenerate $m$-simplices $V=(V_0, \dots, V_m)$ of $\Sd^2\Delta[m]$ ending in the {\it fixed} $V_m$ correspond to permutations $A_0, \dots, A_m$ of the vertices of $V_m$. For example, the 2-simplex in \eqref{Vexample} corresponds to the permutation $$\{01\},\;\; \{0\},\;\; \{012\}.$$ Again, arguing by transpositions, we can move from any non-degenerate $m$-simplex of $\Sd^2\Delta[m]$ ending in $V_m$ to any other ending in $V_m$ by a sequence of moves in which we only change one vertex at a time, and at every step, we preserve the specified vertices $x_0<x_1<\cdots<x_{\ell}$. Holding $V_m$ fixed corresponds to moving (in $\Sd^2\Delta[m]$) within the subdivision of one of the non-degenerate $m$-simplices of $\Sd\Delta[m]$ (the subdivision is isomorphic to $\Sd\Delta[m]$, the case treated above). See for example Figure \ref{subdivisionfigure} for a convincing picture. But how do we move between non-degenerate $m$-simplices that do not agree in the $m$-th spot, in other words, how do we move from non-degenerate $m$-simplices of one subdivided non-degenerate $m$-simplex of $\Sd\Delta[m]$ to non-degenerate $m$-simplices in another subdivided non-degenerate $m$-simplex of $\Sd\Delta[m]$? First, we say how to move without requiring containment of the specified vertices $x_0<x_1<\cdots<x_{\ell}$. Note that if $V$ and $W$ in $\Sd^2\Delta[m]$ only differ in the last spot $m$, then $V_m$ and $W_m$ agree in all but one spot, say $v_i^m \neq w_i^m$, and the permutations corresponding to $V$ and $W$ are respectively $$A_0, \dots, A_{m-1},v_i^m$$ $$A_0, \dots, A_{m-1},w_i^m.$$ Given arbitrary non-degenerate $m$-simplices $V$ and $V'$ of $\Sd^2\Delta[m]$, we construct a chain connecting $V$ and $V'$ as follows. First we choose a chain of $m$-simplices $\{\overline{W}^p_m\}_{p=0}^q$ in $\Sd\Delta[m]$, $$\overline{W}_m^p=(w_0^p, \dots, w_m^p),$$ $0\leq p \leq q$, from $V_m$ to $V_m'$ which corresponds to transpositions. This we can do by the first paragraph of this Remark.
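The correspondence between maximal chains and permutations, and the effect of a single adjacent transposition, are easy to make concrete. The following Python fragment (an illustration only; the encoding and the names are ours) represents a maximal chain of $\Sd\Delta[2]$ by its permutation and checks that transposing $a_i$ and $a_{i+1}$ changes the chain in spot $i$ and nowhere else.

\begin{verbatim}
def chain_from_perm(perm):
    # maximal chain of Sd Delta[m] from a permutation a_0,...,a_m:
    # spot i holds the subset {a_0,...,a_i}
    return [frozenset(perm[:i + 1]) for i in range(len(perm))]

def transpose(perm, i):
    # swap a_i and a_{i+1}
    p = list(perm)
    p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

v = (1, 2, 0)          # corresponds to ({1}, {1,2}, {0,1,2})
w = transpose(v, 1)    # (1, 0, 2) corresponds to ({1}, {0,1}, {0,1,2})
diff = [i for i in range(3)
        if chain_from_perm(v)[i] != chain_from_perm(w)[i]]
assert diff == [1]     # the two chains differ only in spot 1
\end{verbatim}

With this picture in mind, we lift the chain $\{\overline{W}^p_m\}_p$ to $\Sd^2\Delta[m]$.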
We define an $m$-simplex $\overline{W}^p$ in $\Sd^2\Delta[m]$ by $$\overline{W}^p:=( \dots, \overline{W}^p_m \backslash w_{i_p}^p,\overline{W}^p_m)$$ where $w_{i_p}^p$ is the vertex of $\overline{W}^p_m$ which distinguishes it from $\overline{W}^{p-1}_m$ for $1 \leq p \leq q$. The last letter in the permutation corresponding to $\overline{W}^p$ is $w_{i_p}^p$. The other vertices of $\overline{W}^p$ indicated by $\dots$ are any subsimplices of $\overline{W}^p_m$ written in increasing order. Now, our chain $\{W^j\}_j$ in $\Sd^2\Delta[m]$ from $V$ to $V'$ begins at $V$ and traverses to $\overline{W}^1$: starting from $V$, we pairwise transpose $v^m_{i_1}$ to the end of the permutation corresponding to $V$, then we replace $v^m_{i_1}$ by $w^1_{i_1}$, and then we pairwise transpose the first $m$ letters of the resulting permutation to arrive at the permutation corresponding to $\overline{W}^1$. Similarly, starting from $\overline{W}^1$ we move $w_{i_2}^1$ to the end, replace it by $w_{i_2}^2$, and pairwise transpose the first $m$ letters to arrive at $\overline{W}^2$. Continuing in this fashion, we arrive at $V'$ through a chain $\{W^j\}_j$ of non-degenerate $m$-simplices $W^j$ in $\Sd^2\Delta[m]$ in which $W^j$ and $W^{j+1}$ share an $(m-1)$-subsimplex. Lastly, we must prove that if $V$ and $V'$ both contain specified vertices $x_0<x_1<\cdots<x_{\ell}$, then the chain $\{W^j\}_j$ of non-degenerate $m$-simplices can be chosen so that each $W^j$ contains all of the specified vertices $x_0<x_1<\cdots<x_{\ell}$. Suppose $$V_{s_i}=x_i=V_{s_i}'$$ for all $0 \leq i \leq \ell$ and $s_0 < s_1 < \cdots < s_\ell$. Then $V_m$ and $V_m'$ both contain all of the vertices of $x_0,x_1, \dots, x_\ell$ since $$V_m\supseteq V_{s_\ell}=x_\ell \supseteq x_{\ell-1} \supseteq \cdots \supseteq x_0=V_{s_0}$$ $$V_m' \supseteq V_{s_\ell}'=x_\ell \supseteq x_{\ell-1} \supseteq \cdots \supseteq x_0=V_{s_0}'.$$ We first choose the chain $\{\overline{W}^p_m\}_p$ in $\Sd\Delta[m]$ so that each $\overline{W}^p_m$ contains all of the vertices of $x_0, x_1, \dots, x_\ell$ (this can be done by the discussion of $\Sd\Delta[m]$ above). Since we have $\overline{W}^p_m \supseteq x_\ell$, all $w^p_{i_p}$ must satisfy $i_p > s_\ell$. The first vertices of the non-degenerate $m$-simplex $\overline{W}^p$ in $\Sd^2\Delta[m]$ indicated by $\dots$ are chosen so that in spots $s_0, s_1, \dots, s_\ell$ we have $x_0, x_1, \dots, x_\ell$. For fixed $\overline{W}^p_m$ we can transpose as we wish, without perturbing $x_0,x_1, \dots, x_\ell$ (again by the discussion of $\Sd\Delta[m]$ above, but this time applied to the $\Sd\Delta[m]$ isomorphic to the collection of $m$-simplices of $\Sd^2\Delta[m]$ ending in $\overline{W}^p_m$.) On the other hand, the part of $\{W^j\}_j$ in which we move $w^{p-1}_{i_p}$ to the right does not perturb any of $x_0,x_1, \dots, x_\ell$ because $i_p > s_\ell$. Thus, each $W^j$ has $x_0,x_1, \dots, x_\ell$ in spots $s_0,s_1, \dots, s_\ell$ respectively. \end{rmk} \begin{prop} \label{satisfyhypothesis} Let $m \geq 1$ be a positive integer. The posets $\bfP\Sd \Delta[m]$, $\Cen$, $\Out$, $\Comp$, and $\Comp \cup \Cen$ satisfy \ref{simplicial_colimitdecompositioni} and \ref{simplicial_colimitdecompositionii} of Proposition \ref{simplicial_colimitdecomposition} for $m$. Similarly, $\bfP\Sd\Lambda^k[m]$ and $\Out \cap (\Comp \cup \Cen)$ satisfy \ref{simplicial_colimitdecompositioni} and \ref{simplicial_colimitdecompositionii} of Proposition \ref{simplicial_colimitdecomposition} for $m-1$.
The hypotheses of Proposition \ref{simplicial_colimitdecomposition} imply those of Proposition \ref{colimitdecomposition}, so Proposition \ref{colimitdecomposition} also applies to these posets. \end{prop} \begin{pf} We first consider $m=1$ and the various subposets of $\bfP\Sd \Delta[1]$. Let $k=0$ (the case $k=1$ is symmetric). The poset $\bfP\Sd\Delta[1]$ is $$\xymatrix{\mathbf{(\{0\})} \ar[r] & (\{0\},\{01\}) & (\{01\}) \ar@{.>}[l] \ar@{.>}[r] & (\{1\},\{01\}) & (\{1\}) \ar@{.>}[l]_-f}$$ and $\bfP\Sd\Lambda^0[1]$ consists only of the object $(\{0\})$ (the typography is chosen to match with Figure \ref{subdivisionfigure}). Of the nontrivial morphisms in $\bfP\Sd\Delta[1]$, the only one in $\Out$ is the solid one on the far left. The poset $\Cen$ consists of the two middle morphisms, emanating from $(\{01\})$. The only morphism in $\Comp$ is the one labelled $f$. The union $\Comp \cup \Cen$ consist of all the dotted arrows and their sources and targets. The intersection $\Out \cap (\Comp \cup \Cen)$ consists only of the vertex $(\{0\},\{0,1\})$. The hypotheses \ref{simplicial_colimitdecompositioni} and \ref{simplicial_colimitdecompositionii} of Proposition \ref{simplicial_colimitdecomposition} are clearly true by inspection for $\bfP\Sd \Delta[1]$, $\Cen$, $\Out$, $\Comp$, and $\Comp \cup \Cen$ and also $\bfP\Sd\Lambda^k[1]$ and $\Out \cap (\Comp \cup \Cen)$. We next prove that $\bfP\Sd \Delta[m]$ satisfies hypothesis \ref{simplicial_colimitdecompositioni} of Proposition \ref{simplicial_colimitdecomposition} for $m \geq 2$, and also its various subposets satisfy hypothesis \ref{simplicial_colimitdecompositioni}. Suppose $U=\{U_0 < U_1 < \cdots < U_p\}$ is a linearly ordered subposet of $\bfP\Sd \Delta[m]$. As before, we write $U_i=(u_0^i, \dots, u^i_{r_i})$. We extend $U$ to a linearly ordered subposet $V$ with $m+1$ elements so that $U_i$ occupies the $r_i$-th place (the lowest element is in the 0-th place). For $j \leq r_0$, let $V_j=(u_0^0,\dots,u_j^0)$. For $j=r_i$, $V_j:=U_i$. For $r_i \leq j < r_{i+1}-1$, we define $V_{j+1}$ as $V_j$ with one additional element of $U_{i+1} \backslash U_i$. If $|U_p|=m+1$, then we are now finished. If $|U_p|=r_p+1< m+1$, then extend $U_p$ to a strictly increasing chain of subsets of $\{0,\dots,m\}$ of length $m+1$, where the new subsets are $v_1,\dots,v_{m+1-(r_p+1)}$ and define for $j=1, \dots, m-r_p$ $$V_{r_p+j}:=V_{r_p} \cup \{v_1, \dots, v_j\}.$$ Then we have $U$ contained in $V=\{V_0< \dots < V_m\}$. Easy adjustments show that the poset $\Cen$ satisfies hypothesis \ref{simplicial_colimitdecompositioni} for $m\geq2$. If $U$ is a linearly ordered subposet of $\Cen$, then each $u^i_{r_i}$ is $\{0,1,\dots,m\}$ by Proposition \ref{center}. We take $V_0=(\{0,1,\dots,m\})$ and then successively throw in $u_0^0, \dots, u^0_{r_0-1}$ to obtain $V_1,\dots,V_{r_0}$. The higher $V_j$'s are as above. By Proposition \ref{center}, the extension $V$ lies in $\Cen$. A similar argument works for $\Comp$, since it is also the up-closure of a single point, namely $(\{0,1, \dots, \hat{k}, \dots, m\})$. The union $\Comp \cup \Cen$ also satisfies hypothesis \ref{simplicial_colimitdecompositioni} for $m\geq 2$: if $U$ is a subposet of the union, then $U_0$ is in at least one of $\Comp$ or $\Cen$, and all the other $U_i$'s are also contained in that one, so the proof for $\Comp$ or $\Cen$ then finishes the job. 
The poset $\Out$ satisfies hypothesis \ref{simplicial_colimitdecompositioni} for $m\geq 2$, for if $U$ is a subposet of $\Out$, then $U_0$ must contain some $u^0_i$ in $\Lambda^k[m]$ by Proposition \ref{outer}. We extend to the left of $U_0$ by taking $V_0=(u^0_i)$ and then successively throwing in the remaining elements of $U_0$. The rest of the extension proceeds as above, since everything above $U_0$ also contains $u^0_i \in \Lambda^k[m]$. The poset $\Out \cap \Comp$ satisfies hypothesis \ref{simplicial_colimitdecompositioni} for $m-1$ rather than $m$ because any element in the intersection must have at least 2 vertices, namely a vertex in $\Lambda^k[m]$ and $\{0, \dots, \hat{k}, \dots, m\}$. Similarly, the poset $\Out \cap \Cen$ satisfies hypothesis \ref{simplicial_colimitdecompositioni} for $m-1$ rather than $m$ because any element in the intersection must have at least 2 vertices, namely a vertex in $\Lambda^k[m]$ and $\{0, \dots, m\}$. The proofs that $\Out \cap \Comp$ and $\Out \cap \Cen$ satisfy hypothesis \ref{simplicial_colimitdecompositioni} are similar to the above. Since a union of subposets of $\bfP\Sd\Delta[m]$, each of which satisfies hypothesis \ref{simplicial_colimitdecompositioni} for $m-1$, again satisfies hypothesis \ref{simplicial_colimitdecompositioni} for $m-1$, we see that \begin{equation}\label{equ:union_intersection} (\Out \cap \Comp)\cup(\Out \cap \Cen)=\Out \cap (\Comp\cup\Cen ) \end{equation} also satisfies hypothesis \ref{simplicial_colimitdecompositioni} for $m-1$. Lastly, $\bfP\Sd\Lambda^k[m]$ satisfies hypothesis \ref{simplicial_colimitdecompositioni} for $m-1$. It is down-closed by Proposition \ref{Lambdadownclosed}, so for a subposet $U$, the extension of $U$ to the left in $\bfP\Sd \Delta[m]$ described above is also in $\bfP\Sd\Lambda^k[m]$. Any extension to the right which includes $k$ in the final $m$-element set is also in $\bfP\Sd\Lambda^k[m]$ by the discussion after equation \eqref{simplicesofhorn}. Next we turn to hypothesis \ref{simplicial_colimitdecompositionii} of Proposition \ref{simplicial_colimitdecomposition} for the subposets of $\bfP\Sd \Delta[m]$ in question, where $m\geq 2$. The poset $\bfP\Sd \Delta[m]$ satisfies hypothesis \ref{simplicial_colimitdecompositionii} by Remark \ref{remarkonpaths}. The poset $\Cen$ is the up-closure of $(\{0,1, \dots, m\})$ in $\bfP\Sd \Delta[m]$. Every linearly ordered subposet of $\Cen$ with $m+1$ elements must begin with $(\{0,1, \dots, m\})$. Given $(m+1)$-element linearly ordered subposets $V$ and $V'$ of $\Cen$ with specified elements $x_0<x_1<\cdots<x_{\ell}$ in common, we can select the chain $\{W^j\}_j$ in Remark \ref{remarkonpaths} so that each $W^j$ has $(\{0,1, \dots, m\})$ as its $0$-th vertex. Thus $\Cen$ satisfies hypothesis \ref{simplicial_colimitdecompositionii}. The poset $\Comp$ similarly satisfies hypothesis \ref{simplicial_colimitdecompositionii}, as it is also the up-closure of an element in $\bfP\Sd \Delta[m]$. The union $\Comp \cup \Cen$ satisfies hypothesis \ref{simplicial_colimitdecompositionii} as follows. If $V$ and $V'$ (of cardinality $m+1$) are both linearly ordered subposets of $\Comp$ or are both linearly ordered subposets of $\Cen$ respectively with the specified elements in common, then we may simply take the chain in $\Comp$ or $\Cen$ respectively. If $V$ is in $\Cen$ and $V'$ is in $\Comp$, then $V_0=(\{0,1, \dots, m\})$ and $V_0'=(\{0, \dots, \hat{k},\dots, m\})$. Suppose $$V_{s_i}=x_i=V_{s_i}'$$ for all $0 \leq i \leq \ell$ and $s_0 < s_1 < \cdots < s_\ell$.
Then $x_0$ contains both $\{0,1, \dots, m\}$ and $\{0, \dots, \hat{k},\dots, m\}$. We then move from $V'$ to $V''$ by transposing $\{0,1, \dots, m\}$ down to vertex 0, leaving everything else unchanged. This chain from $V'$ to $V''$ is in $\Comp$ until it finally reaches $V''$, which is in $\Cen$. From $V$ we can reach $V''$ via a chain in $\Cen$ as above. Putting these two chains together, we move from $V$ to $V'$ as desired. To show $\Out$ satisfies hypothesis \ref{simplicial_colimitdecompositionii}, suppose $V$ and $V'$ are linearly ordered subposets of cardinality $m+1$ with $V_{s_i}=x_i=V_{s_i}'$ for all $0 \leq i \leq \ell$ and $s_0 < s_1 < \cdots < s_\ell$. If $V_0=V_0'$, then we can make certain that the chain $\{W^j\}_j$ in Remark \ref{remarkonpaths} satisfies $W^j_0=V_0=V_0' \in \bfP\Sd\Lambda^k[m]$. Then each $W^j$ lies in $\Out$, and we are finished. If $V_0\neq V_0'$, then we move from $V'$ to $V''$ with $V_0''=V_0$ as follows. The elements $V_0$ and $V_0'$ are both in $V_{s_0}=x_0=V_{s_0}'$, so we can transpose $V_0$ in $V'$ down to the 0-vertex and interchange $V_0$ and $V_0'$. Each step of the way is in $\Out$. The result is $V''$, to which we can move from $V$ on a chain in $\Out$. We claim that the subposet $\Out \cap \Comp$ of $\bfP\Sd\Delta[m]$ satisfies hypothesis \ref{simplicial_colimitdecompositionii} for $m-1$. Suppose $V$ and $V'$ are linearly ordered subposets of cardinality $m$ with $V_{s_i}=x_i=V_{s_i}'$ for all $0 \leq i \leq \ell$ and $s_0 < s_1 < \cdots < s_\ell$, where $1\leq \ell \leq m-1$. Then $V_0=(v , \{0, \dots, \hat{k}, \dots, m\})$ and $V_0'=(v', \{0, \dots, \hat{k}, \dots, m\})$ where $v$ and $v'$ are elements of $\bfP\Sd\Lambda^k[m]$. We extend the $m$-element linearly ordered posets $V$ and $V'$ to $(m+1)$-element linearly ordered posets $\bar{V}$ and $\bar{V}'$ in $\Comp$ by putting $(\{0, \dots, \hat{k}, \dots, m\})$ in the 0-th spot of $\bar{V}$ and $\bar{V}'$. If $v=v'$, then we can find a chain $\{W^j\}_j$ from $\bar{V}$ to $\bar{V}'$ in $\Comp$ which preserves $x_0,x_1, \dots, x_\ell,$ and $v$ using the above result that $\Comp$ satisfies \ref{simplicial_colimitdecompositionii} for $m$. Truncating the 0-th spot of each $W^j$, we obtain the desired chain in $\Out \cap \Comp$. If $v \neq v'$, then we find a chain in $\Comp$ from $\bar{V}'$ to a $\bar{V}''$ with $v''=v$, as above, and then find a chain in $\Comp$ from $\bar{V}$ to $\bar{V}''$. Combining chains and truncating the 0-th spot again gives us the desired path from $V$ to $V'$. By a similar argument, with the role of $\{0, \dots, \hat{k}, \dots, m\}$ played by $\{0,1, \dots, m\}$, the poset $\Out \cap \Cen$ satisfies hypothesis \ref{simplicial_colimitdecompositionii} for $m-1$. Next we claim that the union of $\Out \cap \Comp$ with $\Out \cap \Cen$ also satisfies hypothesis \ref{simplicial_colimitdecompositionii} for $m-1$. Suppose $V \subseteq \Out \cap \Comp$ and $V' \subseteq \Out \cap \Cen$ are $m$-element linearly ordered subposets with $V_{s_i}=x_i=V_{s_i}'$ for all $0 \leq i \leq \ell$ and $s_0 < s_1 < \cdots < s_\ell$, where $1\leq \ell \leq m-1$. Then $v$, $v'$, $\{0, \dots, \hat{k}, \dots, m\}$, and $\{0,1, \dots, m\}$ are in $x_0$, so we can transpose $v$ and $\{0, \dots, \hat{k}, \dots, m\}$ down in $V'$ to take the place of $v'$ and $\{0,1, \dots, m\}$, without perturbing $x_0, x_1, \dots, x_\ell$. The resulting poset $V''$ is in $\Out \cap \Comp$, and was reached from $V'$ by a chain in $\Out \cap \Cen$.
By the above, we can reach $V''$ from $V$ by a chain in $\Out \cap \Comp$. Thus we have connected $V$ and $V'$ by a chain in \eqref{equ:union_intersection}, always preserving $x_0, x_1, \dots, x_\ell$, and therefore $\Out \cap (\Comp \cup \Cen)$ satisfies hypothesis \ref{simplicial_colimitdecompositionii} for $m-1$. \end{pf} \begin{rmk} The posets $C^\ell$ do not satisfy the hypotheses of Proposition \ref{simplicial_colimitdecomposition}, nor those of Proposition \ref{colimitdecomposition}. \end{rmk} \begin{cor} \label{cor:specific_colimit_decompositions} Let $m \geq 1$ be a positive integer. \begin{enumerate} \item \label{cor:specific_colimit_decompositionsi} The posets $\bfP\Sd \Delta[m]$, $\Cen$, $\Out$, $\Comp$, and $\Comp \cup \Cen$ are each a colimit of finite ordinals $[m-1]$ and $[m]$. Similarly, the posets $\bfP\Sd\Lambda^k[m]$ and $\Out \cap (\Comp \cup \Cen)$ are each a colimit of finite ordinals $[m-2]$ and $[m-1]$. (By definition $[-1]=\emptyset$.) \item \label{cor:specific_colimit_decompositionsii} The simplicial sets $N(\bfP\Sd \Delta[m])$, $N(\Cen)$, $N(\Out)$, $N(\Comp)$, and $N(\Comp \cup \Cen)$ are each a colimit of simplicial sets of the form $\Delta[m-1]$ and $\Delta[m]$. Similarly, the simplicial sets $N(\bfP\Sd\Lambda^k[m])$ and $N(\Out \cap (\Comp \cup \Cen))$ are each a colimit of simplicial sets of the form $\Delta[m-2]$ and $\Delta[m-1]$. (By definition $[-1]=\emptyset$.) \item \label{cor:specific_colimit_decompositionsiii} The nerve of the colimit decomposition in $\mathbf{Cat}$ in \ref{cor:specific_colimit_decompositionsi} is the colimit decomposition in $\mathbf{SSet}$ in \ref{cor:specific_colimit_decompositionsii}. \end{enumerate} \end{cor} \begin{pf} \begin{enumerate} \item By Proposition \ref{satisfyhypothesis}, the posets $\bfP\Sd \Delta[m]$, $\Cen$, $\Out$, $\Comp$, and $\Comp \cup \Cen$ satisfy hypotheses \ref{simplicial_colimitdecompositioni} and \ref{simplicial_colimitdecompositionii} of Proposition \ref{simplicial_colimitdecomposition} for $m$, as do the posets $\bfP\Sd\Lambda^k[m]$ and $\Out \cap (\Comp \cup \Cen)$ for $m-1$. The hypotheses of Proposition \ref{simplicial_colimitdecomposition} imply the hypotheses of Proposition \ref{colimitdecomposition}, so part \ref{cor:specific_colimit_decompositionsi} of the current corollary follows from Proposition \ref{colimitdecomposition}. \item By Proposition \ref{satisfyhypothesis}, the posets $\bfP\Sd \Delta[m]$, $\Cen$, $\Out$, $\Comp$, and $\Comp \cup \Cen$ satisfy hypotheses \ref{simplicial_colimitdecompositioni} and \ref{simplicial_colimitdecompositionii} of Proposition \ref{simplicial_colimitdecomposition} for $m$, as do the posets $\bfP\Sd\Lambda^k[m]$ and $\Out \cap (\Comp \cup \Cen)$ for $m-1$. So Proposition \ref{simplicial_colimitdecomposition} applies and we immediately obtain part \ref{cor:specific_colimit_decompositionsii} of the current corollary. \item This follows from Corollary \ref{nervecommuteswithcolimitdecomposition} and Proposition \ref{satisfyhypothesis}. \end{enumerate} \end{pf} \section{Thomason Structure on {\bf Cat}} \label{Thomasonsection} The Thomason structure on {\bf Cat} is obtained from the standard model structure on {\bf SSet} by transferring across the adjunction \begin{equation} \xymatrix@C=4pc{\mathbf{SSet} \ar@{}[r]|{\perp} \ar@/^1pc/[r]^{\Sd^2} & \ar@/^1pc/[l]^{\Ex^2} \mathbf{SSet} \ar@{}[r]|{\perp} \ar@/^1pc/[r]^{c} & \ar@/^1pc/[l]^{N} \mathbf{Cat}} \end{equation} as in \cite{thomasonCat}.
In other words, a functor $F$ in $\mathbf{Cat}$ is a weak equivalence or fibration if and only if $\Ex^2 NF$ is. We present a quick proof that this defines a model structure using a corollary to Kan's Lemma on Transfer. Although Thomason did not do it exactly this way, it is practically the same in spirit. Our proof relies on the results in the previous sections: the decomposition of $\Sd^2 \Delta[m]$, the commutation of nerve with certain colimits, and the deformation retraction. This proof of the Thomason structure on ${\bf Cat}$ will be the basis for our proof of the Thomason structure on ${\bf nFoldCat}$. The key corollary to Kan's Lemma on Transfer is the following corollary, inspired by Proposition 3.4.1 in \cite{worytkiewicz2Cat}.\footnote{The difference between \cite{worytkiewicz2Cat} and the present paper is that in hypothesis \ref{KanCorollaryi} we require $Fi$ and $Fj$ to be small with respect to the entire category $\bfD$, rather than merely small with respect to $FI$ and $FJ$.} \begin{cor} \label{KanCorollary} Let $\mathbf{C}$ be a cofibrantly generated model category with generating cofibrations $I$ and generating acyclic cofibrations $J$. Suppose {\bf D} is complete and cocomplete, and that $F \dashv G$ is an adjunction as in {\rm (\ref{adjunction})}. \begin{equation} \label{adjunction} \xymatrix@C=4pc{{\bf C} \ar@{}[r]|{\perp} \ar@/^1pc/[r]^{F} & \ar@/^1pc/[l]^{G} {\bf D}} \end{equation} Assume the following. \begin{enumerate} \item \label{KanCorollaryi} For every $i\in I$ and $j \in J$, the objects $\dom Fi$ and $\dom Fj$ are small with respect to the entire category $\bfD$. \item \label{KanCorollaryii} For any ordinal $\lambda$ and any colimit preserving functor $\xymatrix@1{X\co\lambda \ar[r] & {\bf C}}$ such that $\xymatrix@1{X_{\beta} \ar[r] & X_{\beta+1}}$ is a weak equivalence, the transfinite composition $$\xymatrix{X_0 \ar[r] &}\underset{\lambda}{\colim}X$$ is a weak equivalence. \item \label{KanCorollaryiii} For any ordinal $\lambda$ and any colimit preserving functor $\xymatrix@1{Y\co\lambda \ar[r] & {\bf D}}$, the functor $G$ preserves the colimit of $Y$. \item \label{KanCorollaryiv} If $j'$ is a pushout of $F(j)$ in {\bf D} for $j \in J$, then $G(j')$ is a weak equivalence in {\bf C}. \end{enumerate} Then there exists a cofibrantly generated model structure on {\bf D} with generating cofibrations $FI$ and generating acyclic cofibrations $FJ$. Further, $f$ is a weak equivalence in {\bf D} if and only if $G(f)$ is a weak equivalence in {\bf C}, and $f$ is a fibration in {\bf D} if and only if $G(f)$ is a fibration in {\bf C}. \end{cor} \begin{pf} For a proof of a similar statement, see \cite{fiorepaolipronk1}. The only difference between the statement here and the one proved in \cite{fiorepaolipronk1} is that here we only require in hypothesis \ref{KanCorollaryiii} that $G$ preserves colimits indexed by an ordinal $\lambda$, rather than more general filtered colimits. The proof of the statement here is the same as in \cite{fiorepaolipronk1}: it is a straightforward application of Kan's Lemma on Transfer. \end{pf} \begin{lem} \label{ExPreservesAndReflectsWEs} The functor $\Ex$ preserves and reflects weak equivalences. That is, a morphism $f$ of simplicial sets is a weak equivalence if and only if $\Ex f$ is a weak equivalence. \end{lem} \begin{pf} There is a natural weak equivalence $\xymatrix@1{1_{\mathbf{SSet}} \ar@{=>}[r] & \Ex}$ by Lemma 3.7 of \cite{kancss}, or more recently Theorem 6.2.4 of \cite{joyaltierneysimplicial}, or Theorem 4.6 of \cite{goerssjardine}.
The lemma then follows from the naturality diagram below, by the two-out-of-three property of weak equivalences. $$\xymatrix@C=4pc{ X \ar[r]^-{\text{w.e.}} \ar[d]_f & \Ex X \ar[d]^{\Ex f} \\ Y \ar[r]_-{\text{w.e.}} & \Ex Y }$$ \end{pf} We may now prove Thomason's Theorem. \begin{thm} \label{CatCase} There is a model structure on $\mathbf{Cat}$ in which a functor $F$ is a weak equivalence respectively fibration if and only if $\Ex^2NF$ is a weak equivalence respectively fibration in $\mathbf{SSet}$. This model structure is cofibrantly generated with generating cofibrations $$\{\xymatrix{c\Sd^2\partial\Delta[m] \ar[r] & c\Sd^2\Delta[m]}|\; m \geq 0\}$$ and generating acyclic cofibrations $$\{\xymatrix{c\Sd^2\Lambda^k[m] \ar[r] & c\Sd^2\Delta[m]}|\; 0 \leq k \leq m \text{ and } m \geq 1\}.$$ These functors were explicitly described in Section \ref{barycentric}. \end{thm} \begin{pf} \begin{enumerate} \item \label{CatCasei} The categories $c\Sd^2\partial\Delta[m]$ and $c\Sd^2\Lambda^k[m]$ each have a finite number of morphisms, hence they are finite, and are small with respect to $\mathbf{Cat}$. For a proof, see Proposition 7.6 of \cite{fiorepaolipronk1}. \item \label{CatCaseii} The model category $\mathbf{SSet}$ is cofibrantly generated, and the domains and codomains of the generating cofibrations and generating acyclic cofibrations are finite. By Corollary 7.4.2 in \cite{hovey}, this implies that transfinite compositions of weak equivalences in $\mathbf{SSet}$ are weak equivalences. \item \label{CatCaseiii} The nerve functor preserves filtered colimits. Every ordinal is filtered, so the nerve functor preserves $\lambda$-sequences. The $\Ex$ functor preserves colimits of $\lambda$-sequences as well. We use the idea in the proof of Theorem 4.5.1 of \cite{worytkiewicz2Cat}. First recall that for each $m$, the simplicial set $\Sd\Delta[m]$ is finite, so that $\mathbf{SSet}(\Sd \Delta[m], -)$ preserves colimits of all $\lambda$-sequences. If $\xymatrix@1{Y\co\lambda \ar[r] & \mathbf{SSet}}$ is a $\lambda$-sequence, then $$\aligned (\Ex\; \underset{\lambda}{\colim} Y)_m &= \mathbf{SSet}(\Sd \Delta[m],\underset{\lambda}{\colim} Y) \\ &\cong \underset{\lambda}{\colim} \mathbf{SSet}(\Sd \Delta[m], Y)\\ &\cong (\underset{\lambda}{\colim} \Ex Y)_m. \endaligned$$ Since colimits in $\mathbf{SSet}$ are formed pointwise, we see that $\Ex$ preserves $\lambda$-sequences. Thus $\Ex^2N$ preserves $\lambda$-sequences. \item \label{CatCaseiv} Let $\xymatrix@1{j\co\Lambda^k[m] \ar[r] & \Delta[m]}$ be a generating acyclic cofibration for $\mathbf{SSet}$. Let the functor $j'$ be the pushout along $L$ as in the following diagram with $m \geq 1$. $$\xymatrix@C=3pc@R=3pc{c\Sd^2\Lambda^k[m] \ar[d]_{c\Sd^2j} \ar[r]^-L & \bfB \ar[d]^{j'} \\ c\Sd^2\Delta[m] \ar[r] & \bfP}$$ We factor $j'$ into two inclusions $$\xymatrix{\bfB \ar[r]^i & \bfQ \ar[r] & \bfP}$$ and show that the nerve of each is a weak equivalence. By Remark \ref{Deltafreecomposites} the only free composites that occur in the pushout $\bfP$ are of the form $(f_1,f_2)$ $$\xymatrix{\ar[r]^{f_1} & \ar[r]^{f_2} & }$$ where $f_1$ is a morphism in $\bfB$ and $f_2$ is a morphism of $\Out$ with source in $c\Sd^2\Lambda^k[m]$ and target outside of $c\Sd^2\Lambda^k[m]$ (see for example the drawing of $c\Sd^2\Delta[m]$ in Figure \ref{subdivisionfigure}).
Hence, $\bfP$ is the union \begin{equation} \label{decomposition} \bfP=\overbrace{(\bfB \coprod_{c\Sd^2\Lambda^k[m]} \Out)}^\bfQ \cup \overbrace{(\Comp \cup \Cen)}^{\bfR} \end{equation} by Proposition \ref{upcloseddecomposition}, all free composites are in $\bfQ$, and they have the form $(f_1,f_2)$. We claim that the nerve of the inclusion $\xymatrix@1{i\co\bfB \ar[r] & \bfQ}$ is a weak equivalence. Let $\xymatrix@1{\overline{r}\co \bfQ \ar[r] & \bfB}$ be the identity on $\bfB$, and for any $(v_0, \dots, v_q)\in \Out$ we define $\overline{r}(v_0, \dots, v_q)=(u_0, \dots, u_p)$ where $(u_0, \dots, u_p)$ is the maximal subset $$\{u_0,\dots, u_p\} \subseteq \{v_0,\dots, v_q\}$$ that is in $\bfP\Sd \Lambda^k[m]$ (recall Proposition \ref{outer} \ref{outerii}). On free composites in $\bfQ$ we then have $\overline{r}(f_1,f_2)=(f_1,\overline{r}(f_2))$. More conceptually, we define $\xymatrix@1{\overline{r}\co \bfQ \ar[r] & \bfB}$ using the universal property of the pushout $\bfQ$ and the maps $1_{\bfB}$ and $Lr$ (the functor $r$ is as in Proposition \ref{outer} \ref{outerii}). Then $\overline{r}i=1_\bfB$, and there is a unique natural transformation $\xymatrix@1{i\overline{r} \ar@{=>}[r] & 1_\bfQ}$ which is the identity morphism on the objects of $\bfB$. Thus $\xymatrix@1{|Ni|\co|N\bfB|\ar[r] & |N\bfQ|}$ includes $|N\bfB|$ as a deformation retract of $|N\bfQ|$. Next we show that the nerve of the inclusion $\xymatrix@1{\bfQ \ar[r] & \bfP}$ is also a weak equivalence. The intersection of $\bfQ$ and $\bfR$ in (\ref{decomposition}) is equal to $$\bfS=\Out\cap(\Comp\cup\Cen).$$ Proposition \ref{nervecommuteswithpushouthypothesisverification} then implies that $\bfQ$, $\bfR$, and $\bfS$ satisfy the hypotheses of Proposition \ref{nervecommuteswithpushout}. Then \begin{equation} \label{QPinclusion} \aligned |N\bfQ| & \cong |N\bfQ| \coprod_{|N\bfS|} |N\bfS| \text{ (pushout along identity) }\\ & \simeq |N\bfQ| \coprod_{|N\bfS|} |N\bfR| \text{ (Prop. \ref{deformationretract} and Gluing Lemma)}\\ & \cong |N\bfQ \coprod_{N\bfS} N\bfR| \text{ (realization is a left adjoint}) \\ & \cong |N(\bfQ \coprod_\bfS \bfR)| \text{ (Prop. \ref{nervecommuteswithpushout} and Prop. \ref{nervecommuteswithpushouthypothesisverification})} \\ & = |N\bfP|. \endaligned \end{equation} In the second line, for the application of the Gluing Lemma (Lemma 8.12 in \cite{goerssjardine} or Proposition 13.5.4 in \cite{hirschhorn}), we use two identities and the inclusion $\xymatrix@1{|N\bfS| \ar[r] & |N\bfR|}$. It is a homotopy equivalence whose inverse is the retraction in Proposition \ref{deformationretract}. We conclude that the inclusion $\xymatrix@1{|N\bfQ| \ar[r] & |N\bfP|}$ is a weak equivalence, as it is the composite of the morphisms in equation \eqref{QPinclusion}. It is even a homotopy equivalence by Whitehead's Theorem. We conclude that $|Nj'|$ is the composite of two weak equivalences $$\xymatrix@C=3pc{|N\bfB| \ar[r]^{|Ni|} & |N\bfQ| \ar[r] & |N \bfP|}$$ and is therefore a weak equivalence. By Lemma \ref{ExPreservesAndReflectsWEs}, the functor $\Ex$ preserves weak equivalences, so that $\Ex^2Nj'$ is also a weak equivalence of simplicial sets. Part \ref{KanCorollaryiv} of Corollary \ref{KanCorollary} then holds, and we have the Thomason model structure on $\mathbf{Cat}$. \end{enumerate} \end{pf} \section{Pushouts and Colimit Decompositions of $c^n\delta_! \Sd^2 \Delta[m]$} \label{sectionnfolddecompositions} Next we enhance the proof of the $\mathbf{Cat}$-case to obtain the $\mathbf{nFoldCat}$-case. 
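Before turning to the details, it may help to see the smallest interesting example in coordinates. For $n=2$ and $m=1$ we have $c^2\delta_!\Delta[1]=[1]\boxtimes[1]$ and $\delta^\ast N^2([1]\boxtimes[1])=\Delta[1]\times\Delta[1]$. The following Python sketch (an illustration only; the encoding of a simplex as a pair of order-preserving maps is ours) counts the non-degenerate simplices of $\Delta[1]\times\Delta[1]$: four vertices, five edges (the four sides of the square and one diagonal), two triangles, and nothing in higher degrees. This vanishing in high degrees is an instance of the skeletality established in general in Lemma \ref{lem:n-fold_skeletality} below.

\begin{verbatim}
from itertools import product

m, n = 1, 2

def simplices(ell):
    # ell-simplices of Delta[m]^n: n-tuples of order-preserving maps
    # [ell] -> [m], encoded as their value sequences
    chains = [s for s in product(range(m + 1), repeat=ell + 1)
              if all(s[i] <= s[i + 1] for i in range(ell))]
    return product(chains, repeat=n)

def nondegenerate(x):
    # degenerate iff some step is constant in every coordinate
    ell = len(x[0]) - 1
    return all(any(f[i] != f[i + 1] for f in x) for i in range(ell))

for ell in range(4):
    count = sum(1 for x in simplices(ell) if nondegenerate(x))
    print(ell, count)      # expected output: 0 4, 1 5, 2 2, 3 0
\end{verbatim}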
The preparations of Sections \ref{barycentric}, \ref{retractionsection}, and \ref{pushoutsection} are adapted in this section to $n$-fold categorification. \begin{prop} \label{standardgluingsofdoublecats} Let $\xymatrix@1{d^i:[m-1] \ar[r] & [m]}$ be the injective order preserving map which skips $i$. Then the pushout in $\mathbf{nFoldCat}$ \begin{equation} \label{standardgluingsofdoublecatspushout} \xymatrix@R=3pc@C=4pc{[m-1] \boxtimes \cdots \boxtimes [m-1] \ar[r]^-{d^i \boxtimes \cdots \boxtimes d^i} \ar[d]_-{d^i \boxtimes \cdots \boxtimes d^i} & [m] \boxtimes \cdots \boxtimes [m] \ar[d] \\ [m] \boxtimes \cdots \boxtimes [m] \ar[r] & \bbP} \end{equation} does not have any free composites, and is an $n$-fold poset. \end{prop} \begin{pf} We do the proof for $n=2$. We consider horizontal morphisms; the proof for vertical morphisms, and more generally for squares, is similar. We denote the two copies of $[m] \boxtimes [m]$ by $\bbN_1$ and $\bbN_2$ for convenience. A free composite occurs whenever there are $$\xymatrix{f_1\co A_1 \ar[r] & B_1}$$ $$\xymatrix{g_2\co B_2 \ar[r] & C_2}$$ in $\bbN_1$ and $\bbN_2$ respectively such that $B_1$ and $B_2$ are identified in the pushout, and further, the images of $[m-1] \boxtimes [m-1]$ contain neither $f_1$ nor $g_2$. Inspection of $d^i \boxtimes d^i$ shows that this does not occur. \end{pf} \begin{rmk} The gluings of Proposition \ref{standardgluingsofdoublecats} are the only kinds of gluings that occur in $c^n \delta_! \Sd^2 \Delta[m]$ and $c^n \delta_! \Sd^2 \Lambda^k[m]$ because of the description of glued simplices in Remark \ref{gluingofsimplices} and the fact that $c^n \delta_!$ is a left adjoint. \end{rmk} \begin{cor} Consider the pushout $\bbP$ in Proposition \ref{standardgluingsofdoublecats}. The application of $\delta^\ast N^n$ to Diagram (\ref{standardgluingsofdoublecatspushout}) is a pushout and is drawn in Diagram (\ref{gluingtogluing}). \begin{equation} \label{gluingtogluing} \xymatrix@R=3pc@C=6pc{\Delta[m-1] \times \cdots \times \Delta[m-1] \ar[r]^-{\delta^\ast N^n(d^i \boxtimes \cdots \boxtimes d^i)} \ar[d]_-{\delta^\ast N^n(d^i \boxtimes \cdots \boxtimes d^i)} & \Delta[m] \times \cdots \times \Delta[m] \ar[d] \\ \Delta[m] \times \cdots \times \Delta[m] \ar[r] & \delta^\ast N^n\bbP} \end{equation} \end{cor} \begin{pf} The functor $N^n$ preserves a pushout whenever there are no free composites in that pushout, which is the case here by Proposition \ref{standardgluingsofdoublecats}. Also, $\delta^\ast$ preserves any pushout, as it admits a right adjoint (the right Kan extension $\delta_\ast$). \end{pf} The $n$-fold version of Proposition \ref{colimitdecomposition} is as follows. \begin{prop} \label{colimitdecompositionnfold} Let $\bfT$ and $F$ be as in Proposition \ref{colimitdecomposition}. In particular, $\bfT$ could be $\bfP\Sd \Delta[m], \Cen, \Out, \Comp$ or $\Comp \cup \Cen$ by Proposition \ref{satisfyhypothesis}. Then $c^n \delta_!N\bfT$ is the union inside of $\bfT \boxtimes \bfT \boxtimes \cdots \boxtimes \bfT$ given by \begin{equation} \label{equ:cndelta!NT} c^n \delta_!N\bfT=\underset{|U|=m+1}{\underset{U \subseteq \bfT \; \; \text{\rm lin. ord.}}{\bigcup}} U \boxtimes U \boxtimes \cdots \boxtimes U. \end{equation} Similarly, if $\bfS=\bfP\Sd\Lambda^k[m]$ or $\bfS=\Out \cap (\Comp \cup \Cen)$, then by Proposition \ref{satisfyhypothesis}, $c^n \delta_!N\bfS$ is the union inside of $\bfS \boxtimes \bfS \boxtimes \cdots \boxtimes \bfS$ given by \begin{equation} \label{equ:cndelta!NTLambda} c^n \delta_!N\bfS=\underset{|U|=m}{\underset{U \subseteq \bfS \; \; \text{\rm lin.
ord.}}{\bigcup}} U \boxtimes U \boxtimes \cdots \boxtimes U. \end{equation} If $\bfT$ or $\bfS$ is any of the respective posets above, then $$c^n \delta_! N \bfT \subseteq \bfP\Sd \Delta[m]\boxtimes \bfP\Sd \Delta[m] \boxtimes \cdots \boxtimes \bfP\Sd \Delta[m]$$ $$c^n \delta_! N \bfS \subseteq \bfP\Sd \Delta[m]\boxtimes \bfP\Sd \Delta[m] \boxtimes \cdots \boxtimes \bfP\Sd \Delta[m].$$ \end{prop} \begin{pf} For any linearly ordered subposet $U$ of $\bfT$ we have $$\aligned c^n\delta_! NU &= c^n(NU \boxtimes NU \boxtimes \cdots \boxtimes NU) \\ &= cNU \boxtimes cNU \boxtimes \cdots \boxtimes cNU \\ &=U \boxtimes U \boxtimes \cdots \boxtimes U. \endaligned$$ Thus we have $$\aligned c^n \delta_! N\bfT &= c^n \delta_! N(\underset{\bfJ}{\colim} F) \text{ by Proposition \ref{colimitdecomposition}}\\ &= c^n \delta_!( \underset{\bfJ}{\colim} NF) \text{ by Corollary \ref{nervecommuteswithcolimitdecomposition}}\\ &=\underset{\bfJ}{\colim} c^n \delta_! NF \text{ because $c^n\delta_!$ is a left adjoint} \\ &=\underset{U \in \bfJ}{\colim} U \boxtimes U \boxtimes \cdots \boxtimes U \\ &=\underset{|U|=m+1}{\underset{U \subseteq \bfT \; \; \text{\rm lin. ord.}}{\bigcup}} U \boxtimes U \boxtimes \cdots \boxtimes U. \endaligned$$ This last equality follows for the same reason that $\bfT$ (=colimit of $F$) is the union of the linearly ordered subposets $U$ of $\bfT$ with exactly $m+1$ elements. See also Proposition \ref{standardgluingsofdoublecats}. \end{pf} \begin{rmk} Note that \begin{equation*} \bfT \boxtimes \bfT \boxtimes \cdots \boxtimes \bfT \supsetneq \underset{|U|=m+1}{\underset{U \subseteq \bfT \; \; \text{\rm lin. ord.}}{\bigcup}} U \boxtimes U \boxtimes \cdots \boxtimes U. \end{equation*} \end{rmk} \begin{defn} An $n$-fold category is an {\it $n$-fold preorder} if for any two objects $A$ and $B$, there is at most one $n$-cube with $A$ in the $(0,0, \dots, 0)$-corner and $B$ in the $(1,1,\dots,1)$-corner. If $\bbD$ is an $n$-fold preorder, we define an ordinary preorder on $\Obj \bbD$ by $A \leq B :\Longleftrightarrow$ there exists an $n$-cube with $A$ in the $(0,0, \dots, 0)$-corner and $B$ in the $(1,1,\dots,1)$-corner. We call an $n$-fold preorder an {\it $n$-fold poset} if $\leq$ is additionally antisymmetric as a preorder on $\Obj \bbD$, that is, $(\Obj \bbD, \leq)$ is a poset. If $\bbT$ is an $n$-fold preorder and $\bbS$ is a sub-$n$-fold preorder, then $\bbS$ is {\it down-closed in $\bbT$} if $A \leq B$ and $B \in \bbS$ implies $A \in \bbS$. If $\bbT$ is an $n$-fold preorder and $\bbS$ is a sub-$n$-fold preorder, then the {\it up-closure} of $\bbS$ in $\bbT$ is the full sub-$n$-category of $\bbT$ on the objects $B$ in $\bbT$ such that $B \geq A$ for some object $A \in \bbS$. \end{defn} \begin{examp} If $\bfT$ is a poset, then $\bfT \boxtimes \bfT \boxtimes \cdots \boxtimes \bfT$ is an $n$-fold poset, and $(a_1, \dots, a_n) \leq (b_1, \dots, b_n)$ if and only if $a_i \leq b_i$ in $\bfT$ for all $1 \leq i \leq n$. If $\bfT$ is as in Proposition \ref{colimitdecomposition}, then the $n$-fold category $c^n\delta_!N\bfT$ is also an $n$-fold poset, as it is contained in the $n$-fold poset $\bfT \boxtimes \bfT \boxtimes \cdots \boxtimes \bfT$ by equation \eqref{equ:cndelta!NT}. \end{examp} \begin{prop} \label{Lambdadownclosednfold} The $n$-fold poset $c^n\delta_!N\bfP\Sd\Lambda^k[m]$ is down-closed in $c^n\delta_!N\bfP\Sd \Delta[m]$. 
\end{prop} \begin{pf} Suppose $(a_1, \dots, a_n) \leq (b_1, \dots, b_n)$ in $c^n\delta_!N\bfP\Sd \Delta[m]$ and $(b_1, \dots, b_n) \in c^n\delta_!N\bfP\Sd\Lambda^k[m]$. We make use of equations \eqref{equ:cndelta!NT} and \eqref{equ:cndelta!NTLambda} in Proposition \ref{colimitdecompositionnfold}. There exists a linearly ordered subposet $V$ of $\bfP\Sd\Lambda^k[m]$ such that $|V|=m$ and $b_1, \dots, b_n \in V$. There also exists a linearly ordered subposet $U$ of $\bfP\Sd \Delta[m]$ such that $|U|=m+1$ and $a_1, \dots, a_n \in U$. In particular, $\{a_1, \dots, a_n\}$ is linearly ordered. The definition of the preorder on $\Obj c^n\delta_!N\bfP\Sd \Delta[m]$ then implies that $a_i \leq b_i$ in $\bfP\Sd \Delta[m]$ for all $i$, so that $a_i \in \bfP\Sd\Lambda^k[m]$ by Proposition \ref{Lambdadownclosed}. Since the length of a maximal chain in $\bfP\Sd\Lambda^k[m]$ is $m$, the linearly ordered poset $\{a_1, \dots, a_n\}$ has at most $m$ elements. By Proposition \ref{satisfyhypothesis}, there exists a linearly ordered subposet $U'$ of $\bfP\Sd\Lambda^k[m]$ such that $|U'|=m$ and $a_1, \dots, a_n \in U'$. Consequently, $(a_1, a_2, \ldots, a_n)\in c^n\delta_!N\bfP\Sd\Lambda^k[m]$, again by equation \eqref{equ:cndelta!NTLambda}. \end{pf} \begin{prop} \label{upclosurenfold} The up-closure of $c^n\delta_!N\bfP\Sd\Lambda^k[m]$ in $c^n\delta_!N\bfP\Sd \Delta[m]$ is contained in $c^n \delta_!N \Out$. \end{prop} \begin{pf} An explicit description of all three $n$-fold posets is given in equations \eqref{equ:cndelta!NT} and \eqref{equ:cndelta!NTLambda} of Proposition \ref{colimitdecompositionnfold}. Recall that $\bfP\Sd\Lambda^k[m]$ and $\Out$ satisfy hypothesis \ref{colimitdecompositioni} of Proposition \ref{colimitdecomposition} for $m$ and $m+1$ respectively (by Proposition \ref{satisfyhypothesis}). Suppose $$A=(a_1, a_2, \ldots, a_n)\leq (b_1, b_2, \ldots, b_n)=B$$ in $c^n\delta_!N\bfP\Sd \Delta[m]$, $A \in c^n\delta_!N\bfP\Sd\Lambda^k[m]$, and $B \in c^n\delta_!N\bfP\Sd \Delta[m]$. Then $\{a_1, a_2, \ldots, a_n \} \subseteq U$ for some linearly ordered subposet $U \subseteq \bfP\Sd\Lambda^k[m]$ with $\vert U \vert =m$, and $\{b_1, b_2, \ldots, b_n \} \subseteq V$ for some linearly ordered subposet $V \subseteq \bfP\Sd\Delta[m]$ with $\vert V \vert =m+1$. We also have $a_i \leq b_i$ in $\bfP\Sd\Delta[m]$ for all $i$, so that each $b_i$ is in the up-closure of $\bfP\Sd\Lambda^k[m]$ in $\bfP\Sd\Delta[m]$, namely in $\Out$. Since equation \eqref{equ:cndelta!NT} holds for $\Out$, we see $B \in c^n \delta_!N \Out$, and therefore the up-closure of $c^n\delta_!N\bfP\Sd\Lambda^k[m]$ is contained in $c^n \delta_!N \Out$. \end{pf} \begin{rmk} \label{Deltafreecompositesnfold} \begin{enumerate} \item If $\alpha$ is an $n$-cube in $c^n\delta_!N\bfP\Sd \Delta[m]$ whose $i$-th target is in $c^n\delta_!N\bfP\Sd\Lambda^k[m]$, then $\alpha$ is in $c^n\delta_!N\bfP\Sd\Lambda^k[m]$. \item If $\alpha$ is an $n$-cube in $c^n\delta_!N\bfP\Sd \Delta[m]$ whose $i$-th source is in $c^n\delta_!N\bfP\Sd\Lambda^k[m]$, then $\alpha$ is in $c^n \delta_!N \Out$. \end{enumerate} \end{rmk} \begin{pf} \begin{enumerate} \item If $\alpha$ is an $n$-cube in $c^n\delta_!N\bfP\Sd \Delta[m]$ whose $i$-th target is in $c^n\delta_!N\bfP\Sd\Lambda^k[m]$, then its $(1,1,\ldots,1)$-corner is in $c^n\delta_!N\bfP\Sd\Lambda^k[m]$, as this corner lies on the $i$-th target. By Proposition \ref{Lambdadownclosednfold}, we then have that $\alpha$ is in $c^n\delta_!N\bfP\Sd\Lambda^k[m]$.
\item If $\alpha$ is an $n$-cube in $c^n\delta_!N\bfP\Sd \Delta[m]$ whose $i$-th source is in $c^n\delta_!N\bfP\Sd\Lambda^k[m]$, then the $(0,0,\ldots,0)$-corner is in $c^n\delta_!N\bfP\Sd\Lambda^k[m]$, as this corner lies on the $i$-th source. By Proposition \ref{upclosurenfold}, we then have that $\alpha$ is in $c^n \delta_!N \Out$. \end{enumerate} \end{pf} Next we describe the diagonal of the nerve of certain $n$-fold categories as a union of $n$-fold products of standard simplices in Proposition \ref{diagonaldecomposition}. This proposition is also an analogue of Corollary \ref{nervecommuteswithcolimitdecomposition} since it says the composite functor $\delta^*N^nc^n\delta_!N$ preserves colimits of certain posets. \begin{lem} \label{lem:n-fold_categorification_of_lin_ordered_poset} For any finite, linearly ordered poset $U$ we have $$\delta^*N^n c^n\delta_! NU=NU \times NU \times \cdots \times NU.$$ \end{lem} \begin{pf} Since $U$ is a finite, linearly ordered poset, $NU$ is isomorphic to $\Delta[m]$ for some non-negative integer $m$, and we have $$\aligned \delta^*N^n c^n\delta_! NU &= \delta^*N^n c^n(NU \boxtimes NU \boxtimes \cdots \boxtimes NU) \\ &=\delta^*N^n (cNU \boxtimes cNU \boxtimes \cdots \boxtimes cNU) \\ &=\delta^*N^n (U \boxtimes U \boxtimes \cdots \boxtimes U) \\ &=\delta^*(NU \boxtimes NU \boxtimes \cdots \boxtimes NU) \\ &=NU \times NU \times \cdots \times NU. \endaligned$$ \end{pf} \begin{lem} \label{lem:n-fold_skeletality} For any finite, linearly ordered poset $U$, the simplicial set $$\delta^*N^n c^n\delta_! NU=NU \times NU \times \cdots \times NU$$ is $M$-skeletal for a large enough $M$ depending on $n$ and the cardinality of $U$. \end{lem} \begin{pf} We prove that there is an $M$ such that all simplices in degrees greater than $M$ are degenerate. Without loss of generality, we may assume $U$ is $[m]$. We have $$\aligned c^n\delta_! N[m] &= c^n \delta_! \Delta[m] \\ &= c^n \left( \Delta[m] \boxtimes \Delta[m] \boxtimes \cdots \boxtimes \Delta[m] \right) \\ &= \left( c\Delta[m] \right) \boxtimes \left( c\Delta[m]\right) \boxtimes \cdots \boxtimes \left(c\Delta[m]\right) \\ &= [m] \boxtimes [m] \boxtimes \cdots \boxtimes [m] \endaligned$$ by Example \ref{examp:categorification_and_external_products}. An $\ell$-simplex in $\delta^*N^n\left([m] \boxtimes [m] \boxtimes \cdots \boxtimes [m] \right)$ is an $\ell \times \ell \times \cdots \times \ell$ array of composable $n$-cubes in $[m] \boxtimes [m] \boxtimes \cdots \boxtimes [m]$, that is, a collection of $n$ sequences of $\ell$ composable morphisms in $[m]$, namely $\left( (f^1_j)_j, (f^2_j)_j, \dots, (f^n_j)_j \right)$ where $1\leq j \leq \ell$ and $f^i_{j+1} \circ f^i_j$ is defined for $j+1\leq\ell$. An $\ell$-simplex is degenerate if and only if there is a $j_0$ such that $f^1_{j_0}, f^2_{j_0}, \dots,f^n_{j_0}$ are all identities. An $\ell$-simplex has $\ell$-many $n$-cubes along its diagonal, namely $$(f^1_{j}, f^2_{j}, \dots,f^n_{j})$$ for $1 \leq j \leq \ell$. Each of the $n$ sequences $(f^i_j)_j$ contains at most $m$ nonidentity morphisms, since every nonidentity morphism in $[m]$ strictly increases objects and a chain in $[m]$ admits at most $m$ strict increases. Hence, for any $\ell\geq 0$ and any $\ell$-simplex $y$, there are at most $M:=nm$ indices $j$ for which the diagonal $n$-cube $$(f^1_{j}, f^2_{j}, \dots,f^n_{j})$$ has at least one $f^i_{j}$ nontrivial. If $\ell > M$ then at least one of the $\ell$-many $n$-cubes on the diagonal must be trivial, by the pigeon-hole principle. Hence, for $\ell > M$, every $\ell$-simplex of $\delta^*N^n c^n\delta_! N[m]$ is degenerate, and $\delta^*N^n c^n\delta_! N[m]$ is $M$-skeletal. \end{pf}
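\begin{examp} For instance, for $n=2$ and $m=1$ the proof yields $M=nm=2$: the simplicial set $\delta^*N^2 c^2\delta_! N[1]=\Delta[1]\times \Delta[1]$ has nondegenerate simplices only in dimensions $0$, $1$, and $2$ (the top-dimensional ones being its two triangles), so it is $2$-skeletal. \end{examp}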
\begin{prop} \label{diagonaldecomposition} Let $m \geq 1$ be a positive integer and $\bfT$ a poset satisfying the hypotheses \ref{simplicial_colimitdecompositioni} and \ref{simplicial_colimitdecompositionii} of Proposition \ref{simplicial_colimitdecomposition}. In particular, $\bfT$ could be $\bfP\Sd \Delta[m]$, $\Cen$, $\Out$, $\Comp$, or $\Comp \cup \Cen$ by Proposition \ref{satisfyhypothesis}. Let the functor $\xymatrix@1{F\co\bfJ \ar[r] & \mathbf{Cat}}$ and the universal cocone $\xymatrix@1{\pi\co F \ar@{=>}[r] & \Delta_\bfT}$ be as indicated in Proposition \ref{colimitdecomposition}. Then $$\aligned \delta^* N^n c^n \delta_! N\bfT &=\underset{\bfJ}{\colim} \delta^* N^n c^n \delta_! NF \\ &= \underset{\bfJ}{\colim} (NF \times \cdots \times NF)\endaligned$$ where $NF(U)$ is isomorphic to $\Delta[m-1]$ or $\Delta[m]$ for all $U \in \bfJ$. Similarly, the simplicial sets $\delta^* N^n c^n \delta_! N(\bfP\Sd\Lambda^k[m])$ and $$\delta^* N^n c^n \delta_! N(\Out \cap (\Comp \cup \Cen))$$ are each a colimit of simplicial sets of the form $\Delta[m-2]\times \cdots \times \Delta[m-2]$ and $\Delta[m-1] \times \cdots \times \Delta[m-1]$. (By definition $[-1]=\emptyset$.) \end{prop} \begin{pf} We first prove directly that $\delta^* N^n c^n \delta_! N\bfT$ is a colimit of $\xymatrix@1{\delta^* N^n c^n \delta_! NF \co \bfJ \ar[r] & \mathbf{SSet}}$ along the lines of the proof of Proposition \ref{simplicial_colimitdecomposition}. Let $M>m$ be a large enough integer such that the simplicial set $\delta^*N^n c^n\delta_! N[m]$ is $M$-skeletal. Such an $M$ is guaranteed by Lemma \ref{lem:n-fold_skeletality}. Suppose $S\in\mathbf{SSet}$ and $\xymatrix@1{\alpha:\delta^* N^n c^n \delta_! NF \ar@{=>}[r] & \Delta_S}$ is a natural transformation. We induce a morphism of simplicial sets $$\xymatrix@1{G:\delta^* N^n c^n \delta_! N\bfT \ar[r] & S}$$ by defining $G$ on the $M$-skeleton as follows. As in the proof of Proposition \ref{simplicial_colimitdecomposition}, $\Delta_{M}$ denotes the full subcategory of $\Delta$ on the objects $[0], [1], \dots, [M]$ and $\xymatrix@1{\text{tr}_{M} \co \mathbf{SSet} \ar[r] & \mathbf{Set}^{\Delta_{M}^\text{op}}}$ denotes the $M$-th truncation functor. The truncation $\text{tr}_{M} (\delta^* N^n (c^n \delta_! N\bfT))$ is a union of the truncated simplicial subsets $\text{tr}_{M}(\delta^* N^n (c^n \delta_! N\bfV))$ for $V \in \bfJ$ with $\vert V \vert=m+1$, since \begin{itemize} \item $c^n\delta_!N\bfT$ is a union of such $c^n \delta_! N\bfV$ by Proposition \ref{colimitdecompositionnfold}, \item any maximal linearly ordered subset of $\bfT$ has $m+1$ elements, and \item $\delta^*N^n$ preserves unions. \end{itemize} We define $$\xymatrix{G_{M}\vert_{\text{tr}_{M}(\delta^* N^n (c^n \delta_! N\bfV))} \co \text{tr}_{M}(\delta^* N^n (c^n \delta_! N\bfV)) \ar[r] & \text{tr}_{M}S}$$ simply as $\text{tr}_{M}\alpha_{V}$. The morphism $G_M$ is well-defined, for if $0\leq\ell \leq M$ and $x \in \text{tr}_{M}(\delta^* N^n c^n \delta_! N\bfV)_\ell$ and $x \in \text{tr}_{M}(\delta^* N^n c^n \delta_! N\bfV')_\ell$ with $\vert V \vert=m+1=\vert V' \vert$, then $V$ and $V'$ can be connected by a sequence $W^0,W^1, \dots, W^k$ of $(m+1)$-element linearly ordered subsets of $\bfT$ that all contain the linearly ordered subposet $x$ and satisfy the properties in hypothesis \ref{simplicial_colimitdecompositionii}.
By a naturality argument as in the proof of Proposition \ref{colimitdecomposition}, we have a string of equalities $$\alpha_{W^0}(x)=\alpha_{W^1}(x)=\cdots =\alpha_{W^k}(x),$$ and we conclude $\alpha_{V}(x)=\alpha_{V'}(x)$ so that $G_M(x)$ is well defined. By definition $\Delta_{G_M} \circ \text{tr}_M N\pi=\text{tr}_M \alpha$. We may extend this to non-truncated simplicial sets by recalling from above that the simplicial set $\delta^* N^n c^n \delta_! N\bfT$ is {\it $M$-skeletal}, that is, the counit inclusion $$\xymatrix@1{\text{sk}_M\text{tr}_M(\delta^* N^n c^n \delta_! N\bfT) \ar[r] & \delta^* N^n c^n \delta_! N\bfT}$$ is the identity. Thus $G_M$ extends to $\xymatrix@1{G\co \delta^* N^n c^n \delta_! N\bfT \ar[r] & S}$ and $\Delta_{G} \circ \delta^* N^n c^n \delta_! N\pi=\alpha$. Lastly, the morphism $G$ is unique, since the simplicial subsets $\delta^* N^n c^n \delta_! N\bfV$ for $\vert V \vert=m+1$ in $\bfJ$ cover $\delta^* N^n c^n \delta_! N\bfT$ by hypothesis \ref{simplicial_colimitdecompositioni}. So far we have proved $\delta^* N^n c^n \delta_! N\bfT =\underset{\bfJ}{\colim} \delta^* N^n c^n \delta_! NF$. It only remains to show $\underset{\bfJ}{\colim} \delta^* N^n c^n \delta_! NF=\underset{\bfJ}{\colim} (NF \times \cdots \times NF)$. But this follows from Lemma \ref{lem:n-fold_categorification_of_lin_ordered_poset} and the fact that $FV=V$ for all $V \in \bfJ$. \end{pf} The $n$-fold version of Proposition \ref{deformationretract} is the following. \begin{cor} \label{nfolddeformationretract} The space $|\delta^*N^n c^n \delta_! N (\Out \cap (\Comp \cup \Cen))|$ includes into the space $|\delta^* N^n c^n \delta_!N(\Comp \cup \Cen)|$ as a deformation retract. \end{cor} \begin{pf} Recall that realization $|\cdot|$ commutes with colimits, since it is a left adjoint, and that $|\cdot|$ also commutes with products. We apply the multi-stage deformation retraction of Proposition \ref{deformationretract} to each factor $|\Delta[m]|$ of $|\Delta[m]|\times \cdots \times |\Delta[m]|$ in the colimit of Proposition \ref{diagonaldecomposition}. This is the desired deformation retraction of $|\delta^* N^n c^n \delta_! N(\Comp \cup \Cen)|$ to $|\delta^*N^n c^n \delta_! N(\Out \cap (\Comp \cup \Cen))|$. \end{pf} \begin{prop} \label{PushoutDescription} Consider $n=2$. Let $\xymatrix@1{j\co\Lambda^k[m] \ar[r] & \Delta[m]}$ be a generating acyclic cofibration for $\mathbf{SSet}$, $\bbB$ a double category, and $L$ a double functor as below. Then the pushout $\bbQ$ in the diagram \begin{equation} \label{PushoutDiagram} \xymatrix@C=3pc@R=3pc{c^2\delta_!\Sd^2\Lambda^k[m] \ar[d]_{c^2\delta_!\Sd^2j} \ar[r]^-L & \bbB \ar[d] \\ c^2\delta_!N \Out \ar[r] & \bbQ} \end{equation} has the following form. \begin{enumerate} \item \label{PushoutDescription_object_set} The object set of $\bbQ$ is the pushout of the object sets. \item \label{PushoutDescription_horizontal} The set of horizontal morphisms of $\bbQ$ consists of the set of horizontal morphisms of $\bbB$, the set of horizontal morphisms of $c^2\delta_!N\Out$, and the set of formal composites of the form $$\xymatrix@C=3pc{\ar[r]^{f_1} & \ar[r]^{(1,f_2)} & }$$ where $f_1$ is a horizontal morphism in $\bbB$, $f_2$ is a morphism in $\Out$, and the target of $f_1$ is the source of $(1,f_2)$ in $\Obj \bbQ$.
\item \label{PushoutDescription_vertical} The set of vertical morphisms of $\bbQ$ consists of the set of vertical morphisms of $\bbB$, the set of vertical morphisms of $c^2\delta_!N\Out$, and the set of formal composites of the form $$\xymatrix@R=3pc{\ar[d]^{g_1} \\ \ar[d]^{(g_2,1)} \\ \\}$$ where $g_1$ is a vertical morphism in $\bbB$, $g_2$ is a morphism in $\Out$, and the target of $g_1$ is the source of $(g_2,1)$ in $\Obj \bbQ$. \item \label{PushoutDescription_squares} The set of squares of $\bbQ$ consists of the set of squares of $\bbB$, the set of squares of $c^2\delta_!N\Out$, and the set of formal composites of the following three forms. \begin{enumerate} \item \label{PushoutDescription_squares_a} $$\xymatrix@R=4pc@C=4pc{ \ar[r]^{f_1} \ar[d]_{g_1} \ar@{}[dr]|{\alpha_1} & (W,A') \ar[r]^{(1_W,f_2)} \ar[d]|{\tb{(g,1_{A'})}} & (W,B') \ar[d]^{(g,1_{B'})} \\ \ar[r]_{p_1} & (A,A') \ar[r]_{(1_A,f_2)} & (A,B')}$$ \item \label{PushoutDescription_squares_b} $$\xymatrix@R=4pc@C=4pc{\ar[r]^{f_1} \ar[d]_{g_1} \ar@{}[dr]|{\beta_1} & \ar[d]^{q_1} \\ (A,W') \ar[r]|{\lr{(1_A,f)}} \ar[d]_{(g_2,1_{W'})} & (A,A') \ar[d]^{(g_2,1_{A'})} \\ (B,W') \ar[r]_{(1_B,f)} & (B,A') }$$ \item \label{PushoutDescription_squares_c} $$\xymatrix@R=4pc@C=4pc{\ar[r]^{f_1} \ar[d]_{g_1} \ar@{}[dr]|{\gamma_1} & (W,A') \ar[r]^{(1_W,f_2)} \ar[d]|{\tb{(g,1_{A'})}} & (W,B') \ar[d]^{(g,1_{B'})} \\ (A,W') \ar[r]|{\lr{(1_A,f)}} \ar[d]_{(g_2,1_{W'})} & (A,A') \ar[r]|{\lr{(1_A,f_2)}} \ar[d]|{\tb{(g_2,1_{A'})}} & (A,B') \ar[d]^{(g_2,1_{B'})} \\ (B,W') \ar[r]_{(1_B,f)} & (B,A') \ar[r]_{(1_B,f_2)} & (B,B') }$$ \end{enumerate} where $\alpha_1,\beta_1,\gamma_1$ are squares in $\bbB$, the horizontal morphisms $f_1,p_1$ are in $\bbB$, the vertical morphisms $g_1,q_1$ are in $\bbB$, and the morphisms $f$, $f_2$, $g$, $g_2$ are in $\Out$. Further, each boundary of each square in $c^2\delta_!N\Out$ must belong to a linearly ordered subset of $\Out$ of cardinality $m+1$ (see Proposition \ref{colimitdecompositionnfold}). So for example, $f$ and $g_2$ must belong to a linearly ordered subset of $\Out$ of cardinality $m+1$, and $f_2$ and $g$ must belong to another linearly ordered subset of $\Out$ of cardinality $m+1$. Of course, the sources and targets in each of \ref{PushoutDescription_squares_a}, \ref{PushoutDescription_squares_b}, and \ref{PushoutDescription_squares_c} must match appropriately. \end{enumerate} \end{prop} \begin{pf} All of this follows from the colimit formula in $\mathbf{DblCat}$, which is Theorem 4.6 of \cite{fiorepaolipronk1}, and is also a special case of Proposition \ref{prop:forgetful_admits_right_adjoint} in the present paper. The horizontal and vertical 1-categories of $\bbQ$ are the pushouts of the horizontal and vertical 1-categories, so \ref{PushoutDescription_object_set} follows, and then \ref{PushoutDescription_horizontal} and \ref{PushoutDescription_vertical} follow from Remark \ref{Deltafreecomposites}. To see \ref{PushoutDescription_squares}, one observes that the only free composite pairs of squares that can occur are of the first two forms, again from Remark \ref{Deltafreecomposites}. Certain of these can be composed with a square in $c^2 \delta_! N \Out$ to obtain the third form. No further free composites can be obtained from these ones because of Remark \ref{Deltafreecomposites} and the special form of $c^2 \delta_! N \Out$. \end{pf} \begin{prop} \label{pushoutsimplexdescription} Consider $n=2$ and the pushout $\bbQ$ in diagram (\ref{PushoutDiagram}). 
Then any $q$-simplex in $\delta^*N^2\bbQ$ is a $q\times q$-matrix of composable squares of $\bbQ$ which has the form in Figure \ref{pushoutsimplex}. \begin{figure} \setlength{\unitlength}{1mm} \begin{picture}(50,50) \put(0,0){\line(1,0){50}} \put(0,0){\line(0,1){50}} \put(50,0){\line(0,1){50}} \put(0,50){\line(1,0){50}} \put(30,30){\line(0,1){20}} \put(35,30){\line(0,1){20}} \put(0,30){\line(1,0){35}} \put(0,35){\line(1,0){35}} \put(15,40){\makebox(0,0)[b]{$\bbB$}} \put(32.5,31){\makebox(0,0)[b]{$c$}} \put(32.5,40){\makebox(0,0)[b]{$a$}} \put(15,31){\makebox(0,0)[b]{$b$}} \put(25,15){\makebox(0,0)[b]{$c^2\delta_!N\Out$}} \end{picture} \caption{A $q$-simplex in $\delta^*N^2\bbQ$.}\label{pushoutsimplex} \end{figure} The submatrix labelled $\bbB$ is a matrix of squares in $\bbB$. The submatrix labelled $a$ is a single column of squares of the form \ref{PushoutDescription_squares_a} in Proposition \ref{PushoutDescription} \ref{PushoutDescription_squares} (the $\alpha_1$'s may be trivial). The submatrix labelled $b$ is a single row of squares of the form \ref{PushoutDescription_squares_b} in Proposition \ref{PushoutDescription} \ref{PushoutDescription_squares} (the $\beta_1$'s may be trivial). The submatrix labelled $c$ is a single square of the form \ref{PushoutDescription_squares_c} in Proposition \ref{PushoutDescription} \ref{PushoutDescription_squares} (part of the square may be trivial). The remaining squares in the $q$-simplex are squares of $c^2\delta_!N\Out$. \end{prop} \begin{pf} These are the only composable $q\times q$-matrices of squares because of the special form of the horizontal and vertical 1-categories. \end{pf} \begin{rmk} The analogues of Propositions \ref{PushoutDescription} and \ref{pushoutsimplexdescription} clearly hold in higher dimensions as well, only the notation gets more complicated. Proposition \ref{prop:forgetful_admits_right_adjoint} provides the key to proving the higher dimensional versions, namely, it allows us to calculate the pushout in $\mathbf{nFoldCat}$ in steps: first the object set of the pushout, then sub-1-categories of the pushout in all $n$-directions, then the squares in the sub-double-categories of the pushout in each direction $ij$, then the cubes in the sub-3-fold-categories of the pushout in each direction $ijk$, and so on. Since we do not need the explicit formulations of Propositions \ref{PushoutDescription} and \ref{pushoutsimplexdescription} for $n>2$ in this paper, we refrain from stating and proving them. In fact, we do not even need the case $n=2$ for this paper; we only presented Propositions \ref{PushoutDescription} and \ref{pushoutsimplexdescription} as an illustration of how the pushout in $\mathbf{nFoldCat}$ works in a specific case. \end{rmk} The $n$-fold version of Proposition \ref{nervecommuteswithpushout} is the following. \begin{prop} \label{nfoldnervecommuteswithpushout} Suppose $\bbQ$, $\bbR$, and $\bbS$ are $n$-fold categories, and $\bbS$ is an $n$-foldly full $n$-fold subcategory of $\bbQ$ and $\bbR$ such that \begin{enumerate} \item \label{nfoldnervecommuteswithpushouti} If $\xymatrix@1{f:x \ar[r] & y }$ is a 1-morphism in $\bbQ$ (in any direction) and $x \in \bbS$, then $y \in \bbS$, \item \label{nfoldnervecommuteswithpushoutii} If $\xymatrix@1{f:x \ar[r] & y }$ is a 1-morphism in $\bbR$ (in any direction) and $x \in \bbS$, then $y \in \bbS$. \end{enumerate} Then the nerve of the pushout of $n$-fold categories is the pushout of the nerves.
\begin{equation} N^n(\bbQ \coprod_\bbS \bbR) \cong N^n\bbQ \coprod_{N^n\bbS} N^n\bbR \end{equation} \end{prop} \begin{pf} We claim that there are no free composite $n$-cubes in the pushout $\bbQ \coprod_\bbS \bbR$. Suppose that $\alpha$ is an $n$-cube in $\bbQ$ and $\beta$ is an $n$-cube in $\bbR$ and that these are composable in the $i$-th direction. In other words, the $i$-th target of $\alpha$ is the $i$-th source of $\beta$, which we will denote by $\gamma$. Then $\gamma$ must be an $(n-1)$-cube in $\bbS$, as it lies in both $\bbQ$ and $\bbR$. Since the corners of $\gamma$ are in $\bbS$, we can use hypothesis \ref{nfoldnervecommuteswithpushoutii} to conclude that all corners of $\beta$ are in $\bbS$ by travelling along edges that emanate from $\gamma$. By the fullness of $\bbS$, the cube $\beta$ is in $\bbS$, and hence also in $\bbQ$. Then $\beta \circ_i \alpha$ is in $\bbQ$ and is not free. If $\alpha$ is in $\bbR$ and $\beta$ is in $\bbQ$, we can similarly conclude that $\beta$ is in $\bbS$, $\beta \circ_i \alpha$ is in $\bbR$, and $\beta \circ_i \alpha$ is not a free composite. Thus, the pushout $\bbQ \coprod_\bbS \bbR$ has no free composite $n$-cubes, and hence no free composites of any cells at all. Let $(\alpha_{\overline{j}})_{\overline{j}}$ be a $p$-simplex in $N^n(\bbQ \coprod_\bbS \bbR)$. Then each $\alpha_{\overline{j}}$ is an $n$-cube in $\bbQ$ or $\bbR$, since there are no free composites. By repeated application of the argument above, if $\alpha_{(0,\ldots,0)}$ is in $\bbQ$ then every $\alpha_{\overline{j}}$ is in $\bbQ$. Similarly, if $\alpha_{(0,\ldots,0)}$ is in $\bbR$ then every $\alpha_{\overline{j}}$ is in $\bbR$. Thus we have a morphism $\xymatrix{N^n(\bbQ \coprod_\bbS \bbR) \ar[r] & N^n\bbQ \coprod_{N^n\bbS} N^n\bbR}$. Its inverse is the canonical morphism $\xymatrix{N^n\bbQ \coprod_{N^n\bbS} N^n\bbR \ar[r] & N^n(\bbQ \coprod_\bbS \bbR) }$. Note that we have not used the higher dimensional versions of Propositions \ref{PushoutDescription} and \ref{pushoutsimplexdescription} anywhere in this proof. \end{pf} \section{Thomason Structure on $\mathbf{nFoldCat}$}\label{Thomasonnfoldsection} We apply Corollary \ref{KanCorollary} to transfer across the adjunction below. \begin{equation}\label{nfoldcatadjunction} \xymatrix@C=4pc{\mathbf{SSet} \ar@{}[r]|{\perp} \ar@/^1pc/[r]^-{\Sd^2} & \ar@/^1pc/[l]^-{\Ex^2} \mathbf{SSet} \ar@{}[r]|{\perp} \ar@/^1pc/[r]^-{\delta_!} & \ar@/^1pc/[l]^-{\delta^\ast} \mathbf{SSet^n} \ar@{}[r]|{\perp} \ar@/^1pc/[r]^-{c^n} & \ar@/^1pc/[l]^-{N^n} \mathbf{nFoldCat}} \end{equation} \begin{prop} \label{Ex_squared} Let $F$ be an $n$-fold functor. Then the morphism of simplicial sets $\delta^\ast N^nF$ is a weak equivalence if and only if $\Ex^2\delta^\ast N^nF$ is a weak equivalence. \end{prop} \begin{pf} This follows from two applications of Lemma \ref{ExPreservesAndReflectsWEs}. \end{pf} \begin{thm} \label{MainModelStructure} There is a model structure on $\mathbf{nFoldCat}$ in which an $n$-fold functor $F$ is a weak equivalence (respectively fibration) if and only if $\Ex^2\delta^\ast N^nF$ is a weak equivalence (respectively fibration) in $\mathbf{SSet}$.
Moreover, this model structure on $\mathbf{nFoldCat}$ is cofibrantly generated with generating cofibrations $$\{\xymatrix{c^n\delta_!\Sd^2\partial\Delta[m] \ar[r] & c^n\delta_!\Sd^2\Delta[m]}|\; m \geq 0\}$$ and generating acyclic cofibrations $$\{\xymatrix{c^n\delta_!\Sd^2\Lambda^k[m] \ar[r] & c^n\delta_!\Sd^2\Delta[m]}|\; 0 \leq k \leq m \text{ and } m \geq 1\}.$$ \end{thm} \begin{pf} We apply Corollary \ref{KanCorollary}. \begin{enumerate} \item The $n$-fold categories $c^n\delta_!\Sd^2\partial\Delta[m]$ and $c^n\delta_!\Sd^2\Lambda^k[m]$ each have a finite number of $n$-cubes, hence they are finite, and are small with respect to $\mathbf{nFoldCat}$. For a proof, see Proposition 7.7 of \cite{fiorepaolipronk1} and the remark immediately afterwards. \item This holds as in the proof of \ref{CatCaseii} in Theorem \ref{CatCase}. \item The $n$-fold nerve functor $N^n$ preserves filtered colimits. Every ordinal is filtered, so $N^n$ preserves $\lambda$-sequences. The functor $\delta^\ast$ preserves all colimits, as it is a left adjoint. The functor $\Ex$ preserves $\lambda$-sequences as in the proof of \ref{CatCaseiii} in Theorem \ref{CatCase}. \item Let $\xymatrix@1{j\co\Lambda^k[m] \ar[r] & \Delta[m]}$ be a generating acyclic cofibration for $\mathbf{SSet}$. Let the functor $j'$ be the pushout along $L$ as in the following diagram with $m \geq 1$. \begin{equation} \label{L} \xymatrix@C=3pc@R=3pc{c^n\delta_!\Sd^2\Lambda^k[m] \ar[d]_{c^n\delta_!\Sd^2j} \ar[r]^-L & \bbB \ar[d]^{j'} \\ c^n\delta_!\Sd^2\Delta[m] \ar[r] & \bbP} \end{equation} We factor $j'$ into two inclusions \begin{equation} \label{nfoldinclusions} \xymatrix{\bbB \ar[r]^i & \bbQ \ar[r] & \bbP} \end{equation} and show that $\delta^*N^n$ applied to each yields a weak equivalence. For the first inclusion $i$, we will see in Lemma \ref{nfoldsimplicialdefretract} that $\delta^* N^n i$ is a weak equivalence of simplicial sets. By Remark \ref{Deltafreecompositesnfold}, the only free composites of an $n$-cube in $c^n \delta_! \Sd^2 \Delta[m]$ with an $n$-cube in $\bbB$ that can occur in $\bbP$ are of the form $\beta \circ_i \alpha$ where $\alpha$ is an $n$-cube in $\bbB$ and $\beta$ is an $n$-cube in $c^n \delta_! N \Out$ with $i$-th source in $c^n \delta_! N \bfP \Sd \Lambda^k[m]$ and $i$-th target outside of $c^n \delta_! N \bfP \Sd \Lambda^k[m]$. Of course, there are other free composites in $\bbP$, most generally of a form analogous to Proposition \ref{PushoutDescription} \ref{PushoutDescription_squares_c}, but these are obtained by composing the free composites of the form $\beta \circ_i \alpha$ above. Hence $\bbP$ is the union \begin{equation} \label{decompositionnfold} \bbP=\overbrace{(\bbB \coprod_{c^n\delta_!N\bfP\Sd\Lambda^k[m]} c^n\delta_!N\Out)}^\bbQ \cup \overbrace{(c^n\delta_!N(\Comp \cup \Cen ))}^{\bbR}. \end{equation} Note that we have not used the higher dimensional versions of Propositions \ref{PushoutDescription} and \ref{pushoutsimplexdescription} to draw this conclusion. We show that $\delta^*N^n$ applied to the second inclusion $\xymatrix@1{\bbQ \ar[r] & \bbP}$ in equation \eqref{nfoldinclusions} is a weak equivalence. The intersection of $\bbQ$ and $\bbR$ in (\ref{decompositionnfold}) is equal to $$\aligned \bbS&=c^n\delta_!N(\Out)\cap c^n\delta_!N(\Comp\cup\Cen) \\ &=c^n\delta_!N(\Out\cap(\Comp\cup\Cen)).
\endaligned$$ Propositions \ref{nervecommuteswithpushouthypothesisverification} and \ref{colimitdecompositionnfold} then imply that $\bbQ$, $\bbR$, and $\bbS$ satisfy the hypotheses of Proposition \ref{nfoldnervecommuteswithpushout}. Then \begin{equation*} \aligned |\delta^*N^n\bbQ| & \cong |\delta^*N^n\bbQ| \coprod_{|\delta^*N^n\bbS|} |\delta^*N^n\bbS| \text{ (pushout along identity) }\\ & \simeq |\delta^*N^n\bbQ| \coprod_{|\delta^*N^n\bbS|} |\delta^*N^n\bbR| \text{ (Cor. \ref{nfolddeformationretract} and Gluing Lemma)}\\ & \cong |\delta^*\left(N^n\bbQ \coprod_{N^n\bbS} N^n\bbR \right)| \text{ (the functors $|\cdot|$ and $\delta^*$ are left adjoints)} \\ & \cong |\delta^*N^n(\bbQ \coprod_\bbS \bbR)| \text{ (Prop. \ref{nfoldnervecommuteswithpushout}) } \\ & = |\delta^*N^n\bbP|. \endaligned \end{equation*} In the second line, for the application of the Gluing Lemma, we use two identities and the inclusion $\xymatrix@1{|\delta^*N^n\bbS| \ar[r] & |\delta^*N^n\bbR|}$. It is a homotopy equivalence whose homotopy inverse is the retraction in Corollary \ref{nfolddeformationretract}. We conclude that the inclusion $\xymatrix@1{|\delta^*N^n\bbQ| \ar[r] & |\delta^*N^n\bbP|}$ is a weak equivalence, as it is the composite of the morphisms above. It is even a homotopy equivalence by Whitehead's Theorem. We conclude that $|\delta^*N^n j'|$ is the composite of two weak equivalences $$\xymatrix@C=4pc{|\delta^*N^n\bbB| \ar[r]^{|\delta^*N^ni|} & |\delta^*N^n\bbQ| \ar[r] & |\delta^*N^n \bbP|}$$ and is therefore a weak equivalence. Thus $\delta^*N^n j'$ is a weak equivalence of simplicial sets. By Lemma \ref{ExPreservesAndReflectsWEs}, the functor $\Ex$ preserves weak equivalences, so that $\Ex^2\delta^*N^n j'$ is also a weak equivalence of simplicial sets. Part \ref{KanCorollaryiv} of Corollary \ref{KanCorollary} then holds, and we have the Thomason model structure on $\mathbf{nFoldCat}$. \end{enumerate} \end{pf} \begin{lem} \label{nfoldsimplicialdefretract} The inclusion $\xymatrix@1{\delta^\ast N^n i\co \delta^\ast N^n\bbB \ar[r] & \delta^\ast N^n \bbQ}$ embeds the simplicial set $\delta^\ast N^n\bbB$ into $\delta^\ast N^n \bbQ$ as a simplicial deformation retract. \end{lem} \begin{pf} Recall $\xymatrix@1{i\co \bbB \ar[r] & \bbQ}$ is the inclusion in equation \eqref{nfoldinclusions} and $\bbQ$ is defined as in equation \eqref{decompositionnfold}. We define an $n$-fold functor $\xymatrix@1{\overline{r}\co \bbQ \ar[r] & \bbB}$ using the universal property of the pushout $\bbQ$ and the functor from Proposition \ref{outer} \ref{outerii} called $\xymatrix@1{r\co \Out \ar[r] & \bfP\Sd \Lambda^k[m]}$. If $(v_0, \dots, v_q)\in \Out$ then $r(v_0, \dots, v_q):=(u_0, \dots, u_p)$ where $(u_0, \dots, u_p)$ is the maximal subset $$\{u_0,\dots, u_p\} \subseteq \{v_0,\dots, v_q\}$$ that is in $\bfP\Sd \Lambda^k[m]$. We have $$\aligned c^n\delta_!N\bfP\Sd \Lambda^k[m]&= \underset{|U|=m}{\underset{U \subseteq \bfP\Sd \Lambda^k[m] \; \; \text{\rm lin. ord.}}{\bigcup}} U \boxtimes U \boxtimes \cdots \boxtimes U \\ &\subseteq \underset{|U|=m+1}{\underset{U \subseteq \Out \; \; \text{\rm lin. ord.}}{\bigcup}} U \boxtimes U \boxtimes \cdots \boxtimes U \\ &= c^n\delta_!N\Out. \endaligned$$ Recall $L$ is the $n$-fold functor in diagram (\ref{L}). We define $\overline{r}$ on $c^n\delta_!N\Out$ to be $$\xymatrix@1{L \circ (r\boxtimes r \boxtimes \cdots \boxtimes r) \co c^n\delta_!N\Out \ar[r] & \bbB}$$ and we define $\overline{r}$ to be the identity on $\bbB$.
This induces the desired $n$-fold functor $\xymatrix@1{\overline{r}\co \bbQ \ar[r] & \bbB}$ by the universal property of the pushout $\bbQ$. By definition we have $\overline{r}i=1_\bbB$. We next define an $n$-fold natural transformation $\xymatrix@1{\overline{\alpha}\co i\overline{r} \ar@{=>}[r] & 1_\bbQ}$ (see Definition \ref{defn_nfold_nat_transf}), which will induce a simplicial homotopy from $\delta^*N^n(i\overline{r})$ to $1_{\delta^*N^n\bbQ}$ as in Proposition \ref{nfoldnat_gives_simplicial_homotopy}. Let $$\xymatrix@1{f_1\co \bbB \ar[r] & \bbB^{[1]\boxtimes \cdots \boxtimes [1]}}$$ $$\xymatrix@1{f_2\co c^n\delta_!N\Out \ar[r] & \bbB^{[1]\boxtimes \cdots \boxtimes [1]}}$$ be the $n$-fold functors corresponding to the $n$-fold natural transformations $$\xymatrix@1{pr_\bbB \co \bbB \times ([1]\boxtimes \cdots \boxtimes [1]) \ar[r] & \bbB}$$ $$\xymatrix@1{L \circ (\alpha \boxtimes \cdots \boxtimes \alpha)\co c^n\delta_!N\Out \times ([1]\boxtimes \cdots \boxtimes [1]) \ar[r] & \bbB}$$ (recall $\mathbf{nFoldCat}$ is Cartesian closed by Ehresmann--Ehresmann \cite{ehresmannthree}, the definition of $\alpha$ in Proposition \ref{outer} \ref{outerii}, and Example \ref{examp:n_naturaltransfs_yield_an_nfold_naturaltransf}). Then the necessary square involving $f_1$, $f_2$, $L$ and the inclusion $$\xymatrix{c^n\delta_!N\bfP\Sd \Lambda^k[m] \ar[r] & c^n\delta_!N\Out}$$ commutes ($\alpha\boxtimes \cdots \boxtimes \alpha$ is trivial on $c^n\delta_!N\bfP\Sd \Lambda^k[m]$), so we have an $n$-fold functor $\xymatrix@1{f\co \bbQ \ar[r] & \bbB^{[1]\boxtimes \cdots \boxtimes [1]}}$, which corresponds to an $n$-fold natural transformation $$\xymatrix{\overline{\alpha}\co i\overline{r} \ar@{=>}[r] & 1_\bbQ}.$$ Thus $\overline{\alpha}$ induces a simplicial homotopy from $\delta^*N^n(i)\circ \delta^*N^n(\overline{r})$ to $1_{\delta^*N^n\bbQ}$ and from above we have $\delta^*N^n(\overline{r}) \circ \delta^*N^n(i)=1_{\delta^*N^n\bbB}$. This completes the proof that the inclusion $\xymatrix@1{\delta^\ast N^n i\co \delta^\ast N^n\bbB \ar[r] & \delta^\ast N^n \bbQ}$ embeds the simplicial set $\delta^\ast N^n\bbB$ into $\delta^\ast N^n \bbQ$ as a simplicial deformation retract. We next write out what this simplicial homotopy is in the case $n=2$. We denote by $\sigma$ this simplicial homotopy from $\delta^*N^n(i\overline{r})$ to $1_{\delta^*N^n\bbQ}$. For each $q$, we need to define $q+1$ maps $\xymatrix@1{\sigma_\ell:(\delta^*N^n\bbQ)_q \ar[r] & (\delta^*N^n\bbQ)_{q+1}}$ compatible with the face and degeneracy maps, $\delta^*N^n(i\overline{r})$, and $1_{\delta^*N^n\bbQ}$. We define $\sigma_\ell$ on a $q$-simplex $\alpha$ of the form in Proposition \ref{pushoutsimplexdescription}. {\it This $q$-simplex $\alpha$ has nothing to do with the $n$-fold natural transformation $\alpha$ above.} Suppose that the unique square of type \ref{PushoutDescription_squares_c} of Proposition \ref{PushoutDescription} is in entry $(u,v)$ and $u \leq v$. If $\ell<u$, then $\sigma_\ell(\alpha)$ is obtained from $\alpha$ by inserting a row of vertical identities between rows $\ell$ and $\ell+1$ of $\alpha$, as well as a column of horizontal identity squares between columns $\ell$ and $\ell+1$ of $\alpha$. Thus $\sigma_\ell(\alpha)$ is vertically trivial in row $\ell+1$ and horizontally trivial in column $\ell+1$.
If $\ell=u$ and $u <v$, then to obtain $\sigma_{\ell}(\alpha)$ from $\alpha$, we replace row $u$ by the two rows that make row $u$ into a row of formal vertical composites, and we insert a column of horizontal identity squares between column $u$ and column $u+1$ of $\alpha$. If $\ell=u$ and $u=v$, then to obtain $\sigma_{\ell}(\alpha)$ from $\alpha$, we replace row $u$ by the two rows that make row $u$ into a row of formal vertical composites, and we replace column $u$ by the two columns that make column $u$ into a column of formal horizontal composites. If $u<\ell<v$, then to obtain $\sigma_{\ell}(\alpha)$ from $\alpha$, we replace row $u$ by the row of squares $\beta_1$ in $\bbB$ that make up the first part of the formal vertical composite row $u$ (consisting partly of region $b$ of Proposition \ref{pushoutsimplexdescription}), then rows $u+1, u+2, \ldots, \ell$ of $\sigma_{\ell}(\alpha)$ are identity rows, row $\ell+1$ of $\sigma_{\ell}(\alpha)$ is the composite of the bottom half of row $u$ of $\alpha$ with rows $u+1, u+2, \ldots, \ell$ of $\alpha$, and the remaining rows of $\sigma_\ell(\alpha)$ are the remaining rows of $\alpha$ (shifted down by 1). We also insert a column of horizontal identity squares between column $\ell$ and column $\ell+1$ of $\alpha$. If $u<\ell=v$, then to obtain $\sigma_{\ell}(\alpha)$ from $\alpha$, we do the row construction as in the case $u<\ell<v$, and we also replace column $v$ by the two columns that make column $v$ into a column of formal horizontal composites. If $u\leq v<\ell$, then to obtain $\sigma_{\ell}(\alpha)$ from $\alpha$, we do the row construction as in the case $u<\ell<v$, and we also do the analogous column construction. The maps $\sigma_\ell$ for $0\leq\ell\leq q$ are compatible with the boundary operators, $\delta^*N^n(i\overline{r})$, and $1_{\delta^*N^n\bbQ}$ for the same reason that the analogous maps associated to a natural transformation of functors are compatible with the face and degeneracy maps and the functors. Indeed, the $\sigma_\ell$'s are defined precisely as those for a natural transformation; we merely take into account the horizontal and vertical aspects. In conclusion, we have morphisms of simplicial sets $$\xymatrix{\delta^*N^n(i)\co \delta^\ast N^n\bbB \ar@{^{(}->}[r] & \delta^\ast N^n \bbQ}$$ $$\xymatrix{\delta^*N^n(\overline{r})\co \delta^\ast N^n \bbQ \ar[r] & \delta^\ast N^n\bbB}$$ such that $(\delta^*N^n(\overline{r})) \circ (\delta^*N^n(i))=1_{\delta^\ast N^n\bbB}$ and $(\delta^*N^n(i)) \circ (\delta^*N^n(\overline{r}))$ is simplicially homotopic to $1_{\delta^*N^n\bbQ}$ via the simplicial homotopy $\sigma$. \end{pf} \section{Unit and Counit are Weak Equivalences} \label{unitcounitsection} In this section we prove that the unit and counit of the adjunction in (\ref{nfoldcatadjunction}) are weak equivalences. Our main tools are the $n$-fold Grothendieck construction and the theorem that, in certain situations, a natural weak equivalence between functors induces a weak equivalence between the colimits of the functors. We prove that $N^n$ and the $n$-fold Grothendieck construction are ``homotopy inverses''. From this, we conclude that our Quillen adjunction (\ref{nfoldcatadjunction}) is actually a Quillen equivalence. The left and right adjoints of (\ref{nfoldcatadjunction}) preserve weak equivalences, so the unit and counit are weak equivalences. \begin{defn} \label{nfoldGrothendieck} Let $\xymatrix@1{Y\co(\Delta^{\times n})^{\op} \ar[r] & \mathbf{Set}}$ be a multisimplicial set.
We define the {\it $n$-fold Grothendieck construction} $\Delta^{\boxtimes n} / Y \in \mathbf{nFoldCat}$ as follows. The {\it objects} of the $n$-fold category $\Delta^{\boxtimes n} / Y$ are $$\Obj \Delta^{\boxtimes n} / Y =\{(y,\overline{k})|\overline{k}=([k_1], \ldots, [k_n]) \in \Delta^{\times n}, y \in Y_{\overline{k}}\}.$$ An {\it $n$-cube} in $\Delta^{\boxtimes n} / Y$ with $(0,0,\ldots,0)$-vertex $(y,\overline{k})$ and $(1,1,\ldots,1)$-vertex $(z,\overline{\ell})$ is a morphism $\xymatrix@1{\overline{f}=(f_1, \ldots, f_n)\co \overline{k} \ar[r] & \overline{\ell}}$ in $\Delta^{\times n}$ such that \begin{equation} \label{nfoldGrothendieckmorphism} \overline{f}^*(z)=y. \end{equation} For $\epsilon_i\in \{0,1\}$, the $(\epsilon_1,\epsilon_2,\ldots,\epsilon_n)$-vertex of such an $n$-cube is $$(f_1^{1-\epsilon_1},f_2^{1-\epsilon_2}, \ldots, f_n^{1-\epsilon_n})^*(z).$$ For $1\leq i \leq n$, a {\it morphism in direction $i$} is an $n$-cube that has $f_j$ the identity except at $j=i$. A {\it square in direction $i i'$} is an $n$-cube such that $f_j$ is the identity except at $j=i$ and $j=i'$, etc. In this way, the edges, subsquares, subcubes, etc. of an $n$-cube $\overline{f}$ are determined. \end{defn} \begin{examp} If $n=1$, then the Grothendieck construction of Definition \ref{nfoldGrothendieck} is the usual Grothendieck construction of a simplicial set. \end{examp} \begin{examp} The Grothendieck construction $\Delta/\Delta[m]$ of the simplicial set $\Delta[m]$ is the comma category $\Delta/[m]$. \end{examp} \begin{examp} The Grothendieck construction commutes with external products, that is, for simplicial sets $X_1,X_2, \ldots, X_n$ we have $$\Delta^{\boxtimes n}/(X_1\boxtimes X_2 \boxtimes \cdots \boxtimes X_n)=(\Delta/X_1) \boxtimes (\Delta/X_2) \boxtimes \cdots \boxtimes (\Delta/X_n).$$ \end{examp} \begin{rmk} \label{p-multisimplices} We describe the $n$-fold nerve of the $n$-fold Grothendieck construction. We learned the $n=1$ case from Chapter 6 of \cite{joyaltierneysimplicial}. Let $\xymatrix@1{Y\co(\Delta^{\times n})^{\op} \ar[r] & \mathbf{Set}}$ be a multisimplicial set and $\overline{p}=([p_1],\ldots,[p_n]) \in \Delta^{\times n}$. Then a $\overline{p}$-multisimplex of $N^n(\Delta^{\boxtimes n} / Y)$ consists of $n$ composable paths of morphisms in $\Delta$ of lengths $p_1, p_2, \ldots, p_n$ $$\xymatrix{\langle f^1_1, \dots, f^1_{p_1} \rangle\co [k_0^1] \ar[r]^-{f^1_1} & [k_1^1] \ar[r]^{f^1_2} & \cdots \ar[r]^{f^1_{p_1}} & [k^1_{p_1}] }$$ $$\xymatrix{\langle f^2_1, \dots, f^2_{p_2} \rangle\co [k_0^2] \ar[r]^-{f^2_1} & [k_1^2] \ar[r]^{f^2_2} & \cdots \ar[r]^{f^2_{p_2}} & [k^2_{p_2}] }$$ $$\cdots$$ $$\xymatrix{\langle f^n_1, \dots, f^n_{p_n} \rangle\co [k_0^n] \ar[r]^-{f^n_1} & [k_1^n] \ar[r]^{f^n_2} & \cdots \ar[r]^{f^n_{p_n}} & [k^n_{p_n}] }$$ and a multisimplex $z$ of $Y$ in degree $$\overline{k_{\overline{p}}}:=(k^1_{p_1}, k^2_{p_2},\ldots,k^n_{p_n}).$$ The last vertex in this $\overline{p}$-array of $n$-cubes in $\Delta^{\boxtimes n} / Y$ is $$(z,([k^1_{p_1}], [k^2_{p_2}],\ldots,[k^n_{p_n}])).$$ The other vertices of this array are determined from $z$ by applying the $f$'s and their composites as in equation \eqref{nfoldGrothendieckmorphism}.
Thus, the set of $\overline{p}$-multisimplices of $N^n(\Delta^{\boxtimes n} / Y)$ is \begin{equation} \label{overlinepmultisimplices} \underset{\langle f^n_1, \dots, f^n_{p_n} \rangle}{\underset{\cdots}{\underset{\langle f^2_1, \dots, f^2_{p_2} \rangle}{\underset{\langle f^1_1, \dots, f^1_{p_1} \rangle}{\coprod}}}}Y_{\overline{k_{\overline{p}}}}. \end{equation} \end{rmk} \begin{prop} \label{NdnGrothendieckpreservescolimits} The functor $Y \mapsto N^n(\Delta^{\boxtimes n} / Y)$ preserves colimits. \end{prop} \begin{pf} The set of $\overline{p}$-multisimplices of $N^n(\Delta^{\boxtimes n} / Y)$ is (\ref{overlinepmultisimplices}). The assignment of $Y$ to the expression in (\ref{overlinepmultisimplices}) preserves colimits. \end{pf} \begin{rmk} We can also describe the $p$-simplices of $\delta^*N^n(\Delta^{\boxtimes n} / Y)$. We learned the $n=1$ case from Joyal and Tierney in Chapter 6 of \cite{joyaltierneysimplicial}. A $p$-simplex of $\delta^* N^n(\Delta^{\boxtimes n} / Y )$ is a composable path of $p$ $n$-cubes $$\xymatrix{\overline{f^i}\co (y^{i-1},\overline{k^{i-1}}) \ar[r] & (y^{i},\overline{k^i})}$$ ($i=1,\ldots,p$). Each $y^i$ is determined from $y^p$ by the $\overline{f^i}$'s, as in equation \eqref{nfoldGrothendieckmorphism}. The last target, namely $(y^p,\overline{k^p})$, is the same as a morphism of multisimplicial sets $\xymatrix@1{\Delta^{\times n}[\overline{k^p}] \ar[r] & Y}$. So by Yoneda, a $p$-simplex is the same as a composable path of morphisms of multisimplicial sets $$\xymatrix{\Delta^{\times n}[\overline{k^0}] \ar[r] & \Delta^{\times n}[\overline{k^1}] \ar[r] & \cdots \ar[r] & \Delta^{\times n}[\overline{k^p}] \ar[r] & Y }.$$ The set of $p$-simplices of $\delta^* N^n(\Delta^{\boxtimes n} / Y )$ is \begin{equation} \label{psimplicesofGrothendieck} \coprod_{\Delta^{\times n}[\overline{k^0}] \rightarrow \Delta^{\times n}[\overline{k^1}] \rightarrow \cdots \rightarrow \Delta^{\times n}[\overline{k^p}]} Y_{\overline{k^p}}. \end{equation} \end{rmk} Let us recall the natural morphism of simplicial sets $\xymatrix@1{N (\Delta/ X) \ar[r] & X}$ in 6.1 of \cite{joyaltierneysimplicial}, which we shall call $\rho_X$ as in Appendix A of \cite{moerdijksvenssonOnshapiro}. First note that any path of morphisms in $\Delta$ \begin{equation} \label{pathinDelta} \xymatrix{ [k_0] \ar[r] & [k_1] \ar[r] & \cdots \ar[r] & [k_p] } \end{equation} determines a morphism \begin{equation} \label{imagemap} \begin{array}{c} \xymatrix{[p] \ar[r] & [k_p]} \\ i \mapsto \im{k_i} \end{array} \end{equation} where $\im{k_i}$ refers to the image of $k_i$ under the composite of the last $p-i$ morphisms in (\ref{pathinDelta}). Note also that paths of the form (\ref{pathinDelta}) are in bijective correspondence with paths of the form \begin{equation} \xymatrix{ \Delta[k_0] \ar[r] & \Delta[k_1] \ar[r] & \cdots \ar[r] & \Delta[k_p] } \end{equation} by the Yoneda Lemma. \label{rhoXdefinition} The morphism $\xymatrix@1{\rho_X\co N (\Delta/ X) \ar[r] & X}$ sends a $p$-simplex $$\xymatrix{\Delta[k^0] \ar[r] & \Delta[k^1] \ar[r] & \cdots \ar[r] & \Delta[k^p] \ar[r] & X }$$ to the composite $$\xymatrix{\Delta[p] \ar[r] & \Delta[k^p] \ar[r] & X}$$ where the first morphism is the image of (\ref{imagemap}) under the Yoneda embedding. As is well known, the morphism $\xymatrix@1{N (\Delta/ X) \ar[r] & X}$ is a natural weak equivalence (see Theorem 6.2.2 of \cite{joyaltierneysimplicial}, page 21 of \cite{illusieII}, page 359 of \cite{waldhausen}).
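\begin{examp} To illustrate (\ref{imagemap}), consider the path $\xymatrix@1{[1] \ar[r]^{d^2} & [2] \ar[r]^{d^3} & [3]}$ in $\Delta$, so that $p=2$, $k_0=1$, $k_1=2$, and $k_2=3$. Then $\im{k_0}=d^3(d^2(1))=1$, $\im{k_1}=d^3(2)=2$, and $\im{k_2}=3$, so the morphism $\xymatrix@1{[2] \ar[r] & [3]}$ of (\ref{imagemap}) is $d^0$. Consequently, $\rho_{\Delta[3]}$ sends the corresponding $2$-simplex of $N(\Delta/\Delta[3])$ (with last morphism the identity of $\Delta[3]$) to the $2$-simplex $\xymatrix@1{\Delta[2] \ar[r]^{d^0} & \Delta[3]}$. \end{examp}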
We analogously define a morphism of multisimplicial sets $$\xymatrix{\rho_Y\co N^n(\Delta^{\boxtimes n} / Y) \ar[r] & Y}$$ natural in $Y$. Consider a $\overline{p}$-multisimplex of $N^n(\Delta^{\boxtimes n} / Y)$ as in Remark \ref{p-multisimplices}. For each $1 \leq j \leq n$, the path $\langle f^j_1, \dots, f^j_{p_j} \rangle$ gives rise to a morphism in $\Delta$ $$\xymatrix{[p_j] \ar[r] & [k^j_{p_j}]}$$ as in (\ref{pathinDelta}) and (\ref{imagemap}). Together these form a morphism in $\Delta^{\times n}$, which induces a morphism of multisimplicial sets $$\xymatrix{\Delta^{\times n}[\overline{p}] \ar[r] & \Delta^{\times n}[\overline{k_{\overline{p}}}]}.$$ The morphism $\rho_Y$ assigns to the $\overline{p}$-multisimplex we are considering the $\overline{p}$-multisimplex $$\xymatrix{\Delta^{\times n}[\overline{p}] \ar[r] & \Delta^{\times n}[\overline{k_{\overline{p}}}] \ar[r]^-z & Y}.$$ This completes the definition of the natural transformation $\rho$. \begin{rmk} \label{rhowithexternalproducts} The natural transformation $\rho$ is compatible with external products. If $X_1,X_2,\dots,X_n$ are simplicial sets and $Y=X_1\boxtimes X_2\boxtimes \cdots \boxtimes X_n$, then $$\xymatrix{\rho_{Y} \co N^n(\Delta^{\boxtimes n}/Y) \ar[r] & Y}$$ is equal to $$\rho_{X_1} \boxtimes \rho_{X_2} \boxtimes \cdots \boxtimes \rho_{X_n} \co$$ $$\xymatrix{ N(\Delta /X_1) \boxtimes N(\Delta /X_2) \boxtimes \cdots \boxtimes N(\Delta /X_n) \ar[r] & X_1 \boxtimes X_2 \boxtimes \cdots \boxtimes X_n.}$$ Thus $\delta^*\rho_Y=\rho_{X_1}\times \rho_{X_2} \times \cdots \times \rho_{X_n}$ is a weak equivalence, since in $\mathbf{SSet}$ any finite product of weak equivalences is a weak equivalence. We conclude that $\rho_Y$ is a weak equivalence of multisimplicial sets whenever $Y$ is an external product. (For us, a morphism $f$ of multisimplicial sets is a {\it weak equivalence} if and only if $\delta^*f$ is a weak equivalence of simplicial sets.) As we shall soon see, $\rho_Y$ is a weak equivalence for all $Y$. \end{rmk} We quickly recall what we will need regarding Reedy model structures. The following definition and proposition are part of Definitions 5.1.2, 5.2.2, and Theorem 5.2.5 of \cite{hovey}, or Definitions 15.2.3, 15.2.5, and Theorem 15.3.4 of \cite{hirschhorn}. \begin{defn} Let $(\mathcal{B},\mathcal{B}_+,\mathcal{B}_-)$ be a Reedy category and $\mathcal{C}$ a category with all small colimits and limits. For $i \in \mathcal{B}$, the {\it latching category} $\mathcal{B}_i$ is the full subcategory of $\mathcal{B}_+/i$ on the {\it non-identity} morphisms $\xymatrix@1{b \ar[r] & i}$. For $F \in \mathcal{C}^\mathcal{B}$ the {\it latching object of $F$ at $i$} is the colimit $L_iF$ of the composite functor \begin{equation} \label{latchingobjectequation} \xymatrix{\mathcal{B}_i \ar[r] & \mathcal{B} \ar[r]^F & \mathcal{C} }. \end{equation} For $i \in \mathcal{B}$, the {\it matching category} $\mathcal{B}^i$ is the full subcategory of $i/\mathcal{B}_-$ on the {\it non-identity} morphisms $\xymatrix@1{i \ar[r] & b}$. For $F \in \mathcal{C}^\mathcal{B}$ the {\it matching object of $F$ at $i$} is the limit $M_iF$ of the composite functor \begin{equation} \label{matchingobjectequation} \xymatrix{\mathcal{B}^i \ar[r] & \mathcal{B} \ar[r]^F & \mathcal{C} }. \end{equation} \end{defn} \begin{thm}[Kan] Let $(\mathcal{B},\mathcal{B}_+,\mathcal{B}_-)$ be a Reedy category and $\mathcal{C}$ a model category.
Then the levelwise weak equivalences, Reedy fibrations, and Reedy cofibrations form a model structure on the category $\mathcal{C}^\mathcal{B}$ of functors $\xymatrix@1{\mathcal{B} \ar[r] & \mathcal{C}}$. \end{thm} \begin{rmk} \label{remarkconsequence} A consequence of the definitions is that a functor $\xymatrix@1{\mathcal{B} \ar[r] & \mathcal{C}}$ is {\it Reedy cofibrant} if and only if the induced morphism $\xymatrix{L_iF \ar[r] & Fi}$ is a cofibration in $\mathcal{C}$ for all objects $i$ of $\mathcal{B}$. \end{rmk} \begin{prop}[Compare Example 15.1.19 of \cite{hirschhorn}] The category of multisimplices $$\Delta^{\times n}Y:=\Delta^{\times n}/Y$$ of a multisimplicial set $\xymatrix@1{Y\co (\Delta^{\times n})^{\op} \ar[r] & \mathbf{Set}}$ is a Reedy category. The degree of a $\overline{p}$-multisimplex is $p_1+p_2+\cdots+p_n$. The direct subcategory $(\Delta^{\times n}Y)_+$ consists of those morphisms $(f_1,\dots, f_n)$ that are iterated coface maps in each coordinate, \ie injective maps in each coordinate. The inverse subcategory $(\Delta^{\times n}Y)_-$ consists of those morphisms $(f_1,\dots, f_n)$ that are iterated codegeneracy maps in each coordinate, \ie surjective maps in each coordinate. \end{prop} \begin{prop}[Compare Proposition 15.10.4(1) of \cite{hirschhorn}] \label{multisimpliceshavefibrantconstants} If $\mathcal{B}$ is the category of multisimplices of a multisimplicial set, then for every $i \in \mathcal{B}$, the matching category $\mathcal{B}^i$ is either connected or empty. \end{prop} \begin{pf} This follows from the multidimensional Eilenberg-Zilber Lemma, recalled in Proposition \ref{prop:EZmultsimplicial}. Let $\xymatrix@1{Y\co(\Delta^{\times n})^{\op} \ar[r] & \mathbf{Set}}$ be a multisimplicial set and $\mathcal{B}=\Delta^{\times n} Y$ its category of multisimplices. Let $\xymatrix@1{i\co \Delta^{\times n}\left[ \overline{p} \right] \ar[r] & Y }$ be a degenerate multisimplex. Then there exists a non-trivial, componentwise surjective map $\overline{\tau}$ and a totally non-degenerate multisimplex $t$ with $i=(\overline{\tau})^*t$. The pair $(\overline{\tau},t)$ is an object of the matching category $\mathcal{B}^i$. If $(\overline{\eta},b)$ is another object of $\mathcal{B}^i$, there exists a componentwise surjective map $\overline{g}$ and a totally non-degenerate $b' \in \mathcal{B}$ such that $b=(\overline{g})^*b'$. But $i=(\overline{\eta})^*b=(\overline{\eta})^*(\overline{g})^*b'$ implies that $b'=t$, $\overline{g} \circ \overline{\eta}=\overline{\tau}$, and $\overline{g}$ is a morphism in $\mathcal{B}^i$ from $(\overline{\eta},b)$ to $(\overline{\tau},t)$. Thus, whenever $i$ is degenerate, there is a morphism from any object of $\mathcal{B}^i$ to $(\overline{\tau},t)$ and $\mathcal{B}^i$ is connected. One can also show $(\overline{\tau},t)$ is a terminal object of $\mathcal{B}^i$, but we do not need this. Let $\xymatrix@1{i\co \Delta^{\times n}\left[ \overline{p} \right] \ar[r] & Y }$ be a totally non-degenerate multisimplex. An object of the matching category $\mathcal{B}^i$ is a non-trivial, componentwise surjective map $\overline{\eta}$ and a multisimplex $b$ with $i=(\overline{\eta})^*b$. Such $\overline{\eta}$ and $b$ cannot exist because $i$ is totally non-degenerate. Thus, whenever $i$ is totally non-degenerate, the matching category $\mathcal{B}^i$ is empty. \end{pf}
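\begin{examp} For instance, let $n=1$ and $Y=\Delta[0]$, so that $\mathcal{B}=\Delta/\Delta[0] \cong \Delta$. The unique $1$-simplex $i$ of $Y$ is degenerate, and the matching category $\mathcal{B}^i$ has exactly one object, namely the unique surjection $\xymatrix@1{[1] \ar[r] & [0]}$ paired with the unique $0$-simplex of $Y$; in particular, $\mathcal{B}^i$ is connected. On the other hand, the $0$-simplex of $Y$ is totally non-degenerate and its matching category is empty. \end{examp}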
\begin{thm} \label{colimitpreservesweakequivalences} Suppose $\mathcal{C}$ is a model category and $\mathcal{B}$ is a Reedy category such that for all $i \in \mathcal{B}$, the matching category $\mathcal{B}^i$ is either connected or empty. Then the colimit functor $$\xymatrix{\text{\rm colim}\co \mathcal{C}^\mathcal{B} \ar[r] & \mathcal{C}}$$ takes levelwise weak equivalences between Reedy cofibrant functors to weak equivalences between cofibrant objects of $\mathcal{C}$. \end{thm} \begin{pf} This is merely a summary of Definition 15.10.1(2), Proposition 15.10.2(2), and Theorem 15.10.9(2) of \cite{hirschhorn}. \end{pf} \begin{notation} \label{BC} Let $\xymatrix@1{Y\co (\Delta^{\times n})^{\op} \ar[r] & \mathbf{Set}}$ be a multisimplicial set, $\mathcal{B}=\Delta^{\times n}Y$, $\mathcal{C}=\mathbf{SSet}$, and $\xymatrix@1{i \co \Delta^{\times n}[\overline{m}] \ar[r] & Y}$ an object of $\mathcal{B}$. Then the set of nonidentity morphisms in $\mathcal{B}_+$ with target $i$ is the set of morphisms $(f_1, \dots, f_n)$ in $\Delta^{\times n}$ with target $[\overline{m}]$ such that each $f_j$ is injective and not all $f_j$'s are the identity. \end{notation} \begin{notation} \label{twofunctors} Let $F$ and $G$ be the following two functors. $$\xymatrix{F\co\Delta^{\times n} Y \ar[r] & \mathbf{SSet^n}}$$ $$\left[\Delta^{\times n}[\overline{m}] \rightarrow Y \right] \mapsto N^n(\Delta^{\boxtimes n}/ \Delta^{\times n}[\overline{m}])$$ $$\xymatrix{G\co\Delta^{\times n} Y \ar[r] & \mathbf{SSet^n}}$$ $$\left[\Delta^{\times n}[\overline{m}] \rightarrow Y \right] \mapsto \Delta^{\times n}[\overline{m}]$$ Note that $\delta^\ast \circ F$ and $\delta^\ast \circ G$ are in $\mathcal{C}^\mathcal{B}$. The natural transformation $\rho$ induces a natural transformation we denote by $$\xymatrix{\rho^Y\co F \ar@{=>}[r] & G}.$$ \end{notation} \begin{rmk} \label{levelwiseweakequivalence} The natural transformation $\rho^Y$ is levelwise a weak equivalence by Remark \ref{rhowithexternalproducts}. \end{rmk} \begin{lem} \label{colimrho^Y=rho_Y} The morphism in $\mathbf{SSet^n}$ \begin{equation*} \underset{\Delta^{\times n} Y}{\colim}\rho^Y\co \underset{\Delta^{\times n} Y}{\colim} F \xymatrix{ \ar[r] &}\underset{\Delta^{\times n} Y}{\colim} G \end{equation*} is equal to $$\xymatrix{\rho_Y\co N^n(\Delta^{\boxtimes n}/Y) \ar[r] & Y.}$$ \end{lem} \begin{pf} By Proposition \ref{NdnGrothendieckpreservescolimits}, we have $$\aligned \underset{\Delta^{\times n} Y}{\colim} F &=\underset{\Delta^{\times n}[\overline{m}] \rightarrow Y}{\colim}N^n(\Delta^{\boxtimes n}/ \Delta^{\times n}[\overline{m}]) \\ &=N^n(\Delta^{\boxtimes n}/(\underset{\Delta^{\times n}[\overline{m}] \rightarrow Y}{\colim} \Delta^{\times n}[\overline{m}])) \\ &=N^n(\Delta^{\boxtimes n}/Y). \endaligned$$ \end{pf} \begin{lem} \label{delta*Fcofibrant} The functor $$\xymatrix{\delta^*\circ F\co\Delta^{\times n} Y \ar[r] & \mathbf{SSet}}$$ $$\left[\Delta^{\times n}[\overline{m}] \rightarrow Y \right] \mapsto N(\Delta/ \Delta[m_1]) \times N(\Delta/ \Delta[m_2]) \times \cdots \times N(\Delta/ \Delta[m_n])$$ is Reedy cofibrant. \end{lem} \begin{pf} We use Notations \ref{BC} and \ref{twofunctors}.
The colimit of equation \eqref{latchingobjectequation} is $$L_i(\delta^\ast \circ F)=\underset{1 \leq j \leq n}{\bigcup} N(\Delta/\Delta[m_1]) \times \cdots \times N(\Delta/\partial \Delta[m_j]) \times \cdots \times N(\Delta/\Delta[m_n])$$ and $\delta^\ast \circ F (i)=N(\Delta/ \Delta[m_1]) \times N(\Delta/ \Delta[m_2]) \times \cdots \times N(\Delta/ \Delta[m_n]).$ The map $$\xymatrix{L_i(\delta^\ast \circ F) \ar[r] & \delta^\ast \circ F(i)}$$ is injective, or equivalently, a cofibration. Remark \ref{remarkconsequence} now implies that $\delta^\ast \circ F$ is Reedy cofibrant. \end{pf} \begin{lem} \label{delta*Gcofibrant} The functor $$\xymatrix{\delta^* \circ G\co\Delta^{\times n} Y \ar[r] & \mathbf{SSet}}$$ $$\left[\Delta^{\times n}[\overline{m}] \rightarrow Y \right] \mapsto \Delta[m_1] \times \Delta[m_2] \times \cdots \times \Delta[m_n]$$ is Reedy cofibrant. \end{lem} \begin{pf} We use Notations \ref{BC} and \ref{twofunctors}. The colimit of equation \eqref{latchingobjectequation} is $$L_i(\delta^\ast \circ G)=\underset{1 \leq j \leq n}{\bigcup} \Delta[m_1] \times \cdots \times \partial \Delta[m_j] \times \cdots \times \Delta[m_n]$$ and $\delta^\ast \circ G (i)=\Delta[m_1] \times \Delta[m_2] \times \cdots \times \Delta[m_n].$ The morphism $$\xymatrix{L_i(\delta^\ast \circ G) \ar[r] & \delta^\ast \circ G(i)}$$ is injective, or equivalently, a cofibration. Remark \ref{remarkconsequence} now implies that $\delta^\ast \circ G$ is Reedy cofibrant. \end{pf} \begin{thm} \label{rhowe} For every multisimplicial set $\xymatrix@1{Y\co (\Delta^{\times n})^{\op} \ar[r] & \mathbf{Set}}$, the morphism $$\xymatrix{\rho_Y\co N^n(\Delta^{\boxtimes n}/Y) \ar[r] & Y}$$ is a weak equivalence of multisimplicial sets. \end{thm} \begin{pf} Fix a multisimplicial set $Y$, and let $F$, $G$, and $\rho^Y$ be as in Notation \ref{twofunctors}. The natural transformation $\xymatrix@1{\delta^* \rho^Y \co \delta^* F \ar@{=>}[r] & \delta^* G}$ is levelwise a weak equivalence of simplicial sets by Remark \ref{levelwiseweakequivalence}, and is a natural transformation between Reedy cofibrant functors by Lemmas \ref{delta*Fcofibrant} and \ref{delta*Gcofibrant}. By Proposition \ref{multisimpliceshavefibrantconstants}, each matching category of the Reedy category $\Delta^{\times n} Y$ is connected or empty. Theorem \ref{colimitpreservesweakequivalences} then guarantees that the morphism \begin{equation*} \underset{\Delta^{\times n} Y}{\colim}\delta^*\rho^Y\co \underset{\Delta^{\times n} Y}{\colim} \delta^*\circ F \xymatrix{ \ar[r] &}\underset{\Delta^{\times n} Y}{\colim} \delta^*\circ G \end{equation*} is a weak equivalence of simplicial sets. Since $\delta^*$ is a left adjoint, it commutes with colimits, and we have $$\underset{\Delta^{\times n} Y}{\colim}\delta^*\rho^Y =\delta^*\underset{\Delta^{\times n} Y}{\colim}\rho^Y=\delta^*\rho_Y$$ by Lemma \ref{colimrho^Y=rho_Y}. We conclude $\delta^*\rho_Y$ is a weak equivalence, and that $\rho_Y$ is a weak equivalence of multisimplicial sets. \end{pf} We also define an $n$-fold functor $$\xymatrix{\lambda_\bbD \co \Delta^{\boxtimes n} / N^n(\bbD) \ar[r] & \bbD}$$ natural in $\bbD$, by analogy to Appendix A of \cite{moerdijksvenssonOnshapiro}, and many others. If $(y,\overline{k})$ is an object of $\Delta^{\boxtimes n} / N^n(\bbD)$, then $\lambda_\bbD(y, \overline{k})$ is the object of $\bbD$ in the last vertex of the array of $n$-cubes $y$, namely $$\lambda_\bbD(y,\overline{k})=y_{\overline{k}}.$$
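\begin{examp} For $n=1$ and an ordinary category $\bbD$, the functor $\lambda_\bbD$ is the familiar last vertex functor (compare Appendix A of \cite{moerdijksvenssonOnshapiro}): an object $(y,[k])$ of $\Delta / N(\bbD)$ is a chain $\xymatrix@1{c_0 \ar[r] & c_1 \ar[r] & \cdots \ar[r] & c_k}$ in $\bbD$, and $\lambda_\bbD(y,[k])=c_k$. \end{examp}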
In particular, $\lambda_\bbD$ is a weak equivalence of $n$-fold categories. \end{thm} \begin{cor} \label{Ndinducesequivalence} The functor $\xymatrix@1{N^n\co \mathbf{nFoldCat} \ar[r] & \mathbf{SSet^n}}$ induces an equivalence of categories $$\Ho\mathbf{nFoldCat} \simeq \Ho \mathbf{SSet^n}.$$ Here $\text{\rm Ho}$ refers to the category obtained by formally inverting weak equivalences. There is no reference to any model structure. \end{cor} \begin{pf} An ``inverse'' to $N^n$ is the $n$-fold Grothendieck construction, since $\rho$ and $\lambda$ induce natural isomorphisms after passing to homotopy categories by Theorems \ref{rhowe} and \ref{lambdawe}. \end{pf} The following simple proposition, pointed out to us by Denis-Charles Cisinski, will be of use. \begin{prop} \label{prop:cisinski} Let $\xymatrix@C=4pc{{\bf C} \ar@{}[r]|{\perp} \ar@/^1pc/[r]^{F} & \ar@/^1pc/[l]^{G} {\bf D}}$ be a Quillen equivalence. If both $F$ and $G$ preserve weak equivalences, then \begin{enumerate} \item \label{item:FG_detect} Both $F$ and $G$ detect weak equivalences, \item \label{item:unit_counit_wes} The unit and counit of the adjunction $F \dashv G$ are weak equivalences. \end{enumerate} \end{prop} \begin{pf} \noindent \ref{item:FG_detect} We prove $F$ detects weak equivalences; the proof that $G$ detects weak equivalences is similar. Let $\xymatrix@1{Q\co \bfC \ar[r] & \bfC}$ be a cofibrant replacement functor on $\bfC$, that is, $QC$ is cofibrant for all objects $C$ in $\bfC$ and there is a natural acyclic fibration $\xymatrix@1{q\co QC \ar[r] & C}$. Suppose $Ff$ is a weak equivalence. Then $FQf$ is a weak equivalence (apply $F$ to the naturality diagram for $f$ and $Q$ and use the 3-for-2 property). The total left derived functor $\bfL F$ is the composite $$\xymatrix@C=3pc{\Ho \bfC \ar[r]_{\Ho Q} \ar@/^1pc/[rr]^{\bfL F} & \Ho \bfC_c \ar[r]_{\Ho F\vert_{\bfC_c}} & \Ho \bfD },$$ where $\bfC_c$ is the full subcategory of $\bfC$ on the cofibrant objects of $\bfC$. Then $\bfL F [f]$ is an isomorphism in $\Ho \bfD$, as $FQf$ is a weak equivalence in $\bfD$. The functor $\bfL F$ detects isomorphisms, as it is an equivalence of categories, so $[f]$ is an isomorphism in $\Ho \bfC$. Finally, a morphism in $\bfC$ is a weak equivalence if and only if its image in $\Ho \bfC$ is an isomorphism, so $f$ is a weak equivalence in $\bfC$, and $F$ detects weak equivalences. \noindent \ref{item:unit_counit_wes} We prove that the unit of the adjunction $F\dashv G$ is a natural weak equivalence; the proof that the counit is a natural weak equivalence is similar. Let $\xymatrix@1{Q\co \bfC \ar[r] & \bfC}$ be a cofibrant replacement functor on $\bfC$, that is, $QC$ is cofibrant for every object $C$ in $\bfC$ and there is a natural acyclic fibration $\xymatrix@1{q_C\co QC \ar[r] & C}$. Let $\xymatrix@1{R \co \bfD \ar[r] & \bfD}$ be a fibrant replacement functor on $\bfD$, that is, $RD$ is fibrant for every object $D$ in $\bfD$ and there is a natural acyclic cofibration $\xymatrix@1{r_D\co D \ar[r] & RD}$. Since $F\dashv G$ is a Quillen equivalence, the composite $$\xymatrix@C=4pc{QC \ar[r]^-{\eta_{QC}} & GFQC \ar[r]^-{Gr_{FQC}} & GRFQC}$$ is a weak equivalence by Proposition 1.3.13 of \cite{hovey}. Then $\eta_{QC}$ is a weak equivalence by the 3-for-2 property and the hypothesis that $G$ preserves weak equivalences.
An application of 3-for-2 to the naturality diagram for $\eta$ $$\xymatrix@C=3pc{QC \ar[r]^-{\eta_{QC}} \ar[d]_{q_C} & GFQC \ar[d]^{GFq_C} \\ C \ar[r]_-{\eta_C} & GFC }$$ shows that $\eta_C$ is a weak equivalence (recall $GF$ preserves weak equivalences). \end{pf} \begin{lem} \label{lem:HoG_equiv_implies_right_derived_equiv} Let $\xymatrix@1{G\co \bfD \ar[r] & \bfC}$ be a right Quillen functor. Suppose $\xymatrix@1{\Ho G \co \Ho \bfD \ar[r] & \Ho \bfC}$ is an equivalence of categories. Then the total right derived functor $$\xymatrix@C=3pc{\Ho \bfD \ar[r]_{\Ho R} \ar@/^1pc/[rr]^{\bfR G} & \Ho \bfD_f \ar[r]_{\Ho G\vert_{\bfD_f}} & \Ho \bfC }$$ is an equivalence of categories. Here $R$ is a fibrant replacement functor on $\bfD$, and $\bfD_f$ is the full subcategory of $\bfD$ on the fibrant objects. \end{lem} \begin{pf} The functors $\xymatrix@1@C=3pc{\Ho \bfD \ar@<.5ex>[r]^{\Ho R} & \ar@<.5ex>[l]^{\Ho i} \Ho \bfD_f}$ are equivalences of categories, ``inverse'' to one another. Then $\Ho G\vert_{\bfD_f}=(\Ho G) \circ (\Ho i)$ is a composite of equivalences. \end{pf} \begin{lem} \label{lem:right_derived_equiv_implies_Quillen_equiv} Suppose $L \dashv R$ is an adjunction and $R$ is an equivalence of categories. Then the unit $\eta$ and counit $\varepsilon$ of this adjunction are natural isomorphisms. \end{lem} \begin{pf} By Theorem 1 on page 93 of \cite{maclaneworking}, $R$ is part of an adjoint equivalence $L' \dashv R$ with unit $\eta'$ and counit $\varepsilon'$. By the universality of $\eta$ and $\eta'$ there exists an isomorphism $\xymatrix@1{\theta_X\co LX \ar[r] & L'X}$ such that $(R\theta_X) \circ \eta_X=\eta_X'$. Since $\eta_X'$ is also an isomorphism, we see that $\eta_X$ is an isomorphism. A similar argument shows that the counit $\varepsilon$ is a natural isomorphism. \end{pf} \begin{prop} \label{unitcounitwe} The unit and counit of (\ref{nfoldcatadjunction}) \begin{equation*} \xymatrix@C=4pc{\mathbf{SSet} \ar@{}[r]|{\perp} \ar@/^1pc/[r]^-{\Sd^2} & \ar@/^1pc/[l]^-{\Ex^2} \mathbf{SSet} \ar@{}[r]|{\perp} \ar@/^1pc/[r]^-{\delta_!} & \ar@/^1pc/[l]^-{\delta^\ast} \mathbf{SSet^n} \ar@{}[r]|{\perp} \ar@/^1pc/[r]^-{c^n} & \ar@/^1pc/[l]^-{N^n} \mathbf{nFoldCat}} \end{equation*} are weak equivalences. \end{prop} \begin{pf} Let $F\dashv G$ denote the adjunction in (\ref{nfoldcatadjunction}). This is a Quillen adjunction by Theorem \ref{MainModelStructure}. We first prove that it is in fact a Quillen equivalence. The functor $\Ex^2 \delta^*$ is known to induce an equivalence of homotopy categories, and $N^n$ induces an equivalence of homotopy categories by Corollary \ref{Ndinducesequivalence}, so $G=\Ex^2 \delta^* N^n$ induces an equivalence of homotopy categories $\Ho G$. Lemma \ref{lem:HoG_equiv_implies_right_derived_equiv} then says that the total right derived functor $\bfR G$ is an equivalence of categories. The derived adjunction $\bfL F \dashv \bfR G$ is then an adjoint equivalence by Lemma \ref{lem:right_derived_equiv_implies_Quillen_equiv}, so $F \dashv G$ is a Quillen equivalence. By Ken Brown's Lemma, the left Quillen functor $F$ preserves weak equivalences (every simplicial set is cofibrant). The right Quillen functor $G$ preserves weak equivalences by definition. Proposition \ref{prop:cisinski} now guarantees that the unit and counit are weak equivalences. \end{pf} We now summarize our main results from Theorem \ref{MainModelStructure}, Corollary \ref{Ndinducesequivalence}, and Proposition \ref{unitcounitwe}.
\begin{thm} \label{maintheoremsummary} \begin{enumerate} \item There is a cofibrantly generated model structure on $\mathbf{nFoldCat}$ such that an $n$-fold functor $F$ is a weak equivalence (respectively fibration) if and only if $\Ex^2 \delta^* N^n(F)$ is a weak equivalence (respectively fibration). In particular, an $n$-fold functor is a weak equivalence if and only if the diagonal of its nerve is a weak equivalence of simplicial sets. \item The adjunction \begin{equation*} \xymatrix@C=4pc{\mathbf{SSet} \ar@{}[r]|{\perp} \ar@/^1pc/[r]^-{\Sd^2} & \ar@/^1pc/[l]^-{\Ex^2} \mathbf{SSet} \ar@{}[r]|{\perp} \ar@/^1pc/[r]^-{\delta_!} & \ar@/^1pc/[l]^-{\delta^\ast} \mathbf{SSet^n} \ar@{}[r]|{\perp} \ar@/^1pc/[r]^-{c^n} & \ar@/^1pc/[l]^-{N^n} \mathbf{nFoldCat}} \end{equation*} is a Quillen equivalence. \item The unit and counit of this Quillen equivalence are weak equivalences. \end{enumerate} \end{thm} \begin{cor} \label{maincorollary} The homotopy category of $n$-fold categories is equivalent to the homotopy category of topological spaces. \end{cor} Another approach to proving that $N^n$ and the $n$-fold Grothendieck construction are homotopy inverse would be to apply a multisimplicial version of the following Weak Equivalence Extension Theorem of Joyal-Tierney. We apply the present Weak Equivalence Extension Theorem to prove that there is a natural isomorphism $$\xymatrix@1{\delta^* N^n(\Delta^{\boxtimes n} / \delta_!\text{-} ) \ar@{=>}[r] & 1_{\text{\rm Ho}\;\mathbf{SSet}}}.$$ \begin{thm}[Theorem 6.2.1 of \cite{joyaltierneysimplicial}] \label{weakequivalenceextension} Let $\xymatrix@1{\phi \co F \ar@{=>}[r] & G}$ be a natural transformation between functors $\xymatrix@1{F,G\co \Delta \ar[r] & \mathbf{SSet}}$. We denote by $\xymatrix@1{\phi^+\co F^+ \ar@{=>}[r] & G^+}$ the left Kan extension along the Yoneda embedding $\xymatrix@1{Y\co \Delta \ar[r] & \mathbf{SSet}}$. $$\xymatrix{\mathbf{SSet} \ar[dr]^{F^+,G^+} & \\ \Delta \ar[r]_-{F,G} \ar[u]^Y & \mathbf{SSet}}$$ Suppose that $G$ satisfies the following condition. \begin{itemize} \item $\im G\epsilon^0 \cap \im G \epsilon^1 = \emptyset,$ where $\xymatrix@1{\epsilon^i\co [0] \ar[r] & [1]}$ is the injection which misses $i$. \end{itemize} If $\xymatrix@1{\phi[m]\co F[m] \ar[r] & G[m]}$ is a weak equivalence for all $m \geq 0$, then $$\xymatrix@1{\phi^+X\co F^+ X \ar[r] & G^+X }$$ is a weak equivalence for every simplicial set $X$. \end{thm} \begin{lem} \label{Grothendieckpreservescolimits} The functor $$\xymatrix{\mathbf{SSet^n} \ar[r] & \mathbf{SSet}}$$ $$\xymatrix{Y \mapsto \delta^* N^n(\Delta^{\boxtimes n} / Y )}$$ preserves colimits. \end{lem} \begin{pf} The functor which assigns to $Y$ the expression in (\ref{psimplicesofGrothendieck}) is colimit preserving. \end{pf} \begin{prop} \label{zigzag1} For every simplicial set $X$, the canonical morphism $$\xymatrix@1{\delta^* N^n(\Delta^{\boxtimes n} / \delta_!X ) \ar[r] & \delta^*\delta_!X}$$ is a weak equivalence. \end{prop} \begin{pf} We apply the Weak Equivalence Extension Theorem \ref{weakequivalenceextension}. Let $\xymatrix@1{F,G\co \Delta \ar[r] & \mathbf{SSet}}$ be defined by $$F[m]=\delta^* N^n(\Delta^{\boxtimes n} / \delta_!\Delta[m] )$$ $$G[m]=\delta^*\delta_!\Delta[m].$$ The functor $$\xymatrix{\delta^* N^n(\Delta^{\boxtimes n} / \delta_!\text{-})\co \mathbf{SSet} \ar[r] & \mathbf{SSet}}$$ preserves colimits by Lemma \ref{Grothendieckpreservescolimits} and the fact that $\delta_!$ is a left adjoint. 
The functor $$\xymatrix{\delta^* \delta_!\co \mathbf{SSet} \ar[r] & \mathbf{SSet}}$$ preserves colimits since $\delta^*$ and $\delta_!$ are both left adjoints. Thus the canonical comparison morphisms $$\xymatrix{F^+X \ar[r] & \delta^* N^n(\Delta^{\boxtimes n} / \delta_!X )}$$ $$\xymatrix{G^+X \ar[r] & \delta^*\delta_!X}$$ are isomorphisms. The condition on $G$ listed in Theorem \ref{weakequivalenceextension} is easy to verify, since $$\xymatrix{G\epsilon^0=\epsilon^0\times \cdots \times \epsilon^0 \co \Delta[0] \times \cdots \times \Delta[0] \ar[r] & \Delta[1] \times \cdots \times \Delta[1]}$$ $$\xymatrix{G\epsilon^1=\epsilon^1\times \cdots \times \epsilon^1 \co \Delta[0] \times \cdots \times \Delta[0] \ar[r] & \Delta[1] \times \cdots \times \Delta[1]}.$$ All that remains is to define natural morphisms $$\xymatrix{\phi[m]\co \delta^* N^n(\Delta^{\boxtimes n} / \Delta[m,\ldots,m] ) \ar[r] & \Delta[m] \times \cdots \times \Delta[m]}$$ and to show that each is a weak equivalence of simplicial sets. By the description in Definition \ref{nfoldGrothendieck}, an object of $\Delta^{\boxtimes n} / \Delta[m,\ldots,m]$ is a morphism $$\xymatrix{y=(y_1,\ldots,y_n)\co \overline{k} \ar[r] & ([m],\ldots,[m])}$$ in $\Delta^{\times n}$. An $n$-cube $\overline{f}$ is a morphism in $\Delta^{\times n}$ making the diagram $$\xymatrix{\overline{k} \ar[rr]^{\overline{f}} \ar[dr]_{y} & & \overline{k'} \ar[dl]^{y'} \\ & ([m], \ldots, [m]) &}$$ commute. A $p$-simplex in $\delta^* N^n(\Delta^{\boxtimes n} / \Delta[m,\ldots,m] )$ is a path $\overline{f^1},\ldots,\overline{f^p}$ of composable morphisms in $\Delta^{\times n}$ making the appropriate triangles commute. We see that $$\delta^* N^n(\Delta^{\boxtimes n} / \Delta[m,\ldots,m] ) \cong N(\Delta/\Delta[m]) \times \cdots \times N(\Delta/\Delta[m]).$$ We define $\phi[m]$ to be the product of $n$ copies of the weak equivalence $$\xymatrix{\rho_{\Delta[m]} \co N(\Delta/\Delta[m]) \ar[r] & \Delta[m]}$$ defined on page \pageref{rhoXdefinition}. Since $\phi[m]$ is a weak equivalence for all $m$, we conclude from Theorem \ref{weakequivalenceextension} that the canonical morphism $$\xymatrix@1{\phi^+ X \co \delta^* N^n(\Delta^{\boxtimes n} / \delta_!X ) \ar[r] & \delta^*\delta_!X}$$ is a weak equivalence for every simplicial set $X$. \end{pf} \begin{lem} \label{zigzag2} There is a natural weak equivalence $\xymatrix@1{\delta^*\delta_!X & X \ar[l]}$. \end{lem} \begin{pf} In Theorem \ref{weakequivalenceextension}, let $F$ be the Yoneda embedding and let $G$ once again be $\delta^*\delta_!$. For $\phi[m]$ we take the diagonal morphism $$\xymatrix{\Delta[m] \ar[r] & \Delta[m] \times \cdots \times \Delta[m]},$$ which is a weak equivalence, as both the source and target are contractible. \end{pf} \begin{prop} There is a zig-zag of natural weak equivalences between $\delta^* N^n(\Delta^{\boxtimes n} / \delta_!\text{-} )$ and the identity functor on $\mathbf{SSet}$. Consequently, there is a natural isomorphism $$\xymatrix@1{\delta^* N^n(\Delta^{\boxtimes n} / \delta_!\text{-} ) \ar@{=>}[r] & 1_{\text{\rm Ho}\;\mathbf{SSet}}}.$$ \end{prop} \begin{pf} This follows from Proposition \ref{zigzag1} and Lemma \ref{zigzag2}. \end{pf} \section{Appendix: The Multidimensional Eilenberg-Zilber Lemma} In Proposition \ref{multisimpliceshavefibrantconstants} we made use of the multidimensional Eilenberg-Zilber Lemma to prove that the matching category $\mathcal{B}^i$ is either connected or empty whenever $\mathcal{B}$ is a category of multisimplices $\Delta^{\times n} Y$. In this Appendix, we prove the multidimensional Eilenberg-Zilber Lemma.
We merely paraphrase Joyal--Tierney's proof of the two-dimensional case in \cite{joyaltierneysimplicial} in order to make the present paper more self-contained. \begin{prop}[Eilenberg-Zilber Lemma] \label{prop:EZSSet} Let $Y$ be a simplicial set and $y \in Y_p$. Then there exists a unique surjection $\xymatrix@1{\eta\co [p] \ar[r] & [q]}$ and a unique non-degenerate simplex $y' \in Y_q$ such that $y=\eta^*(y')$. \end{prop} \begin{pf} Proofs can be found in many books on simplicial homotopy theory, for example see Lemma 15.8.4 of \cite{hirschhorn}. \end{pf} \begin{defn} Let $\xymatrix@1{Y\co(\Delta^{\times n})^{\op} \ar[r] & \mathbf{Set}}$ be a multisimplicial set. A multisimplex $y \in Y_{\overline{p}}$ is {\it degenerate in direction $i$} if there exists a surjection $\xymatrix@1{\eta_i \co [p_i] \ar[r] & [q_i]}$ and a multisimplex $y' \in Y_{p_1, \dots, p_{i-1}, q_i, p_{i+1}, \dots, p_n}$ such that $y=(\id_{p_1}, \dots, \id_{p_{i-1}}, \eta_i,\id_{p_{i+1}}, \dots, \id_{p_n})^*(y')$. A multisimplex $y \in Y_{\overline{p}}$ is {\it non-degenerate in direction $i$} if it is not degenerate in direction $i$. A multisimplex $y \in Y_{\overline{p}}$ is {\it totally non-degenerate} if it is not degenerate in any direction. \end{defn} \begin{prop}[Multidimensional Eilenberg-Zilber Lemma] \label{prop:EZmultsimplicial} Let $\xymatrix@1{Y\co(\Delta^{\times n})^{\op} \ar[r] & \mathbf{Set}}$ be a multisimplicial set and $y \in Y_{\overline{p}}$. Then there exist unique surjections $\xymatrix@1{\eta_i\co [p_i] \ar[r] & [q_i]}$ and a unique totally non-degenerate multisimplex $y_n \in Y_{\overline{q}}$ such that $y=(\overline{\eta})^* y_n$. \end{prop} \begin{pf} We simply reproduce Joyal--Tierney's proof from Chapter 5 (Bisimplicial sets) of \cite{joyaltierneysimplicial}. Let $y=y_0$ for the proof of existence. The Eilenberg-Zilber Lemma for $\mathbf{SSet}$, recalled in Proposition \ref{prop:EZSSet}, guarantees surjections $\xymatrix@1{\eta_i\co [p_i] \ar[r] & [q_i]}$ and multisimplices $y_i \in Y_{q_1, \dots, q_{i-1}, q_i, p_{i+1}, \dots, p_n}$ such that $$y_{i-1}=(\id_{q_1}, \dots, \id_{q_{i-1}}, \eta_i,\id_{p_{i+1}}, \dots, \id_{p_n})^*(y_i)$$ and each $y_i$ is non-degenerate in direction $i$ for all $i=1,2, \dots, n$. Then $y=(\eta_1, \dots, \eta_n)^*(y_n)$. The multisimplex $y_n$ is totally non-degenerate, for if it were degenerate in direction $i$, so that $$y_n=(\id_{q_1}, \dots, \id_{q_{i-1}}, \eta_i', \id_{q_{i+1}}, \dots, \id_{q_n})^*(y_i'),$$ we would have $y_i$ degenerate in direction $i$: $$\aligned y_i &= (\id_{q_1}, \dots, \id_{q_i}, \eta_{i+1}, \dots, \eta_n)^*(y_n) \\ &= (\id_{q_1}, \dots, \id_{q_i}, \eta_{i+1}, \dots, \eta_n)^*(\id_{q_1}, \dots, \id_{q_{i-1}}, \eta_i', \id_{q_{i+1}}, \dots, \id_{q_n})^*(y_i') \\ &=(\id_{q_1}, \dots, \id_{q_{i-1}}, \eta_i', \id_{p_{i+1}}, \dots, \id_{p_n})^*(\id_{q_1}, \dots, \id_{q_{i-1}}, \id_{q_i'}, \eta_{i+1}, \dots, \eta_n)^*(y_i'). \endaligned$$ But $y_i$ is non-degenerate in direction $i$. For the uniqueness, suppose $\xymatrix@1{\eta_i'\co [p_i] \ar[r] & [q_i']}$ are surjections and $y_n' \in Y_{\overline{q'}}$ is another totally non-degenerate multisimplex such that $y=(\overline{\eta'})^* y_n'$. The diagram in $\Delta^{\times n}$ associated to the $n$ pushouts in $\Delta$ $$\xymatrix{[p_i] \ar[r]^{\eta_i} \ar[d]_{\eta_i'} & [q_i] \ar[d]^{\mu_i} \\ [q_i'] \ar[r]_{\mu_i'} & [r_i]}$$ is a pushout in $\Delta^{\times n}$ (the $\eta_i$ and $\eta_i'$ are all surjective). The Yoneda embedding then gives us a pushout in $\mathbf{SSet^n}$.
$$\xymatrix@R=4pc@C=4pc{\Delta^{\times n}[\overline{p}] \ar[r]^{\Delta^{\times n}[\overline{\eta}]} \ar[d]_{\Delta^{\times n}\left[\overline{\eta'}\right]} & \Delta^{\times n}[\overline{q}] \ar[d]^{\Delta^{\times n}[\overline{\mu}]} \\ \Delta^{\times n}\left[\overline{q'}\right] \ar[r]_{\Delta^{\times n}\left[\overline{\mu'}\right]} & \Delta^{\times n}[\overline{r}] }$$ Since $$(\overline{\eta'})^* y_n'=y=(\overline{\eta})^* y_n,$$ the universal property of this pushout produces a unique multisimplex $z \in Y_{\overline{r}}$ such that $$y_n'=(\overline{\mu'})^\ast(z),\;\;\;\ y_n=(\overline{\mu})^\ast(z).$$ The multisimplices $y_n$ and $y_n'$ are totally non-degenerate, so $\overline{\mu}=\overline{\id}$ and $\overline{\mu'}=\overline{\id}$, and consequently $\overline{\eta'}=\overline{\eta}$ and $y_n'=y_n$. \end{pf}
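\begin{rmk} As a simple illustration of the factorization just constructed (this example is ours, not part of Joyal--Tierney's argument), take $n=2$ and suppose $y \in Y_{1,1}$ is degenerate in both directions: $y=(\eta,\id_1)^*(y_1)$ with $y_1 \in Y_{0,1}$, and $y_1=(\id_0,\eta)^*(y_2)$ with $y_2 \in Y_{0,0}$, where $\xymatrix@1{\eta\co [1] \ar[r] & [0]}$ is the unique surjection. Then $$y=(\eta,\id_1)^*(\id_0,\eta)^*(y_2)=(\eta,\eta)^*(y_2),$$ and $y_2$ is automatically totally non-degenerate, since the only surjection out of $[0]$ is the identity. Thus $\overline{\eta}=(\eta,\eta)$ and $y_2$ give the decomposition guaranteed by Proposition \ref{prop:EZmultsimplicial}. \end{rmk}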
\newcommand{\sect}[1]{\section{#1}} \newcommand{\subsect}[1]{\subsection{#1}} \newcommand{\subsubsect}[1]{\subsubsection{#1}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \def\footnote{\footnote} \footskip 1.0cm \def\sxn#1{\bigskip\medskip \sect{#1} \smallskip } \def\subsxn#1{\medskip \subsect{#1} \smallskip } \def\subsubsxn#1{\medskip \subsubsect{#1} \smallskip } \begin{document} \thispagestyle{empty} \setcounter{page}{0} \begin{flushright} UICHEP-TH/95-3 \\ May 1995\\ hep-th/9505028 \end{flushright} \vspace{1cm} \begin{center} {\LARGE SELF-INTERSECTION NUMBERS AND }\\ \vspace{5mm} {\LARGE RANDOM SURFACES ON THE LATTICE}\\ \vspace{5mm} \vspace{0.75cm} {\large P. Teotonio-Sobrinho \footnote[1]{e-mail: {\tt [email protected]}}}\\ \vspace{.50cm} {\it Department of Physics, University of Illinois at Chicago,\\ Chicago, IL 60607-7059, USA.} \end{center} \vspace{2mm} \vspace{.2cm} \begin{abstract} String theory in 4 dimensions has the unique feature that a topological term, the oriented self-intersection number, can be added to the usual action. It has been suggested that the corresponding theory of random surfaces would be free from the problem encountered in the scaling of the string tension. Unfortunately, in the usual dynamical triangulation it is not clear how to write such a term. We show that for random surfaces on a hypercubic lattice, however, the analogue of the oriented self-intersection number $I[\sigma]$ can be defined and computed in a straightforward way. Furthermore, $I[\sigma]$ has a genuine topological meaning in the sense that it is invariant under the discrete analogue of continuous deformations. The resulting random surface model is no longer free and may lead to a nontrivial continuum limit. \end{abstract} \newpage \sxn{Introduction} String theory is supposed to play a crucial role in our understanding of fundamental physics. Either as an effective model for strong interactions or as a fundamental theory for unification, it has been the focus of numerous investigations. In its Euclidean version, string theory is the statistical mechanics of random surfaces. A 2d manifold $M$ is immersed in a target space $X$, resulting in a surface $s\subset X$. The partition function is given by a weighted integral over all surfaces $s$. Theories of random surfaces are naturally related to many different physical systems such as membranes in biophysics \cite{D} and the 3d Ising model \cite{3dI1,3dI2}. {}From this perspective, there is enough motivation to investigate lattice versions of random surfaces and study their continuum limit. From the QCD point of view, it would be very interesting to have a lattice version of 4-dimensional strings with a nontrivial continuum limit. Unfortunately, such a goal seems to be very difficult. In the past few years much work has been done in this direction. The simplest and most natural model is the immediate translation of the Nambu-Goto theory. One starts by replacing $X$ by a ${\bf Z}^d$ lattice and surfaces by polyhedra made of 2-dimensional plaquettes on ${\bf Z}^d$. The partition function is then defined to be the sum over all surfaces $\sigma$ with a statistical weight given by the number of plaquettes of $\sigma$. If $M$ is assumed to have the topology of a sphere, the model is called planar random surfaces \cite{W}-\cite{DFJ2}. Such a model is directly related to the $1/n$ limit of $SU(n)$ lattice Yang-Mills theory.
Surprisingly, the planar random surface model was proved to be trivial \cite{DFJ2} and it cannot describe any QCD physics. Another approach, called dynamical triangulation \cite{DyTr}, is used to discretize Polyakov's string theory. The base manifold $M$ is replaced by a generic triangulation where the lengths of the links are taken to be equal. The embedding of a given triangulation on a continuous manifold $X$ defines a surface $s$ and the action (Gaussian) can be taken to be the area of $s$. In addition to the sum over immersions, one also sums over all possible triangulations in order to take into account the intrinsic geometry of $M$. The important question is whether the model has a well-defined continuum limit. It has been shown that the string tension does not tend to zero at the critical point, giving rise to pathologically crumpled surfaces. Consequently this simple model does not lead to a sensible continuum limit \cite{AD}. A natural attempt to overcome the problem is to add to the action a term depending on the extrinsic curvature of $s$ in order to suppress the contributions of ``spiked'' surfaces. Analytic calculations, using the Nambu-Goto action plus an extrinsic curvature term, suggest that the corresponding coupling constant renormalizes to zero and consequently the discrete action cannot have a nontrivial continuum limit \cite{H}-\cite{K}. On the other hand, much numerical work has been carried out to simulate Polyakov's action together with extrinsic curvature terms \cite{ExtCurv}. Some evidence of scaling has been found. Adding an extrinsic curvature term is not the only way of modifying the usual Gaussian theory. When the target space $X$ is 4-dimensional, a new possibility is available for string theory. It has been shown that a topological term can be added to the usual string action \cite{BLS} (see also \cite{P,MN}). It introduces an extra weight factor given by $\exp(i\theta I[s])$, where $I[s]\in {\bf Z}$ is a topological number. In a sense, it is the analogue of the $\theta $-term in QCD. The integer number $I[s]$, the so-called oriented self-intersection number, is a measure of how the embedding of $M$ on $X$ self-intersects. The resulting theory is described by the partition function \begin{equation} Z(\lambda,\theta )=\int {\cal D} s~e^{-\lambda A[s]+i \theta I[s]},\label{2.2} \end{equation} where $A[s]$ is the usual Nambu-Goto term given by the area of $s$. It has been suggested \cite{P} that such a partition function would describe smooth surfaces for $\theta =\pi $. Therefore it would be a better candidate for an effective theory of QCD. The presence of the analogue of a $\theta $-term is a very suggestive indication \cite{MN}. One way of studying (\ref{2.2}) is to introduce a lattice regularization and make the functional integral into a sum. Unfortunately it is not so clear how to proceed due to the presence of a topological term. This is a problem common to many theories involving topological terms. The first difficulty is to define and compute the corresponding counterparts on the lattice. Secondly, the topological meaning of such terms in a discrete setting is not always clear. A good example of the situation is given by the QCD instanton number on the lattice. We refer to \cite{L} for the discussion of one possible solution to the problem of instanton number.
We would like to have a scheme of discretization for the random surface problem where the definition of oriented self-intersection number $I[s]$ naturally corresponds to the continuum counterpart. If the base space $M$ is discretized, as it is in dynamical triangulation, it seems to be very difficult, or even impossible, to come up with a discrete counterpart for topological numbers. The reason is that the usual way of discretizing the manifold $M$ is by looking at lattices, i.e., a cell decomposition $K(M)$ made of vertices, links and faces. However, the scalar field describing the string is defined only for the set $K_0(M)$ of all vertices. Consequently, the set of all field configurations is just $\Gamma=X^{n (K_0(M))}$, where $n (K_0(M))$ is the number of vertices. The quantity $I[s]$ is a function on $\Gamma $, but unfortunately $\Gamma $ has no information about $M$. The situation is clearly not satisfactory and one should try something different. Some time ago, an alternative approach to discretization was formulated by Sorkin \cite{S}. In this scheme, $M$ is substituted by a finite topological space $Q(M)$ that has the ability to reproduce important topological features of $M$. When the number of points in $Q(M)$ increases, $Q(M)$ approximates $M$ better and better. It is possible to define a certain continuum limit, where $M$ can be recovered exactly. Subsequent research developed these methods and made them usable for doing approximations in quantum physics \cite{Poset}. It turns out that these techniques can be nicely applied to self-intersecting surfaces. Indeed, as will be explained, the corresponding definition of $I[s]$ for the discrete theory is a faithful translation of the definition in the continuum. Furthermore, it has a truly topological meaning. We learned from previous work that there are two alternatives for discretization in terms of finite topological spaces. In the first approach, both the base space $M$ and the target space $X$ are replaced by the discrete spaces $Q(M)$ and $Q(X)$. In the second one, only the target space $X$ is discretized. The second possibility is not very useful for a generic field theory; however, it can be efficiently applied to string theory. In this paper we adopt the second possibility with additional restrictions on $Q(X)$. Under these specific conditions the resulting formalism can be reinterpreted in terms of random surfaces made of plaquettes embedded in a usual hypercubic lattice. Although this work was inspired by looking at finite topological spaces, they will not be explicitly mentioned here. An account of self-intersection numbers when both target and base spaces are discretized will be reported elsewhere. In this paper, we present a discrete model for random surfaces corresponding to the Nambu-Goto theory modified by the presence of the topological term. The case of surfaces with no handles is a modification of the usual planar random surface model. We argue that the pathological behavior observed for the Gaussian action, i.e., $\theta =0$ in (\ref{2.2}), may not occur for other values of $\theta $, possibly leading to a nontrivial continuum limit. The discretization of (\ref{2.2}), without the term $\exp(i\theta I[\sigma])$, has been extensively studied in the past \cite{W}-\cite{DFJ2}. Our main objective in this paper is to include the topological term, or in other words, to make sense of the self-intersection number $I[\sigma]$ for any configuration in the model. We will show that $I[\sigma]$ has all the properties that we want.
It is an integer, gives the right answer in the continuum limit, and has a topological meaning in the sense that it is invariant under the analogue of continuous deformations of $\sigma$. To make the paper self-contained, the usual self-intersection number for the continuum case is reviewed in Section 2. For the same purpose, some elements of the theory of cell complexes and homology are briefly mentioned in Section 3. The discrete model is discussed in Section 4. The self-intersection number is first defined for a very special class of configurations. Finally, the extension of $I[\sigma]$ for an arbitrary configuration is given by an explicit formula. The topological invariance of $I[\sigma]$ is also demonstrated. Some generic comments on the consequences of the term $I[\sigma]$ are collected in Section 5. \sxn{The Usual Intersection Number}\label{se:2} Consider a 2d manifold $M$ without boundary (parameter space) and a fixed 4d target manifold $X$. For simplicity one can take $X$ to be $\relax{\rm I\kern-.18em R}^4$. Let $\varphi :M\rightarrow X$ be a continuous map (immersion) and $s\subset X$ the surface determined by $\varphi $. Different points of $M$ can be mapped to the same point of $X$. Therefore the surface $s$ can have self-intersections. The self-intersection number $I [s]$ is a measure of how $s$ self-intersects. Usually $I [s]$ is given in terms of local fields. Let $\xi _a,\! (a=1,2)$ be local coordinates of $M$ and $\varphi ^\mu , \! (\mu =1,2,3,4)$ the components of $\varphi $. Then the integer $I[s]$ is given by \cite{BLS,P,MN} \begin{equation} I [s]=\frac{-1}{16\pi }\int d^2\xi \sqrt{g} g^{ab}\nabla _at^{\mu \nu} \nabla _b\tilde t^{\mu \nu} \label{I} \end{equation} where \[ g_{ab}=\frac{\partial \varphi ^\mu }{\partial \xi ^a} \frac{\partial \varphi ^\mu }{\partial \xi ^b}\, , \] \[ t^{\mu \nu }=\frac{\epsilon ^{ab}}{\sqrt g}\partial _a\varphi ^\mu \partial _b\varphi ^\nu \] and \[ \tilde t^{\mu \nu }=\frac{1}{2} \epsilon ^{\mu \nu \alpha \beta }t^{\alpha \beta}. \] If $s$ and $s' $ are homotopic, i.e., they can be continuously deformed into each other, then $I[s ]=I[s ' ]$. We will use the notation $s \sim s '$ to indicate homotopy. The intuitive notion of self-intersection number is very simple. For a 2d surface in 4 dimensions, self-intersection can happen on regions of dimension two, one, and zero. Suppose $s$ self-intersects only at a certain number $n$ of isolated points. Furthermore, assume that at any intersection point the two branches of $s$ are not tangent to each other. In this case we say that $s$ is transversal. The simplest invariant associated with $s$ is $I_2[s]$, the intersection number modulo 2. $I_2[s]$ is 0 or 1 if $n$ is respectively even or odd. Given any surface $s '$ with transversal self-intersection, one can show that $I_2[s ']=I_2[s]$ if $s' \sim s$. The invariant $I_2[s]$ is extended to non-transversal configurations in the following way. Find a transversal surface $\tilde s \sim s$ and define $I_2[s]$ to be equal to $I_2[\tilde s]$. This definition is motivated by a theorem stating that $\tilde s $ always exists and can be made infinitesimally close to $s$ \cite{GP}. In this paper we will make use of a distinct, but equivalent \cite{LS}, presentation of $I[s ]$. It turns out that the invariant $I[s]$ in (\ref{I}) can be seen as a refinement of $I_2[s]$. Instead of simply counting the number of intersections, one associates ``charges'' $\pm 1$ to each intersection and sums over all charges.
Let $W$ be the set of points $x_i\in X$ such that $\varphi(p)=\varphi(p')=x_i$, for some pair of distinct points $p,p'\in M$. Consider also a positively oriented basis $\{v_1,v_2\}$ of tangent vectors at $p\in M$ and similarly for $\{w_1,w_2\}$ at $p'\in M$. The map $\varphi $ will induce two sets $\{v_1',v_2'\}$ and $\{w_1',w_2'\}$ of vectors tangent to $s $ at $x_i$. We say that $s$ is transversal iff, for all $x_i\in W$, the sets \begin{equation} B(x_i):=\{v_1',v_2',w_1',w_2'\}\label{2} \end{equation} are linearly independent. Therefore, for each $x_i$, $B(x_i)$ is a basis of tangent vectors and defines an orientation, called the product orientation at $x_i$. One can compare the product orientation for each $x_i$ with the pre-existent orientation of $X$ and assign a ``charge'' $+1$ if the orientations agree, and $-1$ otherwise. The oriented self-intersection number $I [s]$ is defined to be the sum of all such ``charges''. Observe that there is a potential ambiguity in (\ref{2}), because we can exchange $\{v_1',v_2'\}$ and $\{w_1',w_2'\}$. This exchange is harmless, however, since it amounts to an even permutation and does not affect the orientation of $B(x_i)$. The definition of transversality presented so far makes use of tangent vectors, and this is a problem when dealing with discrete spaces. Fortunately, there is an alternative way of defining transversality that is more useful for us. In some coordinate system, a small neighborhood of an intersection point $x_i$ can be identified with an open set $U_i$ of $\relax{\rm I\kern-.18em R}^4$, where $x_i$ sits at the origin. Let us call $s_i$ and $s_i'$ the two branches of $s \cap U_i$. Transversality means that we can find a local coordinate system for $U_{i}$ such that the points of $s$ have coordinates of the form $(y_1,y_2,0,0)$ for $s_i$ and $(0,0,y_3,y_4)$ for $s_i'$. In other words, $U_{i}$ can be identified with the Cartesian product $s_i\times s_i'$. If we give to $s_i\times s_i'$ the product orientation, then \begin{equation} U_i=I[s_i,s_i']~s_i\times s_i'\,, \end{equation} where $I[s_i,s_i']=\pm 1$. The oriented self-intersection number is defined to be \cite{GP} \begin{equation} I [s]=\sum _iI[s_i,s_i']\label{int} \end{equation} Formula (\ref{int}) is valid only for transversal surfaces. The extension of this definition to an arbitrary configuration depends on the result mentioned before. Two homotopic transversal configurations have the same $I[s]$, and for any non-transversal $s$, there is a transversal $\tilde s$ such that $\tilde s\sim s$. In the same way as for $I_2[s]$, one can safely define $I[s]$ to be $I[\tilde s]$. The main advantage of (\ref{int}) is that it can be generalized to the discrete situation. However, this approach to self-intersection does not give a way of computing $I[s]$ for non-transversal configurations. In this sense, the integral formula (\ref{I}) is more useful, but unfortunately very difficult to translate to the lattice. For this reason, we will work with the discrete version of (\ref{int}). Finally, in Section \ref{se:4.3} we will give an explicit formula to compute the self-intersection number for arbitrary configurations. \sxn{Hypercubic Lattices} In this section we briefly review some notions of homology theory that we will need. We refer to \cite{HW} for a systematic exposition. Abstractly, an $n$-cell $\alpha_{(n)}$ is a space (of dimension $n$), together with subspaces \mbox{$\alpha^i_{(n-1)}\subset \alpha_{(n)}$} called faces.
The subsets $\alpha^i_{(n-1)}$ are themselves $(n-1)$-cells, so we can consider their corresponding $\alpha^j_{(n-2)}$ faces. The cells that we will be interested in are regular, meaning that any $\alpha^j_{(n-2)}$ belongs to exactly two $(n-1)$-cells in $\alpha_{(n)}$. By definition, a 1-cell has only two 0-cells as faces, and a 0-cell has no faces. A cell complex $K$ of dimension $n$ is defined to be a union of $n$-cells, and it is totally characterized by its elements $\alpha_{(k)}^l$ and their inclusion relations. Therefore, two abstract complexes $K^1$ and $K^2$ are regarded as identical if there is a one-to-one map $f:K^1\rightarrow K^2$ that preserves the inclusion relations. It is customary to indicate by $K_{(p)}\subset K$ the union of all cells of dimension $p$. Concretely, a (regular) $n$-cell $\alpha_{(n)}$ and the corresponding $\alpha_{(n-1)}^i\subset \alpha_{(n)}$ can be realized as an $n$-dimensional polytope in $\relax{\rm I\kern-.18em R}^n$ and its respective $(n-1)$-dimensional faces. A cell decomposition of an $n$-manifold $Y$ is an abstract complex $K(Y)$ of dimension $n$ such that its concrete realization is homeomorphic to $Y$. An important property of $K(Y)$ is that any $(n-1)$-cell belongs to at most two $n$-cells. Given two abstract cell complexes $K^1$ and $K^2$, one can define the product cell complex $K^1\times K^2$. The cells of $K^1\times K^2$ are ordered pairs of cells \begin{equation} \alpha_{(n+m)}^{i,j}:=\left(\alpha^i_{(m)},\alpha^j_{(n)}\right), ~~~\alpha_{(m)}^i\subset K^1~~~ \alpha_{(n)}^j\subset K^2 \end{equation} together with the inclusion relations \begin{equation} \left(\alpha^k_{(m-1)},\alpha^l_{(n-1)}\right)\subset \left(\alpha^i_{(m)},\alpha^j_{(n)}\right)~~~\mbox{iff}~~~ \alpha^k_{(m-1)}\subset \alpha^i_{(m)}~\mbox{ and }~\alpha^l_{(n-1)}\subset \alpha^j_{(n)}. \label{5.5} \end{equation} In this paper, we will restrict ourselves to $n$-cells that can be realized as cubes of dimension $n$. Abstractly, a cubic $n$-cell $L_{(n)}$ is by definition the product \begin{equation} L_{(n)}=L^1\times L^2\times ...\times L^n \end{equation} of $n$ 1-cells $L^i$. In other words, a cell $\alpha \subset L_{(n)}$ is given by \begin{equation} \alpha=(\alpha^1,\alpha^2,...,\alpha^n) \end{equation} where $\alpha^i$ can be $L^i$ or one of its vertices. A cubic cell complex of dimension $n$ will be the union of cubic cells of dimension $n$. Given a cell complex $K$, one defines the space $C_n(K,{\bf Z})$ of linear combinations of $n$-cells with coefficients in ${\bf Z}$ \begin{equation} C_n(K,{\bf Z})=\left\{ \xi _{(n)} =\sum _i \lambda_i\alpha_{(n)}^i~:~~\lambda_i\in {\bf Z},~~~ \alpha^i_{(n)}\subset K\right\} \end{equation} The vectors $\xi _{(n)}$ are called $n$-chains. The direct sum of all $C_n(K,{\bf Z})$ will be denoted by $C(K,{\bf Z})$. The definition of orientation is related to a linear operator $$\partial :C_n(K,{\bf Z})\rightarrow C_{(n-1)}(K,{\bf Z}),$$ called the boundary operator. It is enough to define $\partial $ for the base elements $\alpha^i_{(n)}$. Intuitively, the boundary $\partial \alpha^i_{(n)}$ of an $n$-cell $\alpha^i_{(n)}$ has to do with its faces. In other words, it is a linear combination, with coefficients $\pm 1$, of all $(n-1)$-cells $\alpha^j_{(n-1)}$ such that $\alpha^j_{(n-1)}\subset \alpha^i_{(n)}$.
We define \begin{equation} \partial \alpha^i_{(n)}=0~~ \mbox{ if }n=0 \end{equation} and \begin{equation} \partial \alpha^i_{(n)}=\sum _j I_{nc}(\alpha^i_{(n)},\alpha^j_{(n-1)}) \alpha^j_{(n-1)}.\label{5.2} \end{equation} The coefficients $I_{nc}(\alpha^i_{(n)},\alpha^j_{(n-1)})=\pm 1$ are called the incidence numbers and they have to be assigned in such a way that \begin{equation} \partial \partial \xi=0~~~~~\mbox{for any }\xi \in C(K,{\bf Z}). \label{5.2.1} \end{equation} In terms of incidence numbers, (\ref{5.2.1}) is equivalent to \begin{equation} \sum _j I_{nc}(\alpha^i_{(n)},\alpha^j_{(n-1)})I_{nc} (\alpha^j_{(n-1)},\alpha^k_{(n-2)})=0.\label{5.2.2} \end{equation} It turns out that incidence numbers can be assigned recursively in a simple way, and this is what is used to define orientation. First, it is assumed that the boundary of a 1-cell $\alpha_{(1)}$ with faces $\alpha^1_{(0)}$ and $\alpha^2_{(0)}$ can only be $\pm (\alpha^2_{(0)}-\alpha^1_{(0)})$. In other words, for a given $i$ there are only two possibilities for $I_{nc}(\alpha_{(1)}^i,\alpha_{(0)}^j)$, and one is the negative of the other. Suppose now that all $I_{nc}(\alpha_{(1)}^i,\alpha_{(0)}^j)$ have been chosen for a given 2-cell $\alpha_{(2)}^k$. It is easy to see that there are only two possibilities for $I_{nc}(\alpha_{(2)}^k,\alpha_{(1)}^i)$ satisfying (\ref{5.2.2}) and one is the negative of the other. This is actually a general fact. Once the incidence numbers are chosen for the faces $\alpha_{(n-1)}^i$ of an $n$-cell $\alpha_{(n)}^k$, there are only two possible choices for $I_{nc}(\alpha_{(n)}^k,\alpha_{(n-1)}^i)$. {}From the above it follows that, once we find a possible configuration of incidence numbers, all the others can be obtained by a certain set of transformations. Let us introduce a function $g(\alpha)$ from $K(Y)$ to $\{-1,+1\}$. Given a possible configuration of incidence numbers $I_{nc}^0$ we define a new configuration $gI_{nc}^0$ \begin{equation} gI_{nc}^0(\alpha^i_{(r)},\alpha^j_{(r-1)})=g(\alpha^i_{(r)})~I_{nc}^0(\alpha^i_{(r)}, \alpha^j_{(r-1)})~g(\alpha^j_{(r-1)}). \label{t} \end{equation} It is clear that the $gI_{nc}^0$ satisfy (\ref{5.2.2}). Furthermore, all possibilities for $I_{nc}$ can be generated in this way, by starting from any $I_{nc}^0$. Now consider an $n$-dimensional complex $K(Y)$ associated with some $n$-manifold $Y$. What we call local orientations of $K(Y)$ is the freedom to choose independently $I_{nc}(\alpha^i_{(n)},\alpha^j_{(n-1)})$ at each $n$-cell. We say that two configurations $I_{nc}$ and $I_{nc}'$ define the same local orientation of $K(Y)$, or are equivalent $I_{nc}\sim I_{nc}'$, if they are related by a transformation (\ref{t}) with $g(\alpha_{(n)}^i)=1$ \[ I_{nc}\sim I_{nc}'~~\mbox{ iff }~~~I_{nc}'=gI_{nc}~~ \mbox{ for some $g$ such that }g(\alpha^i_{(n)})=1. \] A global orientation for $K(Y)$ appears when we start to compare the local orientations for neighboring $n$-cells. We say that the local orientation at $\alpha^1_{(n)}$ agrees with the local orientation at $\alpha^2_{(n)}$ iff \begin{equation} I_{nc}(\alpha^1_{(n)},f_{(n-1)})=-I_{nc}(\alpha^2_{(n)},f_{(n-1)}),\label{5.4} \end{equation} where $f_{(n-1)}$ is the unique common face. An $n$-dimensional complex, together with an orientation, is called oriented iff all the local orientations agree.
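As a simple illustration (the example, and the ad hoc labels $v^i$, $e^i$, are ours), consider a square 2-cell $\alpha_{(2)}$ with vertices $v^1,v^2,v^3,v^4$ listed cyclically and edges $e^i$ running from $v^i$ to $v^{i+1}$ (indices mod 4), so that $\partial e^i=[v^{i+1}]-[v^i]$. The choice $I_{nc}(\alpha_{(2)},e^i)=+1$ for all $i$ gives \[ \partial \partial \alpha_{(2)}=\sum _{i=1}^4\left([v^{i+1}]-[v^i]\right)=0, \] so (\ref{5.2.2}) is satisfied; the only other admissible choice is $I_{nc}(\alpha_{(2)},e^i)=-1$ for all $i$, corresponding to the opposite local orientation. If instead we reverse the boundary convention on a single edge, say $\partial e^1=[v^1]-[v^2]$, then (\ref{5.2.2}) forces $I_{nc}(\alpha_{(2)},e^1)=-1$, exactly as prescribed by the transformation (\ref{t}) with $g(e^1)=-1$ and $g=1$ elsewhere.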
A vector $\xi _{(n)}\in C_{(n)}(K,{\bf Z})$ of the form \begin{equation} \xi _{(n)}=\sum_i s_i\alpha_{(n)}^i~;~~~~s_i=\pm 1\label{vec} \end{equation} is interpreted as the oriented $n$-dimensional subcomplex of $K$ given by \[ \bigcup _i s_i\alpha_{(n)}^i, \] where the factors $s_i=\pm 1$ indicate orientation. If $\xi _{(n)}$ is globally oriented, then $\partial \xi _{(n)}$ is also globally oriented. Let $K^1$ and $K^2$ be two globally oriented cell complexes and $\partial _1$ and $\partial _2$ be boundary operators defined on $C(K^1,{\bf Z})$ and $C(K^2,{\bf Z})$. Consider an operator $\partial $ acting on \mbox{$C(K^1\times K^2,{\bf Z})$} in the following way \begin{equation} \partial (\alpha^1_{(m)},\alpha^2_{(n)})=(\partial _1\alpha^1_{(m)},\alpha^2_{(n)})+ (-1)^m (\alpha^1_{(m)},\partial _2\alpha^2_{(n)})\,.\label{5.6} \end{equation} It follows immediately that $\partial ^2=0$. Therefore $I_{nc}^1$ and $I_{nc}^2$ will define a configuration of incidence numbers $I_{nc}^1\times I_{nc}^2$ for $K^1\times K^2$. If $g_1I_{nc}^1$ and $g_2I_{nc}^2$ are two other incidence numbers equivalent to $I_{nc}^1$ and $I_{nc}^2$, a simple calculation shows that \begin{equation} g_1I_{nc}^1\times g_2I_{nc}^2=g_{(1\times 2)}I_{nc}^1\times I_{nc}^2 \end{equation} where $g_{(1\times 2)}\left((\alpha^1,\alpha^2) \right)=g_1(\alpha^1)g_2(\alpha^2)$. Therefore (\ref{5.6}) induces a canonical orientation on $K^1\times K^2$, called the product orientation. It is a simple exercise to verify that the product orientation is also global. A 1-cell, or link, $L^i$ is totally determined by its vertices $a^i$ and $b^i$. It is standard to write $[a^i,b^i]$ for $L^i$ and define \begin{equation} \partial [a^i,b^i]= [a^i]-[b^i]\label{315} \end{equation} Then $[b^i,a^i]$ is identified with $-L^i$. Therefore, the cube \begin{equation} L_{(4)}=([a^1,b^1],[a^2,b^2],[a^3,b^3],[a^4,b^4])\label{L4} \end{equation} has a standard set of incidence numbers determined by (\ref{315}) and (\ref{5.6}). Whenever we write a cubic cell as in (\ref{L4}), the standard incidence numbers are assumed. \sxn{The Discrete Model} The discretization is done by introducing a grid on the space $\Gamma$ of all surfaces. This allows us to write the functional integral (\ref{2.2}) as a sum. In other words, $\Gamma$ will be substituted by some discrete space $\Gamma_d$, where we can define an area $A[\sigma]$ and an intersection number $I[\sigma]$ for any configuration $\sigma\in \Gamma_d$. \subsxn{Space of Configurations}\label{s:2} The space of configurations we need to consider is given by the set of all immersions $\varphi $ of a 2-dimensional manifold $M$ in some 4-dimensional target space $X$.\footnote{We assume that $X$ has no boundary, but $M$ may have boundary components.} However, the action (Nambu-Goto) depends only on the area of the surface $s$ determined by $\varphi $. Any two immersions that give the same surface $s$ in $X$ are regarded as equivalent. The relevant set of configurations $\Gamma$ is then the set of all such surfaces. Evidently not all $s$ are submanifolds of $X$. They can be degenerate surfaces in the sense that they can fold on themselves, i.e., more than one point of $M$ can be mapped to the same point of $X$. Let us assume, for simplicity, that $X=\relax{\rm I\kern-.18em R}^4$. Consider the discrete lattice ${\bf Z}^4$ of points (vertices) $v_i\in X$ that have integer coordinates in units of some lattice spacing $a$.
It determines a cell decomposition $K(X)$ of $X$ where $K_{(0)}={\bf Z}^4$ and $K_{(n)}$ is the set of all $n$-dimensional elementary cubes determined by ${\bf Z}^4$. It is useful to think of $K(X)$ as the product of 1-dimensional complexes. According to the notation introduced at the end of Section 3, an arbitrary cell $\alpha\subset K(X)$ will be written as \begin{equation} \alpha=(\alpha^1,\alpha^2,\alpha^3,\alpha^4)\label{6.1} \end{equation} where the variable $\alpha^i$ can take the values $[p^i]$ (0-cell) or $[p^i,p^i+1]$ (1-cell), $p^i\in {\bf Z}$. They will be the base elements for $C(K(X),{\bf Z})$. Similarly one can also discretize the 4-dimensional torus ${\bf T}^4$ by taking $K_{(0)}=({\bf Z}_n)^4$. In this paper we will limit ourselves to these two cases. We are now in a position to define the discrete space $\Gamma_d$ that will be used to approximate the infinite-dimensional space of configurations $\Gamma$. The set $\Gamma_d$ will be a countable subset of $\Gamma$. A configuration $\sigma$ belongs to $\Gamma_d$ if the corresponding surface lies entirely on plaquettes of $K(X)$. Since the number of plaquettes is countable, so is the number of elements in $\Gamma_d$. Another way of interpreting $\Gamma_d$ is to think of a configuration $\sigma$ as the natural two-dimensional generalization of a random walk. Let us explain. Consider a random walker that starts at a vertex $v^0$ and then moves to a neighboring vertex $v^1$. The trajectory, or curve traversed by the random walker, can be specified by the oriented link $[v^0,v^1]$. The subsequent steps can be described by adding more links to one end of the curve. Eventually, the random walker may go to a vertex $v$ that has been visited before, and the curve self-intersects. In this case, the curve is no longer regular, in the sense that $v$ belongs to more than 2 links. Analogously, one starts to construct a surface by marking some 2d subcomplex $K^p\subset K(X)$, where $K^p$ is the union of $p$ plaquettes of $K(X)$. The surface $K^p$ is supposed to be regular in the sense that all 1-cells belong to at most 2 plaquettes. Alternatively, $K^p$ can also be written as a vector in $C_{(2)}(K(X),{\bf Z})$ \begin{equation} K^p=s_1\alpha_{(2)}^1+s_2\alpha_{(2)}^2+...+s_p\alpha_{(2)}^p, ~~~s_i=\pm 1, \label{6.2} \end{equation} where $\alpha_{(2)}^i$ are base elements of the form (\ref{6.1}). Now, one tries to add one more plaquette $s_{p+1}\alpha_{(2)}^{p+1}$ in such a way that it has at least one common link with $K^p$. Eventually, it may happen that some links now belong to more than 2 plaquettes, so that the resulting complex $K^{p+1}=K^p+s_{p+1}\alpha_{(2)}^{p+1}$ is not regular. The extreme case is when \begin{equation} \alpha_{(2)}^{p+1}=\alpha_{(2)}^{j}, ~~~\mbox{for some $\alpha_{(2)}^{j}$ in $K^p$}, \end{equation} meaning that $\alpha_{(2)}^{p+1}$ has been previously marked. The idea is to make $K^{p+1}$ regular by hand. One enlarges $K(X)$ by introducing a copy of the base element $\alpha_{(2)}^{p+1}$ and denoting it by $\bar{\alpha_{(2)}^{p+1}}$. Then, $K^{p+1}$ is defined to be the abstract complex given by \begin{equation} K^{p+1}=K^p+s_{p+1}\bar{\alpha_{(2)}^{p+1}}, \label{6.3} \end{equation} and it is regular by construction. The process is iterated a number of times. Eventually, it will be necessary to introduce many copies of a given element $\alpha_{(2)}^{i}$. They will be denoted by $\bar{\alpha_{(2)}^i}$, $\bar{\bar {\alpha_{(2)}^i}}$, etc.
A configuration $\sigma$ with area $n$ is any abstract cell complex $\sigma= K^n$ constructed as above such that $\sigma$ represents a cell decomposition of $M$. We can write $\sigma$ in the form \begin{equation} \sigma=\sum _{i=1}^l\beta^i, \label{s} \end{equation} where \[ \beta^i= s_i\alpha_{(2)}^i+\bar{s_i}\,\bar{\alpha_{(2)}^i}+ \bar{\bar{s_i}} \,\, \bar{\bar {\alpha_{(2)}^i}}+...~. \] Notice that a configuration $\sigma $ cannot in general be interpreted as a subcomplex of $K(X)$, i.e., a vector of the form (\ref{vec}). This happens only if $\sigma$ self-intersects on a subcomplex of dimension zero. There is a very useful map $\xi $ from $\Gamma_d $ to $C_{(2)}(K(X),{\bf Z})$. If $\sigma$ is as in (\ref{s}), then $\xi (\sigma)$ is defined to be the following vector in $C_{(2)}(K(X),{\bf Z})$ \begin{equation} \xi (\sigma)=\left(s_1+\bar{s_1}+\bar{\bar {s_1}}+...\right)\alpha_{(2)}^1 + \left(s_2+\bar{s_2}+\bar{\bar {s_2}}+...\right)\alpha_{(2)}^2 +...+ \left(s_l+\bar{s_l}+\bar{\bar {s_l}}+...\right)\alpha_{(2)}^l. \label{xis} \end{equation} In other words, all occurrences of $\bar{\alpha_{(2)}^i}$, $\bar{\bar{\alpha_{(2)}^i}}$, etc, in (\ref{s}) are replaced by $\alpha_{(2)}^i$. This map will be used in Section 4.3. Let $\gamma_i, (i=1,...,n)$ denote fixed loops on $K(X)$. The relevant observables are the $n$-point Green functions \begin{equation} Z_{n,m}(\gamma_1,...,\gamma_n;\lambda, \theta )=\sum _{\sigma\in \Gamma_d(\gamma_1,...,\gamma_n)} e^{-\lambda A[\sigma]+i\theta I[\sigma]}\label{f.1} \end{equation} The sum is done over the set $\Gamma_d(\gamma_1,...,\gamma_n)$ of all surfaces with $n$ holes and $m$ handles, such that \begin{equation} \partial \sigma=\bigcup _{i=1}^n\gamma_i. \end{equation} Some correlation functions play a special role in the analysis of the theory. For example, the string tension is defined by \begin{equation} \tau(\lambda,\theta )=\lim _{L,M\rightarrow \infty}\frac{1}{LM}\log {Z_{1,m} (\gamma_{LM};\lambda,\theta )}, \end{equation} where $\gamma_{LM}$ is a rectangular loop with $L\times M$ links. The simplest random surface model would be given by the sum over surfaces with fixed topology. Let us assume, for example, $X=\relax{\rm I\kern-.18em R}^4$ and surfaces with no handles. This gives us a generalization of the planar random surface model. \subsxn{Intersection Number}\label{se:4.2} As defined in Section \ref{se:2}, the self-intersection number involves the notion of transversality. We would like to have a definition of transversality for our discrete surfaces that is a natural generalization of the definition for continuous surfaces. Let us consider in $\relax{\rm I\kern-.18em R}^{(n+m)}$ two surfaces $s^1_{(m)}$ and $s^2_{(n)}$ of dimensions $m$ and $n$. Suppose they meet at a point $x$. We say that they are perpendicular if their tangent vectors are perpendicular. It is also equivalent to say that for a small neighborhood $U_x$ of $x$, the surfaces are flat and $U_x$ can be canonically identified with $s^1_{(m)}\times s^2_{(n)}$. If this is the case, $s^1_{(m)}$ and $s^2_{(n)}$ are surely transversal. If we are dealing with cubic cells this seems to be the natural notion of transversality. Let us make the idea more precise. We will use the convention that $\alpha^i$, $v^i$ and $l^i$ are variables related to the 1-dimensional cell $[p^i,p^i+1]$, $p^i\in {\bf Z}$, with the following ranges \begin{eqnarray} \alpha^i & = & [p^i],[p^i+1],[p^i,p^i+1];\nonumber \\ v^i & = & [p^i],[p^i+1]; \nonumber \\ l^i & = & [p^i,p^i+1].
\nonumber \end{eqnarray} Consider a cubic cell $\alpha_{(4)}=(l^1,l^2,l^3,l^4)$ as defined in Section 3. Let $V,L$ denote subcells of $\alpha_{(4)}$. Consider $V$ to be a 3d cube and $L$ a link such that they have one common vertex. For example, one can take \begin{eqnarray} V & = & (l^1,l^2,l^3,v^4)\nonumber \\ L & = & (v^1,v^2,v^3,l^4) \label{7.1} \end{eqnarray} sharing the vertex $(v^1,v^2,v^3,v^4)$. From (\ref{7.1}) we see that $V$ and $L$ are perpendicular in the obvious sense. Arbitrary cells $\omega_V$ and $\omega_L$ of $V$ and $L$ are of the form \begin{equation} \omega_V=(\alpha^1,\alpha^2,\alpha^3,v^4)~~\mbox{ and }~~\omega_L=(v^1,v^2,v^3,\alpha^4). \end{equation} There is a canonical one-to-one map $f$ between the product complex $V\times L$ and $\alpha_{(4)}$ given by \begin{equation} f\left((\omega_V,\omega_L)\right)=(\alpha^1,\alpha^2,\alpha^3,\alpha^4).\label{5.8} \end{equation} One can show that $f$ preserves the inclusion relations and that if $V\times L$ is oriented according to (\ref{5.6}), it also preserves orientation. Consequently $V\times L$ can be identified with $\alpha_{(4)}$. We will write \begin{equation} V\times L=\alpha_{(4)},\label{5.9} \end{equation} instead of $f(V\times L)=\alpha_{(4)}$ since the identification (\ref{5.8}) is canonical. The analogous identification of $L\times V$ with $\alpha_{(4)}$ does not preserve orientation, and we write \begin{equation} L\times V=-\alpha_{(4)}. \end{equation} The last two formulas can be generalized in an obvious way. First let us introduce some notation. Consider a cube $\alpha_{(n+m)}=(l^1,...,l^{n+m})$ of dimension $(n+m)$. We will write an $n$-dimensional subcell belonging to $\alpha_{(n+m)}$ as \begin{equation} \alpha_{[a_1a_2...a_n]}= [(l^{a_1},l^{a_2},...,l^{a_n}, v^{a_{n+1}},...,v^{a_{n+m}})], \end{equation} where the indices $a_i$ take values in $\{1,...,n+m\}$. The square brackets $[~.~]$ stand for the ordering of the indices $a_1,a_2,...,a_n$, or the ordered permutation of the objects with indices $a_1,a_2,...,a_n$. The notation indicates that $\alpha_{[a_1a_2...a_n]}$ has link components only in the directions $a_1,...,a_n$. In other words, a generic cell in $\alpha_{[a_1a_2...a_n]}$ is of the form \begin{equation} \omega_{[a_1...a_n]}= \left[(\alpha^{a_1},...,\alpha^{a_n}, v^{a_{n+1}},...,v^{a_{n+m}})\right]. \end{equation} Suppose now that we have another subcell $\alpha_{[b_1...b_m]}$ that shares exactly one vertex with $\alpha_{[a_1...a_n]}$. Consequently the two sets of indices $\{a_1...a_n\}$ and $\{b_1...b_m\}$ cannot have any elements in common. There is a canonical one-to-one map \mbox{$f:\alpha_{[a_1...a_n]}\times \alpha_{[b_1...b_m]}\rightarrow \alpha_{(m+n)}$} given by \begin{equation} f((\omega_{[a_1...a_n]},\omega_{[b_1...b_m]}))=\left[(\alpha^{a_1},...,\alpha^{a_n}, \alpha^{b_1},...,\alpha^{b_m})\right]. \end{equation} It is a simple matter to show that under this canonical identification we can write \begin{equation} \alpha_{[a_1...a_n]}\times\alpha_{[b_1...b_m]}= \epsilon _{[a_1...a_n][b_1...b_m]}\alpha_{(n+m)}\label{x} \end{equation} where $\epsilon _{i...k}$ is the usual Levi-Civita symbol. Observe that, for two arbitrary cells $\alpha_{(n)}$ and $\beta_{(m)}$, the product $\alpha_{(n)}\times \beta_{(m)}$ always makes sense as an abstract complex. However, its canonical identification with an $(n+m)$-cell $\alpha_{(n+m)}$ only makes sense if they belong to $\alpha_{(n+m)}$ and share a single vertex.
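For example (a simple check of (\ref{x}), worked out here for illustration): inside the cube $\alpha_{(4)}=(l^1,l^2,l^3,l^4)$, the plaquettes $\alpha_{[12]}=(l^1,l^2,v^3,v^4)$ and $\alpha_{[34]}=(v^1,v^2,l^3,l^4)$ share the single vertex $(v^1,v^2,v^3,v^4)$, and \[ \alpha_{[12]}\times \alpha_{[34]}=\epsilon _{[12][34]}\,\alpha_{(4)}=+\alpha_{(4)}, \] while for $\alpha_{[13]}=(l^1,v^2,l^3,v^4)$ and $\alpha_{[24]}=(v^1,l^2,v^3,l^4)$ one finds \[ \alpha_{[13]}\times \alpha_{[24]}=\epsilon _{[13][24]}\,\alpha_{(4)}=-\alpha_{(4)}, \] since $(1,3,2,4)$ is an odd permutation of $(1,2,3,4)$.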
Let $K(X)$ be the cubic cell decomposition of $\relax{\rm I\kern-.18em R}^4$ (or ${\bf T}^4$) and $\sigma $ a configuration in $\Gamma_d$. The first condition for $\sigma$ to be considered transversal is that it self-intersects only on vertices. In particular, $\sigma $ has to be a 2-dimensional subcomplex of $K(X)$. Let $v=([p^1],[p^2],[p^3],[p^4])$ be one of the vertices where the self-intersection occurs. We define a neighborhood $U_v$ of $v$ to be the union of all 4-cells $\alpha_{(4)}^k$ that contain $v$. An example of such a cell is $([p^1-1,p^1],[p^2-1,p^2],[p^3,p^3+1],[p^4,p^4+1])$. Since we are restricted to cell decompositions of $\relax{\rm I\kern-.18em R}^4$ (or ${\bf T}^4$), $U_v$ is the union of 16 4-cells. In other words, considered as a vector in $C_{(4)}(K(X),{\bf Z})$, $U_v$ is given by \begin{equation} U_v=\sum _{k=1}^{16} \alpha_{(4)}^k. \end{equation} Consider the 1-dimensional complex $L^i=[p^i-1,p^i]\cup [p^i,p^i+1]$, or \begin{equation} L^i=[p^i-1,p^i] + [p^i,p^i+1] \end{equation} made of 2 adjacent links. One can see that the neighborhood $U_v$ is the product of 4 such 1-dimensional complexes. In other words, \begin{equation} U_v=(L^1,L^2,L^3,L^4). \end{equation} Let $\sigma_v$ and $\sigma_v'$ be the two components of $\sigma\cap U_v$. We say that the intersection is transversal iff $\sigma_v$ and $\sigma_v'$ are of the form \begin{equation} \begin{array}{ccc} \sigma_v&=&s\left[(L^a,L^b,[p^c],[p^d])\right]\\ \\ \sigma_v'&=&s'\left[(L^c,L^d,[p^a],[p^b])\right] \end{array}~~~\mbox{with $[a,b,c,d]=1234~$ and $~s,s'=\pm 1$}.\label{7.2} \end{equation} In this case we can write \begin{equation} U_v=I[\sigma_v,\sigma_v']~\sigma_v\times \sigma_v'\,,\label{5.11} \end{equation} where the canonical identification is being used. The coefficient $I[\sigma_v,\sigma_v']=\pm 1$ is called the intersection number at $v$. From (\ref{x}), (\ref{7.2}) and (\ref{5.11}), it follows that \begin{equation} I[\sigma_v,\sigma_v']=ss'\epsilon _{[ab][cd]}.\label{ss} \end{equation} Finally, the self-intersection number $I[\sigma]$ is defined to be the sum of all intersection numbers \begin{equation} I[\sigma]=\sum_v I[\sigma_v,\sigma'_v].\label{5.13} \end{equation} Let $s$ be the continuous surface associated with a transversal $\sigma$. The surface $s$ is transversal in the usual sense; therefore $I[s]$ defined by (\ref{int}) can also be computed. It is a very simple exercise involving tangent vectors to show that $I[s]=I[\sigma]$.
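As a concrete example (ours, for illustration): suppose that near a self-intersection vertex $v$ the two sheets are $\sigma_v=+\left[(L^1,L^2,[p^3],[p^4])\right]$ and $\sigma_v'=+\left[(L^3,L^4,[p^1],[p^2])\right]$. Then (\ref{ss}) gives $I[\sigma_v,\sigma_v']=\epsilon _{[12][34]}=+1$. Reversing the orientation of one sheet, say $s'=-1$, flips the charge to $-1$, while sheets spanning the directions $(1,3)$ and $(2,4)$ carry the charge $ss'\epsilon _{[13][24]}=-ss'$. Summing such local charges over all self-intersection vertices yields $I[\sigma]$ according to (\ref{5.13}).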
A small deformation will be a process where $D_1$ is removed and substituted by another disk $D_2\subset K(X)$. We say that the resulting surface $\sigma_2$ is a continuous deformation of $\sigma_1$ iff $D_2-D_1$ is the boundary of a 3d complex $B\subset K(X)$, or \begin{equation} D_2-D_1=\partial B, \end{equation} where $B$ is topologically equivalent to a $3$-dimensional ball. It is clear that a minimal deformation happens when $B$ is a cube, $D_1$ is one of its plaquettes and $D_2$ is the union of the other 5 plaquettes. However, such a minimal deformation is not enough to generate all small deformations, as we will illustrate by an example. Let $D_1$ be the union of two adjacent plaquettes as in Fig. 1(a). Let us apply to each plaquette of $D_1$ the minimal deformation described above. The resulting surface (Fig. 1(b)) consists of two cubic boxes, open on the top, placed side by side. It does not correspond to a regular surface. Notice that it has 2 superposed plaquettes that are glued along the top link (see figure). Another alternative is to deform $D_1$ into $D_2'$ (Fig. 1(c)), a regular surface consisting of a single box open on the top. But for consistency, $D_2$ has to be homotopic to $D_2'$. It is clear that, to have a complete set of minimal deformations, we need to include another deformation rule: two superposed plaquettes glued along some of their links can be removed. Obviously, the links they do not share should remain. An example of the second type of elementary deformations is shown in Fig. 1(d). \begin{figure}[t] \begin{center} \unitlength=1.00mm \linethickness{0.8pt} \begin{picture}(147.00,120.00) \put(0.00,20.00){\line(0,1){25.00}} \put(0.00,45.00){\line(1,0){50.00}} \put(50.00,45.00){\line(0,-1){25.00}} \put(50.00,20.00){\line(-1,0){50.00}} \put(0.00,20.00){\line(1,0){25.00}} \put(25.00,20.00){\line(0,1){25.00}} \put(0.00,20.00){\line(1,0){25.00}} \put(0.00,45.00){\line(2,1){20.00}} \put(20.00,55.00){\line(1,0){50.00}} \put(50.00,45.00){\line(2,1){20.00}} \put(70.00,55.00){\line(0,-1){25.00}} \put(70.00,30.00){\line(0,0){0.00}} \put(70.00,30.00){\line(-2,-1){20.00}} \put(0.00,110.00){\line(1,0){50.00}} \put(50.00,110.00){\line(2,1){20.00}} \put(70.00,120.00){\line(-1,0){50.00}} \put(20.00,120.00){\line(-2,-1){20.00}} \put(0.00,110.00){\line(0,0){0.00}} \put(25.00,110.00){\line(2,1){20.00}} \put(127.00,110.00){\line(2,1){20.00}} \put(95.00,120.00){\line(-2,-1){20.00}} \put(75.00,110.00){\line(0,0){0.00}} \put(100.00,110.00){\line(2,1){20.00}} \put(75.00,110.00){\line(0,-1){25.00}} \put(75.00,85.00){\line(1,0){20.00}} \put(95.00,85.00){\line(1,5){5.00}} \put(100.00,110.00){\line(1,-5){5.00}} \put(127.00,85.00){\line(0,1){25.00}} \put(147.00,120.00){\line(0,-1){25.00}} \put(147.00,95.00){\line(-2,-1){20.00}} \put(95.00,120.00){\line(0,-1){8.00}} \put(80.00,45.00){\line(2,1){20.00}} \put(75.00,20.00){\line(1,5){5.00}} \put(80.00,45.00){\line(1,-5){5.00}} \put(75.00,20.00){\line(5,2){9.00}} \put(85.00,20.00){\line(2,1){20.00}} \put(105.00,30.00){\line(-1,5){5.00}} \put(120.00,20.00){\line(0,1){25.00}} \put(140.00,55.00){\line(0,-1){25.00}} \put(120.00,20.00){\line(2,1){20.00}} \put(120.00,45.00){\circle*{2.00}} \put(120.00,20.00){\circle*{2.00}} \put(140.00,30.00){\circle*{2.00}} \put(140.00,55.00){\circle*{2.00}} \put(35.00,74.00){\makebox(0,0)[cc]{(a)}} \put(110.00,74.00){\makebox(0,0)[cc]{(b)}} \put(110.00,5.00){\makebox(0,0)[cc]{(d)}} \put(35.00,5.00){\makebox(0,0)[cc]{(c)}} \put(20.00,55.00){\line(0,-1){9.00}} \put(45.00,46.00){\line(0,1){9.00}}
\put(95.00,85.00){\line(2,1){8.00}} \put(75.00,110.00){\line(1,0){51.94}} \put(95.00,120.00){\line(1,0){52.00}} \put(105.00,85.00){\line(1,0){21.94}} \put(120.00,120.00){\line(1,-4){1.90}} \put(108.00,36.00){\line(1,0){6.99}} \put(115.00,38.00){\line(-1,0){7.04}} \put(117.00,37.00){\line(-2,1){4.02}} \put(113.00,35.00){\line(2,1){4.00}} \put(5.00,105.00){\makebox(0,0)[cc]{{\large $D_1$}}} \put(80.00,80.00){\makebox(0,0)[cc]{{\large $D_2$}}} \put(5.00,15.00){\makebox(0,0)[cc]{{\large $D_2'$}}} \linethickness{0.4pt} \put(70.00,30.00){\line(0,0){0.00}} \put(125.00,95.00){\line(1,0){21.94}} \put(125.00,95.00){\line(-2,-1){20.00}} \put(125.00,95.00){\line(-1,6){2.00}} \put(75.00,85.00){\line(2,1){20.00}} \put(95.00,95.00){\line(1,0){20.00}} \put(95.00,95.00){\line(0,1){0.00}} \put(115.00,95.00){\line(-2,-1){9.97}} \put(115.00,95.00){\line(1,6){2.03}} \put(70.00,30.00){\line(-1,0){18.93}} \put(49.00,30.00){\line(-1,0){23.00}} \put(24.00,30.00){\line(-1,0){4.04}} \put(20.00,30.00){\line(-2,-1){20.00}} \put(20.00,30.00){\line(0,1){0.04}} \put(45.00,43.00){\line(0,-1){0.03}} \put(45.00,30.00){\line(-2,-1){19.93}} \put(95.00,95.00){\line(0,1){14.00}} \put(45.00,30.00){\line(0,1){14.00}} \put(20.00,30.00){\line(0,1){14.00}} \put(120.00,120.00){\line(-1,-5){1.99}} \end{picture} \end{center} {\footnotesize {\bf Fig. 1.} (a) is the disk $D_1$ made of two adjacent plaquettes. (b) is a possible continuous deformation $D_2$, where each plaquette of $D_1$ is minimally deformed. It consists of two cubic boxes, open on the top, placed side by side. The two superposed plaquettes are drawn slightly separated to make the picture clear. (c) is an alternative deformation $D_2'$ of $D_1$. The elementary deformation connecting $D_2$ and $D_2'$ is shown in (d).} \end{figure} \clearpage We recall that associated to each configuration $\sigma\in \Gamma_d$ there is a 2-chain \mbox{$\xi (\sigma)\in C_{(2)}(K(X),{\bf Z})$} given by (\ref{xis}). It turns out that small deformations have a very simple interpretation in terms of chains. If $\sigma_1$ and $\sigma_2$ differ by a minimal deformation of the type illustrated in Fig. 1(d), it follows from the definitions that $\xi (\sigma_1)-\xi (\sigma_2)$ is equal to zero. In general we have \begin{equation} \sigma_1\sim \sigma_2~~\mbox{ implies } ~~\xi (\sigma_2)= \xi (\sigma_1)+\partial B, \label{5.14} \end{equation} for some 3-chain $B$. Equation (\ref{5.14}) is the key to the invariance of $I[\sigma]$ under continuous deformations. In order to proceed it will be useful to introduce two kinds of products involving chains. The first one is a scalar product $\langle \cdot,\cdot \rangle $ on $C(K(X),{\bf Z})$. As usual it is enough to give the product for the basis elements. We define \begin{equation} \langle \alpha_{(m)}^i,\alpha_{(n)}^j\rangle = \delta _{mn}\delta _{ij}.\label{inter} \end{equation} The second one is a cross product. Let $\alpha_{(m)}^i$ and $\alpha_{(n)}^j$ be basis elements such that \mbox{$(m+n)=4$}. The cross product $\alpha_{(m)}^i\times \alpha_{(n)}^j$ will be given by (\ref{x}) if they belong to the same $4$-cell, and will be zero otherwise. For arbitrary chains $\xi _{(m)}^1$ and $\xi _{(n)}^2$ the product is extended by linearity. Let us regard the cell decomposition $K(X)$ as a vector in $C_{(4)}(K(X),{\bf Z})$. We can assume that all 4-cells $\alpha_{(4)}^i$ have a coherent orientation and therefore \begin{equation} K(X)=\sum _i \alpha_{(4)}^i, \label{eq1} \end{equation} where the sum is over all 4-cells.
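Both products are easy to realize concretely. In the following Python sketch (ours, for illustration only), a chain is a sparse map from basis cells to integer coefficients; the basis-level cross product rule of (\ref{x}) is supplied as a callback, and both products are extended by linearity:
\begin{verbatim}
from collections import defaultdict

def scalar(xi, eta):
    # <xi, eta>: basis cells are orthonormal, so we sum over common cells.
    return sum(c * eta.get(cell, 0) for cell, c in xi.items())

def cross(xi, eta, cross_cells):
    # Bilinear extension of the basis cross product.  cross_cells(a, b)
    # returns (four_cell, sign) when a and b span a common 4-cell and
    # share exactly one vertex, and None otherwise.
    out = defaultdict(int)
    for a, ca in xi.items():
        for b, cb in eta.items():
            hit = cross_cells(a, b)
            if hit is not None:
                cell4, sign = hit
                out[cell4] += sign * ca * cb
    return dict(out)

def self_intersection(K, xi, cross_cells):
    # The self-intersection number derived below: <K, xi x xi>/32.
    return scalar(K, cross(xi, xi, cross_cells)) / 32
\end{verbatim}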
Let $\sigma$ be a transversal configuration with $n$ plaquettes. In this case, the expression (\ref{xis}) reduces to \begin{equation} \xi (\sigma)= s_1\alpha_{(2)}^1+s_2\alpha_{(2)}^2+...+s_n\alpha_{(2)}^n.\label{eq2} \end{equation} Consider the product $\xi (\sigma)\times \xi (\sigma)$. Because of the way the cross product was defined, most of the $n^2$ terms in the expansion of $\xi (\sigma)\times \xi (\sigma)$ will be zero. There will be contributions only from plaquettes that share exactly one vertex, or in other words, from plaquettes that contain the intersection points $v$. It is not difficult to see that there will be 32 non-vanishing terms per intersection point $v$. From (\ref{x}), (\ref{7.2}) and (\ref{ss}) one can show that each term is equal to a 4-cell multiplied by the intersection number $I[\sigma_v,\sigma_v']$. Combining (\ref{eq1}) and (\ref{eq2}) with the previous observation, one can see that the oriented self-intersection number $I[\sigma]$ can be expressed as \begin{equation} I[\sigma]=\frac{1}{32}\langle K(X),\xi (\sigma)\times \xi (\sigma)\rangle .\label{7.3} \end{equation} The topological invariance of (\ref{7.3}) is a consequence of the identity \begin{equation} \langle K(X),\xi _{(2)} \times \partial \xi _{(3)}\rangle= \langle K(X),\partial \xi _{(2)} \times \xi _{(3)}\rangle , \label{7.4} \end{equation} where $\xi _{(2)}$ and $\xi _{(3)}$ are arbitrary 2-chains and 3-chains. Let us assume (\ref{7.4}) for the moment. Given $\sigma \sim \sigma'$, it follows from (\ref{5.14}) and (\ref{7.3}) that \begin{equation} I[\sigma']=\frac{1}{32} \langle K(X), \xi (\sigma) \times \xi (\sigma) \rangle + \frac{1}{16} \langle K(X), \xi (\sigma) \times \partial B\rangle + \frac{1}{32} \langle K(X), \partial B\times \partial B\rangle. \label{top0} \end{equation} Using the identity (\ref{7.4}) and $\partial ^2=0$ we have \begin{equation} I[\sigma']-I[\sigma]=\frac{1}{16} \langle K(X), \partial \xi (\sigma) \times B\rangle . \label{top} \end{equation} If $\sigma$ has no boundary, then $\partial \xi (\sigma)=0$ and $I[\sigma]=I[\sigma']$. When the surface $\sigma$ has a boundary, (\ref{7.3}) is still well defined, but it is no longer invariant under arbitrary deformations. One has to restrict to the class of deformations such that $\langle K(X),\partial \xi (\sigma) \times B\rangle=0$. For example, if the deformations of $\sigma$ occur far from its boundary, i.e., $B=0$ at the boundary of $\sigma$, the r.h.s. of (\ref{top}) obviously vanishes. The extension of $I[\sigma]$ to non-transversal configurations is now obvious. Given $\sigma$, one computes $\xi (\sigma)$ by formula (\ref{xis}) and uses (\ref{7.3}) to compute $I[\sigma]$. This is a well-defined procedure, since the r.h.s. of (\ref{7.3}) makes sense for an arbitrary vector $\xi (\sigma)$ in $C_{(2)}(K(X),{\bf Z})$. We would like to indicate how identity (\ref{7.4}) can be proven. It is enough to verify it for the basis elements in $C_{(2)}(K(X),{\bf Z})$ and $C_{(3)}(K(X),{\bf Z})$. In the notation of \mbox{Section \ref{se:4.2}}, let \begin{eqnarray} \xi _{(2)}&=&\left[ (l^a,l^b,v^c,v^d)\right] \nonumber \\ \xi _{(3)}&=&\left[ (\tilde l^a,\tilde l^j,\tilde l^k,\tilde v^i)\right]. \end{eqnarray} Notice that $\xi _{(2)}$ and $\xi _{(3)}$ must have link components in one common direction, given by the repeated index $a$. Let us first compute $\langle K,\partial \xi _{(2)}\times \xi _{(3)}\rangle$.
The only terms in $\partial \xi _{(2)}$ that contribute are \[ \epsilon ^{ab}\left[ ([p^a],l^b,v^c,v^d)\right]- \epsilon^{ab}\left[ ([p^a+1],l^b,v^c,v^d)\right]. \] After some algebra we have \begin{equation} \langle K(X),\partial \xi _{(2)}\times \xi _{(3)}\rangle = \epsilon^{ab}\epsilon^{[ajk]b}\left(\delta _{[p^a+1]\subset \tilde l^a}- \delta _{[p^a]\subset \tilde l^a} \right) ,\label{A} \end{equation} where $\delta _{[p^a]\subset \tilde l^a}$ equals one if $[p^a]\subset \tilde l^a$ and zero otherwise. Analogously, \begin{equation} \langle K(X),\xi _{(2)}\times \partial \xi _{(3)}\rangle = \epsilon^{a[jk]}\epsilon^{[ab][jk]}\left( \delta _{[\tilde p^a]\subset l^a}- \delta _{[\tilde p^a+1]\subset l^a}\right) .\label{B} \end{equation} The fact that $K(X)$ has no boundary has been used to derive the last two equations. A little thought shows that $\epsilon^{ab}\epsilon^{[ajk]b}=\epsilon^{a[jk]}\epsilon^{[ab][jk]}$. If $l^a$ and $\tilde l^a$ do not share any vertex, or if $l^a=\tilde l^a$, (\ref{A}) and (\ref{B}) are both zero. For the cases where they are adjacent, the r.h.s. of (\ref{A}) and (\ref{B}) give the same result. \sxn{Final Remarks} A discrete model of random surfaces with a topological term was introduced. The model is described by the Green functions $Z_{n,m}(\gamma_1,...,\gamma_n;\lambda ,\theta )$ defined by (\ref{f.1}). We have shown that the topological term $I[\sigma]$ is well defined and can be computed explicitly by formula (\ref{7.3}) for the cases where the target space $X$ is $\relax{\rm I\kern-.18em R}^4$ or the 4-torus ${\bf T}^4$. In principle, one can study the behavior of $Z_{n,m}(\gamma_1,...,\gamma_n;\lambda ,\theta )$ for a fixed number $m$ of handles. Let us examine the case $n=m=0$ and $X=\relax{\rm I\kern-.18em R}^4$. The partition function $Z_{0,0}$ is a sum over surfaces with the topology of $S^2$. Since $\sigma$ has no boundary, the corresponding chain $\xi (\sigma)$ is actually a cycle, i.e. \begin{equation} \partial \xi(\sigma)=0. \label{f.2} \end{equation} But $K(X=\relax{\rm I\kern-.18em R}^4)$ is homologically trivial, and all closed chains are also exact \cite{HW}. Therefore \begin{equation} \xi(\sigma)=\partial \omega \label{f.3} \end{equation} for some 3-chain $\omega \in C_{(3)}(K(X),{\bf Z})$. From (\ref{7.3}), (\ref{7.4}) and (\ref{f.3}) one sees immediately that \begin{equation} I[\sigma]=0.\label{f.4} \end{equation} Even though $\sigma$ can self-intersect at many points, the intersection numbers add up to zero\footnote{In particular, for transversal configurations the total number of intersection points has to be even.}. In the computation of $Z_{0,0}$, it does not matter if the $\theta $-angle is zero or not. In other words, \begin{equation} Z_{0,0}(\lambda,\theta )=Z_{0,0}(\lambda,0). \end{equation} Contrary to $Z_{0,0}$, the ``$n$-point'' Green functions $Z_{n,0}$ depend on $\theta $. For example, let us examine $Z_{1,0}$. The sum is now performed over the set $\Gamma_d(\gamma)$ of surfaces $\sigma$ with boundary $\gamma$ and no handles, in other words, surfaces with the topology of a disk. In contrast with (\ref{f.4}), one can easily show that there are surfaces in $\Gamma_d(\gamma)$ that have self-intersection numbers different from zero. Consider the following construction. Let $\sigma_0$ be a transversal surface with the topology of $S^2$. Suppose that $\sigma_0$ self-intersects at $2k$ points $v_i\in K(X)$. For each $v_i$ there is a corresponding pair of points $p_i$ and $p_i'$ in the abstract cell complex $\sigma_0$.
Let us associate a ``charge'' $\pm \frac12$ to each of the points $p_i$ and $p_i'$ according to whether the intersection number at $v_i$ is $\pm 1$. From (\ref{f.4}) it follows that the total ``charge'' is zero. Consider now a loop $\gamma$ dividing $\sigma_0$ into disks $\sigma_1$ and $\sigma_2$ with $\partial \sigma_1=\partial \sigma_2=\gamma$. Some pairs of points $p_i, p_i'$ will be completely contained in $\sigma_1$ or in $\sigma_2$, and some others will have one point in $\sigma_1$ and the other point in $\sigma_2$. (We assume that $\gamma$ does not touch any intersection.) Let us call $q_i$ the ``charge'' in $\sigma_i$ ($i=1,2$) due to the pairs that are not divided by $\gamma$, and $q_{12}$ the remaining ``charge''. Then \begin{equation} q_1+q_{12}+q_2=0.\label{c} \end{equation} The intersection number $I[\sigma_i]$ is obviously equal to $q_i$. Then, the contribution of $\sigma_1$ and $\sigma_2$ to $Z_{1,0}(\gamma;\lambda,\theta )$ is given by \begin{equation} e^{iq_1\theta }e^{-\lambda A[\sigma_1]}+ e^{iq_2\theta }e^{-\lambda A[\sigma_2]}. \label{f.5} \end{equation} In particular, if $\theta =\pi$, $q_1$ is even and $q_2$ is odd, the contribution (\ref{f.5}) reduces to \begin{equation} e^{-\lambda A[\sigma_1]}-e^{-\lambda A[\sigma_2]}. \end{equation} From (\ref{c}), one can see that $q_1$ and $q_2$ are integers such that $-k\leq q_1+q_2\leq k$. Therefore, $Z_{1,0}(\gamma;\lambda,\theta )$ does depend on $\theta $. Nothing much is known about the critical behavior of the model in the entire parameter space $(\lambda ,\theta )$, except for $\theta =0$. Unfortunately, for $\theta =0$ the continuum limit is trivial. The pathology of the model at $\theta =0$ is a consequence of the fact that the bare string tension has no zeros \cite{DFJ2}. However, due to (\ref{f.5}), the bare string tension \[ \tau(\lambda,\theta )=\lim _{\gamma\rightarrow \infty}\frac{1}{LM}\log {Z_{1,0}(\gamma_{LM};\lambda,\theta )} \] can have a radically different behavior for $\theta \neq 0$. It is conceivable that, for $\theta =\pi$, there are critical points where $\tau(\lambda ,\theta )$ does go to zero. This speculation about a nontrivial continuum limit deserves further investigation. \vspace{2cm} \noindent {\Large \bf Acknowledgments } I would like to thank M. Bowick for bringing to my attention the problem of the self-intersection number and for many useful discussions. I also would like to thank A.P. Balachandran, L. Chandar, E. Ercolessi, G. Harris and S. Vaidya for their comments and suggestions. This work was supported by the Department of Energy under contract number DE-FG-02-84ER40173. \newpage
\section{Introduction} The exploration of the Higgs sector is a primary focus of the LHC physics program, with measurements of the Higgs couplings to fermions and gauge bosons, the Higgs mass, and Higgs CP properties becoming ever more precise. Very little is known, however, about the Higgs tri-linear and quartic self-couplings, which are unambiguously predicted in the Standard Model (SM). The SM Higgs tri-linear coupling can be most sensitively probed by double Higgs production through gluon fusion, which unfortunately has a very small rate\cite{Plehn:1996wb}, even at high energy and high luminosity\cite{Frederix:2014hta}. The best current limit on double Higgs production is from the ATLAS experiment\cite{ATLAS-CONF-2016-049}, $\sigma(pp\rightarrow hh)/\sigma(pp\rightarrow hh)_{SM} < 29$, with prospects for only modest improvements at higher luminosity. A definitive measurement of the SM tri-linear Higgs self-coupling appears out of reach at the LHC\cite{Baur:2002qd,Dolan:2012rv,Baglio:2012np}. Given the small SM rate for double Higgs production, it is an excellent place to search for Beyond the SM (BSM) physics. In the presence of a scalar resonance coupling to the SM-like Higgs boson, the double Higgs rate can be significantly enhanced. This can occur in the MSSM and the NMSSM, for example. The simplest possibility is to add a hypercharge-$0$ real scalar to the model which interacts with SM fermions and gauge bosons only through mixing with the Higgs doublet. The LHC phenomenology of the real singlet model has been extensively studied in the literature\cite{Barger:2014taa,Barger:2007im,Profumo:2007wc,Pruna:2013bma,Chen:2014ask,Dawson:2015haa,Lewis:2017dme,No:2013wsa}. When the most general scalar potential (without the imposition of a $Z_2$ symmetry) is considered, the real singlet model can have a first order electroweak phase transition\cite{Espinosa:2011ax,Profumo:2014opa,Curtin:2014jma,Chen:2017qcz} for some values of the parameters. The complex scalar singlet extension has new features beyond the real singlet case. It has several phases, $2$ of which can accommodate a dark matter candidate\cite{Coimbra:2013qq,Gonderinger:2012rd}. In the broken phase of this model (which is the subject of this work) there are $3$ neutral scalar particles which mix to form the mass eigenstates, one of which is the $125~GeV$ scalar. Final states with $2$ scalar particles of different masses can be resonantly produced in this scenario, and there are large regions of parameter space where the couplings of the new scalars to SM particles are highly suppressed, so that the dominant production mechanism for the new scalars is the decay of one Higgs-like particle into others. The resonant production of two different mass Higgs particles is a smoking gun for this class of theories. We study the most general case of a complex scalar singlet extension of the SM, without the introduction of any new symmetries for the potential. The complex singlet model has previously been studied with a softly broken $U(1)$ symmetry imposed, and benchmark points have been described for the study of the decay of the heavy scalar to the SM Higgs boson and the lighter scalar of the model\cite{Costa:2015llh,Muhlleitner:2017dkd}. The parameter space of the model we study is larger, allowing for new phenomenology. The basic features of the model are discussed in Sec. \ref{sec:model} and the limits on the model from perturbativity, unitarity and the oblique parameters are presented in Sec. \ref{sec:lims}.
Our most interesting results are the implications for double Higgs studies and the description of scenarios where one of the new Higgs bosons is predominantly produced in association with the $125~GeV$ boson. This is discussed in Sec. \ref{sec:hh}. \section{Model} \label{sec:model} We consider a model containing the SM $SU(2)$ doublet, $\Phi$, and a complex scalar singlet, $S_c$. Since $S_c$ has hypercharge $0$, it does not couple directly to SM fermion or gauge fields, and its tree level interactions with SM fermions and gauge bosons result entirely from mixing with $\Phi$. The most general renormalizable scalar potential is\cite{Barger:2008jx}, \begin{eqnarray} {\cal V}(\Phi,S_c)&=& {\mu^2\over 2}\Phi^\dagger\Phi+{\lambda\over 4}(\Phi^\dagger\Phi)^2 +\biggl({1\over 4} {\delta_1} \Phi^\dagger\Phi S_c +{1\over 4}\delta_3 \Phi^\dagger\Phi S_c^2 +a_1S_c \nonumber \\ && +{1\over 4} b_1 S_c^2 +{1\over 6} e_1S_c^3 +{1\over 6} e_2S_c\mid S_c\mid^2 +{1\over 8}d_{1}S_c^4 +{1\over 8}d_{3}S_c^2\mid S_c\mid^2 + h.c.\biggr) \nonumber \\ && +{1\over 4}d_2(\mid S_c\mid^2)^2 +{\delta_2\over 2} \Phi^\dagger \Phi \mid S_c\mid^2 +{1\over 2}b_2\mid S_c\mid^2 \, , \label{eq:vdef} \end{eqnarray} where $a_1,b_1,e_1,e_2,d_1,d_3, \delta_1$ and $\delta_3$ are complex. After spontaneous symmetry breaking, in unitary gauge, \begin{equation} \Phi=\left(\begin{matrix}0\\{h+v\over\sqrt{2}}\end{matrix}\right),\quad S_c={1\over\sqrt{2}}\biggl(S+v_S+i(A+v_A)\biggr)\, . \end{equation} Since we have included all allowed terms in Eq. \ref{eq:vdef}, the coefficients can always be redefined such that $v_S=v_A=0$. This makes the potential of Eq. \ref{eq:vdef} identical to that obtained by adding $2$ real singlets to the SM, and there is no CP violation. Previous work\cite{Costa:2015llh,Barger:2008jx} imposed a global $U(1)$ symmetry or a $Z_2$ symmetry to eliminate some of the terms in the potential, making the shift to $v_S=v_A=0$ in general not possible. The mass eigenstate fields $h_1,h_2,h_3$ (with masses $m_1,m_2,m_3$) are found from the rotation, \begin{equation} \left( \begin{matrix} h_1\\ h_2\\ h_3\\ \end{matrix} \right)= {V} \left( \begin{matrix} h\\ S\\ A\\ \end{matrix} \right)\, , \label{eq:massbasis} \end{equation} where $V$ is a $3\times 3$ orthogonal matrix with, \begin{equation} {V}\equiv\left ( \begin{matrix} c_1 & -s_1 c_3 & -s_1s_3\\ s_1c_2 & c_1c_2 c_3- s_2 s_3 & c_1c_2s_3+s_2c_3 \\ s_1s_2 & c_1s_2c_3+c_2s_3 & c_1s_2s_3-c_2c_3 \\ \end{matrix}\right) \label{eq:ckm} \end{equation} and we abbreviate $c_i=\cos\theta_i$, etc. Note that the phase usually associated with the CKM-like mixing matrix does not appear, since the mass matrix in terms of the real fields $h$, $S$, and $A$ is real and symmetric. Since all allowed terms are included in Eq. \ref{eq:vdef}, we are free to perform a field redefinition $S_c \rightarrow S_c e^{i\phi}$ while leaving the form of the potential unchanged. We choose to take $S_c \rightarrow S_c e^{i\theta_3}$.
This results in the field redefinitions, \begin{equation} \label{eq:phaserot} \left( \begin{matrix} h\\ S\\ A\\ \end{matrix} \right)\rightarrow \left( \begin{matrix} 1 & 0 & 0 \\ 0 & c_3 & -s_3 \\ 0 & s_3 & c_3 \\ \end{matrix} \right) \left( \begin{matrix} h\\ S\\ A\\ \end{matrix} \right)\, , \end{equation} which, when combined with Eqs.~\ref{eq:massbasis} and \ref{eq:ckm} via matrix multiplication, leads to the simplified mixing matrix, \begin{equation} {V} \rightarrow \left ( \begin{matrix} c_1 & -s_1 & 0\\ s_1c_2 & c_1c_2&s_2 \\ s_1s_2 & c_1s_2 & -c_2 \\ \end{matrix}\right)\, . \label{eq:ckmreduced} \end{equation} So we see that performing a suitable phase rotation is equivalent to setting $\theta_3=0$. For the rest of the paper, we use this convention to eliminate $\theta_3$. We take as inputs to our scans, \begin{equation} v=246~GeV, m_1=125~GeV, m_2,m_3,\theta_1, \theta_2, \delta_2, \delta_3, d_1, d_2, d_3, e_1, e_2\, \label{eq:parms} \end{equation} where $\delta_3,d_1,d_3,e_1$ and $e_2$ can be complex and are defined in Eq. \ref{eq:vdef}. The SM-like Higgs boson is identified with $h_1$ with $m_1=125~GeV$. The couplings of $h_1$ to SM particles are suppressed by a factor $c_1$ relative to the SM rate. The states are ordered according to their couplings to SM particles: $h_1$ has the strongest couplings to SM particles, the $h_2$ couplings are suppressed by $s_1c_2$ relative to the SM couplings, and the $h_3$ couplings are the smallest, suppressed by $s_1s_2$ relative to SM couplings. The mass ordering of $h_2$ and $h_3$ is arbitrary. The ATLAS experiment restricts the value of $c_1$ to be, \begin{equation} c_1=\mid V_{11}\mid~ >~ 0.94\, , \end{equation} at $95\%$ confidence level using Run-1 Higgs coupling fits\cite{Aad:2015pla}. Similarly, a global fit to Higgs coupling strengths by CMS and ATLAS\cite{Khachatryan:2016vau}, \begin{equation} \mu=1.09\pm 0.11\, , \end{equation} yields an identical limit on $c_1$. \section{Limits from Perturbativity, Oblique Parameters and Unitarity} \label{sec:lims} The parameters of the model must satisfy constraints from electroweak precision measurements, searches for heavy Higgs bosons, and limits from perturbative unitarity, along with the restrictions from single Higgs production discussed in the previous section. Fits to the oblique parameters place strong limits on the allowed scalar masses and mixings. Analytic results for a model with 2 additional scalar singlets are given in Ref. \cite{Dawson:2009yx}. For $m_i \gg M_W, M_Z$, the approximate contributions are, \begin{eqnarray} \Delta {\cal S}&\sim&( 1-\mid V_{11}\mid^2) {\cal S}_{SM} +{1\over 12 \pi} \Sigma_{i=1,2,3}\mid V_{i1}\mid^2 \log\biggl({m_i^2\over m_1^2}\biggr) \nonumber \\ \Delta {\cal T}&\sim &(1-\mid V_{11}\mid^2){\cal T}_{SM} -{3\over 16 \pi c_W^2} \Sigma_{i=1,2,3}\mid V_{i1}\mid^2\log\biggl({m_i^2\over m_1^2}\biggr) \nonumber \\ \Delta {\cal U}&\sim &(1-\mid V_{11}\mid^2){\cal U}_{SM} \, . \label{eq:loglim} \end{eqnarray} The restrictions from the oblique parameters\cite{deBlas:2016ojx} on $V_{21}=s_1c_2$ for the minimum value of $c_1=.94$ allowed by single Higgs production are shown on the LHS, and for $c_1=.96$ on the RHS, of Fig. \ref{fg:stu}. TeV scale masses require quite small values of $V_{21}$, which is the parameter that determines the coupling of $h_2$ to SM particles. The flat portions of the curves for small $m_2$ in Fig. \ref{fg:stu} represent the imposed limit on $\theta_1$ from single Higgs production.
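The approximate shifts in Eq.~\ref{eq:loglim} are straightforward to evaluate numerically. The following Python sketch is ours; the SM reference terms ${\cal S}_{SM}$ and ${\cal T}_{SM}$ are left as inputs, and all numerical values are illustrative assumptions rather than a fit:
\begin{verbatim}
import numpy as np

def delta_S_T(V, m, S_SM=0.0, T_SM=0.0, cW2=0.77):
    # Heavy-mass approximation of the oblique shifts; V is the 3x3
    # mixing matrix, m the scalar masses in GeV, cW2 = cos^2(theta_W).
    logs = np.log(m**2 / m[0]**2)
    w = np.abs(V[:, 0])**2                       # |V_{i1}|^2
    dS = (1 - abs(V[0, 0])**2) * S_SM + np.sum(w * logs) / (12 * np.pi)
    dT = (1 - abs(V[0, 0])**2) * T_SM \
         - 3 * np.sum(w * logs) / (16 * np.pi * cW2)
    return dS, dT

# Illustration: cos(theta_1)=0.94, theta_2=0, m_2=500 GeV, m_3=250 GeV
c1 = 0.94
s1 = np.sqrt(1 - c1**2)
V = np.array([[c1, -s1, 0.0], [s1, c1, 0.0], [0.0, 0.0, -1.0]])
print(delta_S_T(V, np.array([125.0, 500.0, 250.0])))
\end{verbatim}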
As this limit becomes stronger, the limits from the oblique parameters become less and less relevant. As the $h_1$ couplings become more and more SM-like ($\theta_1\rightarrow 0$), the allowed coupling of $h_2$ to SM particles becomes highly suppressed. The constraints from the oblique parameters shown in Fig. \ref{fg:stu} are consistent with those obtained in the real singlet model in Ref. \cite{Pruna:2013bma}. For the values of $\theta_2$ allowed by Fig. \ref{fg:stu}, the direct searches $pp\rightarrow h_{2}(h_3) \rightarrow W^+W^-$ do not provide additional restrictions on $V_{21}$\cite{Aaboud:2017gsl,Khachatryan:2015cwa}. \begin{figure} \centering {\includegraphics[width=0.48\textwidth]{oblique-eps-converted-to.pdf}} {\includegraphics[width=0.48\textwidth]{oblique_2-eps-converted-to.pdf}} \caption{Limits on $m_2$ for allowed couplings of $h_1$ to SM particles [$\cos\theta_1=.94$ (LHS) and $\cos\theta_1=.96$ (RHS)] for various values of $m_3$, using the oblique parameter ($\cal {S,T,U}$) limits of Ref. \cite{deBlas:2016ojx}. \label{fg:stu} } \end{figure} \begin{figure} \centering {\includegraphics[width=0.48\textwidth]{wmass-eps-converted-to.pdf}} \caption{Maximum allowed value of $V_{21}$ from the $W$ mass measurement as a function of $m_2$ in the real singlet model and in the complex singlet model with $\theta_2=0$, from Ref. \cite{Lopez-Val:2014jva}. \label{fg:wmass} } \end{figure} In the real singlet model, much stronger constraints are placed on the parameters from the $W$ boson mass than from the oblique parameters\cite{Chalons:2016lyk,Lopez-Val:2014jva}. For example, in the real singlet model for $m_{2}=1~TeV$, the $W$ mass measurement requires $\mid V_{21}\mid < .19$. For $\theta_2=0$, $h_3$ does not couple to SM particles and the results of Refs. \cite{Chalons:2016lyk,Lopez-Val:2014jva} can be applied directly to the complex singlet case. The results of Ref. \cite{Lopez-Val:2014jva} are shown in Fig. \ref{fg:wmass}. The calculation of the limit from the $W$ mass in the complex singlet model for non-zero $\theta_2$ is beyond the scope of this paper; it involves contributions from all $3$ Higgs bosons and could potentially yield interesting limits. The limits from the oblique parameters in the complex singlet case (Fig. \ref{fg:stu}) demonstrate that the dependence of the limits on $m_3$ is non-trivial. The quartic couplings in the potential are strongly limited by the requirement of perturbative unitarity of the $2\rightarrow 2$ scattering processes\cite{Lee:1977eg}. We compute the $J=0$ partial waves, $a_0$, in the high energy limit where only the quartic couplings contribute and require $\mid a_0\mid < {1\over 2}$. The contributions from the tri-linear couplings are suppressed at high energy and do not contribute in this limit. For example, we find the restriction from the process $(SS)/ \sqrt{2}\rightarrow (SS) / \sqrt{2}$, \begin{equation} Re(d_1+d_2+d_3)\lesssim {32\pi\over 3}\, . \end{equation} Similarly, from $hS\rightarrow hS$, we find, \begin{equation} Re(\delta_2+\delta_3) \lesssim 16 \pi\, .
\end{equation} Considering the basis of neutral CP-even two-particle scattering states, \begin{equation} \biggl\{ \omega^+\omega^-\,, {zz\over \sqrt{2}}\, , {hh\over \sqrt{2}}\, ,hS\, , hA\, , {SS\over \sqrt{2}}\,, {AA\over \sqrt{2}}\,, A S\biggr\} \, , \end{equation} ($\omega^\pm,z$ are the Goldstone bosons), we find the generic upper limits on the real and imaginary quartic couplings, \begin{eqnarray} Re(d_i), Im(d_i)&\lesssim & {32\pi\over 3}\,,\quad i=1,2,3\nonumber \\ \delta_2, Re(\delta_3), Im(\delta_3) &\lesssim & 16 \pi\, . \end{eqnarray} These upper limits are conservative bounds; more stringent bounds are obtained from the eigenvalues of the $8\times 8$ scattering matrix. These upper bounds on the parameters involve finding solutions to higher order polynomials and do not have simple analytic solutions. Thus, the bounds from perturbative unitarity are determined numerically and imposed in the scans of the next section. The tri-linear Higgs couplings depend on the scalar masses and could potentially become large. In the limit of small mixing, $\theta_1\ll 1$ and $\theta_2=0$, the $h_2 h_1 h_1$ coupling is, \begin{equation} \lambda_{211}\rightarrow \sin\theta_1\biggl\{ {2m_1^2\over v} \biggl(1+{m_2^2\over 2 m_1^2}\biggr) -v\biggl(\delta_2+Re(\delta_3)\biggr) \biggr\}\, ,~ \text{small angle limit} \label{eq:small} \end{equation} and we see that the growth of $\lambda_{211}$ with large $m_2$ is mitigated by the $\sin\theta_1$ suppression. The decay width for $h_2\rightarrow h_1h_1$ is\cite{Chen:2014ask}, \begin{equation} \Gamma(h_2\rightarrow h_1 h_1) ={\lambda_{211}^2\over 32 \pi m_2} \sqrt{1-{4m_1^2\over m_2^2}}\, . \end{equation} In Fig. \ref{fg:lamfig}, we have taken all parameters real and scanned over $-5 < \delta_2,\delta_3<5$ for fixed $m_3$, $\theta_1$ and $\theta_2$. The dependence on $e_1$ and $e_2$ is minimal in the small angle limit, as demonstrated in Eq. \ref{eq:small}. In all cases, we have $\Gamma(h_2\rightarrow h_1 h_1) \ll m_2$, showing that there is no problem with the tri-linear couplings becoming non-perturbative in the small angle limit. Increasing the range we scan over changes the numerical results, but $\Gamma(h_2\rightarrow h_1 h_1)/m_2$ is always $\ll 1$. \begin{figure} \centering {\includegraphics[width=0.48\textwidth]{lam211-eps-converted-to.pdf}} \caption{ Decay width for $h_2\rightarrow h_1 h_1$ when all parameters are taken real and $\delta_2$ and $\delta_3$ are scanned over. \label{fg:lamfig} } \end{figure} Finally, we require that the parameters correspond to an absolute minimum of the potential. This has been extensively studied for the real singlet model in Refs. \cite{Lewis:2017dme,Espinosa:2011ax,Coimbra:2013qq}, and analytic results have been derived. For the case of the complex singlet, we scan over parameter space for numerically allowed values of the parameters\cite{Gonderinger:2012rd} and do not obtain an analytic solution. \section{Results} \label{sec:hh} In the limit $\theta_2 \rightarrow 0$ (as suggested by the single Higgs rates), the scalar $h_3$ does not couple directly to SM particles and can only be observed through di-Higgs production. We will consider $h_3$ to be in the $100-400~GeV$ mass range. The largest production rate at the LHC is through the resonant process $gg\rightarrow h_2 \rightarrow h_1 h_3$. The complex singlet model is thus an example of new physics that will first be seen in the study of di-Higgs resonances\cite{Muhlleitner:2017dkd,Bowen:2007ia}. We perform a scan over the parameters of Eq.
\ref{eq:parms}, subject to the restrictions discussed in the previous section\footnote{For the complex singlet model with a $U(1)$ symmetry, a comparable scan can be performed using the program ScannerS\cite{Coimbra:2013qq}.}. We always fix $c_1=0.94$ and consider the two cases $\theta_2=0$ and $\theta_2={\pi\over 12}$. For the allowed parameter space, we compute the amplitude for $gg\rightarrow h_1 h_3$ via the diagrams shown in Fig. \ref{fg:diags}. Analytic results in the context of the MSSM are given in Ref. \cite{Plehn:1996wb}. We use the central NLO LHAPDF set\cite{Butterworth:2015oua,Buckley:2014ana}, with $\mu_R=\mu_F=M_{hh}$\footnote{$M_{hh}^2\equiv (p_{h_1}+p_{h_3})^2$.}. In Fig. \ref{fg:mhh}, we show the invariant $M_{hh}$ spectrum for resonant $h_1 h_3$ production compared to the SM $h_1 h_1$ spectrum at $13~TeV$. The complex singlet model curves are more sharply peaked than those of the SM and demonstrate a significant enhancement of the rate relative to the SM double Higgs rate for the parameters we have chosen. The spectrum has only a small dependence on $\theta_2$, visible at high $M_{hh}$. We have included a finite width for $h_2$ in the calculation: for $m_2=400~GeV$ and $m_3=130~GeV$, the width is quite large, $\Gamma_2=263~GeV$ ($\theta_2=0$) and $\Gamma_2=295~GeV$ ($\theta_2=\pi/12$)\footnote{ The parameters of the $\theta_2=0$ curve on the LHS of Fig. \ref{fg:mhh} are, for example, $\delta_2=21.6$, $Re(\delta_3)= -14.5$, $Im(\delta_3)= -22.9$, $Re(d_1)=1.15$, $Im(d_1)=1.64$, $d_2= 13.3$, $Re(d_3)= 10.5$, $Im(d_3)= 10.2$, $Re(e_1)= 1.18v$, $Im(e_1)= -2.66v$, $Re(e_2)=-8.29v$, $Im(e_2)= 3.67v$. These parameters correspond to $\lambda_{211}= -2.9v$, $\lambda_{311}= 6.77v$, $\lambda_{321}= -11.1v$ and $\lambda_{331}= 11.2v$.}. We have included the width using the Breit-Wigner approximation, although typically $\Gamma_2/m_2\sim {\cal{O}}({1\over 2})$. The shoulder due to the width is clear on the LHS of Fig. \ref{fg:mhh}. There is a smaller width for $h_2$ when $m_3$ is increased to $250~GeV$: $\Gamma_2=129~GeV$ ($\theta_2=0$) and $\Gamma_2=137~GeV$ ($\theta_2=\pi/12$) on the RHS of Fig. \ref{fg:mhh}. The widths are calculated by scaling the SM results from Ref. \cite{Dittmaier:2011ti} with the appropriate mixing angles and adding the relevant partial widths for $h_i\rightarrow h_j h_k$. \begin{figure} \centering {\includegraphics[width=0.48\textwidth]{squarediagram-eps-converted-to.pdf}} {\includegraphics[width=0.48\textwidth]{trianglediagram-eps-converted-to.pdf}} \caption{ Feynman diagrams for the production of $h_j h_k$, $j,k=1,2,3$. \label{fg:diags} } \end{figure} \begin{figure} \centering {\includegraphics[width=0.48\textwidth]{dsigmhh_400_130-eps-converted-to.pdf}} {\includegraphics[width=0.48\textwidth]{dsigmhh_400_250-eps-converted-to.pdf}} \caption{ $M_{hh}$ spectrum of the complex singlet model production of $h_1h_3$ from the resonant exchange of $h_2$. The dominant contribution in the loops is from the top quark. \label{fg:mhh} } \end{figure} In Figs. \ref{fg:rates} and \ref{fg:rates27}, we show mass regions where the rate for $h_1h_3$ production is significantly enhanced relative to SM $h_1h_1$ production. This enhancement can be traced to the relatively large values of the tri-linear Higgs couplings defined from Eq. \ref{eq:vdef}, \begin{equation} {\cal V}\rightarrow {1\over 2}\lambda_{211}h_1^2 h_2+{1\over 2}\lambda_{311} h_1^2h_3+{1\over 2}\lambda_{331} h_1 h_3^2 +\lambda_{321}h_1h_2h_3+\cdots\, , \end{equation} that are allowed by the imposed restrictions.
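To give a feeling for the numbers, the small-angle coupling of Eq. \ref{eq:small} and the $h_2\rightarrow h_1 h_1$ width quoted above can be evaluated directly. The Python sketch below is ours; the parameter values are chosen only for illustration, and only the $h_1h_1$ partial width is computed (the full $\Gamma_2$ also contains the rescaled SM partial widths):
\begin{verbatim}
import numpy as np

v = 246.0  # GeV

def lambda_211(theta1, m1, m2, delta2, re_delta3):
    # Small-mixing h2-h1-h1 coupling (theta_2 = 0 limit).
    return np.sin(theta1) * (2 * m1**2 / v * (1 + m2**2 / (2 * m1**2))
                             - v * (delta2 + re_delta3))

def width_h2_h1h1(lam, m1, m2):
    # Gamma(h2 -> h1 h1) = lam^2/(32 pi m2) * sqrt(1 - 4 m1^2/m2^2)
    return lam**2 / (32 * np.pi * m2) * np.sqrt(1 - 4 * m1**2 / m2**2)

theta1 = np.arccos(0.94)          # the minimum mixing allowed above
lam = lambda_211(theta1, 125.0, 400.0, delta2=1.0, re_delta3=1.0)
print(width_h2_h1h1(lam, 125.0, 400.0))  # partial width in GeV
\end{verbatim}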
In the SM, the $hhh$ coupling is fixed by $m_h$, whereas here, the trilinear couplings of the potential are relatively unconstrained. In Fig. \ref{fg:tri}, we show the region of parameter space allowed by limits on the oblique parameters, perturbative unitarity, and the minimization of the potential where the $h_1h_1 h_1$ tri-linear coupling is greater than $5$ times the SM value. This enhancement of the tri-linear scalar coupling requires rather light values of $m_2$ and $m_3$ as shown in Fig. \ref{fg:tri}. In roughly the same region as shaded in Fig. \ref{fg:tri}, the $h_2h_1h_1$ and $h_3h_2h_1$ couplings are $8$ times the SM $h_1h_1h_1$ coupling. This enhancement is consistent with the results of Ref. \cite{Costa:2015llh} in the complex singlet model with a global $U(1)$ symmetry imposed on the potential. The cut-offs on the high $m_2$ ends of the plots on the LHS in Figs. ~\ref{fg:rates} and \ref{fg:rates27} are due to the oblique parameter restrictions in the non-zero $\theta_2$ mixing scenario. The same results for $\sqrt{S}=27$ and $100$ TeV are shown in Fig. \ref{fg:rates27}. At all energies there is a significant region of phase space where the $h_1h_3$ rate is large, relative to SM double Higgs production. For $m_3>250~GeV$, the dominant decay chain from $h_1 h_3$ production will be $h_1h_3\rightarrow h_1h_1 h_1\rightarrow (b{\overline b}) ( b{\overline b}) ( b{\overline b})$. For $m_3 < 2 m_1$, $h_3$ will decay through the extremely small couplings to SM particles and through the off-shell decay $h_3\rightarrow h_1 h_1^*\rightarrow h_1 f {\overline f}$ and will be extremely long lived. In the limiting case where $\theta_2=0$, the only allowed decay for $h_3$ is the off-shell decay chain through the couplings to $h_1$. \begin{figure} \centering {\includegraphics[width=0.48\textwidth]{rates-eps-converted-to.pdf}} {\includegraphics[width=0.48\textwidth]{rates_nomix-eps-converted-to.pdf}} \caption{Regions of parameter space allowed by limits on oblique parameters, perturbative unitarity, and the minimization of the potential where the rate for $h_1h_3$ production is significantly larger than the SM $h_1h_1$ rate at $\sqrt{S}=13~TeV$. \label{fg:rates} } \end{figure} \begin{figure} \centering {\includegraphics[width=0.48\textwidth]{rates_mix_27-eps-converted-to.pdf}} {\includegraphics[width=0.48\textwidth]{rates_mix_100-eps-converted-to.pdf}} \caption{Regions of parameter space allowed by limits on oblique parameters, perturbative unitarity, and the minimization of the potential where the rate for $h_1h_3$ production is significantly larger than the SM $h_1h_1$ rate at $\sqrt{S}=27~TeV$ and $100~TeV$. \label{fg:rates27} } \end{figure} \begin{figure} \centering {\includegraphics[width=0.48\textwidth]{tri-eps-converted-to.pdf}} \caption{Region of parameter space allowed by limits on oblique parameters, perturbative unitarity, and the minimization of the potential where the $h_1h_1 h_1$ tri-linear coupling is greater than $5$ times the SM value. \label{fg:tri} } \end{figure} \section{Conclusions} We have studied an extension of the SM with a complex scalar singlet. We considered the most general renormalizable scalar potential and imposed no additional symmetries. In this scenario, there are $3$ scalar bosons, one of which, $h_3$, has very small couplings to SM particles and will be primarily observed through di-Higgs decays, $h_2\rightarrow h_1 h_3$. 
Subject to the constraints of electroweak precision measurements, single Higgs production rates, and perturbative unitarity, there are regions of phase space where the rate for $h_1h_3$ production is significantly enhanced relative to the SM $h_1h_1$ rate. Therefore, the search for pair production of Higgs bosons with different masses is a distinctive signature of this class of models. \section*{Acknowledgements} S.D. is supported by the U.S. Department of Energy under grant No.~DE-AC02-98CH10886 and contract DE-AC02-76SF00515. M.S. is supported by the U.S. Department of Energy, Office of Science, Office of Workforce Development for Teachers and Scientists, Office of Science Graduate Student Research (SCGSR) program. The SCGSR program is administered by the Oak Ridge Institute for Science and Education (ORISE) for the DOE. ORISE is managed by ORAU under contract number DE-SC0014664. We thank I.M. Lewis for discussions. \bibliographystyle{h-physrev}
\section{Introduction} During a visit to a medical care provider, a physician typically records important information about patient presentation, diagnosis, and treatment via free-form text. These notes contain rich clinical data not found in structured electronic medical records. For example, many rare diseases are not well documented with standardized codes, but evidence of symptoms or diagnoses may be recorded in free-form text. Clinical notes can also prove useful for extracting relevant medical conditions for cohort building, such as for identifying patients for clinical trials or developing disease progression models using features from the text. Because these text sequences can be very long, digging through patient records to find relevant information is a tedious, manual process, and automation with standard ML approaches is often insufficient. This motivates the development of an accurate, automated, and interpretable method for extracting medical conditions from long medical documents.

Convolutional neural network (CNN) models have been the architecture of choice for long medical text \citep{mullenbach_explainable_2018,gehrmann_comparing_2018,li_icd_2019,reys_predicting_2020,hu_explainable_2021}, largely due to the computational complexity of the self-attention mechanism in Transformers like BERT \citep{devlin_bert_2019}. However, recent advancements in sparse-attention or long LMs \citep{zaheer_big_2021,choromanski_rethinking_2021,beltagy_longformer_2020,kitaev_reformer_2020} suggest it is now possible to represent long medical documents without convolutions, which fail to capture interactions between distant tokens in a text sequence, and without the truncation and segmentation-with-pooling methods that ML practitioners apply to standard Transformers in practice \citep{huang_clinicalbert_2020}.

While there are approaches for interpreting the predictions of traditional ML models and neural networks \citep{sundararajan_axiomatic_2017,lundberg_unified_2017}, understanding the blocks of text driving the predictions of long LMs is not straightforward. A common approach to interpreting the predictions of Transformers is to examine the attention weights of tokens. Although subword tokenization has been shown to be performant in downstream classification tasks, the attention weights of individual subword tokens are not always informative or interpretable, and attention weight interpretation has been criticized \citep{jain_attention_2019}. The limitations of word- and subword-level explanations are especially prevalent in a healthcare context, where word pieces are often divorced from clinical meaning or capture only part of a phrase representing a clinical concept. For example, consider the phrase ``patient developed atrial fibrillation,'' which consists of many tokens. Understanding the impact of blocks of text (represented as sequences of subword tokens) on model predictions only becomes more challenging for models with sparse attention, as not all subwords attend to each other in the self-attention layers.

In this work, we introduce a novel method, the Masked Sampling Procedure (MSP), to identify important blocks of text used by long LMs or any text classifier to predict document labels. Our method is unique and valuable in that we simultaneously mask multi-token blocks to answer the counterfactual: ``what if this block of text had been absent?'' Unlike previous work, runtime does not depend on document length, and we provide a rigorous method to compute p-values for each text block.
Our method extends to any number of multi-token text blocks to measure interactions, and we report the benefits of specific masking probabilities. We validated that MSP returns clinically informative explanations of medical condition predictions from a very long LM with a blinded experiment involving two physicians. In Section~\ref{sec:msp_validation} we share insights from our clinician collaborators regarding the explanations surfaced by MSP and describe the superior performance and runtime efficiency of MSP compared to the state of the art, showing that our method is up to $100\times$ faster and $\approx 1.7\times$ better at identifying important text blocks from a very long LM applied to long medical documents. Finally, in Section~\ref{sec:classifier_performance}, we describe the benefit of using sparse-attention LMs to predict medical conditions from long medical documents (up to 32,768 tokens), extending the length of the typical LM for clinical documents from 512 to 32,768 tokens, with an over 5\% absolute improvement in micro-average-precision over a popular and effective CNN architecture for predicting medical conditions from text on four training sets of different sizes.

\section{Related Work} Many explainability methods have been proposed, such as those that examine the individual attention weights of tokens in LMs \citep{skrlj_exploring_2021}, gradient-based methods that attempt to reveal the saliency of individual tokens \citep{yin_interpreting_2022}, and approaches that perturb the input text to measure importance \citep{kokalj_bert_2021}. The most similar approach to our procedure is likely the Sampling and Occlusion (SOC) algorithm \citep{jin_towards_2020}. \citet{jin_towards_2020} apply SOC to BERT and show that SOC outperforms a variety of competitive baselines, including GradSHAP \citep{lundberg_unified_2017}, a popular approach combining ideas from Integrated Gradients \citep{sundararajan_axiomatic_2017} with perturbation-based feature importance, on three benchmark datasets. SOC masks one word or text block at a time to compute the impact on label predictions and eliminates the dependence on surrounding context for a given block by sampling neighboring words from a trained LM. However, if the trained LM performs well, the sampled neighboring words will be similar to the original context. This sampling procedure is computationally expensive, which we discuss in Section~\ref{sec:runtime_comparison}.

Further related work includes traditional text representation approaches like TF-IDF~\citep{sparck_jones_statistical_1988} and word2vec~\citep{mikolov_efficient_2013}, predicting medical conditions using CNNs with attention~\citep{mullenbach_explainable_2018, hu_explainable_2021, lovelace_dynamically_2020}, LMs that improve on traditional text representations~\citep{vaswani_attention_2017, devlin_bert_2019}, sparse-attention LMs~\citep{kitaev_reformer_2020, beltagy_longformer_2020, choromanski_rethinking_2021, zaheer_big_2021}, and domain-specific pretraining of LMs for medical text~\citep{alsentzer_publicly_2019, lee_biobert_2020, liu_self-alignment_2021}. See Supplemental Section~\ref{sec:further_discussion_of_related_work} for more details. To our knowledge, the only research applying long LMs to the clinical domain is~\citet{li_clinical-longformer_2022}. The authors fine-tuned Longformer and Big Bird~\citep{zaheer_big_2021} for clinical question answering~\citep{pampari_emrqa_2018} and named-entity recognition.
We benchmark the first clinically pretrained long LM for multi-label classification of conditions from clinical notes, extending the typical sequence length of long LMs by $8\times$ (from 4,096 to 32,768) to address the problem of extracting information from long, individual patient medical histories.

\section{Cohort} We use two document types in our experiments: medical charts, which are long-form clinical notes concatenated from many visits, and discharge summaries. In both cases, we consider a single document to be the entire sequence of tokens. The Optum Chart dataset consists of 6,526,116 full-length medical charts collected from 2017-2018 for Medicare patients. We use 5,481,937 unlabeled charts for pretraining text representations. We use 640,000 labeled charts for training, 64,000 for validation, and 187,953 for testing. These charts contain an average of 8,402 subword tokens (std. dev. 9,852) per clinical document. Labels were derived from manual chart reviews. Descriptive statistics can be found in Supplemental Table~\ref{tab:descriptive_stats} and condition prevalence in Supplemental Table~\ref{tab:optumcharts85_performance_compare}. We broke the 640,000-chart train set into four datasets to measure the effect of train set size. The same validation and test sets were used in all experiments.

MIMIC-III \citep{johnson_mimic-iii_2016} contains de-identified clinical records for intensive care unit (ICU) patients treated at Beth Israel Deaconess Medical Center. Included is a set of discharge summary notes and the ICD-9 diagnoses associated with each ICU stay. We use the subset of discharge summaries from \citet{mullenbach_explainable_2018}, consisting of 11,371 notes from 2001-2012 labeled with the 50 most common ICD-9 codes. Descriptive statistics can be found in Supplemental Table~\ref{tab:descriptive_stats}. We use the same 8,067-sample train, 1,574-sample validation, and 1,730-sample test sets as in \citet{mullenbach_explainable_2018}.

\section{Methods} \subsection{Masked Sampling Procedure} To reveal which text blocks have the largest effect on the predictions of long LMs or any text classifier, we propose MSP (Algorithm~\ref{alg:masked_sampling}). To explain predictions from a text sequence, MSP randomly masks each block of $B$ subwords independently with probability $P$, feeds the new sequence to the classifier, then measures the difference in label probability between the masked and unmasked versions of the sequence. Over many iterations $N$, large differences in predicted probabilities originating from masking a given text block suggest the block contributed important evidence to the label prediction. MSP outputs the top $K$ most important blocks for each label, along with a measure of statistical significance computed by comparing to randomly sampled text blocks using a bootstrap procedure, under the null hypothesis that text blocks with high importance, as determined by MSP, are no more important to a label prediction than randomly sampled blocks (see Algorithm~\ref{alg:masking_sig}).
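Algorithm~\ref{alg:masking_sig} is not reproduced here; purely to illustrate the flavor of such a test, a bootstrap p-value for a single text block could be sketched in Python as follows (the function and variable names are ours and the details are simplified, not the exact procedure):
\begin{verbatim}
import numpy as np

def bootstrap_p_value(block_deltas, all_deltas, n_boot=1000, seed=0):
    # block_deltas: probability drops observed when this block was masked.
    # all_deltas: drops pooled over all blocks for the same label.
    # Estimates how often a random draw of the same size has a mean drop
    # at least as large as the block's observed mean drop.
    rng = np.random.default_rng(seed)
    observed = np.mean(block_deltas)
    draws = rng.choice(all_deltas,
                       size=(n_boot, len(block_deltas)), replace=True)
    return float(np.mean(draws.mean(axis=1) >= observed))
\end{verbatim}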
\SetKwComment{Comment}{/* }{ */} \begin{algorithm2e}[ht] \caption{Masked Sampling Procedure (MSP)}\label{alg:masked_sampling} \DontPrintSemicolon \KwData{$X_{i} \in \mathbb{R}^{S_{i} \times d_{c}}$} \KwResult{maskedSampleProbs $\in \mathbb{R}^{N \times L}$, maskedIndices $\in \{0,1\}^{N \times (S_{i}/B)}$} \SetKwFunction{FMsp}{MSP} \SetKwProg{Fn}{Function}{:}{} \SetKw{KwBy}{by} \Fn{\FMsp{$X_{i}$, $N$, $B$, $P$}}{ maskedSampleProbs $\gets$ [ ]\; maskedIndices[$1:N, 1:(S_{i}/B)$] $\gets 0$\; $\hat{y_{i}} \gets$ LanguageModel($X_{i}$)\; \For{$n=1$ \KwTo $N$}{ $X' \gets X_{i}$ \Comment*[r]{fresh copy, so masks do not accumulate} \For{$j=1$ \KwTo $S_{i}$ \KwBy B}{ $r \gets \mathcal{U}(0, 1)$\; \If{$r < P$}{ $X'[j:j+B] \gets$ maskToken\; maskedIndices[$n, j/B$] $\gets 1$\; } } $\hat{y_{n}} \gets$ LanguageModel($X'$)\; $\Delta \hat{y_{n}} \gets \hat{y_{i}} - \hat{y_{n}}$\; maskedSampleProbs.append($\Delta \hat{y_{n}}$)\; } \KwRet maskedSampleProbs, maskedIndices\; } \end{algorithm2e} Two clinicians validated the ability of MSP to explain predictions from a very long Big Bird model (Section~\ref{sec:very_long_big_bird}) on randomly sampled discharge summaries from MIMIC, compared to SOC~\citep{jin_towards_2020} and a random algorithm. The reviews were conducted as blind experiments. Both clinicians were independently presented with text blocks and ICD labels from the discharge summary test set without knowing the origin of each block. The supplied text blocks were among the top five most important for the corresponding label according to MSP or SOC, or were randomly selected. The clinicians then indicated whether each text block was informative for making the diagnosis provided by the ICD. We compared the number of informative text blocks from each method along with differences in runtime. For MSP, we set $P=0.1$ according to an experiment with a single clinical reviewer comparing values of $P$, shown in Supplemental Table~\ref{tab:masking_experiment}. For a fair comparison to SOC, we fixed $B=10$ and set the expected number of times a given phrase is masked to 100. We used the sampling radius of 10 tokens recommended by \citet{jin_towards_2020} and set the number of sampling rounds to 100. \subsection{Baseline Text Representations} We compared the performance of several text representations and classifiers for the task of predicting medical conditions from clinical text to the Big Bird LM for which we generated explanations with MSP. These methods operated at either the word or subword level following text preprocessing (see Supplemental Methods~\ref{sec:dataset_preprocessing}). More details on baseline text representations can be found in Supplemental Methods~\ref{sec:appendix_model_development_language}. \subsection{Very Long Big Bird} \label{sec:very_long_big_bird} Big Bird's sparse-attention mechanism approximates full self-attention with a combination of global tokens, sliding window attention, and random connections that approximate the fully connected graph of full self-attention. These mechanisms take the memory consumption from $O(L^2)$ to $O(kL)$, where $k$ is the size of the sliding attention window. To pretrain a Big Bird LM on clinical text, we first trained a Byte Pair Encoding subword tokenizer \citep{sennrich_neural_2016} to tokenize the text. After cleaning, we truncated all text to 32,768 subwords following tokenization, and pretrained with masked language modeling (MLM) as in~\citet{zaheer_big_2021}.
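As a rough sketch of this pretraining setup (ours, not the exact training code; all hyperparameters shown are illustrative assumptions), the Hugging Face transformers API exposes the relevant sparse-attention settings:
\begin{verbatim}
# Configure a randomly initialized Big Bird masked language model
# for very long clinical text.
from transformers import BigBirdConfig, BigBirdForMaskedLM

config = BigBirdConfig(
    vocab_size=32000,               # size of the trained BPE vocabulary
    max_position_embeddings=32768,  # 8x the usual "long" sequence length
    attention_type="block_sparse",  # global + sliding-window + random
    block_size=64,
    num_random_blocks=3,
)
model = BigBirdForMaskedLM(config)  # random initialization
print(model.num_parameters())
\end{verbatim}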
\subsection{Text Classifiers} We are interested in identifying conditions from medical documents relevant to diagnosing or treating patients and focus on two datasets with 85 and 50 medical condition labels, respectively. To predict these conditions, we used ElasticNet, a Feed Forward Neural Network (FFNN), BERT variants with text segmentation and pooling (on MIMIC only), CAML, and Big Bird, all trained as multi-label classifiers. Here we describe CAML and Big Bird, which were the most competitive. Details on all models can be found in Supplemental Methods~\ref{sec:appendix_model_development_classification}.

CAML uses a CNN layer to extract features from the word2vec embedding matrix and an attention mechanism to localize signal for a particular prediction task. We implemented CAML as described in \cite{mullenbach_explainable_2018} using a CNN layer with filter size between 32 and 512, kernel size between 3 and 10, and dropout on the embedding layer between 0 and 0.5. The output is a vector of scores, one for each label, to which we applied the sigmoid function to obtain probabilities, and we trained the model to minimize binary cross-entropy loss.

The Optum Chart sequences are $8\times$ larger than the typical ``long'' sequence \citep{tay_long_2020} at a maximum length of 32,768 tokens. We pretrained Big Bird from random initialization on medical documents, added a classification head with a single feed-forward layer of size 1536 ($2\times$ the hidden size) and an output layer with one neuron per label, and trained using binary cross-entropy loss.

\section{Results} \subsection{Clinical Validation of MSP} \label{sec:msp_validation} We examine the clinical utility of MSP in a blind experiment with two clinicians, first discussing examples of informative text blocks, then comparing the number of informative text blocks surfaced by MSP and SOC in the blind experiment. Finally, we discuss runtime. \subsubsection{Informative Text Blocks} \input{tables/masked_text_results.tex} Table~\ref{tab:masked_text_results} depicts example text blocks, and their importance computed via MSP, that were deemed informative during an initial clinical review. This review confirmed three general features that drive text block ``informativeness.'' The most obvious were exact matches with diagnosis text. For example, the text block ``pneumonia patient being discharged o n maximal copd regimen including'' was highly informative in implying a diagnosis of ``pneumonia, organism unspecified.'' Less obvious were synonyms or close synonyms for a diagnosis. In the text string ``lovenox bridge nstemi o n admission the patient had elevated,'' ``NSTEMI'' is an acronym for ``non-ST segment elevation myocardial infarction,'' which is synonymous with the diagnosis ``subendocardial infarction.'' Other common elements in highly informative blocks were drugs that are always, or almost always, used for a particular diagnosis. MSP identified the block ``consulted amiodarone was held rhythm slowly began to recover she,'' associated with the diagnosis ``atrial fibrillation.'' Amiodarone is an antidysrhythmic drug mostly used for atrial fibrillation. MSP also identified obscure but clinically relevant blocks, such as ``al likely improve as pna improves s p cabg complicated,'' including ``s'', ``p'', and ``cabg.'' Grouped together, these suggest the patient is ``status-post'' coronary artery bypass grafting, meaning they have had the procedure. In order to graft coronary arteries, the patient must be placed on an aortocoronary bypass machine to allow the procedure to be completed.
This block was associated with the diagnosis of ``aortocoronary bypass status.'' Another seemingly obscure but clinically informative block, ``albuterol and ipra prn his acidosis slowly improved as did'' appeared for the diagnosis of ``acute respiratory failure.'' Although none of the words comprising the diagnosis label appear in the block, clinician review confirmed that the block is indeed associated with acute respiratory failure. Albuterol, a drug associated with respiratory distress (here fully and correctly spelled out), is used to relieve bronchial obstruction, as seen in chronic obstructive pulmonary disease (COPD). Ipra, an abbreviation for ipratropium bromide, is used in COPD. COPD is a common cause of acute respiratory failure. Acidosis, identified through arterial blood testing, is a sign of hypoventilation, which causes elevation in blood carbon dioxide levels and resultant accumulation of H\textsubscript{2}CO\textsubscript{3} (an acid). This is seen in people with COPD exacerbation who experience respiratory failure. \subsubsection{Blind Experiment Analysis} \input{tables/msp_vs_soc_vs_rand.tex} \begin{figure}[h] \centering \includegraphics{figures/exp4_ps_at_k.pdf} \caption{Precision for the top $K$ text blocks surfaced by MSP, SOC, and the random algorithm (RND) according to each reviewer for each (document, label) pair, with 95\% confidence intervals computed using 1000 bootstrap iterations.} \label{fig:ps_at_k} \end{figure} Two clinicians received 400 (text block, diagnosis) pairs from each of MSP, SOC, and the random algorithm and independently annotated the text as either uninformative or informative for making the ICD diagnosis (see Supplemental Methods~\ref{sec:blind_experiment_sampling} for details on how text blocks were selected). Table~\ref{tab:msp_vs_soc_vs_rand} depicts the number of informative text blocks surfaced by each explainability algorithm. Figure~\ref{fig:ps_at_k} depicts the precision of each algorithm according to both reviewers. MSP surpasses SOC both in the total number of clinically informative text blocks surfaced and in precision, especially when only a small number of blocks is surfaced. Supplemental Figure~\ref{fig:mrrs_at_k} depicts the performance of these algorithms from an information retrieval perspective. \subsubsection{Runtime Comparison} \label{sec:runtime_comparison} \input{tables/algo_runtimes.tex} \input{tables/algo_runtimes_theoretical.tex} On the MIMIC discharge summaries of modest length (IQR 1,029--1,929 tokens), MSP was up to $100\times$ faster than SOC (Table~\ref{tab:algo_runtimes}). For $J$ sampling iterations per block, masking probability $P$, and document length $L$, the number of evaluations of the text by the classifier using MSP is $O(J/P)$ for computing the importance of individual blocks and $O(J/P^2)$ for pairs. Using SOC, the number of evaluations is $O(JL)$ and $O(JL^2)$, respectively. Thus, the number of model evaluations required by our approach, and hence its running time in this sense, does not grow with the document length. Since SOC has a quadratic dependency on $L$, it is very expensive for computing the importance of individual sentences in documents of even modest length and infeasible for computing the importance of sentence pairs (see example in Table~\ref{tab:algo_runtimes_theoretical}).
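To illustrate the scaling argument above, the short sketch below tabulates the number of classifier evaluations implied by the $O(J/P)$, $O(J/P^2)$, $O(JL)$, and $O(JL^2)$ expressions for a few document lengths; constant factors are ignored, and the values of $J$ and $P$ match those used in our experiments.
\begin{verbatim}
# Classifier forward passes (up to constant factors) for single blocks
# and for pairs, under MSP versus SOC.
J, P = 100, 0.1                       # sampling iterations, masking prob.
for L in (1_000, 10_000, 32_768):     # document lengths in tokens
    print(f"L={L:>6}:",
          f"MSP {J / P:.0f} / {J / P**2:.0f}",
          f"vs SOC {J * L:.0f} / {J * L**2:.0f}")
\end{verbatim}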
In the medical and other domains, we expect distant pieces of information to interact, and we use pairs analysis with MSP to demonstrate how Big Bird integrates distant contextual information in Supplemental Results~\ref{sec:integrating_distant_contextual_information}. \subsection{Medical Condition Prediction} \label{sec:classifier_performance} \input{tables/model_perf_compare.tex} We assessed model performance when predicting medical conditions in long medical charts from the Optum Chart dataset. Since the prevalence of each label is often very low (median: 0.6\%), we used precision and recall as our metrics of interest, specifically the area under the precision-recall curve (AUPR), also reported as average precision (AP) \citep{saito_precision-recall_2015}. In Table~\ref{tab:model_perf_compare} we show performance in terms of AP micro- and macro-averaged across labels. For most labels, Big Bird outperformed CAML (see Supplemental Figure~\ref{fig:best_model_comparison}a), and across training datasets of four sizes it performed over 5\% better than CAML in micro-averaged AP (see Supplemental Results~\ref{sec:effect_of_training_set_size} and Supplemental Tables~\ref{tab:12800_perf_transpose}, \ref{tab:64000_perf_transpose}, \ref{tab:128000_perf_transpose}, \ref{tab:640000_perf_transpose}). Supplemental Figure~\ref{fig:best_model_comparison}c shows the $\textnormal{log}_{2}$-scaled ratio of the Big Bird AUPR to the CAML AUPR as a function of label prevalence. Big Bird generally outperforms CAML on labels with prevalence $>$ 5\%, but many of the most significant improvements are found in rare labels (prevalence $\leq$ 5\%). AUPRs for each label are included in Supplemental Table~\ref{tab:optumcharts85_performance_compare}. Next, we assessed performance for predicting any of the 50 most common ICD-9 codes assigned to a MIMIC discharge summary. As baselines, we trained multiple TF-IDF-based models, CAML, and several BERT variants with different types of pooling over segments of text (see Supplemental Methods~\ref{sec:appendix_model_development_classification}). We explored Big Bird architectures with varying sequence lengths (4,096 or 32,768 tokens) and pretraining datasets (general or clinical text). As shown in Table~\ref{tab:model_perf_compare}, on MIMIC, Big Bird with sequence length 4,096 outperformed Big Bird with sequence length 32,768. This is likely because the average document length in MIMIC is shorter than 4,096 tokens. We found that Big Bird (4,096 max sequence length) pretrained on the Optum Chart dataset outperformed Big Bird (4,096 max sequence length) pretrained on general text (Supplemental Table~\ref{tab:8067_perf_transpose}). This supports previous work demonstrating that in-domain pretraining from scratch is superior to cross-domain fine-tuning for tasks in the biomedical domain~\citep{lee_biobert_2020}. Of the baselines we compared with Big Bird, CAML performed best on MIMIC. Supplemental Figures~\ref{fig:best_model_comparison}b and d show the performance of Big Bird and CAML for each label. BERT variants using pooled segment representations performed worse than CAML and Big Bird (Supplemental Table~\ref{tab:bert_mimic_results}). This suggests that learning a complete representation of an input sequence outperforms aggregation over segments. \section{Discussion} The purpose of this research is to extract meaningful insights from long medical documents in an auditable and transparent way.
We discussed and demonstrated the performance benefits of sparse-attention LMs for extracting medical conditions from very long text and proposed MSP to address the major challenge of interpreting long LM predictions. MSP can explain medical condition predictions from discharge summaries using the very long Big Bird LM $\approx 1.7\times$ better than a state-of-the-art explainability algorithm and up to $100\times$ faster. It is also tractable for generating important text block pairs. We view improving the underlying representations of medical text and understanding the predictive elements as key steps toward ensuring that ML can be safely deployed in the medical domain, but acknowledge there is more work to do to scale ML across the healthcare system in a just and transparent way. \clearpage
\section{Introduction} An important task in the physical and information sciences is the acquisition of optimal knowledge about quantities of interest through measurements on the information carriers, or probes. Quantum parameter estimation and quantum metrology study, in particular, the ultimate attainable precision in estimating the parameters of interest, under the constraints set by quantum theory. Estimation of a relative phase, such as one in an optical interferometer for gravitational wave sensing \cite{Abbott2016,Abbott2017,Tse2019} or in atomic states for frequency measurements and magnetic field sensing \cite{Bollinger1996,Zhang2016,Maze2008,Taylor2008}, of temperature in quantum systems \cite{Kucsko2013,Mehboudi2019,Moreva2020}, as well as of the spatial extent of composite light sources \cite{Ram2006,Donnert2006,Raj2008,Weissleder2008} in imaging tasks, are examples in which the most precise estimators are highly desired. While it is quite typical to have just a single parameter of interest, more generally, one might encounter quantum estimation problems that concern genuinely multiple parameters, for which \emph{simultaneous} or \emph{joint}-parameter estimation is called for \cite{Szczykulska2016,Liu2019}. They include, for instance, estimation of phases and decoherence strength \cite{Vidrighin2014,Crowley2014}, numerous parameters for unitary operations \cite{Humphreys2013,Baumgratz2016,Goldberg2020}, and all or some parameters characterizing the spatial extent of light sources, such as the centroid, brightness, separation and orientation \cite{Tsang2016,Nair2016,Lupo2016}. Understandably, finding and implementing the optimal measurement for joint estimation of multiple parameters is generally more challenging than in the case of single-parameter estimation. In particular, joint estimation need not be ``\emph{compatible}'', i.e., unless certain conditions are met, using no matter what measurement and estimation strategy, one can never \emph{simultaneously} estimate \emph{all} the parameters as precisely as one would optimally achieve when estimating just one parameter at a time, assuming the other parameters are known and fixed. This is not surprising, as the measurement bases needed to attain the optimal precision for different parameters need not be compatible, and may even correspond to complementary observables \cite{Bohr1928,Schwinger1960}. Many important aspects and interesting results on multi-parameter estimation have been established. They include: the general conditions for ``compatible'' joint estimation \cite{Ragy2016}, the mathematical conditions for the measurement operators being optimal in estimating all the parameters \cite{Pezze2017,Yang2019}, and bounds on estimation precision and their evaluations \cite{Holevo1982,Nagaoka1989,Hayashi2005,Guta2006,Hayashi2008,Kahn2009,Yamagata2013,Suzuki2016, Albarelli2019,Tsang2020, Sidhu2021,Albarelli2021}. However, given a specific multi-parameter quantum estimation problem, the measurement that attains the optimal joint estimation precision is still not explicitly deducible and recognizable, let alone its implementation scheme. Finding them thus remains a problem of both theoretical and practical interest. In this work, we perform a case study on joint estimation of the length and the direction of the Bloch vector for identically prepared qubit states. In particular, the Bloch vector is assumed to be confined to a known plane, such that we are here essentially studying a two-parameter estimation problem.
This model, despite its simplicity, is significant for various physical problems, such as in quantum sensing and quantum imaging. For example, joint estimation of a phase shift and the amplitude of phase diffusion \cite{Vidrighin2014}, of state parameters in two-level quantum mechanical systems such as the purity and the phase for the polarization degrees of freedom of light \cite{James2001}, and of the separation and centroid of two incoherent light sources, which serves as a model to discuss resolution limits in imaging \cite{Chrostowski2017}, can be fittingly mapped into our model under appropriate circumstances. Hereinafter, we will not make explicit references to the physical meaning of the parameters, keeping the estimation problem at the abstract level. We also take notice of a related and detailed work by Bagan et al. studying optimal full estimation of qubit states \cite{Bagan2006}, but with a somewhat different presentation angle and analysis than what we will have here. The organization of this work is as follows. In Sec.~\ref{Sec2}, we review the basic ingredients of quantum parameter estimation that we will use in this work. Readers who are familiar with the relevant concepts and results in quantum parameter estimation can proceed directly to Secs.~\ref{Sec3} and \ref{Sec4}, where we formally introduce and study our two-parameter estimation problem. There, upon treating the qubits as a collection of spin-1/2 elementary systems, we show how compatible joint estimation can be \emph{asymptotically} achieved by the global or collective measurement of the total angular momentum squared operator, followed by a commuting, local projective measurement. Furthermore, we look deeper into the special case of nearly-pure qubit estimation, and highlight the link with the problem of sub-Rayleigh resolution imaging. We then discuss how such an estimation scheme can be carried out in principle, either exactly or approximately, and by using either a passive unitary setup or more general quantum circuits. We also present a conjecture on the projection onto the two largest total angular momentum values, using a Bell multiport setup. Finally, we discuss briefly a few points in Sec.~\ref{Sec5} and conclude in Sec.~\ref{Sec6}. \section{Preliminaries}\label{Sec2} For the sake of completeness, in this section we review the basic ingredients of quantum parameter estimation theory that will be relevant for this work. We start with the single-parameter estimation scenario, followed by the multi-parameter estimation scenario. \subsection{Single-parameter estimation} \noindent In this scenario, our task is to estimate the parameter, labeled $\theta$, that is encoded in the qudit quantum state $\rho(\theta)$. Generally, we obtain information about, and hence an estimator for, $\theta$ from the measurement statistics of $N_\mathrm{total}$ identical copies of $\rho(\theta)$. In particular, we consider $N_\mathrm{total}=\nu N$, whereby we group $N$ copies together as an $N$-qudit ensemble described by the state $\rho_N(\theta)=\rho(\theta)^{\otimes N}$, and measure them with a probability-operator measurement (POM), specified by the set of $N$-qudit operators $\{M_\ell\}$, with $M_\ell\geq0\, \forall \ell,\; \sum_\ell M_\ell=\mathds{1}_{d^N}$. From the measurement, a single outcome $\ell_1$ will be recorded, with the probability of getting outcome $\ell_1=\ell$ given by Born's rule, i.e., $p_\ell(\theta)=\Tr\{\rho_N(\theta) M_\ell\}$.
Then, upon repeating the whole process over $\nu$ rounds, we construct an estimator\footnote{The estimator $\hat{\theta}_N(D)$ depends of course on $\nu$ as well. Here we write out only the dependence on $N$ but not $\nu$ explicitly to avoid overloading the notation.} $\hat{\theta}_N(D)$ from all the collected outcomes or data $D=\{\ell_1,\ell_2,\cdots,\ell_\nu\}$, using for example the maximum-likelihood principle \cite{Hradil2004,Paris2004}. In this work, we will consider three specifications. Firstly, our choice of estimation precision quantifier is the \emph{mean squared error} (MSE), $\Delta^2\hat{\theta}_N$, which is the expected squared difference between the true value of the parameter and its estimate. That is, \begin{eqnarray}\label{eq:MSEn} \Delta^2\hat{\theta}_N:=\mathbb{E}[(\theta-\hat{\theta}_N(D))^2]=\sum_D L(D|\theta)(\theta-\hat{\theta}_N(D))^2, \end{eqnarray} where $L(D|\theta)$ is the likelihood of observing the data $D$, given the true parameter value $\theta$. Secondly, we consider locally unbiased estimators, for which the MSE will be lower bounded by the \emph{Cram\'{e}r-Rao bound} (CRB)~\cite{Kay1993,Braunstein1994}: \begin{eqnarray}\label{eq:CRB} \nu \Delta^2\hat{\theta}_N \;\geq\;\frac{1}{F_{N;\theta}}\;\geq\;\frac{1}{\mathcal{F}_{N;\theta}}, \end{eqnarray} where \begin{eqnarray}\label{eq:FI}\fl F_{N;\theta}=F_{N;\theta}[\rho_N(\theta),\{M_\ell\}]:=\sum_\ell \frac{\dot{p}_\ell(\theta)^2}{p_\ell(\theta)}, \quad \mathrm{and} \quad \mathcal{F}_{N;\theta}=\mathcal{F}_{N;\theta}[\rho_N(\theta)]:=\max_{\{M_\ell\}}F_{N;\theta}, \end{eqnarray} are respectively the \emph{Fisher information} (FI) and the \emph{quantum FI} (QFI), the latter being the largest FI upon optimizing the choice of measurement; here $\dot{p}_\ell(\theta)=\partial_\theta p_\ell(\theta)$. Lastly, we here focus on \emph{asymptotic quantum inference} \cite{Hayashi2005}, where we study the ultimate precision obtainable in the limit of an unlimited number of repetitions, $\nu\rightarrow\infty$. Realistically, we do not have unlimited repetitions of course, but the CRB gets sufficiently tight for large but finite $\nu$. The inequalities in the CRB, Eq.~(\ref{eq:CRB}), are then saturated asymptotically \cite{Hayashi2005,Kay1993}, and we may turn our attention to the QFI as the figure of merit for the optimal attainable estimation precision. Given $\rho_N(\theta)$, the QFI and a choice of measurement that achieves it are well known \cite{Helstrom1976, Paris2009}: \begin{eqnarray}\label{eq:QFI} \mathcal{F}_{N;\theta}[\rho_N(\theta)]=\tr\{\rho_N(\theta)L_\theta^2\}, \end{eqnarray} where the Hermitian symmetric-logarithmic derivative (SLD) $L_\theta$ is defined implicitly by \begin{eqnarray}\label{eq:SLD} \partial_\theta\rho_N(\theta):=\frac{1}{2}\big\{L_\theta,\rho_N(\theta)\big\}, \end{eqnarray} where $\big\{a,b\big\}=ab+ba$ is the anti-commutator. Moreover, the QFI can be attained by choosing the projective measurement onto the eigenstates of the SLD, though in general there could be other possible choices as well. As a remark, with $\rho_N(\theta)=\rho(\theta)^{\otimes N}$ having a product state structure, it follows from the additivity of FI \cite{Lehmann1998,Toth2014} that $F_{N;\theta}=NF_{1;\theta}$, $\mathcal{F}_{N;\theta}=N\mathcal{F}_{1;\theta}$, and the SLD will have a single-particle operator or an independent-sum structure, i.e., $L_\theta=\sum_{i=1}^N L_\theta^{(i)}$. Hence, \emph{local measurements} are sufficient to attain the QFI.
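As a numerical illustration of Eqs.~(\ref{eq:QFI}) and (\ref{eq:SLD}), the sketch below computes the SLD by solving the Lyapunov equation $L_\theta\rho+\rho L_\theta=2\partial_\theta\rho$ and evaluates the QFI; the finite-difference step and the example family of states are our own illustrative choices.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def qfi(rho_of_theta, theta, h=1e-6):
    """QFI of a parametrized (full-rank) density matrix via the SLD."""
    rho = rho_of_theta(theta)
    drho = (rho_of_theta(theta + h) - rho_of_theta(theta - h)) / (2 * h)
    L = solve_continuous_lyapunov(rho, 2 * drho)  # L rho + rho L = 2 drho
    return np.trace(rho @ L @ L).real

# Example: qubit with Bloch vector of length s(theta) = tanh(theta)
# along z, for which the QFI is (ds/dtheta)^2/(1-s^2) = 1 - tanh(theta)^2.
sz = np.diag([1.0, -1.0])
rho_z = lambda t: 0.5 * (np.eye(2) + np.tanh(t) * sz)
print(qfi(rho_z, 0.3), 1 - np.tanh(0.3) ** 2)
\end{verbatim}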
That local measurements suffice here is expected and can be understood intuitively: with uncorrelated information carriers, we will never obtain more than the sum of optimal information from each individual. In this case, the division into $\nu$ repetitions of $N$-qudit ensembles can be seen as superfluous: whatever $\nu$ and $N$ separately are for fixed $N_\mathrm{total}=\nu N\rightarrow\infty$, they all correspond to the same situation of having $N_\mathrm{total}$ independent local measurements on $N_\mathrm{total}$ independent qudits, i.e., $N_\mathrm{total} \Delta^2\hat{\theta}_N\geq \mathcal{F}_{1;\theta}^{-1}$. More generally though, it does matter how we divide the $N_\mathrm{total}$ copies into $\nu$ repetitions of $N$-qudit ensembles, especially when we consider POMs beyond local measurements on each qudit. Such a consideration is necessary, for example, when the local measurements are noisy due to some imperfect implementation, e.g., crosstalk mixing up the measurement outcomes locally, resulting in the effective POM elements no longer being rank-1 projectors. On one hand, insistence on using local measurements would thus have the achievable precision of estimating $\theta$ in $\rho_N(\theta)$ limited from below by $N_\mathrm{total}\Delta^2\hat{\theta}_N\geq \lambda(\mathcal{F}_{1;\theta})^{-1}$ for some constant $\lambda>1$ that is characteristic of the noise. On the other hand, by allowing \emph{collective} or \emph{joint measurements}, interestingly, one can show that the measurement noise can be effectively negated, and now $N_\mathrm{total}\Delta^2\hat{\theta}_N\geq c_N(\mathcal{F}_{1;\theta})^{-1}$, with an $N$-\emph{dependent} coefficient $c_N$ that $\rightarrow1$ in the large $N$ limit \cite{Len2021}. In this situation, it pays off to have larger $N$, so long as $\nu$ is still sufficiently large to keep the CRB tight. In this work we shall not consider complications due to imperfect measurements, though as we will see next, we nevertheless need to consider collective measurements for joint estimation of multiple parameters, and so we keep the distinction between $\nu$ repetitions and the grouping into $N$-qudit ensembles. \subsection{Joint-parameter estimation} We move on to review the estimation of multiple parameters $\boldsymbol{\theta}=\{\theta_1,\theta_2,\cdots,\theta_K\}$ encoded in the state $\rho(\boldsymbol{\theta})$. As in the previous section, we consider $N$ identical copies of the qudit grouped together and described by the state $\rho_N(\boldsymbol{\theta})=\rho(\boldsymbol{\theta})^{\otimes N}$, and construct locally unbiased estimators $\hat{\thetavec}_N=\{\hat{\theta}_{N;1},\hat{\theta}_{N;2},\cdots,\hat{\theta}_{N;K}\}$ with the data $D$ collected from $\nu$ repetitions of a measurement of $\rho_N(\boldsymbol{\theta})$ with some $N$-qudit POM $\{M_\ell\}$.
Then, with the generalization of the MSE to the \emph{covariance matrix} $\mathcal{C}_N$, whose $(i,j)$-th matrix element is given by \begin{eqnarray}\label{eq:cmat} \fl \mathcal{C}_{N;i,j}= \mathbb{E}[(\theta_i-\hat{\theta}_{N;i}(D))(\theta_j-\hat{\theta}_{N;j}(D))]=\sum_D L(D|\boldsymbol{\theta}) (\theta_i-\hat{\theta}_{N;i}(D))(\theta_j-\hat{\theta}_{N;j}(D)), \end{eqnarray} and of the FI to the \emph{FI matrix} $F_{N;\boldsymbol{\theta}}$, whose $(i,j)$-th matrix element reads \begin{eqnarray}\label{eq:FIij} F_{N;i,j}=\sum_\ell \frac{1}{p_\ell(\boldsymbol{\theta})}\frac{\partial p_\ell(\boldsymbol{\theta})}{\partial \theta_i}\frac{\partial p_\ell(\boldsymbol{\theta})}{\partial \theta_j}, \end{eqnarray} where $p_\ell(\boldsymbol{\theta})=\tr\{\rho_N(\boldsymbol{\theta})M_\ell\}$, the multi-parameter CRB, a matrix inequality, reads \begin{eqnarray}\label{eq:mCRB} \nu\mathcal{C}_N \geq F_{N;\boldsymbol{\theta}}^{-1}. \end{eqnarray} Equivalently, Eq.~(\ref{eq:mCRB}) can be written as a numerical bound $\nu\tr\{\mathcal{C}_N W\}\geq\tr\{F_{N;\boldsymbol{\theta}}^{-1}W\}$ for any non-negative $K\times K$ matrix $W$, known as the cost matrix or weight matrix. The covariance matrix can be further lower limited first by the Holevo bound and then by the multi-parameter QFI CRB \cite{Holevo1982, Nagaoka1989}: \begin{eqnarray}\label{eq:HolevoCRB}\fl \nu\tr\{\mathcal{C}_N W\}\geq\tr\{F_{N;\boldsymbol{\theta}}^{-1}W\}\geq \min_{\{X_i\}}\{\tr\{W\Re Z\} + \|W\Im Z\|_1\}\geq \tr\{\mathcal{F}_{N;\thetavec}^{-1}W\}. \end{eqnarray} In the Holevo bound, $\|\cdot\|_1$ is the trace norm, and $Z$ is the matrix with elements $Z_{i,j}=\tr\{X_i X_j\rho_N(\boldsymbol{\theta})\}$, where $\{X_i\}_{i=1}^K$ is a set of Hermitian matrices satisfying $\tr\{X_i\frac{\partial\rho_N(\boldsymbol{\theta})}{\partial\theta_j}\}=\delta_{i,j}$ and $\tr\{\rho_N(\boldsymbol{\theta})X_i\}=0\,\forall i$. Meanwhile, in the multi-parameter QFI CRB, we have the \emph{QFI matrix} $\mathcal{F}_{N;\thetavec}$, with matrix elements $\mathcal{F}_{N;i,j}:=\frac{1}{2}\tr\{\rho_N(\boldsymbol{\theta})\{L_{\theta_i},L_{\theta_j}\}\}$, where the SLDs are defined analogously to Eq.~(\ref{eq:SLD}), i.e., $\partial_{\theta_i}\rho_N(\boldsymbol{\theta}):=\frac{1}{2}\big\{L_{\theta_i},\rho_N(\boldsymbol{\theta})\big\}$. Note that the diagonal elements of the QFI matrix are exactly the QFIs for the individual parameters as in single-parameter estimation, i.e., $\mathcal{F}_{N;i,i}=\mathcal{F}_{N;\theta_{i}}$, while the diagonal elements of the covariance matrix are, of course, the MSEs for the individual parameters, i.e., $\nu\mathcal{C}_{N;i,i}=\nu\Delta^2\hat{\theta}_{i;N}$. By the same reasoning as in the single-parameter case, the first inequality in Eq.~(\ref{eq:HolevoCRB}) can always be saturated in the limit of many repetitions, $\nu\rightarrow\infty$ \cite{Kay1993}. Remarkably, the Holevo bound can always be attained as well in that limit \cite{Hayashi2008,Guta2006,Kahn2009,Yamagata2013}. The last, weaker multi-parameter QFI CRB, however, is tight and equal to the Holevo bound iff the condition $\tr\{\rho_N(\boldsymbol{\theta})[L_{\theta_i},L_{\theta_j}]\}=0\,\forall i\neq j$ is satisfied, where $[a,b]=ab-ba$ is the commutator \cite{Matsumoto2002,Vaneph2013,Crowley2014,Ragy2016}. Finally, should the more stringent condition $\tr\{\rho_N(\boldsymbol{\theta})L_{\theta_i}L_{\theta_j}\}=0\,\forall i\neq j$ be met, we have $\mathcal{F}_{N;i,j}=0 \,\forall i\neq j$, and the QFI matrix is diagonal.
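Similarly, the FI matrix of Eq.~(\ref{eq:FIij}) for a given POM can be evaluated directly from the outcome probabilities, as in the sketch below; the two-outcome example measurement is an arbitrary illustrative choice.
\begin{verbatim}
import numpy as np

def fisher_matrix(probs_of_theta, theta, h=1e-6):
    """Classical FI matrix from outcome probabilities p_l(theta),
    with derivatives taken by central finite differences."""
    theta = np.asarray(theta, dtype=float)
    K, p = len(theta), probs_of_theta(theta)
    dp = np.zeros((K, len(p)))
    for i in range(K):
        e = np.zeros(K); e[i] = h
        dp[i] = (probs_of_theta(theta + e) - probs_of_theta(theta - e)) / (2*h)
    return np.array([[np.sum(dp[i] * dp[j] / p) for j in range(K)]
                     for i in range(K)])

# Example: measuring sigma_z on a qubit whose Bloch vector has length
# t[1] and is tilted by angle t[0] from z: p = (1 +/- t[1] cos t[0])/2.
probs = lambda t: np.array([(1 + t[1] * np.cos(t[0])) / 2,
                            (1 - t[1] * np.cos(t[0])) / 2])
print(fisher_matrix(probs, [0.4, 0.8]))
\end{verbatim}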
With a diagonal QFI matrix, the inverse can be taken element-wise, and we have $\nu\mathcal{C}_{N;i,i}=\nu\Delta^2\hat{\theta}_{i;N}=1/\mathcal{F}_{N;i,i}=1/\mathcal{F}_{N;\theta_{i}}$ as $\nu\rightarrow\infty$, i.e., for all the parameters, there exists a measurement scheme which allows us to estimate them simultaneously, each with an optimal asymptotic precision that is equal to that of the corresponding single-parameter estimation scenario. We shall refer to this as ``compatible'' joint-parameter estimation. \section{Two-parameter estimation for qubit states}\label{Sec3} In this section, we apply the general formalism described above to the estimation of the length and direction of the Bloch vector for a qubit state. Then, we show how one can asymptotically achieve compatible joint estimation for the two parameters, by measuring an observable that can be thought of as the total angular momentum squared operator, followed by a commuting, local projective measurement. Furthermore, we look deeper into the case of nearly-pure qubit estimation, and reveal a connection with the problem of superresolution imaging in the sub-Rayleigh limit. \subsection{The qubit model} \begin{figure*}[t!] \centering \includegraphics[width=0.4\textwidth]{MPEfQS-fig1} \caption{\textbf{Bloch-sphere representation of the qubit model}. The qubit state is represented by the Bloch vector $\boldsymbol{s}$, with its length given by $s(\varepsilon)$ and direction $\hat{\mathrm{e}}_{\boldsymbol{s}}(\varphi)=\sin\varphi~\hat{\mathrm{e}}_{x}+\cos\varphi~\hat{\mathrm{e}}_{z}$. The parameters of interest are $\varphi$ and $\varepsilon$.} \label{fig:MPEfQS-fig1} \end{figure*} Consider a qubit described by the state $\rho(\boldsymbol{\theta})$, which, in the familiar Bloch-sphere representation as visualized in Fig.~\ref{fig:MPEfQS-fig1}, has a Bloch vector $\boldsymbol{s}$ that is confined to the $x$-$z$ plane, and deviates from the $z$-axis by some angle $\varphi$. We allow the length of the Bloch vector to be possibly further parametrized by the variable $\varepsilon$, such that \begin{eqnarray}\label{eq:BlochvecS} \fl \rho(\boldsymbol{\theta})=\rho(\varepsilon,\varphi)=\frac{1}{2}\big(\mathds{1}+\boldsymbol{s}\cdot\boldsymbol{\sigma}\big), \quad \boldsymbol{s}=s(\varepsilon)\hat{\mathrm{e}}_{\boldsymbol{s}}(\varphi), \quad \hat{\mathrm{e}}_{\boldsymbol{s}}(\varphi)=\sin\varphi~\hat{\mathrm{e}}_{x}+\cos\varphi~\hat{\mathrm{e}}_{z}, \end{eqnarray} where $\boldsymbol{\sigma}$ is the usual Pauli operator vector. The two parameters of interest are $\varphi$ and $\varepsilon$. It is helpful to think of the qubit as a spin-1/2 system, so that equivalently $\rho(\boldsymbol{\theta})$ can be expressed as \begin{eqnarray} \label{eq:rhoSingle} \rho(\varepsilon,\varphi)&=\mathrm{e}^{-\mathrm{i} \sigma_y\varphi/2}\frac{\mathrm{e}^{\beta \sigma_z}}{2\cosh\beta}\mathrm{e}^{\mathrm{i} \sigma_y\varphi/2}, \end{eqnarray} where $\tanh\beta=s(\varepsilon)$ and $\sigma_\ell$, $\ell\in\{x,y,z\}$, is the Pauli-$\ell$ operator.
With Eq.~(\ref{eq:rhoSingle}), $N$ identical copies of the qubit in $\rho(\boldsymbol{\theta})$ can then be treated as a collection of spin-1/2s, with the $N$-qubit state written concisely as \begin{eqnarray}\label{eq:rhoN} \rho_N(\boldsymbol{\theta})=\rho_N(\varepsilon,\varphi)=\rho(\varepsilon,\varphi)^{\otimes N}=\mathrm{e}^{-\mathrm{i} J_y\varphi}\frac{\mathrm{e}^{2\beta J_z}}{(2\cosh\beta)^N}\mathrm{e}^{\mathrm{i} J_y\varphi}, \end{eqnarray} with the generalization to the $N$-spin angular momentum, i.e., $\boldsymbol{J}=\frac{1}{2}\sum_{i=1}^N \boldsymbol{\sigma}^{(i)}$, where $\boldsymbol{\sigma}^{(i)}$ is the Pauli operator vector for the $i$-th qubit. \subsection{Single-parameter estimation} First, let us work out the estimation precision limit, should we estimate just one of the parameters in the state $\rho_N(\varepsilon,\varphi)$ in Eq.~(\ref{eq:rhoN}), with the other parameter actually known. \subsubsection{Estimation of $\varepsilon$} With $\varphi$ being known and fixed, it is straightforward to find that \begin{eqnarray}\label{eq:SLDeps} L_\varepsilon&=\frac{1}{1-s(\varepsilon)^2}\big[2\mathrm{e}^{-\mathrm{i} J_y\varphi}J_z\mathrm{e}^{\mathrm{i} J_y\varphi}-Ns(\varepsilon)\mathds{1}\big] \frac{\partial s(\varepsilon)}{\partial\varepsilon}, \end{eqnarray} and \begin{eqnarray}\label{eq:QFIeps} \mathcal{F}_{N;\varepsilon}&= N\frac{1}{1-s(\varepsilon)^2}\Big(\frac{\partial s(\varepsilon)}{\partial\varepsilon}\Big)^2. \end{eqnarray} Then, as $\mathrm{e}^{-\mathrm{i} \sigma_y\varphi/2}\sigma_z\mathrm{e}^{\mathrm{i} \sigma_y\varphi/2}=\hat{\mathrm{e}}_{\boldsymbol{s}}\cdot\boldsymbol{\sigma}$, from Eq.~(\ref{eq:SLDeps}), the optimal precision $\mathcal{F}_{N;\varepsilon}$ can be attained by performing the projective measurement specified by the directions $\pm\hat{\mathrm{e}}_{\boldsymbol{s}}$, i.e., $\{\frac{\mathds{1}+\hat{\mathrm{e}}_{\boldsymbol{s}}\cdot\boldsymbol{\sigma}}{2},\frac{\mathds{1}-\hat{\mathrm{e}}_{\boldsymbol{s}}\cdot\boldsymbol{\sigma}}{2}\}$, on each of the qubits. A corresponding estimator $\hat{\varepsilon}_N(D)$ can be simply obtained as follows: Denote by $\nu_{\pm}^{(i)}$ the relative frequencies of obtaining the outcome of $\frac{\mathds{1}\pm\hat{\mathrm{e}}_{\boldsymbol{s}}\cdot\boldsymbol{\sigma}^{(i)}}{2}$ for the $i$-th qubit over the $\nu$ repetitions, such that $\nu_+^{(i)}+\nu_-^{(i)}=1$ for all $i$. Then, with $\nu_{\pm}:=\sum_{i=1}^N\frac{\nu_{\pm}^{(i)}}{N}$, $\hat{\varepsilon}_N(D):=s^{-1}(\nu_+-\nu_-)$. Incidentally, this `linear inversion' estimator is also the maximum-likelihood estimator, which in the $\nu\rightarrow\infty$ limit is known to be unbiased and to saturate the CRB in Eq.~(\ref{eq:CRB}) \cite{Hayashi2005,Kay1993}. \subsubsection{Estimation of $\varphi$}\label{sec:varphi} With $\varepsilon$ being known and fixed instead, we find that \begin{eqnarray} L_\varphi&=2s(\varepsilon)\mathrm{e}^{-\mathrm{i} J_y\varphi}J_x\mathrm{e}^{\mathrm{i} J_y\varphi}, \label{eq:SLDphi} \\ \mathcal{F}_{N;\varphi}&=Ns(\varepsilon)^2, \label{eq:QFIphi} \end{eqnarray} where the optimal precision $\mathcal{F}_{N;\varphi}$ can be obtained with the projective measurement onto the directions $\pm\hat{\mathrm{e}}_{\boldsymbol{s}'}$, where $\hat{\mathrm{e}}_{\boldsymbol{s}'}=\cos\varphi\,\hat{\mathrm{e}}_{x}-\sin\varphi\,\hat{\mathrm{e}}_{z}$, for each qubit.
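The expressions in Eqs.~(\ref{eq:SLDeps})--(\ref{eq:QFIphi}) for a single qubit ($N=1$) are easily verified numerically, as in the sketch below, where the SLDs are obtained from the Lyapunov equation of Eq.~(\ref{eq:SLD}); the parameterization $s(\varepsilon)=1-\varepsilon^2/8$ is used purely for illustration.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
s = lambda eps: 1 - eps**2 / 8                   # illustrative choice

def rho(eps, phi):                               # Bloch vector in x-z plane
    return 0.5 * (np.eye(2) + s(eps) * (np.sin(phi)*sx + np.cos(phi)*sz))

def sld(r, drho):                                # L r + r L = 2 drho
    return solve_continuous_lyapunov(r, 2 * drho)

eps, phi, h = 0.6, 0.4, 1e-6
r = rho(eps, phi)
L_eps = sld(r, (rho(eps + h, phi) - rho(eps - h, phi)) / (2 * h))
L_phi = sld(r, (rho(eps, phi + h) - rho(eps, phi - h)) / (2 * h))

ds = -eps / 4                                    # ds/d(eps) for this s(eps)
print(np.trace(r @ L_eps @ L_eps), ds**2 / (1 - s(eps)**2))  # QFI_eps
print(np.trace(r @ L_phi @ L_phi), s(eps)**2)                # QFI_phi
\end{verbatim}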
As with the estimation of $\varepsilon$, an asymptotically unbiased estimator $\hat{\varphi}_N(D)$ saturating the CRB can be obtained as $\hat{\varphi}_N(D):=\varphi+\sin^{-1} (\frac{\nu_+-\nu_-}{s})$, where now $\nu_{\pm}:=\sum_{i=1}^N\frac{\nu_{\pm}^{(i)}}{N}$ is the averaged relative frequency for the projection onto $\frac{\mathds{1}\pm\hat{\mathrm{e}}_{\boldsymbol{s}'}\cdot\boldsymbol{\sigma}^{(i)}}{2}$. As a remark, $\mathcal{F}_{N;\varphi}$ can only be attained by the said projective measurement for any $s(\varepsilon)$ that is strictly less than unity; in the exceptional case of $s(\varepsilon)=1$, measurement along the direction $\cos\alpha\,\hat{\mathrm{e}}_{x}-\sin\alpha\,\hat{\mathrm{e}}_{z}$ with arbitrary $\alpha$ would also attain $\mathcal{F}_{N;\varphi}$. \subsection{Joint-parameter estimation} For the state $\rho_N(\varepsilon,\varphi)$ in Eq.~(\ref{eq:rhoN}), from Eq.~(\ref{eq:SLDeps}) and Eq.~(\ref{eq:SLDphi}), one can readily verify that $\tr\{\rho_N(\boldsymbol{\theta})L_\varepsilon L_\varphi\}=\tr\{\rho_N(\boldsymbol{\theta})L_\varphi L_\varepsilon\}=0$. Hence, the multi-parameter QFI CRB of Eq.~(\ref{eq:HolevoCRB}) can be saturated, and moreover, the QFI matrix is diagonal, such that it is in principle possible to realize ``compatible'' joint estimation for both parameters. That is, the optimal precisions in simultaneously estimating the two parameters in $\rho_N(\varepsilon,\varphi)$ are given by their respective QFIs as in the single-parameter scenarios; see Eq.~(\ref{eq:QFIeps}) and Eq.~(\ref{eq:QFIphi}). Notice, however, that $L_\varepsilon L_\varphi \neq L_\varphi L_\varepsilon$, which means that the optimal local measurement bases for the estimation of $\varepsilon$ and $\varphi$ do not commute. In fact, as $\hat{\mathrm{e}}_{\boldsymbol{s}}\cdot\hat{\mathrm{e}}_{\boldsymbol{s}'}=0$, they correspond to mutually unbiased bases \cite{Schwinger1960,Durt2010}: acquiring optimal information about one parameter as such completely erases information about the other, and therefore we could never obtain simultaneous estimation of both parameters with optimal precision by measuring the $N$ qubits separately. Hence, to still attain the multi-parameter QFI CRB, we must now consider collective or global measurements on the whole $N$-qubit ensemble instead. \subsubsection{Measurement with $\boldsymbol{J}^2$ and $L_\varphi$} We look for collective measurements that preserve information about the second parameter, say $\varphi$, when accessing information about the first parameter, say $\varepsilon$. Indeed, as $\varphi$ serves as the angle of rotation generated by the angular momentum $J_y$, see Eq.~(\ref{eq:rhoN}), it is inviting to consider measuring the angular momentum squared operator, $\boldsymbol{J}^2$, which is invariant under rotation, for the estimation of $\varepsilon$. Then, as $\boldsymbol{J}^2$ commutes with $L_\varphi$ (as one can explicitly confirm with Eq.~(\ref{eq:SLDphi})), we can at the same time measure the eigenstates of $L_\varphi$ and so estimate $\varphi$ with optimal precision, i.e., in the $\nu\rightarrow\infty$ limit, \begin{eqnarray}\label{eq:MSEvarphi} \nu\Delta^2\hat{\varphi}_N = \mathcal{F}_{N;\varphi}^{-1}, \end{eqnarray} using for example the estimator $\hat{\varphi}_N(D)$ introduced in Sec.~\ref{sec:varphi}.
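Both the condition $\tr\{\rho_N(\boldsymbol{\theta})L_\varepsilon L_\varphi\}=0$ and the commutation of $\boldsymbol{J}^2$ with $L_\varphi$ can be confirmed numerically for small $N$, as in the sketch below; the values of $N$, $\varphi$, $s$, and $\partial s/\partial\varepsilon$ are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np
from functools import reduce
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def collective(op, N):
    """J_a = (1/2) sum_i op^{(i)}, built from Kronecker products."""
    return 0.5 * sum(reduce(np.kron, [op if k == i else I2 for k in range(N)])
                     for i in range(N))

N, phi, s, ds = 4, 0.3, 0.9, 1.0
Jx, Jy, Jz = (collective(a, N) for a in (sx, sy, sz))
J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz
U = expm(-1j * phi * Jy)

beta = np.arctanh(s)
rhoN = U @ expm(2 * beta * Jz) @ U.conj().T / (2 * np.cosh(beta)) ** N
L_phi = 2 * s * (U @ Jx @ U.conj().T)                       # Eq. (SLDphi)
L_eps = ds / (1 - s**2) * (2 * U @ Jz @ U.conj().T - N * s * np.eye(2**N))

print(np.abs(J2 @ L_phi - L_phi @ J2).max())                # ~0
print(abs(np.trace(rhoN @ L_eps @ L_phi)))                  # ~0
\end{verbatim}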
\noindent For the estimation precision of $\varepsilon$, we apply the well-known error-propagation formula \cite{Wineland1992} for $\nu\rightarrow\infty$, which for our case here reads (see \ref{app:errorpropJvec2} for details of the evaluation) \begin{eqnarray}\label{eq:J2errpro} \nu\Delta^2\hat{\varepsilon}_N&= \frac{\tr\{(\boldsymbol{J}^2)^2\rho_N(\boldsymbol{\theta})\}-\tr\{\boldsymbol{J}^2\rho_N(\boldsymbol{\theta})\}^2}{|\frac{\partial}{\partial\varepsilon}\tr\{\boldsymbol{J}^2\rho_N(\boldsymbol{\theta})\}|^2}\nonumber\\ &=\frac{1}{N}\Bigg[\frac{(3-2N)s(\varepsilon)^4+2(N-3)s(\varepsilon)^2+3}{2(N-1)s(\varepsilon)^2\left(\frac{\partial s(\varepsilon)}{\partial\varepsilon}\right)^2}\Bigg], \end{eqnarray} with the estimator $\hat{\varepsilon}_N(D)$ defined by Eq.~(\ref{eq:expJ}), with $\langle\boldsymbol{J}^2\rangle$ approximated by $\sum_{j=N/2-\lfloor N/2 \rfloor}^{N/2} j(j+1)\nu_j$, where $\nu_j$ is the relative frequency for the projection onto the $j$-subspace of $\boldsymbol{J}^2$ over $\nu$ repetitions. Now, in the large $N$ limit, $\nu\Delta^2\hat{\varepsilon}_N$ has the asymptotic behaviour \begin{eqnarray}\label{eq:J2errprolargeN} \nu\Delta^2\hat{\varepsilon}_N&\sim \frac{1}{N}(1-s(\varepsilon)^2)\left(\frac{\partial s(\varepsilon)}{\partial\varepsilon}\right)^{-2}+\Or\Big(\frac{1}{N^2}\Big)\nonumber\\ &=\mathcal{F}_{N;\varepsilon}^{-1}+\Or\Big(\frac{1}{N^2}\Big), \end{eqnarray} where $\Or(1/N^2)$ stands for terms that are of order $1/N^2$ and beyond. We thus find that the $\boldsymbol{J}^2$ measurement indeed estimates $\varepsilon$ optimally as $N$ gets ever larger, and, as noted, without sacrificing the estimation precision for $\varphi$ at all. In Fig.~\ref{fig:MPEfQS-fig2}, we illustrate the convergence of the weighted sum of the two estimation precisions, $\frac{1}{2}\nu\Delta^2\hat{\varepsilon}_N+\frac{1}{2}\nu\Delta^2\hat{\varphi}_N$, with the $\boldsymbol{J}^2$ and $L_\varphi$ measurement strategy, to the ultimate bound $\frac{1}{2}\mathcal{F}_{N;\varepsilon}^{-1}+\frac{1}{2}\mathcal{F}_{N;\varphi}^{-1}$, for the chosen parametrization of $s(\varepsilon)=1-\varepsilon^2/8$. \begin{figure*}[t!] \centering \includegraphics[width=0.7\textwidth]{MPEfQS-fig2} \caption{\textbf{Weighted sum of the mean squared errors (MSEs) in estimating $\varepsilon$ and $\varphi$ using the global $\boldsymbol{J}^2$ and local $L_\varphi$ measurement} as given by Eq.~(\ref{eq:MSEvarphi}) and Eq.~(\ref{eq:J2errpro}), for different $N$ values (\emph{thick solid line}). The lower \emph{dashed line} corresponds to the ultimate (weighted) achievable precision should we have perfect knowledge about the other parameter, and only a single variable to estimate. Note that the plot starts at $N=2$, as the classification of collective versus local measurements is not meaningful for $N=1$, and in this plot we consider a specific parametrization of $s(\varepsilon)=1-\varepsilon^2/8$ with $\varepsilon^2/8=0.1$, though the convergence behaviour is general and independent of the parametrization. For comparison, we also include the best (weighted) MSE obtainable (upper \emph{dashed line}), should we use the strategy of dividing the $\nu$ resources equally into estimating $\varepsilon$ and $\varphi$ separately.
The advantage of the $\boldsymbol{J}^2$ collective measurement is evident for $N\geq3$.} \label{fig:MPEfQS-fig2} \end{figure*} \subsubsection{Nearly-pure qubit estimation}\label{sec:nearly-pure} Having established the general results for compatible joint estimation of $\varepsilon$ and $\varphi$, let us look deeper into the scenario where the qubits are nearly pure, i.e., $s(\varepsilon)\approx1$. In particular, let us apply the parameterization $s(\varepsilon)=1-\varepsilon^2/8$ with $\varepsilon\ll1$, a choice that is motivated by a case study in the quantum imaging superresolution problem: As reported in Ref.~\cite{Chrostowski2017}, estimating $\varphi$ and $\varepsilon$ in this parameterization could be seen as estimating the centroid and the separation between two close incoherent light sources (up to some multiplicative constant), respectively. We shed some light on how the $\boldsymbol{J}^2$ measurement works in this $\varepsilon\ll1$ regime. From the theory of angular momentum, see textbooks like Refs.~\cite{CohenTannoudji, Gottfried2013}, we know that the full $N$-spin Hilbert space, $\mathcal{H}_\mathrm{total}$, can be decomposed into a direct sum of orthogonal subspaces specified by two numbers, $j$ and $g(j)$: \begin{eqnarray}\label{eq:Htotal} \mathcal{H}_\mathrm{total}&= \bigoplus_{j=N/2-\lfloor N/2 \rfloor}^{N/2} \Big(\bigoplus_{g(j)=1}^{\mu_j} \mathcal{H}_{j,g(j)}\Big). \end{eqnarray} Here, $j\in\{N/2, N/2-1, \ldots, N/2-\lfloor N/2 \rfloor\}$ is nothing but the usual quantum number associated with the total angular momentum squared, while $g(j)\in\{1,\ldots,\mu_j-1,\mu_j\}$ specifies the $\mu_j$ different subspaces for the same $j$ value. Generally, the multiplicity factor $\mu_j$ depends on how the total angular momentum is composed; for our case of $N$ spin-1/2s, it is known that, see for example Refs.~\cite{Cirac1999} and \cite{Banaszek2001} \footnote{There is a typo in equation (A4) of Ref.~\cite{Banaszek2001}.}, $\mu_j=\binom{N}{N/2-j}-\binom{N}{N/2-j-1}=\frac{2j+1}{N+1}\binom{N+1}{N/2-j}$. \noindent Denote by $\mathcal{P}_j$ the projector onto the $j$-angular momentum space, i.e., the space $\bigoplus_{g(j)=1}^{\mu_j} \mathcal{H}_{j,g(j)}$. Then, with the quantum state $\rho_N(\varepsilon,\varphi)$ of Eq.~(\ref{eq:rhoN}), evidently $\rho_N(\varepsilon,\varphi)=\sum_{j=N/2-\lfloor N/2\rfloor}^{N/2}\mathcal{P}_j \rho_N(\varepsilon,\varphi) \mathcal{P}_j$, and the probability that the $N$-qubit ensemble is found to have the total angular momentum value $j$ is given by \begin{eqnarray}\label{eq:pj} p_j&=\tr\Big\{\mathcal{P}_j \mathrm{e}^{-\mathrm{i} J_y\varphi}\mathrm{e}^{2\beta J_z}\mathrm{e}^{\mathrm{i} J_y\varphi}\Big\}/(2\cosh\beta)^N=\mu_j \sum_{m=-j}^{j} \mathrm{e}^{2\beta m}/(2\cosh\beta)^N\nonumber\\ &=\mu_j \frac{\sinh\big(\beta(1+2j)\big)}{\sinh\beta}\frac{1}{(2\cosh\beta)^N}\nonumber\\ &= \mu_j \frac{\big[\varepsilon\sqrt{16-\varepsilon^2}\big]^{-2j}\big[(16-\varepsilon^2)^{1+2j}-(\varepsilon^2)^{1+2j}\big]}{16(1-\varepsilon^2/8)}\frac{\varepsilon^N}{4^N}\Big(1-\frac{\varepsilon^2}{16}\Big)^{N/2}. \end{eqnarray} Notice that, as expected, $p_j$ does not depend on $\varphi$. As $\varepsilon\ll1$, we expand Eq.~(\ref{eq:pj}) in powers of $\varepsilon$, and the first two leading terms are \begin{eqnarray}\label{eq:pj2nd} p_j&\approx\mu_j \Big(\frac{\varepsilon}{4}\Big)^{N-2j}\Big[1+a_{j,N}\varepsilon^2\Big], \end{eqnarray} where $a_{j,N}=\frac{1}{16}(1-j-\frac{N}{2}-\delta_{j,0})$.
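As a sanity check of Eq.~(\ref{eq:pj}), the sketch below tabulates $\mu_j$ and $p_j$ from the closed-form expressions and verifies the normalization $\sum_j p_j=1$; the values of $N$ and $\varepsilon$ are illustrative.
\begin{verbatim}
import numpy as np
from math import comb

def p_j(N, eps):
    """p_j of Eq. (pj) for s(eps) = 1 - eps**2/8, i.e., tanh(beta) = s."""
    beta = np.arctanh(1 - eps**2 / 8)
    out = {}
    j = N / 2
    while j >= 0:
        mu = (2*j + 1) / (N + 1) * comb(N + 1, round(N/2 - j))
        out[j] = mu * np.sinh(beta * (1 + 2*j)) / np.sinh(beta) \
                    / (2 * np.cosh(beta)) ** N
        j -= 1
    return out

probs = p_j(10, 0.4)
print(sum(probs.values()))           # = 1
print(probs[5.0], probs[4.0])        # j = N/2 and N/2 - 1 dominate
\end{verbatim}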
The truncated series expansion in Eq.~(\ref{eq:pj2nd}) is, more precisely, an asymptotic one, and approximates Eq.~(\ref{eq:pj}) well if $N\varepsilon^2\ll1$. From Eq.~(\ref{eq:pj2nd}) we then see that the leading contribution to the FI, up to $N\varepsilon^2$, is from the three subspaces with $j=N/2, N/2-1$, and $N/2-2$. That is, \begin{eqnarray} p_{\frac{N}{2}}&\approx1-\frac{N-1}{16}\varepsilon^2,\label{eq:pN/2}\\ p_{\frac{N}{2}-1}&\approx(N-1)\frac{\varepsilon^2}{16}\Big[1+\frac{(2-N)}{16}\varepsilon^2\Big], \quad [\mathrm{defined\ only\ for\ } N\geq2],\label{eq:pN/2-1}\\ p_{\frac{N}{2}-2}&\approx\frac{N(N-3)\varepsilon^4}{512}, \quad [\mathrm{defined\ only\ for\ } N\geq4], \label{eq:pN/2-2} \end{eqnarray} and with $f_j=\frac{1}{p_j} \Big(\frac{\partial p_j}{\partial\varepsilon}\Big)^2$, we have \begin{eqnarray} f_{\frac{N}{2}}&\approx\frac{(N-1)^2}{64}\varepsilon^2, \label{eq:fN/2}\\ f_{\frac{N}{2}-1}&\approx\frac{N-1}{4}+\frac{3(N-1)}{64}\Big(2-N-\delta_{N,2}\Big)\varepsilon^2, \label{eq:fN/2-1}\\ f_{\frac{N}{2}-2}&\approx\frac{N(N-3)}{32}\varepsilon^2, \label{eq:fN/2-2} \end{eqnarray} such that \begin{eqnarray}\label{eqFne2nd} \fl F_{N,\varepsilon}=\sum_{j=N/2-\lfloor N/2\rfloor}^{N/2} f_j=f_{\frac{N}{2}}+f_{\frac{N}{2}-1}+f_{\frac{N}{2}-2}+ \big[\textsf{terms of $\Or(\varepsilon^4)$ and higher} \big]\nonumber \\ \fl \hphantom{F_{N,\varepsilon}}\approx \frac{N-1}{4}+\Big[\frac{(N-1)^2}{64}+\frac{3(N-1)}{64}(2-N-\delta_{N,2})+\frac{N(N-3)}{32}\eta(N-4)\Big]\varepsilon^2, \end{eqnarray} where $\eta(\cdot)$ is the unit step function, and here we define $\eta(0)\equiv1$. For $N\geq4$, the FI up to $N\varepsilon^2$ is then equal to \begin{eqnarray} F_{N,\varepsilon}&=\frac{N}{4}\Big(1+\frac{\varepsilon^2}{16}\Big)-\Big(\frac{1}{4}+\frac{5\varepsilon^2}{64}\Big). \end{eqnarray} Upon comparing with the QFI in Eq.~(\ref{eq:QFIeps}), which now reads \begin{eqnarray}\label{eq:FQNeps-exp} \mathcal{F}_{N;\varepsilon}=N\frac{4}{16-\varepsilon^2}=\frac{N}{4}\Big(1+\frac{\varepsilon^2}{16}+\frac{\varepsilon^4}{256}+\cdots\Big), \end{eqnarray} we then confirm that in the large $N$ limit the three $j=N/2, N/2-1, N/2-2$ subspaces of the $\boldsymbol{J}^2$ operator carry optimal information about $\varepsilon$ when $\varepsilon\ll1$ and $N\varepsilon^2\ll1$. In particular, from Eq.~(\ref{eq:pN/2}) and Eq.~(\ref{eq:pN/2-1}) we see that most detection events will be contributed by the $j=N/2$ subspace, followed by the $j=N/2-1$ subspace. It is, however, these rare detection events from the $j=N/2-1$ subspace that contain most of the information about $\varepsilon$, see Eq.~(\ref{eq:fN/2-1}); the abundant detection events from the $j=N/2$ subspace and the even rarer detection events from the $j=N/2-2$ subspace contribute only information with relative weight $N\varepsilon^2\ll1$. The identification of a few specific angular momentum `modes' that carry most of the information about the parameter in this regime is reminiscent of the application of the spatial-mode demultiplexing technique in the superresolution imaging problem \cite{Tsang2016, Len2020}: in the close separation regime (corresponding to $\varepsilon\ll1$), most of the information about the separation can be extracted from the fundamental spatial mode, the transfer function of the imaging system (corresponding to $j=N/2$), and the first `excited' mode, which is the derivative of the transfer function with respect to the spatial dimension (corresponding to $j=N/2-1$).
Note that when $N$ gets ever larger, such that $N\varepsilon^2>1$ and the asymptotic series approximation of Eq.~(\ref{eq:pj2nd}) is no longer accurate, we then need to take into account lower and lower angular momentum `modes', such that the $\boldsymbol{J}^2$ measurement still provides the asymptotically optimal precision as given by Eq.~(\ref{eq:J2errpro}) and Eq.~(\ref{eq:J2errprolargeN}). \section{Realization of the $\boldsymbol{J}^2$ and $L_\varphi$ measurement}\label{Sec4} Let us now discuss the implementation of the $\boldsymbol{J}^2$ and $L_\varphi$ measurement. While measuring the local operator $L_\varphi$ should be straightforward, measuring the global operator $\boldsymbol{J}^2$ is more challenging. For the simplest scenario with $N=2$, an exact implementation of such a scheme is known for bosonic systems: measurement in the $\boldsymbol{J}^2$ basis is realized using the Hong-Ou-Mandel (HOM) interference effect \cite{Hong1987, Dai2014}, whereas the $L_\varphi$ measurement is performed in its local basis, after the interference. In fact, a recent experiment demonstrating superresolution estimation of the centroid and separation between two mutually incoherent sources in the sub-Rayleigh limit \cite{Parniak2018}---a task which can be well approximated as the estimation of $\varepsilon$ and $\varphi$ in our model \cite{Chrostowski2017}---utilizes essentially the same idea. For the sake of clarity, as well as to introduce the notation, below we first recount how the setup of a HOM interferometer, followed by projective measurements at the end, works for the $N=2$ case. Note that here our study includes both bosons and fermions. Then, we show how this can be generalized to the $N=3$ case, before moving on to briefly discuss the more complicated scenarios with larger $N$, where the $\boldsymbol{J}^2$ measurement is approached from a quantum circuit perspective, with ancilla qubits involved. Lastly, we look closer into the case of nearly-pure qubit estimation, where, in conjunction with the discussion in Sec.~\ref{sec:nearly-pure}, non-exact but \emph{approximate} implementations that focus on distinguishing between just the $j=N/2$ and $j=N/2-1$ subspaces will be considered. In what follows, for convenience of notation, we will use $\{\th, \textsc{v}\}$ to specify a basis for the qubit system, such that, e.g., $\sigma_z=\proj{\th}-\proj{\textsc{v}}$. \subsection{$N=2$} \begin{figure*}[t!] \centering \includegraphics[width=0.95\textwidth]{MPEfQS-fig3} \caption{\textbf{(a) Implementation of the $\boldsymbol{J}^2$ and $L_\varphi$ measurement for $N=2$ qubits.} The two qubits are set to enter a Hong-Ou-Mandel (HOM) interferometer which consists of a 50:50 beam splitter (BS), whose action is equivalent to applying a Hadamard transformation $\mathcal{U}$ to the creation operators, see Eq.~(\ref{eq:HOMn=2}). The HOM interference signatures uniquely determine the projection onto the eigenspaces of $\boldsymbol{J}^2$: bunching into the same output port corresponds to the $j=1(0)$ projection for bosons (fermions), while anti-bunching at different output ports corresponds to the $j=0(1)$ projection for bosons (fermions). At the end of each output port, a projection onto the operators $\{\frac{\mathds{1}+\hat{\mathrm{e}}_{\boldsymbol{s}'}\cdot\boldsymbol{\sigma}}{2},\frac{\mathds{1}-\hat{\mathrm{e}}_{\boldsymbol{s}'}\cdot\boldsymbol{\sigma}}{2}\}$ then completes the measurement of $L_\varphi$.
\textbf{(b) Bell multiport}: The HOM setup in (a) can be seen as a special case of a Bell multiport with $N=2$, where the Hadamard transformation is now generalized to the discrete Fourier transform. For $N=3$, the measurement statistics (including coincidences) at (between) the various output ports allow us to implement the $\boldsymbol{J}^2$ and $L_\varphi$ measurement exactly; see Eqs.~(\ref{eq:N=3sigboson}) and (\ref{eq:N=3sigferm}). For $N\geq4$, however, a Bell multiport cannot implement $\boldsymbol{J}^2$ perfectly, as not all interference signatures for different $j$ values are then distinct. We conjecture, nevertheless, that the interference signatures for $j=N/2$ and $j=N/2-1$ are distinct, which we can then make use of in the limit of nearly-pure qubit estimation, as most detection events in this case are contained in these two subspaces; see Sec.~\ref{sec:nearly-pure}.} \label{fig:MPEfQS-fig3} \end{figure*} A HOM interferometer, schematically depicted in Fig.~\ref{fig:MPEfQS-fig3}(a), consists of a 50:50 beam splitter (BS), with one qubit entering simultaneously from each of the input ports. The action of the BS can be summarized as a Hadamard transformation $\mathcal{U}$ linking the input and output creation operators, i.e., \begin{eqnarray} \label{eq:HOMn=2} \left( \begin{array}{cc} \bd{\alpha}{1} \\ \bd{\alpha}{2} \end{array} \right)&=\mathcal{U}\left( \begin{array}{cc} \ad{\alpha}{1} \\ \ad{\alpha}{2} \end{array} \right)=\frac{1}{\sqrt{2}}\left( \begin{array}{cc} 1 & 1 \\ 1 & -1 \end{array} \right) \left( \begin{array}{cc} \ad{\alpha}{1} \\ \ad{\alpha}{2} \end{array} \right), \end{eqnarray} where the operator $\ad{\alpha}{i}$ ($\bd{\alpha}{i}$) specifies the creation of a qubit with the state $\alpha=\{\th,\textsc{v}\}$ at the input (output) port $i=\{1,2\}$. As usual, we have $\big[\ad{\alpha}{i},\ad{\beta}{j}\big]=\big[\bd{\alpha}{i},\bd{\beta}{j}\big]=0$ for bosons, and $\big\{\ad{\alpha}{i},\ad{\beta}{j}\big\}=\big\{\bd{\alpha}{i},\bd{\beta}{j}\big\}=0$ for fermions, which denies the possibility of having two fermions with the same degrees of freedom at the same output. In this notation, we thus have $\rho_N(\varepsilon,\varphi)=\bigotimes_{i=1}^N \rho^{(i)}(\varepsilon,\varphi)$, with \begin{equation} \label{eq:rhoi} \fl \rho^{(i)}(\varepsilon,\varphi)=\mathrm{e}^{-\mathrm{i} \sigma_y\varphi/2}\Big(\frac{1+s(\varepsilon)}{2}\ad{\th}{i}\proj{\textsc{vac}}\aa{\th}{i}+\frac{1-s(\varepsilon)}{2}\ad{\textsc{v}}{i}\proj{\textsc{vac}}\aa{\textsc{v}}{i}\Big)\mathrm{e}^{\mathrm{i} \sigma_y\varphi/2}, \end{equation} where $\ket{\textsc{vac}}$ is the vacuum state and $\aa{\alpha}{i}=(\ad{\alpha}{i})^\dagger$. One can then show (see \ref{app:BellmultiportN=2} for an explicit demonstration in the bosonic case) that we have the following unique signatures at the output ports: \begin{eqnarray}\label{eq:N=2sigboson} j=1&:\quad \textrm{both qubits exit at the same port;}\nonumber\\ j=0&:\quad \textrm{the two qubits exit at distinct ports,\quad \quad [bosons]} \end{eqnarray} or \begin{eqnarray}\label{eq:N=2sigferm} j=1&:\quad \textrm{the two qubits exit at distinct ports;}\nonumber\\ j=0&:\quad \textrm{both qubits exit at the same port.\quad [fermions]} \end{eqnarray} Note that these signatures are independent of the parameter $\varphi$, and hence information about it is preserved, as it should be.
To then extract information about, and estimate, $\varphi$, we simply measure the projectors $\{\frac{\mathds{1}+\hat{\mathrm{e}}_{\boldsymbol{s}'}\cdot\boldsymbol{\sigma}}{2},\frac{\mathds{1}-\hat{\mathrm{e}}_{\boldsymbol{s}'}\cdot\boldsymbol{\sigma}}{2}\}$ at the end of each of the output ports, which of course corresponds to measuring the eigenspaces of $L_\varphi$. Note that number-resolving detection is needed here to capture all the signatures. \subsection{$N=3$} A natural question arises: Can we still implement the $\boldsymbol{J}^2$ and $L_\varphi$ measurement for the case of $N\geq3$, using a similar idea of observing different measurement statistics, including coincidence counts, at the various output ports? Here, we attempt to answer this question by considering a generalization of the HOM interferometer to the \emph{Bell multiport} setup \cite{Zukowski1997,Lim2005}, schematically depicted in Fig.~\ref{fig:MPEfQS-fig3}(b), which can generally be realized with combinations of beam splitters, phase shifters, and mirrors \cite{Reck1994}. In this case, one qubit simultaneously enters from each of the $N$ inputs of the Bell multiport, whose action, in generalization of Eq.~(\ref{eq:HOMn=2}), is the discrete Fourier transform, i.e., \begin{eqnarray}\label{eq:dFTN} \bd{\alpha}{k}=\sum_{\ell=1}^N\mathcal{U}_{k\ell}\ad{\alpha}{\ell}, \quad \quad \mathcal{U}_{k\ell}=\frac{1}{\sqrt{N}}\mathrm{e}^{\frac{2\pi\mathrm{i}}{N}(k-1)(\ell-1)}. \end{eqnarray} We work out explicitly the case of $N=3$ here, where the Bell multiport is also known as the Bell tritter \cite{Zukowski1997, Bouchard2021} and has been realized experimentally, for example, in photonic systems \cite{Spagnolo2013,Schaeff2015}. Leaving the details to \ref{app:BellmultiportN=3}, we find that the projection onto the $j$-eigenspace of $\boldsymbol{J}^2$ has the following unique signatures at the output ports: \begin{eqnarray}\label{eq:N=3sigboson} j=\frac{3}{2}&:\quad \textrm{all three qubits exit at the same port \emph{or} at distinct ports;}\nonumber\\ j=\frac{1}{2}&:\quad \textrm{two qubits exit at the same port,} \nonumber\\ &\hspace{0.75cm} \textrm{another at a different port, \quad\quad \quad [bosons]} \end{eqnarray} or \begin{eqnarray}\label{eq:N=3sigferm} j=\frac{3}{2}&:\quad \textrm{all three qubits exit at distinct ports;}\nonumber\\ j=\frac{1}{2}&:\quad \textrm{two qubits exit at the same port,} \nonumber\\ &\hspace{0.75cm} \textrm{another at a different port. \quad\;\;\quad [fermions]} \end{eqnarray} Thus, upon measuring the projectors $\{\frac{\mathds{1}+\hat{\mathrm{e}}_{\boldsymbol{s}'}\cdot\boldsymbol{\sigma}}{2},\frac{\mathds{1}-\hat{\mathrm{e}}_{\boldsymbol{s}'}\cdot\boldsymbol{\sigma}}{2}\}$ at the outputs, all the outcome statistics indeed allow us to implement exact $\boldsymbol{J}^2$ and $L_\varphi$ measurements with a Bell tritter for $N=3$. \subsection{$N\geq4$} For $N\geq4$, an exact realization of the measurement in the $\boldsymbol{J}^2$ basis using interference signatures of the ``paths'' in a passive, linear setup such as a HOM interferometer or Bell multiport is, to the best knowledge of the author, not available. For example, with the Bell multiport with $N=4$ in Eq.~(\ref{eq:dFTN}), we do find that the interference signatures for the $j=2$ and $j=1$ eigenspaces are mutually distinct, but then there is no room left for another distinct signature for the $j=0$ eigenspace.
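As a small illustration of Eq.~(\ref{eq:dFTN}) and of the bosonic $N=2$ signature in Eq.~(\ref{eq:N=2sigboson}), the sketch below constructs the Bell-multiport matrix and evaluates the two-boson coincidence amplitude through the matrix permanent; the vanishing coincidence probability for two identical bosons is just the HOM dip.
\begin{verbatim}
import numpy as np
from itertools import permutations

def bell_multiport(N):
    """U_{kl} = exp(2 pi i (k-1)(l-1)/N)/sqrt(N), cf. Eq. (dFTN)."""
    k, l = np.meshgrid(range(N), range(N), indexing="ij")
    return np.exp(2j * np.pi * k * l / N) / np.sqrt(N)

def permanent(M):
    n = len(M)
    return sum(np.prod([M[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

U = bell_multiport(2)                          # 50:50 BS (Hadamard)
print(np.allclose(U @ U.conj().T, np.eye(2)))  # unitarity: True
# Two identical bosons, one per input port: the coincidence amplitude
# (one boson per output) is perm(U) = 0, so they always bunch (j = 1).
print(abs(permanent(U)) ** 2)                  # 0.0
\end{verbatim}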
Indeed, computing the output probabilities for an arbitrary \emph{given} transformation $\mathcal{U}$ on the set of creation operators $\{\ad{\alpha}{\ell}\}$ with \emph{fixed} $\alpha$ is already hard \cite{Valiant1979,Aaronson2011}---a central fact in boson sampling \cite{Aaronson2013,Brod2019}---so finding a transformation $\mathcal{U}$ whose output signatures are different for each $j$-eigenspace, which now also involves inputs with different $\alpha$, is even harder as $N$ gets larger. Let us hence briefly switch to discussing the problem from a quantum computation perspective, where, instead of just a passive setup, we allow the use of ancillary qubits and controlled gates to realize the measurement of the common eigenbasis of $\boldsymbol{J}^2$ and $L_\varphi$. Indeed, upon identifying this common eigenbasis of $\boldsymbol{J}^2$ and $L_\varphi$ as the Schur basis \cite{Bacon2006}---the union of bases for the irreducible representations of both the globally-symmetric unitary group and the permutation group---the task of projection onto this common eigenbasis can be mapped onto the outcomes of a general Schur transformation circuit, for which efficient constructions have been developed \cite{Bacon2006,Bacon2007,Kirby2017,Kirby2018}. In particular, the whole circuit can be implemented with $\Or(N^3)$ two-level gates (unitaries that act non-trivially on two dimensions), which can then be approximately executed with the universal and fault-tolerant set of Clifford and T gates \cite{Nielsen2010}. Then, if each of these two-level gates can be approximately executed with at most $\Or(\gamma/N^3)$ error, the full Schur transformation circuit can be realized with an error $\gamma$ using overall $\Or(N^4\log(N/\gamma))$ universal gates, as well as $\Or(\log N)$ ancilla qubits \cite{Kirby2017,Kirby2018}. \subsection{Nearly-pure qubit estimation: Approximate realization} Consider now the situation in Sec.~\ref{sec:nearly-pure}, i.e., nearly-pure qubit estimation. In particular, we stick with the parameterization $s(\varepsilon)=1-\varepsilon^2/8$, $\varepsilon\ll 1$. As we have discussed in Sec.~\ref{sec:nearly-pure}, in the limit of $\varepsilon\ll1$ and $N\varepsilon^2\ll1$, the rare $j=N/2-1$ detection events are the ones that contain most of the information about $\varepsilon$. We can see this from another angle, by writing Eq.~(\ref{eq:rhoN}) differently, namely, up to terms of $\Or(N\varepsilon^2)$, \begin{eqnarray}\label{eq:rhoN2} \rho_N(\varepsilon,\varphi)&=\mathrm{e}^{-\mathrm{i} J_y\varphi}\Big(\frac{1+s(\varepsilon)}{2}\proj{\th}+\frac{1-s(\varepsilon)}{2}\proj{\textsc{v}}\Big)^{\otimes N}\mathrm{e}^{\mathrm{i} J_y\varphi}\nonumber\\ &\approx\Big(1-\frac{N\varepsilon^2}{16}\Big)\mathrm{e}^{-\mathrm{i} J_y\varphi}\proj{\th\cdots\th}\mathrm{e}^{\mathrm{i} J_y\varphi}\nonumber\\ &\hphantom{=}+\frac{\varepsilon^2}{16}\mathrm{e}^{-\mathrm{i} J_y\varphi}\Big(\proj{\th\cdots\th\textsc{v}}+\textsf{permutations}\Big)\mathrm{e}^{\mathrm{i} J_y\varphi}\nonumber\\ &\equiv w_0(\varepsilon)\tau_0(\varphi)+w_1(\varepsilon)\tau_1(\varphi), \end{eqnarray} where $w_0(\varepsilon)=1-\frac{N\varepsilon^2}{16}$, $w_1(\varepsilon)=\frac{N\varepsilon^2}{16}$, $\tau_0(\varphi)=\mathrm{e}^{-\mathrm{i} J_y\varphi}\proj{\th\cdots\th}\mathrm{e}^{\mathrm{i} J_y\varphi}$, and $\tau_1(\varphi)=\mathrm{e}^{-\mathrm{i} J_y\varphi}\sum_{i=1}^N\tau_{1,i}\mathrm{e}^{\mathrm{i} J_y\varphi}/N$ with $\tau_{1,i}=\proj{\textsc{v}}^{(i)}\bigotimes_{j(\neq i)}^N\proj{\th}^{(j)}$.
Ignoring differences beyond $\Or(N\varepsilon^2)$ from the exact $\rho_N(\varepsilon,\varphi)$, we may then consider the state as given by the right-hand side of Eq.~(\ref{eq:rhoN2}) instead, and suppose we send this state into the $N$-Bell multiport. Then, we have verified explicitly up to $N=12$ that the interference signatures for all the $\tau_{1,i}$ are the same, and we denote this set of signatures by $\sig{1}$. Furthermore, our numerical results show that $\sig{1}$ and the signature set for $\tau_0(\varphi)$, $\sig{0}$, are not exactly distinct, but have an overlap, and the probability of having an interference signature from $\tau_{1,i}$ inside this overlapping subset is given by $1/N$. The overall probability of observing $\sig{0}$ is then \begin{eqnarray}\label{eq:prsig0} \pr{\sig{0}}=w_0(\varepsilon)+w_1(\varepsilon)\frac{1}{N}=1-\frac{(N-1)\varepsilon^2}{16}, \end{eqnarray} while the remaining signatures have probability \begin{eqnarray}\label{eq:prsig1} \pr{\sig{1}\setminus\sig{0}}=w_1(\varepsilon)(1-\frac{1}{N})=\frac{(N-1)\varepsilon^2}{16}. \end{eqnarray} Over $\nu$ repetitions then, should we obtain the relative frequencies $\nu_0$ and $\nu_1=1-\nu_0$ for observing signatures in $\sig{0}$ and $\sig{1}\setminus\sig{0}$ respectively, we may estimate $\varepsilon$ by $\displaystyle\hat{\varepsilon}(D)=4\sqrt{\frac{\nu_1}{N-1}}=\sqrt{\frac{8(1+\nu_1-\nu_0)}{N-1}}$ (a simple numerical illustration is given at the end of this subsection). As a reminder to the reader, by obtaining $\nu_0$ and $\nu_1$ and then estimating $\varepsilon$, we have not disturbed any information about $\varphi$, which we shall extract from the $L_\varphi$ measurement, as the interference signatures are invariant under an overall and equal transformation on all the qubits. It turns out that the probabilities of getting the signature sets $\sig{0}$ and $\sig{1}$ are respectively equal to $p_{N/2}$ and $p_{N/2-1}$; see Eqs.~(\ref{eq:prsig0}) and (\ref{eq:prsig1}) versus Eqs.~(\ref{eq:pN/2}) and (\ref{eq:pN/2-1}). Is this correspondence accidental? Evidently, $\ket{\th\cdots\th}=\ket{j=\frac{N}{2},m_z=\frac{N}{2}}$, and so $\tau_0(\varphi)=\mathcal{P}_{\frac{N}{2}}\tau_0(\varphi)\mathcal{P}_{\frac{N}{2}}$, such that $\tr\{\mathcal{P}_{\frac{N}{2}}\tau_0(\varphi)\}=1$. Meanwhile, we have $\tau_1(\varphi)=(\mathcal{P}_{\frac{N}{2}}+\mathcal{P}_{\frac{N}{2}-1})\tau_1(\varphi)(\mathcal{P}_{\frac{N}{2}}+\mathcal{P}_{\frac{N}{2}-1})$, and one can easily show that $\tr\{\mathcal{P}_{\frac{N}{2}}\tau_1(\varphi)\}=\frac{1}{N}$. Hence, just like $\pr{\sig{0}}$ and $\pr{\sig{1}\setminus\sig{0}}$, we have $p_{N/2}\approx w_0(\varepsilon)+w_1(\varepsilon)(1/N)=1-(N-1)\varepsilon^2/16$ and $p_{N/2-1}\approx w_1(\varepsilon)(1-1/N)=(N-1)\varepsilon^2/16$, indeed. It is, therefore, inviting to associate the $1/N$ overlap between $\sig{1}$ and $\sig{0}$ as originating from the $1/N$ fraction of $\tau_1(\varphi)$ in the $\mathcal{P}_{\frac{N}{2}}$ subspace, and therefore to identify $\sig{0}$ as the signature set for $j=N/2$, and $\sig{1}\setminus\sig{0}$ as the signature set for $j=N/2-1$. We note that $\sig{1}$ has so far not been obtained from arbitrary states in the $j=N/2-1$ subspace, but only those with $m_z=N/2-1$ via the states $\tau_{1,i}=\proj{\textsc{v}}^{(i)}\bigotimes_{j(\neq i)}^N\proj{\th}^{(j)}$, $i=1,\cdots,N$. Similarly, $\sig{0}$ is obtained from considering just $\tau_0$, which is a particular state in the $j=N/2$ subspace with the specific value of $m_z=N/2$.
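As promised above, here is a quick Monte Carlo illustration (ours; the sampling model simply uses the binomial statistics implied by Eq.~(\ref{eq:prsig1}), and the chosen $N$, $\varepsilon$ and number of repetitions are arbitrary):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, eps, nu = 50, 0.05, 200_000
p1 = (N - 1) * eps**2 / 16          # probability of a signature outside S_0
nu1 = rng.binomial(nu, p1) / nu     # relative frequency over nu repetitions
eps_hat = 4 * np.sqrt(nu1 / (N - 1))
print(eps, eps_hat)                 # the estimate clusters around eps
\end{verbatim}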
To further confirm our assertion that $\sig{0}$ and $\sig{1}\setminus\sig{0}$ do unambiguously distinguish between the projections onto the two $j$-subspaces respectively, we check explicitly the signatures for \emph{all} the eigenstates, $\ket{j,m_z,g(j)}$, in each $(j,g(j))$-eigenspace for $j=N/2$ and $j=N/2-1$ up to $N=6$, and find that indeed all the kets with $j=N/2$ have signatures in $\sig{0}$, while all the kets with $j=N/2-1$ have signatures in $\sig{1}\setminus\sig{0}$. We then end this section by proposing the following \emph{conjecture} from our numerical findings: \begin{eqnarray}\label{eq:conjecNsignatures} \textrm{Given $N$ spin-1/2s with one spin entering simultaneously from each of } \nonumber\\ \textrm{the input ports of an $N$-Bell multiport, the interference signatures for }\nonumber\\ \textrm{the $j=N/2$ and $j=N/2-1$ subspaces are distinguishable.} \end{eqnarray} Put differently, we conjecture that an $N$-Bell multiport allows us to distinguish between projections onto the $j=N/2$ and $j=N/2-1$ subspaces of $\boldsymbol{J}^2$ from the outcome signatures. \section{Discussion}\label{Sec5} Ideally, we would like to have a passive implementation of the $\boldsymbol{J}^2$ measurement for any valid range of $\varepsilon$ and $N$. Unfortunately, however, we only manage to do so for $N=2,3$ with the Bell multiport, and must at the moment be content with approximate realizations in the nearly-pure qubit regime, where projection onto the $j=N/2$ and $N/2-1$ subspaces is sufficient. To proceed further, instead of the Bell multiport with the corresponding discrete Fourier transform, one may perhaps consider more general complex Hadamard transforms \cite{Tadej2006}. Of course, more generally, measuring $\boldsymbol{J}^2$ and $L_\varphi$ need not be the only scheme that achieves the multi-parameter QFI CRB; there might be other schemes as well, with different implementation methods altogether. We demonstrated one such scheme here and will leave these other possibilities open for now. Regarding the interference signatures in $\sig{0}$ and $\sig{1}$ with the $N$-Bell multiport in the above section, we also observe an interesting feature for the case of bosonic qubits. With $N$ bosons and $N$ outputs in the Bell multiport, there are in total $|\textsf{S}|:=\binom{2N-1}{N}$ different signatures possible. Then, we find that apart from some specific $N$, the total number of signatures in $\sig{1}$, $|\sig{1}|$, is exactly $|\textsf{S}|$, which again confirms that there is no room for other $j\neq N/2$ values to have a distinguishable signature set. Up to $N=12$, as we have verified, the few exceptional $N$ values for which $|\sig{1}|\neq|\textsf{S}|$ are found to be $N=6,10,12$, with $|\sig{1}|/|\textsf{S}|=\frac{449}{462}, \frac{90358}{92378},\frac{1321717}{1352078}$ respectively.
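The counting $|\textsf{S}|=\binom{2N-1}{N}$, i.e., the number of multisets of $N$ output ports occupied by $N$ indistinguishable bosons, is easy to reproduce (our check; it recovers the denominators quoted above):
\begin{verbatim}
from math import comb
# |S| = C(2N-1, N); reproduces 462, 92378 and 1352078 for N = 6, 10, 12
print([comb(2 * n - 1, n) for n in (6, 10, 12)])
\end{verbatim}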
Incidentally, $N=6,10,12$ are the first three elements in the sequence of numbers which are not an integer power of a prime number, and it is a well-known open problem in mathematical physics and quantum information whether there exist $N+1$ mutually unbiased bases (MUB) for a Hilbert space whose dimension is such a number \cite{Durt2010, Horodecki2020}. Given that one of the important tools in studying MUB is the set of complex Hadamard matrices, of which the discrete Fourier transform is one example \cite{Tadej2006, Durt2010}, it is then perhaps not surprising to have such a correspondence between the non-unity ratio of $|\sig{1}|/|\textsf{S}|$ and the MUB problem. However, exactly how or why they are related would require a separate analysis, and is beyond the scope of this work. \noindent Additional remark: We realize that there is a recent work (to appear soon) by J. O. de Almeida et al. \cite{Jessica2021} on the estimation of the separation and relative intensity of two incoherent point sources, which explores a similar idea of collective ``angular momentum" measurements. \section{Conclusion}\label{Sec6} In summary, in this work we have studied the joint estimation of the length parameter $\varepsilon$ and the direction parameter $\varphi$ of the Bloch vector for qubit states, a model which has relevance to various physical problems, e.g., superresolution quantum imaging. We showed that these two parameters can be simultaneously estimated with asymptotically optimal precisions, using the collective measurement of the angular-momentum squared operator, followed by a local projective measurement. We also discussed how such a measurement scheme can be realized using a Bell multiport, either exactly for $N=2,3$, or approximately when the qubits are nearly pure for other $N$ values. We conjecture that with a Bell multiport setup, the interference signatures in the $j=N/2$ and $j=N/2-1$ angular-momentum subspaces will be distinct. \ack {The author would like to express his gratitude to Konrad Banaszek, Chandan Datta, Marcin Jarzyna, and Jan Ko\l{}ody\'nski for their insightful comments and discussions. The author would also like to extend his appreciation to Konrad Banaszek for his encouragement and support throughout this work. This work is part of the project ``Quantum Optical Technologies" carried out within the International Research Agendas Programme of the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund.}
\section{Introduction} Atmospheric showers are initiated when primary cosmic rays hit the Earth's atmosphere. Secondary mesons produced in this collision, mostly pions and kaons, decay and give rise to electron and muon neutrino and anti-neutrino fluxes \cite{review}. There has been a long-standing anomaly between the predicted and observed $\nu_\mu/\nu_e$ ratio of the atmospheric neutrino fluxes \cite{atmexp}. Although the absolute individual $\nu_\mu$ or $\nu_e$ fluxes are only known to within $30\%$ accuracy, different authors agree that the $\nu_\mu/\nu_e$ ratio is accurate up to a $5\%$ precision. Herein resides our confidence in the atmospheric neutrino anomaly (ANA), now strengthened by the high statistics sample collected at the Super-Kamiokande experiment \cite{superkam}. The most likely solution of the ANA involves neutrino oscillations. In principle we can invoke various neutrino oscillation channels, involving the conversion of \hbox{$\nu_\mu$ } into either \hbox{$\nu_e$ } or \nt (active-active transitions) or the oscillation of \hbox{$\nu_\mu$ } into a sterile neutrino \hbox{$\nu_{s}$ } (active-sterile transitions). This last case is especially well-motivated theoretically, since it constitutes one of the simplest ways to reconcile \cite{4familytalks} the ANA with other puzzles in the neutrino sector such as the solar neutrino problem as well as the LSND result \cite{lsnd} and the possible need for a few eV mass neutrino as the hot dark matter in the Universe \cite{SS1}. The main aim of this talk is to compare the $\nu_\mu \to \nu_\tau$ and the $\nu_\mu \to \nu_{s}$ transitions using the new sample corresponding to 535 days of the Super-Kamiokande data. This analysis uses the latest improved calculations of the atmospheric neutrino fluxes as a function of zenith angle, including the muon polarization effect and taking into account a variable neutrino production point \cite{flux}. \section{Atmospheric Neutrino Oscillation Probabilities} The expected neutrino event numbers both in the absence and in the presence of oscillations can be written as: \begin{equation} N_\mu= N_{\mu\mu} +\ N_{e\mu} \; , \;\;\;\;\;\ N_e= N_{ee} + N_{\mu e} \; , \label{eventsnumber} \end{equation} where \begin{equation} N_{\alpha\beta} = n_t T \int \frac{d^2\Phi_\alpha}{dE_\nu d(\cos\theta_\nu)} \kappa_\alpha(h,\cos\theta_\nu,E_\nu) P_{\alpha\beta} \frac{d\sigma}{dE_\beta}\varepsilon(E_\beta) dE_\nu dE_\beta d(\cos\theta_\nu)dh\; . \label{event0} \end{equation} and $P_{\alpha\beta}$ is the oscillation probability of $\nu_\alpha \to \nu_\beta$ for given values of $E_{\nu}, \cos\theta_\nu$ and $h$, i.e., $ P_{\alpha\beta} \equiv P(\nu_\alpha \to \nu_\beta; E_{\nu}, \cos\theta_\nu, h) $. In the case of no oscillations, the only non-zero elements are the diagonal ones, i.e. $P_{\alpha\alpha}=1$ for all $\alpha$. Here $n_t$ is the number of targets, $T$ is the experiment's running time, $E_\nu$ is the neutrino energy and $\Phi_\alpha$ is the flux of atmospheric neutrinos of type $\alpha=\mu ,e$; $E_\beta$ is the final charged lepton energy and $\varepsilon(E_\beta)$ is the detection efficiency for such a charged lepton; $\sigma$ is the neutrino-nucleon interaction cross section, and $\theta_\nu$ is the angle between the vertical direction and the incoming neutrinos ($\cos\theta_\nu=1$ corresponds to down-coming neutrinos). In Eq.~(\ref{event0}), $h$ is the slant distance from the production point to sea level for $\alpha$-type neutrinos with energy $E_\nu$ and zenith angle $\theta_\nu$.
Finally, $\kappa_\alpha$ is the slant distance distribution, which is normalized to one \cite{flux}. The neutrino fluxes, in particular in the sub-GeV range, depend on the solar activity. In order to take this fact into account in Eq.~(\ref{event0}), a linear combination of atmospheric neutrino fluxes $\Phi_\alpha^{max}$ and $\Phi_\alpha^{min}$, which correspond to the most active Sun (solar maximum) and quiet Sun (solar minimum) respectively, is used. For definiteness we assume a two-flavor oscillation scenario, in which the $\nu_\mu$ oscillates into another flavour, either $\nu_\mu \to \nu_e$, $\nu_\mu \to \nu_s$ or $\nu_\mu \to \nu_\tau$. The Schr\"odinger evolution equation of the $\nu_\mu -\nu_X$ (where $X=e,\tau$ or $s$ for sterile) system in the matter background for {\sl neutrinos} is given by \begin{eqnarray} i{\mbox{d} \over \mbox{d}t}\left(\matrix{ \nu_\mu \cr\ \nu_X\cr }\right) & = & \left(\matrix{ {H}_{\mu} & {H}_{\mu X} \cr {H}_{\mu X} & {H}_X \cr} \right) \left(\matrix{ \nu_\mu \cr\ \nu_X \cr}\right) \,\,, \\ H_\mu & \! = & \! V_\mu + \frac{\Delta m^2}{4E_\nu} \cos2 \theta_{\mu X}\,, \,\,\,\,\,\,\,\,\, H_X \!= V_X - \frac{\Delta m^2}{4E_\nu} \cos2 \theta_{\mu X}, \nonumber \\ H_{\mu X}& \!= & - \frac{\Delta m^2}{4E_\nu} \sin2 \theta_{\mu X} \nonumber \label{evolution1} \end{eqnarray} where \begin{eqnarray} \label{potential} V_\tau=V_\mu = \frac{\sqrt{2}G_F \rho}{M} (-\frac{1}{2}Y_n)\,, & \;\;\;\;\; & V_s=0 \nonumber \\ V_e = \frac{\sqrt{2}G_F \rho}{M} ( Y_e - \frac{1}{2}Y_n) & & \nonumber \end{eqnarray} Here $G_F$ is the Fermi constant, $\rho$ is the matter density in the Earth, $M$ is the nucleon mass, and $Y_e$ ($Y_n$) is the electron (neutron) fraction. We define $\Delta m^2=m_2^2-m_1^2$ in such a way that if $\Delta m^2>0 \: (\Delta m^2<0)$ the neutrino with largest muon-like component is heavier (lighter) than the one with largest X-like component. For anti-neutrinos the signs of the potentials $V_X$ should be reversed. We have used the approximate analytic expression for the matter density profile in the Earth obtained in ref. \cite{lisi}. In order to obtain the oscillation probabilities $P_{\alpha\beta}$ we have made a numerical integration of the evolution equation. The probabilities for neutrinos and anti-neutrinos are different because of the reversal of the sign of the matter potential. Notice that for the $\nu_\mu\to\nu_\tau$ case there is no matter effect, while for the $\nu_\mu\to\nu_s$ case we have two possibilities depending on the sign of $\Delta m^2$. For $\Delta m^2 > 0$ the matter effects enhance {\sl neutrino} oscillations while depressing {\sl antineutrino} oscillations, whereas for the other sign ($\Delta m^2<0$) the opposite holds. The same occurs also for $\nu_\mu\to\nu_e$. Although in the latter case one can also have two possible signs, we have chosen the most commonly assumed case where the muon neutrino is heavier than the electron neutrino, as it is theoretically more appealing. Notice also that, as seen later, the allowed region for this sign is larger than for the opposite one, giving the most conservative scenario when comparing with the present limits from CHOOZ.
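For illustration only, a minimal numerical integration of this evolution equation for the $\nu_\mu \to \nu_s$ channel can be sketched as follows (this is our own sketch, not the analysis code: the realistic Earth profile of ref. \cite{lisi} is replaced by a constant density, and the density, $Y_n$ and unit-conversion values are illustrative assumptions):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

EV_PER_KM = 5.07e9             # 1 eV in units of km^-1 (hbar = c = 1)
V_MU = -9.5e-14 * EV_PER_KM    # -(1/2) sqrt(2) G_F n_n for an assumed
                               # rho ~ 5 g/cm^3 and Y_n ~ 0.5
V_S = 0.0                      # the sterile neutrino feels no potential

def p_mumu(dm2_eV2, sin2_2th, E_GeV, L_km):
    """Survival probability P(nu_mu -> nu_mu) after a baseline L_km."""
    delta = 1.27 * dm2_eV2 / E_GeV            # Delta m^2/(4E) in km^-1
    s2 = np.sqrt(sin2_2th)
    c2 = np.sqrt(max(0.0, 1.0 - sin2_2th))
    H = np.array([[V_MU + delta * c2, -delta * s2],
                  [-delta * s2, V_S - delta * c2]])
    sol = solve_ivp(lambda L, psi: -1j * (H @ psi), (0.0, L_km),
                    np.array([1.0 + 0j, 0.0 + 0j]), rtol=1e-8, atol=1e-10)
    return np.abs(sol.y[0, -1]) ** 2

# an upward-going 1 GeV nu_mu crossing the Earth's diameter:
print(p_mumu(dm2_eV2=3.5e-3, sin2_2th=1.0, E_GeV=1.0, L_km=12742))
\end{verbatim}
For anti-neutrinos the sign of the potential is simply reversed, and the $\nu_\mu\to\nu_e$ case is obtained by replacing the potentials accordingly.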
\section{Atmospheric Neutrino Data Fits} Here I describe our fit method to determine the atmospheric oscillation parameters for the various possible oscillation channels, including matter effects for both the $\nu_\mu \to \nu_e$ and $\nu_\mu \to \nu_s$ channels. The steps required in order to generate the allowed regions of oscillation parameters were given in ref. \cite{ourwork}. I will comment only that when combining the results of the experiments we do not make use of the double ratio, $R_{\mu/e}/R^{MC}_{\mu/e}$, but instead we treat the $e$ and $\mu$-like data separately, taking into account carefully the correlation of errors. It is well-known that the double ratio is not well suited from a statistical point of view due to its non-Gaussian character. Thus, following ref. \cite{ourwork,fogli2} we define the $\chi^2$ as \begin{equation} \chi^2 \equiv \sum_{I,J} (N_I^{data}-N_I^{theory}) \cdot (\sigma_{data}^2 + \sigma_{theory}^2 )_{IJ}^{-1}\cdot (N_J^{data}-N_J^{theory}), \label{chi2} \end{equation} where $I$ and $J$ stand for any combination of the experimental data set and event-type considered, i.e., $I = (A, \alpha)$ and $J = (B, \beta)$, where $A,B$ stand for Fr\'ejus, Kamiokande sub-GeV, IMB,... and $\alpha, \beta = e,\mu$. In Eq.~(\ref{chi2}) $N_I^{theory}$ is the predicted number of events calculated from Eq.~(\ref{eventsnumber}) whereas $N_I^{data}$ is the number of observed events. In Eq.~(\ref{chi2}) $\sigma_{data}^2$ and $\sigma_{theory}^2$ are the error matrices containing the experimental and theoretical errors respectively. They can be written as \begin{equation} \sigma_{IJ}^2 \equiv \sigma_\alpha(A)\, \rho_{\alpha \beta} (A,B)\, \sigma_\beta(B), \end{equation} where $\rho_{\alpha \beta} (A,B)$ stands for the correlation between the $\alpha$-like events in the $A$-type experiment and $\beta$-like events in the $B$-type experiment, whereas $\sigma_\alpha(A)$ and $\sigma_\beta(B)$ are the errors for the number of $\alpha$ and $\beta$-like events in the $A$ and $B$ experiments, respectively. We compute $\rho_{\alpha \beta} (A,B)$ as in ref. \cite{fogli2}. A detailed discussion of the errors and correlations used in the analysis can be found in Ref.\cite{ourwork}. We have conservatively ascribed a 30\% uncertainty to the absolute neutrino flux, in order to generously account for the spread of predictions in different neutrino flux calculations. Next we minimize the $\chi^2$ function in Eq.~(\ref{chi2}) and determine the allowed region in the $\sin^22\theta-\Delta m^2$ plane, for a given confidence level, defined as, \begin{equation} \chi^2 \equiv \chi_{min}^2 + 4.61\ (9.21)\ \ \ \mbox{for}\ \ 90\ (99) \% \ \ \mbox{C.L.} \label{chimin} \end{equation}
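Schematically, this $\chi^2$ is a correlated quadratic form; the following sketch (our own illustration, with placeholder event numbers, errors and correlation, not the actual fit inputs) shows its evaluation:
\begin{verbatim}
import numpy as np

def chi2(N_data, N_theory, sigma, rho):
    """Correlated chi^2; sigma: combined (data + theory, in quadrature)
    1-sigma errors per bin; rho: correlation matrix between bins."""
    cov = np.outer(sigma, sigma) * rho          # sigma_I rho_IJ sigma_J
    diff = np.asarray(N_data, float) - np.asarray(N_theory, float)
    return diff @ np.linalg.solve(cov, diff)

# two bins (e-like, mu-like) of one experiment, 30% correlated (toy numbers):
rho = np.array([[1.0, 0.3], [0.3, 1.0]])
print(chi2([120, 180], [130, 210], sigma=[15, 20], rho=rho))
\end{verbatim}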
\begin{figure} \centerline{\protect\hbox{\epsfig{file=chimin_500.ps,width=0.7\textwidth,height=0.3\textheight}}} \fcaption{$\chi^2_{min}$ for fixed $\Delta m^2$ versus $\Delta m^2$ for each oscillation channel for Super-Kamiokande sub-GeV and multi-GeV data, and for the combined sample. Since the minimum is always obtained close to maximum mixing the curves for $\nu_\mu\rightarrow \nu_s$ for both signs of $\Delta m^2$ coincide.} \label{chimin1} \end{figure} In Fig.~\ref{chimin1} we plot the minimum $\chi^2$ (minimized with respect to $\sin^2 2\theta$) as a function of $\Delta m^2$. Notice that for large $\Delta m^2 \:\raisebox{-0.5ex}{$\stackrel{\textstyle>}{\sim}$}\: 0.1$ eV$^2$, the $\chi^2$ is nearly constant. This happens because in this limit the contribution of the matter potential in Eq.~(\ref{evolution1}) can be neglected with respect to the $\Delta m^2$ term, so that the matter effect disappears and, moreover, the oscillation effect is averaged out. In fact one can see that in this range we obtain nearly the same $\chi^2$ for the $\nu_\mu \to \nu_\tau$ and $\nu_\mu \to \nu_{s}$ cases. For very small $\Delta m^2 \:\raisebox{-0.5ex}{$\stackrel{\textstyle<}{\sim}$}\: 10^{-4}$ eV$^2$, the situation is opposite, namely the matter term dominates and we obtain a better fit for the $\nu_\mu \to \nu_\tau$ channel, as can be seen by comparing the $\nu_\mu \to \nu_{\tau}$ curve of the Super-Kamiokande sub-GeV data (dotted curve in the left panel of Fig.~\ref{chimin1}) with the $\nu_\mu \to \nu_s$ and $\nu_\mu \to \nu_e$ curves in the left panel of Fig.~\ref{chimin1}. For extremely small $\Delta m^2 \:\raisebox{-0.5ex}{$\stackrel{\textstyle<}{\sim}$}\: 10^{-4}$ eV$^2$, the value of $\chi^2$ is quite large and approaches a constant, independent of the oscillation channel, as in the no-oscillation case. Since the average energy of the Super-Kamiokande multi-GeV data is higher than that of the sub-GeV one, we find that the limiting $\Delta m^2$ value below which $\chi^2$ approaches a constant is higher, as seen in the middle panel. Finally, the right panel in Fig.~\ref{chimin1} is obtained by combining sub and multi-GeV data. \begin{table}[h] \tcaption{Minimum value of $\chi^2$ and the best fit point for each oscillation channel and for different data sets. For $\nu_\mu\rightarrow \nu_s$ the minimum $\chi^2$ is practically independent of the sign of $\Delta m^2$ as the minimum is located at maximum mixing angle.} \label{tab:data} \begin{center} \begin{tabular}{|l|l|l|l|l|l|} \hline Experiment & & $\nu_{\mu} \to \nu_\tau$ & $\nu_{\mu} \to \nu_s$ & $\nu_{\mu} \to \nu_e$ \\\hline Super-Kam & $\chi^2_{min}$ & $ 7.1 $ & $ 8.2 $ & $ 7.3$ \\ sub-GeV & $ \Delta m^2 $ ( $10^{-3} $eV$^2$ ) & $ 0.11 $ & $1.9$ & $1.2$ \\ & $\sin^2 2\theta$ & $1.0$ & $1.0$ & $0.97$ \\\hline Super-Kam & $\chi^2_{min}$ & $ 6.3$ & $ 7.9 $ & $ 10.8$ \\ multi-GeV & $ \Delta m^2 $ ( $10^{-3} $eV$^2$ ) & $1.5$ & $3.5$ & $24.7$ \\ & $\sin^2 2\theta$ & $0.97$ & $1.0$ & $0.72$ \\\hline Super-Kam & $\chi^2_{min}$ &$ 14.3$ & $ 16.8 $ & $21.8$ \\ Combined & $ \Delta m^2 $ ( $10^{-3} $eV$^2$ ) & $1.6$ & $2.6$ & $1.5$ \\ & $\sin^2 2\theta$ & $1.0$ & $1.0$ & $0.97$ \\\hline All experiments & $\chi^2_{min}$ &$ 47.2$ & $ 48.6 $ & $48.6$ \\ Combined & $ \Delta m^2 $ ( $10^{-3} $eV$^2$ ) & $2.9$ & $3.5$ & $3.0$ \\ & $\sin^2 2\theta$ & $1.0$ & $1.0$ & $0.99$ \\\hline \end{tabular} \end{center} \end{table} A last point worth commenting on is that for the $\nu_\mu \to \nu_{\tau}$ case in the sub-GeV sample there are two almost degenerate values of $\Delta m^2$ for which $\chi^2$ attains a minimum, while for the multi-GeV case there is just one minimum at $1.5 \times 10^{-3} $eV$^2$. Finally, in the third panel in Fig.~\ref{chimin1} we can see that by combining the Super-Kamiokande sub-GeV and multi-GeV data we have a unique minimum at $1.6 \times 10^{-3} $eV$^2$. \section{Results for the Oscillation Parameters} The results of our $\chi^2$ fit of the Super-Kamiokande sub-GeV and multi-GeV atmospheric neutrino data are given in Fig.~\ref{mutausk1}. In this figure we give the allowed region of oscillation parameters at 90 and 99 \% CL. One can notice that the matter effects are similar for the upper right and lower right panels because matter effects enhance the oscillations for {\sl neutrinos} in both cases. In contrast, in the case of $\nu_\mu \to \nu_s$ with $\Delta m^2<0$ the enhancement occurs only for {\sl anti-neutrinos}, while in this case the effect of matter suppresses the conversion of $\nu_\mu$'s. Since the yield of atmospheric neutrinos is bigger than that of anti-neutrinos, clearly the matter effect suppresses the overall conversion probability.
Therefore in this case we need a larger value of the vacuum mixing angle, as can be seen by comparing the left and right lower panels in Fig.~\ref{mutausk1}. \begin{figure} \centerline{\protect\hbox{\epsfig{file=sksub500.ps,width=0.6\textwidth,height=0.4\textheight}}} \fcaption{ Allowed regions of oscillation parameters for Super-Kamiokande for the different oscillation channels as labeled in the figure. In each panel, we show the allowed regions for the sub-GeV data at 90 (thick solid line) and 99 \% CL (thin solid line) and the multi-GeV data at 90 (dashed line) and 99 \% CL (dot-dashed line).} \label{mutausk1} \end{figure} Notice that in all channels where matter effects play a role the range of acceptable $\Delta m^2$ is shifted towards larger values, when compared with the $\nu_\mu \to \nu_\tau$ case. This follows from looking at the relation between mixing {\sl in vacuo} and in matter. In fact, away from the resonance region, independently of the sign of the matter potential, there is a suppression of the mixing inside the Earth. As a result, there is a lower cut in the allowed $\Delta m^2$ value, and it lies higher than what is obtained in the data fit for the $\nu_\mu \to \nu_\tau$ channel. It is also interesting to analyse the effect of combining the Super-Kamiokande sub-GeV and multi-GeV atmospheric neutrino data. Comparing the results obtained with 535 days given in the table above with those obtained with 325 days of Super-Kamiokande \cite{ourwork}, we see that the allowed region is relatively stable with respect to the increased statistics. However, in contrast to the case for 325.8 days, now the $\nu_{\mu} \to \nu_\tau$ channel is as good as the $\nu_{\mu} \to \nu_e$ one when only the sub-GeV sample is included, with a clear Super-Kamiokande preference for the $\nu_{\mu} \to \nu_\tau$ channel. As before, the combined sub-GeV and multi-GeV data prefer the $\nu_{\mu} \to \nu_X$ solution, where $X=\tau$ or {\sl sterile}, over the $\nu_{\mu} \to \nu_e$ one. To conclude this section I now turn to the predicted zenith angle distributions for the various oscillation channels. As an example we take the case of the Super-Kamiokande experiment and compare separately the sub-GeV and multi-GeV data with what is predicted in the case of no-oscillation (thick solid histogram) and in all oscillation channels for the corresponding best fit points obtained for the {\sl combined} sub and multi-GeV data analysis performed above (all other histograms). This is shown in Fig.~\ref{ang_mu}. It is worthwhile to see why the $\nu_\mu \to \nu_e$ channel is bad for the Super-Kamiokande multi-GeV data by looking at the upper right panel in Fig.~\ref{ang_mu}. Clearly the zenith distribution predicted in the no-oscillation case is symmetrical in the zenith angle, very much in disagreement with the data. In the presence of $\nu_\mu \to \nu_e$ oscillations the asymmetry in the distribution is much smaller than in the $\nu_\mu \to \nu_\tau$ or $\nu_\mu \to \nu_s$ channels, as seen from the figure. Also, since the best fit point for $\nu_\mu \to \nu_s$ occurs at $\sin^2 2\theta=1$, the corresponding distributions are independent of the sign of $\Delta m^2$.
\begin{figure} \centerline{\protect\hbox{\epsfig{file=ang_mu.ps,width=0.5\textwidth,height=0.4\textheight}}} \caption{Angular distribution for Super-Kamiokande electron-like and muon-like sub-GeV and multi-GeV events together with our prediction in the absence of oscillation (dot-dashed) as well as the prediction for the best fit point for $\nu_\mu \to \nu_s$ (solid line), $\nu_\mu \to \nu_e$ (dashed line) and $\nu_\mu \to \nu_\tau$ (dotted line) channels. The error displayed in the experimental points is only statistical.} \label{ang_mu} \end{figure} \section{Atmospheric versus Accelerator and Reactor Experiments} I now turn to the comparison of the information obtained from the analysis of the atmospheric neutrino data presented above with the results from reactor and accelerator experiments as well as the sensitivities of future experiments. For this purpose I present the results obtained by combining all the experimental atmospheric neutrino data from various experiments \cite{atmexp}. In Fig.~\ref{mutausk4} we show the combined information obtained from our analysis of all atmospheric neutrino data involving vertex-contained events and compare it with the constraints from reactor experiments such as Krasnoyarsk, Bugey, and CHOOZ \cite{reactors}, and accelerator experiments such as CDHSW, CHORUS, and NOMAD \cite{accelerators}. We also include in the same figure the sensitivities that should be attained at the future long-baseline experiments now under discussion. The first important point is that from the upper-right panel of Fig.~\ref{mutausk4} one sees that the CHOOZ reactor data \cite{reactors} already exclude completely the allowed region for the $\nu_{\mu} \to \nu_e$ channel when all experiments are combined at 90\% CL. The situation is different if only the combined sub-GeV and multi-GeV Super-Kamiokande data are included. In such a case the region obtained is not completely excluded by CHOOZ at 90\% CL. Present accelerator experiments are not very sensitive to low $\Delta m^2$ due to their short baseline. As a result, for all channels other than $\nu_{\mu} \to \nu_e$ the present limits on neutrino oscillation parameters from CDHSW, CHORUS and NOMAD \cite{accelerators} are fully consistent with the region indicated by the atmospheric neutrino analysis. Future long baseline (LBL) experiments have been advocated as a way to independently check the ANA. Using different tests, such long-baseline experiments, now planned at KEK (K2K) \cite{chiaki}, Fermilab (MINOS) \cite{minos} and CERN (ICARUS \cite{icarus}, NOE \cite{noe} and OPERA \cite{opera}), would test the pattern of neutrino oscillations well beyond the reach of present experiments. These tests are the following: $\tau$ appearance searches, the $NC/CC$ ratio, which measures $\frac{(NC/CC)_{near}}{(NC/CC)_{far}}$, and the muon disappearance or $CC_{near}/CC_{far}$ test. \begin{figure} \centerline{\protect\hbox{\epsfig{file=all500.ps,width=0.8\textwidth,height=0.8\textheight}}} \fcaption{ Allowed oscillation parameters for all experiments combined at 90 (thick solid line) and 99 \% CL (thin solid line) for each oscillation channel as labeled in the figure. We also display the expected sensitivity of the present accelerator and reactor experiments as well as of future long-baseline experiments in each channel. The best fit point is marked with a star.} \label{mutausk4} \end{figure} The second test can potentially discriminate between the active and sterile channels, i.e. $\nu_\mu \to \nu_\tau$ and $\nu_\mu \to \nu_s$.
However, it cannot discriminate between $\nu_\mu \to \nu_s$ and the no-oscillation hypothesis. In contrast, the last test can probe the oscillation hypothesis itself. Notice that the sensitivity curves corresponding to the disappearance test labelled as {\sl KEK-SK Disappearance} at the lower panels of Fig.~\ref{mutausk4} are the same for the $\nu_\mu \to \nu_\tau$ and the sterile channel, since the average energy of KEK-SK is too low to produce a tau-lepton in the far detector. In contrast, the MINOS experiment has a higher average initial neutrino energy and it can see the taus. Although in this case the exclusion curves corresponding to the disappearance test are in principle different for the different oscillation channels, in practice the sensitivity plot is dominated by the systematic error. As a result, discriminating between $\nu_\mu \to \nu_{\tau}$ and $\nu_\mu \to \nu_s$ would be unlikely with the disappearance test. In summary, we find that the regions of oscillation parameters obtained from the analysis of the atmospheric neutrino data on vertex-contained events cannot be fully tested by the LBL experiments when the Super-Kamiokande data are included in the fit for the $\nu_\mu \to \nu_\tau$ channel, as can be seen clearly from the upper-left panel of Fig.~\ref{mutausk4}. One might expect that, due to the upward shift of the $\Delta m^2$ indicated by the fit for the sterile case, it would be possible to completely cover the corresponding region of oscillation parameters. This is the case for the MINOS disappearance test. But in general, since only the disappearance test can discriminate against the no-oscillation hypothesis, and this test is intrinsically weaker due to systematics, we find that also for the sterile case most of the LBL experiments cannot completely probe the region of oscillation parameters indicated by the atmospheric neutrino analysis. This is so irrespective of the sign of $\Delta m^2$: the lower-left panel in Fig.~\ref{mutausk4} shows the $\nu_\mu \to \nu_s$ channel with $\Delta m^2<0$, while the $\nu_\mu \to \nu_s$ case with $\Delta m^2>0$ is shown in the lower-right panel. I am very grateful to the Instituto de F\'{\i}sica Te\'orica of the Universidade Estadual Paulista, where these proceedings were written, for its kind hospitality during my visit. This work was supported by DGICYT under grant PB95-1077, by CICYT under grant AEN96-1718, and by Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo.
\section{Introduction}\label{sec:intro} Satellite image analysis is a crucial task for gathering information world-wide in the era of multinational cooperation. Since most, if not all, satellites have little computational resources, satellite image analysis is typically done on the ground where powerful computing stations exist. Thus, analyzing a satellite image starts with image transmission from the satellite to the ground station. The transmission latency is quite large in the satellite environment, since satellites are very far away from the ground and therefore have very limited communication bandwidth. Therefore, there is a need for an image compression technique in order to mitigate the transmission overhead. However, compressing an image often causes information loss, resulting in analysis performance degradation, which is unwanted behavior. One of the most popular image compression techniques is JPEG \cite{jpeg}. JPEG is an image compression standard that has been used in various domains, enabling lightweight image compression with acceptable visual image quality. However, since JPEG compression basically works by filtering out the high-frequency color components in the image \cite{jpeg}, color distortion is inevitable after the pipeline of encoding and decoding. This color distortion causes performance degradation in image analysis tools such as deep neural networks. In this paper we propose an adaptive image compression technique that can reduce the transmission latency while preserving the accuracy of the analysis tools. This can be done by adaptively choosing the region of interest (RoI) that the analysis tool values the most, and compressing those regions with high quality and the others with low quality. The details of choosing the RoI are given in \Cref{sec:method}. We show that with our proposed algorithm we can significantly reduce the image file size while successfully preserving the analysis accuracy on satellite images. \section{Related Works} \subsection{JPEG Image Compression for Satellite Imageries} As mentioned in \Cref{sec:intro}, JPEG image compression is one of the most popular standards in the domain, and there are numerous works that applied JPEG compression to satellite images \cite{jpegeval}. The work by Tada et al. \cite{jpegeval} evaluated the effect of JPEG compression with a power spectrum comparison showing the degree of image distortion after the compression. What is not dealt with in Tada's work is that the image compression may cause the deep neural network to malfunction even though the compressed image looks fine to human eyes. Studies like \cite{jpegimpact1, jpegimpact2} report that JPEG compression does affect the performance of deep neural networks. We also conducted a preliminary experiment that shows the impact of JPEG compression on a Faster R-CNN object detection network for satellite images in \Cref{sec:experiment}. Thus, it is clear that compressing a satellite image should be dealt with carefully in order to get the maximum analysis performance on the ground. \subsection{Image Compression with Accuracy Consideration} There are some works that deal with image compression while considering the outcome of the analysis tools on the receiver side \cite{dre_pre, dre_pre2,dre_pre3,dre_pre4,edge_assist}. These works target different domains, but the basic idea is similar.
Work \cite{edge_assist} uses image compression for efficient image transmission between a mobile client and the edge server when offloading the compute-intensive image object detection task from the client to the server. Here, they propose a dynamic region of interest for adaptive image compression, where the object detection result for the previous frame is used for determining the important area of the current frame. The selected RoI is then compressed with higher quality (less compression), and the rest of the image area is compressed with lower quality (more compression). By doing so, \cite{edge_assist} achieves real-time image offloading in an edge-assisted augmented reality (AR) service. Work \cite{dre_pre} and others \cite{ dre_pre2,dre_pre3,dre_pre4} are more related to our work in that they directly aim at the same domain we are dealing with: satellite image compression without performance degradation. Paper \cite{dre_pre} proposes a fuzzy c-means image segmentation and adaptive image compression according to the segmentation result, which in turn compresses the background more and important objects less. However, works like \cite{dre_pre, dre_pre2,dre_pre3,dre_pre4} simply focus on investigating the image itself and do not directly consider the structure or the characteristics of the analysis tools on the ground. This inconsistency of image interpretation between ground and satellite would result in performance degradation. On the other hand, the work by \cite{edge_assist} uses the previous result from the analysis tool, so it directly considers the analysis tool when compressing an image. Nonetheless, the algorithm of \cite{edge_assist} is very hard to apply to satellite imageries since satellite images typically capture different places, and even if time-series data can be produced, the huge transmission overhead makes data from the last time step less valuable. \begin{figure}[!bp] \centering \begin{subfigure}{0.45\linewidth}{ \centering \includegraphics[width=\linewidth]{images/P2059.png} \caption{Input image} } \end{subfigure} \begin{subfigure}{0.45\linewidth}{ \centering \includegraphics[width=\linewidth]{images/result_0.1_h_P2059.png} \caption{Epsilon lrp result} } \end{subfigure} \caption{Example of applying epsilon lrp model explanation to the satellite image input and object dection model} \label{fig:elrp} \end{figure} \section{Methodology}\label{sec:method} In this paper, we propose a novel image compression scheme, reasoning based dynamic image compression (RDIC), which makes use of layer-wise relevance propagation \cite{elrp}, one of the explainable AI techniques. Layer-wise relevance propagation works by backpropagating the neural network result and its relevance score, as in Eq.~\ref{eqn:elrp1}, and points out the salient part of the input image from which the model got the most valuable information for its result. \begin{equation}\label{eqn:elrp1} R^{(l)}_i = \sum_{j} {{z_{ij}}\over{\sum_{i'} z_{i'j} + \epsilon sign(\sum_{i'} z_{i'j})}} R^{(l+1)}_j \end{equation} An example of the result of applying epsilon LRP (layer-wise relevance propagation with the epsilon rule) to the object detection model SCRDet \cite{r2cnn} is shown in Fig.~\ref{fig:elrp}. As can be seen in the figure, salient parts containing target objects (ships, harbors, cars, etc.) are highlighted. Instead of highlighting whole foreground objects, by using epsilon LRP we can highlight only the image regions that are needed by the model for image analysis.
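To make Eq.~\ref{eqn:elrp1} concrete, the following Python sketch (ours; it handles a single dense layer with $z_{ij} = a_i w_{ij}$, not the full SCRDet pipeline of the paper) implements the epsilon rule:
\begin{verbatim}
import numpy as np

def lrp_epsilon(a, W, R_out, eps=0.1):
    """Redistribute the output relevance R_out onto the layer inputs.
    a: input activations (n,); W: weights (n, m); R_out: relevance (m,)."""
    z = a[:, None] * W                     # z_ij = a_i * w_ij
    denom = z.sum(axis=0)                  # sum_i' z_i'j
    denom = denom + eps * np.sign(denom)   # epsilon stabilizer
    return (z / denom) @ R_out             # R_i of Eq. (1)

# toy check: relevance is approximately conserved across the layer
rng = np.random.default_rng(0)
a, W = rng.random(4), rng.normal(size=(4, 3))
R_in = lrp_epsilon(a, W, R_out=np.array([1.0, 0.5, 0.2]))
print(R_in, R_in.sum())
\end{verbatim}
Applying this rule layer by layer from the detection head back to the input yields the pixel-wise relevance map used below.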
\begin{figure}[!tp] \centering \begin{subfigure}{0.45\linewidth}{ \centering \includegraphics[width=\linewidth]{images/P2059_hq.jpg} \caption{Salient region} } \end{subfigure} \begin{subfigure}{0.45\linewidth}{ \centering \includegraphics[width=\linewidth]{images/P2059_lq.jpg} \caption{Background} } \end{subfigure} \caption{Example of calculated final RoI mask} \label{fig:masks} \end{figure} \begin{figure*}[!htbp] \centering \includegraphics[width=\linewidth]{images/result.png} \caption{Experimental results comparing the mAP score for each test case: Original Dataset (blue), JPEG compressed dataset (orange), RDIC compressed dataset (yellow). Class number (1-16) stands for classes ['roundabout', 'tennis-court', 'storage-tank', 'soccer-ball-field', 'small-vehicle', 'ship', 'plane', 'large-vehicle', 'helicopter', 'harbor', 'ground-track-field', 'bridge', 'basketball-court', 'baseball-diamond', 'mAP']} \label{fig:results} \end{figure*} From the result of the epsilon LRP relevance propagation, we now calculate the region of interest (RoI), which will act as a mask determining the compression quality. The outcome of the epsilon LRP is a map of pixel-wise relevance scores, unbounded in both the negative and positive directions. This unboundedness makes the calculation of the importance mask difficult with the raw outcome of the eLRP. Thus, we first take the absolute value of the outcome and then normalize it with its mean value. From the normalized outcome values, we then create a mask $M$ which marks the pixels where the absolute relevance value is at least its mean, as shown in Eq.~\ref{eqn:mask}. However, as we can see from Fig. \ref{fig:elrp}-(b), the outcome of the eLRP is basically very noisy, and therefore the resulting mask is also very noisy. The noisiness of the mask can significantly degrade the performance of the target object detection model, so we implemented a dilation operation, one of the conventional computer vision techniques, for acquiring smooth RoI masks. An example of the final RoI mask can be seen in Fig. \ref{fig:masks}. As can be seen in the figure, salient areas containing the target objects are successfully highlighted. Furthermore, we can see that the background region also includes salient objects, but their categories do not belong to the target categories. \begin{equation}\label{eqn:mask} M(i,j) = \begin{cases} 1 &\text{$abs(elrp(I)(i,j)) >= mean(abs(elrp(I)))$}\\ 0 &\text{else, I: Input image} \end{cases} \end{equation} After the calculation of the RoI for determining the compression criteria, we conduct the dynamic image compression, where we compress the RoI region with high quality and the background region with low quality. Here, the quality of the compression follows the quality definition of the computer vision library OpenCV \cite{opencv}.
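A compact sketch of this masking-and-compression step is given below (our illustration under stated assumptions, not the exact released implementation: the kernel size and the way the two JPEG streams are blended back into one array are illustrative choices):
\begin{verbatim}
import cv2
import numpy as np

def rdic_compress(image, relevance, q_roi=100, q_bg=50, ksize=15):
    """Blend a high-quality JPEG inside the RoI with a low-quality one
    outside; 'relevance' is the raw epsilon-LRP map for this image."""
    r = np.abs(relevance)
    mask = (r >= r.mean()).astype(np.uint8)      # threshold of Eq. (2)
    mask = cv2.dilate(mask, np.ones((ksize, ksize), np.uint8))

    def jpeg(img, q):  # encode/decode round trip at OpenCV quality q
        ok, buf = cv2.imencode('.jpg', img,
                               [int(cv2.IMWRITE_JPEG_QUALITY), q])
        return cv2.imdecode(buf, cv2.IMREAD_COLOR)

    hi, lo = jpeg(image, q_roi), jpeg(image, q_bg)
    return np.where(mask[..., None].astype(bool), hi, lo)
\end{verbatim}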
\section{Experimental Results}\label{sec:experiment} For the evaluation of our proposed image compression scheme, we conducted an experiment comparing the mean average precision of the object detection model on datasets compressed with different methods. For the dataset we used the DOTA dataset \cite{dota}, an open dataset consisting of numerous aerial images similar to satellite imageries. For the target object detection model we chose the Faster R-CNN model following the paper \cite{r2cnn}. We first evaluated the Faster R-CNN model on the original dataset, and then we compressed the original dataset with two different methods: the original JPEG compression, and the proposed reasoning based dynamic image compression (RDIC). Here, the original JPEG compression is done with a quality of 100, and RDIC uses two different qualities, 100 and 50, for the RoI and BG regions respectively. We then compared both the mAP score and the total file size of the dataset, which can be seen in Fig. \ref{fig:results} and Table \ref{tab:results}. \begin{table}[ht] \begin{center} \begin{tabular}{|c|c|c|c|} \hline & Original & JPEG& RDIC\\ \hline\hline File Size(MB) & 3324 & 1671 & 942\\ \hline mAP(\%) & 57.75 & 57.75 & 56.54\\ \hline \end{tabular} \end{center} \caption{File size and mAP comparison of the original, JPEG compressed, and RDIC compressed datasets.} \label{tab:results} \end{table} Fig. \ref{fig:results} shows the average precision of each class. As we can see from the figure, the performance of the object detection model is mostly preserved after the compression. An interesting observation is that in the case of classes like soccer-ball-field and large-vehicle, the JPEG and RDIC compressed versions resulted in better precision scores. Except for this unexpected outcome, we can see that the average precision gets lower when compression is applied to the dataset. However, if we look into the file size analysis in Table \ref{tab:results}, we can see that compared to the original dataset, JPEG compression provides identical performance while the size of the dataset is reduced to 50.27 percent of the original. Our proposed RDIC loses about 1.21 percentage points of accuracy while reducing the file size to 942 megabytes, which is 56.4 percent of the JPEG compressed dataset and 27.9 percent of the original dataset. This significant reduction of the file size allows the satellite image transmission to be about four times faster than usual with only a 1.2 percentage point loss of the detection model accuracy. \section{Conclusion}\label{conclusion} Satellite imageries are big in size, which causes a huge transmission latency hindering the fast and easy analysis of the images. In this paper we proposed a novel image compression scheme based on model reasoning that allows us to compress a satellite image with minimum accuracy loss and a high compression rate. Our scheme starts from analyzing the target model by relevance propagation for RoI searching. According to the RoI, we then conduct a dynamic image compression which compresses the important part of an image with high quality and the rest with a high compression rate. The evaluation results show that our scheme successfully captures the important region in the image according to the model we use. Since the epsilon LRP method and the other techniques we used are not bound to a single object detection model, our scheme is also easy to apply to various other applications and neural network models. \bibliographystyle{plain}
\section{Introduction} In classical physics, identical particles are seen as mutually distinguishable; that is, any permutation bringing about an interchange of particles in two different one-particle states is recognized to lead to a new, physically distinct state. Many times, we speak about Boltzmannian particles obeying the Maxwell-Boltzmann (MB) statistics. In quantum theory, things are completely different since identical particles are not distinguishable. For such indistinguishable particles, a factorization of the total wave function is not appropriate, because an interchange of any two particles leads to a wave function which has to be insensitive to such a permutation operation. Now, depending on the nature of the identical particles, fermions and bosons, the total wave function is anti-symmetric and symmetric, respectively. As is well known, fermions follow the Fermi-Dirac (FD) statistics and bosons the Bose-Einstein (BE) statistics \cite{Baym-book-1969}. Furthermore, due to this symmetry, spatial correlations exist even if the particles are supposedly noninteracting. This type of correlation is manifested through the so-called statistical inter-particle potential, which depends on the temperature through the thermal de Broglie wavelength \cite{Pathria-book-1988}. For fermions and bosons, this potential is repulsive and attractive, respectively. There is also an intimate connection between these statistics and the intrinsic spin of those particles, but we are not going to consider it here. Interferometry and diffraction of matter waves is a very active field of research, being a very valuable tool in examining the validity of quantum mechanics. Observed optical-like effects are determined by the interaction between the corresponding particles and measuring devices. One of the paradigmatic examples is the so-called Talbot effect, that is, a near-field interference effect \cite{Salva-JCP-2007} due to the presence of two or more slits or, in general, any grating. In analogy to the so-called carpets reported in optics, quantum carpets \cite{Berry-PW-2001} are observed in the Fresnel region when gratings are illuminated by particle beams. These carpets are distorted due to the specific particle-grating interaction (what has been termed the Talbot-Beeby effect \cite{Salva-JCP-2007}). When small clusters or big molecules are used, diffraction patterns in the far field or Fraunhofer region are governed mainly by van der Waals interactions, and a strong reduction of the fringe visibility is observed \cite{Arndt-NatPhys-2007}. A way to reduce such distortions as much as possible is through the well-known effect of quantum reflection \cite{Wieland-Sci-2011, Salva-JPCL-2017, Salva-PRA-2018}. On the other hand, the effect of two-particle interference in this kind of studies has been extensively analyzed after the pioneering work by Hong et al. \cite{Hong-PRL-1987}; in particular, for two photons. Interference of identical massive particles has been analyzed quite recently in several papers \cite{Bose-PRL-2002, Lim-NJP-2005, Sancho-EPJD-2014, MaGr-EPJD-2014}. It has been argued that when one of the one-particle states has a zero, bunching and anti-bunching can occur for fermions and bosons, respectively \cite{MaGr-EPJD-2014}. This unexpected effect can be seen by measuring the detection probability of a couple of identical particles by two adjacent detectors: bosons (fermions) interfere locally destructively (constructively) and therefore anti-bunch (bunch).
Furthermore, it has been shown that in the case of measurements by a single extended detector, there is no difference between the detection probabilities for different statistics, meaning again that bosons do not bunch and fermions do not anti-bunch \cite{MaGr-EPJD-2014}. Concerning the role of quantum statistics in multi-particle decay dynamics, by studying the release of a pair of particles from a quantum trap, it has been concluded that the naive picture, in which identical bosons attract one another while identical fermions repel each other, does not work in predicting even the qualitative behavior of the pair \cite{MaGr-AP-2015}. On the other hand, very few studies have been devoted to analyzing the effect of friction and temperature on the resulting interference and/or diffraction patterns. The main purpose of this kind of studies is to see how the decoherence process affects this open dynamics and, in this context, how robust the symmetry properties of the wave function for indistinguishable particles are. Within the Caldirola-Kanai (CK) approach, which takes into account friction without including environmental fluctuations, it has been shown in the two-slit problem for distinguishable particles how the friction leads to localization, by using Bohmian trajectories \cite{Salva-AP-2014}. This analysis is carried out in an analytical way for Gaussian slits because this approach keeps the linearity of the problem. Within the same context, entanglement indicators corresponding to pure states have been analyzed \cite{ZaPl-En-2018} by using the nonlinear Schr\"odinger-Langevin wave equation \cite{Kostin, MoMi-AP-2018, MoMi-JPC-2018, MoMi-EPJP-2019, MoMi-Arxiv-2019}. An alternative way of dealing with the same issue while keeping the linearity is by means of the Caldeira-Leggett (CL) approach, where a master equation for the reduced density in the coordinate representation and at high temperatures is used \cite{Caldeira-PA-1983,Caldeira-book-2014}. Two simple but very illustrative examples, taking one-particle states with considerable overlap, are considered: diffraction by a single Gaussian slit and by two Gaussian slits, analyzing the mean square separation between distinguishable particles, bosons and fermions, as well as the simultaneous detection probability or diffraction patterns. In the CK approach, the mean square separation drastically reduces with friction, ultimately reaching a constant value. On the contrary, in the CL approach, temperature has an opposite effect to friction and this quantity increases. Furthermore, there is a time interval for which the joint detection probability measured by an extended detector is greater for fermions than for bosons. As has already been reported for non-dissipative systems, fermion bunching and boson anti-bunching are also observed here for open two-particle systems. On the contrary, in the two-Gaussian-slit problem within the CK approach, the interference pattern behaves similarly for bosons and distinguishable particles, whereas fermions display a different behavior, reflecting the symmetry of the corresponding wave functions. Recently, both approaches have been used for studying dissipative quantum backflow for distinguishable particles \cite{MoMi-arXiv-2019}. In this work, our aim is to explore dissipative and thermal effects in the diffraction of indistinguishable particles (bosons and fermions) by one and two Gaussian slits.
Due to the above-mentioned symmetry properties, three important quantities are going to be evaluated: the mean square separation (MSS) between particles, the single-particle probability density, and the simultaneous detection probability, as analyzed in Refs. \cite{Sancho-EPJD-2014} and \cite{MaGr-EPJD-2014}. The decoherence process is ultimately studied taking into account the mutual roles of friction and the overlapping integral of the one-particle states at a given time. For this goal, this manuscript is organized as follows. In Section \ref{sec: tp_CK}, the two-particle CK equation is proposed from the corresponding one-particle equation. Section \ref{sec: tp_CL} is devoted to the evolution of the {\it pure} two-identical-particle state under the corresponding CL master equation. Section \ref{sec: D_sl} deals with the diffraction of a two-identical-particle state by a single Gaussian slit in both the CK and CL approaches. In Section \ref{sec: tp-tl}, the two-particle two-slit experiment will be studied within the CK approach. Analytical and numerical results, together with the corresponding discussion, are presented within each of these sections. In the last section, some concluding remarks are briefly listed. Finally, in an appendix, dissipative identical-particle systems are shown not to be properly described in the framework of the Schr\"{o}dinger-Langevin nonlinear wave equation because the symmetry properties are no longer kept. \section{The Caldirola-Kanai equation for two non-interacting particles} \label{sec: tp_CK} The dissipative dynamics of two non-interacting particles with the same mass $m$ in the CK framework can be written as \begin{eqnarray} \label{eq: 2par_CK} i \hbar \frac{ \partial }{ \partial t} \Psi(x_1, x_2; t) &=& \bigg[ - e^{-2 \gamma t} \frac{ \hbar ^2}{2m} \left( \frac{ \partial ^2}{ \partial x_1^2} + \frac{ \partial ^2}{ \partial x_2^2}\right) + e^{2 \gamma t} ( V(x_1) + V(x_2) ) \bigg] \Psi(x_1, x_2; t) \end{eqnarray} where for identical particles the wave function $ \Psi(x_1, x_2; t) $ must have a given symmetry; it must be symmetric (anti-symmetric) under the exchange of identical bosons (fermions), obeying respectively the BE and FD statistics \cite{Baym-book-1969}. Here $ \gamma = \eta/ 2m $ is the relaxation constant \cite{Caldeira-PA-1983}, defined in terms of the damping constant $ \eta $. If the initial wave function is expressed as \begin{eqnarray} \label{eq: 2p-psi0} \Psi_{\pm}(x_1, x_2, 0) &=& N_{\pm} ( \psi_0(x_1) \phi_0(x_2) \pm \phi_0(x_1) \psi_0(x_2) ) \end{eqnarray} where $\psi$ and $\phi$ are one-particle states and the sub-indices $+$ and $-$ stand respectively for bosons and fermions, then due to the linearity of the wave equation (\ref{eq: 2par_CK}), the symmetric and anti-symmetric solutions can be written as \begin{eqnarray} \label{eq: 2p-psit} \Psi_{\pm}(x_1, x_2, t) &=& N_{\pm} ( \psi(x_1, t) \phi(x_2, t) \pm \phi(x_1, t) \psi(x_2, t) ) \end{eqnarray} at later times, where $\psi(x, t)$ and $\phi(x, t)$ fulfill the corresponding one-particle CK wave equation. Apart from a phase factor, the normalization constants $ N_{\pm} $ are given by \begin{eqnarray} \label{eq: normalization} N_{\pm} &=& \frac{1}{ \sqrt{ 2(1 \pm | \langle \psi_0 | \phi_0 \rangle |^2) } } \end{eqnarray} where we have assumed that the one-particle wave functions $\psi$ and $\phi$ are normalized to unity, and used the fact that the interference term $ \langle \psi(t) | \phi(t) \rangle $ is independent of time.
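For concreteness, the normalization constants (\ref{eq: normalization}) are easily evaluated numerically for, say, two Gaussian one-particle states (our illustrative sketch; the grid, widths and centers are arbitrary choices):
\begin{verbatim}
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

def gaussian(x0, sigma=1.0):
    g = np.exp(-(x - x0)**2 / (4 * sigma**2))
    return g / np.sqrt(np.sum(np.abs(g)**2) * dx)   # <g|g> = 1 on the grid

psi0, phi0 = gaussian(-1.0), gaussian(+1.0)
overlap = np.sum(np.conj(psi0) * phi0) * dx          # <psi_0|phi_0>
N_plus = 1 / np.sqrt(2 * (1 + abs(overlap)**2))      # bosons
N_minus = 1 / np.sqrt(2 * (1 - abs(overlap)**2))     # fermions
print(abs(overlap)**2, N_plus, N_minus)
\end{verbatim}
Since the overlap, and hence $N_\pm$, stays fixed during the CK evolution, it suffices to compute it once from the initial states, as shown next.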
This can be easily deduced from the one-particle CK wave equation and the square-integrability of the one-particle wave functions: the potential contributions cancel identically, and the remaining kinetic term vanishes upon integrating by parts twice, \begin{eqnarray} \label{eq: interference} \frac{d}{d t} \langle \psi(t) | \phi(t) \rangle &=& \int_{-\infty}^{\infty} dx ~ \frac{ \partial }{ \partial t} [ \psi^*(x, t) \phi(x, t) ] = e^{-2 \gamma t} \frac{i \hbar }{2m} \int_{-\infty}^{\infty} dx \left\{ - \frac{ \partial ^2 \psi^*}{ \partial x^2} \phi + \psi^* \frac{ \partial ^2 \phi}{ \partial x^2} \right\} = 0 . \end{eqnarray} In the following we are going to provide expressions for two important quantities in this context, namely, the mean square separation (MSS) between particles and the simultaneous detection probability (i) by a single non-ideal detector located at the origin and (ii) by two detectors located symmetrically around the origin. The widths of all detectors are the same, $2d$. This means we provide an expression for the probability of finding both particles simultaneously in the range $[-d, d]$ around the origin in the first case, and the same quantity for finding the particles in two distinct regions of the same width $2d$ located symmetrically around the origin in the second case. Finally, by tracing out over the coordinate of one of the particles, some expressions for the single-particle probability density and its corresponding probability current density, fulfilling a continuity equation, are analyzed. \subsection{Mean square separation between particles} One of the fundamental quantities in this context is the expectation value of the squared distance between particles, $ \langle \Psi| ( \hat{x}_1 - \hat{x}_2 )^2 | \Psi \rangle $, for different statistics, which reads as \begin{eqnarray} \label{eq: mss_CK} \langle ( \hat{x}_1 - \hat{x}_2 )^2 \rangle _{\pm} &=& \langle \Psi_{\pm}| ( \hat{x}_1 - \hat{x}_2 )^2 | \Psi_{\pm} \rangle = 2 |N_{\pm}|^2 \bigg( \langle x^2 \rangle _{\psi} + \langle x^2 \rangle _{\phi} - 2 \langle x \rangle _{\psi} \langle x \rangle _{\phi} \mp 2 | \langle x \rangle _{\psi \phi} |^2 \pm 2 \text{Re} \{ \langle x^2 \rangle _{\psi\phi} \langle \phi|\psi \rangle \} \bigg) \nonumber \\ & \equiv & 2 |N_{\pm}|^2 \bigg( \langle ( \hat{x}_1 - \hat{x}_2 )^2 \rangle _{ \text{MB} } \mp 2 | \langle x \rangle _{\psi \phi} |^2 \pm 2 \text{Re} \{ \langle x^2 \rangle _{\psi \phi} \langle \phi|\psi \rangle \} \bigg) \end{eqnarray} where $ \langle ( \hat{x}_1 - \hat{x}_2 )^2 \rangle _{ \text{MB} } \equiv \langle \psi \phi |( \hat{x}_1 - \hat{x}_2 )^2 | \psi \phi \rangle $ stands for the expectation value of the square separation between the two distinguishable particles obeying the MB statistics; analogously, $ \langle \cdots \rangle _{\psi} \equiv \langle \psi | \cdots | \psi \rangle $ and $ \langle \cdots \rangle _{\psi \phi} \equiv \langle \psi | \cdots | \phi \rangle $.
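On a grid, Eq.~(\ref{eq: mss_CK}) can be evaluated directly from the one-particle states at any given time (our illustrative sketch; the grid is arbitrary and the time evolution itself is not included):
\begin{verbatim}
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

def braket(f, op, g):                  # <f| op(x) |g> on the grid
    return np.sum(np.conj(f) * op * g) * dx

def mss(psi, phi, sign):               # sign: +1 bosons, -1 fermions, 0 MB
    mb = (braket(psi, x**2, psi) + braket(phi, x**2, phi)
          - 2 * braket(psi, x, psi).real * braket(phi, x, phi).real)
    if sign == 0:
        return mb.real                 # distinguishable (MB) particles
    ovl = braket(psi, 1.0, phi)        # <psi|phi>
    xpf, x2pf = braket(psi, x, phi), braket(psi, x**2, phi)
    N2 = 1.0 / (2.0 * (1.0 + sign * abs(ovl)**2))
    return 2 * N2 * (mb.real - sign * 2 * abs(xpf)**2
                     + sign * 2 * (x2pf * np.conj(ovl)).real)
\end{verbatim}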
Then, the {\it ratio} of simultaneous detection probability of indistinguishable particles to the distinguishable ones is given by \cite{MaGr-EPJD-2014} \begin{eqnarray} p_{\pm}(t) &=& \frac{ p_{\substack{ \text{BE} \\ \text{FD} }}(t) }{ p_{ \text{MB} }(t) }= \frac{ \int_{-d}^{d} dx_1 \int_{-d}^{d} dx_2 | \Psi_{\pm}(x_1, x_2, t) |^2 }{ \int_{-d}^{d} dx_1 \int_{-d}^{d} dx_2 ~~\frac{1}{2} ( |\psi(x_1, t)|^2 |\phi(x_2, t)|^2 + |\psi(x_2, t)|^2 |\phi(x_1, t)|^2 ) } \\ &=& 2 N_{\pm}^2 \left\{ 1 \pm \frac{ \left| \int_{-d}^{d} dx ~ \psi^*(x, t)\phi(x, t) \right|^2 } { \int_{-d}^{d} dx |\psi(x, t)|^2 \int_{-d}^{d} dx |\phi(x, t)|^2 } \right\} \label{eq: detprob_CK} \end{eqnarray} where, in the second line, Eq. (\ref{eq: 2p-psit}) has been used. Note that for distinguishable particles obeying the MB statistics, the corresponding probability density is expressed as \begin{eqnarray} \label{eq: rhoMB_CK} p_{ \text{MB} }(t) = | \Psi_{ \text{MB} }(x_1, x_2, t) |^2 &=& \frac{1}{2} ( |\psi(x_1, t)|^2 |\phi(x_2, t)|^2 + |\phi(x_1, t)|^2 |\psi(x_2, t)|^2 ) \end{eqnarray} Just for completeness, we mention that if instead of a single detector, one considers two detectors with the same width $2d$ located symmetrically around the origin at positions $D$ and $-D$ respectively, then the relative simultaneous detection probability for finding a particle by the first detector and a particle by the second detector is given by \begin{eqnarray} p'_{\pm}(t) &=& \frac{ p'_{\substack{ \text{BE} \\ \text{FD} }}(t) }{ p'_{ \text{MB} }(t) }= \frac{ \int_{D-d}^{D+d} dx_1 \int_{-D-d}^{-D+d} dx_2 | \Psi_{\pm}(x_1, x_2, t) |^2 }{ \int_{D-d}^{D+d} dx_1 \int_{-D-d}^{-D+d} dx_2 ~~\frac{1}{2} ( |\psi(x_1, t)|^2 |\phi(x_2, t)|^2 + |\psi(x_2, t)|^2 |\phi(x_1, t)|^2 ) } \\ \nonumber \\ &=& 2 N_{\pm}^2 \left\{ 1 \pm 2 \frac{ \text{Re} \left\{\int_{D-d}^{D+d} dx_1 ~ \psi^*(x_1, t) \phi(x_1, t) \int_{-D-d}^{-D+d} dx_2 ~ \phi^*(x_2, t) \psi(x_2, t) \right\} } { \int_{D-d}^{D+d} dx_1 |\psi(x_1, t)|^2 \int_{-D-d}^{-D+d} dx_2 |\phi(x_2, t)|^2 + \int_{D-d}^{D+d} dx_1 |\phi(x_1, t)|^2 \int_{-D-d}^{-D+d} dx_2 |\psi(x_2, t)|^2 } \right\} \label{eq: detprob_CK_prime} \end{eqnarray} where we have preferred to use $ p'_{\pm}(t) $ to distinguish it from the previous case, where only one detector is considered. Note that for $D=0$ this result reduces to the former result given by Eq. (\ref{eq: detprob_CK}). When the widths of the detectors are negligible in comparison with their distance, $d \ll D$, which corresponds to two point detectors, Eq. (\ref{eq: detprob_CK_prime}) reduces to \begin{eqnarray} \label{eq: detprob_CK_prime-point} p'_{\pm}(t) &=& 2 N_{\pm}^2 \left[ 1 \pm 2 \text{Re} \left\{ \left( \frac{ \psi(D, t) ~ \phi(-D, t) }{ \psi(-D, t) ~ \phi(D, t) } + \frac{\psi^*(-D, t) ~ \phi^*(D, t) }{ \psi^*(D, t) ~ \phi^*(-D, t) } \right)^{-1} \right\} \right] \end{eqnarray} In the following, numerical results for the two-detector scheme are provided. \subsection{The continuity equation for reduced (single-particle) densities} From the two-particle CK equation (\ref{eq: 2par_CK}), one can easily obtain the continuity equation written as \begin{eqnarray} \label{eq: con_eq} \frac{ \partial }{ \partial t} |\Psi|^2 + \frac{ \hbar }{m} e^{-2 \gamma t} \sum_k \frac{ \partial }{ \partial x_k} \text{Im} \left\{ \Psi^* \frac{ \partial \Psi}{ \partial x_k} \right\} &=& 0 . 
\end{eqnarray} By integrating this equation over $x_2$, we have that \begin{eqnarray} \frac{ \partial }{ \partial t} \int dx_2 ~ |\Psi(x, x_2, t)|^2 + \frac{ \hbar }{m} e^{-2 \gamma t} \frac{ \partial }{ \partial x} \int dx_2 ~ \text{Im} \left\{ \Psi^*(x, x_2, t) \frac{ \partial \Psi(x, x_2, t)}{ \partial x} \right\} &=& 0 \end{eqnarray} from which, using Eq. (\ref{eq: 2p-psit}), the continuity equation for the reduced density can be written as \begin{eqnarray} \label{eq: con_eq_1p} \frac{ \partial \rho_{ \text{sp} }(x, t)}{ \partial t} + \frac{ \partial j_{ \text{sp} }(x, t)}{ \partial x} &=& 0 \end{eqnarray} with \begin{numcases}~ \rho_{ \text{sp} }(x, t) = |N_{\pm}|^2 ( |\psi(x, t)|^2 + |\phi(x, t)|^2 \pm 2 \text{Re}[ \langle \psi | \phi \rangle \phi^*(x, t) \psi(x, t) ] ) \label{eq: rhosp} \\ j_{ \text{sp} }(x, t) = |N_{\pm}|^2\frac{ \hbar }{m} e^{-2 \gamma t} \text{Im} \left\{ \psi^* \frac{ \partial \psi}{ \partial x} + \phi^* \frac{ \partial \phi}{ \partial x} \pm \langle \phi|\psi \rangle \psi^* \frac{ \partial \phi}{ \partial x} \pm \langle \psi|\phi \rangle \phi^* \frac{ \partial \psi}{ \partial x} \right\} \end{numcases} being the single-particle (sp) probability density and probability current density, respectively. Interference effects are noticeable in both expressions. \section{The Caldeira-Leggett equation for two non-interacting particles} \label{sec: tp_CL} So far we have only taken into account dissipative aspects of the environment through the well-known CK formalism. Thermal fluctuations due to the environment can also be considered following the CL formalism \cite{Caldeira-PA-1983, Caldeira-book-2014}. In this framework, the master equation describing the evolution of the reduced density matrix $ \rho $ in the coordinate representation and at high temperatures reads \begin{eqnarray} \label{eq: CL eq} \frac{ \partial \rho}{ \partial t} &=& \left[ - \frac{ \hbar }{2mi} \left( \frac{ \partial ^2}{ \partial x^2} - \frac{ \partial ^2}{ \partial x'^2} \right) - \gamma (x-x') \left( \frac{ \partial }{ \partial x} - \frac{ \partial }{ \partial x'} \right) + \frac{ V(x) - V(x') }{ i \hbar } - \frac{D}{ \hbar ^2} (x-x')^2 \right] \rho(x, x', t) \\ & \equiv & \mathcal{L}(x, x') \rho(x, x', t) \end{eqnarray} with $ D = 2 m \gamma k_B T $ being the diffusion coefficient; $k_B$ is the Boltzmann constant and $T$ the temperature of the environment. If $\rho_1(x, x', t)$ and $\rho_2(x, x', t)$ are two one-particle states, due to the linearity of the operator $ \mathcal{L}(x, x') $ appearing in the one-particle CL equation (\ref{eq: CL eq}), the corresponding master equation is then written as \begin{eqnarray} \label{eq: 2p-CL eq} \frac{ \partial }{ \partial t} [ \rho_1(x_1, x_1', t) \rho_2(x_2, x_2', t) ] &=& [ \mathcal{L}(x_1, x_1') + \mathcal{L}(x_2, x_2') ] [ \rho_1(x_1, x_1', t) \rho_2(x_2, x_2', t) ] \end{eqnarray} for the evolution of the product state $ \rho_1(x_1, x_1', t) \rho_2(x_2, x_2', t) $. 
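To make the structure of $ \mathcal{L}(x, x') $ concrete, the following finite-difference sketch (ours; the grid size, $\gamma$ and $k_B T$ are illustrative choices, $\hbar = m = 1$, periodic boundaries) applies the generator to a density matrix sampled on a grid; by the linearity just noted, the two-particle generator is simply the sum $ \mathcal{L}(x_1, x_1') + \mathcal{L}(x_2, x_2') $ acting on each factor.
\begin{verbatim}
import numpy as np

# Minimal sketch (ours) of the CL generator L(x, x') of Eq. (eq: CL eq)
# acting on a sampled density matrix rho[i, j] = rho(x_i, x'_j).
# Units hbar = m = 1; central differences with periodic boundaries.
n, box = 256, 40.0
x = np.linspace(-box/2, box/2, n)
dx = x[1] - x[0]
X, Xp = np.meshgrid(x, x, indexing="ij")
gamma, kBT = 0.1, 5.0                  # illustrative values
D = 2.0*gamma*kBT                      # diffusion coefficient

def cl_generator(rho, V=lambda y: 0.0*y):
    d2 = lambda r, ax: (np.roll(r, -1, ax) - 2*r + np.roll(r, 1, ax))/dx**2
    d1 = lambda r, ax: (np.roll(r, -1, ax) - np.roll(r, 1, ax))/(2*dx)
    return (-(d2(rho, 0) - d2(rho, 1))/2j
            - gamma*(X - Xp)*(d1(rho, 0) - d1(rho, 1))
            + (V(X) - V(Xp))*rho/1j
            - D*(X - Xp)**2*rho)
\end{verbatim}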
Consider now a system of two identical particles described by the initial density matrix \begin{eqnarray} \rho_{\pm}(x_1, x_1'; x_2, x_2', 0) &=& \langle x_1, x_2| \Psi_{\pm}(0) \rangle \langle \Psi_{\pm}(0) | x_1', x_2' \rangle \\ &=& N_{\pm}^2 [ \psi_0(x_1) \phi_0(x_2) \pm \phi_0(x_1) \psi_0(x_2) ] [ \psi_0^*(x_1') \phi_0^*(x_2') \pm \phi_0^*(x_1') \psi_0^*(x_2') ] \nonumber \\ &\equiv& N_{\pm}^2 [ \rho_{aa}(x_1, x_1', 0) \rho_{bb}(x_2, x_2', 0) \pm \rho_{ab}(x_1, x_1', 0) \rho_{ba}(x_2, x_2', 0) \nonumber \\ && \quad~ \pm \rho_{ba}(x_1, x_1', 0) \rho_{ab}(x_2, x_2', 0) + \rho_{bb}(x_1, x_1', 0) \rho_{aa}(x_2, x_2', 0) ] \label{eq: rho0} \end{eqnarray} corresponding to the {\it pure} state given by Eq. (\ref{eq: 2p-psi0}). In the third line we have used the notation \begin{numcases}~ \rho_{aa}(x, x', 0) = \psi_0(x) \psi_0^*(x') \\ \rho_{ab}(x, x', 0) = \psi_0(x) \phi_0^*(x') \\ \rho_{ba}(x, x', 0) = \phi_0(x) \psi_0^*(x') \\ \rho_{bb}(x, x', 0) = \phi_0(x) \phi_0^*(x') . \end{numcases} Each term of Eq. (\ref{eq: rho0}) corresponds to a product state evolving in time according to Eq. (\ref{eq: 2p-CL eq}). As a consequence, the evolution of each one-particle state like $ \rho_{ab}(x, x', 0) $ is given by the one-particle CL equation (\ref{eq: CL eq}). For Gaussian one-particle states, this equation can be solved by a coordinate transformation $ (x, x') \rightarrow (r, R) $, with $r=x-x'$ and $ R=(x+x')/2 $, taking a partial Fourier transform with respect to the coordinate $R$, solving the resulting equation and finally taking the inverse Fourier transform to obtain the density matrix in the coordinate representation \cite{Ve-PRA-1994&VeKuGh-PA-1995}. Following this procedure, quantum dissipative backflow for the superposition of two Gaussian wave packets has been studied in the CL framework \cite{MoMi-arXiv-2019}. Note that for distinguishable particles obeying the MB statistics, the density matrix in the configuration space is given by \begin{eqnarray} \label{eq: rhoMB_CL} \rho_{ \text{MB} }(x_1, x_1'; x_2, x_2', t) &=& \frac{1}{2} [ \rho_{aa}(x_1, x_1', t) \rho_{bb}(x_2, x_2', t) + \rho_{bb}(x_1, x_1', t) \rho_{aa}(x_2, x_2', t) ] \end{eqnarray} where $ \rho_{aa}(x_1, x_1', t) $ is the time evolution of the state $ \rho_{aa}(x_1, x_1', 0) $ under Eq. (\ref{eq: CL eq}). Since our aim is the computation of the detection probability and the mean square separation, only the diagonal elements of the one-particle density matrices are needed. 
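For bookkeeping, the block structure of Eq. (\ref{eq: rho0}) can be assembled directly from sampled one-particle states, as in this sketch (ours; the overall factor $N_{\pm}^2$ is omitted):
\begin{verbatim}
import numpy as np

# Sketch (ours): the initial two-particle density matrix of
# Eq. (eq: rho0) from the one-particle blocks; the result carries
# indices [x1, x1', x2, x2'] and the factor N_pm^2 is omitted.
def two_particle_rho0(psi0, phi0, sign=+1):  # sign=+1 bosons, -1 fermions
    blocks = {"aa": np.outer(psi0, psi0.conj()),
              "ab": np.outer(psi0, phi0.conj()),
              "ba": np.outer(phi0, psi0.conj()),
              "bb": np.outer(phi0, phi0.conj())}
    term = lambda p, q: np.einsum("ij,kl->ijkl", blocks[p], blocks[q])
    return (term("aa", "bb") + sign*term("ab", "ba")
            + sign*term("ba", "ab") + term("bb", "aa"))
\end{verbatim}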
\subsection{Mean square separation between particles} As before, for the MSS we have \begin{eqnarray} \langle ( \hat{x}_1 - \hat{x}_2 )^2 \rangle _{\pm} &=& \text{Tr}[ ( \hat{x}_1 - \hat{x}_2 )^2 \hat{\rho}_{\pm}(t) ] = \int_{-\infty}^{\infty} dx_1 \int_{-\infty}^{\infty} dx_2 ~ \langle x_1, x_2 | ( \hat{x}_1 - \hat{x}_2 )^2 \hat{\rho}_{\pm}(t) | x_1, x_2 \rangle \\ & = & 2 |N_{\pm}|^2 \bigg( \langle ( \hat{x}_1 - \hat{x}_2 )^2 \rangle _{ \text{MB} } \mp 2 | \langle x \rangle _{ab} |^2 \pm 2 \text{Re} \left\{ \langle x^2 \rangle _{ab} \int_{-\infty}^{\infty} dx ~ \rho_{ba}(x, x, t) \right\} \bigg) \label{eq: mss_CL} \end{eqnarray} where \begin{numcases}~ \langle ( \hat{x}_1 - \hat{x}_2 )^2 \rangle _{ \text{MB} } = \langle x^2 \rangle _{aa} + \langle x^2 \rangle _{bb} - 2 \langle x \rangle _{aa} \langle x \rangle _{bb} \label{eq: mss_CL_MB} \\ \langle \cdots \rangle _{ab} = \int_{-\infty}^{\infty} dx ~ ( \cdots ) ~ \rho_{ab}(x, x, t) \end{numcases} \subsection{Simultaneous detection probability} Again, the ratio of simultaneous detection probability of indistinguishable particles to the distinguishable ones is computed in the CL framework to give \begin{eqnarray} p_{\pm}(t) &=& \frac{ p_{\substack{ \text{BE} \\ \text{FD} }}(t) }{ p_{ \text{MB} }(t) }= \frac{ \int_{-d}^{d} dx_1 \int_{-d}^{d} dx_2 ~ \rho_{\pm}(x_1, x_1; x_2, x_2, t) }{ \int_{-d}^{d} dx_1 \int_{-d}^{d} dx_2 ~ \rho_{ \text{MB} }(x_1, x_1; x_2, x_2, t) } \\ &=& 2 N_{\pm}^2 \left\{ 1 \pm \frac{ \left| \int_{-d}^{d} dx ~ \rho_{ab}(x, x, t) \right|^2 } { \int_{-d}^{d} dx ~ \rho_{aa}(x, x, t) \int_{-d}^{d} dx ~ \rho_{bb}(x, x, t) } \right\} \label{eq: detprob_CL} \end{eqnarray} where the detector is located at the origin with a width of $2d$. \section{Results and discussion} In this Section, diffraction of a two-identical-particle system by a single Gaussian slit and by two Gaussian slits is analyzed. In the following, numerical calculations are carried out in a system of units where $ m = \hbar = 1$. \subsection{Diffraction by a single Gaussian slit} \label{sec: D_sl} The Gaussian slit assumption is due to Feynman \cite{Feynman-book-1965} and renders the problem analytically tractable. Otherwise, the corresponding analysis calls for the numerical integration of Fresnel functions \cite{Sa-JPB-2010}. \subsubsection{The Caldirola-Kanai approach} The initial one-particle wave packets $\psi$ and $\phi$ are assumed to be two co-centred Gaussian wave packets with the same center $x_0$, kick momenta $p_0$ and $\bar{p}_0$, and widths $ \sigma _0$ and $\bar{ \sigma }_0$, \begin{numcases}~ \psi_0(x) = \frac{1}{(2\pi \sigma _0^2)^{1/4}} \exp \left[ - \frac{(x-x_0)^2}{4 \sigma _0^2} + i \frac{p_0}{ \hbar } (x-x_0) \right] \label{eq: psi0} \\ \phi_0(x) = \frac{1}{(2\pi \bar{ \sigma }_0^2)^{1/4}} \exp \left[ - \frac{(x-x_0)^2}{4\bar{ \sigma }_0^2} + i \frac{\bar{p}_0}{ \hbar } (x-x_0) \right] \label{eq: phi0} . 
\end{numcases} Then, the overlap integral is given by \begin{eqnarray} \label{eq: ovint} \langle \phi_0| \psi_0 \rangle &=& \sqrt{ \frac{2 \sigma _0 \bar{ \sigma }_0 }{ \sigma _0^2 + \bar{ \sigma }_0^2 } } ~ \exp \left[ - \frac{ \sigma _0^2 \bar{ \sigma }_0^2 }{ \sigma _0^2 + \bar{ \sigma }_0^2} \frac{(p_0-\bar{p}_0)^2}{ \hbar ^2} \right] \end{eqnarray} and the solution of the corresponding one-particle CK equation \begin{eqnarray} \label{eq: 1par_CK} i \hbar \frac{ \partial }{ \partial t} \psi(x, t) &=& \bigg[ - e^{-2 \gamma t} \frac{ \hbar ^2}{2m} \frac{ \partial ^2}{ \partial x^2} + e^{2 \gamma t} V(x) \bigg] \psi(x, t) \end{eqnarray} for the free propagation of the initial Gaussian wave packet (\ref{eq: psi0}) reads as \cite{MoMi-JPC-2018} \begin{eqnarray} \label{eq: spwf_Gauss} \psi(x, t) &=& \frac{1}{(2\pi s_t^2)^{1/4}} \exp \left[ - \frac{(x-x_t)^2}{4 \sigma _0 s_t} + i \frac{p_0}{ \hbar } (x-x_t) + \frac{i}{ \hbar } \mathcal{A}_{ \text{cl} }(t) \right] \end{eqnarray} where $s_t$, $x_t$ and $\mathcal{A}_{ \text{cl} }(t)$ are respectively the complex width, classical trajectory of the center of the wave packet and classical action given respectively by \begin{numcases}~ s_t = \sigma_0 \left( 1 + i \frac{ \hbar}{2m\sigma_0^2} \uptau(t) \right) \label{eq: st}, \\ x_t = x_0 + \frac{p_0}{m} \uptau(t) , \\ \mathcal{A}_{ \text{cl} }(t) = \frac{p_0^2}{2m} \uptau(t) , \end{numcases} with \begin{eqnarray} \label{eq: uptau} \uptau(t) &=& \frac{1-e^{-2 \gamma t}}{2 \gamma } . \end{eqnarray} The complex width and center of the wave packet $\phi(x, t)$ are respectively denoted by $\bar{s}_t$ and $\bar{x}_t$. Now from the previous analysis one obtains that \begin{numcases}~ \langle x \rangle _{\psi \psi} = x_t , \\ \langle x \rangle _{\phi \phi} = \bar{x}_t , \\ \langle x^2 \rangle _{\psi \psi} = \sigma _t^2 + x_t^2 , \\ \langle x^2 \rangle _{\phi \phi} = \bar{ \sigma }_t^2 + \bar{x}_t^2 , \\ \langle x \rangle _{\psi \phi} = \frac{\beta}{2\theta \sqrt{2 \theta s_t^* \bar{s}_t} } \exp\left[ \alpha + \frac{\beta^2}{4\theta} \right], \\ \langle x^2 \rangle _{\psi \phi} = \frac{\beta^2 + 2\theta}{4\theta^2 \sqrt{2 \theta s_t^* \bar{s}_t} } \exp\left[ \alpha + \frac{\beta^2}{4\theta} \right] , \end{numcases} where \begin{numcases}~ \sigma _t = \sigma _0 \sqrt{ 1 + \frac{ \hbar ^2}{4m^2 \sigma _0^4} \uptau(t)^2 } ~, \label{eq: sigmat} \\ \bar{ \sigma }_t = \bar{ \sigma }_0 \sqrt{ 1 + \frac{ \hbar ^2}{4m^2\bar{ \sigma }_0^4} \uptau(t)^2 } ~, \label{eq: delt} \end{numcases} are the time dependent widths of the wave packets $\psi(x, t)$ and $\phi(x, t)$ respectively with \begin{numcases}~ \alpha = - \frac{i}{ \hbar } \Delta \mathcal{A}_{ \text{cl} }(t) + \frac{i}{ \hbar } ( p_0 x_t - \bar{p}_0 \bar{x}_t ) - \frac{x_t^2}{4 \sigma _0 s_t^*} - \frac{\bar{x}_t^2}{4\bar{ \sigma }_0 \bar{s}_t} , \\ \beta = -\frac{i(p_0-\bar{p}_0)}{ \hbar } + \frac{x_t}{2 \sigma _0 s_t^*} + \frac{\bar{x}_t}{2\bar{ \sigma }_0 \bar{s}_t} , \\ \theta = \frac{1}{4 \sigma _0 s_t^*} + \frac{1}{4\bar{ \sigma }_0 \bar{s}_t} , \end{numcases} where $\Delta \mathcal{A}_{ \text{cl} }(t)$ is the difference between the classical actions for the component wavepackets $\psi$ and $\phi$. Thus, by using these relations, the mean square separation is computed through Eq. (\ref{eq: mss_CK}). 
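These kinematic quantities are straightforward to code; a sketch (ours, in the units $\hbar = m = 1$ used in the numerics) collecting Eqs. (\ref{eq: st})-(\ref{eq: uptau}) and (\ref{eq: sigmat}) reads:
\begin{verbatim}
import numpy as np

# Sketch (ours): CK kinematics of Eqs. (eq: st)-(eq: uptau) and
# (eq: sigmat) in units hbar = m = 1.
def uptau(t, gamma):
    return t if gamma == 0.0 else (1.0 - np.exp(-2.0*gamma*t))/(2.0*gamma)

def ck_packet(t, gamma, sigma0, x0, p0):
    tau = uptau(t, gamma)
    s_t = sigma0*(1.0 + 1j*tau/(2.0*sigma0**2))   # complex width
    x_t = x0 + p0*tau                             # centre trajectory
    A_cl = 0.5*p0**2*tau                          # classical action
    sigma_t = sigma0*np.sqrt(1.0 + tau**2/(4.0*sigma0**4))
    return s_t, x_t, A_cl, sigma_t
\end{verbatim}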
On the other hand, for the one-particle detection probability one obtains that \begin{eqnarray} \int_{-d}^{d} dx |\psi(x, t)|^2 = \frac{1}{2} \left\{ \text{erf} \left[ \frac{x_t +d}{\sqrt{2} \sigma _t} \right] - \text{erf} \left[ \frac{x_t - d}{\sqrt{2} \sigma _t} \right] \right\} \end{eqnarray} and for the overlap integral in the detector region $ [-d, d] $ \begin{eqnarray} \int_{-d}^{d} dx ~ \psi^*(x, t)\phi(x, t) &=& \sqrt{ \frac{ \sigma _0 \bar{ \sigma }_0}{ \sigma _0 s_t^* + \bar{ \sigma }_0 \bar{s}_t } } ~ e^{b_1(t)} ~ \{ \text{erf} [b_2(t)] - \text{erf} [b_3(t)] \} \end{eqnarray} where $ \text{erf} (\cdots)$ is the error function and \begin{numcases}~ b_1(t) = \frac{ 4i \hbar [ ( \sigma _0 s_t^* + \bar{ \sigma }_0 \bar{s}_t ) \Delta \mathcal{A}_{ \text{cl} }(t) - ( \sigma _0 s_t^* p_0 + \bar{ \sigma }_0 \bar{s}_t \bar{p}_0 ) \Delta x_t ] + \hbar ^2 \Delta x_t^2 + 4 \sigma _0 \bar{ \sigma }_0 s_t^* \bar{s}_t (p_0-\bar{p}_0)^2} { 4 \hbar ^2( \sigma _0 s_t^* + \bar{ \sigma }_0 \bar{s}_t ) } \\ b_2(t) = \frac{ - \hbar [ \sigma _0 s_t^* (\bar{x}_t - d) + \bar{ \sigma }_0 \bar{s}_t (x_t-d) ] + 2 i \sigma _0 \bar{ \sigma }_0 s_t^* \bar{s}_t (p_0 - \bar{p}_0)} { 2 \hbar \sqrt{ \sigma _0 \bar{ \sigma }_0 s_t^* \bar{s}_t ( \sigma _0 s_t^* + \bar{ \sigma }_0 \bar{s}_t ) } } \\ b_3(t) = \frac{ - \hbar [ \sigma _0 s_t^* (\bar{x}_t + d) + \bar{ \sigma }_0 \bar{s}_t (x_t+d) ] + 2 i \sigma _0 \bar{ \sigma }_0 s_t^* \bar{s}_t (p_0 - \bar{p}_0)} { 2 \hbar \sqrt{ \sigma _0 \bar{ \sigma }_0 s_t^* \bar{s}_t ( \sigma _0 s_t^* + \bar{ \sigma }_0 \bar{s}_t ) } } \end{numcases} $ \Delta x_t $ being the distance between centres of the component wavepackets $\psi$ and $\phi$, i.e., $ \Delta x_t = x_t - \bar{x}_t $. By using these relations in Eq. (\ref{eq: detprob_CK}), the detection probability is easily obtained in the CK framework. In order to carry out numerical calculations, the following initial conditions are chosen: $ \sigma _0 = 1 $, $ x_0 = 0 $, $ p_0 = 3 $, $ \bar{ \sigma }_0 = 0.9 $ and $\bar{p}_0 = p_0 $. These conditions mean that the wave packets $\psi$ and $\phi$ have considerable overlap. Thus, one expects the behaviour of indistinguishable and distinguishable particles to become completely different. \begin{figure} \centering \includegraphics[width=12cm,angle=-0]{MSS_CK.pdf} \caption{ Mean square separation (MSS) versus time in the CK framework for the MB (black circles), BE (red curves) and FD (green curves) statistics and values of the friction coefficient in different panels: $ \gamma = 0 $ (left top), $ \gamma = 0.1 $ (left bottom), $ \gamma = 0.15 $ (right top) and $ \gamma = 0.2 $ (right bottom). Other parameters of the theory have been fixed as follows: $ \sigma _0 = 1 $, $ x_0 = 0 $, $ p_0 = 3 $, $ \bar{ \sigma }_0 = 0.9 $ and $\bar{p}_0 = p_0 $. } \label{fig: MSS_CK} \end{figure} Within the CK framework, in Figure \ref{fig: MSS_CK} we have plotted the MSS versus time for the MB (black circles), BE (red curves) and FD (green curves) statistics and values of the friction coefficient in different panels: $ \gamma = 0 $ (left top), $ \gamma = 0.1 $ (left bottom), $ \gamma = 0.15 $ (right top) and $ \gamma = 0.2 $ (right bottom). For the non-dissipative case, the MSS is an increasing function of time. This behavior is more pronounced for fermions than for bosons and distinguishable particles, which display the same time behaviour. On the contrary, when dissipation is present, a drastic change of behavior is observed. The MSS is much smaller with friction and an asymptotic value or stationary regime seems to be reached. 
At very long times, $ t \gg \gamma ^{-1}$, localization effects tend to be important, leading to a drastic decrease of the MSS. This quantity ultimately becomes constant due to the fact that the friction force acts in the opposite direction to the motion of the particles. At $t=0$, the initial MSS is non-zero for the three statistics considered here. The high values reached in the friction-free case make these initial values appear to be zero. Interestingly enough, as far as the MSS and the single-particle density are concerned, there is negligible difference between identical bosons and distinguishable particles. This point has already been reported in the study of the two-particle two-slit experiment by computing the joint detection probability for identical bosons and distinguishable particles in the context of non-dissipative systems \cite{Sancho-EPJD-2014}. In Figure \ref{fig: detprob_CK}, the ratio of simultaneous detection probability of indistinguishable particles to the distinguishable ones is plotted versus time for bosons, $ p_+(t) = \frac{ p_{ \text{BE} }(t) }{ p_{ \text{MB} }(t) } $ (left top panel), and for fermions, $ p_-(t) = \frac{ p_{ \text{FD} }(t) } { p_{ \text{MB} }(t) } $ (middle top panel), in the CK framework. The difference between the relative joint detection probabilities for bosons and fermions, $ \Delta p(t) = p_+(t) - p_-(t) $, is plotted in the right top panel. They are measured by a single extended detector with a width $2d=2$ located at the origin for different values of the friction coefficient, $ \gamma = 0 $ (black curves), $ \gamma = 0.02 $ (red curves), $ \gamma = 0.05 $ (green curves) and $ \gamma = 0.1 $ (blue curves). This ratio is around one for bosons, meaning that bosons and distinguishable particles display approximately the same time behaviour. However, for fermions, a quite different behaviour of this ratio is clearly observed. For these values of the friction coefficient, the stationary value of the relative detection probability decreases (increases) with friction for bosons (fermions). This point is better stressed in the three bottom panels of the same figure, where the same three quantities are plotted versus friction at two fixed times, intermediate time $ t = 2.5 $ (orange curves) and stationary time $ t = 50 $ (indigo curves), measured by a detector with a width $2d=2$ located at the origin. For our choice of parameters, these plots show that only for $ \gamma \leq 0.22 $ is the stationary value of the detection probability decreasing (increasing) with friction for bosons (fermions). The role played by dissipation is to modify the stationary value of the detection probability. The analytical form of $p_{\pm}(t)$ is dictated by the (anti)symmetrization of the state, and the intensity by the dissipation. Furthermore, as the right top panel shows, there is a time interval, increasing with friction, where the detection probability in the detector region is higher for fermions than for bosons, revealing a sort of fermion-bunching and boson-anti-bunching, just the opposite of the effect one would expect. As the bottom right panel shows, there is a critical value of the friction coefficient $ \gamma _c \approx 0.78 $ such that for $ \gamma > \gamma _c $ fermion-bunching is not observed anymore, i.e., $ \Delta p > 0 $. Moreover, for these values of friction, the detection probability for both times $t=2.5$ and $t=50$ becomes the same, revealing that the stationary behaviour is seen at times of the order of the relaxation time $ t \sim 1/ \gamma _c $. 
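The single-detector ratio of Eq. (\ref{eq: detprob_CK}) is easy to reproduce numerically from sampled one-particle wave functions; a minimal sketch (ours) reads:
\begin{verbatim}
import numpy as np

# Sketch (ours): the single-detector ratio of Eq. (eq: detprob_CK) for
# a detector of half width d at the origin; sign=+1 bosons, -1 fermions.
def detection_ratio(psi, phi, x, d, sign=+1):
    m = np.abs(x) <= d
    ov = np.trapz(np.conj(psi)*phi, x)           # full overlap <psi|phi>
    N2 = 1.0/(2.0*(1.0 + sign*abs(ov)**2))
    num = abs(np.trapz(np.conj(psi[m])*phi[m], x[m]))**2
    den = np.trapz(abs(psi[m])**2, x[m])*np.trapz(abs(phi[m])**2, x[m])
    return 2.0*N2*(1.0 + sign*num/den)
\end{verbatim}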
\begin{figure} \centering \includegraphics[width=12cm,angle=-0]{detprob_CK.pdf} \caption{ Relative simultaneous detection probability $ p_+(t) = \frac{ p_{ \text{BE} }(t) }{ p_{ \text{MB} }(t) } $ (left top plot) for two identical bosons and $ p_-(t) = \frac{ p_{ \text{FD} }(t) }{ p_{ \text{MB} }(t) } $ (middle top plot) for two identical fermions in the CK framework versus time for four different values of the friction coefficient, $ \gamma = 0 $ (black curves), $ \gamma = 0.02 $ (red curves), $ \gamma = 0.05 $ (green curves) and $ \gamma = 0.1 $ (blue curves). The difference between the relative joint detection probabilities for bosons and fermions, $ \Delta p(t) = p_+(t) - p_-(t) $, is also plotted in the right top panel. In the bottom panels, the same three quantities are plotted versus friction at two fixed times, intermediate time $ t = 2.5 $ (orange curves) and stationary time $ t = 50 $ (indigo curves), measured by a detector with a width $2d=2$ located at the origin. Other parameters have been fixed as follows: $ \sigma _0 = 1 $, $ x_0 = 0 $, $ p_0 = 3 $, $ \bar{ \sigma }_0 = 0.9 $ and $\bar{p}_0 = p_0 $. } \label{fig: detprob_CK} \end{figure} In Figure \ref{fig: detprob_prime_CK}, the same information is plotted as in Figure \ref{fig: detprob_CK} but for two point detectors located at $D$ and $-D$ with $D=1$. The same trends are observed here, but are much more pronounced. The fermion-bunching is clearly enhanced under this new detection scheme. Furthermore, in this new scheme, fermion-bunching is seen for all values of friction in the considered interval, i.e., $\Delta p' < 0 $ in the whole range of friction for the intermediate time $t=2.5$. \begin{figure} \centering \includegraphics[width=12cm,angle=-0]{detprob_prime_CK.pdf} \caption{ The same as Figure \ref{fig: detprob_CK} but for two point detectors. The relative simultaneous detection probabilities are now given by $ p'_+(t) = \frac{ p'_{ \text{BE} }(t) }{ p'_{ \text{MB} }(t) } $ (two identical bosons) and $ p'_-(t) = \frac{ p'_{ \text{FD} }(t) }{ p'_{ \text{MB} }(t) } $ (two identical fermions) in the CK framework. The detectors are located at $D$ and $-D$ with $D=1$. } \label{fig: detprob_prime_CK} \end{figure} \subsubsection{The Caldeira-Leggett approach} With the initial Gaussian wave packets given by Eqs. 
(\ref{eq: psi0}) and (\ref{eq: phi0}), the diagonal elements of one-particle states have the form \begin{numcases}~ \rho_{aa}(x, x, t) = \frac{1}{\sqrt{2\pi} w_t} \exp\left[ - \frac{ (x - x_t)^2}{ 2 w_t^2 } \right] \label{eq: rhoaa} \\ \rho_{ab}(x, x, t) = \sqrt{ 2 \frac{ \sigma _0 \bar{ \sigma }_0 }{ \sigma _0^2 + \bar{ \sigma }_0^2 } } ~ \frac{1}{2 \sqrt{\pi a_2(t)}} \exp \left[ a_0 - \frac{ (x - a_1(t))^2}{ 4 a_2(t) } \right] \label{eq: rhoab} \end{numcases} under the evolution equation (\ref{eq: CL eq}) with \begin{numcases}~ w_t = \sqrt{ \sigma _t^2 + D \frac{ 4 \gamma t + 4 e^{-2 \gamma t} - 3 - e^{-4 \gamma t} }{ 8 m^2 \gamma ^3 } } \label{eq: wt} \\ a_0 = - \frac{ \sigma _0^2 \bar{ \sigma }_0^2 }{ \sigma _0^2 + \bar{ \sigma }_0^2 } \frac{(p_0-\bar{p}_0)^2}{ \hbar ^2} \\ a_1(t) = x_0 + \frac{ p_0 \sigma _0^2 + \bar{p}_0 \bar{ \sigma }_0^2 }{ m( \sigma _0^2 + \bar{ \sigma }_0^2 ) } \uptau(t) + i \frac{ \sigma _0^2 \bar{ \sigma }_0^2 }{ \sigma _0^2 + \bar{ \sigma }_0^2 } \frac{2(p_0-\bar{p}_0)}{ \hbar } \\ a_2(t) = \frac{ \sigma _0^2 \bar{ \sigma }_0^2 }{ \sigma _0^2 + \bar{ \sigma }_0^2 } + \frac{ \hbar ^2 \uptau(t)^2 }{ 4m^2( \sigma _0^2 + \bar{ \sigma }_0^2 ) } + D \frac{ 4 \gamma t + 4 e^{-2 \gamma t} - 3 - e^{-4 \gamma t} }{ 16 m^2 \gamma ^3 } - i \frac{ \hbar }{ 2m } \frac{ \sigma _0^2 - \bar{ \sigma }_0^2 }{ \sigma _0^2 + \bar{ \sigma }_0^2 } \uptau(t) . \label{eq: a2} \end{numcases} In order to obtain $ \rho_{bb}(x, x, t) $ it suffices to replace $x_t$ by $\bar{x}_t$ in Eq. (\ref{eq: rhoaa}) and $ \sigma _t $ by $\bar{ \sigma }_t$ in Eq. (\ref{eq: wt}). In a similar way, $ \rho_{ba}(x, x, t) $ is known from Eq. (\ref{eq: rhoab}) by interchanging $ \sigma _0 \leftrightarrow \bar{ \sigma }_0 $. The temperature appears in the expressions for the widths through the diffusion constant, $D$. From the above relations and their equivalent one in Eq. (\ref{eq: mss_CL}), one has that \begin{numcases}~ \int_{-\infty}^{\infty} dx ~ \rho_{ab}(x, x, t) = \sqrt{ \frac{2 \sigma _0 \bar{ \sigma }_0 }{ \sigma _0^2 + \bar{ \sigma }_0^2 } } ~ e^{a_0} \\ \langle x \rangle _{ab} = \sqrt{ \frac{2 \sigma _0 \bar{ \sigma }_0 }{ \sigma _0^2 + \bar{ \sigma }_0^2 } } ~ a_1(t) ~ e^{a_0} \\ \langle x^2 \rangle _{ab} = \sqrt{ \frac{2 \sigma _0 \bar{ \sigma }_0 }{ \sigma _0^2 + \bar{ \sigma }_0^2 } } ~ [ a_1(t)^2 + 2 a_2(t) ] ~ e^{a_0} \end{numcases} leading to the MSS in the CL framework. Analogously, by using the following expressions \begin{numcases}~ \int_{-d}^{d} dx ~ \rho_{aa}(x, x, t) = \frac{1}{2} \left\{ \text{erf} \left( \frac{x_t + d }{ \sqrt{2} w_t } \right) - \text{erf} \left( \frac{x_t - d }{ \sqrt{2} w_t } \right) \right\} \\ \int_{-d}^{d} dx ~ \rho_{ab}(x, x, t) = \sqrt{ \frac{2 \sigma _0 \bar{ \sigma }_0 }{ \sigma _0^2 + \bar{ \sigma }_0^2 } } ~ e^{a_0} ~ \frac{1}{2} \left\{ \text{erf} \left( \frac{a_1(t) + d }{ 2 \sqrt{a_2(t)} } \right) - \text{erf} \left( \frac{a_1(t) - d }{ 2 \sqrt{a_2(t)} } \right) \right\} \end{numcases} and similar ones for the integration of $ \rho_{bb}(x, x, t) $ and $ \rho_{ba}(x, x, t) $ in Eq. (\ref{eq: detprob_CL}), one can calculate the joint detection probability of both identical particles by an extended detector with width $2d$ in the CL approach. 
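These closed forms can be coded directly; a sketch (ours, $\hbar = m = 1$) using SciPy's error function, which accepts the complex arguments appearing in $a_1(t)$ and $a_2(t)$, is:
\begin{verbatim}
import numpy as np
from scipy.special import erf   # accepts complex arguments

# Sketch (ours): detector-window integrals of rho_aa and rho_ab in the
# CL approach, using the closed erf forms above; hbar = m = 1.
def int_rho_aa(x_t, w_t, d):
    return 0.5*(erf((x_t + d)/(np.sqrt(2.0)*w_t))
                - erf((x_t - d)/(np.sqrt(2.0)*w_t)))

def int_rho_ab(a0, a1, a2, d, s0, sbar0):
    pref = np.sqrt(2.0*s0*sbar0/(s0**2 + sbar0**2))*np.exp(a0)
    sq = 2.0*np.sqrt(a2)
    return pref*0.5*(erf((a1 + d)/sq) - erf((a1 - d)/sq))
\end{verbatim}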
\begin{figure} \centering \includegraphics[width=12cm,angle=-0]{MSS_CL.pdf} \caption{ Mean square separation versus time in the CL framework for different statistics, MB (black circles), BE (red curves), FD (green curves) for different values of friction, $ \gamma = 0.1$ (left panels) and $ \gamma = 0.2$ (middle panels) for $ k_B T = 5 $ (top panels) and $ k_B T = 10 $ (bottom panels). The right panel depicts the MSS for the MB statistics for $ k_B T = 8 $ and two friction values, $ \gamma = 0.1$ (violet) and $ \gamma = 0.2$ (orange). Other parameters have been fixed as follows: $ \sigma _0 = 1 $, $ x_0 = 0 $, $ p_0 = 3 $, $ \bar{ \sigma }_0 = 0.9 $ and $\bar{p}_0 = p_0 $. } \label{fig: MSS_CL} \end{figure} In this approach, the MSS is displayed versus time in Figure \ref{fig: MSS_CL} for different statistics: MB (black circles), BE (red curves), FD (green curves) for different values of friction, $ \gamma = 0.1$ (left panels) and $ \gamma = 0.2$ (middle panels) with $ k_B T = 5 $ (top panels) and $ k_B T = 10 $ (bottom panels). The right panel depicts the MSS for the MB statistics at $ k_B T = 8 $ for $ \gamma = 0.1$ (violet) and $ \gamma = 0.2$ (orange). It is clear that the MSS is higher for identical fermions than for identical bosons. This separation for bosons is slightly lower than for distinguishable particles, although this is not clearly seen due to the scale of the plots. This quantity also increases with time and temperature for a given friction, with no asymptotic value. According to the right panel, the MSS for the MB statistics displays a nearly linear behavior with time, where the slope depends on the temperature and friction. Although all terms contributing to the MSS in Eq. (\ref{eq: mss_CL_MB}) decrease with friction for a given time, the rate of such a reduction is different for each term. This behavior is also seen for different statistics. The time dependent variation is quite different from that in the CK approach. This is due to the presence of thermal effects, where temperature makes the widths of the wave packets increase. In a certain sense, temperature and friction have opposite effects on the width of the wave packet. \begin{figure} \centering \includegraphics[width=12cm,angle=-0]{detprob_CL.pdf} \caption{ Relative simultaneous detection probability $ p_+(t) = \frac{ p_{ \text{BE} }(t) }{ p_{ \text{MB} }(t) } $ (left plots) for two identical bosons and $ p_-(t) = \frac{ p_{ \text{FD} }(t) }{ p_{ \text{MB} }(t) } $ (right plots) for two identical fermions in the CL framework, measured by a detector with a width $2d=2$ located at the origin, for $ \gamma = 0.1$ (top plots) and $ \gamma = 0.2$ (bottom plots) for different values of temperature: $ k_B T = 5 $ (black curves), $ k_B T = 7 $ (red curves), $ k_B T = 10 $ (green curves) and $ k_B T = 15 $ (blue curves). Other parameters have been fixed as follows: $ \sigma _0 = 1 $, $ x_0 = 0 $, $ p_0 = 3 $, $ \bar{ \sigma }_0 = 0.9 $ and $\bar{p}_0 = p_0 $. 
} \label{fig: detprob_CL} \end{figure} In Figure \ref{fig: detprob_CL}, the ratio of simultaneous detection probability of indistinguishable particles to the distinguishable ones is plotted versus time for bosons, $ p_+(t) = \frac{ p_{ \text{BE} }(t) }{ p_{ \text{MB} }(t) } $ (left panels), and for fermions, $ p_-(t) = \frac{ p_{ \text{FD} }(t) }{ p_{ \text{MB} }(t) } $ (right panels), measured by an extended detector with a width $2d=2$ located at the origin, for $ \gamma = 0.1$ (top plots) and $ \gamma = 0.2$ (bottom plots) for different values of temperature: $ k_B T = 5 $ (black curves), $ k_B T = 7 $ (red curves), $ k_B T = 10 $ (green curves) and $ k_B T = 15 $ (blue curves). The general behavior for both cases is similar to that found within the CK approach. For bosons, the temperature plays a more important role than friction, but the ratios are always around one. In particular, by increasing the temperature, the ratio approaches one. For fermions, the ratios become greater than one after a while, but decrease with temperature. Again, the bunching and anti-bunching behavior is observed for fermions and bosons, respectively. In any case, interestingly enough, the two kinds of ratios ultimately reach one for all temperatures and frictions analyzed. Thus, the decoherence process, understood as the loss of indistinguishability, sets in gradually with increasing friction and temperature. The symmetry of the total wave function is not so important under these conditions. The property of being distinguishable emerges gradually. It is then clear that similar behaviors are observed in both frameworks concerning detection probabilities. The additional feature is that, once temperature is included, the exchange effects become less important at high temperatures, leading to the same behaviour for bosons and fermions in this regime. This is better observed in Figure \ref{fig: detprob_CL2}, where the relative simultaneous detection probability in the CL framework is displayed at two fixed times, $ t = 1.5 $ (black curves) and $ t = 5 $ (red curves), versus the friction coefficient for $ k_B T = 3 $ (top panels) and versus temperature for $ \gamma = 0.1$ (bottom panels), measured by a detector with a width $2d=2$ located at the origin. \begin{figure} \centering \includegraphics[width=12cm,angle=-0]{detprob_CL2.pdf} \caption{ Relative simultaneous detection probability in the CL framework at two fixed times $ t = 1.5 $ (black curves) and $ t = 5 $ (red curves) versus friction coefficient for $ k_B T = 3 $ (top plots) and versus temperature for $ \gamma = 0.1$ (bottom plots), measured by a detector with a width $2d=2$ located at the origin. Other parameters have been fixed as follows: $ \sigma _0 = 1 $, $ x_0 = 0 $, $ p_0 = 3 $, $ \bar{ \sigma }_0 = 0.9 $ and $\bar{p}_0 = p_0 $. } \label{fig: detprob_CL2} \end{figure} \subsection{The two-particle two-slit experiment: the CK approach} \label{sec: tp-tl} The problem of the two-particle two-slit experiment has been recently studied by Sancho \cite{Sancho-EPJD-2014} for conservative systems. Here, we extend this study to dissipative dynamics in the CK approach. We consider a two-slit interference experiment in which the source emits particles in pairs. As is shown in Figure \ref{fig: setup}, the two slits, denoted by $B$ and $B'$, are located symmetrically at the points $(\pm X, 0)$ and have the same width $w$. Gaussian slits are again assumed. Detectors measure the joint patterns by counting simultaneous arrivals. One-particle states are given by the wave functions $\psi$ and $\phi$. 
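In the Gaussian slit approximation used below, the passage through a slit of width $w$ centred at $X$ amounts to multiplying the incoming packet at $t_0$ by a Gaussian aperture before the free CK propagation; a minimal sketch (ours) of this step is:
\begin{verbatim}
import numpy as np

# Sketch (ours): Gaussian-slit action on a packet sampled at the slit
# plane at time t0; normalization and the subsequent free CK
# propagation are handled separately.
def gaussian_slit(psi_t0, x, X, w):
    return np.exp(-(x - X)**2/(2.0*w**2))*psi_t0
\end{verbatim}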
\begin{figure} \centering \includegraphics[width=8cm,angle=-0]{setup.pdf} \caption{ Particles are emitted by pairs from the source S, pass through the two slits $B$ and $B'$ and arrive at the screen. } \label{fig: setup} \end{figure} Particles are produced in a source located on the negative $y-$axis in a product state which has a Gaussian shape with zero kick momentum in the $x-$direction but is a plane wave in the $y-$direction, \begin{eqnarray} \label{eq: in_wf} \psi_0(x, y) &=& A \frac{1}{(2 \pi \sigma _0^2)^{1/4}} \exp \left[-\frac{x^2}{4 \sigma _0^2} + i k y \right] \end{eqnarray} with $ \sigma _0 = \hbar / 2 \sigma _p$, where $ \sigma _p$ is the momentum width of the wave function along the $x-$axis and $A$ is a constant. This wave function then propagates freely and arrives at both slits at time $ t_0 = \hbar k / m $. Thus, for $ t< t_0 $, the wave function is written as \begin{eqnarray} \label{eq: wf before t_0} \psi(x, y, t) &=& A \frac{1}{(2\pi s_t^2)^{1/4}} \exp\left[- \frac{x^2}{ 4 \sigma _0 s_t } + i k y - i \frac{E t}{ \hbar } \right] , \qquad t<t_0 \end{eqnarray} where $ E = \hbar ^2 k^2 / 2 m $ and $s_t$ is the complex width of the wave function in the $x-$direction given by Eq. (\ref{eq: st}). Here, we have assumed that the friction force acts along the $x-$axis only. Since the motion in the $y-$direction is described by plane waves, in the following we ignore the motion in this direction and consider only the dynamics along the plane of slits, i.e., the $x-$direction. By using the Gaussian slit approximation, the single particle wave function corresponding to the right slit is given by \begin{eqnarray} \label{eq: wf after t_0} \psi_B(x, t) &=& N \int_{-\infty}^{\infty} dx'~e^{-(x'-X)^2/2w^2} G(x, t; x', t_0) \psi(x', t_0) , \qquad t>t_0 \end{eqnarray} where $N$ is the normalization constant taken to be real, the first factor in the integrand is the weight function corresponding to the Gaussian slit approximation with $w$ being the width of the slit, and the second factor is the free particle propagator in the CK approach given by \cite{MoMi-JPC-2018, Mo-LNC-1978} \begin{eqnarray} \label{eq: propagator} G(x, t; x', t_0) &=& \sqrt{ \frac{m}{2\pi i \hbar ~ (\uptau(t)-\uptau(t_0))} } \exp \left[ \frac{im (x-x')^2}{2 \hbar ~ (\uptau(t)-\uptau(t_0))} \right] . \end{eqnarray} By carrying out the corresponding integral, we have that \begin{eqnarray} \label{eq: wft>t0} \psi_B(x, t) &=& N \frac{1}{(2\pi s_{t_0}^2)^{1/4}} \left\{ 1 + \frac{i \hbar }{m} \left( \frac{1}{w^2} + \frac{1}{2 \sigma _0 s_{t_0}} \right) (\uptau(t)-\uptau(t_0)) \right\}^{-1/2} ~ e^{- c_2(t) x^2 + c_1(t) x + c_0(t)} \end{eqnarray} where the following abbreviations are used \begin{numcases}~ s_{t_0} = s(t_0) \\ \sigma _{t_0} = \sigma (t_0) \end{numcases} with $ \sigma _t$ given by Eq. (\ref{eq: sigmat}) and \begin{eqnarray} \label{eq: normalizationN} N &=& \left( \frac{ w^2 + 2 \sigma _{t_0}^2 }{ w^2 } \right)^{1/4} \exp\left[ \frac{ X^2 }{ 2 w^2 + 4 \sigma _{t_0}^2 } \right] \end{eqnarray} where \begin{numcases}~ c_0(t) = - \frac{ 2m \sigma _0 s_{t_0} + i \hbar (\uptau(t)-\uptau(t_0)) }{ 4 m w^2 \sigma _0 s_{t_0} + 2 i \hbar ( w^2 +2 \sigma _0 s_{t_0} ) (\uptau(t)-\uptau(t_0)) } X^2 \\ c_1(t) = \frac{ 4 m \sigma _0 s_{t_0} }{ 4 m w^2 \sigma _0 s_{t_0} + 2 i \hbar ( w^2 +2 \sigma _0 s_{t_0} ) (\uptau(t)-\uptau(t_0)) } X \\ c_2(t) = \frac{ m( w^2 + 2 \sigma _0 s_{t_0}) }{ 4 m w^2 \sigma _0 s_{t_0} + 2 i \hbar ( w^2 +2 \sigma _0 s_{t_0} ) (\uptau(t)-\uptau(t_0)) } . \end{numcases} The wave function $\psi_{B'}$ is given by Eq. 
(\ref{eq: wft>t0}) by replacing $X$ by $-X$. The wave functions $\phi_B$ and $\phi_{B'}$ are obtained from $\psi_B$ and $\psi_{B'}$ by replacing $ \sigma _0 \rightarrow \bar{ \sigma }_0 $, $ s_t \rightarrow \bar{s}_t$ ($\bar{s}_t$ being the complex width of $\phi$) and $ \sigma _t \rightarrow \bar{ \sigma }_t $ ($\bar{ \sigma }_t$ being the width of $\phi$). \begin{figure} \centering \includegraphics[width=12cm,angle=-0]{rhosp.pdf} \caption{ Scaled single-particle probability density $ \rho_{\text{sp}}(x, t) $ at time $ t = 5 t_0 $ versus space coordinate $x$ for different widths of the one-particle state $\phi(x, t)$, $ \bar{ \sigma }_0 = 0.1 $ (left column) and $ \bar{ \sigma }_0 = 0.8 $ (right column) for MB statistics (black curves), BE statistics (red curves) and FD statistics (blue curves) and for friction values $ \gamma = 0 $ (top panels), $ \gamma = 0.1 $ (middle panels) and $ \gamma = 0.2 $ (bottom panels). For numerical calculations we have used $ w = 1 $, $ \sigma _0 = 0.9 $ and $ t_0 = 1 $. } \label{fig: rhosp} \end{figure} \begin{figure} \centering \includegraphics[width=12cm,angle=-0]{rho2p.pdf} \caption{ Scaled joint detection probability at different times $ t = 2 t_0 $ (left column), $ t = 5 t_0 $ (middle column) versus the position of moving detector for different statistics and different values of friction coefficient. In the left and middle columns, black curves correspond to $ \gamma = 0 $, red curves correspond to $ \gamma = 0.1 $ while green ones correspond to $ \gamma = 0.2 $ for MB statistics (top panels), BE statistics (middle panels) and FD statistics (bottom panels). In the right column, the same quantity is plotted but at time $t = 10 t_0$ for the MB statistics (magenta curves), BE statistics (blue curves) and FD statistics (orange curves) and friction values $ \gamma = 0 $ (top panel), $ \gamma = 0.1 $ (middle panel) and $ \gamma = 0.2 $ (bottom panel). For numerical calculations we have used $ w = 1 $, $ \sigma _0 = 0.9 $, $ \bar{ \sigma }_0 = 0.7 $, $ X = 4 $ and $ t_0 = 1 $. } \label{fig: rho2p} \end{figure} In our two-particle double-slit experiment, the total wave functions for identical particles are given by Eq. (\ref{eq: 2p-psit}) where \begin{numcases}~ \psi(x, t) = N_{\psi} (\psi_{B}(x, t) + \psi_{B'}(x, t)) \\ \phi(x, t) = N_{\phi} (\phi_{B}(x, t) + \phi_{B'}(x, t)) . \end{numcases} Apart from a phase factor, the normalization constants are expressed as \begin{numcases}~ N_{\psi} = [ 2 ( 1 + \text{Re} \{ \langle \psi_{B} | \psi_{B'} \rangle \} ) ]^{-1/2} \\ N_{\phi} = [ 2 ( 1 + \text{Re} \{ \langle \phi_{B} | \phi_{B'} \rangle \} ) ]^{-1/2} . \end{numcases} Note that according to Eq. (\ref{eq: interference}), interference terms, inner products of one-particle states, and thus normalization constants are independent of time. Now, two quantities can be computed for different statistics: the single-particle density $\rho_{\text{sp}}(x, t)$, that is, the probability density for finding a particle at time $t$ at $x$, irrespective of the position of the other particle of the pair, evaluated from Eq. (\ref{eq: rhosp}); and the joint detection probability $ | \Psi(x_1=0, x_2 = x, t) |^2 $ with two detectors, one fixed at the origin and the other moving along the slits' plane. For numerical calculations, the parameters $ X = 4 $, $ w = 1 $, $ \sigma _0 = 0.9 $, $ \bar{ \sigma }_0 = 0.7 $ (unless otherwise stated) and $ t_0 = 1 $ are used. 
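Assembling these quantities from sampled $\psi_B$, $\psi_{B'}$, $\phi_B$ and $\phi_{B'}$ is direct; a sketch (ours) of the normalized one-particle states and of the single-particle density of Eq. (\ref{eq: rhosp}) reads:
\begin{verbatim}
import numpy as np

# Sketch (ours): normalized two-slit one-particle state and the
# single-particle density of Eq. (eq: rhosp); sign=+1 bosons, -1 fermions.
def slit_state(psi_B, psi_Bp, x):
    ov = np.trapz(np.conj(psi_B)*psi_Bp, x)      # <psi_B|psi_B'>
    N = 1.0/np.sqrt(2.0*(1.0 + ov.real))
    return N*(psi_B + psi_Bp)

def rho_sp(psi, phi, x, sign=+1):
    ov = np.trapz(np.conj(psi)*phi, x)           # <psi|phi>
    N2 = 1.0/(2.0*(1.0 + sign*abs(ov)**2))
    return N2*(abs(psi)**2 + abs(phi)**2
               + sign*2.0*np.real(ov*np.conj(phi)*psi))
\end{verbatim}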
In Figure \ref{fig: rhosp}, we have plotted the single particle probability density at time $t=5t_0$ versus the space coordinate $x$ for different statistics and friction values. Two different values of the width of the one-particle state $\phi(x, t)$ are used, while the width of the other one-particle state, $\psi(x, t)$, is kept fixed. As this figure shows, when there is little overlap between one-particle states, $ \sigma _0 = 0.9 $ and $\bar{ \sigma }_0 = 0.1 $, all kinds of particles behave similarly because the exchange effects are not so important and only introduce small differences among the three types of particles. The differences among the three curves decrease when dissipation increases. However, when considerable overlapping is present, $ \sigma _0 = 0.9 $ and $\bar{ \sigma }_0 = 0.8 $, fermions behave completely differently from bosons, which themselves behave quite similarly to distinguishable particles, a result which has already been noticed in the context of non-dissipative systems \cite{Sancho-EPJD-2014}. In Figure \ref{fig: rho2p} the joint detection probability versus the position of the moving detector is plotted at two times, $ t = 2 t_0 $ (left columns) and $ t = 5 t_0 $ (middle columns), for the MB (top panels), BE (middle panels) and FD statistics (bottom panels) and three values of the friction coefficient $ \gamma = 0 $ (black curves), $ \gamma = 0.1 $ (red curves) and $ \gamma = 0.2 $ (green curves). In the right column, the same quantity is plotted but at time $t = 10 t_0$ for the MB statistics (magenta curves), BE statistics (blue curves) and FD statistics (orange curves) and friction values $ \gamma = 0 $ (top panel), $ \gamma = 0.1 $ (middle panel) and $ \gamma = 0.2 $ (bottom panel). At short times (left panels), the joint detection probability starts spreading, the interference not yet being important. With friction, the intensities decrease and, for fermions, the highest intensities are reached. At the intermediate time $t=5t_0$ (middle panels), the interference process is already important for the MB and BE statistics whereas, for the FD one, a higher spreading is clearly observed. Finally, at $ t = 10 t_0 $ (right panels), a similar behavior is observed. The different patterns for fermions are still formed by two lobes splitting apart from each other, emphasizing the anti-bunching property of these particles even with friction. Furthermore, the lobes are sharper with increasing friction. Thus, for the set of parameters chosen, bosons behave like distinguishable particles while fermions have a completely different behavior, reflecting in a certain sense the bunching and anti-bunching properties of bosons and fermions, respectively. As expected, the interference decreases with friction for bosons and distinguishable particles. In the three panels of the right column, the spreading is drastically reduced by friction while, in contrast, the intensity peaks are higher with friction. We attribute these behaviours to a manifestation of localization effects due to dissipation, as has been already mentioned above. \section{Concluding remarks} The importance of friction and temperature in driving the decoherence process in open quantum systems is very well known. 
In this work, we have analyzed their mutual influence for non-interacting, distinguishable and indistinguishable particles (bosons and fermions) by considering the interference and diffraction patterns for a single Gaussian slit within the CK and CL approaches, and for two Gaussian slits within the CK approach only, taking one-particle states with considerable overlap. The mean square separation, computed only for the one Gaussian slit problem in both CK and CL frameworks, is always greater for fermions than for bosons. The counterintuitive bunching and anti-bunching effects described in \cite{MaGr-EPJD-2014} for fermions and bosons, respectively, are observed through the detection probability. Our work notably extends the scope of this unusual behavior by showing its presence (i) in scenarios where there are no zeros in the single-particle wavefunctions, and (ii) in one- and two-detector schemes (in \cite{MaGr-EPJD-2014} only the second type is considered). The time dependent probability tends to be the same for bosons and distinguishable particles but quite different for fermions. The decoherence process, understood as the loss of indistinguishability, sets in gradually with time and with increasing friction and temperature. In the two-slit case, by computing the single-particle detection probability for different particles in the context of the CK model, we have observed that all kinds of particles behave very similarly for low-overlapping one-particle states. This is due to the fact that for low values of the overlap, the exchange effects are not so important and only introduce small differences among the three types of particles. These differences decrease as friction increases. On the contrary, for considerable overlapping where exchange effects are important, fermions behave completely differently from bosons, which themselves behave like distinguishable particles. These findings (i) show that in the regime considered the degree of overlap (and not the relaxation constant) is the fundamental parameter of the problem, and (ii) provide a confirmation of the results in \cite{Sancho-EPJD-2014} but for open systems. This work should be seen as a good starting point to study optical properties of matter waves in the presence of friction and temperature when considering non-interacting identical particles governed by the two quantum statistics. In particular, the Talbot effect leading to quantum carpets could be a good candidate. Furthermore, how the quantum statistics can influence the dissipative quantum backflow is another aspect to take into account in connection with the bunching and anti-bunching effects described here. Obviously, the list of interesting topics to be analyzed and discussed in a future work within this context is enormous. \vspace{1cm} \noindent {\bf Acknowledgement} \vspace{1cm} SVM acknowledges support from the University of Qom and SMA support from the Ministerio de Ciencia, Innovaci\'on y Universidades (Spain) under the Project FIS2017-83473-C2-1-P. \newpage
\section{Introduction} The topological 4-genus $g_4(K)$ of a knot $K$ is the minimal genus of a topological, locally flat surface embedded in the 4-ball with boundary~$K$. A well-known theorem due to Freedman asserts that knots with trivial Alexander polynomial bound a locally flat disc in the 4-ball~\cite{F}. Unlike for the classical genus $g$, there is no known algorithm that determines the topological 4-genus of a knot. The signature bound by Kauffman and Taylor~\cite{KT}, $|\sigma(K)| \leq 2g_4(K)$, fails to be sharp for the simplest knots, such as the figure-eight knot. As we will see, the signature bound becomes much more effective when the topological 4-genus is replaced by its stable version $\widehat{g}_4$ defined by Livingston~\cite{Li}: $$ \widehat{g}_4(K)=\lim_{n \to \infty} \frac{1}{n} g_4(K^n).$$ Here $K^n$ denotes the $n$-times iterated connected sum of $K$. The existence of $\widehat{g}_4$ follows from general principles on subadditive functions (see Theorem~1 in~\cite{Li}). \begin{theorem} Let $\Sigma \subset {\mathbb R}^3$ be a minimal genus Seifert surface for a knot~$K$. Assume that $\Sigma$ contains an embedded annulus with framing $+1$ or $-1$. Then the following are equivalent: \begin{enumerate} \item[(i)] $\widehat{g}_4(K)=g(K)$, \smallskip \item[(ii)] $|\sigma(K)|=2g(K)$. \end{enumerate} \end{theorem} \begin{corollary} Let $\Sigma \subset {\mathbb R}^3$ be a minimal genus Seifert surface for a knot~$K$. If $\Sigma$ contains two embedded annuli with framings $+1$ and $-1$, then $\widehat{g}_4(K)<g(K)$. \end{corollary} The second condition of Theorem~1 clearly implies the first one, by the following chain of (in)equalities: $$n2g(K)=n|\sigma(K)|=|\sigma(K^n)| \leq 2g_4(K^n) \leq 2g(K^n)=n2g(K).$$ We do not know whether the reverse implication holds without any additional assumption on Seifert surfaces. \begin{question} Does there exist a knot $K$ with $|\sigma(K)|<2g(K)$ and $\widehat{g}_4(K)=g(K)$? \end{question} We conclude the introduction with an application concerning positive braid knots, i.e. knots which are closures of positive braids. As shown in~\cite{Ba}, the only positive braid knots with $|\sigma(K)|=2g(K)$ are torus knots of type $T(2,n)$ ($n \in {\mathbb N}$), $T(3,4)$ and $T(3,5)$. Moreover, positive braid knots have a canonical Seifert surface (in fact, a fibre surface), which always contains a Hopf band with framing $+1$. \begin{corollary} Let $K$ be a positive braid knot. Then $\widehat{g}_4(K)=g(K)$, if and only if $K$ is a torus knot of type $T(2,n)$ $(n \in {\mathbb N})$, $T(3,4)$ or $T(3,5)$. \end{corollary} \section*{Acknowledgements} I would like to thank Livio Liechti for fruitful discussions, in particular for helping me get the assumption of Theorem~1 right. \section{Constructing tori with slice boundary} Let $K \subset S^3$ be a knot with minimal genus Seifert surface $\Sigma$. The Seifert form $V: H_1(\Sigma, {\mathbb Z}) \times H_1(\Sigma, {\mathbb Z}) \to {\mathbb Z}$ is defined by linear extension of the formula $$V([x],[y])=\text{lk}(x,y^+),$$ valid for simple closed curves $x,y \subset \Sigma$. Here lk denotes the linking number and $y^+$ is a push-off of the curve $y$ in the positive direction with respect to a fixed orientation of $\Sigma$. The number $V([x],[x]) \in {\mathbb Z}$ is called the framing of the curve $x$. The signature $\sigma(K)$ of $K$ is defined as the number of positive eigenvalues minus the number of negative eigenvalues of the symmetrised Seifert form $V+V^T$. 
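As a small numerical illustration (ours, not part of the argument), the signature can be read off from the eigenvalues of $V+V^T$; for instance, for a standard Seifert matrix of the trefoil:
\begin{verbatim}
import numpy as np

# Sketch (ours): signature of a knot from a Seifert matrix V, as the
# number of positive minus negative eigenvalues of V + V^T.
def signature(V):
    eigs = np.linalg.eigvalsh(V + V.T)   # symmetric, real eigenvalues
    return int(np.sum(eigs > 0) - np.sum(eigs < 0))

V_trefoil = np.array([[-1, 1],
                      [0, -1]])          # a standard trefoil Seifert matrix
print(signature(V_trefoil))              # -2
\end{verbatim}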
The Alexander polynomial of $K$ is defined as $\Delta_K(t)=\det(\sqrt{t}V-\frac{1}{\sqrt{t}}V^T)$. Throughout this section, we will assume that \begin{enumerate} \item[(i)] the symmetrised Seifert form on $H_1(\Sigma, {\mathbb Z})$ is indefinite, i.e. $$|\sigma(K)|<2g(K),$$ \item[(ii)] the surface $\Sigma$ contains an embedded annulus $A$ with framing $+1$ (the case of framing $-1$ can be reduced to this by taking the mirror image of $\Sigma$). \end{enumerate} Let $\Sigma^n$ be the Seifert surface for $K^n$ obtained by $n$-times iterated boundary connected sum of $\Sigma$. We define \smallskip \noindent $\mathcal{F}(\Sigma)=\{m \in {\mathbb Z} \, |$ there exist a number $n \in {\mathbb N}$ and an embedded annulus $A \subset \Sigma^n$ with framing $m\}$. \begin{lemma} $\mathcal{F}(\Sigma)={\mathbb Z}$. \end{lemma} \begin{proof} We first show that $\Sigma$ contains an embedded annulus with negative framing. The symmetrised Seifert form $q=V+V^T$ being indefinite and non-degenerate (the latter is true for all Seifert surfaces with one boundary component), there exists a vector $\alpha \in H_1(\Sigma,{\mathbb R})$ with $q(\alpha)<0$. Since negative vectors for $q$ form an open cone in $H_1(\Sigma,{\mathbb R})$, there exists a simple closed curve $c \subset \Sigma$ with negative framing, i.e. $q([c])<0$. Indeed, the surface $\Sigma$ can be seen as a boundary connected sum of $g(\Sigma)$ tori; a suitable connected sum of torus knots will do. Let $n=|q([c])|$ be the absolute value of the framing of the annulus $C \subset \Sigma$ defined by the curve $c$. We claim that $\Sigma^n$ contains an embedded annulus with framing $-1$. This can be seen by taking a split union of $C$ and $n-1$ copies of $A$ in $\Sigma^n$ (one annulus per factor), and constructing an annulus that runs through all of these once. Here we need to choose $n-1$ disjoint intervals connecting pairs of successive annuli, along which the new annulus will run back and forth, as sketched in Figure~1. In the same way, we may construct annuli with arbitrary framings. \begin{figure}[ht] \scalebox{1.0}{\raisebox{-0pt}{$\vcenter{\hbox{\epsffile{annulus.eps}}}$}} \caption{} \end{figure} \end{proof} \begin{lemma} There exists a number $N \in {\mathbb N}$ and an embedded torus $T \subset \Sigma^N$ with one boundary component whose Seifert form is $\begin{pmatrix} 0 & \pm 1 \\ 0 & 0 \end{pmatrix}$, with respect to a suitable basis of $H_1(T,{\mathbb Z})$. In particular, the boundary knot $L=\partial T$ has trivial Alexander polynomial. \end{lemma} \begin{proof} By the second assumption, $\Sigma$ contains an embedded annulus $A$ with framing $+1$. We claim that the core curve~$a$ of the annulus~$A$ is non-separating. Indeed, if the curve~$a$ was separating, it would bound a surface on one side (since the boundary of $\Sigma$ is connected), so the framing of $A$ would be zero. As a consequence of the non-separation property of~$a$, there exists an embedded annulus $D \subset \Sigma$ which intersects $A$ in a square. The union of $A$ and $D$ is an embedded torus $T \subset \Sigma$ with one boundary component. Let $\begin{pmatrix} 1 & b \\ c & d \end{pmatrix}$ be the matrix representing the Seifert form on $H_1(T,{\mathbb Z})$ with respect to a pair of oriented core curves of $A$ and $D$. By adding a suitable number of copies of $A$ or $B$ to $D$ in a power $\Sigma^n$, far away from the initial annulus $A \subset \Sigma$, we may impose the framing of $D$ to be $-1$, without changing its linking number with the annulus $A$. 
Thus we obtain an embedded torus $T' \subset \Sigma^n$ with Seifert form $\begin{pmatrix} 1 & b \\ c & -1 \end{pmatrix}$. An elementary base change yields $$\begin{pmatrix} 1 & 0 \\ -c & 1 \end{pmatrix} \begin{pmatrix} 1 & b \\ c & -1 \end{pmatrix} \begin{pmatrix} 1 & -c \\ 0 & 1 \end{pmatrix}= \begin{pmatrix} 1 & b-c \\ 0 & -bc-1 \end{pmatrix}.$$ In turn, if we replace the annulus $D$ by an annulus with $-c$ additional twists around $A$, we obtain an embedded torus $T'' \subset \Sigma^n$ with Seifert form $\begin{pmatrix} 1 & b-c \\ 0 & -bc-1 \end{pmatrix}$. As before, we may change the individual framings of $A$ and $D$ to be zero in an even larger power $\Sigma^N$. The resulting torus, which we again denote $T \subset \Sigma^N$, has Seifert form $V=\begin{pmatrix} 0 & b-c \\ 0 & 0 \end{pmatrix}$. We claim that $b-c=\pm 1$. Indeed, let $L=\partial T$ be the boundary knot of $T$. The Alexander polynomial of $L$ can be computed as $$\Delta_L(t)=\det(\sqrt{t}V-\frac{1}{\sqrt{t}}V^T)= \begin{vmatrix} 0 & \sqrt{t}(b-c) \\ -\frac{1}{\sqrt{t}}(b-c) & 0 \end{vmatrix}=(b-c)^2.$$ Since $\Delta_L(1)=1$, for all knots $L$, we conclude $b-c=\pm 1$ and $$\Delta_L(t)=1.$$ \end{proof} In order to prove Theorem~1, we need to invoke Freedman's result (\cite{F}, see also~\cite{FQ} and~\cite{GT}): knots with trivial Alexander polynomial are topologically slice. \begin{proof}[Proof of Theorem~1] As mentioned in the introduction, the condition $|\sigma(K)|=2g(K)$ implies $\widehat{g}_4(K)=g(K)$, without any assumption on the Seifert surface $\Sigma$. For the reverse implication, we assume $|\sigma(K)|<2g(K)$ and prove $\widehat{g}_4(K)<g(K)$. By Lemma~2, there exists a number $N \in {\mathbb N}$ and an embedded torus $T \subset \Sigma^N$ with one boundary component $L=\partial T$ and $\Delta_L(t)=1$. According to Freedman, there exists a topological, locally flat disc $D$ embedded in the 4-ball with boundary~$L$. We may assume that the interior of $D$ is contained in the interior of the 4-ball. Now the union of $D$ and $\Sigma^N \setminus T$ is a topological, locally flat surface embedded in the 4-ball with boundary $K^N$ and genus $Ng(K)-1$. Therefore, $$\widehat{g}_4(K) \leq g(K)-\frac{1}{N}<g(K).$$ \end{proof}
\section{Introduction}\label{sec:introduction} Depth estimation from images has a long history in computer vision. Fruitful approaches have relied on structure from motion, shape-from-X, binocular, and multi-view stereo. However, most of these techniques rely on the assumption that multiple observations of the scene of interest are available. These can come in the form of multiple viewpoints, or observations of the scene under different lighting conditions. To overcome this limitation, there has recently been a surge in the number of works that pose the task of monocular depth estimation as a supervised learning problem \cite{ladicky2014pulling, eigen2014depth, liu2015learning}. These methods attempt to directly predict the depth of each pixel in an image using models that have been trained offline on large collections of ground truth depth data. While these methods have enjoyed great success, to date they have been restricted to scenes where large image collections and their corresponding pixel depths are available. Understanding the shape of a scene from a single image, independent of its appearance, is a fundamental problem in machine perception. There are many applications such as synthetic object insertion in computer graphics \cite{karsch2014automatic}, synthetic depth of field in computational photography \cite{Barron2015A}, grasping in robotics \cite{lenz2015deep}, using depth as a cue in human body pose estimation \cite{shotton2013real}, robot assisted surgery \cite{stoyanov2010real}, and automatic 2D to 3D conversion in film \cite{xie2016deep3d}. Accurate depth data from one or more cameras is also crucial for self-driving cars, where expensive laser-based systems are often used. \begin{figure}[t] \centering \input{ims/main_fig/main_fig.tex} \vspace{5pt} \caption{Our depth prediction results on KITTI 2015. Top to bottom: input image, ground truth disparities, and our result. Our method is able to estimate depth for thin structures such as street signs and poles.} \label{fig:overview_results} \vspace{-10pt} \end{figure} Humans perform well at monocular depth estimation by exploiting cues such as perspective, scaling relative to the known size of familiar objects, appearance in the form of lighting and shading and occlusion \cite{howard2012perceiving}. This combination of both top-down and bottom-up cues appears to link full scene understanding with our ability to accurately estimate depth. In this work, we take an alternative approach and treat automatic depth estimation as an image reconstruction problem during training. Our fully convolutional model does not require any depth data, and is instead trained to synthesize depth as an intermediate. It learns to predict the pixel-level correspondence between pairs of rectified stereo images that have a known camera baseline. There are some existing methods that also address the same problem, but with several limitations. For example they are not fully differentiable, making training suboptimal \cite{garg2016unsupervised}, or have image formation models that do not scale to large output resolutions \cite{xie2016deep3d}. We improve upon these methods with a novel training objective and enhanced network architecture that significantly increases the quality of our final results. An example result from our algorithm is illustrated in Fig.~\ref{fig:overview_results}. Our method is fast and only takes on the order of $35$ milliseconds to predict a dense depth map for a $512\times 256$ image on a modern GPU. 
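Since training uses rectified stereo pairs with a known camera baseline, the disparities the network predicts convert to depth through the standard rectified-stereo relation; a minimal sketch (ours, with illustrative KITTI-like calibration values) is:
\begin{verbatim}
import numpy as np

# Sketch (ours) of the standard rectified-stereo relation
# depth = baseline * focal / disparity used to interpret the output.
def disparity_to_depth(disparity_px, baseline_m, focal_px, eps=1e-6):
    return baseline_m*focal_px/np.maximum(disparity_px, eps)

# Illustrative KITTI-like calibration: ~0.54 m baseline, ~720 px focal.
depth_m = disparity_to_depth(np.array([30.0, 60.0]), 0.54, 720.0)
\end{verbatim}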
Specifically, we propose the following contributions: \newline1) A network architecture that performs end-to-end unsupervised monocular depth estimation with a novel training loss that enforces left-right depth consistency inside the network. \newline2) An evaluation of several training losses and image formation models highlighting the effectiveness of our approach. \newline3) In addition to showing state of the art results on a challenging driving dataset, we also show that our model generalizes to three different datasets, including a new outdoor urban dataset that we have collected ourselves, which we make openly available. \section{Related Work} There is a large body of work that focuses on depth estimation from images, either using pairs \cite{scharstein2002taxonomy}, several overlapping images captured from different viewpoints \cite{furukawa2015multi}, temporal sequences \cite{ranftldense}, or assuming a fixed camera, static scene, and changing lighting \cite{woodham1980photometric, abrams2012heliometric}. These approaches are typically only applicable when there is more than one input image available of the scene of interest. Here we focus on works related to monocular depth estimation, where there is only a single input image, and no assumptions about the scene geometry or types of objects present are made. \subsection*{Learning-Based Stereo} The vast majority of stereo estimation algorithms have a data term which computes the similarity between each pixel in the first image and every other pixel in the second image. Typically the stereo pair is rectified and thus the problem of disparity (i.e.\@\xspace scaled inverse depth) estimation can be posed as a 1D search problem for each pixel. Recently, it has been shown that instead of using hand defined similarity measures, treating the matching as a supervised learning problem and training a function to predict the correspondences produces far superior results \cite{vzbontar2016stereo, ladicky2015learning}. It has also been shown that posing this binocular correspondence search as a multi-class classification problem has advantages both in terms of quality of results and speed \cite{luo16a}. Instead of just learning the matching function, Mayer et al.\@\xspace\cite{mayer2015large} introduced a fully convolutional \cite{shelhamer2016fully} deep network called DispNet that directly computes the correspondence field between two images. At training time, they attempt to directly predict the disparity for each pixel by minimizing a regression training loss. DispNet has a similar architecture to their previous end-to-end deep optical flow network \cite{fischer2015flownet}. The above methods rely on having large amounts of accurate ground truth disparity data and stereo image pairs at training time. This type of data can be difficult to obtain for real world scenes, so these approaches typically use synthetic data for training. Synthetic data is becoming more realistic, e.g.\@\xspace\cite{gaidon2016virtual}, but still requires the manual creation of new content for every new application scenario. \subsection*{Supervised Single Image Depth Estimation} Single-view, or monocular, depth estimation refers to the problem setup where only a single image is available at test time. Saxena et al.\@\xspace\cite{saxena2009make3d} proposed a patch-based model known as Make3D that first over-segments the input image into patches and then estimates the 3D location and orientation of local planes to explain each patch. 
The predictions of the plane parameters are made using a linear model trained offline on a dataset of laser scans, and the predictions are then combined together using an MRF. The disadvantage of this method, and other planar based approximations, e.g.\@\xspace \cite{hoiem2005automatic}, is that they can have difficulty modeling thin structures and, as predictions are made locally, lack the global context required to generate realistic outputs. Instead of hand-tuning the unary and pairwise terms, Liu et al.\@\xspace\cite{liu2015learning} use a convolutional neural network (CNN) to learn them. In another local approach, Ladicky et al.\@\xspace\cite{ladicky2014pulling} incorporate semantics into their model to improve their per pixel depth estimation. Karsch et al.\@\xspace\cite{karsch2014depth} attempt to produce more consistent image level predictions by copying whole depth images from a training set. A drawback of this approach is that it requires the entire training set to be available at test time. Eigen et al.\@\xspace\cite{eigen2014depth, eigen2015predicting} showed that it was possible to produce dense pixel depth estimates using a two scale deep network trained on images and their corresponding depth values. Unlike most other previous work in single image depth estimation, they do not rely on hand crafted features or an initial over-segmentation and instead learn a representation directly from the raw pixel values. Several works have built upon the success of this approach using techniques such as CRFs to improve accuracy \cite{li2015depth}, changing the loss from regression to classification \cite{cao2016estimating}, using other more robust loss functions \cite{laina2016deeper}, and incorporating strong scene priors in the case of the related problem of surface normal estimation \cite{wang2015designing}. Again, like the previous stereo methods, these approaches rely on having high quality, pixel aligned, ground truth depth at training time. We too perform single depth image estimation, but train with an added binocular color image, instead of requiring ground truth depth. \subsection*{Unsupervised Depth Estimation} Recently, a small number of deep network based methods for novel view synthesis and depth estimation have been proposed, which do not require ground truth depth at training time. Flynn et al.\@\xspace\cite{flynn2015deepstereo} introduced a novel image synthesis network called DeepStereo that generates new views by selecting pixels from nearby images. During training, the relative pose of multiple cameras is used to predict the appearance of a held-out nearby image. Then the most appropriate depths are selected to sample colors from the neighboring images, based on plane sweep volumes. At test time, image synthesis is performed on small overlapping patches. As it requires several nearby posed images at test time DeepStereo is not suitable for monocular depth estimation. The Deep3D network of Xie et al.\@\xspace\cite{xie2016deep3d} also addresses the problem of novel view synthesis, where their goal is to generate the corresponding right view from an input left image (i.e.\@\xspace the source image) in the context of binocular pairs. Again using an image reconstruction loss, their method produces a distribution over all the possible disparities for each pixel. The resulting synthesized right image pixel values are a combination of the pixels on the same scan line from the left image, weighted by the probability of each disparity. 
The disadvantage of their image formation model is that increasing the number of candidate disparity values greatly increases the memory consumption of the algorithm, making it difficult to scale their approach to bigger output resolutions. In this work, we perform a comparison to the Deep3D image formation model, and show that our algorithm produces superior results. Closest to our model in spirit is the concurrent work of Garg et al.\@\xspace\cite{garg2016unsupervised}. Like Deep3D and our method, they train a network for monocular depth estimation using an image reconstruction loss. However, their image formation model is not fully differentiable. To compensate, they perform a Taylor approximation to linearize their loss resulting in an objective that is more challenging to optimize. Similar to other recent work, e.g.\@\xspace\cite{patraucean2015spatio, zhou2016learning, zhou2016view}, our model overcomes this problem by using bilinear sampling \cite{jaderberg2015spatial} to generate images, resulting in a fully (sub-)differentiable training loss. We propose a fully convolutional deep neural network loosely inspired by the supervised DispNet architecture of Mayer et al.\@\xspace\cite{mayer2015large}. By posing monocular depth estimation as an image reconstruction problem, we can solve for the disparity field without requiring ground truth depth. However, only minimizing a photometric loss can result in good quality image reconstructions but poor quality depth. Among other terms, our fully differentiable training loss includes a left-right consistency check to improve the quality of our synthesized depth images. This type of consistency check is commonly used as a post-processing step in many stereo methods, e.g.\@\xspace \cite{vzbontar2016stereo}, but we incorporate it directly into our network. \section{Method} This section describes our single image depth prediction network. We introduce a novel depth estimation training loss, featuring an inbuilt left-right consistency check, which enables us to train on image pairs without requiring supervision in the form of ground truth depth. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{decoder.pdf} \caption{Our loss module outputs left and right disparity maps, $d^l$ and $d^r$. The loss combines smoothness, reconstruction, and left-right disparity consistency terms. This same module is repeated at each of the four different output scales. C: Convolution, UC: Up-Convolution, S: Bilinear Sampling, US: Up-Sampling, SC: Skip Connection. \vspace{-5pt}} \label{fig:pipeline} \end{figure} \subsection{Depth Estimation as Image Reconstruction} Given a single image $I$ at test time, our goal is to learn a function $f$ that can predict the per-pixel scene depth, $\hat{d} = f(I)$. Most existing learning based approaches treat this as a supervised learning problem, where they have color input images and their corresponding target depth values at training. It is presently not practical to acquire such ground truth depth data for a large variety of scenes. Even expensive hardware, such as laser scanners, can be imprecise in natural scenes featuring movement and reflections. As an alternative, we instead pose depth estimation as an image reconstruction problem during training. The intuition here is that, given a calibrated pair of binocular cameras, if we can learn a function that is able to reconstruct one image from the other, then we have learned something about the $3$D shape of the scene that is being imaged. 
Specifically, at training time, we have access to two images $I^l$ and $I^r$, corresponding to the left and right color images from a calibrated stereo pair, captured at the same moment in time. Instead of trying to directly predict the depth, we attempt to find the dense correspondence field $d^r$ that, when applied to the left image, would enable us to reconstruct the right image. We will refer to the reconstructed image $I^l(d^r)$ as $\tilde{I}^r$. Similarly, we can also estimate the left image given the right one, $\tilde{I}^l = I^r(d^l)$. Assuming that the images are rectified \cite{hartley2003multiple}, $d$ corresponds to the image disparity - a scalar value per pixel that our model will learn to predict. Given the baseline distance $b$ between the cameras and the camera focal length $f$, we can then trivially recover the depth $\hat{d}$ from the predicted disparity, $\hat{d} = bf/d$. \subsection{Depth Estimation Network} \begin{figure}[!t] \centering \includegraphics[width=0.85\linewidth]{LR.pdf} \caption{Sampling strategies for backward mapping. With na\"{i}ve sampling the CNN produces a disparity map aligned with the target instead of the input. No LR corrects for this, but suffers from artifacts. Our approach uses the left image to produce disparities for both images, improving quality by enforcing mutual consistency.} \label{fig:LR} \end{figure} At a high level, our network estimates depth by inferring the disparities that warp the left image to match the right one. The key insight of our method is that we can simultaneously infer both disparities (left-to-right and right-to-left), using only the left input image, and obtain better depths by enforcing them to be consistent with each other. Our network generates the predicted image with backward mapping using a bilinear sampler, resulting in a fully differentiable image formation model. As illustrated in Fig.~\ref{fig:LR}, na\"ively learning to generate the right image by sampling from the left one will produce disparities aligned with the right image (target). However, we want the output disparity map to align with the input left image, meaning the network has to sample from the right image. We could instead train the network to generate the left view by sampling from the right image, thus creating a left view aligned disparity map (\textbf{No LR} in Fig.~\ref{fig:LR}). While this alone works, the inferred disparities exhibit `texture-copy' artifacts and errors at depth discontinuities as seen in Fig. \ref{fig:lr_consistency_results}. We solve this by training the network to predict the disparity maps for both views by sampling from the opposite input images. This still only requires a single left image as input to the convolutional layers and the right image is only used during training (\textbf{Ours} in Fig.~\ref{fig:LR}). Enforcing consistency between both disparity maps using this novel left-right consistency cost leads to more accurate results. Our fully convolutional architecture is inspired by DispNet~\cite{mayer2015large}, but features several important modifications that enable us to train without requiring ground truth depth. Our network is composed of two main parts - an encoder (from cnv1 to cnv7b) and a decoder (from upcnv7); please see the supplementary material for a detailed description. The decoder uses skip connections \cite{shelhamer2016fully} from the encoder's activation blocks, enabling it to resolve higher resolution details.
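As a concrete illustration of this backward-mapping image formation, the following sketch (ours, in plain NumPy; the actual network uses a differentiable bilinear sampler inside TensorFlow, and the sign of the disparity shift depends on which view is being reconstructed) warps one view using a disparity map and converts disparity to depth:
\begin{verbatim}
import numpy as np

def warp_image(img, disp):
    """Backward mapping: output pixel (i, j) is bilinearly sampled
    from img at the sub-pixel location (i, j - disp[i, j]).
    img is a 2-D (grayscale) array in this sketch."""
    h, w = disp.shape
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(h):
        for j in range(w):
            x = j - disp[i, j]
            x0 = int(np.clip(np.floor(x), 0, w - 2))
            a = np.clip(x - x0, 0.0, 1.0)   # bilinear weight
            out[i, j] = (1 - a) * img[i, x0] + a * img[i, x0 + 1]
    return out

def disp_to_depth(disp, baseline, focal):
    """Recover depth from disparity: d_hat = b * f / d."""
    return baseline * focal / np.maximum(disp, 1e-6)
\end{verbatim}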
We output disparity predictions at four different scales (disp4 to disp1), which double in spatial resolution at each of the subsequent scales. Even though it only takes a single image as input, our network predicts two disparity maps at each output scale - left-to-right and right-to-left. \begin{table*}[!h] \centering \resizebox{0.94\textwidth}{!}{ \begin{tabular}{|l|c||c|c|c|c|c|c|c|c|} \hline Method & Dataset & \cellcolor{col1}Abs Rel & \cellcolor{col1}Sq Rel & \cellcolor{col1}RMSE & \cellcolor{col1}RMSE log & \cellcolor{col1}{\it D1-all} & \cellcolor{col2}$\delta < 1.25 $ & \cellcolor{col2}$\delta < 1.25^{2}$ & \cellcolor{col2}$\delta < 1.25^{3}$\\ \hline Ours with Deep3D \cite{xie2016deep3d} & K & 0.412 & 16.37 & 13.693 & 0.512 & 66.85 & 0.690 & 0.833 & 0.891\\ Ours with Deep3Ds \cite{xie2016deep3d} & K & 0.151 & 1.312 & 6.344 & 0.239 & 59.64 & 0.781 & 0.931 & 0.976\\ Ours No LR & K & 0.123 & 1.417 & 6.315 & 0.220 & 30.318 & 0.841 & 0.937 & 0.973 \\ Ours & K & 0.124 & 1.388 & 6.125 & 0.217 & 30.272 & 0.841 & 0.936 & 0.975 \\ Ours & CS & 0.699 & 10.060 & 14.445 & 0.542 & 94.757 & 0.053 & 0.326 & 0.862\\ Ours & CS + K & 0.104 & 1.070 & 5.417 & 0.188 & 25.523 & 0.875 & 0.956 & 0.983\\ Ours pp & CS + K & 0.100 & 0.934 & 5.141 & 0.178 & 25.077 & 0.878 & 0.961 & \textbf{0.986}\\ Ours resnet pp & CS + K & \textbf{0.097} & \textbf{0.896} & \textbf{5.093} & \textbf{0.176} & \textbf{23.811} & \textbf{0.879} & \textbf{0.962} & \textbf{0.986}\\ \hline Ours Stereo & K & 0.068 & 0.835 & 4.392 & 0.146 & 9.194 & 0.942 & 0.978 & 0.989\\ \hline \end{tabular} \begin{tabular}{|l|} \hline \cellcolor{col1} Lower is better\\ \hline \\ \hline \cellcolor{col2} Higher is better\\ \hline \end{tabular} } \vspace{10pt} \caption{Comparison of different image formation models. Results on the KITTI 2015 stereo 200 training set disparity images \cite{Geiger2012CVPR}. For training, K is the KITTI dataset \cite{Geiger2012CVPR} and CS is Cityscapes \cite{Cordts2016Cityscapes}. Our model with left-right consistency performs the best, and is further improved with the addition of the Cityscapes data. The last row shows the result of our model trained \emph{and tested} with two input images instead of one (see Sec.~\ref{sec:Stereo}).} \label{tab:kitti_official} \vspace{-10pt} \end{table*} \subsection{Training Loss} We define a loss $C_s$ at each output scale s, forming the total loss as the sum $C = \sum_{s=1}^4 C_s$. Our loss module (Fig.~\ref{fig:pipeline}) computes $C_s$ as a combination of three main terms, \begin{equation} C_s = \alpha_{ap} (C_{ap}^l + C_{ap}^r) + \alpha_{ds} (C_{ds}^l + C_{ds}^r) + \alpha_{lr} (C_{lr}^l + C_{lr}^r), \label{eq:cs} \end{equation} where $C_{ap}$ encourages the reconstructed image to appear similar to the corresponding training input, $C_{ds}$ enforces smooth disparities, and $C_{lr}$ prefers the predicted left and right disparities to be consistent. Each of the main terms contains both a left and a right image variant, but only the left image is fed through the convolutional layers. Next, we present each component of our loss in terms of the left image (e.g.\@\xspace $C_{ap}^l$). The right image versions, e.g.\@\xspace $C_{ap}^r$, require to swap left for right and to sample in the opposite direction. \paragraph*{Appearance Matching Loss} During training, the network learns to generate an image by sampling pixels from the opposite stereo image. 
Our image formation model uses the image sampler from the spatial transformer network (STN) \cite{jaderberg2015spatial} to sample the input image using a disparity map. The STN uses bilinear sampling where the output pixel is the weighted sum of four input pixels. In contrast to alternative approaches \cite{garg2016unsupervised,xie2016deep3d}, the bilinear sampler used is locally fully differentiable and integrates seamlessly into our fully convolutional architecture. This means that we do not require any simplification or approximation of our cost function. Inspired by \cite{lossfunctions}, we use a combination of an $L1$ and single scale SSIM \cite{wang2004image} term as our photometric image reconstruction cost $C_{ap}$, which compares the input image $I^l_{ij}$ and its reconstruction $\tilde{I}^l_{ij}$, where $N$ is the number of pixels, \begin{equation}C_{ap}^l = \frac{1}{N} \sum_{i,j} \alpha \frac{1 - \textup{SSIM}(I^l_{ij}, \tilde{I}^l_{ij})}{2} + (1-\alpha)\left \| I^l_{ij} - \tilde{I}^l_{ij} \right \|. \label{eq:ca} \end{equation} Here, we use a simplified SSIM with a $3\times3$ block filter instead of a Gaussian, and set $\alpha = 0.85$. \paragraph*{Disparity Smoothness Loss} We encourage disparities to be locally smooth with an $L1$ penalty on the disparity gradients $\partial d$. As depth discontinuities often occur at image gradients, similar to \cite{heise2013pm}, we weight this cost with an edge-aware term using the image gradients $\partial I$, \vspace{-3pt} \begin{equation}C_{ds}^l = \frac{1}{N} \sum_{i,j} \left | \partial_x d^l_{ij} \right | e^{-\left \| \partial_x I_{ij}^l \right \|} + \left | \partial_y d^l_{ij} \right | e^{-\left \| \partial_y I^l_{ij} \right \|}. \label{eq:cds} \end{equation} \paragraph*{Left-Right Disparity Consistency Loss} To produce more accurate disparity maps, we train our network to predict both the left and right image disparities, while only being given the left view as input to the convolutional part of the network. To ensure coherence, we introduce an $L1$ left-right disparity consistency penalty as part of our model. This cost attempts to make the left-view disparity map be equal to the \emph{projected} right-view disparity map, \begin{equation}C_{lr}^l = \frac{1}{N} \sum_{i,j} \left | d^l_{ij} - d^r_{ij+d^l_{ij}} \right |. \label{eq:clr} \end{equation} Like all the other terms, this cost is mirrored for the right-view disparity map and is evaluated at all of the output scales. At test time, our network predicts the disparity at the finest scale level for the left image $d^l$, which has the same resolution as the input image. Using the known camera baseline and focal length from the training set, we then convert from the disparity map to a depth map. While we also estimate the right disparity $d^r$ during training, it is not used at test time. \begin{figure*}[!h] \centering \resizebox{\textwidth}{!}{ \input{ims/kitti_eigen/kitti_eigen_2.tex} } \vspace{0pt} \caption{Qualitative results on the KITTI Eigen Split. The ground truth velodyne depth being very sparse, we interpolate it for visualization purposes. Our method does better at resolving small objects such as the pedestrians and poles.} \label{fig:kitti_eigen} \end{figure*} \section{Results} Here we compare the performance of our approach to both supervised and unsupervised single view depth estimation methods. We train on rectified stereo image pairs, and do not require any supervision in the form of ground truth depth. 
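As a compact summary of the training objective before we turn to results, the per-scale terms of Eqs.~\eqref{eq:ca}, \eqref{eq:cds}, and \eqref{eq:clr} can be sketched as follows (ours; a simplified NumPy illustration rather than our released TensorFlow code, with SSIM computed globally instead of over $3\times3$ blocks):
\begin{verbatim}
import numpy as np

def ssim_global(x, y, C1=0.01**2, C2=0.03**2):
    # Global SSIM; the released code uses a 3x3 block filter instead.
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + C1) * (2 * cov + C2)
    den = (mx**2 + my**2 + C1) * (x.var() + y.var() + C2)
    return num / den

def appearance_loss(I, I_rec, alpha=0.85):
    # Eq. (eq:ca): weighted SSIM + L1 photometric cost.
    l1 = np.abs(I - I_rec).mean()
    return alpha * (1 - ssim_global(I, I_rec)) / 2 + (1 - alpha) * l1

def smoothness_loss(d, I):
    # Eq. (eq:cds): edge-aware L1 penalty on disparity gradients.
    tx = np.abs(np.diff(d, axis=1)) * np.exp(-np.abs(np.diff(I, axis=1)))
    ty = np.abs(np.diff(d, axis=0)) * np.exp(-np.abs(np.diff(I, axis=0)))
    return tx.mean() + ty.mean()

def lr_consistency_loss(d_l, d_r):
    # Eq. (eq:clr): |d_l(i,j) - d_r(i, j + d_l(i,j))|, nearest pixel.
    h, w = d_l.shape
    total = 0.0
    for i in range(h):
        for j in range(w):
            k = int(np.clip(np.round(j + d_l[i, j]), 0, w - 1))
            total += abs(d_l[i, j] - d_r[i, k])
    return total / (h * w)
\end{verbatim}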
Existing single image datasets that lack stereo pairs, such as \cite{Silberman:ECCV12, saxena2009make3d}, are not suitable for evaluation. Instead we evaluate our approach using the popular KITTI 2015 \cite{Geiger2012CVPR} dataset. To evaluate our image formation model, we compare to a variant of our algorithm that uses the original Deep3D~\cite{xie2016deep3d} image formation model and a modified one, Deep3Ds, with an added smoothness constraint. We also evaluate our approach with and without the left-right consistency constraint. \subsection{Implementation Details} The network, which is implemented in TensorFlow \cite{tensorflow}, contains $31$ million trainable parameters, and takes on the order of $25$ hours to train using a single Titan X GPU on a dataset of $30$ thousand images for $50$ epochs. Inference is fast and takes less than $35$ ms, or more than $28$ frames per second, for a $512\times 256$ image, including transfer times to and from the GPU. Please see the supplementary material and our code\footnote{Available at \url{https://github.com/mrharicot/monodepth}} for more details. During optimization, we set the weighting of the different loss components to $\alpha_{ap} = 1$ and $\alpha_{lr} = 1$. The possible output disparities are constrained to be between $0$ and $d_{max}$ using a scaled sigmoid non-linearity, where $d_{max} = 0.3 \times$ the image width at a given output scale. As a result of our multi-scale output, the typical disparity of neighboring pixels will differ by a factor of two between each scale (as we are upsampling the output by a factor of two). To correct for this, we scale the disparity smoothness term $\alpha_{ds}$ with $r$ for each scale to get equivalent smoothing at each level. Thus $\alpha_{ds} = 0.1 / r $, where $r$ is the downscaling factor of the corresponding layer with respect to the resolution of the input image that is passed into the network. For the non-linearities in the network, we used exponential linear units \cite{elus} instead of the commonly used rectified linear units (ReLUs) \cite{nair2010rectified}. We found that ReLUs tended to prematurely fix the predicted disparities at intermediate scales to a single value, making subsequent improvement difficult. Following \cite{odena2016deconvolution}, we replaced the usual deconvolutions with a nearest neighbor upsampling followed by a convolution. We trained our model from scratch for $50$ epochs, with a batch size of $8$ using Adam \cite{adamsolver}, where $\beta_1 = 0.9$, $\beta_2 = 0.999$, and $\epsilon = 10^{-8}$. We used an initial learning rate of $\lambda = 10^{-4}$ which we kept constant for the first $30$ epochs before halving it every $10$ epochs until the end. We initially experimented with progressive update schedules, as in \cite{mayer2015large}, where lower resolution image scales were optimized first. However, we found that optimizing all four scales at once led to more stable convergence. Similarly, we use an identical weighting for the loss of each scale as we found that weighting them differently led to unstable convergence. We experimented with batch normalization \cite{ioffe2015batch}, but found that it did not produce a significant improvement, and ultimately excluded it. Data augmentation is performed on the fly. We flip the input images horizontally with a $50\%$ chance, taking care to also swap both images so they are in the correct position relative to each other.
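A sketch of this flip augmentation (ours, for illustration): mirroring alone would invert the stereo geometry, so the two views must also be swapped.
\begin{verbatim}
import numpy as np

def flip_augment(img_l, img_r, rng):
    """With 50% probability, mirror both views horizontally and swap
    them, so the flipped right image becomes the new left view and
    the pair remains a geometrically valid rectified stereo pair."""
    if rng.random() < 0.5:
        return img_r[:, ::-1].copy(), img_l[:, ::-1].copy()
    return img_l, img_r

rng = np.random.default_rng(0)
left, right = flip_augment(np.zeros((256, 512)),
                           np.ones((256, 512)), rng)
\end{verbatim}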
We also added color augmentations, with a $50\%$ chance, where we performed random gamma, brightness, and color shifts by sampling from uniform distributions in the ranges $[0.8, 1.2]$ for gamma, $[0.5, 2.0]$ for brightness, and $[0.8, 1.2]$ for each color channel separately. \paragraph{Resnet50} For the sake of completeness, and similar to \cite{laina2016deeper}, we also show a variant of our model using Resnet50 \cite{he2016deep} as the encoder, the rest of the architecture, parameters and training procedure staying identical. This variant contains $48$ million trainable parameters and is indicated by \textbf{resnet} in result tables. \paragraph{Post-processing} In order to reduce the effect of stereo disocclusions which create disparity ramps on both the left side of the image and of the occluders, a final post-processing step is performed on the output. For an input image $I$ at test time, we also compute the disparity map $d'_l$ for its horizontally flipped image $I'$. By flipping back this disparity map we obtain a disparity map $d''_l$, which aligns with $d_l$ but where the disparity ramps are located on the right of occluders as well as on the right side of the image. We combine both disparity maps to form the final result by assigning the first $5\%$ on the left of the image using $d''_l$ and the last $5\%$ on the right to the disparities from $d_l$. The central part of the final disparity map is the average of $d_l$ and $d''_l$. This final post-processing step leads to both better accuracy and fewer visual artifacts at the expense of doubling the amount of test time computation. We indicate such results using \textbf{pp} in result tables. \begin{table*}[t] \centering \resizebox{0.94\textwidth}{!}{ \begin{tabular}{|l|c|c||c|c|c|c|c|c|c|} \hline Method & Supervised & Dataset & \cellcolor{col1}Abs Rel & \cellcolor{col1}Sq Rel & \cellcolor{col1}RMSE & \cellcolor{col1}RMSE log & \cellcolor{col2}$\delta < 1.25 $ & \cellcolor{col2}$\delta < 1.25^{2}$ & \cellcolor{col2}$\delta < 1.25^{3}$\\ \hline Train set mean & No & K & 0.361 & 4.826 & 8.102 & 0.377 & 0.638 & 0.804 & 0.894\\ Eigen et al.\@\xspace\cite{eigen2014depth} Coarse $^{\circ}$ & Yes & K & 0.214 & 1.605 & 6.563 & 0.292 & 0.673 & 0.884 & 0.957\\ Eigen et al.\@\xspace\cite{eigen2014depth} Fine $^{\circ}$ & Yes & K & 0.203 & 1.548 & 6.307 & 0.282 & 0.702 & 0.890 & 0.958\\ Liu et al.\@\xspace\cite{liu2015learning} DCNF-FCSP FT \mbox{*} & Yes & K & 0.201 & 1.584 & 6.471 & 0.273 & 0.68 & 0.898 & 0.967\\ \textbf{Ours No LR} & No & K & 0.152 & 1.528 & 6.098 & 0.252 & 0.801 & 0.922 & 0.963\\ \textbf{Ours} & No & K & 0.148 & 1.344 & 5.927 & 0.247 & 0.803 & 0.922 & 0.964\\ \textbf{Ours} & No & CS + K & 0.124 & 1.076 & 5.311 & 0.219 & 0.847 & 0.942 & 0.973\\ \textbf{Ours pp} & No & CS + K & 0.118 & 0.923 & 5.015 & 0.210 & 0.854 & 0.947 & \textbf{0.976}\\ \textbf{Ours resnet pp} & No & CS + K & \textbf{0.114} & \textbf{0.898} & \textbf{4.935} & \textbf{0.206} & \textbf{0.861} & \textbf{0.949} & \textbf{0.976}\\ \hline Garg et al.\@\xspace\cite{garg2016unsupervised} L12 Aug 8$\times$ cap 50m & No & K & 0.169 & 1.080 & 5.104 & 0.273 & 0.740 & 0.904 & 0.962 \\ \textbf{Ours} cap 50m & No & K & 0.140 & 0.976 & 4.471 & 0.232 & 0.818 & 0.931 & 0.969\\ \textbf{Ours} cap 50m & No & CS + K & 0.117 & 0.762 & 3.972 & 0.206 & 0.860 & 0.948 & 0.976\\ \textbf{Ours pp} cap 50m & No & CS + K & 0.112 & 0.680 & 3.810 & 0.198 & 0.866 & 0.953 & \textbf{0.979}\\ \textbf{Ours resnet pp} cap 50m & No & CS + K & \textbf{0.108} & \textbf{0.657} & \textbf{3.729} &
\textbf{0.194} & \textbf{0.873} & \textbf{0.954} & \textbf{0.979}\\ \hline \textbf{Ours pp} uncropped & No & CS + K & 0.134 & 1.261 & 5.336 & 0.230 & 0.835 & 0.938 & 0.971\\ \textbf{Ours resnet pp} uncropped & No & CS + K & 0.130 & 1.197 & 5.222 & 0.226 & 0.843 & 0.940 & 0.971\\ \hline \end{tabular} \begin{tabular}{|l|} \hline \cellcolor{col1} Lower is better\\ \hline \\ \hline \cellcolor{col2} Higher is better\\ \hline \end{tabular} } \vspace{10pt} \caption{Results on KITTI 2015 \cite{Geiger2012CVPR} using the split of Eigen et al.\@\xspace\cite{eigen2014depth}. For training, K is the KITTI dataset \cite{Geiger2012CVPR} and CS is Cityscapes \cite{Cordts2016Cityscapes}. The predictions of Liu et al.\@\xspace\cite{liu2015learning}\mbox{*} are generated on a mix of the left and right images instead of just the left input images. For a fair comparison, we compute their results relative to the correct image. As in the provided source code, Eigen et al.\@\xspace\cite{eigen2014depth}$^{\circ}$ results are computed relative to the velodyne instead of the camera. Garg et al.\@\xspace\cite{garg2016unsupervised} results are taken directly from their paper. All results, except \cite{eigen2014depth}, use the crop from \cite{garg2016unsupervised}. We also show our results with the same crop and maximum evaluation distance. The last two rows are computed on the uncropped ground truth.} \label{tab:kitti_eigen} \vspace{-10pt} \end{table*} \subsection{KITTI} We present results for the KITTI dataset \cite{Geiger2012CVPR} using two different test splits, to enable comparison to existing works. In its raw form, the dataset contains $42,382$ rectified stereo pairs from $61$ scenes, with a typical image being $1242\times375$ pixels in size. \paragraph{KITTI Split} First we compare different variants of our method including different image formation models and different training sets. We evaluate on the $200$ high quality disparity images provided as part of the official KITTI training set, which covers a total of $28$ scenes. The remaining $33$ scenes contain $30,159$ images from which we keep $29,000$ for training and the rest for evaluation. While these disparity images are of much better quality than the reprojected velodyne laser depth values, they have CAD models inserted in place of moving cars. These CAD models result in ambiguous disparity values on transparent surfaces such as car windows, and issues at object boundaries where the CAD models do not perfectly align with the images. In addition, the maximum depth present in the KITTI dataset is on the order of $80$ meters, and we cap the maximum predictions of all networks to this value. Results are computed using the depth metrics from \cite{eigen2014depth} along with the {\it D1-all} disparity error from KITTI \cite{Geiger2012CVPR}. The metrics from \cite{eigen2014depth} measure error in both meters from the ground truth and the percentage of depths that are within some threshold from the correct value. It is important to note that measuring the error in depth space while the ground truth is given in disparities leads to precision issues. In particular, the non-thresholded measures can be sensitive to the large errors in depth caused by prediction errors at small disparity values. \begin{figure}[b] \centering \includegraphics[width=\linewidth]{nolr_comparison} \caption{Comparison between our method with and without the left-right consistency. Our consistency term produces superior results on the object boundaries.
Both results are shown without post-processing.} \label{fig:lr_consistency_results} \end{figure} In Table~\ref{tab:kitti_official}, we see that in addition to having poor scaling properties (in terms of both resolution and the number of disparities it can represent), when trained from scratch with the same network architecture as ours, the Deep3D \cite{xie2016deep3d} image formation model performs poorly. From Fig.~\ref{fig:deep3d_compar} we can see that Deep3D produces plausible image reconstructions but the output disparities are inferior to ours. Our loss outperforms both of the Deep3D baselines, and the addition of the left-right consistency check increases performance in all measures. In Fig. \ref{fig:lr_consistency_results} we illustrate some zoomed-in comparisons, clearly showing that the inclusion of the left-right check improves the visual quality of the results. Our results are further improved by first pre-training our model with additional training data from the Cityscapes dataset \cite{Cordts2016Cityscapes} containing $22,973$ training stereo pairs captured in various cities across Germany. This dataset brings higher resolution, image quality, and variety compared to KITTI, while having a similar setting. We cropped the input images to only keep the top 80\% of the image, removing the very reflective car hoods from the input. Interestingly, our model trained on Cityscapes alone does not perform very well numerically. This is likely due to the difference in camera calibration between the two datasets, but there is a clear advantage to fine-tuning on data that is related to the test set. \begin{figure} \centering \input{ims/deep3d_comparison/deep3d_error_fig.tex} \caption{Image reconstruction error on KITTI. While all methods output plausible right views, the Deep3D image formation model without smoothness constraints does not produce valid disparities.} \label{fig:deep3d_compar} \vspace{-10pt} \end{figure} \paragraph{Eigen Split} To be able to compare to existing work, we also use the test split of $697$ images as proposed by \cite{eigen2014depth} which covers a total of $29$ scenes. The remaining $32$ scenes contain $23,488$ images from which we keep $22,600$ for training and the rest for evaluation, similarly to \cite{garg2016unsupervised}. To generate the ground truth depth images, we reproject the 3D points viewed from the velodyne laser into the left input color camera. Aside from only producing depth values for less than $5\%$ of the pixels in the input image, errors are also introduced because of the rotation of the Velodyne, the motion of the vehicle and surrounding objects, and also incorrect depth readings due to occlusion at object boundaries. To be fair to all methods, we use the same crop as \cite{eigen2014depth} and evaluate at the input image resolution. With the exception of Garg~et al.\@\xspace's~\cite{garg2016unsupervised} results, the results of the baseline methods are recomputed by us given the authors' original predictions to ensure that all the scores are directly comparable. This produces slightly different numbers than the previously published ones, e.g.\@\xspace in the case of \cite{eigen2014depth}, their predictions were evaluated on much smaller depth images ($1/4$ the original size). For all baseline methods we use bilinear interpolation to resize the predictions to the correct input image size. Table~\ref{tab:kitti_eigen} shows quantitative results with some example outputs shown in Fig. \ref{fig:kitti_eigen}.
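For reference, the depth error measures of \cite{eigen2014depth} used throughout our tables can be sketched as follows (ours, for illustration):
\begin{verbatim}
import numpy as np

def depth_metrics(gt, pred):
    """Standard single-image depth errors from Eigen et al.,
    computed over pixels with valid ground truth (gt > 0)."""
    mask = gt > 0
    gt, pred = gt[mask], pred[mask]
    thresh = np.maximum(gt / pred, pred / gt)
    return {
        'abs_rel':  np.mean(np.abs(gt - pred) / gt),
        'sq_rel':   np.mean((gt - pred) ** 2 / gt),
        'rmse':     np.sqrt(np.mean((gt - pred) ** 2)),
        'rmse_log': np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2)),
        'a1': np.mean(thresh < 1.25),
        'a2': np.mean(thresh < 1.25 ** 2),
        'a3': np.mean(thresh < 1.25 ** 3),
    }
\end{verbatim}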
We see that our algorithm outperforms all other existing methods, including those that are trained with ground truth depth data. We again see that pre-training on the Cityscapes dataset improves the results over using KITTI alone. \begin{figure*} \centering \resizebox{0.99\textwidth}{!}{ \input{ims/make3d_fig/make3d_fig.tex} } \vspace{5pt} \caption{Our method achieves superior qualitative results on Make3D despite being trained on a different dataset (Cityscapes).} \label{fig:make3d_qual} \vspace{-10pt} \end{figure*} \subsection{Stereo}\label{sec:Stereo} We also implemented a stereo version of our model, see Fig.~\ref{fig:stereo_results}, where the network's input is the concatenation of both left and right views. Perhaps unsurprisingly, the stereo model outperforms our monocular network on every single metric, especially on the {\it D1-all} disparity measure, as can be seen in Table~\ref{tab:kitti_official}. This model was only trained for 12 epochs as it becomes unstable if trained for longer. \begin{figure}[!ht] \centering \resizebox{\linewidth}{!}{ \input{ims/stereo_disp/stereo_fig.tex} } \vspace{3pt} \caption{Our stereo results. While the stereo disparity maps contain more detail, our monocular results are comparable.} \label{fig:stereo_results} \vspace{-8pt} \end{figure} \subsection{Make3D} To illustrate that our method can generalize to other datasets, here we compare to several fully supervised methods on the Make3D test set of \cite{saxena2009make3d}. Make3D consists of only RGB/Depth pairs and no stereo images; thus, our method cannot train on this data. We use our network trained only on the Cityscapes dataset and despite the dissimilarities in the datasets, both in content and camera parameters, we still achieve reasonable results, even beating \cite{karsch2014depth} on one metric and \cite{liu2014discrete} on three. Due to the different aspect ratio of the Make3D dataset we evaluate on a central crop of the images. In Table \ref{tab:make3d_tab}, we compare our output to the similarly cropped results of the other methods. As in the case of the KITTI dataset, these results would likely be improved with more relevant training data. A qualitative comparison to some of the related methods is shown in Fig. \ref{fig:make3d_qual}. While our numerical results are not as good as the baselines, qualitatively, we compare favorably to the supervised competition. \begin{table} \centering \resizebox{0.85\linewidth}{!}{ \begin{tabular}{|l|c|c|c|c|} \hline Method & Sq Rel & Abs Rel & RMSE & $\text{log}_{10}$ \\ \hline Train set mean\mbox{*} & 15.517 & 0.893 & 11.542 & 0.223 \\ Karsch et al.\@\xspace\cite{karsch2014depth}\mbox{*} & 4.894 & 0.417 & 8.172 & 0.144 \\ Liu et al.\@\xspace\cite{liu2014discrete}\mbox{*} & 6.625 & 0.462 & 9.972 & 0.161 \\ Laina et al.\@\xspace\cite{laina2016deeper} berHu\mbox{*} & \textbf{1.665} & \textbf{0.198} & \textbf{5.461} & \textbf{0.082} \\ \hline \textbf{Ours} with Deep3D \cite{xie2016deep3d} & 17.18 & 1.000 & 19.11 & 2.527 \\ \textbf{Ours} & 11.990 & 0.535 & 11.513 & 0.156 \\ \textbf{Ours pp} & 7.112 & 0.443 & 8.860 & 0.142\\ \hline \end{tabular} } \vspace{5pt} \caption{Results on the Make3D dataset \cite{saxena2009make3d}. All methods marked with an \mbox{*} are supervised and use ground truth depth data from the Make3D training set.
Using the standard C1 metric, errors are only computed where depth is less than 70 meters in a central image crop.} \label{tab:make3d_tab} \end{table} \subsection{Generalizing to Other Datasets} Finally, we illustrate some further examples of our model generalizing to other datasets in Figure \ref{fig:others}. Using the model only trained on Cityscapes \cite{Cordts2016Cityscapes}, we tested on the CamVid driving dataset \cite{brostow2009semantic}. In the accompanying video and the supplementary material we can see that despite the differences in location, image characteristics, and camera calibration, our model still produces visually plausible depths. We also captured a $60,000$ frame dataset, at 10 frames per second, taken in an urban environment with a wide angle consumer 1080p stereo camera. Finetuning the Cityscapes pre-trained model on this dataset produces visually convincing depth images for a test set that was captured with the same camera on a different day; please see the video in the supplementary material for more results. \begin{figure}[!hb] \centering \vspace{2pt} \resizebox{0.95\linewidth}{!}{ \input{ims/others/others_fig.tex} } \vspace{3pt} \caption{Qualitative results on Cityscapes, CamVid, and our own urban dataset captured on foot. For more results please see our video.} \label{fig:others} \end{figure} \subsection{Limitations} Even though both our left-right consistency check and post-processing improve the quality of the results, there are still some artifacts visible at occlusion boundaries due to the pixels in the occlusion region not being visible in both images. Explicitly reasoning about occlusion during training \cite{hoiem2007recovering, humayun_CVPR_2011_occlusions} could improve these issues. It is worth noting that, depending on how large the baseline between the camera and the depth sensor is, fully supervised approaches also do not always have valid depth for all pixels. Our method requires rectified and temporally aligned stereo pairs during training, which means that it is currently not possible to use existing single-view datasets for training purposes, e.g.\@\xspace \cite{Silberman:ECCV12}. However, it is possible to fine-tune our model on application specific ground truth depth data. Finally, our method mainly relies on the image reconstruction term, meaning that specular~\cite{godard2015multi} and transparent surfaces will produce inconsistent depths. This could be improved with more sophisticated similarity measures \cite{vzbontar2016stereo}. \section{Conclusion} We have presented an unsupervised deep neural network for single image depth estimation. Instead of using aligned ground truth depth data, which is both rare and costly, we exploit the ease with which binocular stereo data can be captured. Our novel loss function enforces consistency between the predicted depth maps from each camera view during training, improving predictions. Our results are superior to fully supervised baselines, which is encouraging for future research that does not require expensive to capture ground truth depth. We have also shown that our model can generalize to unseen datasets and still produce visually plausible depth maps. In future work, we would like to extend our model to videos. While our current depth estimates are performed independently per frame, adding temporal consistency~\cite{karsch2014depth} would likely improve results. It would also be interesting to investigate sparse input as an alternative training signal \cite{zoran2015learning, chen2016single}.
Finally, while our model estimates per pixel depth, it would be interesting to also predict the full occupancy of the scene \cite{FirmanCVPR2016}. \small{ \vspace{8pt} \noindent\textbf{Acknowledgments} We would like to thank David Eigen, Ravi Garg, Iro Laina and Fayao Liu for providing data and code to recreate the baseline algorithms. We also thank Stephan Garbin for his lua skills and Peter Hedman for his \LaTeX~magic. We are grateful for EPSRC funding for the EngD Centre EP/G037159/1, and for projects EP/K015664/1 and EP/K023578/1. } {\small \bibliographystyle{ieee}
\section{Introduction} Decentralized finance (DeFi) protocols are often described as either utopian systems of aligned incentives or dystopian systems that incentivize hacks and exploits. These incentives, however, are thus far sparsely studied formally, especially around the governance of DeFi applications, which determines how these applications evolve over time. Unlike in traditional companies, governance in DeFi is meant to be transparent and openly auditable through smart contracts on a blockchain. The aim is often to incentivize good governance without relying on legal recourse, setting it apart from corporate finance. While some DeFi applications are immutable, with change impossible, most have some flexibility in parameters (such as fees and price feeds, or ``oracles''), and often the entire functionality can be upgraded. Control is often placed in the hands of a cooperative of governance token holders who govern the system. This cooperative, however, is known to face perverse incentives, both theoretically (e.g., \cite{klagesmundt2019vuln,zoltu2019,gudgeon2020decentralized}) and often in practice (e.g., \cite{hack:compounder,rekt2021paid}), to steal (or otherwise extract) value. {\it Related work.} These incentives to deviate from the best interest of the protocol and its users are termed \emph{governance extractable value} (GEV) \cite{lee2021gov,werner2021sok}. While there is a body of work on blockchain governance (e.g.~\cite{reijers2016governance,beck2018governance,lee2020political}), DeFi governance is sparsely studied. \cite{klages2020stablecoins} proposed a framework for modeling DeFi governance that extends capital structure models from corporate finance (see \cite{dybvig1991capital,myers1984corporate}). Equilibria in these models are not yet formally studied. In this paper, we incorporate closed form valuation into the framework proposed in \cite{klages2020stablecoins} and characterize the equilibria around interest rate policies (and how closely these lead to stability) and governance attacks in non-custodial stablecoins. {\it Non-custodial stablecoins.} Non-custodial stablecoins are implemented as smart contracts using on-chain collateral, which are not controlled by a responsible party \cite{bullmann2019search}. We briefly introduce the core components of these stablecoins and refer to \cite{bullmann2019search,klages2020stablecoins} for further details. We focus on exogenous collateral, whose price is independent of the stablecoin system and which has proven able to maintain the peg in the long run, see e.g. \cite{bullmann2019search}. Stablecoin issuance is initiated by a user creating a collateralized debt position (CDP) using a ``vault''. The user transfers collateral, e.g., ETH, to the vault, which can mint stablecoins up to the limit set by the minimum collateralization level. This leveraged position can be used in multiple ways, e.g. to spend the stablecoin or invest in other assets. The vault owner can redeem the collateral by reimbursing the vault with the issued stablecoins (with interest) and is tasked with maintaining the required collateralization. If the vault becomes undercollateralized, for instance if the price of ETH drops, then an involuntary redemption (liquidation) is performed to deleverage the position. This deleveraging is carried out through buy-backs of stablecoins to close the vault. Vaults are over-collateralized to help ensure that the position can be closed.
However, should the liquidation proceeds be insufficient, additional mechanisms may kick in to cover the shortfall, either by tapping into a reserve fund or by selling governance tokens as a form of sponsored support (or backstop). Notably, this occurred in Dai on Black Thursday, when the Maker system found itself in a deleveraging spiral \cite{makerdao2020black,klages2019stability,klages2020while}. {\it Incentive compatibility.} Drawing from \cite{werner2021sok}, we consider a cryptoeconomic protocol to be incentive compatible if ``agents are incentivized to execute the game as intended by the protocol designer.'' \cite{klages2020stablecoins} reduces this to a key question of \textit{incentive security}: Is equilibrium participation in the stablecoin sustainable? This requires that incentives among all participants (stablecoin holders, vault owners, governance agents) lead to a mutually profitable equilibrium of participating in the stablecoin. As non-custodial stablecoins contain self-governing aspects outside of most rule of law, participant incentives are also influenced by the possibility of profitable governance attacks. {\it This paper.} We study governance incentive problems in non-custodial stablecoins similar to Maker. We formalize a game theoretic model (an adaptation of that in \cite{klages2020stablecoins}) of governance incentives in Section~\ref{sec:model}. We derive closed form solutions to stakeholders' problems in Section~\ref{sec:analysis} using financial engineering methods, culminating in conditions for a unique equilibrium in Theorem~\ref{thm:unique-equil}. We then modify the model to include governance attacks in Section~\ref{sec:gov-attacks} and derive conditions for equilibria without attacks. \section{Model}\label{sec:model} We build upon the framework presented in \cite{klages2020stablecoins}, which seeks to describe incentives between governance token holders (GOV), vaults/risk absorbers and stablecoin holders. We define the model parameters in Table \ref{table_components}. \begin{table}\centering \caption{Model components}\label{table_components} \begin{tabular}{|l|l|} \hline Variable & Definition\\ \hline $N$ & Dollar value of vault collateral (COL position) \\ $e^{R}$ & Return on COL \\ $F$ & Total stablecoin issuance (debt face value) \\ $b$ & Return rate on the outside opportunity \\ $\beta$ & Collateral factor \\ $\delta$ & Interest rate paid by vault to issue STBL \\ $u$ & Vault's utility from an outside COL opportunity \\ $B$ & STBL market price \\ \hline \end{tabular} \end{table} We first introduce the basic framework with no attack vectors. The setup considers an interaction between governors and vaults, who both seek to maximize expected profits. The governance choice problem is simply to maximize expected fee revenue. The vault choice problem is to maximize expected revenue from maintaining a long position in COL, while pursuing a new (leveraged) opportunity, and paying an interest fee to governance. Randomness is introduced into the model by assuming that the return on COL $e^R$ follows a log-normal distribution: $$R\sim N(0,\sigma^2),$$ where the standard deviation $\sigma$ represents the COL volatility. We consider continuously compounded returns. The time horizon is set to $1$. In this case $F(e^\delta - 1)$ represents the total interest paid by the vault for the stablecoin issuance, while $FB(e^b - 1)$ represents the net revenue from investing the proceeds of the stablecoin issuance in the outside opportunity.
Further, vaults are subject to three constraints: Eq.~(\ref{collateral_constraint}), the collateral constraint, which restricts maximum stablecoin issuance to a fraction of posted collateral; Eq.~(\ref{stablecoin_price}), the stablecoin price, which is pegged at one whenever collateral does not fall short; and Eq.~(\ref{outside_utility constraint}), the participation constraint, which simply states that participation must yield higher utility compared to abandoning the system. Formally, the governance choice problem is written as: \begin{equation}\label{eq:discrete-gov-system} \begin{aligned} \max_\delta\quad& \mathbb{E}[(e^\delta - 1)\cdot F]\\ \text{s.t.}\quad& \text{Vault's choice of $F$.} \end{aligned} \end{equation} The vault choice problem can be written as: \begin{align} \max_{F\geq0} \hspace{0.2cm} & \mathbb{E}\left[\underbrace{Ne^{R}}_{\text{COL long position}}+\underbrace{F\big(B(e^b - 1)-(e^\delta - 1)\big)}_{\text{Net revenue from leveraged position}}\right] \nonumber \\ % \text{s.t.} \hspace{0.2cm} & \hspace{0.6cm} \text{GOV's choice of $\delta$}\nonumber\\ &\hspace{0.6cm} F \leq\beta N \label{collateral_constraint}\\ &\hspace{0.6cm} B =\mathbb{E}\left[\min\left(1,\frac{Ne^{R}}{F}-(e^\delta - 1) \right)\right]\label{stablecoin_price}\\ &\hspace{0.6cm} u \leq\mathbb{E}\left[Ne^{R}+F\big(B(e^b - 1)-(e^\delta - 1)\big)\right].\label{outside_utility constraint} \end{align} We consider a Stackelberg equilibrium in which first the governance chooses an interest rate and then the vault chooses the stablecoin issuance. Note that the reverse order would yield a trivial solution in which the vault does not participate. The governance as a second player would simply set the interest rate $\delta\rightarrow\infty$. In anticipation, the vault would set $F=0$ as a consequence of the vaults' participation constraint. In contrast, the problem in which the governance moves first is non-trivial. Indeed, using financial engineering methods we will give closed form solutions to the objective functions of the two agents. This will allow us to analyze the convexity of their payoffs. Under reasonable conditions on parameters, the vaults' participation constraint imposes an upper bound on the interest rate but the optimal interest rate does not saturate this constraint. \paragraph{Expected collateral shortfall.} Our approach follows classical ideas for the valuation of corporate liabilities, present since the seminal works of Black and Scholes \cite{blackscholes79} and Merton \cite{merton1970dynamic}, \cite{merton1974pricing}. ``Since almost all corporate liabilities can be viewed as combinations of options'', i.e., their payoffs can be replicated using an options portfolio, their valuations can be characterized using Black and Scholes formulas, see e.g., \cite{shreve2004stochastic}. In analogy to the corporate debt holders, the stablecoin holders have an asset essentially equal to $1$ (the face value) minus the following quantity that captures the collateral shortfall: \begin{equation} \begin{aligned} P(F,\delta)&=\mathbb{E}\left[Fe^\delta-Ne^R\right]_+=Fe^\delta\Phi(-d_2)-N\Phi(-d_1)\\ d_1&=\frac{\log(\frac{N}{Fe^\delta})+\frac{\sigma^2}{2}}{\sigma},\qquad d_2=d_1-\sigma. \end{aligned} \label{putoption} \end{equation} Hence, the representative stablecoin holder mimics the role of the debt holder in classical capital structure models.
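The closed form \eqref{putoption} is straightforward to evaluate numerically; the following Python sketch (ours, with purely hypothetical parameter values) implements it exactly as stated:
\begin{verbatim}
from math import exp, log
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF

def shortfall(F, delta, N, sigma):
    """Expected collateral shortfall P(F, delta) =
    E[F e^delta - N e^R]_+ with R ~ N(0, sigma^2),
    via the closed form in (putoption)."""
    d1 = (log(N / (F * exp(delta))) + sigma**2 / 2) / sigma
    d2 = d1 - sigma
    return F * exp(delta) * Phi(-d2) - N * Phi(-d1)

# Hypothetical values: $150 of COL, $100 face value, 2% rate, 60% vol
print(shortfall(F=100.0, delta=0.02, N=150.0, sigma=0.6))
\end{verbatim}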
The quantity $\Phi(-d_1)$ represents the probability that there is a collateral shortfall (which is analogous to a corporate default in the classical corporate debt valuation theory): $Fe^\delta > Ne^R$. $\Phi(-d_2)$ is also the probability that there is a shortfall, but adjusted for the severity of this shortfall.\footnote{Note that for the purposes of debt valuation the no-arbitrage theory of option pricing is not relevant. Only the Black and Scholes formulas are needed, i.e. the closed form solution for the expectation in \eqref{putoption} when the random return is log-normal. Moreover, while the valuation of corporate debt can be achieved in a dynamic model, the same formulas govern our one period case where the end of the period can be seen as the bond maturity.} \paragraph{Vaults' Perspective} In a similar manner, vaults take into account the expected collateral shortfall in their objective through the stablecoin price constraint, \begin{equation} \begin{aligned} \max_F\quad& Ne^\frac{\sigma^2}{2}+F(e^b-e^\delta)-P(F,\delta)(e^b-1)\\ \text{s.t.}\quad& F\leq \beta\cdot N\\ &u\leq Ne^\frac{\sigma^2}{2}+F(e^b-e^\delta)-P(F,\delta)(e^b-1)\\ &\text{GOV's choice of $\delta$.} \end{aligned} \end{equation} \paragraph{GOV's Perspective} Governance simply maximizes fee revenue \begin{equation} \begin{aligned} \max_\delta\quad& F(e^\delta-1)\\ \text{s.t.}\quad& \text{Vault's choice of $F$.} \end{aligned} \end{equation} We later consider an altered form of the model in Section~\ref{sec:gov-attacks} that incorporates a governance attack vector into our analysis. \section{Stackelberg Equilibrium Analysis}\label{sec:analysis} Governors know that the vaults will only choose to participate if the outside utility of an alternative COL usage is less than (or equal to) the benefit from issuing stablecoins. We first consider the optimum stablecoin issuance without the outside option constraint, and only with the leverage constraint. By evaluating vault objective sensitivities (see Appendix~\ref{Derivative_analysis}), we can obtain vaults' objective maximizer if we include the leverage constraint (which imposes a cap on the amount of stablecoins issued) but ignore the participation constraint. All proofs are provided in Appendix~\ref{appendix-proofs}. \begin{proposition}\label{vault_max_unconstrained} Vaults' unconstrained objective is maximized at \begin{align} \varphi(\delta)&= N\cdot\exp\left[\sigma\cdot\Phi^{-1}\left(\frac{e^{b-\delta}-1}{e^b-1}\right)-\delta-\frac{\sigma^2}{2}\right]\label{eq:F-bar}, \end{align} which implicitly requires that $\delta\in[0,b]$. Moreover, if we include the leverage constraint, vaults' objective is maximized at \begin{equation} F^\ast(\delta)=\min(\varphi(\delta),\beta N).\label{eq:f-star} \end{equation} \end{proposition} By accounting for vaults' optimal issuance, GOV's objective transforms into \begin{equation} G= F^\ast(\delta)\left(e^\delta-1\right)\label{eq:g-with-f-star_}. \end{equation} \subsection{\texorpdfstring{$F^\ast(\delta)$}{TEXT} w/o Participation Constraint} We first derive results disregarding vaults' participation constraint. By comparing the unconstrained optimum stablecoin issuance $\varphi(\delta)$ to $\beta N$ we pin down a lower bound $\delta_{\beta}$ for the interest rate, arising from the leverage constraint, such that $\forall\delta\in(\delta_{\beta},b]$, $\varphi(\delta)<\beta N$.
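Similarly, the quantities \eqref{eq:F-bar}--\eqref{eq:g-with-f-star_} are easy to explore numerically; the sketch below (ours, with hypothetical parameter values) traces GOV's objective over $\delta\in(0,b)$:
\begin{verbatim}
import numpy as np
from statistics import NormalDist

inv_Phi = NormalDist().inv_cdf  # standard normal quantile

def varphi(delta, N, b, sigma):
    """Unconstrained issuance maximizer (eq:F-bar), 0 < delta < b."""
    q = (np.exp(b - delta) - 1) / (np.exp(b) - 1)
    return N * np.exp(sigma * inv_Phi(q) - delta - sigma**2 / 2)

def F_star(delta, N, b, sigma, beta):
    """Leverage-constrained issuance (eq:f-star)."""
    return min(varphi(delta, N, b, sigma), beta * N)

def G(delta, N, b, sigma, beta):
    """GOV's fee revenue (eq:g-with-f-star_)."""
    return F_star(delta, N, b, sigma, beta) * (np.exp(delta) - 1)

# Hypothetical parameters
N, b, sigma, beta = 100.0, 0.05, 0.6, 0.5
for delta in np.linspace(1e-3, b - 1e-3, 5):
    print(f"delta={delta:.4f}  "
          f"F*={F_star(delta, N, b, sigma, beta):9.3f}  "
          f"G={G(delta, N, b, sigma, beta):7.4f}")
\end{verbatim}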
This threshold arises from vaults' preference for a larger stablecoin issuance when the interest rate is low, while being unable to exceed the leverage constraint set by governance. Along with the monotonicity of GOV's objective function, we obtain the following proposition. \begin{proposition}[Governance choice]\label{governance_choice_unconstrained} There exists a $\delta_{\beta}\in[0,b]$ such that $\varphi(\delta_{\beta})=\beta N$. GOV's optimal interest rate, $\delta^{\ast}$, satisfies $\delta^\ast\geq\delta_{\beta}$. \end{proposition} Hence, in equilibrium governance will either exhaust the leverage constraint with the highest compatible interest rate or set the interest rate higher than $\delta_\beta$, implying excess overcollateralization. This is not in itself surprising, yet leads us to the following technical lemma. \begin{lemma}[Concavity threshold for $\delta$]\label{Delta_threshold} There exists a $\delta_\text{th}$ such that for $\delta > \delta_\text{th}$, GOV's objective is concave. \end{lemma} The following Assumption \ref{assumption-1} ensures that the volatility of the collateral's rate of return is not too large. With this, we further have that GOV's objective is locally increasing at $\delta = \delta_\text{th}$. This assumption currently holds, e.g., for ETH. \begin{assumption}\label{assumption-1} $\sigma<2\phi(0)$ (\hyperlink{pf:assumption-1}{See Appendix} for how this condition is derived). \end{assumption} By recalling that $\frac{dG}{d\delta}$ is non-increasing with $\delta$ for $\delta > \delta_\text{th}$ (from Lemma \ref{Delta_threshold}), we then obtain the following proposition. \begin{proposition}[Governance unconstrained optimal choice]\label{Delta_upper} Under Assumption \ref{assumption-1} there exists a $\delta^{\ast}$ at which GOV's objective is maximized. Consequently, if $\delta_\text{th}< \delta_{\beta}<\delta^{\ast}\leq b$, then GOV chooses $\delta^{\ast}$. \end{proposition} Then $\delta=\delta^{\ast}$ achieves the unconstrained maximum for the GOV objective. For $\delta_\text{th}< \delta_{\beta}$ to hold, we need the following assumption. \begin{assumption}\label{beta-assumption} $\beta<\frac{e^b+1}{2}\cdot\exp(-b-\frac{\sigma^2}{2})$. \end{assumption} \noindent This is because $\varphi(\delta_{\beta})=\beta N$ and $\frac{d\varphi}{d\delta}<0$: when we ask for $\delta_\text{th}< \delta_{\beta}$, we therefore need $\varphi(\delta_\text{th})>\varphi(\delta_{\beta})=\beta N$. Intuitively, governance will achieve a lower payoff at the interest rate pinned down by the leverage constraint, $\delta_{\beta}$, relative to a lower interest rate, $\delta_\text{th}$, which would, in turn, allow vaults to issue a larger amount of stablecoins, resulting in larger interest revenue for governance ceteris paribus. Note that the RHS of Assumption~\ref{beta-assumption} attains its maximum value of $\exp(-\frac{\sigma^2}{2})<1$ when $b=0$, implying overcollateralization. \subsection{\texorpdfstring{$F^\ast(\delta)$}{TEXT} w/ Participation Constraint} We now give conditions on the parameters for which the optimal interest rate set by governors satisfies the vaults' participation constraint given the outside utility of COL usage. First, we make the following additional assumption. \begin{assumption}\label{u-assumption} $u\leq Ne^\frac{\sigma^2}{2}+N\Phi(-d_1(\delta^{\ast}))(e^b-1)$. \end{assumption} Here we ensure that the vault is able to achieve a payoff equal to or above its utility from an outside COL opportunity.
Assumption~\ref{u-assumption} ensures that GOV is aware that their optimum interest rate must be sufficiently attractive in order for vaults to participate, i.e., governance must take into account the vaults' idiosyncratic tradeoffs. Armed with this assumption, we characterize a unique equilibrium in the following theorem. \begin{theorem}\label{thm:unique-equil} If hyper-parameters are selected such that Assumptions \ref{assumption-1}, \ref{beta-assumption}, and \ref{u-assumption} are all satisfied, and there exist $ \delta_{\beta}$ and $\delta^{\ast}$ that satisfy \eqref{eq:delta-lower_} and \eqref{eq:delta-upper} respectively, then there is a unique equilibrium with $\delta=\delta^{\ast}$ and $F =\varphi(\delta^{\ast})$. \end{theorem} \section{Governance attack vector}\label{sec:gov-attacks} We now introduce a governance attack vector, as per \cite{gudgeon2020decentralized}. A rational adversary only engages in an attack if its profits exceed its costs. Such an adversary could exploit the governance system by acquiring a sufficiently large GOV token stake to approve a change to the contract code. For instance, the adversarial change could transfer all COL to the adversary's address. More nuanced versions can also extract collateral indirectly by manipulating price feeds, as in \cite{klagesmundt2019vuln}. This may not require 50\% of GOV tokens, as governance participation is commonly low. Neither does it require a single wealthy adversary, since many attackers can collude via a crowdfunding strategy (e.g., \cite{daian2018}), or a single attacker could borrow the required tokens via a flash loan (as in \cite{zoltu2019,gudgeon2020decentralized}). Note that timelocks make it harder to pursue flash loan governance attacks. Formally, we consider an adversarial agent with a $\zeta$ fraction of GOV tokens, who is able to steal a $\gamma$ fraction of collateral in the system. Typically, we might have $\gamma=1$, although not always. A rational attack will take place when profits exceed costs, i.e., when $ \zeta F\frac{e^\delta-1}{1-r}+\alpha<\gamma \mathbb{E}[Ne^R]=\gamma Ne^\frac{\sigma^2}{2}, $ where $\alpha$ is an outside cost to attack, and $\zeta F\frac{e^\delta-1}{1-r}$ is the opportunity cost of an attack, i.e. the profits resulting from a non-attack decision, represented as a geometric series of future fee revenue with discount factor $r$. In idealized DeFi, we might have $\alpha=0$ or very close to 0 (through pseudonymity), while, in traditional finance, $\alpha$ is assumed to be so high that an attack would always be unprofitable, e.g. due to legal repercussions. We could extend the vault choice problem to include the amount of collateral, $N$, locked in the stablecoin system as a share of the total endowed collateral available to the vault, $\bar{N}$. Only locked-in collateral is subject to seizure during a governance attack. We assume for simplicity that $\bar{N} = N$ and leave the decision on how much collateral to lock in as an open problem. If there is no attack, i.e., $\alpha+\zeta F\frac{e^\delta-1}{1-r} \geq\gamma Ne^\frac{\sigma^2}{2}$, the governance choice problem reads as before in \eqref{eq:discrete-gov-system}, and if there is an attack, i.e., $\alpha+\zeta F\frac{e^\delta-1}{1-r} < \gamma Ne^\frac{\sigma^2}{2}$, then the governance's payoff (ex-adversary) is equal to zero. In the Stackelberg equilibrium with vaults as a second player, the vault choice problem is only relevant conditional on the governance attack being unsuccessful, in which case it reads as before.
If the attack is successful, then there is no participation from vaults and the stablecoin system is abandoned. We are thus interested in the non-attack scenario with participation from the vault, since only then is there mutually profitable continued participation for both parties and we have incentive security. The optimal interest rate set by governance that ensures a non-attack decision (and so incentive security) then satisfies the following condition: \begin{equation} \alpha+\zeta \underbrace{F(\delta^\ast)\frac{e^{\delta^\ast}-1}{1-r}}_{G^\ast} \geq \gamma Ne^\frac{\sigma^2}{2} \hspace{0.5cm} \text{equivalently, } G^\ast \geq\frac{\gamma Ne^\frac{\sigma^2}{2}-\alpha}{\zeta}, \label{eq:incentive_security} \end{equation} where $G^*$ is the optimal governance objective value (and $\delta^\ast$ is the unique optimizing interest rate) under the assumptions of Theorem \ref{thm:unique-equil}. Since $\delta^\ast$ represents a Stackelberg equilibrium with vault participation, condition \eqref{eq:incentive_security} is both necessary and sufficient for the existence of an interest rate that satisfies both the non-attack condition \textit{and} the participation constraint. Note that an interest rate that satisfies condition \eqref{eq:incentive_security} may not be feasible in general. The practical implication of the condition is that participants in the system can use it to verify the incentives of decentralized governors and whether given conditions lead to an equilibrium with incentive security or whether governors may have perverse incentives. The condition applies given \emph{rational} behaviour, since agents know ex-ante whether an attack will take place based on parameter values. \section{Conclusion} We have characterized the unique equilibrium arising in non-custodial stablecoins with decentralized governance. The payoff structure is based on closed form valuations of the positions of two stakeholders in the capital structure that underlies the stablecoin. We obtain the equilibrium interest rate and level of participation in settings without governance attacks (Theorem~\ref{thm:unique-equil}) and with a possible governance attack (Eq.~\ref{eq:incentive_security}). Using these closed form solutions, protocol designers can more easily account for the effects their design choices will have on economic equilibrium and incentive security in the system. Our results allow us to quantify how loose the participation constraint can be in order to allow governors to earn a sufficiently high profit in the stablecoin system such that it offsets the proceeds from attacking the system. The implication is that GOV tokens should be expensive enough (e.g., from the fundamental value of `honest' cash flows) so that it is unprofitable for outsiders to buy them with the sole purpose of attacking the system. By comparing the precise value of the GOV tokens to the return of the collateral at stake, adjusted for the attack cost, we can evaluate the security and sustainability of decentralized governance systems. As the adjusted attack cost increases with $\alpha$, one possible mitigation to strengthen these governance systems is the traditional one: increase $\alpha$ to deter attacks through centralized means. One way to do this is to make governors resemble legal fiduciaries with known identities, which often goes against the idealized tenets of DeFi.
Another possibility, recently proposed in \cite{lee2021gov} as ``optimistic approval'', alters the problem in a different way by incorporating a veto mechanism invokable by other parties in the system (e.g., vaults and stablecoin holders) in the case of malicious governance proposals. This would introduce a new term in our model that lowers the success probability of an attack based on the probability that the veto mechanism is invoked. If governors anticipate that the veto mechanism will be invoked, then their expectations of attack profit plummet, expanding the mutual participation region. \paragraph*{Acknowledgements.} The authors thank the Center for Blockchains and Electronic Markets at University of Copenhagen for support. \section{Derivative Analysis}\label{Derivative_analysis} \subsection{Sensitivity of the Expected Collateral Shortfall} Note the following relationship: \begin{align} Fe^\delta\phi(d_2)=N\phi(d_1). \end{align} With this, we have the following derivatives: \begin{align} \frac{\partial P}{\partial F}&=e^\delta\Phi(-d_2)+Fe^\delta\cdot\phi(-d_2)\cdot\left(-\frac{d d_2}{dF}\right)-N\cdot\phi(-d_1)\cdot\left(-\frac{d d_1}{dF}\right)\nonumber\\ &=e^\delta\Phi(-d_2)\\ \frac{\partial P}{\partial\delta}&=Fe^\delta\Phi(-d_2)+Fe^\delta\cdot\phi(-d_2)\cdot\left(-\frac{d d_2}{d\delta}\right)-N\cdot\phi(-d_1)\cdot\left(-\frac{d d_1}{d\delta}\right)\nonumber\\ &=Fe^\delta\Phi(-d_2) \end{align} \subsection{Vault Objective Sensitivities} Denote \begin{equation} V:= Ne^\frac{\sigma^2}{2}+F(e^b-e^\delta)-P(\delta,F)(e^b-1)\label{eq:V} \end{equation} Note the following derivatives: \begin{align} \frac{\partial V}{\partial F}&=\left(e^b-e^\delta\right)-\left(e^b-1\right)e^\delta\Phi(-d_2)\label{eq:dv-df}\\ \frac{\partial V}{\partial\delta}&=-Fe^\delta-(e^b-1)\frac{\partial P}{\partial\delta}\nonumber\\ &=-Fe^\delta-(e^b-1)Fe^\delta\Phi(-d_2)\nonumber\\ &=-Fe^\delta(1+(e^b-1)\Phi(-d_2))<0\quad\text{always} \end{align} \subsection{GOV Objective Sensitivities} Denote \begin{equation} G:= F\left(e^\delta-1\right)\label{eq:G} \end{equation} Note the following derivatives: \begin{align} \frac{\partial G}{\partial\delta}&=Fe^\delta>0\quad\text{always}\\ \frac{\partial G}{\partial F}&=e^\delta-1 \end{align} \section{Proofs}\label{appendix-proofs} \textbf{Proposition \ref{vault_max_unconstrained}}\hypertarget{pf:vault-max-unconstrained}{} \begin{proof} Since $V$ is concave in $F$, we set \eqref{eq:dv-df} equal to zero to obtain a maximum for $V$: \begin{align} \Phi(-d_2)&=\frac{e^b-e^\delta}{e^\delta\left(e^b-1\right)}\nonumber\\ \frac{\log\left(\frac{F}{N}\right)+\delta+\frac{\sigma^2}{2}}{\sigma}&=\Phi^{-1}\left(\frac{e^{b-\delta}-1}{e^b-1}\right)\nonumber\\ F^\ast=\varphi(\delta)&= N\cdot\exp\left[\sigma\cdot\Phi^{-1}\left(\frac{e^{b-\delta}-1}{e^b-1}\right)-\delta-\frac{\sigma^2}{2}\right], \end{align} with \eqref{eq:F-bar} implicitly requiring that $\delta\in[0,b]$.\\ Together with the leverage constraint, this implies \begin{equation} F^\ast(\delta)=\min(\varphi(\delta),\beta N) \end{equation} since the leverage constraint imposes a cap on the amount of stablecoins issued. Substituting \eqref{eq:f-star} into \eqref{eq:G}, we obtain \begin{equation} G= F^\ast(\delta)\left(e^\delta-1\right)\label{eq:g-with-f-star}, \end{equation} thus transforming GOV's optimization into finding the optimum for \eqref{eq:g-with-f-star}.
\end{proof} \noindent\textbf{Proposition \ref{governance_choice_unconstrained}}\hypertarget{pf:governance-choice-unconstrained}{} \begin{proof} We begin by establishing a lower bound for $\delta$: there exists a $\delta_{\beta}\in[0,b]$ such that $\varphi( \delta_{\beta})=\beta N$, i.e. \begin{equation} \frac{F^{\ast}}{N}=\exp\left[\sigma\cdot\Phi^{-1}\left(\frac{e^{b- \delta_{\beta}}-1}{e^b-1}\right)- \delta_{\beta}-\frac{\sigma^2}{2}\right]=\beta\label{eq:delta-lower_}. \end{equation} The quantity $\delta_{\beta}$ is the interest rate at which vaults' leverage constraint is hit, i.e. for $\delta < \delta_{\beta}$ the optimal stablecoin issuance is given by $\beta N$. Indeed, by comparing $\varphi(\delta)$ to $\beta N$ we obtain \begin{align} \frac{d\varphi}{d\delta}&=-\varphi(\delta)\cdot\left[1+\sigma\cdot\frac{1}{\phi\left(\Phi^{-1}\left(\frac{e^{b-\delta}-1}{e^b-1}\right)\right)}\cdot\frac{e^{b-\delta}}{e^b-1}\right]<0\quad\text{always}\label{eq:dvarphi-ddelta} \end{align} Since $\varphi$ is strictly decreasing, there indeed exists a $\delta_{\beta}\in[0,b]$ satisfying \eqref{eq:delta-lower_}, which effectively sets a lower bound on $\delta$, such that $\forall\delta\in( \delta_{\beta},b]$, $\varphi(\delta)<\beta N$. We can now conclude the proof of Proposition \ref{governance_choice_unconstrained}. \textbf{Suppose $\varphi(\delta)\geq\beta N$}, i.e. $\delta\in[0, \delta_{\beta}]$: \begin{align*} G&=\beta N\cdot\left(e^\delta-1\right)\\ \frac{dG}{d\delta}&=\beta Ne^\delta>0\quad\text{always}. \end{align*} Thus, restricted to $[0, \delta_{\beta}]$, GOV's objective is maximized at $\delta_{\beta}$, and hence $\delta^\ast\geq \delta_{\beta}$. \end{proof} \noindent\textbf{Lemma \ref{Delta_threshold}}\hypertarget{pf:delta-threshold}{} \begin{proof} \textbf{Suppose $\varphi(\delta)<\beta N$}, i.e. $\delta\in( \delta_{\beta},b]$: \begin{align*} G&=\varphi(\delta)\cdot\left(e^\delta-1\right)\\ \frac{dG}{d\delta}&=\varphi(\delta)\cdot e^\delta + \frac{\partial\varphi}{\partial\delta}\cdot(e^\delta-1)\\ &=\varphi(\delta)\cdot e^\delta -\varphi(\delta)\cdot\left[1+\sigma\cdot\frac{1}{\phi\left(\Phi^{-1}\left(\frac{e^{b-\delta}-1}{e^b-1}\right)\right)}\cdot\frac{e^{b-\delta}}{e^b-1}\right]\cdot(e^\delta-1)\\ &=\varphi(\delta)\left[1-\sigma\cdot\frac{1}{\phi\left(\Phi^{-1}\left(\frac{e^{b-\delta}-1}{e^b-1}\right)\right)}\cdot\frac{e^b-e^{b-\delta}}{e^b-1}\right] \end{align*} Consider the threshold value $\delta_\text{th}$ such that \begin{align*} \frac{e^{b-\delta_\text{th}}-1}{e^b-1}&=0.5\quad\Rightarrow\quad \delta_\text{th}=b-\log\left(\frac{e^b+1}{2}\right). \end{align*} When $\delta>\delta_\text{th}$, we have as $\delta$ increases \begin{itemize} \item $\Phi^{-1}\left(\frac{e^{b-\delta}-1}{e^b-1}\right)$ decreases from $0$ to $-\infty$ \item $\phi\left(\Phi^{-1}\left(\frac{e^{b-\delta}-1}{e^b-1}\right)\right)$ hence decreases from $\phi(0)$ to 0 \item $\frac{1}{\phi\left(\Phi^{-1}\left(\frac{e^{b-\delta}-1}{e^b-1}\right)\right)}$ increases from $\frac{1}{\phi(0)}$ to $\infty$ \item $\frac{e^b-e^{b-\delta}}{e^b-1}$ increases from 0.5 to $1$ \item Overall, $\sigma\cdot\frac{1}{\phi\left(\Phi^{-1}\left(\frac{e^{b-\delta}-1}{e^b-1}\right)\right)}\cdot\frac{e^b-e^{b-\delta}}{e^b-1}$ is increasing.
\end{itemize} Hence the bracketed factor in $\frac{dG}{d\delta}$ is decreasing for $\delta>\delta_\text{th}$, from which the lemma follows. \end{proof} \noindent\textbf{Assumption \ref{assumption-1}}\hypertarget{pf:assumption-1}{} \begin{proof} \begin{align} &1-\sigma\cdot\frac{1}{\phi\left(\Phi^{-1}\left(\frac{e^{b-\delta_\text{th}}-1}{e^b-1}\right)\right)}\cdot\frac{e^b-e^{b-\delta_\text{th}}}{e^b-1}>0\nonumber\\ \Leftrightarrow\quad&1-\frac{\sigma}{2\phi(0)}>0\nonumber\\ \Leftrightarrow\quad&\sigma<2\phi(0) \end{align} \end{proof} \noindent\textbf{Proposition \ref{Delta_upper}}\hypertarget{pf:delta-upper}{} \begin{proof} Under Assumption \ref{assumption-1}, $G$ is locally increasing at $\delta =\delta_\text{th}$, and $\frac{dG}{d\delta}$ is non-increasing in $\delta$ for $\delta > \delta_\text{th}$. Therefore, when setting $\frac{dG}{d\delta}=0$, i.e. $1-\sigma\cdot\frac{1}{\phi\left(\Phi^{-1}\left(\frac{e^{b-\delta}-1}{e^b-1}\right)\right)}\cdot\frac{e^b-e^{b-\delta}}{e^b-1}=0$, we implicitly obtain a $\delta^{\ast}$ at which $G$ is maximized. \end{proof} \noindent\textbf{Theorem \ref{thm:unique-equil}}\hypertarget{pf:unique-equil}{} \begin{proof} At $\delta^{\ast}$, we have \begin{equation} \sigma\cdot\frac{1}{\phi\left(\Phi^{-1}\left(\frac{e^{b-\delta^{\ast}}-1}{e^b-1}\right)\right)}\cdot\frac{e^b-e^{b-\delta^{\ast}}}{e^b-1}=1\label{eq:delta-upper} \end{equation} Substituting \eqref{eq:delta-upper} into \eqref{eq:V}, \begin{equation} \begin{aligned} V(\delta^{\ast})&=Ne^\frac{\sigma^2}{2}+\varphi(\delta^{\ast})(e^b-e^{\delta^{\ast}})\\ &\quad-P(\delta^{\ast},\varphi(\delta^{\ast}))(e^b-1)\label{eq:v-delta-upper} \end{aligned} \end{equation} \begin{equation} \begin{aligned} P(\delta^{\ast},\varphi(\delta^{\ast}))&=\varphi(\delta^{\ast})e^{\delta^{\ast}}\Phi(-d_2)-N\Phi(-d_1)\label{eq:p-delta-upper} \end{aligned} \end{equation} \begin{equation} \begin{aligned} d_1(\delta^{\ast})&=\frac{1}{\sigma}\cdot\left(\log\left(\frac{N}{\varphi(\delta^{\ast})e^{\delta^{\ast}}}\right)+\frac{\sigma^2}{2}\right)\\ &=-\Phi^{-1}\left(\frac{e^{b-\delta^{\ast}}-1}{e^b-1}\right)+\sigma\label{eq:d1-delta-upper} \end{aligned} \end{equation} \begin{equation} \begin{aligned} d_2(\delta^{\ast})&=d_1-\sigma=-\Phi^{-1}\left(\frac{e^{b-\delta^{\ast}}-1}{e^b-1}\right)\label{eq:d2-delta-upper} \end{aligned} \end{equation} We obtain \begin{align*} P(\delta^{\ast},\varphi(\delta^{\ast}))&=\varphi(\delta^{\ast})\frac{e^{b}-e^{\delta^{\ast}}}{e^b-1}-N\Phi(-d_1) \end{align*} and \begin{align*} V(\delta^{\ast}) &=Ne^\frac{\sigma^2}{2}+N\Phi(-d_1)(e^b-1) \end{align*} such that we must assume \begin{equation} u\leq Ne^\frac{\sigma^2}{2}+N\Phi(-d_1(\delta^{\ast}))(e^b-1), \end{equation} in order for vault participation to hold. \end{proof}
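Finally, a closing sketch (ours; it reuses the helpers and illustrative parameters from the earlier snippets, and the attack parameters $\zeta$, $\gamma$, $\alpha$, $r$ below are equally illustrative) solves the first-order condition \eqref{eq:delta-upper} for $\delta^{\ast}$, recovers the equilibrium of Theorem~\ref{thm:unique-equil}, and tests the incentive-security condition \eqref{eq:incentive_security}.
\begin{verbatim}
# Equilibrium and incentive security (ours; builds on earlier snippets).
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def foc(delta):
    """LHS minus 1 of the first-order condition (eq:delta-upper)."""
    r = (np.exp(b - delta) - 1) / (np.exp(b) - 1)
    lhs = (sigma / norm.pdf(norm.ppf(r))
           * (np.exp(b) - np.exp(b - delta)) / (np.exp(b) - 1))
    return lhs - 1.0

delta_star = brentq(foc, delta_th + 1e-6, b - 1e-6)
F_opt = varphi(delta_star)                 # equilibrium issuance
V_star = (N * np.exp(sigma**2 / 2)
          + N * norm.cdf(-d1(delta_star, F_opt)) * (np.exp(b) - 1))
assert delta_beta < delta_star             # ordering required by Theorem 1
assert 1.1 <= V_star                       # Assumption 3 with u = 1.1

# Incentive security: alpha + zeta*G_star >= gamma*N*e^{sigma^2/2},
# with G_star the discounted fee stream F*(e^{delta*}-1)/(1-r_disc).
zeta, gamma_, alpha, r_disc = 0.1, 1.0, 0.0, 0.05
G_star = F_opt * (np.exp(delta_star) - 1) / (1 - r_disc)
print(alpha + zeta * G_star >= gamma_ * N * np.exp(sigma**2 / 2))
\end{verbatim}
With these numbers the printed condition is \texttt{False}: with $\alpha\approx 0$ and a small GOV stake, the discounted fee stream is far below the value of the seizable collateral, illustrating why the attack cost and the value of honest cash flows are central to incentive security.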
\section{Introduction} Off-policy Deep Reinforcement Learning (RL) algorithms aim to improve sample efficiency by reusing past experience. Recently, a number of new off-policy Deep RL algorithms have been proposed for control tasks with continuous state and action spaces, including Deep Deterministic Policy Gradient (DDPG) and Twin Delayed DDPG (TD3) \citep{lillicrap2015ddpg,fujimoto2018td3}. TD3, which introduced clipped double-Q learning, delayed policy updates and target policy smoothing, has been shown to be significantly more sample efficient than popular on-policy methods for a wide range of MuJoCo benchmarks. The field of Deep Reinforcement Learning (DRL) has also recently seen a surge in the popularity of maximum entropy RL algorithms. In particular, Soft Actor-Critic (SAC), which combines off-policy learning with maximum-entropy RL, not only has many attractive theoretical properties, but can also give superior performance on a wide range of MuJoCo environments, including on the high-dimensional environment Humanoid for which both DDPG and TD3 perform poorly \citep{haarnoja2018sac, haarnoja2018sacapps,langlois2019benchmarkingmodelbased}. SAC and TD3 have similar off-policy structures with clipped double-Q learning, but SAC also employs maximum entropy reinforcement learning. In this paper, we aim to develop off-policy DRL algorithms that not only provide state-of-the-art performance but are also simple and minimalistic. We first seek to understand the primary contribution of the entropy term to the performance of maximum entropy algorithms. For the MuJoCo benchmark, we demonstrate that when using the standard objective without entropy along with standard additive noise exploration, there is often insufficient exploration due to the bounded nature of the action spaces. Specifically, the outputs of the policy network are often far outside the bounds of the action space, so that they need to be squashed to fit within the action space. The squashing causes actions to persistently take on their maximal values, leading to insufficient exploration. In contrast, the entropy term in the SAC objective forces the outputs to have sensible values, so that even with squashing, exploration is maintained. We conclude that, for the MuJoCo environments, the entropy term in the objective for Soft Actor-Critic principally addresses the bounded nature of the action spaces. With this insight, we propose the Streamlined Off Policy (SOP) algorithm, which is a minimalistic off-policy algorithm that includes a simple but crucial output normalization. The normalization addresses the bounded nature of the action spaces, allowing satisfactory exploration throughout training. We also consider using inverting gradients (IG) \citep{hausknecht2015deep} with the streamlined scheme, which we refer to as SOP\textunderscore IG. Both approaches use the standard objective without the entropy term. Our results show that SOP and SOP\textunderscore IG match the sample efficiency and robust performance of SAC, including on the challenging Ant and Humanoid environments. Having matched SAC performance without using entropy maximization, we then seek to attain state-of-the-art performance by employing a non-uniform sampling method for selecting transitions from the replay buffer during training.
Priority Experience Replay (PER), a non-uniform sampling scheme, has been shown to significantly improve performance for the Atari games benchmark \citep{schaul2015prioritized}, but requires a sophisticated data structure for efficient sampling. In keeping with the theme of simplicity and Occam's principle, we propose a novel and simple non-uniform sampling method for selecting transitions from the replay buffer during training. Our method, called Emphasizing Recent Experience (ERE), samples recent experience more aggressively while not neglecting past experience. Unlike PER, ERE is only a few lines of code and does not rely on any sophisticated data structures. We show that when SOP, SOP\textunderscore IG, or SAC is combined with ERE, the resulting algorithm outperforms SAC and provides state-of-the-art performance. For example, for Ant and Humanoid, SOP+ERE improves over SAC by $21\%$ and $24\%$, respectively, with one million samples. The contributions of this paper are thus threefold. First, we uncover the primary contribution of the entropy term of maximum entropy RL algorithms for the MuJoCo environments. Second, we propose a streamlined algorithm which does not employ entropy maximization but nevertheless matches the sampling efficiency and robust performance of SAC for the MuJoCo benchmarks. And third, we propose a simple non-uniform sampling scheme to achieve state-of-the-art performance for the MuJoCo benchmarks. We provide public code for SOP+ERE for reproducibility \footnote{\url{https://github.com/AutumnWu/Streamlined-Off-Policy-Learning}}. \section{Preliminaries} We represent an environment as a Markov Decision Process (MDP) which is defined by the tuple $(\mathcal{S}, \mathcal{A}, r, p, \gamma)$, where $\mathcal{S}$ and $\mathcal{A}$ are continuous multi-dimensional state and action spaces, $r(s,a)$ is a bounded reward function, $p(s'|s,a)$ is a transition function, and $\gamma$ is the discount factor. Let $s(t)$ and $a(t)$ respectively denote the state of the environment and the action chosen at time $t$. Let $\pi = \pi(a|s), \; s \in {\mathcal{S}}, a \in {\mathcal{A}}$ denote the policy. We further denote $K$ for the dimension of the action space, and write $a_k$ for the $k$th component of an action $a \in {\mathcal{A}}$, that is, $a = (a_1,\ldots,a_K)$. The expected discounted return for policy $\pi$ beginning in state $s$ is given by: \begin{equation} \label{standard_return} V_\pi(s)=\mathbb{E}_\pi[\sum_{t=0}^{\infty} \gamma^t r(s(t),a(t)) | s(0)=s] \end{equation} Standard MDP and RL problem formulations seek to maximize $V_\pi(s)$ over policies $\pi$. For finite state and action spaces, and under suitable conditions for continuous state and action spaces, there exists an optimal policy that is deterministic \citep{puterman2014markov, bertsekas1996neuro}. In RL with an unknown environment, exploration is required to learn a suitable policy. In DRL with continuous action spaces, typically the policy is modeled by a parameterized policy network which takes as input a state $s$ and outputs a value $\mu(s; \theta)$, where $\theta$ represents the current parameters of the policy network \citep{schulman2015trpo,schulman2017ppo,vuong2018spu,lillicrap2015ddpg, fujimoto2018td3}. During training, typically additive random noise is added for exploration, so that the actual action taken when in state $s$ takes the form $a = \mu(s; \theta) + \epsilon$ where $\epsilon$ is a $K$-dimensional Gaussian random vector with each component having zero mean and standard deviation $\sigma$.
During testing, $\epsilon$ is set to zero. \subsection{Maximum Entropy Reinforcement Learning} Maximum entropy reinforcement learning takes a different approach from Equation (\ref{standard_return}) by optimizing policies to maximize both the expected return and the expected entropy of the policy \citep{ziebart2008maximum,ziebart2010modeling,todorov2008general,rawlik2012stochastic,levine2013guided, levine2016end,nachum2017bridging,haarnoja2017reinforcement,haarnoja2018sac,haarnoja2018sacapps}. In particular, the maximum entropy RL objective is: \begin{align*} V_\pi(s)= \sum_{t=0}^{\infty} \gamma^t \mathbb{E}_\pi[ &r(s(t),a(t)) \\ &+ \lambda H(\pi(\cdot|s(t))) \mid s(0)=s] \end{align*} where $H(\pi(\cdot|s))$ is the entropy of the policy when in state $s$, and the temperature parameter $\lambda$ determines the relative importance of the entropy term against the reward. For maximum entropy DRL, when given state $s$ the policy network will typically output a $K$-dimensional vector $\sigma(s; \theta)$ in addition to the vector $\mu(s;\theta)$. The action selected when in state $s$ is then modeled as $\mu(s;\theta) + \epsilon$ where $\epsilon \sim \mathcal{N}(0,\sigma(s; \theta))$. Maximum entropy RL has been touted to have a number of conceptual and practical advantages for DRL \citep{haarnoja2018sac,haarnoja2018sacapps}. For example, it has been argued that the policy is incentivized to explore more widely, while giving up on clearly unpromising avenues. It has also been argued that the policy can capture multiple modes of near-optimal behavior, that is, in problem settings where multiple actions seem equally attractive, the policy will commit equal probability mass to those actions. In this paper, we show for the MuJoCo benchmarks that the standard additive noise exploration suffices and can achieve the same performance as maximum entropy RL. \section{The Squashing Exploration Problem} \subsection{Bounded Action Spaces} \begin{figure*}[h!tb] \centering \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.95\linewidth]{Figures/SAC_entropy_humanoid.png} \caption{Humanoid-v2} \label{fig:humanoid_traincurve} \end{subfigure} \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.95\linewidth]{Figures/SAC_entropy_walker2d.png} \caption{Walker2d-v2} \label{fig:walker2d_traincurve} \end{subfigure} \caption{SAC performance with and without entropy maximization} \label{fig:SAC_training_curves} \end{figure*} Continuous environments typically have bounded action spaces, that is, along each action dimension $k$ there is a minimum possible action value $a_k^{\min}$ and a maximum possible action value $a_k^{\max}$. When selecting an action, the action needs to be selected within these bounds before the action can be taken. DRL algorithms often handle this by squashing the action so that it fits within the bounds. For example, if along any one dimension the value $\mu(s;\theta) + \epsilon$ exceeds $a^{\max}$, the action is set (clipped) to $a^{\max}$. Alternatively, a smooth form of squashing can be employed. For example, suppose $a_k^{\min} = - M$ and $a_k^{\max} = + M$ for some positive number $M$, then a smooth form of squashing could use $a = M \tanh(\mu(s;\theta) + \epsilon )$ in which $\tanh()$ is being applied to each component of the $K$-dimensional vector. DDPG \citep{hou2017ddpgper} and TD3 \citep{fujimoto2018td3} use clipping, and SAC \citep{haarnoja2018sac,haarnoja2018sacapps} uses smooth squashing with the $\tanh()$ function.
For concreteness, henceforth we will assume that smooth squashing with the $\tanh()$ function is employed. We note that an environment may actually allow the agent to input actions that are outside the bounds. In this case, the environment will typically first clip the actions internally before passing them on to the ``actual'' environment \citep{fujita2018clipped}. We now make a simple but crucial observation: squashing actions to fit into a bounded action space can have a disastrous effect on additive-noise exploration strategies. To see this, let the output of the policy network be $\mu(s) = (\mu_1(s),\ldots,\mu_K(s))$. Consider an action taken along one dimension $k$, and suppose $\mu_k(s) \gg 1$ and $|\epsilon_k|$ is relatively small compared to $\mu_k(s)$. Then the action $a_k = M \tanh(\mu_k(s) + \epsilon_k )$ will be very close (essentially equal) to $M$. If the condition $\mu_k(s) \gg 1$ persists over many consecutive states, then $a_k$ will remain close to $M$ for all these states, and consequently there will be essentially no exploration along the $k$th dimension. We will refer to this problem as the {\em squashing exploration problem}. We will argue that algorithms using the standard objective (Equation \ref{standard_return}) with additive noise exploration can be greatly impaired by the squashing exploration problem. \subsection{How Does Entropy Maximization Help for the MuJoCo Environments?} \label{SAC_with_without_entropy} \begin{figure*}[h!tb] \centering \begin{subfigure}{\textwidth} \centering \includegraphics[width=0.235\linewidth]{Figures/humanoid_dim11_1.png} \label{fig:humanoid_muk_1} \includegraphics[width=0.235\linewidth]{Figures/humanoid_dim11_2.png} \label{fig:humanoid_muk_2} \includegraphics[width=0.235\linewidth]{Figures/humanoid_dim11_3.png} \label{fig:humanoid_muk_3} \includegraphics[width=0.235\linewidth]{Figures/humanoid_dim11_4.png} \label{fig:humanoid_muk_4} \caption{Humanoid-v2} \end{subfigure} \begin{subfigure}{\textwidth} \centering \includegraphics[width=0.235\linewidth]{Figures/walker_dim1_1.png} \label{fig:walker_muk_1} \includegraphics[width=0.235\linewidth]{Figures/walker_dim1_2.png} \label{fig:walker_muk_2} \includegraphics[width=0.235\linewidth]{Figures/walker_dim1_3.png} \label{fig:walker_muk_3} \includegraphics[width=0.235\linewidth]{Figures/walker_dim1_4.png} \label{fig:walker_muk_4} \caption{Walker2d-v2} \end{subfigure} \caption{$\mu_k$ and $a_k$ values from SAC and SAC without entropy maximization. See section 3.2 for a discussion.} \label{fig:SAC_actions} \end{figure*} SAC is a maximum-entropy off-policy DRL algorithm which provides good performance across all of the MuJoCo benchmark environments. To the best of our knowledge, it currently provides state-of-the-art performance for the MuJoCo benchmark. In this section, we argue that the principal contribution of the entropy term in the SAC objective is to resolve the squashing exploration problem, thereby maintaining sufficient exploration when facing bounded action spaces. To argue this, we consider two DRL algorithms: SAC with adaptive temperature \citep{haarnoja2018sacapps}, and SAC with entropy removed altogether (temperature set to zero) but everything else the same. We refer to them as {\em SAC} and as {\em SAC without entropy}. For SAC without entropy, for exploration we use additive zero-mean Gaussian noise with $\sigma$ fixed at $0.3$. Both algorithms use $\tanh$ squashing. We compare these two algorithms on two MuJoCo environments: Humanoid-v2 and Walker-v2.
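Before turning to the training curves, the following minimal numerical sketch (ours, not taken from any of the cited implementations) illustrates the squashing exploration problem directly: with additive Gaussian noise of standard deviation $0.3$, the spread of the squashed action collapses as the policy output $\mu_k$ grows.
\begin{verbatim}
# Illustration of the squashing exploration problem (ours).
import numpy as np

rng = np.random.default_rng(0)
M, sigma = 1.0, 0.3                    # action bound and noise scale
eps = rng.normal(0.0, sigma, size=100_000)

for mu_k in (0.5, 2.0, 10.0):          # policy outputs of growing magnitude
    a_k = M * np.tanh(mu_k + eps)      # squashed actions along dimension k
    print(f"mu_k = {mu_k:5.1f}   std(a_k) = {a_k.std():.4f}")
\end{verbatim}
The standard deviation of $a_k$ drops from roughly $0.24$ at $\mu_k=0.5$ to about $0.02$ at $\mu_k=2$ and to essentially zero at $\mu_k=10$, which is precisely the loss of exploration described above.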
Figure \ref{fig:SAC_training_curves} shows the performance of the two algorithms with 10 seeds. For Humanoid, SAC performs much better than SAC without entropy. However, for Walker, SAC without entropy performs nearly as well as SAC, implying that maximum entropy RL is not as critical for this environment. To understand why entropy maximization is important for one environment but less so for another, we examine the actions selected when training these two algorithms. Humanoid and Walker have action dimensions $K=17$ and $K=6$, respectively. Here we show representative results for one dimension for both environments. The top and bottom rows of Figure \ref{fig:SAC_actions} show results for Humanoid and Walker, respectively. The first column shows the $\mu_k$ values for an interval of 1,000 consecutive time steps, namely, for time steps 599,000 to 600,000. The second column shows the actual action values passed to the environment for these time steps. The third and fourth columns show a concatenation of 10 such intervals of 1000 time steps, with each interval coming from a larger interval of 100,000 time steps. The top and bottom rows of Figure \ref{fig:SAC_actions} are strikingly different. For Humanoid using SAC with entropy, the $\mu_k$ values are small, mostly in the range $[-1.5,1.5]$, and fluctuate significantly. This allows the action values to also fluctuate significantly, providing exploration in the action space. On the other hand, for SAC without entropy the $\mu_k$ values are typically huge, most of them well outside the interval $[-10,10]$. This causes the actions $a_k$ to be persistently clustered at either $M$ or $-M$, leading to essentially no exploration along that dimension. For Walker, we see that for both algorithms, the $\mu_k$ values are sensible, mostly in the range $[-1,1]$, and therefore the actions chosen by both algorithms exhibit exploration. In conclusion, the principal benefit of maximum entropy RL in SAC for the MuJoCo environments is that it resolves the squashing exploration problem. For some environments (such as Walker), the outputs of the policy network take on sensible values, so that sufficient exploration is maintained and overall good performance is achieved without the need for entropy maximization. For other environments (such as Humanoid), entropy maximization is needed to reduce the magnitudes of the outputs so that exploration is maintained and overall good performance is achieved. \section{Matching SOTA Performance without Entropy Maximization} In this paper we examine two approaches for matching SAC performance without using entropy maximization.
\begin{figure*}[h!tb] \centering \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.95\linewidth]{Figures/SOP_hopper.png} \caption{Hopper-v2} \label{fig:sop-sac-hopper} \end{subfigure} \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.95\linewidth]{Figures/SOP_walker2d.png} \caption{Walker2d-v2} \label{fig:sop-sac-walker2d} \end{subfigure} \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.95\linewidth]{Figures/SOP_halfcheetah.png} \caption{HalfCheetah-v2} \label{fig:sop-sac-halfcheetah} \end{subfigure} \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.95\linewidth]{Figures/SOP_ant.png} \caption{Ant-v2} \label{fig:sop-sac-ant} \end{subfigure} \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.95\linewidth]{Figures/SOP_humanoid.png} \caption{Humanoid-v2} \label{fig:sop-sac-humanoid} \end{subfigure} \caption{Streamlined Off-Policy (SOP) versus SAC, SOP\textunderscore IG and TD3} \label{fig:SOP-SAC} \end{figure*} \subsection{Output Normalization} As we observed in the previous section, in some environments the policy network output values $|\mu_k|$, $k=1,\ldots,K$ can become persistently huge, which leads to insufficient exploration due to the squashing. We propose a simple solution of normalizing the outputs of the policy network when they collectively (across the action dimensions) become too large. To this end, let $\mu = (\mu_1,\dots,\mu_K)$ be the output of the original policy network, and let $G= \sum_k |\mu_k|/K$. Here $G$ is simply the average of the magnitudes of the components of $\mu$. The normalization procedure is as follows. If $G>1$, then we reset $\mu_k \leftarrow \mu_k/G$ for all $k=1,\ldots,K$; otherwise, we leave $\mu$ unchanged. With this simple normalization, we are assured that the average of the normalized magnitudes is never greater than one. Our Streamlined Off Policy (SOP) algorithm is described in Algorithm \ref{alg:sop}. The algorithm is essentially TD3 minus the delayed policy updates and the target policy parameters but with the addition of the normalization described above. SOP also uses $\tanh$ squashing instead of clipping, since $\tanh$ gives somewhat better performance in our experiments. The SOP algorithm is ``streamlined'' as it has no entropy terms, temperature adaptation, target policy parameters or delayed policy updates. \subsection{Inverting Gradients} In our experiments, we also consider using SOP but replacing the output normalization with the IG scheme \citep{hausknecht2015deep}. In this scheme, when gradients suggest increasing the action magnitudes, gradients are down-scaled if actions are within the boundaries, and inverted otherwise. More specifically, let $p$ be the output of the last layer of the policy network, and let $p_{\min}$ and $p_{\max}$ be the action boundaries. The IG approach can be summarized as follows \citep{hausknecht2015deep}: \begin{equation} \nabla_{p} = \nabla_{p} \cdot \begin{cases} \frac{p_{\max} - p}{p_{\max} - p_{\min}} & \parbox[t]{0.3 \linewidth}{\RaggedRight if $\nabla_{p}$ suggests increasing $p$} \\ \frac{p - p_{\min}}{p_{\max} - p_{\min}} &\text{otherwise} \end{cases} \end{equation} where $\nabla_{p}$ is the gradient of the policy loss with respect to $p$. Although IG is not complicated, it is not as simple and straightforward as simply normalizing the outputs. We refer to SOP with IG as SOP\textunderscore IG. Implementation details can be found in the supplementary materials.
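To make the two schemes concrete, here is a short sketch (ours; function names are not from the released code). The SOP rule follows Section 4.1 directly, and the IG rule follows the displayed equation, under the convention that a positive gradient ``suggests increasing'' $p$.
\begin{verbatim}
# Sketch of SOP output normalization and Inverting Gradients (ours).
import numpy as np

def sop_normalize(mu):
    """SOP (Sec. 4.1): rescale the policy outputs when their mean
    magnitude G exceeds 1; otherwise leave them unchanged."""
    G = np.mean(np.abs(mu))
    return mu / G if G > 1.0 else mu

def invert_gradients(grad_p, p, p_min=-1.0, p_max=1.0):
    """IG (Hausknecht & Stone 2015): down-scale gradients that push p
    toward a boundary, and invert them via the scaling otherwise."""
    scale = np.where(grad_p > 0,               # "suggests increasing p"
                     (p_max - p) / (p_max - p_min),
                     (p - p_min) / (p_max - p_min))
    return grad_p * scale

mu = np.array([3.0, -0.5, 1.5])   # K = 3 policy outputs, G = 5/3 > 1
print(sop_normalize(mu))           # -> [ 1.8 -0.3  0.9 ]
\end{verbatim}
Note that the SOP rescaling acts on all components jointly, preserving the relative proportions of the $\mu_k$ while keeping their average magnitude at most one.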
\subsection{Experimental Results for SOP and SOP\textunderscore IG} Figure \ref{fig:SOP-SAC} compares SAC (with temperature adaptation \citep{haarnoja2018sac,haarnoja2018sacapps}) with SOP, SOP\textunderscore IG, and TD3 plus the simple normalization (which we call TD3+) for five of the most challenging MuJoCo environments. Using the same baseline code, we train each of the algorithms with 10 seeds. Each algorithm performs five evaluation rollouts every 5000 environment steps. The solid curves correspond to the mean, and the shaded region to the standard deviation of the returns over seeds. Results show that SOP, the simplest of all the schemes, performs as well as or better than all the other schemes. In particular, SAC and SOP have similar sample efficiency and robustness across all environments. TD3+ has slightly weaker asymptotic performance for Walker and Humanoid. SOP\textunderscore IG initially learns slowly for Humanoid with high variance across random seeds, but gives similar asymptotic performance. These experiments confirm that the performance of SAC can be achieved without maximum entropy RL. \subsection{Ablation Study for SOP} In this ablation study, we separately examine the importance of $(i)$ the normalization at the output of the policy network; $(ii)$ the double Q networks; and $(iii)$ the randomization used in line 8 of the SOP algorithm (that is, target policy smoothing \citep{fujimoto2018td3}). Figure \ref{fig:ablation} shows the results for the five environments considered in this paper. In Figure \ref{fig:ablation}, ``no normalization'' is SOP without the normalization of the outputs of the policy network; ``single Q'' is SOP with one Q-network instead of two; and ``no smoothing'' is SOP without the randomness in line 8 of the algorithm. Figure \ref{fig:ablation} confirms that double Q-networks are critical for obtaining good performance \citep{van2016ddqn,fujimoto2018td3,haarnoja2018sac}. Figure \ref{fig:ablation} also shows that output normalization is critical. Without output normalization, performance fluctuates wildly, and average performance can decrease dramatically, particularly for Humanoid and HalfCheetah. Target policy smoothing improves performance by a relatively small amount. In addition, to better understand whether the simple normalization term in SOP achieves a similar effect to explicitly maximizing entropy, we plot the entropy values for SOP and SAC throughout training for all environments. We found that SOP and SAC have very similar entropy values across training, while removing the entropy term from SAC makes the entropy value much lower. This indicates that the effect of the action normalization is very similar to that of maximizing entropy. The results can be found in the supplementary materials.
\begin{figure*}[htb] \centering \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.95\linewidth]{Figures/Ablation_hopper.png} \caption{Hopper-v2} \label{fig:ablation-hopper} \end{subfigure} \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.95\linewidth]{Figures/Ablation_walker2d.png} \caption{Walker2d-v2} \label{fig:ablation-walker2d} \end{subfigure} \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.95\linewidth]{Figures/Ablation_halfcheetah.png} \caption{HalfCheetah-v2} \label{fig:ablation-halfcheetah} \end{subfigure} \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.95\linewidth]{Figures/Ablation_ant.png} \caption{Ant-v2} \label{fig:ablation-ant} \end{subfigure} \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.95\linewidth]{Figures/Ablation_humanoid.png} \caption{Humanoid-v2} \label{fig:ablation-humanoid} \end{subfigure} \caption{Ablation Study for SOP} \label{fig:ablation} \end{figure*} \section{Non-Uniform Sampling} \begin{figure*}[htb] \centering \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.95\linewidth]{Figures/sac_3ere_hopper.png} \caption{Hopper-v2} \end{subfigure} \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.95\linewidth]{Figures/sac_3ere_walker2d.png} \caption{Walker2d-v2} \end{subfigure} \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.95\linewidth]{Figures/sac_3ere_halfcheetah.png} \caption{HalfCheetah-v2} \end{subfigure} \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.95\linewidth]{Figures/sac_3ere_ant.png} \caption{Ant-v2} \end{subfigure} \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.95\linewidth]{Figures/sac_3ere_humanoid.png} \caption{Humanoid-v2} \end{subfigure} \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=\linewidth]{Figures/culmulative_expected_5curve.png} \caption{Uniform and ERE sampling} \label{fig:expected_update_done} \end{subfigure} \caption{(a) to (e) show the performance of the SAC baseline, SOP+ERE, SAC+ERE, and SOP\textunderscore IG+ERE. (f) shows, over a period of 1000 updates, the expected number of times the $t$th data point is sampled (with $\eta=0.996$). ERE allows new data to be sampled many times soon after being collected. } \label{fig:sop-ere} \end{figure*} In the previous section we showed that SOP, SOP\textunderscore IG, and SAC all offer roughly equivalent sample-efficiency performance, with SOP being the simplest of the algorithms. We now show how a small change in the sampling scheme, which can be applied to any off-policy scheme (including SOP, SOP\textunderscore IG and SAC), can achieve state of the art performance for the MuJoCo benchmark. We call this non-uniform sampling scheme Emphasizing Recent Experience (ERE). ERE has three core features: $(i)$ It is a general method applicable to any off-policy algorithm; $(ii)$ It requires no special data structure, is very simple to implement, and has near-zero computational overhead; $(iii)$ It only introduces one additional important hyper-parameter. The basic idea is that during the parameter update phase, the first mini-batch is sampled from the entire buffer, and then for each subsequent mini-batch we gradually reduce the range of sampling to sample more from recent data. Specifically, assume that in the current update phase we are to make $1000$ mini-batch updates. Let $N$ be the max size of the buffer.
Then for the $k$th update, we sample uniformly from the most recent $c_k$ data points, where $c_k = N \cdot \eta^{k}$ and $\eta \in (0,1]$ is a hyper-parameter that determines how much emphasis we put on recent data. $\eta=1$ is uniform sampling. When $\eta < 1$, $c_k$ decreases as we perform each update. $\eta$ can be made to adapt to the learning speed of the agent so that we do not have to tune it for each environment. The algorithmic and implementation details of such an adaptive scheme are given in the supplementary material. The effect of such a sampling formulation is twofold. The first is that recent data have a higher chance of being sampled. The second is that sampling is done in an ordered way: we first sample from all the data in the buffer, and gradually shrink the range of sampling to only sample from the most recent data. This scheme reduces the chance of over-writing parameter changes made by new data with parameter changes made by old data \citep{french1999catastrophic,mcclelland1995there, mccloskey1989catastrophic, ratcliff1990connectionist, robins1995catastrophic}. This allows us to quickly obtain information from recent data, and better approximate the value functions near recently-visited states, while still maintaining an acceptable approximation near states visited in the more distant past. What is the effect of replacing uniform sampling with ERE? First note that if we uniformly sample several times from a fixed buffer (uniform fixed), where the buffer is filled, and no new data is coming in, then the expected number of times a data point has been sampled is the same for all data points. Now consider a scenario where we have a buffer of size 1000 (FIFO queue), we collect one data point at a time, and then perform one update with a mini-batch size of one. If we start with an empty buffer and sample uniformly (uniform empty), as data fills the buffer, each data point gets less and less chance of being sampled. Specifically, starting from timestep 0, over a period of 1000 updates, the expected number of times the $t$th data point (the data point collected at the $t$th timestep) has been sampled is: $\frac{1}{t} + \frac{1}{t+1} + \dots + \frac{1}{1000}$. If instead we start with a filled buffer and sample uniformly (uniform full), then the expected number of times the $t$th data point has been sampled is $\sum_{t'=t}^{1000} \frac{1}{1000} = \frac{1000-t}{1000}$. Figure \ref{fig:expected_update_done} shows the expected number of times a data point has been sampled (at the end of 1000 updates) as a function of its position in the buffer. We see that when uniform sampling is used, older data are expected to get sampled much more than newer data, especially in the empty buffer case. This is undesirable because, when the agent is improving and exploring new areas of the state space, new data points may contain more interesting information than the old ones, which have already been updated many times. When we apply the ERE scheme, we effectively skew the curve towards assigning a higher expected number of samples to the newer data, allowing the newer data to be frequently sampled soon after being collected, which can accelerate the learning process. In Figure \ref{fig:expected_update_done} we can see that the curves for ERE (ERE empty and ERE full) are much closer to the horizontal line (Uniform fixed), compared to when uniform sampling is used. With ERE, at any point during training, we expect all data points currently in the buffer to have been sampled approximately the same number of times.
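As a concrete sketch (ours, not the released implementation), the index arithmetic of ERE is only a few lines; the floor $c_{\min}$ below, which keeps the sampling range from collapsing for large $k$, is our own addition for illustration.
\begin{verbatim}
# Minimal ERE sketch (ours): for the k-th mini-batch update, sample
# uniformly from the most recent c_k = max(N * eta^k, c_min) entries.
import numpy as np

rng = np.random.default_rng(0)
N_max, eta, c_min = 100_000, 0.996, 2_500  # capacity, decay, floor (ours)

def ere_start(k, buffer_len):
    """First admissible index for the k-th update (0 = oldest entry)."""
    c_k = int(max(N_max * eta ** k, c_min))
    c_k = min(c_k, buffer_len)             # cannot exceed stored data
    return buffer_len - c_k                # restrict to newest c_k points

buffer_len, batch_size = 100_000, 256
for k in (1, 250, 500, 1000):
    start = ere_start(k, buffer_len)
    batch = rng.integers(start, buffer_len, size=batch_size)
    print(k, start)   # start grows: later updates use ever newer data
\end{verbatim}
With $\eta=0.996$ and $N=10^5$, the 250th update already restricts sampling to the newest $\sim$37\% of the buffer, and by the 1000th update the range has shrunk to the floor, i.e. only the newest few percent of the data.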
Simply using a smaller buffer size will also allow recent data to be sampled more often, and can sometimes lead to a slightly faster learning speed in the early stage. However, it also tends to reduce the stability of learning, and to damage long-term performance. Another simple method is to sample data according to an exponential scheme, where more recent data points are assigned exponentially higher probability of being sampled. In the supplementary materials, we provide further algorithmic details and analysis of ERE, compare ERE to the exponential sampling scheme, and show that ERE provides a stronger performance improvement. We also compare to another sampling scheme called Prioritized Experience Replay (PER) \citep{schaul2015prioritized}. PER assigns higher probability to data points that give a high absolute TD error when used for the Q update, and then applies an importance sampling weight according to the probability of sampling. A performance comparison can also be found in the supplementary materials. Results show that in the MuJoCo environments, PER can sometimes give a performance gain, but it is not as strong as ERE and the exponential scheme. \subsection{Experimental Results for ERE} Figure \ref{fig:sop-ere} compares the performance of SAC (considered the baseline here), SAC+ERE, SOP+ERE, and SOP\textunderscore IG+ERE. ERE gives a significant boost to all three algorithms, surpassing SAC and achieving a new state of the art. Among the three algorithms, SOP+ERE gives the best performance for Ant and Humanoid (the two most challenging environments) and performance roughly equivalent to SAC+ERE and SOP\textunderscore IG+ERE for the other three environments. In particular, for Ant and Humanoid, SOP+ERE improves performance by 21\% and 24\% over SAC at 1 million samples, respectively. For Humanoid, at 3 million samples, SOP+ERE improves performance by 15\%. In conclusion, SOP+ERE is not only a simple algorithm, but one that exceeds previous state-of-the-art performance. \begin{algorithm*}[htb] \caption{Streamlined Off-Policy} \label{alg:sop} \begin{algorithmic}[1] \STATE Input: initial policy parameters $\theta$, Q-function parameters $\phi_1$, $\phi_2$, empty replay buffer $\mathcal{D}$ \STATE Throughout, the output of the policy network $\mu_{\theta}(s)$ is normalized if $ G > 1 $. (See Section 4.1.) \STATE Set target parameters equal to main parameters $\phi_{\text{targ}_i} \leftarrow \phi_i$ for $i = 1, 2$ \REPEAT \STATE Generate an episode using actions $ a = M \text{tanh} (\mu_{\theta} (s) + \epsilon)$ where $\epsilon \sim \mathcal{N}(0,\sigma_1)$.
\FOR {$j$ in range(however many updates)} \STATE Randomly sample a batch of transitions, $B = \{ (s,a,r,s') \}$ from $\mathcal{D}$ \STATE Compute targets for Q functions: \\ \hskip1.5em $y_q (r,s') = r + \gamma \min_{i=1,2} Q_{\phi_{\text{targ}_i}}(s', M \text{tanh} (\mu_{\theta} (s') + \delta )) \;\;\;\;\; \delta \sim \mathcal{N}(0, \sigma_2)$ \STATE Update Q-functions by one step of gradient descent using \\ \hskip1.5em $\nabla_{\phi_i} \frac{1}{|B|}\sum_{(s,a,r,s') \in B} \left( Q_{\phi_i}(s,a) - y_q(r,s') \right)^2 \text{for } i=1,2$ \STATE Update policy by one step of gradient ascent using \\ \hskip1.5em $\nabla_{\theta} \frac{1}{|B|}\sum_{s \in B}Q_{\phi_1}(s, M \tanh (\mu_{\theta}(s)))$ \STATE Update target networks with \\ \hskip1.5em $\phi_{\text{targ}_i} \leftarrow \rho \phi_{\text{targ}_i} + (1-\rho) \phi_i \;\; \text{for } i=1,2$ \ENDFOR \UNTIL {Convergence} \end{algorithmic} \end{algorithm*} \section{Related Work} In recent years, there has been significant progress in improving the sample efficiency of DRL for continuous robotic locomotion tasks with off-policy algorithms \citep{lillicrap2015ddpg,fujimoto2018td3,haarnoja2018sac, haarnoja2018sacapps}. There is also a significant body of research on maximum entropy RL methods \citep{ziebart2008maximum,ziebart2010modeling,todorov2008general,rawlik2012stochastic,levine2013guided, levine2016end,nachum2017bridging,haarnoja2017reinforcement,haarnoja2018sac,haarnoja2018sacapps}. \citet{ahmed2019understanding} very recently shed light on how entropy leads to a smoother optimization landscape. By taking clipping in the MuJoCo environments explicitly into account, \citet{fujita2018clipped} modified the policy gradient algorithm to reduce variance and provide superior performance among on-policy algorithms. \citet{eisenach2018marginal} extend the work of \citet{fujita2018clipped} to the case in which an action represents a direction. \citet{hausknecht2015deep} introduce Inverting Gradients, for which we provide experimental results in this paper for the MuJoCo environments. \citet{chou2017improving} also explores DRL in the context of bounded action spaces. \citet{dalal2018safe} consider safe exploration in the context of constrained action spaces. Experience replay \citep{lin1992experiencereplay} is a simple yet powerful method for enhancing the performance of an off-policy DRL algorithm. Experience replay stores past experience in a replay buffer and reuses this past data when making updates. It achieved great success in Deep Q-Networks (DQN) \citep{mnih2013dqn, mnih2015dqn}. Uniform sampling is the most common way to sample from a replay buffer. One of the most well-known alternatives is prioritized experience replay (PER) \citep{schaul2015prioritized}. PER uses the absolute TD-error of a data point as the measure for priority, and data points with higher priority will have a higher chance of being sampled. This method has been tested on DQN \citep{mnih2015dqn} and double DQN (DDQN) \citep{van2016ddqn} with significant improvement, has been applied successfully in other algorithms \citep{wang2015dueling,schulze2018vizdoom, hessel2018rainbow, hou2017ddpgper}, and can be implemented in a distributed manner \citep{horgan2018distributed}. When new data points lead to large TD errors in the Q update, PER will also assign high sampling probability to newer data points. However, PER has a different effect compared to ERE. PER tries to fit well on both old and new data points.
For ERE, by contrast, old data points are always considered less important than newer data points, even if these old data points start to give a high TD error. A performance comparison of PER and ERE is given in the supplementary materials. There are other methods proposed to make better use of the replay buffer. The ACER algorithm has an on-policy part and an off-policy part, with a hyper-parameter controlling the ratio of off-policy to on-policy updates \citep{wang2016acer}. The RACER algorithm \citep{novati2018remember} selectively removes data points from the buffer, based on the degree of ``off-policyness,'' bringing improvement to DDPG \citep{lillicrap2015ddpg}, NAF \citep{gu2016naf} and PPO \citep{schulman2017ppo}. In \citet{de2015replaydatabase}, replay buffers of different sizes were tested, showing that a large buffer with data diversity can lead to better performance. Finally, with Hindsight Experience Replay \citep{andrychowicz2017her}, priority can be given to trajectories with lower density estimation \citep{zhao2019curiosity} to tackle multi-goal, sparse reward environments. \section{Conclusion} In this paper we first showed that the primary role of maximum entropy RL for the MuJoCo benchmark is to maintain satisfactory exploration in the presence of bounded action spaces. We then developed a new streamlined algorithm which does not employ entropy maximization but nevertheless matches the sampling efficiency and robust performance of SAC for the MuJoCo benchmarks. Finally, we combined our streamlined algorithm with a simple non-uniform sampling scheme to create a simple algorithm that achieves state-of-the-art performance for the MuJoCo benchmark. \section*{Acknowledgements} We would like to thank Yiming Zhang for insightful discussions of our work, and Josh Achiam for his help with the OpenAI Spinup codebase. We would also like to thank the reviewers for their helpful and constructive comments.
\section{Introduction} The interaction between atoms and electromagnetic fields has been studied for more than a century, and has provided many important insights. For an atom at rest, the spectral profile of a single transition is a Lorentzian function. When the atom is so strongly coupled to an electromagnetic mode that its absorption and dispersion appreciably change the mode characteristics, two coupled normal modes with a mixed atom-field character emerge (vacuum Rabi splitting). The strong coupling of an atom to an optical-resonator mode opened the field of cavity quantum electrodynamics (QED) in the optical domain, both for individual atoms \cite{PhysRevLett.68.1132,PhysRevLett.82.3791,PhysRevA.96.031802,RevModPhys.87.1379} and for atomic ensembles \cite{JPhysB.38.S551,Science.333.1266,PhysRevLett.106.133601,Science.344.180,PhysRevLett.99.213601,PhysRevLett.111.100505}. Notable results include the observation of single-atom vacuum Rabi splitting \cite{PhysRevLett.68.1132} and the associated optical nonlinearity \cite{Nature.436.87}, a single-photon transistor \cite{Science.341.768,Nature.536.193,Science.361.57}, a photon-atom quantum gate \cite{Nature.508.237}, polarization-dependent directional spontaneous photon emission \cite{NatCommun.5.5713}, light-induced spin squeezing \cite{Nature.529.505,PhysRevLett.116.093602,PhysRevLett.104.073602,PhysRevLett.113.263603}, preparation of entangled many-atom spin states \cite{Nature.519.439,Science.344.180}, and photon-induced entanglement between distant particles \cite{NatPhoton.8.356}. \begin{figure*}[!tb] \begin{center} \includegraphics[width=2.\columnwidth]{IntroFigureRev9.png} \caption{(a) Experimental setup: a cavity mode (dark red) is formed between a mirror of large ROC $R_1$ (bottom) and a micromirror with an ROC of $R_2$ (top), separated by a distance $L$. A mirror MOT (light green) is formed using the flat part of the mirror substrate on which the micromirror is fabricated. Trapping and probing beams are sent through the bottom mirror. (b) Waist size $w_0$ of the cavity mode at 556 nm for different values of $R_2$ and mirror separation $L$ when $R_1=25$ mm. The cavity with small $R_2$ permits stable geometries with small waists. (c) The angular tolerance $\theta_{\rm T}=D/L$ of the tilt of the optical axis for different $R_2$ and $w_0=10$ $\mu$m, 5 $\mu$m, and 2.5 $\mu$m (top to bottom), fixing $R_1=25$ mm. The asymmetric cavity with $R_1 \gg R_2$ is far more stable with respect to misalignment than the near-concentric symmetric cavity with $R_1 = R_2\approx L/2$ for a given $w_0$. } \label{IntroFig} \end{center} \end{figure*} The most common structure used in cavity QED experiments is a Fabry-Perot (FP) cavity consisting of two spherical mirrors with equal radii of curvature (ROCs) \cite{JPhysB.38.S551,Science.333.1266,PhysRevLett.106.133601,Science.344.180,PhysRevLett.99.213601}. For confocal and shorter cavities, this configuration exhibits good mechanical stability of the optical mode. However, when it comes to increasing the single-atom cooperativity $\eta$, the structure has certain constraints: to achieve a small mode waist with commercially available super-polished mirrors of centimeter-scale ROCs, the two mirrors need to be very far from each other (near-concentric cavity) \cite{ApplPhysB.107.1145}, or very close to each other (near-planar cavity) \cite{JPhysB.38.S551}. The near-concentric cavity is very sensitive to alignment errors, while the near-planar cavity offers little optical access. 
To overcome these difficulties, we instead implemented a geometrically asymmetric cavity, which offers good optical access and a very small mode waist with reasonable mechanical stability. This paper describes the concept and experimental realization of such an asymmetric cavity with high $\eta$. We observe single-atom cooperativity up to $\eta=10$, and collective cooperativity up to $N\eta=2\times 10^4$ with trapped-atom lifetimes of several seconds. \section{Concept of asymmetric cavity} Cavity QED is a gateway for manipulating single atoms and atomic ensembles using light \cite{PhysScripta.T76.127,AdvAtMolOptPhys.60.201}. The all-important parameter is the single-atom cooperativity at an antinode $\eta_{\rm max}$, given by \begin{equation}\label{EqCooperativity} \eta_{\rm max}=\frac{4g^2}{\kappa\Gamma}=\frac{24 {\cal F}}{\pi k^2 w^2} \end{equation} for a standing wave cavity \cite{AdvAtMolOptPhys.60.201}. This parameter is a dimensionless constant in cavity QED that describes the strength of the atom-light interaction, where $2g$ is the coupling constant (single-photon Rabi frequency) between an atom and a photon, $\kappa$ is the decay rate of a photon in the cavity, $\Gamma$ is the decay rate of the atomic excited state, ${\cal F}$ is the finesse of the cavity, $k=2\pi/\lambda$ is the wavenumber, and $w$ is the $1/e^2$ intensity radius of the cavity mode. An important realization in cavity QED is that the ratio of the squared coupling constant to the product of the decay rates is purely geometric. Therefore, designing a cavity with $\eta \gg 1$, useful for obtaining highly entangled states using light \cite{PhysRevLett.115.250502}, is reduced to designing a cavity with small beam size $w$ and high finesse ${\cal F}$. The geometrical relation between the ROCs and positions of the two mirrors and the resulting shape of the cavity mode is well known (e.g. \cite{Siegman}). If one uses more than two mirrors, a waist size smaller than that with a conventional two-mirror cavity can be realized \cite{JPhysB.51.195002}, but here we concentrate on a cavity with two mirrors, because it benefits from a simpler mechanical structure and lower optical loss. In the general case, the waist size for a two-mirror cavity is given by \cite{Siegman} \begin{equation}\label{EqWaist} w_0^2=\frac{L\lambda}{\pi}\sqrt{\frac{g_1g_2\left( 1-g_1g_2 \right)}{\left( g_1+g_2-2g_1g_2\right)^2}}, \end{equation} where $g_{1,2}=1-L/R_{1,2}$, $R_{1,2}$ denote the ROCs of the two mirrors, and $L$ is the distance between the two mirrors. In the case of a symmetric cavity ($R_1=R_2$ and thus $g_1=g_2$), this expression simplifies to $w_0^2=(L\lambda/2\pi)\sqrt{(1+g_1)/(1-g_1)}$, leading to two possible cavity configurations with small $w_0$: (i) when the two mirrors are very close to each other, $L\approx 0$, and (ii) when the two mirrors are in a near-concentric configuration, $L\approx 2R_1$. The first configuration has good mechanical stability due to a large optical axis length, given by the distance $D=R_1+R_2-L \approx 2R_1$ between the centers of curvature of the two mirrors. This is a good configuration for having very high cooperativity, and has been used in many experiments, particularly with single atoms \cite{PhysRevLett.68.1132,PhysRevLett.82.3791,Nature.436.87,JPhysB.38.S551}, though the optical access for loading and manipulating atoms is very limited.
With additional technical effort, such as a movable magnetic trap \cite{PhysRevLett.99.213601,Nature.450.272,PhysRevLett.98.233601}, it is possible to load large atomic ensembles even into very short cavities. The near-concentric configuration, on the other hand, offers excellent optical access for loading atoms directly into the cavity mode from a magneto-optical trap (MOT) or any other type of trap. However, in this case, the length of the optical axis is short: $D=2\pi^2 w_0^4/(R_1 \lambda^2)$. For example, to obtain $w_0=5$ $\mu$m with $R_1=25$ mm, the cavity has $D=1.6$ $\mu$m for 556 nm light. This causes difficulties in obtaining and maintaining alignment of the cavity, as well as poor mechanical stability. In this case, higher-order transverse modes are close to the fundamental mode in frequency, which can be problematic for experiments aiming to couple atoms to a single cavity mode. Nevertheless, this type of cavity is used for ions to keep the mirror surfaces far away from the trapped particles \cite{PhysRevLett.111.100505}. Some cavities even utilize mirrors with an aspheric structure to attain the large numerical aperture required for focusing the beam tightly \cite{NewJPhys.16.103002,1806.03038}. Next, we consider an asymmetric cavity with $R_1 \gg R_2$ [see Fig. \ref{IntroFig}(a)]. In this case, there are two separate stability regions, one with $0<L<R_2$ and the other with $R_1<L<R_1+R_2$. Figure \ref{IntroFig}(b) shows the waist size in the long stability region with $L>R_1$. As $R_2$ shrinks, so does the maximum mode waist $w_0$. When $R_1$ and $R_2$ are fixed, larger $w_0$ gives a larger angular tolerance $\theta_T=D/L$, which characterizes the tolerance of the optical-axis alignment to tilts of the cavity mounting hardware. When a target $w_0$ is set and $R_2$ is varied, smaller $R_2$ gives a larger angular tolerance $\theta_T$, as shown in Fig. \ref{IntroFig}(c). This motivates the construction of an asymmetric cavity consisting of a standard super-polished mirror of $R_1=25$ mm and a micromirror of $R_2\sim400$ $\mu$m, manufactured by ablation with a CO$_2$ laser pulse \cite{NJP12.065038}, to simultaneously achieve high cooperativity, a large distance between the two mirrors, and a large angular tolerance $\theta_T$. Compared to a symmetric cavity with $R_1=R_2=25$ mm, this setup is 60 times more stable with respect to angular misalignment [see Fig. \ref{IntroFig}(c)]. \section{Cavity properties}\label{CavProperty} We built an asymmetric cavity with a slightly elliptical micromirror ($R_{\rm 2x}=303$ $\mu$m, $R_{\rm 2y}=391$ $\mu$m \cite{BBthesis}) on a flat substrate and a standard super-polished mirror ($R_1=25$ mm, see Appendixes for the mechanical details and the procedure of construction). The mirrors have high-reflectivity coatings for 556 nm and 759 nm light at normal incidence. The mirrors also reflect 99\% of the 399 nm and 556 nm light at 45$^{\circ}$ angle of incidence to enable the operation of a mirror MOT with ytterbium, as shown in Fig. \ref{IntroFig}(a). Prior to fixing the mirror distance, the finesse ${\cal F}$ is measured for different separations between the two mirrors. A constant ${\cal F}$ is observed in the region of $25.00<L<25.12$ mm, and it decreases at larger $L$, which may be caused by extra loss due to the large mode size on the nonspherical micromirror \cite{NewJPhys.17.053051}.
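To get a feeling for the numbers, Eqs. (\ref{EqWaist}) and (\ref{EqCooperativity}) can be evaluated directly. The following minimal Python sketch is an illustration, not part of the experimental analysis: the single effective spherical ROC $R_2=0.35$ mm, standing in for the slightly elliptical micromirror, is an assumption of the sketch, while the finesse and separation are the 556 nm values quoted below.
\begin{verbatim}
import numpy as np

# Inputs quoted in the text; the effective spherical R2 is an assumption
# standing in for the slightly elliptical micromirror.
lam = 556e-9               # wavelength [m]
R1, R2 = 25e-3, 0.35e-3    # mirror ROCs [m]
F = 1.4e4                  # measured finesse at 556 nm
L = 25.0467e-3             # chosen mirror separation [m]

# Waist from the two-mirror resonator formula
g1, g2 = 1 - L / R1, 1 - L / R2
num = g1 * g2 * (1 - g1 * g2)
den = (g1 + g2 - 2 * g1 * g2) ** 2
w0 = np.sqrt(L * lam / np.pi * np.sqrt(num / den))

# Peak single-atom cooperativity
k = 2 * np.pi / lam
eta_max = 24 * F / (np.pi * k ** 2 * w0 ** 2)

print(f"w0 = {w0 * 1e6:.2f} um, eta_max = {eta_max:.1f}")
# prints roughly w0 = 4.6 um and eta_max = 40, consistent with the
# values reported in the table below
\end{verbatim}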
The inter-mirror distance is fixed at $L=25.0467(10)$ mm, calibrated using the disappearance of the cavity mode when $L<R_1$ and a known shift applied with a micrometer stage. Note that this distance is different from $L=25.10807(17)$ mm derived from the measured free spectral range (FSR) of 5970.04(4) MHz, which potentially implies the breakdown of the simple relation between the FSR and the cavity length at small waist sizes, where the paraxial approximation no longer holds (see Appendix \ref{AssymCavFSR} for more discussion). The expected cooperativity $\eta$ for different atom positions $Z$, defined as the distance of the atoms from the micromirror, is calculated based on the mode geometry and ${\cal F}$. The single-atom cooperativity $\eta$ and other QED parameters are summarized in Table \ref{cavityqedparameter} and Fig. \ref{etavsatompos}. In addition to the $\mathrm{6s^2 \hspace{0.1 em} {}^{1} \hspace{-0.05 em} S_0 \rightarrow 6s6p \hspace{0.1 em}{} ^{3} \hspace{-0.05 em} P_1}$ transition at 556 nm, the cavity also has a high finesse for the $\mathrm{6s^2 \hspace{0.1 em} {}^{1} \hspace{-0.05 em} S_0 \rightarrow 6s6p \hspace{0.1 em}{} ^{3} \hspace{-0.05 em} P_0}$ clock transition at 578 nm. The single-atom cooperativity $\eta$ for 556 nm light can be tuned from its maximum of 40 to less than 0.1 by changing the position of the atoms by a few millimeters, as shown in Fig. \ref{etavsatompos}. \begin{table}[!t] \caption{Cavity QED parameters of the constructed cavity for 556, 578, and 759 nm light, corresponding to the $\mathrm{6s^2 \hspace{0.1 em} {}^{1} \hspace{-0.05 em} S_0 \rightarrow 6s6p \hspace{0.1 em}{} ^{3} \hspace{-0.05 em} P_1}$ transition, the $\mathrm{6s^2 \hspace{0.1 em} {}^{1} \hspace{-0.05 em} S_0 \rightarrow 6s6p \hspace{0.1 em}{} ^{3} \hspace{-0.05 em} P_0}$ clock transition, and the magic wavelength for the clock transition, respectively. $R_{\rm 25mm}$ and $R_{\rm micro}$ are the reflectivities of the 25 mm ROC mirror and the micromirror, respectively, and ${\cal F}$ is the corresponding finesse. } \begin{center} \begin{tabular}{cccc} wavelength $\lambda$ & 556 nm & 578 nm & 759 nm \\ \hline $1-R_{\rm 25mm}$ & 60(2) ppm & 80(5) ppm & 1000(50) ppm \\ $1-R_{\rm micro}$ & 390(10) ppm & 580(20) ppm & 1000(50) ppm \\ ${\cal F}/10^3$ & $14.0(1)$ & $9.5(1)$ & $3.14(7)$\\ $\Gamma/(2\pi)$ & 184(1) kHz & 7.0(2) mHz & - \\ $\kappa/(2\pi)$ & 426(2) kHz & 628(4) kHz & 1.90(4) MHz\\ $g_{\rm max}/(2\pi)$ & 885(5) kHz & 176(1) Hz & -\\ $\eta_{\rm max}$ & 40.0 & 28.2 & - \\ $w_0$ & 4.60 $\mu$m & 4.70 $\mu$m & 5.38 $\mu$m \end{tabular} \end{center} \label{cavityqedparameter} \end{table}% \begin{figure}[!t] \begin{center} \includegraphics[width=0.8\columnwidth]{CooperativityVsAtomDistance_LogX_Levels_5.png} \caption{Single-atom cooperativity at the cavity mode antinodes, calculated from the geometry and the finesse ${\cal F}$ for the atoms trapped in the cavity at different locations along the cavity axis.
The upper green curve corresponds to the $\mathrm{6s^2 \hspace{0.1 em} {}^{1} \hspace{-0.05 em} S_0 \rightarrow 6s6p \hspace{0.1 em}{} ^{3} \hspace{-0.05 em} P_1}$ transition at 556 nm for ${\cal F}=1.4 \times 10^4$, and the lower yellow curve corresponds to the $\mathrm{6s^2 \hspace{0.1 em} {}^{1} \hspace{-0.05 em} S_0 \rightarrow 6s6p \hspace{0.1 em}{} ^{3} \hspace{-0.05 em} P_0}$ clock transition at 578 nm for ${\cal F}=9.5 \times 10^3$.} \label{etavsatompos} \end{center} \end{figure} \section{Atom trapping in the cavity mode} To measure the single-atom cooperativity $\eta$ with atoms, a mirror MOT \cite{PhysRevLett.83.3398} is operated with $^{171}$Yb [see also Fig. \ref{IntroFig}(a)]. The atoms are first loaded into a two-color MOT \cite{JPhysB.48.155302}. Subsequently, the 399 nm cooling light on the $\mathrm{6s^2 \hspace{0.1 em} {}^{1} \hspace{-0.05 em} S_0 \rightarrow 6s6p \hspace{0.1 em}{} ^{1} \hspace{-0.05 em} P_1}$ transition is turned off, the detuning of the 556 nm MOT light is reduced from $-7$ MHz to $-200$ kHz (the linewidth of the transition is $\Gamma = 2\pi\times 184$ kHz), and a bias magnetic field is added to move the atoms to the desired location along the cavity axis. Typically around $10^4$ $^{171}$Yb atoms are trapped in the MOT by 556 nm light at a temperature of 15 $\mu$K, with an rms cloud radius of 60 $\mu$m along the vertical cavity axis. To trap the atoms in the cavity mode, a one-dimensional optical lattice near the magic wavelength of 759 nm for the clock transition is generated inside the cavity. With a typical circulating power of 1.2 W, the trap depth at a distance of $Z=0.42$ mm from the micromirror is $2.5$ MHz, with trapping frequencies of 142(3) kHz axially and 1.39(10) kHz radially. To load the atoms into the optical lattice, the detuning of the 556 nm MOT light is increased from $-200$ kHz to $-400$ kHz, and the intensity per beam is lowered to 0.05 mW/cm$^2$ (the saturation intensity of the transition is 0.14 mW/cm$^2$) for 20 ms before the MOT light is extinguished. The lifetime of the atoms in the optical lattice is typically a few seconds, limited by intensity noise in the lattice, and approaching the limit set by background gas collisions. \section{Single-atom and collective cooperativity measurement} A cavity-QED system with atoms in the cavity mode is typically characterized by the single-atom cooperativity $\eta$ and the collective cooperativity $N\eta$, where $N$ is the atom number. The single-atom cooperativity $\eta$ determines the strength of the interaction between atoms and light, while the collective cooperativity $N\eta$ sets the limits on manipulations of the quantum system, such as the amount of attainable spin squeezing (e.g., \cite{PhysRevA.81.021804,PhysRevA.89.043837}). This is because $N\eta$ determines the ratio of useful collective light scattering by the ensemble into the cavity relative to the scattering of light into free space, which results in decoherence \cite{AdvAtMolOptPhys.60.201}. \begin{figure}[!t] \begin{center} \includegraphics[width=0.7\columnwidth]{EnergyLevelPaperRev6.png} \caption{Hyperfine structure of the $\mathrm{6s^2 \hspace{0.1 em} {}^{1} \hspace{-0.05 em} S_0}$ state and the $\mathrm{6s6p \hspace{0.1 em}{} ^{3} \hspace{-0.05 em} P_1}$ state relevant to the phase shift measurement: the $F=1/2$ manifold of the $\mathrm{6s6p \hspace{0.1 em}{} ^{3} \hspace{-0.05 em} P_1}$ state is detuned by $-6$ GHz from the $F=3/2$ manifold and therefore is not drawn in the figure.
$g_{3P1}$ and $g_{1S0}$ are the g factors for the $\mathrm{6s6p \hspace{0.1 em}{} ^{3} \hspace{-0.05 em} P_1}$ state and the $\mathrm{6s^2 \hspace{0.1 em} {}^{1} \hspace{-0.05 em} S_0}$ state, respectively. $\omega_{\rm a}$, $\omega_{\rm c}$, and $\omega_{\rm p}$ are the frequencies of the $\mathrm{6s^2 \hspace{0.1 em} {}^{1} \hspace{-0.05 em} S_0 \rightarrow 6s6p \hspace{0.1 em}{} ^{3} \hspace{-0.05 em} P_1}$ atomic transition, the cavity resonance, and the probing laser, respectively.} \label{EnergyDiagramForPhase} \end{center} \end{figure} \subsection{Single-atom cooperativity} The single-atom cooperativity $\eta$ can be experimentally determined as the effective single-atom cooperativity $\eta_{\rm eff}$ by measuring the atomic phase shift $\phi_{\rm at}$ induced by off-resonant probing light \cite{AdvAtMolOptPhys.60.201}. The measured value of $\eta_{\rm eff}$ equals $(3/4)\eta_{\rm max}$, assuming a uniform distribution of atoms along the cavity mode \cite{PhysRevA.92.063816}. To perform the measurement, atoms are optically pumped into the $|^1S_0, m_F=+1/2\rangle$ state, with a bias magnetic field $B=13.6$ G parallel to the cavity axis applied to generate an energy difference of $h \times 10.2$ kHz between the $|^1S_0, m_F= \pm 1/2\rangle$ states, where $h$ is the Planck constant (see Fig. \ref{EnergyDiagramForPhase} for the detailed energy level structure of the system). The cavity resonance frequency $\omega_{\rm c}$ is set equal to the atomic resonance frequency $\omega_{\rm a}$ for the $|\mathrm{6s^2 \hspace{0.1 em} {}^{1} \hspace{-0.05 em} S_0,m_F=+1/2 \rangle \rightarrow |6s6p \hspace{0.1 em}{} ^{3} \hspace{-0.05 em} P_1, m_F=+3/2} \rangle$ transition, and the probing light is detuned by $\delta$ from both resonances. After applying to the atoms a $\pi/2$ pulse resonant with the ground-state Zeeman splitting, a probing laser pulse is sent into the cavity mode, which shifts the phase between the $|m_F=\pm1/2\rangle$ states by an amount \begin{equation}\label{EqPhaseShift} \phi_{\rm at}=-\frac{\eta_{\rm eff}}{2\epsilon} \frac{2\delta/\Gamma}{1+(2\delta/\Gamma)^2} \end{equation} per detected photon. The system quantum efficiency $\epsilon$ is defined as $\epsilon=(1-L_{\rm op})\frac{T_2}{T_1+T_2+L_1+L_2}$, where $T_1$ and $T_2$ are the transmissions of the input- and output-side mirrors, $L_1$ and $L_2$ are the losses at the input- and output-side mirrors, and $L_{\rm op}$ is the loss between the output-side mirror and the photodetector including the detector's quantum efficiency \cite{BBthesis,AdvAtMolOptPhys.60.201}. The phase is measured as a population difference between the $|m_F=\pm1/2\rangle$ states after another $\pi/2$ pulse. Figure \ref{PhaseMeasurement} shows the result of the phase measurements, including the small additional phase shift from the $|\mathrm{6s^2 \hspace{0.1 em} {}^{1} \hspace{-0.05 em} S_0,m_F=-1/2 \rangle \rightarrow |6s6p \hspace{0.1 em}{} ^{3} \hspace{-0.05 em} P_1, m_F=+1/2} \rangle$ transition. The measurements at different detunings $\delta$ are fitted reasonably well by Eq. (\ref{EqPhaseShift}) with $\eta_{\rm eff}/\epsilon$ as the only fitting parameter. From these fits, the cooperativity $\eta_{\rm eff}$ at different atom positions is calculated, assuming the overall detection efficiency of an intracavity photon $\epsilon$ is 0.175(30), obtained from independent measurements of the cavity and photodetector properties. Note that the uncertainty of $\epsilon$ propagates into the estimate of $\eta_{\rm eff}$ as a systematic error.
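Schematically, the one-parameter fit can be reproduced in a few lines. The sketch below is an illustration rather than the actual analysis code: it keeps only the single-transition lineshape of Eq. (\ref{EqPhaseShift}), omits the small $|m_F=-1/2\rangle$ contribution included in the real fits, and uses placeholder arrays (generated from the model itself) in place of the measured data.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

Gamma = 2 * np.pi * 184e3              # 3P1 linewidth [rad/s]

def phase_per_photon(delta, eta_over_eps):
    """Atomic phase shift per detected photon, single transition."""
    x = 2 * delta / Gamma
    return -0.5 * eta_over_eps * x / (1 + x ** 2)

# Placeholder data generated from the model itself (eta_eff/eps =
# 10/0.175), standing in for one measured data set.
deltas = 2 * np.pi * np.array([-2e6, -1e6, -0.5e6, 0.5e6, 1e6, 2e6])
phases = phase_per_photon(deltas, 10.0 / 0.175)

(eta_over_eps,), _ = curve_fit(phase_per_photon, deltas, phases, p0=[1.0])
eps = 0.175                            # independently calibrated efficiency
print(f"eta_eff = {eta_over_eps * eps:.2f}")   # recovers 10.00 here
\end{verbatim}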
\begin{figure}[!t] \begin{center} \includegraphics[width=0.8\columnwidth]{PhaseShiftResultPaperRev5.png} \caption{Phase shift measurements at $Z=0.14$, 0.27, 0.44, 0.558, 0.564, and 1.40 mm (from top to bottom): the squares are the measured phase shifts at different detunings $\delta$, and the curves are the fitted phase shifts, including the effect of both $|m_F=\pm1/2\rangle$ states.} \label{PhaseMeasurement} \end{center} \end{figure} The measured effective single-atom cooperativity in this system ranges from $\eta_{\rm eff}=10$ to $\eta_{\rm eff}=0.2$ for atom-micromirror distances between $Z=0.136$ mm and $Z=1.40$ mm, as shown in Fig. \ref{CooperativityResult}. The value of $Z$ has a systematic uncertainty of 7\% due to uncertainty in the magnification of the imaging system. The measured effective cooperativity matches well with the calculated value, as shown in Fig. \ref{CooperativityResult}. \subsection{Collective cooperativity} To measure the collective cooperativity $N\eta$ after trapping the atoms inside the cavity, we measure the vacuum Rabi splitting of the cavity resonance $\Delta\omega$. $N\eta$ is given by \cite{AdvAtMolOptPhys.60.201} \begin{equation} N\eta=\frac{(\Delta\omega)^2}{\kappa\Gamma}. \end{equation} For the measurement of $\Delta\omega$, the atomic and cavity resonances are set to the same frequency $\omega_{\rm a}=\omega_{\rm c}$, and a probing laser at 556 nm is sent into the system. The vacuum Rabi splitting $\Delta\omega$ is obtained from phase and power measurements of the transmitted probing laser, whose frequency $\omega_{\rm p}$ is scanned over the resonance peaks. The scan is performed with two sidebands at $\omega_{\rm p} = \omega_{\rm a} \pm \omega_{\rm ch}$, where the chirp frequency $\omega_{\rm ch}$ increases linearly in time, to cancel the effect of fluctuations of the cavity resonance frequency under the condition $\Delta\omega \gg \kappa,\Gamma$. Alternatively, one can also measure $N\eta$ by measuring the dispersive shift of the cavity resonance frequency $\delta \omega_{\rm c}$, according to the following equation: \begin{equation}\label{EqOffRes} N\eta=\delta\omega_{\rm c} \frac{4\Delta}{\kappa\Gamma}. \end{equation} To perform this frequency shift measurement, $\omega_{\rm p}$ is fixed at a detuning $\Delta=\omega_{\rm p}-\omega_{\rm a}$ from the atomic resonance, and the relative transmission through the cavity is measured. The values of $N\eta$ derived from both methods agree with each other. Figure \ref{CooperativityResult}(b) shows that collective cooperativities $N\eta$ up to $10^4$ are observed for a wide range of atom positions $Z$. The observed values of $N\eta$ are sufficiently large to permit significant cavity-feedback or measurement-based spin squeezing \cite{PhysRevA.81.021804,AKThesis,BBthesis} in future experiments. The details of atom trapping in a small optical lattice are discussed elsewhere \cite{TrappingPaper}. \begin{figure}[!t] \begin{center} \includegraphics[width=0.8\columnwidth]{EtaNetaSummary_20171011Rev20.png} \caption{(a) Measured effective single-atom cooperativity $\eta_{\rm eff}$ and (b) collective cooperativity $N\eta$ for different atom distances $Z$ from the micromirror. (a) The black circles are the measured $\eta_{\rm eff}$. The error bars show the systematic error, while the statistical error is negligible. The solid red curve is the $\eta_{\rm eff}$ estimated from the geometry of the cavity shown in Fig. \ref{etavsatompos}, and the dashed blue curve is the best fit to the measured $\eta_{\rm eff}$.
(b) Collective cooperativity $N\eta$ measured via vacuum Rabi splitting (red diamonds) or cavity frequency shift (green circles).} \label{CooperativityResult} \end{center} \end{figure} \section{Summary} We have constructed an asymmetric cavity reaching the single-atom strong-coupling regime, and have measured a cooperativity up to $\eta_{\rm eff}=10$ for $^{171}$Yb atoms on the $\mathrm{6s^2 \hspace{0.1 em} {}^{1} \hspace{-0.05 em} S_0 \rightarrow 6s6p \hspace{0.1 em}{} ^{3} \hspace{-0.05 em} P_1}$ transition. The asymmetric structure with a standard mirror and a micromirror ensures both a large single-atom cooperativity and mechanical stability, as well as easy tuning of the cooperativity by changing the atom position. Atom trapping is performed by a mirror MOT, and collective cooperativities $N\eta$ in excess of $10^4$ are reached at atom-micromirror distances $Z\leq0.7$ mm in a one-dimensional optical lattice with a lifetime exceeding 1 s. The measured single-atom cooperativity ranges from $\eta=10$ to $\eta=0.2$, in agreement with the values expected from the cavity geometry and finesse. The large collective cooperativity we observe will enable spin squeezing in the $|m_F=\pm1/2\rangle$ ground-state manifold, which can then be mapped onto the atomic clock transition, as well as the preparation of non-classical collective states \cite{RevModPhys.90.035005}. \begin{acknowledgments} This work is supported by DARPA Grant No. W911NF-11-1-0202, NSF Grants No. PHY-1505862 and No. PHY-1806765, NSF CUA Grant No. PHY-1734011, ONR Grant No. N00014-17-1-2254, and AFOSR MURI Grant No. FA9550-16-1-0323. B.B. acknowledges support from the Natural Sciences and Engineering Research Council of Canada. A.K. and B.B. contributed equally to this work. \end{acknowledgments}
\section*{Abstract} {\bf Gauge theories possess nonlocal features that, in the presence of boundaries, inevitably lead to subtleties. We employ geometric methods rooted in the functional geometry of the phase space of Yang-Mills theories to: (\textit{1}) characterize a basis for quasilocal degrees of freedom (dof) that is manifestly gauge-covariant also at the boundary; (\textit{2}) tame the non-additivity of the regional symplectic forms upon the gluing of regions; and to (\textit{3}) discuss gauge and global charges in both Abelian and non-Abelian theories from a geometric perspective. Naturally, our analysis leads to splitting the Yang-Mills dof into Coulombic and radiative. Coulombic dof enter the Gauss constraint and are dependent on extra boundary data (the electric flux); radiative dof are unconstrained and independent. The inevitable non-locality of this split is identified as the source of the symplectic non-additivity, i.e. of the appearance of new dof upon the gluing of regions. Remarkably, these new dof are fully determined by the regional radiative dof only. Finally, a direct link is drawn between this split and Dirac's dressed electron. } \vspace{10pt} \noindent\rule{\textwidth}{1pt} \setcounter{tocdepth}{2} \tableofcontents\thispagestyle{fancy} \noindent\rule{\textwidth}{1pt} \vspace{10pt} \section{Introduction and summary of the results} Physical degrees of freedom in gauge theories cannot be completely localized, since gauge-invariant quantities have a certain degree of nonlocality, the prototypical example being a Wilson line. Here, we will address the problem of defining \textit{quasilocal} degrees of freedom (quasilocal dof) in electromagnetism and Yang-Mills (YM) theories. By ``quasilocal'', we specifically mean ``confined to a finite and bounded region'', with a certain degree of nonlocality allowed {\it within} the region. When the role of the specific region needs to be emphasized, we will call such properties \textit{regional}. In electromagnetism, or any Abelian YM theory, although the field strength $F_{\mu\nu}={\partial}_{\mu} A_{\nu} - {\partial}_\nu A_\mu$ provides a complete set of local gauge-invariant observables, a canonical formulation unveils the underlying nonlocality. The components of $F_{\mu\nu}$ (i.e. the electric and magnetic fields $E$ and $B$) fail to provide gauge-invariant {\it canonical} coordinates on field space: in 3 space dimensions, $\{ E^i(x) , B^j(y) \} = \epsilon^{ijk}{\partial}_k\delta(x,y)$ is not a canonical Poisson bracket and the presence of the derivative on the right-hand side is the first sign of a nonlocal behaviour. (For a striking proof of the tension between locality and even gauge-{\it co}variance in the quantum formalism, see \cite[Thm. 8.1]{StrocchiBook}.) From a canonical perspective, the constraint whose Poisson bracket generates gauge transformations, namely the Gauss constraint, is responsible for the non-local attributes of gauge theories---and indeed of most of their peculiar properties (both classical and quantum \cite{StrocchiBook, strocchi2015symmetries}). The Gauss constraint gives an {\it elliptic} equation which must be satisfied by initial data on a Cauchy surface $\Sigma$. In other words, the initial values of the fields cannot be freely specified throughout $\Sigma$; for instance, the allowed values of the electric field inside a region depend on the distribution of charges within the region and the flux of the electric field at its boundary.
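To make this dependence concrete with a minimal, standard example: in electromagnetism on a region $R$, the Gauss constraint ${\partial}_i E^i = \rho$ integrates to
\begin{equation*}
\oint_{{\partial} R} E^i\, \d s_i = \int_R \rho\, \d^D x ,
\end{equation*}
so any admissible initial datum for $E$ must reproduce, through its flux across ${\partial} R$, the total electric charge contained in $R$; the electric field can therefore not be prescribed freely and independently region by region.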
Ultimately, this is the source of both the nonlocality and the difficulty of identifying freely specifiable initial data---the ``true'' dof of the theory. The viewpoint often adopted in the literature is that such nonlocality also prevents the factorizability of gauge-invariant observables and of physical degrees of freedom across regions (e.g. \cite{Polikarpov, Casini_gauge, GiddingsDonnelly}). In this paper, we will clarify these statements, characterizing the quasilocal dof of Yang-Mills theory as well as their non-local properties. That is, we will address the definition of YM quasilocal dof in a linearized setting around a background configuration. We refer to these first-order perturbations as ``fluctuations'' or, often, as ``modes''. Geometrically, these modes are identified with tangent vectors to the YM configuration space over a Cauchy hypersurface $\Sigma$ at a certain base-point in configuration space---the background configuration. Such tangent vectors are the basic objects required by the study of symplectic geometry, as encoded in the (pre)symplectic form $\Omega$. Our approach seamlessly adapts to the treatment of bounded regions $R\subset \Sigma$, ${\partial} R\neq\emptyset$, without ever requiring any restriction on the dof: {\it not even in the form of boundary conditions at ${\partial} R$}. This feature makes our approach uniquely adaptable to the study of arbitrary \textit{fiducial boundaries}---that is, of interfaces that do not presume any boundary condition on the fields---with foreseen applications in e.g. entanglement entropy computations discussed in the outlook section. Although restrictive boundary conditions (see e.g. \cite{Harlow_cov}) on the physical content can in principle be incorporated in the formalism by restricting the definition of the configuration space, we will not analyze this possibility here (we refer to \cite{RielloSoft} by one of the authors for considerations regarding asymptotic null infinity). To be more explicit: more than leaving boundary conditions open, we {\it never} fix the gauge freedom, {\it not even at the boundary}. Manifest covariance, including at the boundary, is the central feature of our approach, lying at the core of all our results. Moreover, this freedom fundamentally distinguishes our approach to gauge theories in regions with either finite or asymptotic boundaries from other standard approaches (e.g. \cite{ReggeTeitelboim1974, strominger2018lectures, BalachandranVaidya2013, brown1986}---see also \cite{RielloSoft} for a discussion of this point). Since we also refrain from introducing any additional dof at the boundary, our approach is more economical than the edge-mode approach \cite{DonnellyFreidel, Balachandran:1994up, Speranza:2017gxd, Geiller:2017xad, Camps} (to be discussed in the concluding section). This paper is centered on three physical questions: (\textit{1}) How do we characterize the quasilocal dof of YM theory? (\textit{2}) What are their covariantly conserved regional charges and how are these related to the underlying gauge symmetry? And finally, (\textit{3}) how do the quasilocal dof behave upon composition, or gluing, of the underlying regions? These three questions will be addressed through the development of appropriate mathematical tools, respectively: (\textit{1}) A decomposition of the linearized dof over a region into a basis that is covariant with respect to gauge transformations of the background configuration.
The main tool here is the introduction of a functional connection form over the phase-space of Yang-Mills theory \cite{GomesRiello2016,GomesRiello2018,GomesHopfRiello}. Here we show how the introduction of this connection naturally leads to a split of the dof into Coulombic and radiative. Coulombic dof are those that enter the initial-value Gauss constraint and, in the presence of boundaries, rely on extra independent boundary data---the electric flux. In \cite{AldoNew} by one of the authors, this dependence on boundary data is shown to be at the source of superselection sectors. Within each of these sectors, a quasi-local gauge-reduction procedure can be meaningfully performed. Radiative dof, on the other hand, are unconstrained and independent of any other data: they are the ``true'' quasi-local degrees {\it of freedom} of the theory. Although the split itself depends on the choice of functional connection, our results hold for an arbitrary such choice. Nonetheless, a geometrically privileged functional connection exists which satisfies some extra convenient properties. We called this connection the Singer--DeWitt (SdW) connection \cite{GomesHopfRiello}. The gauge-geometry of phase space is described in section \ref{sec:field_space}, while the consequences at the symplectic level are discussed in section \ref{sec:symred}. (\textit{2}) In agreement with \cite{AbbottDeser, Barnich, DeWitt_Book}, we will argue that non-trivial global charges can only be associated to reducible configurations of the gauge potential. In Abelian theories, every configuration is reducible (with reducibility parameter the constant ``gauge transformations'') and global charges admit a Hamiltonian symplectic flow in the reduced quasilocal phase space---notice that the global charges over $\Sigma$, for ${\partial}\Sigma=\emptyset$, must vanish. In contrast to Abelian theories, in the {\it non}-Abelian case, reducible configurations are extremely rare (i.e. {\it ir}reducible configurations are dense in configuration space) and possess an intricate geometric structure \cite{Ebin, Palais, Mitter:1979un, isenberg1982slice, kondracki1983, YangMillsSlice}. This means not only that the physical relevance of global charges in non-Abelian theories is less clear (fluctuations that are not fine-tuned generically break the global symmetry under study), but also that an extension of our geometric formalism that encompasses non-Abelian reducible configurations would require substantially more work. For these reasons, in this article we limit ourselves to laying down some general considerations on the non-Abelian case and leave the detailed analysis of the symplectic geometry associated to these charges to future work. Charges are discussed in section \ref{sec:charges}. The relationship of this formalism with Dirac's dressed electron is explained in section \ref{sec:dressing}. (\textit{3}) Our analysis of the gluing of the YM dof across adjacent regions leverages a novel gluing theorem that we prove in the case of (topologically trivial) bipartite systems. This theorem shows that: (i) the regional {\it radiative} dof are sufficient to reconstruct the global symmetry-reduced symplectic form; and yet (ii) the composition of the radiative symplectic forms is non-additive, i.e. the global symmetry-reduced symplectic form contains (in a precise sense) more dof than the combination of the regional radiative ones. This is the classical analogue of the non-factorizability of the Hilbert spaces of (lattice) gauge theory.
Remarkably, in the SdW case, the gluing theorem leads to an explicit gluing formula for the radiative dof, which shows that the ``missing'' dof that emerge upon gluing are indeed encoded in the {\it mismatch} between the two regional radiatives across the interface. As the gluing theorem shows, at a generic configuration of the non-Abelian theory, if gluing is possible---i.e. if the two radiatives can be composed at all---then it is unique. However, at reducible configurations, and in the presence of matter, gluing is ambiguous due to the presence of the non-trivial global symmetries analyzed in (\textit{2}). This is particularly relevant in the Abelian case, where the ambiguity is related to the total regional electric charge. Finally, we explore in a simple 1-dimensional case the consequences of non-trivial space topology and the emergence of Aharonov-Bohm phases within our formalism. Gluing is discussed in section \ref{sec:gluing}. Crucially, the key feature in all these results is the nonlocal nature of the ``physical dof'' of Yang--Mills theory, a property which is manifest in our answer to (\textit{3}). Of course, this nonlocality is a property that we expect Yang--Mills theory to share with (all) other gauge theories---such as Chern-Simons theory. For example, the decomposition of linear fluctuations along gauge and transverse directions in field space, as well as the results on their gluing, applies to any gauge theory described by a Lie-algebra valued gauge potential $A$. Having said that, precise statements on the nature of the dof of a gauge theory can rely only upon a detailed analysis of the \textit{symplectic structure} of the theory, especially in relation to gauge transformations. And since this analysis can only be performed on a theory-by-theory basis, the conclusions we draw in this paper only apply---strictly speaking---to Yang--Mills theory. We conclude our discussion in section \ref{sec:conclusions} with a brief outlook. A list of symbols can be found in appendix \ref{app:symbols}. \section{Field-space geometry: setup and definitions\label{sec:field_space}} This section will set the stage for our future considerations. It mostly reviews constructions and results that have already appeared in our previous work \cite{GomesRiello2016,GomesHopfRiello}. Nonetheless, the inclusion of this material aims for more than just reviewing: our current presentation will be more rigorous, complete, and systematic than those previously available. Throughout this article we will not strive for functional analytic rigour: our constructions will rather focus on the algebraic aspects of the geometry of field space. Most of the field-space objects introduced in this paper are understood within the setting of ``local'' calculus in the sense of the pullback from the (infinite) jet bundle, and not in the setting of general differential geometry on Fréchet manifolds. For example, the ``cotangent bundle'' of the space of connections ${\mathcal{A}}$ introduced later is the fiberwise dualisation of the vector bundle whose sections are the fields. However, as will become clear later on, these local spaces have to be slightly generalized to introduce certain nonlocal objects such as Green's functions. We will not attempt a rigorous characterization of this extension. Before starting, we make one important remark: all the constructions will be performed at the quasilocal level, by formally replacing a Cauchy surface $\Sigma$ with any compact subregion $R$ thereof, with ${\partial} R\neq \emptyset$.
Since our interest lies mostly in bounded regions, we take this replacement for granted. Motivated by the study of subregions of $\Sigma$ defined by fiducial boundaries, in the following we will assume \textit{no} boundary condition at ${\partial} R$, not even in the allowed gauge freedom. Unless otherwise specified, all integrals are understood to be over $R$, i.e. $\int := \int_R \d^D x$, and all boundary integrals over ${\partial} R$, i.e. $\oint := \int_{{\partial} R} \d^{D-1}x$. \subsection{Horizontal splittings in configuration space\label{sec:hor-spl-config}} To start, we introduce notation and recall some basic facts. Consider a Lagrangian $D+1$ formulation of YM theory on a globally hyperbolic spacetime $M\cong \Sigma \times \mathbb R $ foliated by equal-time Cauchy surfaces\footnote{Concerning the extrinsic geometry of our foliation, i.e. how $\Sigma_t$ is embedded in spacetime: Unless stated otherwise, all our formulae will hold when $\Sigma$ belongs to an Eulerian foliation of spacetime, i.e. to a foliation whose lapse is equal to one and whose shift vanishes. In other words, $\Sigma$ is an equal-time hypersurface in a spacetime with metric $\d s^2 = -\d t^2 + g_{ij}(t,x)\d x^i \d x^j$. The inclusion of nontrivial lapse and shift is in principle straightforward, but makes some formulae more cluttered, and most likely wouldn't add much to our considerations here. However, we point the reader to \cite{RielloSoft} for a situation where the introduction of a nontrivial shift plays a crucial role in dealing with asymptotic gauge transformations and charges. \label{fn:setup}} $\Sigma_t\cong \Sigma $. To distinguish issues of global (topological) nature---which will only be considered in section \ref{sec:topology}---from those associated with finite boundaries---which constitute our main focus---we assume $\Sigma\cong \mathbb R^D$. This choice is made for mere convenience and will play no role in the following, where our focus will be on compact subregions $R\subset\Sigma$, diffeomorphic to a $D$-disk. Denote by ${\mathcal{A}}$ the corresponding {\it quasilocal YM configuration space} (see figure \ref{fig1}). This is the space of Lie-algebra valued one-forms on $ R\subset \Sigma$,\footnote{Rigorously speaking, dealing with a non-compact Cauchy surface would require us to consider only fields that vanish fast enough at infinity. However, our focus on compact regions will make this restriction virtually irrelevant in the following. Therefore, we do not concern ourselves with a precise determination of the fall-off rates and hereafter neglect them completely. For an application of our formalism where asymptotic conditions at null infinity are carefully treated, see \cite{RielloSoft}.} \begin{equation} A \in {\mathcal{A}}:=\Omega^1(R, \mathrm{Lie}(G)). \end{equation} Since we will be using a Hamiltonian (phase-space) framework, the component of $A$ in the transverse direction to $\Sigma$, $A_0$, is left out of the description. \begin{figure}[t] \begin{center} \includegraphics[scale=0.17]{fig_new_1} \caption{A pictorial representation of the configuration space ${\mathcal{A}}$ seen as a principal fibre bundle, on the right. We have highlighted a generic configuration $A$, its (gauge-transformed) image under the action of $R_g:A\mapsto A^g$, and its orbit $\mathcal O_A \cong {\mathcal{G}}$. We have also represented the quotient space of `gauge-invariant configurations' ${\mathcal{A}}/{\mathcal{G}}$.
On the left-hand side of the picture, we have ``zoomed into'' a representation of $A$ and $A^g$ as two gauge-related local sections of a connection $\omega$ on $P$, the finite dimensional principal fibre bundle with structural group $G$ over $ R$. The principal fibre bundle picture of ${\mathcal{A}}$ will be partially revisited in section \ref{sec:charges}---see figure \ref{fig8}.} \label{fig1} \end{center} \end{figure} The group $G$ is assumed to be compact and semisimple and will be referred to as the {\it charge group} of the theory. In specific applications, we will have $G={\mathrm{SU}}(N)$ in mind. We write $A=A_i\d x^i = A^\alpha_i \d x^i \tau_\alpha$, where $\{\tau_\alpha\}$ is a basis of generators of $\mathrm{Lie}(G)$ which is orthonormal with respect to a rescaled Killing form on $\mathrm{Lie}(G)$, i.e. $\frac{1}{2N}\mathrm{k}(\tau_\alpha,\tau_\beta) = \mathrm{Tr}(\tau_\alpha \tau_\beta) =\delta_{\alpha\beta}$. The space of gauge transformations, i.e. the space of smooth (compactly supported) $G$-valued functions on $\Sigma$, $\mathcal C_o^\infty(\Sigma, G)$, inherits a group structure from $G$ via pointwise multiplication. This group is in general not connected. Although this fact has crucial physical consequences, in this article we shall be concerned exclusively with the properties of infinitesimal gauge transformations, thus turning a blind eye to these issues.\footnote{The non-connectedness of ${\mathcal{G}}$ has physical consequences e.g. for chiral symmetry breaking in the full quantum theory; for a thorough discussion see \cite{Strocchi_SB}.} Most often, we shall focus on the space of {\it quasilocal} gauge transformations within $R\subset \Sigma$, which we call the {\it gauge group} and indicate by \begin{equation} {\mathcal{G}}:=\mathcal C^\infty(R, G)\ni g. \end{equation} The gauge transformation $g: R \to G$ acts on the gauge potential's configuration $A$ as \begin{equation}\label{eq:gt} R_g : {\mathcal{A}} \to {\mathcal{A}} ,\quad A \mapsto A^g = g^{-1} A g + g^{-1}\d g. \end{equation} This defines an action of ${\mathcal{G}}$ on ${\mathcal{A}}$. The orbits of this action, $\mathcal O_A$, are called gauge orbits and they define a foliation\footnote{${\mathcal{G}}$ does not act freely on every orbit. Indeed, certain configurations $A\in{\mathcal{A}}$, said to be reducible, admit a {\it finite-dimensional} stabilizer. For more on this, see section \ref{sec:charges} and in particular appendix \ref{app:slice}, where the consequences of this fact will be explored. Until then, we will ignore this complication. \label{ftnt:generic fol}} of ${\mathcal{A}}$, denoted $\mathcal F = \{ \mathcal O_A\}$ and called the {\it vertical foliation} of ${\mathcal{A}}$. The space of orbits, $\mathcal P \cong {\mathcal{A}}/{\mathcal{G}}$, is the ``gauge-invariant'' space of configurations, which is only defined abstractly through an equivalence relation, and is most often inaccessible for practical purposes. Rigorous mathematical work has shown that ${\mathcal{A}}$ and the vertical foliation $\cal F$ indeed provide (locally\footnote{Cf. previous footnote.}) a principal fibre bundle structure with structure group ${\mathcal{G}}$ \cite{Ebin, Palais, Mitter:1979un, isenberg1982slice, kondracki1983, YangMillsSlice}. We will denote the tangent bundle to the vertical foliation by $V:=\mathrm T\mathcal F \subset \mathrm T{\mathcal{A}}$. An infinitesimal gauge transformation $\xi\in\mathrm{Lie}({\mathcal{G}})\cong \mathcal C^\infty(R, \mathrm{Lie}(G))$ defines a vector field tangent to $\mathcal F$.
This is denoted by $\xi^\#\in V$, and its value at $A$ is \begin{equation} \xi_A^\# = \int ({\mathrm{D}}_i\xi)^\alpha(x) \frac{\delta}{\delta A_i^\alpha(x)} \in \mathrm T_A \mathcal O_A \subset \mathrm T_A{\mathcal{A}}, \label{eq:hash} \end{equation} where ${\mathrm{D}}_i \xi := {\partial}_i \xi + [A_i, \xi] $ is the gauge-covariant derivative in the adjoint representation. Clearly, at $A\in{\mathcal{A}}$, $V_A = \mathrm{Span}(\{\xi^\#_A \}_{\xi\in\mathrm{Lie}({\mathcal{G}})})$. Thus, we say that $V$ comprises the ``pure gauge directions'' in ${\mathcal{A}}$. Later applications, such as the study of charges and especially gluing, require us to consider so-called ``{\it field-dependent} gauge transformations''. Let us first provide a heuristic intuition for this concept: field-dependent gauge transformations correspond to choices of different $\xi\in\mathrm{Lie}({\mathcal{G}})$'s at different configurations $A\in{\mathcal{A}}$ (hence their ``field dependence''). Note that the definition of $\xi^\#$ \eqref{eq:hash} holds point-wise on ${\mathcal{A}}$ and can thus be canonically extended to the field-dependent case. This leads to field-dependent gauge transformations being associated to {\it generic} vertical vector fields in $V\subset \mathrm T{\mathcal{A}}$. These heuristic ideas can be formalized by introducing the {\it action} (or {\it transformation}) {\it Lie algebroid} $(\mathfrak A, \cdot^\#, {\mathcal{A}})$ associated to the action of ${\mathcal{G}}$ on ${\mathcal{A}}$ (see e.g. \cite{Fernandes}). Here, $\mathfrak A = {\mathcal{A}} \times \mathrm{Lie}({\mathcal{G}})$ is a trivial bundle on ${\mathcal{A}}$; $\xi$ is promoted to a (not necessarily constant) section of $\mathfrak A$, i.e. \begin{equation} \xi \in \Gamma({\mathcal{A}}, \mathfrak A) \cong \Omega^0({\mathcal{A}}, \mathrm{Lie}({\mathcal{G}})); \end{equation} and the anchor $\cdot^\# : \mathfrak A \to \mathrm T{\mathcal{A}}$ is still defined through \eqref{eq:hash}. The Lie algebroid $(\mathfrak A, \cdot^\#, {\mathcal{A}})$ is canonically isomorphic to the Lie algebroid of the foliation $\mathcal F$, $\mathrm T\mathcal F\subset \mathrm T{\mathcal{A}}$, understood as the canonical Lie algebroid of vertical vector fields endowed with their Lie bracket. An important formula is the isomorphism between, on one side, the Lie bracket $\llbracket\cdot,\cdot\rrbracket_{{\mathrm{T}}{\mathcal{A}}}$ between vectors in ${\mathrm{T}}{\mathcal{A}}$ and, on the other, the action Lie algebroid bracket in $\frak A$. This isomorphism can be expressed more elementarily in terms of the Lie bracket $[\cdot,\cdot]$ of ${\mathrm{Lie}(\G)}$---which is a point-wise extension of the Lie bracket on $\mathrm{Lie}(G)$---according to: \begin{equation} \llbracket \xi^\# , \eta^\# \rrbracket_{{\mathrm{T}}{\mathcal{A}}} = \big( [\xi,\eta] + \xi^\#(\eta) - \eta^\#(\xi) \big)^\#. \label{eq:bracket_iso} \end{equation} On the right-hand side $\xi$ and $\eta$ are treated as zero-forms on ${\mathcal{A}}$ with values in ${\mathrm{Lie}(\G)}$, thus $\xi^\#(\eta) \equiv \xi^\#(\eta^\alpha)\tau_\alpha \in{\mathrm{Lie}(\G)}$. Moreover, the formulation in terms of Lie algebroids not only allows us to formalize the notion of ``field-dependent'' gauge transformations, but also opens the door to future generalizations of our framework, e.g. general relativity in the formalism of \cite{BlohmannWeinstein11, BlohmannWeinstein18}.
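Since ${\mathcal{A}}$ is an affine space and $\xi^\#_A$ depends affinely on $A$, the isomorphism \eqref{eq:bracket_iso} can also be checked quite concretely. The following minimal Python sketch, a toy discretization introduced purely for illustration, works on a periodic one-dimensional lattice with $G={\mathrm{SU}}(2)$ and field-independent $\xi,\eta$, for which \eqref{eq:bracket_iso} reduces to $\llbracket \xi^\#,\eta^\#\rrbracket_{{\mathrm{T}}{\mathcal{A}}} = ([\xi,\eta])^\#$; the residual it prints is the $O(\d x^2)$ failure of the centered lattice derivative to obey the Leibniz rule, and vanishes in the continuum limit.
\begin{verbatim}
import numpy as np

# su(2) generators tau_a = -i sigma_a / 2
sig = np.array([[[0., 1.], [1., 0.]],
                [[0., -1j], [1j, 0.]],
                [[1., 0.], [0., -1.]]])
tau = -0.5j * sig

def comm(a, b):                        # pointwise matrix commutator
    return a @ b - b @ a

def residual(N):
    x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
    dx = x[1] - x[0]
    def field(k):                      # smooth su(2)-valued lattice field
        c = np.stack([np.sin(k*x), np.cos(k*x), np.sin((k+1)*x)], -1)
        return np.einsum('xa,aij->xij', c, tau)
    A, xi, eta = field(1), field(2), field(3)
    d = lambda f: (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * dx)
    hash_ = lambda u: d(u) + comm(A, u)        # u^#, the formula above
    # Field-space Lie bracket: hash_ is affine in A, so the directional
    # derivative of eta^# along xi^# is exactly comm(hash_(xi), eta).
    lhs = comm(hash_(xi), eta) - comm(hash_(eta), xi)
    rhs = hash_(comm(xi, eta))                 # ([xi, eta])^#
    return np.abs(lhs - rhs).max()

for N in (64, 128, 256):
    print(N, residual(N))   # shrinks as dx^2: pure discretization error
\end{verbatim}
The affine dependence of $\xi^\#$ on $A$ is what makes the field-space directional derivatives exact here, with no functional-analytic subtleties entering the check.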
In terms of the action Lie algebroid, field-{\it in}dependent gauge transformations are constant sections in $\Gamma({\mathcal{A}}, \mathfrak A) \cong \Omega^0({\mathcal{A}}, \mathrm{Lie}({\mathcal{G}}))$. Introducing a formal de-Rham differential ${\mathbb{d}}$ on ${\mathcal{A}}$, this condition reads ${\mathbb{d}} \xi = 0 $. Since field-independent gauge transformations play a distinguished role in our framework, we expect that generalizations beyond the action Lie-algebroid will involve Lie algebroids equipped with a connection, i.e. $({\frak A}, \cdot^\#, {\mathcal{A}}, \mathbb D)$ with $\mathbb D : \Gamma({\mathcal{A}}, {\frak A}) \to \Omega^1({\mathcal{A}})\otimes \Gamma({\mathcal{A}}, {\frak A})$: indeed this allows one to generalize the field-independence condition to $\mathbb D \xi = 0$ (see also \cite{KotovStrobl16b}). An action Lie algebroid like the one appearing in YM theory comes equipped with the canonical flat connection $\mathbb D = {\mathbb{d}}$, ${\mathbb{d}}^2 \equiv 0$. Since vertical directions in $\mathrm T{\mathcal{A}}$ are identified with pure-gauge directions, the `physical' directions can be defined as those transverse to $V$. Thus, physical directions are encoded in a complementary distribution $H\subset \mathrm T{\mathcal{A}}$, $ H \oplus V = \mathrm T{\mathcal{A}} $, that we call the ``horizontal'' distribution. The decomposition $H \oplus V = \mathrm T{\mathcal{A}} $ is however not canonically defined. The {\it choice} of any such decomposition that is compatible with the gauge structure of ${\mathcal{A}}$ is encoded in the choice of an Ehresmann connection on ${\mathcal{A}}$ valued in $\mathrm{Lie}({\mathcal{G}})$, that we call $\varpi$, which satisfies two compatibility conditions. \begin{Def}[Functional connection\footnote{Cf. \cite{kobayashivol1} for the finite dimensional case.} \cite{GomesHopfRiello}] Let \begin{equation} \varpi \in \Omega^1({\mathcal{A}}, \mathrm{Lie}({\mathcal{G}})), \end{equation} then $\varpi$ is said to be a ${\mathcal{G}}$-compatible functional connection form on ${\mathcal{A}}$, or simply a \emph{functional connection}, if it satisfies the following properties for all field-dependent gauge transformations $\xi$: \begin{equation}\label{eq:varpi_def} \begin{dcases} \mathbb i_{\xi^\#}\varpi = \xi,\\ \mathbb L_{\xi^\#} \varpi = [\varpi, \xi] + {\mathbb{d}} \xi. \end{dcases} \end{equation} We will call these properties the \emph{projection} and \emph{covariance} properties, respectively.\footnote{In the non-Abelian theory, this definition is viable only within the dense subset of irreducible configurations. In the Abelian theory, this definition requires an adjustment to the definition of ${\mathcal{G}}$ with important physical consequences. Discussion of these issues is postponed until section \ref{sec:charges}.\label{fnt:reducible}} \end{Def} Notice that this definition demands $\varpi$ to be a local 1-form over field-space, ${\mathcal{A}}$, but says nothing about its locality properties over space, $\Sigma$. Indeed, as we will see in section \ref{sec:SdW}, $\varpi(A)$ will be a nonlocal functional of $A(x)$. We will come back to this point shortly.
Hereafter, double-struck symbols refer to geometrical objects and operations in configuration space: ${\mathbb{d}}$ is the (formal) field-space de Rham differential,\footnote{We prefer this notation to the more common $\delta$, because the latter is often used to indicate vectors as well as forms, hence creating possible confusion.}$^{,}$\footnote{ More concretely, given a zero-form $\mathcal S\in\Omega^0({\mathcal{A}})$, i.e. a functional $\mathcal S:{\mathcal{A}} \to \mathbb R$, and a vector field $\mathbb X = \int X_A \frac{\delta}{\delta A} \in \mathfrak X^1({\mathcal{A}})$, one has that $\mathbb X(\mathcal S) \equiv \mathbb i_{\mathbb X} {\mathbb{d}} \mathcal S = \lim_{\epsilon\to0}\frac{1}{\epsilon}\big(\mathcal S(A + \epsilon X_A) - \mathcal S(A) \big)$. Hence, ${\mathbb{d}} {\cal S}$ is the Fréchet differential of $\cal S$. In the following, we will simply assume that these differentials exist for the class of vector fields we are interested in. We will not pursue functional analytic questions.\label{fn:Frechet}} $\mathbb i$ is the inclusion operator of field-space vectors into field-space forms, and $\mathbb L_\mathbb X$ is the field-space Lie derivative along the vector field $\mathbb X\in\mathfrak X^1({\mathcal{A}})$. Its action on field-space forms is given by Cartan's formula, $\mathbb L_{\mathbb X} = \mathbb i_{\mathbb X} {\mathbb{d}} + {\mathbb{d}} \mathbb i_{\mathbb X} $. Finally, the curly wedge $\curlywedge$ will denote the wedge product in $\Omega^\bullet({\mathcal{A}})$, where $\bullet$ stands in for arbitrary degrees. The projection property means that $\varpi$ defines a horizontal complement $H$ to the fixed vertical space $V$, via \begin{equation} H := \ker (\varpi). \label{eq:Hkervarpi} \end{equation} The horizontal projector $\hat H : {\mathrm{T}}{\mathcal{A}} \to H$ is thus given by $\mathbb X \mapsto \hat H(\mathbb X):=\mathbb X-\varpi(\mathbb X)^\#$. See figure \ref{fig2}. The covariance property intertwines the action of vertical vector fields on 1-forms over ${\mathcal{A}}$ (the lhs) with the adjoint action of $\mathrm{Lie}({\mathcal{G}})$ on itself (the rhs). This condition ensures the compatibility of the above definition with the group action of ${\mathcal{G}}$ on ${\mathcal{A}}$, i.e. it embodies the covariance of $\varpi$ under gauge transformations. The term ${\mathbb{d}}\xi$ on the right-hand side of the covariance property is only present if $\xi$ is an infinitesimal {\it field-dependent} gauge transformation. Using Cartan's formula, its presence can be deduced from the covariance of $\varpi$ under field-independent gauge transformations and the projection property of $\varpi$, which holds pointwise in field-space (see \cite{GomesHopfRiello}). \begin{figure}[t] \begin{center} \includegraphics[scale=.17]{fig_new_2} \caption{A pictorial representation of the split of ${\rm T}_A {\mathcal{A}}$ into a vertical subspace $V_A$ spanned by $\{\xi_A^\#, \xi\in{\mathrm{Lie}(\G)}\}$ and its horizontal complement $H_A$ defined as the kernel at $A$ of a functional connection $\varpi$. With dotted lines, we represent a different choice of horizontal complement associated to a different choice of $\varpi$. } \label{fig2} \end{center} \end{figure} \begin{Rmk}[On nonlocality] Since a gauge transformation transforms $A$ by a derivative of the gauge parameter, in order to satisfy the projection property, $\varpi$ must be nonlocal over $\Sigma$.
Indeed, recalling that on ${\mathrm{T}}{\mathcal{A}}$, $\xi^\# = \int {\mathrm{D}} \xi \frac{\delta}{\delta A}$ \eqref{eq:hash}, the projection property $\mathbb i_{\xi^\#}\varpi = \xi$ can be formally re-written as $\varpi({\mathrm{D}} \xi) = \xi$. From this perspective, $\varpi$ is morally the inverse operator to the covariant derivative ${\mathrm{D}} = \d + A$ and as such it must be an integral operator. That is, making it explicit that $\varpi$ is valued in $\mathrm{Lie}({\mathcal{G}})$, $\varpi$ is expected to be of the form \begin{equation} \varpi^\alpha(x) = \int \d y \;\varpi^{\alpha,}{}^j_{\beta}(x,y) {\mathbb{d}} A_j^\beta(y) \end{equation} for some integral kernel $\varpi^{\alpha,}{}^j_{\beta}(x,y)$. Then the equation $\xi = \varpi(\xi^\#) $ reads: \begin{equation} \xi^\alpha(x) = \int \d y \;\varpi^{\alpha,}{}^j_{\beta}(x,y) {\mathrm{D}}_j\xi^\beta(y). \end{equation} In section \ref{sec:SdW}, we will introduce an explicit example of a functional connection that has this form (see also section \ref{sec:dressing} for a well-known realization in electromagnetism). Conversely, by working over the space of matter fields that transform homogeneously under gauge transformations (no derivatives involved), spatially-local functional connections can be constructed. See e.g. \cite[Sect. 7]{GomesHopfRiello}. \end{Rmk} Given a functional connection form satisfying \eqref{eq:varpi_def}, alongside ${\mathbb{d}}$ we can introduce the horizontal differential, ${\mathbb{d}}_H$ \cite{GomesRiello2016, GomesRiello2018, GomesHopfRiello}. Horizontal differentials are by definition transverse to the vertical, pure gauge, directions: \begin{Def}[Horizontal differential]\label{def:varpi_def} The horizontal differential ${\mathbb{d}}_H \mu$ of a form $\mu\in\Omega^k({\mathcal{A}})$ is the $(k+1)$-form such that $\mathbb i_{\mathbb X} {\mathbb{d}}_H \mu := \mathbb i_{\hat H(\mathbb X)}{\mathbb{d}} \mu$ for all $\mathbb X\in{\mathrm{T}}{\mathcal{A}}$. \end{Def} Of course, the definition implies $\mathbb i_{\xi^\#}{\mathbb{d}}_H \mu \equiv 0$. The following proposition shows that a simpler, and more intuitive, characterization of ${\mathbb{d}}_H$ in terms of a ``$\varpi$-covariant'' differential on field space can be given for horizontal differentials of {\it horizontal and equivariant} field-space forms of general degree. For example, one could consider a $\lambda \in \Omega^k({\mathcal{A}})\otimes\Gamma(\Sigma, W)$ that, for all field-independent $\xi$'s (${\mathbb{d}} \xi =0$), satisfies (\textit{i}) $\mathbb i_{\xi^\#} \lambda =0$ (horizontality) and (\textit{ii}) $\mathbb L_{\xi^\#} \lambda^a = - (R(\xi))^{a}{}_b \lambda^b$ (equivariance), where $(W,R)$ is a representation of $G$, and $a,b$ are indices in the vector space $W$. Then: \begin{Prop} The horizontal differential of a horizontal and equivariant form $\lambda \in \Omega^k({\mathcal{A}})\otimes\Gamma(\Sigma, W)$ is itself horizontal and equivariant, and it is given by \begin{equation} {\mathbb{d}}_H \lambda^a = {\mathbb{d}} \lambda^a + (R(\varpi))^{a}{}_b\curlywedge \lambda^b \in \Omega^{k+1}({\mathcal{A}})\otimes\Gamma(\Sigma,W), \label{eq:dH_equivariant} \end{equation} where $R(\varpi) \in \Omega^1({\mathcal{A}}) \otimes\Gamma(\Sigma, \mathrm{End}(W))$ is constructed from the representation $R: \mathrm{Lie}(G) \to \mathrm{End}(W)$ and the connection form $\varpi\in\Omega^1({\mathcal{A}}, \mathrm{Lie}({\mathcal{G}})) \cong \Omega^1({\mathcal{A}}) \otimes \Gamma(\Sigma, \mathrm{Lie}(G))$ in the obvious way.
\end{Prop} \begin{proof} The proposition is a straightforward application of \eqref{eq:varpi_def}, the properties of $\lambda$, and the anticommutativity of ${\mathbb{d}}$ and $\curlywedge$; see \cite{GomesRiello2016}. \end{proof} The all-important horizontal differential of $A_i$, seen as a ``coordinate'' map from ${\mathcal{A}}$ to $\Omega^1(\Sigma,\mathrm{Lie}(G))$, is characterized by the following: \begin{Prop} The horizontal differential of $A_i$ is given by \begin{equation}\label{eq:dH} {\mathbb{d}}_H A_i = {\mathbb{d}} A_i - {\mathrm{D}}_i \varpi, \end{equation} and it is equivariant under any (possibly field-dependent) gauge transformation, that is \begin{equation} \mathbb L_{\xi^\#} {\mathbb{d}}_H A_i = [\xi, {\mathbb{d}}_H A_i] . \label{eq:LddHA} \end{equation} \end{Prop} \begin{proof} These two statements can be easily checked using \eqref{eq:varpi_def}. \end{proof} A central property of the horizontal distribution $H\subset {\mathrm{T}}{\mathcal{A}}$ is its anholonomicity, i.e. its non-integrability in the sense of the Frobenius theorem---figure \ref{fig-anholo}. As standard, this is characterized by the failure of the Lie bracket between two horizontal vector fields to be itself horizontal. Thanks to the projection property of $\varpi$, this quantity can be encoded in the following definition: \begin{figure}[t] \begin{center} \includegraphics[scale=.17]{anholo} \caption{Pictorial representation of anholonomic horizontal planes in $\cal A$, corresponding to a non-vanishing curvature $\mathbb F\neq0$. } \label{fig-anholo} \end{center} \end{figure} \begin{Def}[Functional curvature \cite{Singer:1981xw, GomesHopfRiello}] Given a functional connection $\varpi$, the anholonomicity of the associated horizontal distribution $H_\varpi=\ker(\varpi)\subset{\mathrm{T}}{\mathcal{A}}$, as quantified by the functional two-form \begin{equation} \mathbb F_\varpi := \varpi\big( \big\llbracket \hat H(\cdot), \hat H(\cdot) \big\rrbracket \big) \in \Omega^2({\mathcal{A}},\mathrm{Lie}({\mathcal{G}})) \label{eq:FFunholo} \end{equation} is called the \emph{functional curvature} of the functional connection $\varpi$. The subscript $\bullet_\varpi$ will generally be omitted. \end{Def} As standard in the theory of principal fibre bundles, the curvature of $\varpi$ satisfies the following properties: \begin{Prop} The curvature $\mathbb F$ of $\varpi$ is horizontal, $\mathbb i_{\xi^\#} \mathbb F \equiv 0$, equivariant, $\mathbb L_{\xi^\#} \mathbb F = [\mathbb F, \xi]$, and its horizontal differential satisfies the algebraic Bianchi identity ${\mathbb{d}}_H \mathbb F \equiv 0$. Moreover, $\mathbb F$ can be expressed as \begin{equation} \mathbb F ={\mathbb{d}}_H\varpi \equiv {\mathbb{d}} \varpi + \tfrac12[\varpi \stackrel{\curlywedge}{,}\varpi] . \label{eq:FF} \end{equation} \end{Prop} \begin{proof} Horizontality is manifest from the definition of $\mathbb F$. The equivalence between the definition \eqref{eq:FFunholo} and the expressions in \eqref{eq:FF} is standard and can be checked using \eqref{eq:varpi_def}, \eqref{eq:Hkervarpi}, and Cartan's calculus.\footnote{See e.g. \cite[Sect. 4.2]{GomesHopfRiello}.} Once the right-most formula of \eqref{eq:FF} has been established, the other properties can be checked by direct computation. \end{proof} We conclude this section with a (new) simple proposition which will help us clarify the relationship between $\varpi$ and gauge fixings in section \ref{sec:symred}. \begin{Lemma}[On exact connection forms]\label{Lemma:exactvarpi} The functional connection $\varpi$ is exact, i.e.
$\varpi = {\mathbb{d}} \varsigma$ for some $\varsigma\in\Omega^0({\mathcal{A}},{\mathrm{Lie}(\G)})$, if and only if $G$ is Abelian and $\varpi$ is flat. \end{Lemma} \begin{proof} If $G$ is Abelian, it follows from \eqref{eq:FF} and the affine nature of ${\mathcal{A}}$ that $\varpi$ is exact if and only if $\varpi$ is flat. Conversely, assume that $\varpi = {\mathbb{d}} \varsigma$. Then, through Cartan's formula, the projection property \eqref{eq:varpi_def} implies \begin{equation} \mathbb L_{\xi^\#}\varsigma = \mathbb i_{\xi^\#} \varpi = \xi \label{eq:varsigma} \end{equation} for all $\xi\in\Omega^0({\mathcal{A}},{\mathrm{Lie}(\G)})$. From this, $\mathbb L_{\xi^\#} \varpi = \mathbb L_{\xi^\#} {\mathbb{d}} \varsigma = {\mathbb{d}} \mathbb L_{\xi^\#} \varsigma = {\mathbb{d}} \xi$. Comparing this formula with the second of \eqref{eq:varpi_def}, it follows that for all $\xi$, $[\xi,\varpi]=0$. By contracting with an arbitrary $\eta^\#$ and using again the projection property, one concludes that $G$ is Abelian; flatness then follows, since $\mathbb F = {\mathbb{d}}\varpi = {\mathbb{d}}^2\varsigma = 0$. \end{proof} \subsection{Metric structure on ${\mathcal{A}}$ and the Singer-DeWitt connection}\label{sec:SdW} Consider a positive-definite (super)metric on ${\mathcal{A}}$, i.e. $\mathbb G \in \Gamma( {\mathrm{T}}^*{\mathcal{A}} \otimes_S {\mathrm{T}}^*{\mathcal{A}})$, with $\otimes_S$ standing for the symmetric part of the tensor product. Through such a metric one can fix a notion of horizontality via the condition of orthogonality to the vertical foliation ${\cal F} \subset {\mathcal{A}}$: \begin{equation} H_{\mathbb G} := ({\mathrm{T}} {\cal F})^\perp \equiv V^\perp. \end{equation} The question is whether such a notion of horizontality can be encoded in a connection form, i.e. if it is gauge-covariant along the orbits. In \cite{GomesHopfRiello}, we showed that this is the case if and only if $\mathbb G$ is gauge compatible in the following sense:\footnote{Notice that this notion of gauge-compatibility for the supermetric is different from that for a ``bundle-like'' metric common in the mathematical literature (e.g. as a sufficient condition for the existence of Ehresmann connections \cite{BlumenthalHebda84, Koike99}). The bundle-like condition can be written without reference to field-independent vertical vectors and involves the inner product of two horizontal vectors, rather than of one vertical and one horizontal vector. Although we won't make use of it, we write here, as a reference, the bundle-like condition in our (infinite dimensional) notation: $(\mathbb L_{\eta^\#} \mathbb G)(\mathbb h, \mathbb h') = 0$ for all $\eta^\# \in V$ and $\mathbb h, \mathbb h'\in H_{\mathbb G}$. \label{fnt:bundlelike}} \begin{Def}[Gauge compatible supermetric] A supermetric $\mathbb G \in \Gamma( {\mathrm{T}}^*{\mathcal{A}} \otimes_S {\mathrm{T}}^*{\mathcal{A}})$ is said to be \emph{gauge compatible} if \begin{equation} (\mathbb L_{\xi^\#} \mathbb G)( \eta^\# , \mathbb h) = 0 \qquad ({\mathbb{d}} \xi = 0) \label{eq:GGcov} \end{equation} holds for all field-{\it in}dependent\footnote{Notice that this condition requires a notion of field-independence for the $\xi$'s which is automatic in the YM context (which is described by an action Lie algebroid), but might not be obvious in the context of a more general Lie algebroid over some configuration space. Cf. footnote \ref{fnt:LG=0}.} gauge transformations $\xi\in{\mathrm{Lie}(\G)}$, all vertical vectors $\eta^\#\in V$, and all horizontal vectors $ \mathbb h \in H_{\mathbb G}$.
\end{Def} \begin{Prop} Let $\mathbb G$ be a gauge compatible supermetric. Then the following equation implicitly defines a $\varpi_{\mathbb G}$ satisfying the defining properties \eqref{eq:varpi_def}, \begin{equation} \mathbb G (\xi^\#, \mathbb X - \varpi_{\mathbb G}^\#(\mathbb X) ) \stackrel{!}{=} 0 \quad \forall \xi,\mathbb X. \label{eq:GGvarpi} \end{equation} \end{Prop} \begin{proof} See \cite[Section 4.1]{GomesHopfRiello}. \end{proof} In YM theory, a most natural choice of supermetric is suggested by inspecting its second-order Lagrangian, and in particular its kinetic term. In temporal gauge, on the $(D+1)$-dimensional spacetime $M \cong \Sigma\times \mathbb R$, this is $L = K - U$ with potential \begin{equation} U = \tfrac{1}{4} \int_{ \Sigma} \d^D x \sqrt{g} \, g^{ii'} g^{jj'} \mathrm{Tr}( F_{ij} F_{i'j'}), \end{equation} where $F_{ij} = 2{\partial}_{[i} A_{j]} + [A_i,A_j]$, and with kinetic term \begin{equation} K = \tfrac12 \int_{ \Sigma} \d^D x \sqrt{g} \, g^{ij} \mathrm{Tr}( \dot A_i \dot A_j) = \tfrac12 \mathbb G(\dot{\bb A},\dot{\bb A}). \end{equation} In the last term we have introduced the velocity vector\footnote{Notice that the dot is just a notational device and does not stand here for any time derivative: on a par with the momentum $E$, the velocity $\dot A$ is here an independent quantity relative to $A$.} $\dot{\mathbb A} = \int \dot A \frac{\delta}{\delta A} \in {\mathrm{T}}{\mathcal{A}}$, as well as the kinetic supermetric $\mathbb G$: \begin{Def}[Kinetic supermetric] On the quasilocal configuration space of YM theory ${\mathcal{A}}$, the \emph{kinetic supermetric} is defined as \begin{equation} \mathbb G(\mathbb X, \mathbb Y) := \int_R \d^D x \sqrt{g}\, g^{ij} \mathrm{Tr}( \mathbb X_i \mathbb Y_j) \qquad \forall \mathbb X, \mathbb Y\in\mathrm T{\mathcal{A}}. \label{eq:GG} \end{equation} From now on the symbol $\mathbb G$ will refer exclusively to the kinetic supermetric \eqref{eq:GG}. \end{Def} It is then straightforward to prove that \begin{Prop} The kinetic supermetric $\mathbb G$ is gauge invariant, i.e.\footnote{This condition implies \eqref{eq:GGcov} as well as the bundle-like condition mentioned in the previous footnote.}$^,$\footnote{A finite dimensional analogue of this condition was recently studied and generalized, by Kotov and Strobl \cite{KotovStrobl16a,KotovStrobl16b}, to more general Lie algebroids ${\frak A}_{KS}$ than the action Lie algebroid studied here. They named Lie algebroids satisfying such a generalized condition Killing Lie algebroids, and related their properties to the ability of ``gauging'' a Poisson-sigma-model with an ${\frak A}_{KS}$-symmetry.\label{fnt:LG=0}} \begin{equation} \mathbb L_{\xi^\#} \mathbb G = 0 \qquad ({\mathbb{d}} \xi = 0), \end{equation} and therefore gauge compatible. \end{Prop} One can then introduce the connection associated to $\mathbb G$ (see \cite{GomesHopfRiello} for an account of the historical origin of this connection in gauge theories): \begin{Def}[Singer-DeWitt connection] The connection associated to the kinetic supermetric \eqref{eq:GG} via \eqref{eq:GGvarpi} is called the {\it Singer-DeWitt (SdW) connection}, $\varpi_{\text{SdW}}$. SdW horizontal differentials will be denoted by ${\mathbb{d}}_\perp = {\mathbb{d}} + \varpi_{\text{SdW}}$. \end{Def} An independent argument for the derivation of $\varpi_{\text{SdW}}$, based on generalizing Dirac's dressing of the electron to non-Abelian theories in the presence of boundaries, is discussed in section \ref{sec:dressing}.
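Before moving on, it is worth recording the simple mechanism behind the gauge invariance of $\mathbb G$ (a one-line sketch, under the simplifying assumption that $\mathbb X$ and $\mathbb Y$ are equivariant, $\mathbb L_{\xi^\#}\mathbb X_i = [\xi,\mathbb X_i]$): for field-independent $\xi$,
\begin{equation*}
\mathbb L_{\xi^\#}\, \mathbb G(\mathbb X,\mathbb Y) = \int_R \d^D x \sqrt{g}\, g^{ij}\, \Big( \mathrm{Tr}\big([\xi,\mathbb X_i]\, \mathbb Y_j\big) + \mathrm{Tr}\big(\mathbb X_i\, [\xi,\mathbb Y_j]\big) \Big) = 0,
\end{equation*}
where the two terms cancel by the cyclicity of the trace, and we used that the coefficients $\sqrt{g}\, g^{ij}$ of $\mathbb G$ are field-independent.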
Although we have motivated the choice of the kinetic supermetric \eqref{eq:GG} by reference to the Lagrangian formulation of YM, this reference is not necessary for the analysis that will follow---and therefore we won't pursue it any further. However, we find it relevant that the whole YM Lagrangian is nothing but a gauge- and Lorentz-covariant extension of the kinetic term $K=\tfrac12 \mathbb G(\dot{\bb A},\dot{\bb A})$: this simple observation explains the wealth of properties satisfied by the connection form associated through \eqref{eq:GGvarpi} to the kinetic supermetric. An alternative, and fully explicit, characterization of the SdW connection can be given in terms of an elliptic boundary value problem:\footnote{This proposition is subject to the same limitations as the definition \eqref{eq:varpi_def}. See footnote \ref{fnt:reducible}.} \begin{Prop}[SdW boundary value problem]\label{prop:SdWbvp} Over a bounded region $R$, ${\partial} R\neq \emptyset$, $\varpi_{\text{SdW}}$ can be equivalently defined through the following \emph{elliptic} boundary value problem\footnote{This equation between 1-forms should be understood as follows. Given any $\mathbb X = \int X_i^\alpha \frac{\delta}{\delta A_i^\alpha}$, its contraction into \eqref{eq:SdW}---recall, ${\mathbb{d}} A_i(\mathbb X) \equiv X_i$---defines the contraction $\varpi_{\text{SdW}}(\mathbb X)$ as the unique solution to: $$ \begin{dcases} {\mathrm{D}}^2 \varpi_{\text{SdW}}(\mathbb X) = {\mathrm{D}}^i X_i & \text{in }R,\\ {\mathrm{D}}_s \varpi_{\text{SdW}}(\mathbb X) = X_s & \text{at }{\partial} R. \end{dcases} $$ Knowledge of $\varpi_{\text{SdW}}(\mathbb X)$ for an arbitrary $\mathbb X$ is what defines the one-form $\varpi_{\text{SdW}}$.} \begin{equation} \begin{dcases} {\mathrm{D}}^2 \varpi_{\text{SdW}} = {\mathrm{D}}^i {\mathbb{d}} A_i & \text{in }R,\\ {\mathrm{D}}_s \varpi_{\text{SdW}} = {\mathbb{d}} A_s & \text{at }{\partial} R, \end{dcases} \label{eq:SdW} \end{equation} where ${\mathrm{D}}^2 = {\mathrm{D}}^i{\mathrm{D}}_i$ is the covariant Laplace operator, and the subscript $\bullet_s$ denotes the contraction with the outgoing unit normal $s^i$ at ${\partial} R$. We will call this type of elliptic boundary value problem (with this covariant-Neumann boundary condition) a \emph{SdW boundary value problem}. \end{Prop} \begin{proof} Requiring $0=\mathbb G\Big( \mathbb X - \varpi_{\text{SdW}}^\#(\mathbb X)\, , \, \xi^\# \Big) = \int \sqrt{g}\,g^{ij}\mathrm{Tr}\Big( \big(X_i - {\mathrm{D}}_i \varpi(\mathbb X) \big){\mathrm{D}}_j\xi\Big)$ to hold for all $\xi$ and $\mathbb X$ gives condition \eqref{eq:SdW} after an integration by parts. See \cite{GomesRiello2018, GomesHopfRiello} for a detailed derivation. \end{proof} The following proposition then characterizes the curvature of the SdW connection in terms of another SdW boundary value problem: \begin{Prop}[SdW curvature] The curvature of the SdW-connection $\varpi_{\text{SdW}}$, denoted $\mathbb F_{\text{SdW}}$ \eqref{eq:FFunholo}, satisfies the following boundary value problem: \begin{equation} \begin{dcases} {\mathrm{D}}^2\mathbb F_{\text{SdW}} = g^{ij}[ {\mathbb{d}}_\perp A_i \stackrel{\curlywedge}{,} {\mathbb{d}}_\perp A_j] & \text{in $R$},\\ {\mathrm{D}}_s \mathbb F_{\text{SdW}} = 0 & \text{at ${\partial} R$}. \end{dcases} \label{eq:FFsdw} \end{equation} Notice that $\mathbb F_{\text{SdW}} \equiv 0$ in the Abelian case. \end{Prop} \begin{proof} In the absence of boundaries, this formula was given by Singer in \cite{Singer:1981xw}. In \cite[eq.
5.6]{GomesHopfRiello}, the differential equation for $\mathbb F_{\text{SdW}}$ is explicitly derived in the context without boundary. To find the boundary condition used in \eqref{eq:FFsdw}, we note that, in \cite{GomesHopfRiello}, to obtain equation 5.6 one uses equations 5.4 and 5.5. The first requires no integration by parts, contrary to the second, which yields an extra boundary term: $\oint \sqrt{h} \,\mathrm{Tr}(\xi s^i{\mathrm{D}}_i\mathbb F)$. Hence, from the arbitrariness of $\xi$ at the boundary, we deduce the boundary condition of \eqref{eq:FFsdw}. \end{proof} In \cite{GomesHopfRiello}, the significance of $\mathbb F_{\text{SdW}}$ for the non-Abelian theory is extensively discussed in relation to: (\textit{i}) the obstruction to the extension of the dressing of matter fields \`a la Dirac (see e.g. \cite{Dirac:1955uv, Lavelle:1994rh, bagan2000charges, bagan2000charges2}) to the non-Abelian setting \cite{Lavelle:1995ty}; (\textit{ii}) the Gribov problem \cite{Gribov:1977wm, Singer:1978dk}; and (\textit{iii}) the Vilkovisky-DeWitt geometric effective action \cite{Rebhan1987,Vilkovisky:1984st,DeWitt_Book, vilkovisky1984gospel, Pawlowski:2003sk,Branchina:2003ek}. See also section \ref{sec:dressing} in the present article. As a consequence of the bulk and boundary properties of $\varpi_{\text{SdW}}$, SdW-horizontal modes $\mathbb h$, i.e. those in the kernel of $\varpi_{\text{SdW}}$ (that is $\mathbb i_{\mathbb h}\varpi_{\text{SdW}} = 0$), satisfy specific bulk and boundary properties:\footnote{Of course, the properties of a SdW-horizontal mode can be deduced with no reference to $\varpi_{\text{SdW}}$, but only to $\mathbb G$. See the proof of the following proposition.} \begin{Prop}[SdW horizontal modes]\label{prop:SdWhoriz} The quasilocal \emph{SdW-horizontal modes} of the gauge potential, $\delta A = h$, are covariantly divergenceless in the bulk and vanish when contracted with $s^i$ at the boundary ${\partial} R$, i.e. \begin{equation} \mathbb h = \int h_i \frac{\delta}{\delta A_i} \quad\text{is SdW-horizontal iff}\quad \begin{dcases} {\mathrm{D}}^i h_i = 0 & \text{in } R,\\ h_s = 0 & \text{at }{\partial} R. \end{dcases} \label{eq:horizontalpert} \end{equation} Physically, SdW-horizontal modes generalize the notion of a transverse photon to the non-Abelian setting and to the presence of boundaries. We will therefore sometimes call them \emph{radiative} modes. \end{Prop} \begin{proof} Contracting $\mathbb h$ into the boundary value problem for the SdW connection \eqref{eq:SdW}, and using $\mathbb i_{\mathbb h}\varpi_{\text{SdW}}= 0$ (by definition of SdW-horizontality) and the identity $\mathbb i_{\mathbb h}{\mathbb{d}} A_i = h_i$, readily gives the sought result. Alternatively, observe that the SdW horizontal modes $\mathbb h$ are by definition $\mathbb G$-orthogonal to all $\xi^\#\in V\subset {\mathrm{T}}{\mathcal{A}}$, thus $0\stackrel{!}{=}\mathbb G( \mathbb h, \xi^\#) = \int \sqrt{g}\, g^{ij} \mathrm{Tr}( h_i {\mathrm{D}}_j \xi ) = -\int \sqrt{g} \,\mathrm{Tr}({\mathrm{D}}^i h_i \xi) + \oint \sqrt{h}\,\mathrm{Tr}(h_s \xi) $. The conclusion then follows from the arbitrariness of $\xi$. \end{proof} Mathematically, the SdW decomposition ${\mathbb{d}} A = {\mathbb{d}}_\perp A + {\mathrm{D}} \varpi_{\text{SdW}}$ is a non-Abelian generalization of the orthogonal Helmholtz decomposition (in the presence of boundaries) of 1-tensors into a pure-gradient part and a divergence-free part.
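In the Abelian case this dictionary is immediate. As a minimal illustration (with the covariant derivative reducing to the metric one, ${\mathrm{D}}_i \to \nabla_i$), contracting the decomposition with a generic $\mathbb X = \int X_i \frac{\delta}{\delta A_i}$ gives
\begin{equation*}
X_i = \underbrace{\big( X_i - {\partial}_i\, \varpi_{\text{SdW}}(\mathbb X) \big)}_{=:\, h_i} + {\partial}_i\, \varpi_{\text{SdW}}(\mathbb X), \qquad \begin{dcases} \nabla^i h_i = 0 & \text{in } R,\\ h_s = 0 & \text{at } {\partial} R, \end{dcases}
\end{equation*}
with $\varpi_{\text{SdW}}(\mathbb X)$ the solution of the Neumann problem $\nabla^2 \varpi_{\text{SdW}}(\mathbb X) = \nabla^i X_i$, ${\partial}_s \varpi_{\text{SdW}}(\mathbb X) = X_s$ obtained from \eqref{eq:SdW}: this is precisely the Helmholtz split of the 1-tensor $X_i$ into a pure-gradient part and a divergence-free, fluxless part.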
Notice that, consistently with our goals, the SdW boundary value problem \eqref{eq:SdW} defines a connection form on ${\mathcal{A}}$ that is quasi-local to $R$: i.e. non-local within $R$ (it requires the inversion of a covariant Laplacian) but completely determined by the value of the fields within $R$. For this to work, it is important that the boundary value problem for $\varpi_{\text{SdW}}$ involves boundary conditions for $\varpi_{\text{SdW}}$, but {\it not} for the background gauge potential $A_i$ nor for its fluctuations ${\mathbb{d}} A_i$. The boundary conditions on $\varpi_{\text{SdW}}$ ensure that the connection in a region $R$, and the corresponding horizontal projections, are uniquely defined. In this way, no restriction is imposed on the gauge-variant fields $A_i$ nor on the gauge parameters $\xi$, neither in $R$ nor at ${\partial} R$---but restrictions naturally arise for the horizontal linearized fluctuations $h_i$. In this regard the horizontality conditions \eqref{eq:horizontalpert} can be interpreted as a (gauge-covariant) gauge-fixing for the linearized fluctuations in a bounded region. This gauge fixing encompasses the \textit{entire} physical content of possible linearized fluctuations over a given region; that is, although the boundary conditions might seem restrictive, a completely general linear fluctuation $\mathbb X\in \mathrm{T}\mathcal{A}$, with \textit{any} other boundary condition, can be generated from a $\mathbb h$ of the form \eqref{eq:horizontalpert} with the aid of a unique infinitesimal gauge transformation. Importantly, this is only possible because gauge freedom at the boundary is unrestricted, and therefore the $\mathbb G$-orthogonal projection is a complete and viable gauge fixing for the linearized fluctuations around $A\in{\mathcal{A}}$. In particular, the technical demand that the gauge parameters $\xi$ are fully {\it un}constrained at the boundary will have far-reaching repercussions, and it distinguishes our approach from others in the literature, e.g. \cite{DonnellyFreidel, Balachandran:1994up, Avery_2016, Speranza:2017gxd, Geiller:2017xad, AliDieter, Camps} (cf. also the discussion of the ``edge mode'' framework in the conclusions). Now that we have established these fundamental properties of the SdW connection and of the SdW horizontal modes, we shall comment on some general properties of the SdW boundary value problem. In footnote \ref{fnt:reducible}, we have anticipated (without explanation) that a connection form satisfying the projection and covariance properties can be successfully defined in the non-Abelian theory only on a dense subset of configurations $A\in{\mathcal{A}}$, and in the Abelian theory only for a slightly modified definition of the gauge group ${\mathcal{G}}$. Interestingly, the kernel of the SdW boundary value problem reflects these issues. In electromagnetism (EM), the boundary value problem \eqref{eq:SdW} is of Neumann type. This means that in EM the solution to this boundary value problem is not unique, since constant gauge transformations are in its kernel. These are precisely the gauge transformations that have to be ``removed'' from ${\mathcal{G}}$ for the definition of a connection form to apply. It is intriguing that these gauge transformations are related to the definition of the electric charge, an observation that we will develop in section \ref{sec:charges}.
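To exhibit the kernel explicitly (an elementary observation, spelled out here for emphasis): if $\varpi^{(1)}$ and $\varpi^{(2)}$ both solve \eqref{eq:SdW} when contracted with the same $\mathbb X$, their difference $\chi := \varpi^{(1)}(\mathbb X) - \varpi^{(2)}(\mathbb X)$ satisfies the homogeneous SdW boundary value problem
\begin{equation*}
\begin{dcases} {\mathrm{D}}^2 \chi = 0 & \text{in } R,\\ {\mathrm{D}}_s \chi = 0 & \text{at } {\partial} R. \end{dcases}
\end{equation*}
In EM, where ${\mathrm{D}}_i = {\partial}_i$, this homogeneous Neumann problem is solved precisely by the constants, in agreement with the observation above.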
In general, the kernel of the SdW boundary value problem is characterized by the following: \begin{Def}[Reducible configurations] Configurations $A\in{\mathcal{A}}$ such that the equation ${\mathrm{D}}\chi\equiv \d \chi + [A, \chi]=0$ admits a nontrivial solution $\chi\in{\mathrm{Lie}(\G)}\setminus\{0\}$ are called \emph{reducible}; all other configurations are called \emph{ir}reducible. At a reducible configuration $A\in{\mathcal{A}}$, the nonvanishing solutions $\chi$ are called \emph{reducibility parameters} or \emph{stabilizers} of $A$. \end{Def} \begin{Prop}[Kernel of the SdW boundary value problem]\label{prop:Dchi=0} At the background configuration $A\in{\mathcal{A}}$, the kernel of the SdW boundary value problem is given by the reducibility parameters of $A$. \end{Prop} \begin{proof} We want to show that ${\mathrm{D}}^2\chi = 0 = {\mathrm{D}}_s \chi{}_{|{\partial} R}$ if and only if (iff) ${\mathrm{D}} \chi =0$ throughout $R$. One implication is obvious. For the other, we observe that $0 = - \int \mathrm{Tr}(\chi {\mathrm{D}}^2\chi) + \oint\mathrm{Tr}(\chi {\mathrm{D}}_s \chi) = \mathbb G( \chi^\#, \chi^\#) $. From the non-degeneracy of $\mathbb G$, this vanishes iff $\chi^\# = 0$, i.e. iff ${\mathrm{D}} \chi \equiv 0$. \end{proof} Note the prominent role the SdW boundary condition plays in this proposition: e.g. replacing it with a Dirichlet condition would leave us with a kernel which is always trivial. Reducibility is to YM configurations as the existence of Killing vector fields is to spacetime metrics in general relativity. We will argue in section \ref{sec:charges} that, just as for Killing vector fields, the existence of reducibility parameters is related to the existence of ``global'' charges in YM---the electric charge being the most basic such example. In EM (or any Abelian theory), all configurations are reducible, with $\chi = \text{const}$ (hence the universal nature of the electric charge). In non-Abelian YM, on the other hand, reducible configurations are ``rare,'' just like spacetime metrics with Killing vector fields are rare. More precisely, reducible configurations constitute a meagre subset of ${\mathcal{A}}$---i.e. {\it ir}reducible configurations are everywhere dense in ${\mathcal{A}}$. In section \ref{sec:charges} (and appendix \ref{app:slice}), we will review the topological and geometrical properties of the set of reducible configurations within the configuration space ${\mathcal{A}}$. From our field-space perspective it is indeed important that these configurations are imprinted in the geometry of ${\mathcal{A}}$ as well as in that of the reduced field space ${\mathcal{A}}/{\mathcal{G}}$. From that discussion it will be clear why at reducible configurations {\it no} connection form can be defined. Until then, however, we will work in the generic subspace of ${\mathcal{A}}$ and neglect the existence of reducible configurations or---in the Abelian case---we will assume that ${\mathcal{G}}$ is appropriately replaced. In sum, {\it unless stated otherwise, we will henceforth consider the SdW boundary value problem as invertible}.\footnote{The fact that the SdW boundary value problem is not invertible at reducible configurations means that the definition \eqref{eq:SdW} of $\varpi_{\text{SdW}}$ is not viable there. In fact, it turns out, the very notion of connection form fails at reducible configurations.
Again, this is discussed in section \ref{sec:charges}.} \\ We conclude this section with a remark that will play an important role in the following: the SdW connection for EM (for the appropriately modified ${\mathcal{G}}$) provides a concrete example of a connection form which is exact. \begin{Thm}[SdW connection in EM]\label{thm:EM} In noncompact electromagnetism (EM),\footnote{As mentioned above, the SdW boundary value problem for $G=\mathrm U(1)$ or $\mathbb R$ is not invertible. For the time being we will work formally: in section \ref{sec:charges} we will show how to get around this issue by modding-out constant gauge transformations. The present conclusions will not be altered by the more rigorous treatment.} i.e. if $G = \mathbb R$, the SdW horizontal distribution $H_{\mathbb G} \equiv V^\perp$ is integrable and related to the Coulomb gauge fixing. \end{Thm} \begin{proof} Define the real-valued field-space function $\varsigma\in\Omega^0({\mathcal{A}})$ to be the solution of the following SdW boundary value problem: \begin{equation} \begin{dcases} \nabla^2 \varsigma = \nabla^i A_i & \text{in }R,\\ {\partial}_s \varsigma = A_s & \text{at }{\partial} R, \end{dcases} \label{eq:sdwvarsigma} \qquad(\text{EM}). \end{equation} Then, the SdW connection satisfying \eqref{eq:SdW} can be obtained by simple field-space differentiation, that is $\varpi_{\text{SdW}} = {\mathbb{d}} \varsigma \in \Omega^1({\mathcal{A}},{\mathrm{Lie}(\G)})$---notice that for this step it is crucial that the spatial differential operator $\nabla_i$ is field-independent, i.e. independent of the configuration $A\in{\mathcal{A}}$. By lemma \ref{Lemma:exactvarpi} it follows that the SdW horizontal distribution is flat. More explicitly, $\mathbb F_{\text{SdW}} = {\mathbb{d}} \varpi_{\text{SdW}} = {\mathbb{d}}^2 \varsigma =0$. By the Frobenius theorem, a flat distribution is also integrable. For each field-independent function $\sigma: R \to \mathbb R$, ${\mathbb{d}} \sigma = 0$, define the ``constant-value'' hypersurface $\mathcal H_\sigma := \{A:\varsigma(A) = \sigma\}\subset {\mathcal{A}}$. Notice that the invertibility\footnote{See the next footnote.} of the SdW boundary value problem means that every $A$ belongs to one and only one hypersurface $\mathcal H_\sigma$; these hypersurfaces therefore foliate ${\mathcal{A}}$. The SdW horizontality condition $0 = \mathbb i_{\mathbb h} \varpi_{\text{SdW}} = \mathbb h(\varsigma)$ says that $\varsigma$ is constant in the SdW horizontal directions within ${\mathcal{A}}$. As a consequence, the SdW horizontal directions at $A\in{\mathcal{A}}$ coincide with the directions tangent to the appropriate $\mathcal H_\sigma$ through $A$. In other words, if $A\in\mathcal H_\sigma$, then $(H_{\mathbb G})_A = {\mathrm{T}}_A \mathcal H_\sigma$. From this, $H_{\mathbb G} = {\mathrm{T}} (\bigcup_{\sigma} \mathcal H_\sigma)$. This also shows that the foliation $\cal H = \bigcup_\sigma\cal H_\sigma$ is transverse to the vertical foliation $\cal F$. Finally, consider the vanishing parameter $\sigma\equiv 0$. Then, setting $\varsigma = 0$ in \eqref{eq:sdwvarsigma} shows that the configurations lying on $\mathcal H_{0}$ satisfy the Coulomb gauge condition $\nabla^i A_i = 0$, completed---if ${\partial} R\neq\emptyset$---by the boundary condition $A_s=0$. This means that $\mathcal H_0$ is the section of ${\mathcal{A}}$ corresponding to the Coulomb gauge fixing.\footnote{~Seemingly, any spatially constant parameter $\xi=\xi_o\in \mathrm{Lie}(G)$ would do.
This is because constant gauge transformations constitute precisely the stabilizer gauge transformations in the kernel of the Abelian SdW boundary value problem (proposition \ref{prop:Dchi=0}). However, following the discussion above, in order to have a well-defined $\varpi_{\text{SdW}}$ we have (implicitly) modified ${\mathcal{G}}$ by modding-out the stabilizer gauge transformations---i.e. the constant gauge transformations. Hence, from this perspective we have identified all $\xi=\xi_o$ with $\xi=0$, and therefore the Coulomb gauge fixing as defined in the text indeed corresponds to one section of ${\mathcal{A}}$ which crosses all fibres once and only once. See section \ref{sec:charges_EM} for a detailed construction of the appropriately modified gauge group ${\mathcal{G}}$ for electromagnetism.} More generally, the ``constant-value'' hypersurfaces $\mathcal H_\sigma$ generalize Coulomb gauge according to the spatial properties of $\sigma:R\to \mathbb R$. \end{proof} Notice that in the Dirac-Bergmann formalism for constrained systems, $ \varsigma =0$ is the second class constraint associated to the Coulomb gauge fixing. However, not all second class constraints (gauge-fixings) define a connection form, since they must satisfy the restrictive covariance condition \eqref{eq:varsigma}. E.g. even a change in the boundary condition of \eqref{eq:sdwvarsigma} would jeopardize that covariance property. \subsection{Horizontal splitting in phase space}\label{sec:phase space} In the previous section we introduced the configuration space. In this section we will introduce {\it phase space} and {\it matter fields}. Most constructions are immediate extensions of those performed in the previous section and will therefore be only sketched. The YM phase space is defined as the cotangent bundle of the configuration space ${\mathcal{A}}$, and its elements are \begin{equation} (A,E)\in{\mathrm{T}}^*{\mathcal{A}}. \end{equation} The coordinates $(A,E)$ have been chosen so that the tautological 1-form on ${\mathrm{T}}^*{\mathcal{A}}$ reads \begin{equation} \theta_{\text{YM}} = \int \sqrt{g} \,\mathrm{Tr}\big(E^i {\mathbb{d}} A_i \big) \in \Omega^1({\mathrm{T}}^*{\mathcal{A}}), \end{equation} that is so that---interpreting $\theta_{\text{YM}}$ as the {\it off-shell\footnote{``Off shell'' refers to the Gauss constraint, see below.} symplectic potential} of Yang-Mills theory---$E^i(x) = E^{i\alpha}(x)\tau_\alpha$ is the ${\mathrm{Lie}(\G)}$-valued {\it electric field}. As customary in second-order Lagrangian theories, the canonical momentum (a one-form) is related to the configuration velocity (a one-vector) via the kinetic supermetric. This is most succinctly expressed in terms of $\theta_{\text{YM}}$: \begin{equation} \theta_\text{YM} = \int \sqrt{g}\, \mathrm{Tr}( E^i {\mathbb{d}} A_i) = \mathbb G(\dot{\bb A}). \label{eq:GGbbv} \end{equation} This is nothing other than the YM analogue of the usual Legendre transform relating momenta and velocities in particle mechanics, $p_I \d q^I = g_{IJ} \dot q^I \d q^J$. Since under a gauge transformation $E$ transforms in the adjoint representation, the configuration-space gauge symmetry is lifted to phase space as follows: \begin{equation}\label{eq:xihash_Phi} \xi^\#_{(A,E)} = \int ({\mathrm{D}}_i \xi)^\alpha(x) \frac{\delta}{\delta A_i^\alpha(x)} + [E^i,\xi]^\alpha(x) \frac{\delta}{\delta E^{i\alpha}(x)} \in {\mathrm{T}}_{(A,E)}{\mathrm{T}}^*{\mathcal{A}}. \end{equation} As in the previous section, vector fields of this form are called {\it vertical}.
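As a quick check (a sketch; recall that the finite gauge action reads $A \mapsto g^{-1}A g + g^{-1}\d g$ and $E \mapsto g^{-1}E g$), linearizing along $g_\epsilon = \exp(\epsilon\xi)$ reproduces the two components of \eqref{eq:xihash_Phi}:
\begin{equation*}
\frac{\d}{\d\epsilon}\Big|_{\epsilon=0} \big( g_\epsilon^{-1} A_i\, g_\epsilon + g_\epsilon^{-1}{\partial}_i g_\epsilon \big) = {\partial}_i \xi + [A_i,\xi] = {\mathrm{D}}_i\xi, \qquad \frac{\d}{\d\epsilon}\Big|_{\epsilon=0}\, g_\epsilon^{-1} E^i g_\epsilon = [E^i,\xi].
\end{equation*}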
Through their span they locally define an integrable distribution $\tilde V \subset {\mathrm{T}}\T^*{\mathcal{A}}$, and thus a foliation $\tilde {\cal F}$ of ${\mathrm{T}}^*{\mathcal{A}}$, which identifies the pure-gauge directions in phase space (we temporarily introduce tildes to distinguish these spaces from their configuration space analogues). The inclusion of matter can be done in a similar fashion. For definiteness, we consider complex Dirac fermions, $\psi$, valued in the fundamental representation $W$ of the gauge group $G={\mathrm{SU}}(N)$, \begin{equation} \psi^{B,b}\in\Psi = \mathcal C^\infty(\Sigma, \mathbb C^4\otimes W). \end{equation} The conjugate momenta, $\bar\psi$, thus live in\footnote{In a Lagrangian setting, $\bar\psi = i \psi^\dagger \gamma^0$. See the next footnote for details on $\gamma^0$. Here $\mathbb C^4_\ast$ indicates that the action of the Lorentz group on $\bar\Psi$ differs from that over $\Psi$ \cite{WeinbergQFT1}. The details won't be needed. } $\bar\Psi =\mathcal C^\infty(\Sigma, \mathbb C^4_\ast \otimes W^\dagger)$. Under the action of a gauge transformation $g\in{\mathcal{G}}$, $\psi$ and $\bar\psi$ transform as \begin{equation} \psi \mapsto g^{-1}\psi \quad\text{and}\quad \bar\psi \mapsto \bar\psi g. \end{equation} Thus, the $(\psi,\bar\psi)$-components of $\xi^\#$ read \begin{equation} \xi^\#_{|\psi,\bar\psi} = \int (- \xi \psi)^{B,b}(x) \frac{\delta}{\delta \psi^{B,b}(x)} + ( \bar\psi \xi)^{B,b}(x) \frac{\delta}{\delta \bar\psi^{B,b}(x)} \end{equation} where $(\xi \psi)^{B,b}(x) = \xi^\alpha(x) (\tau_\alpha)^b{}_{b'} \psi^{B,b'}(x)$, with $(\tau_\alpha)^b{}_{b'}$ an anti-Hermitian generator of $G$ in the fundamental representation $W$. The charged fermions carry a $\mathrm{Lie}(G)$-current density \begin{equation} J^\mu = (\rho, J^i) \quad\text{with}\quad J^\mu_\alpha = \bar \psi \gamma^\mu \tau_\alpha \psi \end{equation} where $(\gamma^\mu)^{B'}{}_B$ are the Dirac matrices.\footnote{For a metric $g_{ij}$ on $\Sigma$, the anticommutator is $\{\gamma^\mu,\gamma^\nu\} = 2 g^{\mu\nu} = 2\text{diag}(-1, g^{ij})$, i.e. $\gamma^\mu := e_I^\mu \gamma^I$ for $\gamma^I$ the flat-space Dirac matrices and $e_I^\mu$ a local inertial frame, $g_{\mu\nu}e_I^\mu e_J^\nu = \eta_{IJ}$ (see also footnote \ref{fn:setup}). We adopt the following conventions for the $\gamma^I$ \cite{WeinbergQFT1}: $\gamma^0={\tiny -i\bmatr{0}{1}{1}{0}}$, $\gamma^j=-i{\tiny \bmatr{0}{\sigma^j}{-\sigma^j}{0}}$ with $\sigma^j$ the Hermitian Pauli matrices. } It is convenient to introduce the following notation for the {\it total phase space}, \begin{equation} \Phi = {\mathrm{T}}^*{\mathcal{A}} \times (\bar\Psi\times\Psi). \end{equation} Then the (complex) contribution of the Dirac fermions to the {\it total off-shell symplectic potential} \begin{equation} \theta = \theta_{\text{YM}} + \theta_\text{Dirac} \in\Omega^1(\Phi) \end{equation} is: \begin{equation} \theta_\text{Dirac} = - \int \sqrt{g} \,\bar\psi \gamma^0 {\mathbb{d}} \psi \in \Omega^1(\bar\Psi\times\Psi). \end{equation} As on ${\mathcal{A}}$, the total action of gauge transformations on $\Phi$, $\xi^\# = \xi^\#_{(A,E)} + \xi^\#_{|\psi,\bar\psi}$, can be promoted to a field-dependent one. That is, from now on \begin{equation} \xi \in \Omega^0(\Phi, {\mathrm{Lie}(\G)}), \end{equation} with the isomorphism \eqref{eq:bracket_iso} extended to \begin{equation} \llbracket \xi^\# , \eta^\# \rrbracket_{{\mathrm{T}}\Phi} = \big( [\xi,\eta] + \xi^\#(\eta) - \eta^\#(\xi) \big)^\#.
\label{eq:bracket_iso_phsp} \end{equation} ~ Given a connection form on ${\mathcal{A}}$, a connection form can be introduced on $\Phi$ by pullback: \begin{Prop} Denoting by $\pi:\Phi \to {\mathcal{A}}$ the canonical projection from the full phase space to the gauge-potential configuration space ${\mathcal{A}}$, the pullback $\pi^*\varpi$ of the ${\mathcal{G}}$-compatible connection form $\varpi$ onto $\Phi$ defines a connection form on $\Phi$---i.e. it defines a ${\mathrm{Lie}(\G)}$-valued 1-form on $\Phi$ that satisfies the corresponding projection and covariance properties. In particular, $\pi^*\varpi$ defines a horizontal distribution $\tilde H := \ker(\pi^*\varpi)\subset{\mathrm{T}}\Phi$ transverse to the vertical distribution spanned by the $\xi^\#$, also denoted $\tilde V\subset{\mathrm{T}}\Phi$; i.e. $\tilde H \oplus \tilde V = {\mathrm{T}} \Phi$. \end{Prop} \begin{proof} This follows directly from the fact that $A$, $E$, $\psi$, and $\bar \psi$ transform in concert under gauge transformations, together with the fact that $A$ necessarily changes under a gauge transformation (recall that we are here considering irreducible configurations only). \end{proof} There is therefore little use in having different notations for $\varpi$ and its pullback on phase space; we will henceforth denote $\pi^*\varpi$ simply by $\varpi$, and $(\tilde H, \tilde V)$ by $(H, V)$. (For an alternative, of more limited use, to this pullback construction from ${\mathcal{A}}$ to $\Phi$, see the so-called Higgs connection introduced in \cite{GomesHopfRiello}.) We can now turn to the computation of horizontal differentials in $\Phi$. Following the definitions given in the previous sections, as well as equation \eqref{eq:dH_equivariant} for the horizontal differential of horizontal and equivariant forms, it is straightforward to prove that \begin{Prop} The single and double horizontal differentials of $A$, $E$, $\psi$ and $\bar\psi$ are respectively given by \begin{equation} \begin{cases} {\mathbb{d}}_H A = {\mathbb{d}} A - {\mathrm{D}} \varpi\\ {\mathbb{d}}_H E = {\mathbb{d}} E - [E,\varpi] \end{cases} \qquad\text{and}\qquad \begin{cases} {\mathbb{d}}_H \psi = {\mathbb{d}} \psi + \varpi \psi\\ {\mathbb{d}}_H \bar \psi = {\mathbb{d}} \bar\psi - \bar\psi \varpi\label{eq:dd1} \end{cases} \end{equation} and\footnote{In general, for a horizontal and equivariant form $\lambda$, ${\mathbb{d}}_H^2 \lambda^a = - R(\mathbb F)^a{}_b\curlywedge \lambda^b$. See \eqref{eq:dH_equivariant}.} \begin{equation} \begin{cases} {\mathbb{d}}_H^2 A = -{\mathrm{D}} \mathbb F\\ {\mathbb{d}}_H^2 E = - [E,\mathbb F] \end{cases} \qquad\text{and}\qquad \begin{cases} {\mathbb{d}}_H^2 \psi = \mathbb F\psi\\ {\mathbb{d}}_H^2 \bar \psi = - \bar\psi \mathbb F \label{eq:dd2} \end{cases} \end{equation} \end{Prop} If $\varpi$ is flat, then the horizontal differentials assume a particular meaning in terms of dressed fields \cite{Dirac:1955uv, francoisthesis, bagan2000charges}.
This is spelled out in the following definition and proposition: \begin{Def}[Dressed fields]\label{Def:dressing} Assume the existence of a covariant field-space function $h: \Phi \to {\mathcal{G}}$ such that $R_g^*h = h g$ for all $g\in{\mathcal{G}}$; then the following composite fields, called the \emph{dressed fields}, can be defined: \begin{equation} \begin{cases} \hat A = hA h^{-1} + h \d h^{-1} \\ \hat E = h E h^{-1} \end{cases} \qquad\text{and}\qquad \begin{cases} \hat \psi = h\, \psi \\ \hat {\bar \psi} = h^{-1} \bar\psi \end{cases} \qquad (\varpi=h^{-1} {\mathbb{d}} h) \end{equation} In these formulas $h$ is called the \emph{dressing factor}. \end{Def} Then it is straightforward to check the following: \begin{Prop}\label{prop:ddHdressing} Dressed fields can be defined if and only if a flat connection $\varpi=h^{-1}{\mathbb{d}} h$ exists. Moreover, the dressed fields are \emph{gauge invariant} and their differential is related to the horizontal differential through the following: \begin{equation} \begin{cases} {\mathbb{d}} \hat A = h( {\mathbb{d}}_H A)h^{-1} \\ {\mathbb{d}} \hat E = h( {\mathbb{d}}_H E ) h^{-1} \end{cases} \qquad\text{and}\qquad \begin{cases} {\mathbb{d}}\hat \psi = h\, {\mathbb{d}}_H \psi \\ {\mathbb{d}}\hat{\bar\psi} = h^{-1} {\mathbb{d}}_H \bar \psi \end{cases} \qquad (\varpi=h^{-1} {\mathbb{d}} h) \end{equation} \end{Prop} Therefore, whenever the connection is not flat and the dressing construction is not available, one can regard the horizontal differential as the only viable generalization of the dressing construction. This provides a physical intuition for the meaning of $\varpi$ and will be further discussed in section \ref{sec:dressing}. As shown by the following theorem, the dressed-field construction {\it is} available, and indeed quite familiar, in electromagnetism: \begin{Thm}[Coulomb potential and Dirac's dressed electron] In EM, where $\varpi_{\text{SdW}}={\mathbb{d}} \varsigma$ is exact, the SdW dressing of the fields can be defined. Moreover, if $R=\mathbb R^3$ and we assume standard rapid-fall-off boundary conditions for the fields at infinity, the SdW dressed gauge potential $\hat A$ coincides with the gauge potential in Coulomb gauge, whereas the dressed electron $\hat \psi$ coincides with Dirac's dressed electron (cf. section \ref{sec:dressing} for details on Dirac's dressed electron). \end{Thm} \begin{proof} In EM the SdW connection is exact, $\varpi_{\text{SdW}} = {\mathbb{d}} \varsigma$, and $h = e^\varsigma$ defines the SdW dressing factor. Then the expression for the dressed fields simplifies to \begin{equation} \begin{cases} \hat A = A -\d \varsigma \\ \hat E = E \end{cases} \qquad\text{and}\qquad \begin{cases} \hat \psi = e^\varsigma \psi \\ \hat {\bar \psi} = e^{-\varsigma} \bar\psi \end{cases} \qquad (\varpi={\mathbb{d}} \varsigma). \end{equation} The facts that, in $R =\mathbb R^3$, $\hat A$ is the gauge potential in Coulomb gauge and $\hat \psi$ is Dirac's dressed electron are both direct consequences of \eqref{eq:sdwvarsigma}---notice that in EM, the electric field $E$ is already gauge invariant and therefore its dressing is trivial. \end{proof} A thorough discussion (with references) of Dirac's dressing is postponed to section \ref{sec:dressing}, where we will also discuss---from a field-space perspective---a possible generalization of dressed fields to the non-flat setting and in particular to the non-Abelian SdW case. In the above example the dressing factor is spatially nonlocal.
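To make this nonlocality explicit (a sketch with $G=\mathbb R$, as in theorem \ref{thm:EM}, and up to conventions for the coupling constant and factors of $i$): in $R=\mathbb R^3$, with rapidly decaying fields, the Neumann problem \eqref{eq:sdwvarsigma} is solved by the Green's function of the flat Laplacian,
\begin{equation*}
\varsigma(x) = -\frac{1}{4\pi}\int \d^3 y\, \frac{{\partial}^i A_i(y)}{|x-y|},
\end{equation*}
so that the dressing factor $h(x)=e^{\varsigma(x)}$ attached to $\psi(x)$ depends on the gauge potential throughout all of space: this is precisely the nonlocal exponential factor appearing in Dirac's dressed electron.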
Conversely, if the dressing factor can be chosen to be space(time) local, then the passage to dressed fields is just a local field redefinition that completely ``reabsorbs'' the gauge symmetry. In \cite{francoisthesis} this circumstance is interpreted---and we agree---as meaning that a gauge symmetry that can be ``neutralized'' in this way is non-substantial. This is the case e.g. when the gauge symmetry is introduced through a so-called Stückelberg trick, but it is also the case for the Lorentz gauge symmetry in tetrad gravity (here the dressing factor is given by the inverse tetrad) and, with certain subtleties \cite[Sect. 9]{GomesHopfRiello}, in the presence of spontaneous symmetry breaking (here the dressed fields are the fields expressed in unitary gauge). \section{Horizontal splittings and symplectic geometry\label{sec:symred}} This section is dedicated to the study of the symplectic structure of YM theory in the presence of boundaries. In particular, we will study the horizontal/vertical split of the symplectic structure induced by the horizontal/vertical decomposition of the (co)tangent bundle of the total phase space introduced in the previous section. This study was initiated in \cite{GomesRiello2016,GomesHopfRiello} and is here pushed (much) further. Of course, many of the propositions presented in this section are (a rephrasing of) well-known facts. We should point out that ultimately the choice of a $\varpi$---including the SdW one, which in certain respects is a more convenient choice---is entirely fiducial. As described in \cite{AldoNew}, on-shell of the Gauss constraint, one \textit{can} write the physical, i.e. reduced, symplectic form {\it independently} of a choice of $\varpi$. However, the explicit description of the physical degrees of freedom \textit{will} involve a choice of connection $\varpi$. This was to be expected from the standard symplectic duality between the gauge constraints and gauge-fixings. It should also be noticed that the ability to write down the $\varpi$-independent reduced symplectic structure relies on the introduction of superselection sectors and a {\it canonical} completion of the symplectic structure; this completion does not add any new dof. We will briefly review these results in Section \ref{sec:QLSymplRed}, and refer to \cite{AldoNew} for details. \subsection{Horizontal/vertical split of the symplectic structure \label{sec:splitsympl}} Given the total symplectic potential $\theta$ and the horizontal/vertical split of ${\mathrm{T}}\Phi$, we introduce a horizontal/vertical split of $\theta$ itself: \begin{Def}[Horizontal/vertical split of $\theta$] The \emph{horizontal/vertical split} of the off-shell symplectic potential $\theta = \theta_{\text{YM}} + \theta_\text{Dirac}$ with respect to a connection form $\varpi$ is defined as: \begin{equation} \theta = \theta^H + \theta^V \quad\text{where}\quad \begin{dcases} \theta^H := \int_R \sqrt{g} \, \mathrm{Tr}( E^i {\mathbb{d}}_H A_i) -\int_R \sqrt{g}\; \bar\psi \gamma^0 {\mathbb{d}}_H \psi\\ \theta^V := \int_R \sqrt{g} \, \mathrm{Tr}( E^i {\mathrm{D}}_i\varpi) + \int_R \sqrt{g} \;\bar\psi \gamma^0 \varpi \psi \end{dcases} \label{eq:thetaH-def} \end{equation} $\theta^H$ (resp. $\theta^V$) is called the \emph{horizontal} (resp. \emph{vertical}) off-shell symplectic potential.
\end{Def} By construction $ \theta^{ H}(\xi^\#) := \mathbb i_{\xi^\#} \theta^{ H} \equiv 0$ for all $\xi$, and $\theta^V(\mathbb h) \equiv 0$ for all $\mathbb h \in H\subset {\mathrm{T}}\Phi$, hence the horizontal/vertical nomenclature. Although in the above formulas we have explicitly decomposed ${\mathbb{d}} A$ into pure-gauge and horizontal modes, we haven't yet decomposed the different modes of the electric field. \begin{Def}[Radiative/Coulombic decomposition] Given a connection form $\varpi$, define the following functional decomposition of the electric field into \emph{radiative} and \emph{Coulombic} components \begin{equation} E = E_{\text{rad}} + E_{\text{Coul}} \end{equation} through the cotangent dual of the decomposition of $\mathbb X \in {\mathrm{T}}{\mathcal{A}}$ into its horizontal and vertical parts, $E_{\text{rad}}$ being dual to horizontal vectors and $E_{\text{Coul}}$ to vertical ones. \end{Def} In other words, the radiative/Coulombic decomposition is defined by demanding that $ \int \sqrt{g} \,\mathrm{Tr}(E_{\text{rad}}^i\, {\mathrm{D}}_i \xi) \equiv 0 \equiv \int\sqrt{g}\, \mathrm{Tr}(E_{\text{Coul}}^i\, h_i)$, for all $\xi\in{\mathrm{Lie}(\G)}$ and all horizontal vectors $\mathbb h = \int h \frac{\delta}{\delta A}$. Therefore, by definition, $E_{\text{rad}}$ drops out of $\theta^V$ and is the component of the electric field conjugate to ${\mathbb{d}}_H A$; conversely, $E_{\text{Coul}}$ drops out of $\theta^H$ and is (loosely speaking) the component of $E$ conjugate to $\varpi$. In more detail: we use the cotangent dual to the horizontal/vertical decomposition of vectors $\mathbb X = \mathbb h + \xi^\# \in{\mathrm{T}}{\mathcal{A}}$ to decompose the {\it co}vector $\theta = \int \sqrt{g}\, \mathrm{Tr}(E^i {\mathbb{d}} A_i)\in{\mathrm{T}}^*{\mathcal{A}}$ into $\theta^H$ and $\theta^V$---so that by definition $\theta^H(\xi^\#)\equiv 0 \equiv \theta^V(\mathbb h)$. The decomposition of $E$ into $E_{\text{rad}}$ and $E_{\text{Coul}}$ is then a rewriting of the decomposition of $\theta$ in terms of its coordinate components.\footnote{Note that $\theta^H = \int \sqrt{g} \,\mathrm{Tr}(E_{\text{rad}} {\mathbb{d}}_H A) = \int \sqrt{g} \,\mathrm{Tr}(E {\mathbb{d}}_H A) = \int \sqrt{g} \,\mathrm{Tr}(E_{\text{rad}} {\mathbb{d}} A)$, and similarly for $\theta^V$.} Moreover, from the gauge-covariance of the whole construction it follows that $E$, $E_{\text{rad}}$ and $E_{\text{Coul}}$ all transform in the adjoint representation and are therefore equally gauge variant. Indeed, since $\theta$ is a {\it co}vector rather than a vector, what we call its horizontal/vertical decomposition has {\it nothing} to do with a split of $E$ into its pure-gauge and ``physical'' components, as was the case for $\mathbb X = \mathbb h + \xi^\#$. This point is most evident in electromagnetism, where $E$, $E_{\text{rad}}$, and $E_{\text{Coul}}$ are all gauge {\it in}variant and equally ``physical.'' The only place in which the ``pure gauge'' part of $E$ (or of $E_{\text{rad}}$, or of $E_{\text{Coul}}$) is distinguished through a geometric construction is when we build a horizontal variation of $E$ or, dually, the horizontal differential ${\mathbb{d}}_H E$ (or ${\mathbb{d}}_H E_{\text{rad}}$, or ${\mathbb{d}}_H E_{\text{Coul}}$). Regarding notation, we will see that $E_{\text{rad}}$ is a generalization of the transverse electric field of a photon to a finite region, thus the label ``rad,'' which stands for {\it radiative}.
Conversely, $E_{\text{Coul}}$ will be tasked with solving the Gauss constraint within $R$, and for this reason it is labeled ``Coul,'' which stands for {\it Coulombic}. Another convenient way to understand the above definition uses the supermetric $\mathbb G$ to dualize the electric field by introducing an associated field-space vector. Despite the fact that to perform the dualization we will use $\mathbb G$, the following construction holds for {\it any} choice of $\varpi$, not only the SdW one. Following the hint of \eqref{eq:GGbbv}, we can use $\mathbb G$ to convert the definition of $E$ as a cotangent vector to a tangent one: define $\mathbb E \in {\mathrm{T}}{\mathcal{A}}$ to be the field-space vector such that\footnote{The last expression of \eqref{eq:theta_E} has been introduced for notational convenience, even if geometrically imprecise. But the meaning is intuitively clear: $\theta(\mathbb X) \equiv \mathbb i_{\mathbb X} \mathbb G(\mathbb E) \equiv \mathbb G( \mathbb E, \mathbb X)$ for any $\mathbb X\in\mathrm T{\mathcal{A}}$. We also notice that $\mathbb i_{\mathbb X} ({\mathbb{d}}_H A) = \hat H(\mathbb X)$ and $(\mathbb i_{\mathbb X} \varpi)^\# = \hat V(\mathbb X)$ where $\hat H$ and $\hat V$ are the horizontal and vertical projections respectively. } \begin{equation} \theta = \mathbb G(\mathbb E)\equiv\mathbb G(\mathbb E, {\mathbb{d}} A). \label{eq:theta_E} \end{equation} More explicitly, \begin{equation} \mathbb E := \int \sqrt{g}\, g_{ij} E^i \frac{\delta}{\delta A_j} \in {\mathrm{T}}{\mathcal{A}}. \end{equation} With this notation, the radiative/Coulombic split of $E$ can be seen to be defined by the following orthogonality relations: \begin{equation} \begin{dcases} \mathbb G( \mathbb E_{\text{rad}} , \xi^\#) \equiv 0 & \text{for all $\xi$},\\ \mathbb G( \mathbb E_{\text{Coul}}, \hat H(\mathbb X) ) \equiv 0 & \text{for all $\mathbb X$}, \end{dcases} \label{eq:EErad/Coul} \end{equation} where we recall that $\hat H(\mathbb X) := \mathbb X - \varpi(\mathbb X)^\#$ is the $\varpi$-horizontal projection in ${\mathrm{T}}{\mathcal{A}}$. These equations are of course just a rewriting of the dual nature of the decomposition of $E$ relative to that of $\mathbb X \in {\mathrm{T}}{\mathcal{A}}$. See figure \ref{fig:GGE}. \begin{figure}[t] \begin{center} \includegraphics[width=5.5cm]{fig_Erad.png} \hspace{1.5cm} \includegraphics[width=5.5cm]{fig_ECoul.png} \caption{A graphical representation in $\mathrm T{\mathcal{A}}$ of $\mathbb E_{\text{rad}}$ and $\mathbb E_{\text{Coul}}$ as vectors in the $\mathbb G$-orthogonal complements of $V_A$ and $H_A$, respectively. Notice that only with the SdW choice $\varpi = \varpi_{\text{SdW}}$, one has $H_A = V_A^\perp$ and therefore $\mathbb E_{\text{rad}} \in H_A^{\text{SdW}}$ and $\mathbb E_{\text{Coul}} \in V_A$; that is, only with this choice do the pictures on the right and left align---see section \ref{sec:SdWCoul}.} \label{fig:GGE} \end{center} \end{figure} From the first of these equations we readily see that: \begin{Prop}[Radiative electric field] The radiative component of the electric field is (covariant-)divergence-free and fluxless, i.e. \begin{equation} \begin{cases} {\mathrm{D}}_i E_{\text{rad}}^i = 0 & \text{in }R\\ s_i E_{\text{rad}}^i = 0 & \text{at }{\partial} R \end{cases} \label{eq:Erad} \end{equation} \end{Prop} \begin{proof} The proof follows from \eqref{eq:EErad/Coul} and is formally identical to the proof of proposition \ref{prop:SdWhoriz} on the properties of the SdW-horizontal modes of the gauge potential.
\end{proof} Equation \eqref{eq:Erad} reduces the number of local dof of $E_{\text{rad}}$ with respect to $E$ by one (times $\dim(G)$), as required for $E_{\text{rad}}$ to be conjugate to ${\mathbb{d}}_HA$. The remaining degree of freedom is then encoded in $E_{\text{Coul}}$. As exemplified by the second equation of \eqref{eq:EErad/Coul}, the functional properties of $E_{\text{Coul}}$, contrary to those of $E_{\text{rad}}$, are not universal, i.e. they depend on the choice of horizontal distribution, that is of $\varpi$. We will see shortly that the vertical symplectic potential $\theta^V$ is tightly related to the Gauss constraint. It should therefore not come as a surprise that $E_{\text{rad}}$ is completely absent from the Gauss constraint, due to its being divergence-free in the bulk. Moreover, the boundary condition $s_i E^i_{\text{rad}} = 0$ in \eqref{eq:Erad}, which expresses the fluxless property of $E_{\text{rad}}$, already suggests that the Gauss constraint, in a bounded region, should be complemented by a boundary condition involving the electric flux $E_s$. In section \ref{sec:charges} (and especially \ref{sec:Greens}) we will argue how, contrary to $\rho$, the electric flux is {\it not} determined by the field content of the region $R$. This means, in particular, that charged matter can be introduced into $R$ without\footnote{There are caveats to these statements in Abelian theories and more generally at reducible configurations, where a finite number of modes of $E_s$ over ${\partial} R$ is related to as many integrals of $\rho$ over $R$. E.g. in electromagnetism $\int \sqrt{g} \, \rho_{\text{EM}} = \oint \sqrt{h} \, f$. We refer to section \ref{sec:charges} for a discussion.} modifying $E_s$. Following this argument, as well as the analysis of the Gauss constraint performed in \cite{AldoNew}, we are led to consider the value $f$ of $E_s$ as an {\it external} datum which is not on a par with $\rho$. Rather, this external datum defines {\it super-selection sectors} of the theory as restricted to $R$. Hence, {\it given} a functional connection $\varpi$---which allows us to define the radiative/Coulombic split of $E$---and a flux $f$, we introduce the following version of the Gauss constraint (see \cite{AldoNew} for details): \begin{equation} {\mathsf{G}}_f : \qquad \begin{cases} {\mathsf{G}}:= {\mathrm{D}}_i E_{\text{Coul}}^i -\rho \approx 0 & \text{in }R,\\ {\mathsf{G}}_f^{\partial}:=s_i E_{\text{Coul}}^i - f\approx0 & \text{at }{\partial} R. \end{cases} \label{eq:Gauss-Coul} \end{equation} This equation then has a unique solution, as discussed in section \ref{sec:SdWCoul}. The above-mentioned relation between $\theta^V$ and the Gauss constraint (paragraphs following \eqref{eq:Erad}) becomes manifest through an integration by parts: \begin{equation} \theta^V = - \int \sqrt{g} \;\mathrm{Tr}\big( {\mathsf{G}}\, \varpi ) + \oint \sqrt{h}\; \mathrm{Tr}( E_s \varpi ) \approx \oint\sqrt{h}\;\mathrm{Tr}(f\varpi), \label{eq:thetaV} \end{equation} where we have introduced $h_{ab} = (\iota_{{\partial} R}^* g)_{ab}$, the induced metric on ${\partial} R$, and the square root of its determinant $\sqrt{h}$. This shows that the vertical symplectic potential is, on shell of the Gauss constraint, a pure boundary term. We are finally ready to introduce the split of the symplectic {\it form} and thus state our theorem on the horizontal/vertical split of the symplectic structure in the presence of boundaries.
Recall that from the off-shell symplectic potential $\theta = \theta_{\text{YM}} + \theta_\text{Dirac}$, one builds the {\it off-shell symplectic 2-form} by differentiation: \begin{equation} \Omega = \Omega_{\text{YM}} + \Omega_\text{Dirac} = {\mathbb{d}} \theta_{\text{YM}} + {\mathbb{d}} \theta_\text{Dirac} = {\mathbb{d}} \theta, \end{equation} i.e. \begin{equation} \Omega = \int_R \sqrt{g} \,\mathrm{Tr}( {\mathbb{d}} E^i \curlywedge {\mathbb{d}} A_i) -\int_R \sqrt{g} \;{\mathbb{d}} \bar\psi \curlywedge \gamma^0 {\mathbb{d}} \psi. \end{equation} ~ \begin{Def}[Horizontal/vertical split of $\Omega$] The \emph{horizontal/vertical split} of the off-shell symplectic 2-form $\Omega = \Omega_{\text{YM}} + \Omega_\text{Dirac}$ is defined as: \begin{equation} \Omega^H = \Omega^H_{\text{YM}} + \Omega^H_\text{Dirac} := {\mathbb{d}} \theta^H_{\text{YM}} + {\mathbb{d}} \theta^H_\text{Dirac} = {\mathbb{d}} \theta^H \qquad\text{and}\qquad \Omega^{\partial} := {\mathbb{d}} \theta^V \label{eq:OmegaH-def}. \end{equation} $\Omega^H$ (resp. $\Omega^{\partial}$) is called the \emph{horizontal} (resp. \emph{boundary}) symplectic form. \end{Def} Notice that, when referring to $\Omega^H$ and $\Omega^{\partial}$, the use of the adjective ``symplectic'' is technically incorrect, since they have degenerate directions in ${\mathrm{T}}\Phi$. This fact can be emphasized in the case of $\Omega^H$ by rather using the term {\it pre}-symplectic. We also warn the reader that, when referred to $\Omega$, the nomenclature ``horizontal/vertical split'' should not be misinterpreted: the horizontal/vertical decomposition of ${\mathrm{T}}{\mathcal{A}}$ is at the basis of the split---hence its name---but $\Omega^{\partial}$ fails to be purely vertical {\it and} $\Omega^H$ sometimes fails to capture the {\it entirety} of the horizontal components present in $\Omega$. These points are clarified by the following theorem.
\begin{Thm}[Horizontal/vertical split of the symplectic structure]\label{thm:OmegaH-pp} The horizontal/vertical split of the off-shell symplectic potential and off-shell symplectic 2-form read, respectively: \begin{equation} \theta = \theta^H + \theta^V \;\text{where}\; \begin{dcases} \theta^H = \int \sqrt{g} \, \mathrm{Tr}( E^i_{\text{rad}} {\mathbb{d}}_H A_i) -\int \sqrt{g} \;\bar\psi \gamma^0 {\mathbb{d}}_H \psi\\ \theta^V = \int \sqrt{g} \;\mathrm{Tr}\big( -{\mathsf{G}}\, \varpi ) + \oint \sqrt{h}\; \mathrm{Tr}(E_{\text{Coul}}^s \varpi )\\\quad \;\;\approx \oint \sqrt{h}\; \mathrm{Tr}(f \varpi ) \end{dcases} \label{eq:theta_HV} \end{equation} and \begin{equation} \Omega = \Omega^H + \Omega^{\partial}\;\text{where}\; \begin{dcases} \Omega^H = \int \sqrt{g} \, \mathrm{Tr}\big( {\mathbb{d}}_H E_{\text{rad}}^i \curlywedge {\mathbb{d}}_H A_i \big)- \int \sqrt{g} \, \big( {\mathbb{d}}_H \bar\psi \curlywedge \gamma^0 {\mathbb{d}}_H \psi - \mathrm{Tr}( \rho \mathbb F) \big)\\ \Omega^{\partial} = - \int \sqrt{g} \, \mathrm{Tr}\big( \tfrac12 {\mathsf{G}} [\varpi\stackrel{\curlywedge}{,} \varpi] + {\mathbb{d}}_H {\mathsf{G}} \curlywedge \varpi + {\mathsf{G}} \mathbb F \big) \\ \qquad\quad + \oint \sqrt{h} \, \mathrm{Tr}\big( \tfrac12 E_s [\varpi\stackrel{\curlywedge}{,} \varpi] + {\mathbb{d}}_H E_s \curlywedge \varpi + E_s \mathbb F \big)\\ \quad\;\;\approx \oint \sqrt{h} \, \mathrm{Tr}\big( \tfrac12 f [\varpi\stackrel{\curlywedge}{,} \varpi] + {\mathbb{d}}_H f \curlywedge \varpi + f \mathbb F \big) \end{dcases} \label{eq:OmegaH-pp} \end{equation} (In Appendix \ref{app:translation}, we provide a quick bridge to a more common notation, analogous to that used e.g. by Ashtekar and Streubel \cite{AshtekarStreubel} or Lee and Wald \cite{Lee:1990nz}.) \end{Thm} \begin{proof} The proof of \eqref{eq:theta_HV} was given in equations \eqref{eq:thetaH-def} and \eqref{eq:thetaV}. The proof of \eqref{eq:OmegaH-pp} follows from a straightforward albeit tedious calculation which employs the following relations: $\Omega^H := {\mathbb{d}} \theta^H$ \eqref{eq:OmegaH-def}, ${\mathbb{d}}_H^2 A = - {\mathrm{D}}\mathbb F$ and ${\mathbb{d}}_H^2 \psi = \mathbb F \psi $ \eqref{eq:dd2}, as well as \eqref{eq:Erad}; \eqref{eq:FF} is also needed to compute $\Omega^{\partial} = {\mathbb{d}} \theta^V$. \end{proof} {\it In the absence of boundaries}, we come to the following conclusion: \begin{CorT}\label{Lemma:redppR0} In the \emph{absence} of boundaries ${\partial} R = \emptyset$ and on-shell of the Gauss constraint ${\mathsf{G}}\approx0$, the total symplectic form equals the horizontal one, $\Omega\approx \Omega^H$. Therefore, in the absence of boundaries and on shell of the Gauss constraint, $\Omega^H$ is independent of the choice of functional connection $\varpi$ used to build it. \end{CorT} {\it In the presence of boundaries}, on the other hand, even on-shell of the Gauss constraint, the pure-gauge and Coulombic dof {\it fail to fully drop} from the symplectic structure: both $f$ and the boundary value of $\varpi$ survive\footnote{This is tightly related to the introduction of so-called edge-modes \cite{DonnellyFreidel}; cf. the discussion in section \ref{sec:conclusions}.} in $\Omega^{\partial}$, i.e. $\Omega - \Omega^H = \Omega^{\partial} \not\approx 0$. We stress that the boundary value of $\varpi$ is in general a nonlocal function of the fields within $R$, as in the SdW case.
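We also record an elementary but useful observation: since both pieces of the split are defined as exact field-space differentials \eqref{eq:OmegaH-def}, they are automatically closed,
\begin{equation*}
{\mathbb{d}}\, \Omega^H = {\mathbb{d}}^2 \theta^H = 0, \qquad {\mathbb{d}}\, \Omega^{\partial} = {\mathbb{d}}^2 \theta^V = 0.
\end{equation*}
This is to be contrasted with the naive horizontal projection $\Omega(\hat H(\cdot),\hat H(\cdot))$, which, as stated in the corollary below, fails to be closed whenever $\mathbb F\neq0$.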
Notice that, contrary to $\theta^V$, $\Omega^{\partial}$ is {\it not} purely vertical: it features one purely vertical contribution (the first one in \eqref{eq:OmegaH-pp}), one mixed horizontal-vertical contribution (the second one), and---if $\mathbb F \neq 0$---even a {\it purely horizontal} contribution. This has the following consequences: \setcounter{Thm}{1} \begin{CorT}[Implications of ${\partial} R\neq \emptyset$ on $\Omega^H$\label{Cor:312}]~\\ If ${\partial} R\neq \emptyset$: \begin{enumerate}[(i)] \item The horizontal symplectic form $\Omega^H$ coincides with the horizontal projection of $\Omega$, i.e. with $\Omega(\hat H(\cdot),\hat H(\cdot) )$, if and only if $\varpi$ is flat---that is if and only if $\mathbb F=0$; \item The horizontal projection $\Omega(\hat H(\cdot),\hat H(\cdot) )$ is not closed unless $\mathbb F=0$; \item The horizontal symplectic form $\Omega^H$ depends on the choice of functional connection $\varpi$ used to build it. \end{enumerate} \end{CorT} Ultimately, this hints at a deeper fact: in the presence of boundaries, $\Omega^H$ does not provide, by itself, a canonical symplectic structure on the reduced phase space. We will come back to this point in section \ref{sec:QLSymplRed}. We conclude this section with an analysis of the special case in which the functional connection is flat, $\varpi=h^{-1}{\mathbb{d}} h$, as in the case of the SdW connection for EM (see theorem \ref{thm:EM}). Using the dressed field formalism (definition \ref{Def:dressing}), the horizontal/vertical split of the symplectic structure acquires a more transparent physical meaning in terms of a symplectic structure for the dressed fields ($\Omega^H$) and one for the dressing factor and the Gauss constraint ($\Omega^{\partial}$): \begin{CorT} Suppose $\varpi$ is flat, then $\varpi = h^{-1}{\mathbb{d}} h$ and\footnote{The dressed Gauss constraint $\hat{\mathsf{G}}$ has the same functional expression as ${\mathsf{G}}$, with the fields $\phi=(A,E_{\text{Coul}},\psi,\bar\psi)$ replaced by their dressed counterparts $\hat\phi = (\hat A, \hat E_{\text{Coul}},\hat\psi,\hat{\bar\psi})$. As a result $\hat{\mathsf{G}} = h {\mathsf{G}} h^{-1}$.
Similarly for the definition of $\hat E_{\text{rad}} = h E_{\text{rad}} h^{-1}$ and $\hat E_{\text{Coul}} = h E_{\text{Coul}} h^{-1}$.} \begin{equation} \begin{dcases} \theta^H = \int \sqrt{g} \, \mathrm{Tr}( \hat E^i_{\text{rad}}\; {\mathbb{d}} \hat A_i) -\int \sqrt{g} \;\hat{\bar\psi} \gamma^0 {\mathbb{d}} \hat \psi\\ \theta^V = \int \sqrt{g} \;\mathrm{Tr}\big( -\hat {\mathsf{G}}\; h^{-1} {\mathbb{d}} h \big) + \oint \sqrt{h}\; \mathrm{Tr}(\hat E_{\text{Coul}}^s\; h^{-1}{\mathbb{d}} h ) \\ \phantom{\theta^V =} \approx \oint \sqrt{h}\; \mathrm{Tr}(\hat f h^{-1} {\mathbb{d}} h ) \end{dcases} \qquad (\varpi=h^{-1}{\mathbb{d}} h) \label{eq:theta_HV-EM} \end{equation} and \begin{equation} \begin{dcases} \Omega^H = \int \sqrt{g} \, \mathrm{Tr}\big( {\mathbb{d}} \hat E_{\text{rad}}^i \curlywedge {\mathbb{d}} \hat A_i \big)- \int \sqrt{g} \, \big( {\mathbb{d}} \hat{ \bar\psi} \curlywedge \gamma^0 {\mathbb{d}} \hat \psi \;\big)\\ \Omega^{\partial} = - \int \sqrt{g} \, \mathrm{Tr}\big( {\mathbb{d}} \hat {\mathsf{G}} \curlywedge h^{-1} {\mathbb{d}} h \big) + \oint \sqrt{h} \, \mathrm{Tr}\big( {\mathbb{d}} \hat E_s \curlywedge h^{-1} {\mathbb{d}} h \big) \\ \phantom{\Omega^{\partial}=}\approx \oint \sqrt{h} \, \mathrm{Tr}\big( {\mathbb{d}} \hat f \curlywedge h^{-1} {\mathbb{d}} h \big) \end{dcases} \qquad (\varpi=h^{-1}{\mathbb{d}} h) \label{eq:OmegaH-pp-EM} \end{equation} \end{CorT} In EM, $h=e^\varsigma$ and $h^{-1} {\mathbb{d}} h = {\mathbb{d}} \varsigma$; these formulas thus show that the dressing factor $\varsigma$ \textit{is the dof conjugate to the Gauss constraint}. This has a nice interpretation in terms of the Dirac formalism for constrained systems: the choice of ${\mathsf{G}}=0$ as the first-class constraint and of $\varsigma=0$ as the gauge-fixing second-class constraint puts Dirac's matrix of (off-shell) Poisson brackets between the constraints in normal (Darboux) form. In this article, we will not elaborate on this observation any further. \subsection{The radiative/Coulombic split and the SdW connection\label{sec:SdWCoul}} In this brief interlude, we turn to the SdW choice of connection in relation to the radiative/Coulombic split. Since the SdW connection is built out of orthogonality conditions similar to those involved in the split of $E$, the SdW choice leads to a more harmonious formalism. First, the radiative part of the electric field \eqref{eq:Erad} and the SdW-horizontal perturbations of $A$ \eqref{eq:horizontalpert} satisfy the same functional properties, that is, they are both covariantly divergence-free and fluxless. This is of course a consequence of both being {\it de facto} determined by orthogonality to $V={\mathrm{T}}\cal F$, i.e. to the pure-gauge directions in ${\mathcal{A}}$ (figure \ref{fig:GGE}). This agreement of their functional properties is particularly welcome in the Lagrangian context, for then one has, in temporal gauge, that the radiative electric field corresponds to the SdW-horizontal component of the velocity vector\footnote{$\mathbb E_{\text{rad}} := \int \sqrt{g}\, g_{ij} E^i_{\text{rad}} \frac{\delta}{\delta A_j} \in H = V^\perp \subset{\mathrm{T}}{\mathcal{A}}$. } $\mathbb E_{\text{rad}} = \hat H_{\mathbb G} (\dot{\bb A})$---a relationship that holds if and only if one makes use of the SdW notion of horizontality, with or without boundaries. To the extent that there is a parallel between $E_{\text{rad}}$ and ${\mathbb{d}}_\perp A$, a parallel also exists between $E_{\text{Coul}}$ and $\varpi_{\text{SdW}}$.
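As a concrete, purely illustrative aside, the Abelian core of this split can be checked numerically in a few lines. The following sketch assumes a doubly periodic grid, so that ${\partial} R=\emptyset$ and the fluxless condition is vacuous; the grid size, the random field and the FFT-based projectors are our own choices and not part of the formalism. It decomposes a random electric field into a divergence-free (radiative) part and a pure-gradient (Coulombic) part, the decomposition stated covariantly in the proposition below.
\begin{verbatim}
# Abelian illustration of E = E_rad + E_Coul on a doubly periodic
# N x N grid (no boundary), via FFT projectors.
import numpy as np

N = 64
k = 2 * np.pi * np.fft.fftfreq(N)
kx, ky = np.meshgrid(k, k, indexing='ij')
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                      # regularize the zero mode (see below)

rng = np.random.default_rng(0)
E = rng.standard_normal((2, N, N))  # a random electric field E^i(x)

E_hat = np.fft.fft2(E, axes=(1, 2))
div_hat = 1j * (kx * E_hat[0] + ky * E_hat[1])  # Fourier transform of div E

# Coulombic part: E_Coul = grad(phi), with laplacian(phi) = div E
phi_hat = -div_hat / k2
phi_hat[0, 0] = 0.0                 # phi is defined up to a constant
E_coul = np.real(np.fft.ifft2(1j * np.stack([kx, ky]) * phi_hat, axes=(1, 2)))
E_rad = E - E_coul                  # radiative part (the constant zero mode,
                                    # being divergence-free, counts as radiative)

div_rad = 1j * (kx * np.fft.fft2(E_rad[0]) + ky * np.fft.fft2(E_rad[1]))
print(np.max(np.abs(div_rad)))      # ~ 0: E_rad is divergence-free
print(abs(np.sum(E_rad * E_coul)))  # ~ 0: the split is L2-orthogonal
\end{verbatim}
In the presence of boundaries the Coulombic potential is additionally sourced by the flux $f$; this is the content of the boundary value problem \eqref{eq:Coul_pot} below.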
\begin{Prop}[SdW radiative/Coulombic decomposition] Let $\varpi=\varpi_{\text{SdW}}$. Then, $E_{\text{Coul}}$ is the pure-gradient part in the (generalized) Helmholtz decomposition of $E$, denoted \begin{equation}\label{eq:Ecoul} E_{\text{Coul}}^i = g^{ij}{\mathrm{D}}_j \varphi \qquad ({\text{SdW}}) \end{equation} with $\varphi$ the \emph{(SdW-)Coulombic potential}. Expressed in terms of $\varphi$, the Gauss constraint \eqref{eq:Gauss-Coul} then reads \begin{equation} {\mathsf{G}}_f : \quad \begin{cases} {\mathsf{G}}= {\mathrm{D}}^2 \varphi - \rho \approx 0& \text{in }R\\ {\mathsf{G}}^{\partial}_f = {\mathrm{D}}_s \varphi -f \approx 0 & \text{at }{\partial} R \end{cases} \qquad ({\text{SdW}}) \label{eq:Coul_pot} \end{equation} which is another SdW boundary value problem, cf. \eqref{eq:SdW}. \end{Prop} \begin{proof} This directly follows from the formal analogy between \eqref{eq:EErad/Coul} and \eqref{eq:GGvarpi}; cf. proposition \ref{prop:SdWbvp}. \end{proof} We now have all the tools necessary to state the existence and uniqueness of the solution to the Gauss constraint, {\it once a choice of functional connection $\varpi$ is given} \eqref{eq:Gauss-Coul}: \begin{Prop}[Uniqueness of $E_{\text{Coul}}$ \cite{AldoNew}]\label{Prop:uniqueCoul} For any choice of functional connection $\varpi$ and electric flux $f$, the Gauss constraint ${\mathsf{G}}_f = 0$ has one and only one solution $E_{\text{Coul}}=E_{\text{Coul}}(A,\rho,f)$. \end{Prop} \begin{proof} The proof of this statement proceeds in two steps. In the first step, one proves the existence and uniqueness of the solution to the Gauss constraint for the SdW choice of connection, i.e. $\varpi=\varpi_{\text{SdW}}$. This is a consequence of \eqref{eq:Coul_pot} and the general properties of the SdW boundary value problem, see proposition \ref{prop:Dchi=0} (notice that in this case uniqueness holds even at reducible configurations, since we are ultimately interested in $E_{\text{Coul}}$, not $\varphi$). In the second step, one shows that this result for the SdW connection can be used to prove existence and uniqueness for any other choice of connection. For details on the second step, see \cite{AldoNew}. \end{proof} \subsection{Gauge properties of the horizontal/vertical split\label{sec:gaugeprop}} In this section we characterize the properties of $\theta^{H,V}$ and $\Omega^{H,V}$ in relation to gauge transformations. First, however, we characterize the gauge properties of the off-shell symplectic potential $\theta$: \begin{Prop} The following propositions on $\theta$ hold true: \begin{enumerate}[(i)] \item $\theta$ is gauge invariant only under field-\emph{in}dependent gauge transformations $\xi$, ${\mathbb{d}}\xi=0$; \item on-shell of the Gauss constraint ${\mathsf{G}}_f\approx 0$ and in the \emph{absence} of boundaries ${\partial} R = \emptyset$, $\theta$ is gauge invariant under \emph{all} gauge transformations. \end{enumerate} \end{Prop} \begin{proof} Using the gauge transformation properties of $A$, $E$, $\psi$ and $\bar\psi$ (e.g. $\mathbb L_{\xi^\#} A = {\mathrm{D}}\xi$), as well as $[\mathbb L, {\mathbb{d}}] = 0$, an explicit computation shows that \begin{equation} \mathbb L_{\xi^\#} \theta = \int \sqrt{g} \, \mathrm{Tr}(E^i {\mathrm{D}}_i{\mathbb{d}} \xi) + \int \sqrt{g} \, \bar \psi \gamma^0 ({\mathbb{d}} \xi) \psi = - \int \sqrt{g} \,\mathrm{Tr}({\mathsf{G}} {\mathbb{d}} \xi) + \oint \sqrt{h} \,\mathrm{Tr}( E_s {\mathbb{d}} \xi).
\label{eq:Lxitheta} \end{equation} The two statements follow from the formula above, upon imposing, respectively, ${\mathsf{G}}\approx 0$ together with ${\partial} R =\emptyset$ for (\textit{ii}), and ${\mathbb{d}} \xi = 0$ for (\textit{i}). The latter case gives: \begin{equation} \mathbb L_{\xi^\#} \theta = 0 \qquad ({\mathbb{d}} \xi = 0) \label{eq:Ltheta=0} \end{equation} Notice that since ${\mathbb{d}}$ and $\mathbb L$ commute, this implies $\mathbb L_{\xi^\#} \Omega= 0$ if ${\mathbb{d}} \xi = 0$. \end{proof} Ultimately, the reason \eqref{eq:Ltheta=0} holds is that, in YM theory, the conjugate momentum to $A$ transforms covariantly, rather than as a connection. Thus, the fact that a polarization of the symplectic potential exists such that \eqref{eq:Ltheta=0} holds is a property of Yang-Mills theory not shared by either $BF$ or Chern-Simons theories.\footnote{See also \cite{Mnev:2019ejh}, where the {\it failure} to satisfy an extended analogue of the above equation plays a role in the BV-BFV derivation of the Chern-Simons edge theory. \label{fnt:CS-Michele}} This property will be implicitly at the root of much of the following analysis. From \eqref{eq:Ltheta=0} one can readily deduce the following corollary, characterizing the Hamiltonian flow of a gauge transformation with an eye to the field-dependence of the gauge transformation involved. \begin{CorP}\label{Cor3.7} The following propositions hold true: \begin{enumerate}[(i)] \item Off shell of the Gauss constraint (and irrespective of boundaries), only field-{\it in}dependent gauge transformations $\xi$, such that ${\mathbb{d}} \xi = 0$, have a Hamiltonian generator $H_\xi$ with respect to $\Omega = {\mathbb{d}} \theta$. This generator, up to a field-space constant, is given by \begin{equation} H_\xi := \theta(\xi^\#) = \theta^V(\xi^\#) \qquad ({\mathbb{d}} \xi = 0); \label{eq:Hxi} \end{equation} \item In the \emph{absence} of boundaries ${\partial} R = \emptyset $, $H_\xi$ is a smearing of the Gauss constraint and therefore vanishes on shell of the Gauss constraint, $H_\xi \approx 0$; \item In the \emph{presence} of boundaries ${\partial} R \neq \emptyset $, $H_\xi$ is \emph{not} a smearing of the Gauss constraint, and generally fails to vanish on shell of the Gauss constraint; indeed, in this case, $H_\xi \approx \oint \sqrt{h}\,\mathrm{Tr}( f \xi)$. \end{enumerate} \end{CorP} \begin{proof} Application of Cartan's formula $\mathbb L_{\xi^\#} = \mathbb i_{\xi^\#} {\mathbb{d}} + {\mathbb{d}} \mathbb i_{\xi^\#} $ to the left-most expression in \eqref{eq:Lxitheta}, together with the definition $\Omega = {\mathbb{d}} \theta$, gives \begin{equation} \mathbb i_{\xi^\#} \Omega + {\mathbb{d}} \theta(\xi^\#) = \mathbb L_{\xi^\#} \theta = - \int \sqrt{g}\, \mathrm{Tr}({\mathsf{G}} {\mathbb{d}} \xi) + \oint \sqrt{h} \,\mathrm{Tr}( E_s {\mathbb{d}} \xi). \end{equation} Off shell of the Gauss constraint, the right-hand side is exact---and actually vanishes---only if ${\mathbb{d}} \xi = 0$, irrespective of the presence of boundaries. Also, from the remark below \eqref{eq:thetaH-def}, it is clear that $\theta(\xi^\#) \equiv \theta^V(\xi^\#)$. Hence,\footnote{In the following, to remind the reader which equations are subject to the conditional ${\mathbb{d}} \xi = 0$, we will explicitly include it in parentheses.} \begin{equation} 0 = \mathbb L_{\xi^\#} \theta = \mathbb i_{\xi^\#} \Omega + {\mathbb{d}} H_\xi \qquad ({\mathbb{d}} \xi = 0). \label{eq:Ltheta=0_v2} \end{equation} This proves (\textit{i}).
To prove (\textit{ii-iii}), it is enough to write $H_\xi$ explicitly, starting from the expression \eqref{eq:thetaV} for the vertical symplectic form: \begin{align} H_\xi &= \int \sqrt{g}\,\mathrm{Tr}( E^i {\mathrm{D}}_i \xi + \rho \xi) \notag\\ &=- \int\sqrt{g}\, \mathrm{Tr}( {\mathsf{G}} \xi ) + \oint \sqrt{h}\; \mathrm{Tr}(E_s \xi) \approx \oint \sqrt{h}\; \mathrm{Tr}(f \xi). \label{eq:Hxi-explicit} \end{align} This concludes the proof. \end{proof} (This corollary provides an explicit answer to the question of why one should introduce field-dependent gauge transformations at all: field-dependent gauge transformations serve as a diagnostic tool to detect the presence of spacetime boundaries from a geometric analysis within field-space. Of course, a more abstract answer to this same question is that arbitrary vertical vector fields---aka field-dependent gauge transformations---are natural geometric objects on field space.) Heuristically, this corollary emphasizes once again that the dof contained in $H_\xi$ are precisely those dof whose Hamiltonian flow generates translations in the pure-gauge part of the fields, that is (loosely speaking) $\varpi$. Conversely, the following two results confirm that the dof contained in $\theta^H$ play no role in the flow equation along the pure-gauge directions \eqref{eq:Ltheta=0_v2}: \begin{Prop}[Gauge properties of $\Omega^H$]\label{prop:gaugeOmegaH} The horizontal symplectic form $\Omega^H := {\mathbb{d}} \theta^H$ is horizontal and gauge-{\it invariant}, i.e. \emph{basic}, and can be expressed as $\Omega^H = {\mathbb{d}}_H \theta^H$. \end{Prop} \begin{proof} The first part of the proposition is another consequence of \eqref{eq:Ltheta=0}---together with the equivariance of ${\mathbb{d}}_H A$ \eqref{eq:LddHA} \cite{GomesRiello2016,GomesHopfRiello}. Indeed, these two equations imply that $\theta^H$ is itself \textit{basic}:\footnote{The second equation follows from \eqref{eq:LddHA} and an analogous formula for ${\mathbb{d}}_H\psi$ which can be deduced from \eqref{eq:dH_equivariant}. For more explicit details see equation (6.29) in \cite{GomesHopfRiello}.} \begin{equation} \mathbb i_{\xi^\#} \theta^H = 0 \quad\text{and}\quad \mathbb L_{\xi^\#} \theta^H = 0. \label{eq:LthetaH=0} \end{equation} Notice that both of these equations---contrary to \eqref{eq:Ltheta=0}---hold for field-{\it dependent} $\xi$'s as well. Because $\theta^H$ is basic, using the result \eqref{eq:dH_equivariant} on the differential of horizontal and equivariant forms, it is immediate to see that ${\mathbb{d}}_H \theta^H = {\mathbb{d}} \theta^H$ and hence that $\Omega^H$ is also basic. Crucially, these results hold\footnote{Besides the previous abstract argument, an explicit, albeit non-illuminating, proof of the right-most equality can be found in appendix B.2 (equation 109) of \cite{GomesRiello2016}.} for {\it any} $\varpi$, even when $\mathbb F\neq 0$. \end{proof} \begin{CorP}[A trivial flow for gauge transformations] With respect to the horizontal symplectic structure $\Omega^H$, gauge transformations have a \emph{trivial} Hamiltonian flow. \end{CorP} \begin{proof} Application of Cartan's formula to \eqref{eq:LthetaH=0} gives \begin{equation} 0= \mathbb L_{\xi^\#} \theta^H = \mathbb i_{\xi^\#} \Omega^H + {\mathbb{d}} \theta^H(\xi^\#). \end{equation} This flow equation can be called trivial, because each of the two terms in the right-most formula vanishes {\it independently} (even if ${\mathbb{d}} \xi \neq 0$).
\end{proof} \subsection{\label{sec:QLSymplRed} Quasilocal symplectic reduction} The results presented in this section are discussed and proved in greater detail in \cite{AldoNew}. We briefly review them here for completeness, but they will not be needed in the following. As proved in the previous sections, the horizontal symplectic form $\Omega^H$ is basic, i.e. both horizontal and gauge-invariant. As a consequence it can be unambiguously projected down to a 2-form $\Omega^H_\text{proj}$ on the reduced, on-shell phase space $\Phi//{\mathcal{G}}$.\footnote{Here $\Phi//{\mathcal{G}}$ denotes the symplectic reduction of $\Phi$, which requires both going on-shell of the Gauss constraint and modding out gauge transformations.} Moreover, since $\Omega^H$ is closed, $\Omega^H_\text{proj}$ is also closed. However, for it to define a {\it symplectic} structure on $\Phi//{\mathcal{G}}$, $\Omega^H_\text{proj}$ would need to be non-degenerate as well. It turns out that, {\it in the presence of boundaries}, this is not the case. Physically, this is simple to understand: $\Omega^H_\text{proj}$ fails to provide a symplectic structure for the {\it Coulombic} dof. The reason why this does not happen in the absence of boundaries is that $E_{\text{Coul}}$ is fully determined by the matter degrees of freedom, and therefore does not need to independently appear in the symplectic structure. However, in the presence of boundaries, $E_{\text{Coul}}$ is determined by $\rho$ {\it as well as} $f$ \eqref{eq:Gauss-Coul}. Thus, loosely speaking, what is missing in $\Omega^H_\text{proj}$ is a symplectic structure for the fluxes $f$. In sum, not only does $\Omega^H_\text{proj}$ depend on the choice of $\varpi$ (corollary \ref{Cor:312}), but it also fails to be non-degenerate (and therefore symplectic). Both these problems can be solved in one stroke by resorting to the concept of (covariant) superselection sectors. In the Abelian case, this means simply that one ``stratifies'' the reduced phase space $\Phi//{\mathcal{G}}$ by subspaces at fixed value of $f$. Notice that in the Abelian case $f$ is gauge invariant and therefore a well-defined quantity on the reduced phase space. As a result, within each superselection sector, $E_{\text{Coul}}$ is also completely fixed by the matter dof and we are therefore in a situation similar to that of the case without boundary. Thus, in the Abelian case, although $\Phi//{\mathcal{G}}$ is not symplectic, each superselection sector is. In the presence of field-space curvature, the appropriate symplectic structure here is not $\Omega^H_\text{proj}$ but rather the projection of $\Omega(\hat H, \hat H) \approx \Omega^H + \oint \sqrt{h}\, \mathrm{Tr}( f\mathbb F)$---which is now closed within a superselection sector since there ${\mathbb{d}} f \equiv 0$ (and ${\mathbb{d}}\mathbb F\equiv 0$ by the Bianchi identity; cf. corollary \ref{Cor:312}). One can show that the resulting symplectic structure is also independent of $\varpi$. In the non-Abelian case, fixing $f$ would be tantamount to breaking the gauge symmetry at the boundary. Therefore the best one can do is to fix $f$ up to gauge, i.e. demand that $f$ belongs to the set $[f] = \{\, g^{-1} f g \,:\, g\in{\mathcal{G}} \,\}$. The restriction of $\Phi$ to those configurations on-shell of the Gauss constraint with $f\in[f]$ is called a {\it covariant} superselection sector.
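To see in the simplest possible terms how the flux datum enters the determination of $E_{\text{Coul}}$, the following one-dimensional Abelian sketch may help (it is illustrative only: the region, the charge profile and the flux datum are arbitrary choices of ours). In one dimension the radiative part is absent, and integrating the Gauss constraint out from the boundary datum determines $E_{\text{Coul}}$ from $\rho$ and $f$, with the integrated Gauss law fixing the flux at the other end of the region.
\begin{verbatim}
# R = (0, 1); E_Coul' = rho, with boundary datum E_Coul(0) = -f0
# (the outward normal at x = 0 is -x_hat, so f0 = -E_Coul(0)).
import numpy as np
from scipy.integrate import trapezoid, cumulative_trapezoid

N = 1000
x = np.linspace(0.0, 1.0, N)
rho = np.sin(np.pi * x) ** 2        # an arbitrary charge density in R
f0 = 0.2                            # superselected flux datum at x = 0

E_coul = -f0 + cumulative_trapezoid(rho, x, initial=0.0)

f1 = E_coul[-1]                     # induced flux at x = 1
print(np.isclose(trapezoid(rho, x), f0 + f1))   # True: int rho = oint f
\end{verbatim}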
However, fixing a {\it covariant} superselection sector is not enough for the Gauss constraint to fully fix $E_{\text{Coul}}$ in terms of the matter field: $f$ can still be varied within $[f]$, even at {\it fixed} $(A,E_{\text{rad}}, \psi)$. These transformations, which vary $f$ within $[f]$ while leaving the other fields fixed, are called {\it flux rotations} and are {\it physical} transformations (gauge transformations would have to act uniformly on the other fields as well). Therefore, loosely speaking, to define a symplectic structure over the (gauge-reduced) covariant superselection sector, one needs to add a symplectic structure for the superselected fluxes $f\in[f]$. This can be done in a canonical manner, by realizing that $[f]$ is essentially a (co)adjoint orbit of ${\mathcal{G}}$ and by resorting to the canonical Kirillov--Kostant--Souriau (KKS) symplectic structure on coadjoint orbits. A properly constructed horizontal variation of the KKS symplectic structure over the fluxes, $\omega^H_\text{KKS}$, can then be added to $\Omega^H$. The resulting 2-form $\Omega^H_f = \Omega^H + \omega^H_\text{KKS}$ is basic, closed, and projects to a non-degenerate symplectic structure within a reduced covariant superselection sector. The resulting symplectic structure is also independent of the choice of $\varpi$. We refer to the procedure of adding to $\Omega^H$ the canonically constructed symplectic structure on the fluxes, $\omega^H_\text{KKS}$, as the ``canonical completion'' of the symplectic structure $\Omega^H$. We call it a completion of the symplectic structure---as opposed e.g. to an extension of the phase space (à la ``edge mode'')---because this procedure fixes the degeneracy of $\Omega^H_\text{proj}$ over $\Phi//{\mathcal{G}}$ without enlarging the space $\Phi//{\mathcal{G}}$, that is without adding any extra degree of freedom to the phase space. Mathematically, this procedure is closely related to performing the Marsden--Weinstein symplectic reduction \cite{MW1974} not on the pre-image of the zero-section of the momentum map, but on the pre-image of a coadjoint orbit of the momentum map: indeed, from corollary \ref{Cor3.7} the relevant momentum map is $H:\xi\mapsto H_\xi$, with $H_\xi=\int\sqrt{g}\,\mathrm{Tr}(E^i{\mathrm{D}}_i\xi + \rho \xi) \approx\oint \sqrt{h}\,\mathrm{Tr}(f\xi)$. The crucial subtlety is that one still wants the Gauss constraint strictly imposed in the bulk, which means that one focuses on non-zero coadjoint orbits of $H$ that are, so to say, concentrated at the boundary only. One last remark: although the (completed) reduced symplectic form is independent of a choice of $\varpi$ (analogously, independent of a choice of what one could call a ``covariant perturbative gauge-fixing''), the basis in which one describes the physical dof \textit{will} depend on that choice. In particular, which electric degrees of freedom are {\it precisely} coordinatized by $f$ through \eqref{eq:Gauss-Coul} depends on the choice of $\varpi$. This is important, for example, when considering how (on-shell of the Gauss constraint) a ``flux rotation'' alters the bulk electric field $E = E_{\text{rad}} + E_{\text{Coul}}$. Flux rotations, for different choices of $\varpi$ and for the same value of $f$, will have different effects on the bulk electric field (as the ``meaning'' of $f$, i.e. the component of the electric field that it coordinatizes, changes along with these choices).
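For readers unfamiliar with the KKS structure, the following finite-dimensional toy sketch may be useful. It computes the KKS 2-form on a coadjoint orbit of ${\mathrm{SU}}(2)$ (a sphere in $\mathfrak{su}(2)^*\cong\mathbb R^3$), checking that the pairing is independent of the chosen representatives and proportional to the area form of the orbit; it is only a caricature of the orbits $[f]$ used above, and all conventions and normalizations are ours.
\begin{verbatim}
# KKS 2-form on a coadjoint orbit of SU(2). Identify su(2)* with R^3:
# orbits are spheres |mu| = r, tangent vectors at mu are ad*_X mu = X x mu,
# and omega_mu(X x mu, Y x mu) = <mu, [X, Y]> = mu . (X x Y).
import numpy as np

rng = np.random.default_rng(1)
mu = rng.standard_normal(3)              # a point on the orbit of radius |mu|
X, Y = rng.standard_normal(3), rng.standard_normal(3)

u, v = np.cross(X, mu), np.cross(Y, mu)  # two tangent vectors at mu
omega = np.dot(mu, np.cross(X, Y))       # the KKS pairing

# well defined: shifting X along mu changes neither the tangent vector
# nor the value of omega
X2 = X + 0.7 * mu
print(np.allclose(np.cross(X2, mu), u),
      np.isclose(np.dot(mu, np.cross(X2, Y)), omega))

# omega equals mu . (u x v) / |mu|^2, i.e. 1/|mu| times the induced area
# form of the sphere: closed (top degree) and nondegenerate away from mu = 0
print(np.isclose(omega, np.dot(mu, np.cross(u, v)) / np.dot(mu, mu)))
\end{verbatim}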
\section[Charges]{\label{sec:charges}Charges\protect\footnote{In this section we shall rectify some statements made in \cite{GomesHopfRiello}. In particular, contrary to what we had assumed there, the horizontal symplectic form is {\it not} invariant under charge transformations $\chi^\#$. Here, we will discuss the origin and consequences of the important obstructions to this statement, which had hitherto been missed.}} In the previous sections we have established that dynamical quantities in the quasi-local gauge-reduced phase space---which are by definition gauge invariant---are encoded in the horizontal symplectic structure $\Omega^H$. We have also noticed that the generators of gauge transformations, $H_\xi = \int \sqrt{g}\,\mathrm{Tr}( {\mathsf{G}}_f \xi) \approx \oint \sqrt{h} \, \mathrm{Tr}(f\xi) $, are encoded rather in the remaining part of the symplectic structure, $\Omega^{\partial}$ \eqref{eq:Hxi}. This means that the gauge generators $H_\xi$, which are the (naive) Noether charges for the gauge symmetry, have in general no bearing on the radiative degrees of freedom in the bulk of $R$. Shortly, we will argue that these charges do not encode any particular conservation laws. (These facts notwithstanding, these charges still encode information on the $f$-superselection sector, which is important physical information; notice, however, how this statement has to be qualified: in non-Abelian theories, neither $f$ nor any of the $H_\xi$ is gauge invariant and therefore observable as such.) In this section, we are going to clarify these statements, argue that one needs reducible configurations to obtain a gauge-invariant set of charges that satisfy a Gauss law as well as appropriate conservation laws \cite{AbbottDeser, Barnich, DeWitt_Book}, and discuss how these charges are related to certain geometric features of field space and the kernel of the SdW boundary value problem. (Notice that we draw a distinction between the Gauss constraint, which is an elliptic differential equation ${\mathrm{D}} E - \rho = 0$, and the (integrated) Gauss law, which is an integral relation between the total charge contained in a region and the total electric flux through its boundary, e.g. in electromagnetism $\int \rho = \oint f$.) At the end of this section, we will briefly comment on the consequences of these observations for the symplectic flow of these charges. \subsection{Reducible configurations: an overview\label{sec:red_basics}} At a configuration $ A \in {\mathcal{A}} $, consider the infinitesimal gauge transformations $\chi_{ A}\in{\mathrm{Lie}(\G)}$ such that \begin{equation} \delta_{\chi_{ A}} A \equiv {\mathrm{D}} \chi_{ A} = 0. \label{eq:reducible_def} \end{equation} If a $\chi_A \neq 0$ exists, then $ A_i$ is said to be {\it reducible} and $\chi_{ A}$ is called a {\it reducibility parameter} or {\it stabilizer}. The stabilizers $\chi_{ A}$ depend on the global properties of $A_i$ and constitute a finite-dimensional Lie algebra (possibly a zero-dimensional one). Denote this Lie algebra $\mathrm{Lie}({\cal I}_A)$, where ${\cal I}_A\subset {\mathcal{G}}$ is the stabilizer or isotropy group of $A$, i.e. the subgroup of ${\mathcal{G}}$ composed of the elements $h\in{\mathcal{G}}$ such that $A^h = A$. The isotropy group is a covariant notion, in the sense that \begin{equation} \mathcal{I}_{A^g} = g^{-1}\mathcal{I}_A g.
\label{eq:IA} \end{equation}% In general, $\mathcal{I}_A$ is {\it not} a normal subgroup of ${\mathcal{G}}$, and ${\mathcal{G}}_A := {\mathcal{G}}/\mathcal{I}_A$ is only a (right) quotient and not a group. Infinitesimally, $\mathrm{Lie}(\mathcal{I}_A)$ is a sub-Lie-algebra but generally not an ideal of ${\mathrm{Lie}(\G)}$, and therefore $\mathfrak{G}_A := {\mathrm{Lie}(\G)}/\mathrm{Lie}(\mathcal{I}_A)$ is only a quotient of vector spaces and {\it not} a Lie algebra. We will indicate elements of $\mathfrak{G}_A$ by $[\xi]_A\equiv[\xi + \chi_A]_A$ or, more often, just $[\xi]$. Since at $A\in {\mathcal{A}}$, $\chi_A^\# \equiv 0$, one has that $[\xi]^\#_A = \xi^\#_A$ is a well-defined vertical vector in ${\mathrm{T}}_A{\mathcal{A}}$. In non-Abelian theories, reducible configurations form a meagre set,\footnote{A \textit{meagre} set is a countable union of nowhere-dense sets; in particular, its complement is everywhere dense: roughly, an arbitrarily small perturbation takes one out of a meagre set. Here, reducible configurations form a meagre set according to the standard field-space metric topology on $\mathcal{A}$ (the Inverse-Limit-Hilbert topology \cite{YangMillsSlice, kondracki1983}, see also \cite{fischermarsden}).} in the same way as those spacetime metrics which admit non-trivial Killing vector fields are ``extremely rare'' (i.e. form a meagre set). In this respect, Abelian theories, such as electromagnetism, are an exception: {\it all} their configurations admit the constant gauge transformations as reducibility parameters, e.g. $\chi_\text{EM}=const \in i \mathbb R$. This means that ${\mathcal{G}}$ does not act freely on ${\mathcal{A}}$, and therefore ${\mathcal{A}}$ cannot be a bona-fide (infinite-dimensional) principal fibre bundle, since it lacks the necessary, homogeneous local product structure: fibres associated to reducible configurations are not isomorphic to ${\mathcal{G}}$---see figure \ref{fig8}. \begin{figure}[t] \begin{center} \includegraphics[scale=0.17]{fig_new_3} \caption{ In this representation ${\mathcal{A}}$ is the page's plane and the orbits are given by concentric circles. The field $A$ is generic, and has a generic orbit, $\mathcal{O}_A$. The field $\tilde A$ has a nontrivial stabilizer group (i.e. it has non-trivial reducibility parameters), and its orbit $\mathcal{O}_{\tilde A}$ is of a different dimension than $\mathcal{O}_A$. The projection of $\tilde A$ on ${\mathcal{A}}/{\mathcal{G}}$ therefore sits at a qualitatively different point than that of $A$ (a lower-dimensional stratum of ${\mathcal{A}}/{\mathcal{G}}$). Exclusion of the reducible configuration $\tilde A$ gives rise to a fibre bundle structure over ${\mathcal{A}}\setminus \{\tilde A\}$; here $\sigma$ represents a section of ${\mathcal{A}}\setminus \{\tilde A\}$. Locally, the concept of section can be generalized to include reducible configurations such as $\tilde A$, thus leading to the notion of ``slice''. This is briefly reviewed in appendix \ref{app:slice}. } \label{fig8} \end{center} \end{figure} However, it turns out that ${\mathcal{A}}$ can be decomposed into ``strata'' defined by an increasing degree of symmetry, each of which does have (at least locally) a product structure. Indeed, a {\it slice theorem} shows that ${\mathcal{A}}$ is regularly stratified by the action of ${\mathcal{G}}$, and in particular that all the strata are smooth submanifolds of ${\mathcal{A}}$.
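To make the meagreness statement tangible, one can count reducibility parameters in a finite-dimensional caricature. On a lattice, a covariantly constant $\chi$ must commute with every holonomy, so $\dim \mathrm{Lie}(\mathcal{I}_A)$ equals the dimension of the common commutant of the holonomies in the Lie algebra. The following sketch (illustrative only: the group $G={\mathrm{SU}}(2)$, the random holonomies and the perturbation size are our choices) exhibits the three typical situations, namely the vacuum, a single generic loop, and a generic configuration with two independent loops, and shows that stabilizers are destroyed by arbitrarily small generic perturbations.
\begin{verbatim}
# dim Lie(I_A) on a lattice = dim of the common commutant of the holonomies,
# i.e. of { X in su(2) : h X h^dagger = X  for all holonomies h }.
import numpy as np
from scipy.linalg import expm, null_space

paulis = [np.array([[0, 1], [1, 0]], complex),
          np.array([[0, -1j], [1j, 0]], complex),
          np.array([[1, 0], [0, -1]], complex)]
basis = [1j * p / 2 for p in paulis]            # a basis of su(2)

def stabilizer_dim(holonomies):
    """Dimension of the common commutant of the holonomies in su(2)."""
    rows = []
    for h in holonomies:
        # matrix of the linear map X -> h X h^dagger - X in the su(2) basis
        rows.append(np.array(
            [[np.real(np.trace((h @ Xj @ h.conj().T - Xj) @ Xk.conj().T))
              for Xj in basis] for Xk in basis]))
    return null_space(np.vstack(rows)).shape[1]

rng = np.random.default_rng(2)
rand_su2 = lambda: expm(sum(c * X for c, X in zip(rng.standard_normal(3), basis)))

print(stabilizer_dim([np.eye(2)] * 2))           # 3: the vacuum, I_A ~ G
print(stabilizer_dim([rand_su2()]))              # 1: a single generic loop
print(stabilizer_dim([rand_su2(), rand_su2()]))  # 0: generic, irreducible

# reducible configurations are fine-tuned: a tiny generic kick kills chi_A
h1, h2 = expm(0.3 * basis[2]), expm(0.7 * basis[2])      # an abelian pair
print(stabilizer_dim([h1, h2]))                          # 1
print(stabilizer_dim([h1 @ expm(1e-4 * basis[0]), h2]))  # 0
\end{verbatim}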
Given the gauge-covariance of the constructions involved in the slice theorem, the stratification of field space survives the gauge-reduction, and is thus geometrically reflected in the structure of the reduced field space. This will give us the opportunity to build gauge-invariant (sets of) global charges. The kernel of the SdW boundary value problem at $A\in{\mathcal{A}}$ (cf. \eqref{eq:SdW} and \eqref{eq:Coul_pot}) is provided precisely by the reducibility parameters of $A$ (see proposition \ref{prop:Dchi=0}). This ties the SdW kernel to certain geometrical properties of ${\mathcal{A}}$: a fact which has two main consequences. First, it means that the SdW kernel is trivial almost everywhere in ${\mathcal{A}}$ and therefore that the SdW boundary value problem has generically a unique solution. Second, it means that at a generic non-Abelian configuration there is no integrated Gauss law nor a gauge-invariant notion of conserved charges. The goal of this section is to analyze and explain these statements and show how one can leverage the relation between the SdW kernel and the geometry of ${\mathcal{A}}$. We will take electromagnetism as the paradigmatic Abelian theory. \subsection{Green's functions}\label{sec:Greens} Let us consider the properties of the Green's functions $G_{\alpha,x}(y)$ of the SdW boundary value problem entering the definition of the radiative degrees of freedom as well as of the Coulombic ones. For definiteness and future convenience, we will focus on the example of the Coulombic degrees of freedom. The Green's functions $G_{\alpha,x}(y)$ are defined by the following boundary value problem:\footnote{Here, $\delta_x(y)\equiv \delta(x,y)$ is a Dirac delta distribution. The notation is meant to emphasize the index structure of $\mathrm{Lie}({\mathcal{G}})$.} \begin{equation} \begin{dcases} {\mathrm{D}}^2 G_{\alpha,x}(y) = \tau_\alpha \delta_x(y) & \text{in }R,\\ {\mathrm{D}}_s G_{\alpha,x}(y) = 0 & \text{at }{\partial} R. \end{dcases} \label{eq:Green} \end{equation} Physically, this choice of (covariant) Neumann boundary conditions corresponds to the demand that the charged perturbation inserted at $x$ does \textit{not} contribute to the flux $f$ at ${\partial} R$. In EM, this choice of boundary conditions is inconsistent and should be amended e.g. by demanding that it creates a constant flux compatible with the integrated Gauss law.\footnote{In EM one possible natural boundary condition is ${\partial}_s G = 1/\text{Vol}({\partial} R)$ with $\text{Vol}({\partial} R)$ the volume of the region's boundary ${\partial} R$, whereas in YM at a configuration $\tilde A$ with a single reducibility parameter $\chi$, the following boundary condition, constant in $y$, plays a similar role: ${\mathrm{D}}_s G_{\alpha,x} = \mathrm{Tr}(\chi(x) \tau_\alpha)/||\chi||^2_{{\partial} R}$.} This is because in EM all configurations have as a stabilizer $\chi_\text{EM}=const$. However, as we discussed above, {\it generic} non-Abelian configurations are irreducible and possess no such stabilizer: therefore at these configurations there is {\it no integrated Gauss law} that the Green's function should respect.
Indeed, the extension of the (Abelian) integrated Gauss law $\int \rho = \int \nabla_iE^i = \oint f$ to the non-Abelian context would be \begin{equation} \label{eq:no_charge} \int \sqrt{g}\,\mathrm{Tr}( \tau_\alpha \rho) \approx \int\sqrt{g}\, \mathrm{Tr}( \tau_\alpha {\mathrm{D}}_i E^i) = -\int\sqrt{g}\, \mathrm{Tr}( E^i {\mathrm{D}}_i \tau_\alpha) + \oint \sqrt{h}\,\mathrm{Tr}(\tau_\alpha f), \end{equation} but the bulk term on the rightmost side vanishes only if $A$ is reducible and $\tau_\alpha$ is replaced by a reducibility parameter. Notice that reducibility parameters are rigid, i.e. their value at one point determines their value everywhere (they solve a first-order differential equation), and only in such a situation does one lose the functional independence between bulk and boundary integrals. The ensuing functional dependence is what makes the very possibility of having an integrated Gauss law meaningful. Therefore, we conclude that at a generic configuration of a non-Abelian YM theory, there is no (integrated) Gauss law relating total charges and (integrated) electric fluxes.\footnote{The issue is formally the same as the difficulty of defining quasi-local conserved quantities in general relativity. Also, notice that bringing the gauge-field contribution to the left-hand side of \eqref{eq:no_charge}, so as to make the ensuing ``integrated Gauss law'' identically satisfied, is a trick with no bearing on the dressing problem and the definition of the Green's functions for the matter field.} Using the definition \eqref{eq:Green} of the Green's function, together with the following non-Abelian generalization of Green's theorem (e.g. \cite{JacksonBook}), \begin{equation} \int_R \sqrt{g}\,\mathrm{Tr}( \psi_1 {\mathrm{D}}^2 \psi_2 - \psi_2 {\mathrm{D}}^2 \psi_1 ) = \oint_{{\partial} R} \sqrt{h}\,\mathrm{Tr}( \psi_1 {\mathrm{D}}_s \psi_2 - \psi_2 {\mathrm{D}}_s \psi_1) \quad\forall \psi_{1,2}\in \Omega^0(R,\mathrm{Lie}({\mathcal{G}})), \label{eq:Greenthm} \end{equation} one can choose $\psi_1 = \varphi$ and $\psi_2 = G_{\alpha,x}$ to obtain the Coulombic potential, and hence the Coulombic component of the electric field, in terms of the charge density $\rho$ and the flux $f$: \begin{align} \varphi_\alpha(x) & = \int_R \sqrt{g(y)} \, \mathrm{Tr} \Big( G_{\alpha,x}(y) {\mathrm{D}}^2 \varphi (y) \Big) - \oint_{{\partial} R}\sqrt{h(y)}\, \mathrm{Tr}\Big( G_{\alpha,x}(y){\mathrm{D}}_s \varphi(y) \Big) \notag\\ & \approx \int_R \sqrt{g(y)} \, \mathrm{Tr} \Big( G_{\alpha,x}(y) \rho (y) \Big) - \oint_{{\partial} R}\sqrt{h(y)}\, \mathrm{Tr}\Big( G_{\alpha,x}(y) f(y) \Big) . \end{align} At reducible configurations, this formula must again be amended by the addition of constant offsets, due to the modified boundary condition. The addition of such offsets is possible thanks to the freedom of redefining $G_{\alpha,x}$ by some combination of the reducibility parameters, since these lie in the kernel of \eqref{eq:Green}. A similar construction allows us to solve the SdW boundary value problem for $\varpi_{\text{SdW}}$. More on this in section \ref{sec:dressing}. \subsection{Conserved charges}\label{sec:conservation} To talk about conservation laws, we now consider a spacetime process in the presence of matter. In light of the previous section, we see that an integrated Gauss law exists at reducible configurations. This suggests the possibility of defining {\it stabilizer charges} at such configurations via \eqref{eq:no_charge}:\footnote{Notice that $\mathrm{Tr}(\rho\chi) = \bar\psi \chi \psi$.
Therefore, this charge can be zero even for $\rho\neq0$, e.g. if $\chi\psi=0$ while $\psi\neq0$. For matter in the fundamental representation, the latter condition is not attainable for $G={{\mathrm{SU}}}(2)$, but it is for ${{\mathrm{SU}}}(N)$ with $N>2$. This situation was analyzed in \cite{GomesHopfRiello} through the lens of the Higgs mechanism for condensates. \label{fn:SU2} } \begin{equation} Q[\chi_A] := \int \sqrt{g}\,\mathrm{Tr}(\rho \chi_A) \approx \oint \sqrt{h} \,\mathrm{Tr}(\chi_A f). \label{eq:stabchargedef} \end{equation} As a consequence of the Gauss law, these charges are inherently {\it quasilocal}, insofar as the value of $Q[\chi]$ over a closed Cauchy surface $\Sigma$, ${\partial} \Sigma =\emptyset$, necessarily vanishes. Notice also that, if $\dim(\mathcal{I}_A)=1$, and we take $\chi_A$ to be of unit\footnote{${\mathrm{Lie}(\G)}$ is equipped with the following canonical positive-definite inner product: $\langle \xi,\eta\rangle := \int \sqrt{g} \,\mathrm{Tr}(\xi\eta) $.} norm, then from \eqref{eq:IA} it follows that $\chi_{A^g} = g^{-1}\chi_A g$ and therefore $Q[\chi_A]$ is gauge invariant. However, if $\dim(\mathcal{I}_A)>1$, the identification of specific elements of $\mathrm{Lie}(\mathcal{I}_A)$ at different $A$'s is more problematic: for this reason, in the remainder of this section we will focus on the case $\dim(\mathcal{I}_A)=1$ and will comment on its generalization at the end. Having established a Gauss law and the gauge invariance of the stabilizer charge $Q[\chi_A]$, we now ask whether such charges are also conserved. Consider a configuration of $\Phi={\mathrm{T}}^*{\mathcal{A}} \times (\bar \Psi \times \Psi)$ whose time evolution in temporal gauge allows a time-{\it in}dependent extension of $\chi_A$ to a {\it time} neighbourhood $N$ of $R$, $N=R\times(t_0,t_1)$, that satisfies $ {\mathrm{D}}_i\chi_A =0$ at every time.\footnote{This is equivalent to asking that the evolution is confined to the stratum $\cal N_A$---see the end of this section and appendix \ref{app:slice}. It is possible that such $\chi$'s are uniquely fixed by demanding that they conserve both $ A$ and $ E_{\text{rad}}$ (and then evolving these solutions in time). Whether these motions are physically relevant (at least in some approximation) is not clear to us. We are also ignoring here the difficulty of identifying a given $\chi\in\mathrm{Lie}(\mathcal{I}_A)$ at different configurations in $\mathcal N_A$ when $n=\dim(\mathcal{I}_A)>1$---see the last paragraph of section \ref{sec:charges_YM}. This difficulty might result in the ``mixing'' of different stabilizer charges, which is inconsequential for the present argument.} Then, the quantity $Q[\chi_A]$ is conserved in the sense that it satisfies a balance law in terms of the matter current \cite{AbbottDeser,Barnich, DeWitt_Book}: \begin{equation}\label{eq:balance_YM_time} 0 = \int_N \sqrt{g}\, \nabla_\mu \mathrm{Tr}(\chi J^\mu) = \Delta_{t_1,t_0} Q[\chi] + \int_{t_0}^{t_1} \d t \, F_{{\partial} R}[\chi], \end{equation} where we introduced the fluxes $F_{{\partial} R}[\chi] := \oint \sqrt{h}\,\mathrm{Tr}(\chi J_s)$ through ${\partial} R$. The first equality follows from $ {\mathrm{D}}_\mu \chi_A = 0$ as well as the equation of motion ${\mathrm{D}}_\mu J^{\mu\alpha} = 0$ (Noether II); the second equality is just an application of Stokes' theorem.
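In the Abelian case with $\chi_\text{EM}=1$, this balance law is just electric-charge conservation and is easy to check numerically. The following sketch (with an arbitrary traveling charge profile of our own choosing on $R=(0,1)$) verifies $\Delta_{t_1,t_0} Q + \int \mathrm{d}t \, F_{{\partial} R} = 0$ for a current solving the continuity equation.
\begin{verbatim}
# Abelian (chi = 1) check of the balance law: Delta Q + int dt F_dR = 0
# for rho(t,x) = g(x - t) and J = rho, which solve d_t rho + d_x J = 0.
import numpy as np
from scipy.integrate import quad

g = lambda u: np.exp(-50.0 * (u - 0.5) ** 2)     # a lump initially inside R
rho = lambda t, x: g(x - t)                      # traveling solution
J = rho                                          # current: continuity holds

t0, t1 = 0.0, 0.8
Q = lambda t: quad(lambda x: rho(t, x), 0.0, 1.0)[0]  # charge contained in R
F = lambda t: J(t, 1.0) - J(t, 0.0)              # outward flux through dR

print(Q(t1) - Q(t0) + quad(F, t0, t1)[0])        # ~ 0 up to quadrature error
\end{verbatim}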
It is important to stress that all integrands in the above balance law are {\it quantities constructed geometrically} from the properties of $ A$ in $N$, with gauge-invariance properties analogous to those of $Q[\chi_A]$. Indeed, the existence and properties of a $\chi_A$ such that ${\mathrm{D}}_\mu \chi=0$ are gauge-invariant features of the configuration history $ A(t)$. The construction of such quantities would not be possible at non-reducible configurations, where the equation (Noether II) ${\mathrm{D}}_\mu J^{ \mu\alpha} = 0$ does {\it not} constitute an appropriate replacement for the above. Notice that the condition ${\mathrm{D}}_\mu\chi_A =0$ on the whole of $N=R\times(t_0,t_1)$ implies that $ A(t)$ and $ E(t)=\dot A(t)$ are both reducible, and therefore---via the Gauss constraint---that $\rho$ commutes with $\chi_A$: \begin{equation} {\mathrm{D}}_\mu\chi_A =0 \quad \Rightarrow \quad {\mathrm{D}}_i\chi_A =0, \quad [ E_i, \chi_A ] = 0 \quad\text{and}\quad [ \rho, \chi_A]=0. \end{equation} We conclude this section by noticing that the balance equation expressed in \eqref{eq:balance_YM_time} is akin to the conservation of Komar charges for Killing vector fields in general relativity. Similarly, the impossibility of identifying a meaningful non-Abelian charge density over generic, i.e. nonreducible, configurations parallels, in general relativity, the difficulties in identifying conserved stress-energy charges away from backgrounds with Killing symmetries \cite{Szabados}. In general relativity, the Komar charges encode the conservation of energy-momentum and angular-momentum in the test-particle approximation over a symmetric background. The physical relevance of this approximation in general relativity is more than well established (think of the theory of special relativity); the same cannot be said for YM. (This difference is due to the extreme weakness of the gravitational attraction compared to the other forces.) Finally, within the present framework, the construction of asymptotic YM charges at null infinity that are more akin to the Bondi, rather than Komar, charges was carried out in \cite{RielloSoft}. The above analysis has focused on the space and spacetime properties of reducible configurations. In the following sections, we will instead focus on their field-space and symplectic properties. Whereas these properties will be analyzed in detail in the case of the Abelian theory, in the non-Abelian case we will limit ourselves to emphasizing the difficulties a generalization would incur. \subsection{Charges and symplectic geometry in electromagnetism}\label{sec:charges_EM} As the prototypical example of an Abelian YM theory, we will focus on electromagnetism (EM). As we have already noticed, in the {\it Abelian} case {\it all} configurations are reducible. It is therefore necessary to incorporate charge transformations in the symplectic treatment of the Abelian theory. In this case, for all $A$, $\mathcal{I}_A \equiv \mathcal{I}_\text{EM}\cong G$ is the (normal) subgroup of constant gauge transformations in ${\mathcal{G}}$ (the ``electric-charge'' group), and therefore ${\mathcal{A}}_\text{EM}$ has the structure of a bona-fide infinite-dimensional fibre bundle for the group ${\mathcal{G}}_\text{EM}:={\mathcal{G}} / \mathcal{I}_\text{EM}$. Since in EM the electric field $E$ is gauge invariant, the matter-free phase space ${\mathrm{T}}^*{\mathcal{A}}_\text{EM}$ inherits a bona-fide fibre bundle structure with respect to the same quotient group.
In particular, all phase space configurations $(A,E)\in{\mathrm{T}}^*{\mathcal{A}}_\text{EM}$ are reducible with respect to the constant gauge transformations $\chi_\text{EM}=const$. However, in spite of this, none of the matter-field configurations with $\psi\neq0$ is reducible: \begin{equation} \delta_{\chi_\text{EM}}\psi = -\chi_\text{EM} \psi \neq 0. \label{eq:psi_chiEM} \end{equation} Indeed, $\chi_\text{EM}^\#$ as a vector field on the full phase space $\Phi_\text{EM} = {\mathrm{T}}^*{\mathcal{A}}_\text{EM} \times( \bar \Psi \times \Psi)$ reads\footnote{Here, $B$ is a spinorial index in $\mathbb C^4$, e.g. the Dirac gamma matrices $\gamma^\mu$ have components $(\gamma^\mu)^{B'}{}_B$.} \begin{equation} \chi_\text{EM}^\#{} = \int (-\chi_\text{EM} \psi)^B(x) \frac{\delta}{\delta \psi^B(x)} + (\bar \psi \chi_\text{EM})^B(x) \frac{\delta}{\delta \bar\psi^B(x)} \in \mathrm T \Phi_\text{EM}. \label{eq:chiEMpsi} \end{equation} Therefore, although we can define a functional connection on ${\mathcal{A}}_\text{EM}$ for the quotient gauge group ${\mathcal{G}}_\text{EM}$, in order to use this connection to define horizontal derivatives on $\Phi_\text{EM}$, we need to be able to identify elements $[\xi] = [\xi + \chi_\text{EM} ] \in \mathrm{Lie}({\mathcal{G}}_\text{EM})$ with elements of $\mathrm{Lie}({\mathcal{G}})$. This cannot be done canonically, and a choice of embedding map of vector spaces \begin{equation}\label{eq:kappa_EM} \kappa : \mathrm{Lie}({\mathcal{G}}_\text{EM}) \hookrightarrow {\mathrm{Lie}(\G)} \end{equation} has to be made. (Notice that in the Abelian case, as long as $\kappa$ preserves the vector-space structure of $\mathrm{Lie}({\mathcal{G}}_\text{EM})$, it will also preserve its (trivial) Lie-algebra structure.) A simple choice is to represent ${\mathcal{G}}_\text{EM} $ in ${\mathcal{G}}$ as the so-called group of ``pointed\footnote{One can always find the respective group of pointed gauge transformations that acts freely on the space of field configurations. Its construction is completely analogous in non-Abelian YM. In all cases ${\mathcal{G}}_\ast\subset {\mathcal{G}}$ is a normal subgroup and ${\mathcal{G}}/{\mathcal{G}}_\ast \cong G$. (Analogous considerations hold in metric general relativity, where ``pointed diffeomorphisms'' are diffeomorphisms that leave a point and a tangent space at that point invariant.) What distinguishes the Abelian case is that the group of pointed gauge transformations is isomorphic to the quotient ${\mathcal{G}}_A := {\mathcal{G}} / \mathcal I_A$ (for all $A$), which only in this case is a group itself.} gauge transformations,'' ${\mathcal{G}}_* := \{ g \in {\mathcal{G}} \text{ such that } g(x_*) = \mathrm{id} \} $ for a certain (arbitrary) $x_*\in R$. That is, one fixes $\kappa(\xi)$ as the only element of $\xi + \mathrm{Lie}(\mathcal{I}_\text{EM})$ such that $(\kappa(\xi))(x_\ast)= 0$. But there are other possibilities, too. In the following we will denote by $\xi_*$ elements of $\kappa(\mathrm{Lie}({\mathcal{G}}_\text{EM}))\subset{\mathrm{Lie}(\G)}$, irrespective of the choice of $\kappa$ that has been made. At the end of this section, we will argue that the choice of $\kappa$ is irrelevant---at least within a given superselection sector of $f$. We now turn our attention to the geometry of field space, and to the definition of a functional connection form.
Recalling that the SdW boundary value problem has $\mathrm{Lie}(\mathcal{I}_\text{EM})$ as a kernel, we define $\varpi_\text{EM}\in\Omega^1({\mathcal{A}}_\text{EM},{\mathrm{Lie}(\G)})$ as the unique solution to the SdW equation\footnote{The SdW choice is here made for definiteness, but won't play any particular role in what follows. Other choices of connection can be studied e.g. by considering the corresponding vertical projector $\hat V$ and thus defining $\varpi_\text{EM} := \kappa\circ \iota^{-1} (\hat V(\cdot)) \in \Omega^1({\mathcal{A}}_\text{EM}, {\mathrm{Lie}(\G)})$, where $\iota \equiv \kappa^\#: \mathrm{Lie}({\mathcal{G}}_\text{EM}) \to V_A\subset {\mathrm{T}}_A{\mathcal{A}}_\text{EM}$ is the isomorphism between equivalence classes $[\xi+\chi_\text{EM}]\in \mathrm{Lie}({\mathcal{G}}_\text{EM})$ and vertical vector fields in ${\mathrm{T}}{\mathcal{A}}_\text{EM}$. The SdW connection considered in the text corresponds to the choice of $\hat V$ as the $\mathbb G$-orthogonal vertical projector.} that is valued in $\kappa(\mathrm{Lie}({\mathcal{G}}_\text{EM}))\subset \mathrm{Lie}({\mathcal{G}})$. This connection satisfies the projection and covariance properties \eqref{eq:varpi_def} with respect to gauge transformations in the image of $\kappa$: \begin{equation} \begin{cases} \mathbb i_{\xi_*^\#} \varpi_\text{EM} = \xi_*\\ \mathbb L_{\xi_*^\#}\varpi_\text{EM} = {\mathbb{d}} \xi_* \end{cases} \qquad \forall \xi_*\in\kappa(\mathrm{Lie}({\mathcal{G}}_\text{EM})) \label{eq:EM_varpi-pointed}. \end{equation} The above properties, however, ``fail'' if $\xi$ is replaced by an element of the stabilizer $\chi_\text{EM} = const$: \begin{equation} \mathbb i_{\chi_\text{EM}^\#} \varpi_\text{EM} = 0 = \mathbb L_{\chi_\text{EM}^\#} \varpi_\text{EM}. \label{eq:varpi_chiEM} \end{equation} These two equations can be summarized as follows: \begin{equation} \begin{cases} \mathbb i_{\xi^\#} \varpi_\text{EM} = \kappa( \xi)\\ \mathbb L_{\xi^\#}\varpi_\text{EM} = {\mathbb{d}} \kappa(\xi) \end{cases} \qquad \forall \xi \in{\mathrm{Lie}(\G)}. \label{eq:EM_varpikappa} \end{equation} In ${\mathcal{A}}_\text{EM}$, these equations are trivial since $\chi_\text{EM}^\# \equiv 0$ in ${\mathrm{T}}{\mathcal{A}}$. But these equations can readily be interpreted within the phase space $\Phi_\text{EM}$ as well. In this case $\varpi_\text{EM}$ is as usual the pullback of the SdW connection defined over ${\mathcal{A}}_\text{EM}$ and $\cdot^\#$ now includes the action of ${\mathcal{G}}$ on the electric field (which is also trivial) and on the matter fields $\psi$ (which is {\it non}trivial) \eqref{eq:chiEMpsi}. Hence, over $\Phi_\text{EM}$, the vector $\chi^\#_\text{EM}$ does not vanish but is nonetheless in the kernel of (the pullback of) $\varpi_\text{EM}$. Thus, we see that we have geometrically isolated the set of ``constant gauge transformations'' of EM. Of course, these transformations are precisely those associated with the total electric charge contained in $R$. In the following, we will rather call them {\it charge transformations}. We can now define the horizontal derivatives as usual: \begin{equation} {\mathbb{d}}_\perp A := {\mathbb{d}} A - \d \varpi_\text{EM} \qquad {\mathbb{d}}_\perp E := {\mathbb{d}} E \qquad\text{and}\qquad {\mathbb{d}}_\perp \psi : = {\mathbb{d}} \psi + \varpi_\text{EM}\psi \end{equation} with the understanding that---in the presence of the matter fields---full covariance is only guaranteed with respect to gauge transformations {\it modulo} charge transformations.
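As a concrete illustration of the embedding \eqref{eq:kappa_EM} and of its ambiguity, the following sketch realizes $\kappa$ through pointed gauge transformations on an arbitrary one-dimensional grid of our own choosing, and checks that two such choices differ by a charge transformation, which acts on $\psi$ but not on $A$.
\begin{verbatim}
# kappa: Lie(G_EM) -> Lie(G) via pointed gauge transformations on a 1d grid:
# kappa(xi) = xi - xi(x_*). Two base points differ by a constant chi_EM.
import numpy as np

N = 200
x = np.linspace(0.0, 1.0, N)
rng = np.random.default_rng(3)
xi = np.cumsum(rng.standard_normal(N)) / N    # a generic gauge parameter xi(x)

kappa = lambda xi, i_star: xi - xi[i_star]    # representative vanishing at x_*

chi = kappa(xi, 0) - kappa(xi, N // 2)        # difference of two embeddings ...
print(np.allclose(chi, chi[0]))               # ... is constant: a charge transf.

# chi_EM acts trivially on A (delta A = grad chi = 0) but not on psi:
print(np.allclose(np.gradient(chi, x), 0.0))  # True
psi = rng.standard_normal(N) + 1j * rng.standard_normal(N)
print(np.max(np.abs(-chi * psi)))             # nonzero: delta_chi psi = -chi psi
\end{verbatim}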
Also, notice how the horizontal derivative of $A$ could be unambiguously defined using a connection valued in $\mathrm{Lie}({\mathcal{G}}_\text{EM}) = {\mathrm{Lie}(\G)}/\mathrm{Lie}(\mathcal{I}_\text{EM})$, since ${\mathrm{D}}\chi_\text{EM} \equiv 0$, but the horizontal derivative of $\psi$ requires a connection valued in ${\mathrm{Lie}(\G)}$, since $\xi$ and $\xi+\chi_\text{EM}$ act differently on $\psi$. Let us analyze the properties of $\theta^\perp = \theta^\perp_\text{EM} + \theta^\perp_\text{Dirac}$ under gauge and charge transformations. We start by noticing that, contrary to the Noether charges $H^\text{EM}[\xi_*] := \theta(\xi_*^\#) = \theta^V(\xi_*^\#)$, the stabilizer charges $Q_\text{EM}[\chi_\text{EM}]$ can be defined solely from the {\it horizontal} symplectic potential:\footnote{Notice that, since $ \varpi_\text{EM}(\chi_\text{EM}^\#) = 0$, $Q_\text{EM}[\chi_\text{EM}] := \theta^\perp(\chi^\#_\text{EM})$ is also equal to $ \theta(\chi^\#_\text{EM})$, to $ \theta_\text{Dirac}(\chi^\#_\text{EM})$, and to $ \theta^\perp_\text{Dirac}(\chi^\#_\text{EM})$.} \begin{equation} 0 \neq Q_\text{EM}[\chi_\text{EM}] := \int \sqrt{g} \, ( \chi_\text{EM} \rho ) = \theta^\perp (\chi^\#_\text{EM}). \label{eq:QEM} \end{equation} We notice that the Gauss constraint \eqref{eq:Gauss-Coul}, together with the fact that $0=\delta_{\chi_\text{EM}} A = \d \chi_\text{EM}$, implies the (integrated) Gauss law for the stabilizer charges---usually expressed for $\chi_\text{EM}=1$: \begin{equation} Q_\text{EM}[\chi_\text{EM}] \approx \int \sqrt{g} \, ( \chi_\text{EM} \nabla^i E_i^{\text{Coul}} ) \approx \oint \sqrt{h}\,( \chi_\text{EM} f) = \chi_\text{EM} \oint \sqrt{h} \, f. \end{equation} If $\chi_\text{EM}$ is constant not only in space but also in time, $Q_\text{EM}[\chi_\text{EM}]$ is a quantity satisfying a balance equation like \eqref{eq:balance_YM_time} (electric-charge conservation). To understand the symplectic significance of the charge $Q_\text{EM}[\chi_\text{EM}]$, the following identity is important:\footnote{This equation can be obtained by Lie-deriving $\theta^H$ \eqref{eq:theta_HV} using $ \mathbb L_{\mathbb X} {\mathbb{d}} \bullet = {\mathbb{d}} \mathbb L_{\mathbb X} \bullet$, the Leibniz rule, as well as the identities $\mathbb L_{\chi^\#_\text{EM}} A \equiv \delta_{\chi_\text{EM}} A = 0$ and \eqref{eq:varpi_chiEM} (which imply $\mathbb L_{\chi_\text{EM}^\#} \theta^H_\text{EM} = \mathbb L_{\chi_\text{EM}^\#} \theta^{H}_\text{EM,Dirac} $), and \eqref{eq:psi_chiEM}. } \begin{equation} \mathbb L_{\chi_\text{EM}^\#} \theta^\perp = \int \sqrt{g}\, (\rho\, {\mathbb{d}} \chi_\text{EM}). \label{eq:L-ddchi} \end{equation} This identity establishes that $\theta^\perp$ is {\it not} invariant under the flow of a {\it charge} transformation $\chi_\text{EM}$, unless $\chi_\text{EM}$ is field-{\it in}dependent. This should be contrasted with the invariance of $\theta^\perp$ under gauge transformations proper \eqref{eq:LthetaH=0}. Before we make the comparison explicit, let us first follow the previous remark to its conclusions: the invariance of $\theta^\perp$ under the field-{\it in}dependent charge flow $\chi^\#_\text{EM}$, for ${\mathbb{d}} \chi_\text{EM}=0$, implies the following {\it nontrivial} flow equation: \begin{equation} 0 = \mathbb L_{\chi_\text{EM}^\#} \theta^\perp = {\mathbb{d}} Q_\text{EM}[\chi_\text{EM}] + \mathbb i_{\chi^\#_\text{EM}} \Omega^\perp \qquad ({\mathbb{d}} \chi_\text{EM} = 0).
\label{eq:Abflow} \end{equation} Whereas the second equality is an identity that follows solely from Cartan's formula and the definition \eqref{eq:QEM}, we stress once again that, in the presence of matter (where the equation is nontrivial), the first equality, and therefore the Hamiltonian-flow equation, holds if and only if $ \chi_\text{EM}$ is the {\it same} throughout ${\mathcal{A}}_\text{EM}$, i.e. only if ${\mathbb{d}} \chi_\text{EM} = 0$ (cf. \eqref{eq:L-ddchi}). Heuristically, this makes sense: as defined here, a charge $Q_\text{EM}[\chi_\text{EM}] $ is a measure of a certain physical property of a matter distribution over a (symmetric) background, and the flow equation compares this measure at two neighbouring configurations---but, for this comparison to be meaningful, one cannot change the ``measuring rod'' (i.e. $\chi_\text{EM}$) from one configuration to the other. Back to the comparison with the {\it trivial} ``flow'' equation for gauge transformations proper. This is implied by \eqref{eq:LthetaH=0} when ${\mathcal{A}}$ actually has the structure of a fibre bundle (so that the action of the gauge group induces a proper foliation) and $\varpi$ satisfies the connection-form axioms \eqref{eq:varpi_def} for {\it all} $\xi\in{\mathrm{Lie}(\G)}$ (as opposed to \eqref{eq:varpi_chiEM}). In EM, thanks to \eqref{eq:EM_varpi-pointed}, it is {\it pointed} gauge transformations which satisfy $\mathbb L_{\xi_*^\#}\theta^\perp_\text{EM}\equiv 0$, and thus the trivial flow equation follows:\footnote{More generally, the two equations above can be summarized in the following equality between the two expressions of $\mathbb L_{\xi^\#} \theta^\perp $: $$ Q_\text{EM}[{\mathbb{d}}\Xi] = {\mathbb{d}} Q_\text{EM}[\Xi] + \mathbb i_{\Xi^\#} \Omega^\perp \quad\text{where}\quad \Xi := \xi - \kappa(\xi) \in \mathrm{Lie}(\mathcal{I}_\text{EM}).$$ } \begin{equation} 0 \equiv \mathbb L_{\xi_*^\#} \theta^\perp = {\mathbb{d}} (\mathbb i_{\xi_*^\#}\theta^\perp) + \mathbb i_{\xi_*^\#} \Omega^\perp. \label{eq:gaugeinvariance} \end{equation} This ``flow'' equation is said to be trivial because each term on the rhs vanishes identically and independently, even when ${\mathbb{d}} \xi_* \neq 0$. In sum, that such {\it charge} transformations $\chi_\text{EM}$ are physical, and are thus distinguished from {\it gauge} transformations $\xi_*$, is not postulated, but derived: the transformations corresponding to $\chi_\text{EM}$ are entirely generated by the $\theta^\perp_\text{EM}$ components, rather than by the Gauss constraint (which is entirely in $\theta^V_\text{EM}$), and thus survive the symmetry reduction process. Therefore, the formal similarity between the on-shell stabilizer charges $Q_\text{EM}[\chi_\text{EM}] \approx \oint \sqrt{h}\,( \chi_\text{EM} f) $ and the on-shell Noether charges $H^\text{EM}_{\xi_*} \approx \oint \sqrt{h}\,( \xi_ * f)$ should not obscure the important differences between these two quantities. The Noether charges $H^\text{EM}_{\xi_*}$ should be thought of as encoding information on the $f$-superselection sector, rather than on the quasi-local radiative degrees of freedom contained in $R$. \paragraph*{Remark} Above we have chosen to represent ${\mathcal{G}}/\mathcal{I}_\text{EM}$ in terms of the pointed gauge group ${\mathcal{G}}_*$.
This choice is arbitrary not only in the choice of $x_*$, but also due to the fact that other ways exist of representing the quotient ${\mathcal{G}}/\mathcal{I}_\text{EM}$ (e.g., for a cubic region, in terms of the Fourier modes of $\xi$ beyond the zero mode). Different such choices lead to different prescriptions for defining the SdW connection $\varpi_\text{EM}$. Consider two such prescriptions, $\varpi_{\text{EM},1}$ and $\varpi_{\text{EM},2}$. Then, since $\varpi_\text{EM}$ is itself exact, $\varpi_{\text{EM},2} - \varpi_{\text{EM},1} = {\mathbb{d}} \sigma$ for some $\sigma \in \Omega^0(\mathrm{Lie}(\mathcal{I}_\text{EM}))$. From this, it is easy to see that $\Omega^\perp_2 - \Omega^\perp_1 = Q_\text{EM}[{\mathbb{d}}\sigma] = {\mathbb{d}} \sigma \curlywedge {\mathbb{d}} q_R $ where $q_R = \int \sqrt{g}\, \rho$ is the total electric charge in $R$. Thus, the two reduced symplectic forms coincide within any given superselection sector (actually, even within larger sectors of constant $q_R$ rather than constant $f$). \subsection{Considerations on the non-Abelian generalization}\label{sec:charges_YM} How much of the construction carried out in the previous section generalizes to the non-Abelian case? We leave a comprehensive answer to this question for future work. Here, we limit ourselves to some general considerations on the difficulties one would encounter in this process of generalization. As observed in the previous part of this section, we already have a candidate for a global set of charges in YM theory: this is the stabilizer charge $Q[\chi_A]$. What is in question is: Are these charges gauge-invariant? Are they the Hamiltonian generators of the charge transformations $\chi_A^\#$ for the {\it horizontal} symplectic structure? In the rest of this section we will try to identify the difficulties one needs to face when addressing these questions. Albeit rarely, we will at times be compelled to refer to notions---which we have not introduced---regarding the stratification of ${\mathcal{A}}$ by the reducible configurations, or the slice theorem \cite{Ebin, Palais, isenberg1982slice, kondracki1983}. To make this article self-contained, we add a brief summary of these notions in appendix \ref{app:slice}. \paragraph*{An example: the vacuum and constant gauge transformations} Consider the non-Abelian vacuum configuration $A=0$. Similarly to the EM case, the non-Abelian vacuum is also stabilized by constant gauge transformations, $\mathcal{I}_{A=0} \cong G$ and $\chi_{A=0} = const$; one might therefore expect that considerations similar to those made in the previous section for EM apply to YM around the vacuum background $A=0$. This would recover the notion of global charges proposed in \cite[Ch. 7]{StrocchiBook}, which singles out ``global gauge transformations'' of this sort (i.e. $\xi = const$) as having a particular physical significance. However, complications arise, and this simple example usefully exemplifies some of the difficulties one encounters when attempting to generalize the constructions of the previous section to the non-Abelian case. From a mathematical perspective, taking the directions $\chi_{A=0}^\#$ as physical also at non-vacuum configurations means modelling the space of physical configurations ${\mathcal{A}}/{\mathcal{G}}$ by the slice through the vacuum configuration\footnote{Mutatis mutandis, any slice through a vacuum configuration $A=0^g = g^{-1}\d g \in {\cal O}_{A=0}$ would do.
Although a brief summary will follow, for a more thorough discussion of the notion of ``slice'' see appendix \ref{app:slice} and references therein.} $\mathscr S_{A=0}$. The notion of ``slice'' generalizes the notion of ``section'' of a fibre bundle to the case in which reducible configurations are present. However, at reducible configurations, the notion of slice necessarily differs from the naive intuition behind the notion of section. In particular, the slice $\mathscr S_{A=0} \subset {\mathcal{A}}$ contains the non-trivial orbits of the configurations $A\neq 0$ under the constant transformations $\chi_{A=0}^\#$, even though it contains a single representative of each (partial) orbit under the ``non-constant gauge transformations'' ${\mathcal{G}}_{A=0} = {\mathcal{G}} / {\cal I}_{A=0}$ (which do not form a group). The stabilizer charges $Q[\chi_{A=0}]$ then emerge as the Noether charges (Noether I) associated with the now ``frozen'' background $A=0$ and (fluctuation) fields that transform in the adjoint of $\mathcal{I}_{A=0}\cong G$. (This treatment has an analogue at all background configurations $\tilde A$ with nontrivial stabilizers. These analogue constructions, however, lead to different charge groups, $\mathcal{I}_{\tilde A}$). There are however various reasons to deem this analysis incomplete. One such reason is the arbitrariness in the choice of symmetric background. Another is that, even granting that, in YM, the vacuum configurations in ${\cal O}_{A=0}$ {\it are} special, the use of $\mathscr S_{A=0}$ is at best perturbative around $A=0$, as the following analogy highlights: the analogue in general relativity of using $\mathscr S_{A=0}$ as a model for ${\mathcal{A}}/{\mathcal{G}}$ in YM would correspond to (somehow) choosing one set of e.g. Cartesian coordinates (adapted to the vacuum $g_{\mu\nu}=\eta_{\mu\nu}$) and declaring that translations and rotations with respect to these coordinates have physical significance also at non-flat configurations. Therefore, the analysis offered above can, at best, have an approximate significance in the presence of small perturbations on top of the vacuum background. (As already emphasized at the end of section \ref{sec:Greens}, the YM stabilizer charges $Q[\chi]$ are indeed analogous to general relativity's Komar charges for backgrounds with a Killing symmetry; but whereas the physical importance of these charges over approximately symmetric backgrounds is manifest in the low-mass limit of general relativity, to the best of our knowledge it has not been established at all in (any regime of) YM.) These observations suggest that it is not possible to arrive at a single notion of global charges in YM that is meaningful throughout ${\mathcal{A}}$. The alternative is to work, as in the rest of this paper, in a differential-geometric language at the level of local tangent spaces, considering only those stabilizer charges defined at one given configuration. \paragraph*{Symmetry sectors} This takes us to the next complication. Since stabilizer charges exist only at reducible configurations, which form a meagre set in ${\mathcal{A}}$, differentiating quantities such as $Q[\chi_A]$---e.g. to study the associated symplectic flows---is problematic: generic, non-fine-tuned variations of the symmetric base configuration necessarily take us to irreducible configurations that do not admit any stabilizer charge at all. This is another reason why, in the non-Abelian theory, the physical viability of stabilizer charges is unclear.
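The meagreness of the reducible set can be made concrete in a finite-dimensional caricature. The following Python sketch---our own toy construction, with lattice conventions (edge-based group elements, grid graph) that do not appear in the text---counts infinitesimal stabilizers, i.e. covariantly constant $\chi$'s solving $\chi_b = \mathrm{Ad}(U_e^\dagger)\chi_a$ on every oriented edge $e=(a,b)$, for $G=SU(2)$: the vacuum $U_e\equiv\mathbb 1$ carries the full $\dim\mathfrak{su}(2)=3$ worth of stabilizers, while a randomly chosen configuration generically carries none.
\begin{verbatim}
import numpy as np

# A finite-dimensional caricature (ours, not the paper's): count infinitesimal
# stabilizers of a lattice SU(2) gauge field, i.e. solutions chi of the
# covariant-constancy condition  chi_b = Ad(U_e^dagger) chi_a  on every
# oriented edge e = (a, b) of a small grid graph with two independent cycles.
rng = np.random.default_rng(1)
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

def random_su2():
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return a[0] * np.eye(2) + 1j * (a[1]*sigma[0] + a[2]*sigma[1] + a[3]*sigma[2])

def Ad(U):  # the SO(3) matrix of X -> U X U^dagger acting on su(2)
    return np.einsum('iab,bc,jcd,da->ij', sigma, U, sigma, U.conj().T).real / 2

nodes = [(i, j) for i in range(3) for j in range(2)]        # a 3x2 grid
idx = {n: k for k, n in enumerate(nodes)}
edges = [(idx[(i, j)], idx[(i + 1, j)]) for i in range(2) for j in range(2)] \
      + [(idx[(i, 0)], idx[(i, 1)]) for i in range(3)]
N = len(nodes)

def stabilizer_dim(Us):
    # assemble the linear constraints chi_b - Ad(U^dag) chi_a = 0, edge by edge
    K = np.zeros((3 * len(edges), 3 * N))
    for e, (a, b) in enumerate(edges):
        K[3*e:3*e+3, 3*b:3*b+3] = np.eye(3)
        K[3*e:3*e+3, 3*a:3*a+3] = -Ad(Us[e].conj().T)
    return 3 * N - np.linalg.matrix_rank(K)

print(stabilizer_dim([np.eye(2)] * len(edges)))          # vacuum: 3 = dim su(2)
print(stabilizer_dim([random_su2() for _ in edges]))     # generic config: 0
\end{verbatim}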
The above notwithstanding, it still is of mathematical interest to analyze the consequences of stabilizer charges {\it within} field-space sectors characterized by a certain degree of symmetry, i.e. within given strata ${\cal N}_A$ characterized by non-trivial (possibly sub-maximal) stabilizers, ${\cal I}_A\supsetneq \{\mathrm{id}\} $. (One notable case in which focusing on reducible configurations is not physically restrictive is that of Yang-Mills fields at asymptotic null infinity: there, all physically admissible configurations must be---at leading order in $1/r$---in the vacuum configuration. This means that certain asymptotic stabilizers are intrinsically defined, and thus lead to an enlarged group of asymptotic symmetries, which correspond to Strominger's leading soft charges. Our formalism extends to that context without obstructions: see \cite{RielloSoft} and references therein.) Thus, within a single stratum, it is at least meaningful to attempt a generalization of the symplectic analysis we performed for the Abelian stabilizer charges. There are however further obstacles, which we will now highlight by inspecting the various ingredients entering \eqref{eq:Abflow} and \eqref{eq:gaugeinvariance}. In the following, all differential operators must be understood as being those intrinsic to a given stratum $\mathcal N_A$. \paragraph*{A $\varpi$ for reducible configurations?} The main tool we need to construct is a generalization of the connection form to a stratum $\mathcal N_A$, and thus of a horizontal differential. First, notice that the notion of horizontal differential is useful only if there are nontrivial horizontal directions within $\mathcal N_A$, which is not the case in the bottom stratum $\mathcal N_{A=0}$ of the YM vacuum $A=0$, since\footnote{Cf. footnote \ref{fnt:vac_stratum} in appendix \ref{app:slice}.} $\mathcal N_A = \mathcal O_{A=0}$. Second, in the non-Abelian theory, $\mathfrak{G}_A = {\mathrm{Lie}(\G)}/\mathrm{Lie}(\mathcal{I}_A)$ generically fails to be a Lie algebra, and so does $\kappa(\mathfrak{G}_A) \subset \mathrm{Lie}({\mathcal{G}})$, for $\kappa$ (the non-Abelian analogue of) the embedding map given in \eqref{eq:kappa_EM}. Therefore, on $\mathcal N_A$, it will not be possible to define an actual connection form that satisfies the covariance property as in \eqref{eq:EM_varpi-pointed}, and its extension to $\Phi$ will lack a generalization of this property as in \eqref{eq:EM_varpikappa}. In order to find a useful non-Abelian version of \eqref{eq:EM_varpikappa}, it will therefore be important to find a set of definitions leading to a minimal modification of the projection and covariance properties \eqref{eq:varpi_def} in the presence of a nontrivial stabilizer group. \paragraph*{A basic symplectic structure?} Once a viable generalization of the projection and covariance properties has been found, and thus a $\varpi_\text{red}\in\Omega^1(\mathcal N_A, {\mathrm{Lie}(\G)})$ has been defined, one will still have to check whether---through this $\varpi_\text{red}$---it is possible to define an appropriately horizontal and gauge-invariant (i.e. basic) symplectic structure that can lead to the analogues of \eqref{eq:Abflow} and \eqref{eq:gaugeinvariance}. We expect this will not be, strictly speaking, possible: depending on the specific way $\varpi_\text{red}$ generalizes the projection and covariance properties, we expect certain obstructions to invariably appear.
Ideally, these obstructions will be encoded in a certain combination of the stabilizer charges, rather than in some new objects. \paragraph*{Field-space constant charge transformations?} Finally, an essential ingredient of the flow equation for the electric charges \eqref{eq:Abflow} is the condition ${\mathbb{d}} \chi_\text{EM} = 0$. A similar condition will have to appear in the non-Abelian case too. It is thus instructive to discuss why its naive generalization, ${\mathbb{d}} \chi_A = 0$, cannot be correct and what kind of generalization might be available. The failure of the naive generalization follows from the fact that $\mathcal{I}_A$ is preserved throughout $\mathcal N_A$ only {\it up to} conjugation by elements of ${\mathcal{G}}$ (see appendix \ref{app:slice}). Since $\mathcal{I}_A$ necessarily changes vertically according to \eqref{eq:IA}, an equation expressing (some form of) constancy of $\chi_A$ can only hold along horizontal directions. Indeed, it turns out that $\mathcal{I}_A$ {\it is} preserved in the directions that are $\mathbb G$-orthogonal to the orbits ${\cal O}_A$ (this fact is at the basis of most proofs of the slice theorem, see footnote \ref{fnt:slicethm} in appendix \ref{app:slice}). If $\dim(\mathcal{I}_A) = 1$ or $\dim\mathscr{S}_A<2$,\footnote{ Here we take the slice within each stratum, $\cal{N}_A$. } this observation would suffice to find a (weaker) version of the condition ${\mathbb{d}} \chi_\text{EM}=0$ that applies to the non-Abelian case. However, it is still insufficient if $\dim(\mathcal{I}_A) = n > 1$ (and $\dim\mathscr{S}_A\geq 2$). This is because there is no canonical way to a priori identify {\it elements} of $\mathcal{I}_A$ and $\mathcal{I}_{A'}$ at two different configurations $A$ and $A'$, even when $\mathcal{I}_A = \mathcal{I}_{A'}$. Therefore, in general, we expect that it would be necessary to introduce a set of bases $\{ \chi_A^{(\ell)}\}_{\ell =1}^n$ of $\mathrm{Lie}(\mathcal{I}_A)$ together with a connection $\nu^{\ell}{}_{\ell'} \in \Omega^1(\mathcal N_A , \mathfrak{gl}(n))$. The curvature of this connection, unless geometrically constrained to vanish, would provide yet another source of obstruction to the non-Abelian generalization of the flow equation \eqref{eq:Abflow}. We postpone any further analysis of these issues to future work. \section{The SdW connection from Dirac's dressing prescription \label{sec:dressing}} In this section, which is somewhat independent from the rest of the paper, we revisit Dirac's construction for the dressing of the electron \cite{Dirac:1955uv}, and provide considerations about its generalizability (or lack thereof) to the non-Abelian case. This discussion also offers an {\it independent}, albeit heuristic, route to the introduction of the SdW connection $\varpi_{\text{SdW}}$. It should be noted that, following Dirac, we will set up the problem in terms of a Coulombic potential from the very beginning (equation \eqref{eq:Green}). In the light of section \ref{sec:symred}, and of equation \eqref{eq:Coul_pot} in particular, it should then not come as a surprise that this will lead us precisely to the {\it SdW} connection. What {\it is} unexpected is that Dirac's construction will naturally lead us to a field-space connection form at all, and thus to a notion of dressing that involves ``field-space Wilson lines,'' which are field-space non-local objects. We will comment on this point at the end of this section.
Dirac's dressing construction can be motivated by the need to define a {\it physical} electron field, meant to correspond to a creation operator that creates the ``bare'' electron together with its Coulombic electric field, so that the Gauss law is automatically satisfied. Being meant to be physical, this dressed field is expected to be, and indeed is, gauge invariant. At the same time, however, the dressed field describes a charged electron, and therefore also carries electric charge. This means that the total electric charge Poisson-generates a global shift in the phase of the dressed electron field, which might seem to clash with the posited gauge invariance of the dressed field. However, as we saw in section \ref{sec:charges}, these requirements are not mathematically in conflict: constant ``gauge transformations'' associated to the electric charge correspond to stabilizer transformations that do have a different (geometric) status in ${\mathcal{A}}_\text{EM}$ with respect to generic (local) gauge transformations. Starting from these ideas, we will now revisit Dirac's construction. We will work from the outset in finite regions and in the non-Abelian setting. Denoting the dressed field with a hat, $\hat \psi$, the classical condition corresponding to the demand that the corresponding quantum field creates an electron at $x$ together with its electrostatic field is \begin{equation} \{ E_j^\beta(y) , \hat\psi(x) \} = - \big({\mathrm{D}}_j G_{\alpha,x} \big){}^\beta(y) \tau_\alpha \hat \psi(x) \label{eq:dressedPoisson} \end{equation} where $\{ \cdot , \cdot \}$ denotes the Poisson bracket and\footnotemark~$G_{\alpha,x}\in\Omega^0(R,\mathrm{Lie}({\mathcal{G}}))$ is the $\mathrm{Lie}({\mathcal{G}})$-valued Green's function of the SdW boundary value problem, as in \eqref{eq:Green}---which we report here for convenience: \begin{equation} \begin{dcases} {\mathrm{D}}^2 G_{\alpha,x}(y) = \tau_\alpha \delta_x(y) & \text{in }R\\ {\mathrm{D}}_s G_{\alpha,x}(y) = 0 & \text{at }{\partial} R \end{dcases} \label{eq:Green2} \end{equation} Although at this level any other choice of boundary conditions would have worked, the (covariant) Neumann boundary condition is chosen here for future convenience. As observed in section \ref{sec:Greens}, this choice of Green's function---valid only at non-reducible configurations\footnote{See \ref{sec:Greens} for how the boundary value problem should be amended at reducible configurations.}---corresponds to the demand that the dressed particle created at $x$ does \textit{not} contribute to the flux $f$ at ${\partial} R$. This is consistent with the fact, reviewed in \ref{sec:Greens}, that at non-reducible configurations there is no meaningful integrated Gauss law. A posteriori, with the knowledge acquired from the construction of the SdW connection, it is possible to see that the boundary conditions of \eqref{eq:Green2} are moreover the only ones that make $\hat \psi$ gauge invariant with respect to gauge transformations whose support is not limited to the interior of $R$, but extends also to its boundary ${\partial} R$. Going back to the definition of the dressed matter field, and working formally ($\simeq$), we consider the Ansatz \begin{equation}\label{eq:dress_ansatz} \hat \psi(x) \simeq e^{\phi[A](x)} \psi(x) \quad \text{for some}\quad \phi[A]\in\mathrm{Lie}({\mathcal{G}}).
\end{equation} From this Ansatz, by substitution into the requirement \eqref{eq:dressedPoisson}, we get the condition: \begin{equation} \frac{1}{\sqrt{g}}\frac{\delta}{\delta A} \phi = \{ E , \phi \} = - {\mathrm{D}} G, \end{equation} or, in full detail, \begin{equation} \frac{g_{ji}(y)}{\sqrt{g(y)}}\frac{\delta}{\delta A_i^\beta(y)} \phi^\alpha[A](x) = \{ E_j^\beta(y) , \phi^\alpha(x) \} = - \big({\mathrm{D}}_j G_{\alpha,x}\big){}^\beta(y), \label{eq:118} \end{equation} where we used $\Omega(\tfrac{\delta}{\delta A_i^\beta(y)}) = - \sqrt{g(y)} \,g^{ij}(y)\delta E_j^\beta(y)$ to identify the Hamiltonian vector field associated to $E_j^\beta(y)$ (notice that this expression is valid also in the presence of boundaries, without the need of bulk-supported smearings). Equation \eqref{eq:dressedPoisson} in the form of \eqref{eq:118} can then be formally solved through a line integral in configuration space ${\mathcal{A}}$, which we denote ${\int\kern-1.00em{\int}}^A$: \begin{equation} \phi[A]^\alpha(x) = - {\int\kern-1.00em{\int}}^A \int_R \d^Dy \sqrt{g(y)} \, g^{ij} \sum_\beta \big({\mathrm{D}}_i G_{\alpha,x}\big){}^\beta(y)\, {\mathbb{d}} A^\beta_j(y) \label{eq:Fdressing} \end{equation} Using a more compact notation, this can be written \begin{equation} \phi[A](x) = - {\int\kern-1.00em{\int}}^A \int_R \sqrt{g} \, \mathrm{Tr}\Big({\mathrm{D}}^i G_{\alpha,x}\, {\mathbb{d}} A_i \Big)\tau_\alpha. \end{equation} Integrating by parts, one obtains \begin{align} \phi[A](x) & = - {\int\kern-1.00em{\int}}^A \left( - \int_R \sqrt{g}\,\mathrm{Tr}\Big( G_{x,\alpha} {\mathrm{D}}^j{\mathbb{d}} A_j \Big)\tau_\alpha + \oint_{{\partial} R} \sqrt{h}\, \mathrm{Tr}\Big( G_{x,\alpha} {\mathbb{d}} A_s \Big)\tau_\alpha\right) \end{align} Now, to be able to use Green's theorem and simplify this expression, it is natural to introduce \begin{equation} \varpi \in \Omega^1({\mathcal{A}}, \mathrm{Lie}({\mathcal{G}})) \end{equation} defined by \eqref{eq:varpi_def} \begin{equation} \begin{cases} {\mathrm{D}}^2 \varpi = {\mathrm{D}}^i {\mathbb{d}} A_i & \text{in } R,\\ {\mathrm{D}}_s \varpi = {\mathbb{d}} A_s & \text{at } {\partial} R. \end{cases} \end{equation} Hence, using this definition and Green's theorem \eqref{eq:Greenthm} for $\psi_1 = \varpi$ and $\psi_2 = G_{\alpha, x}$, we obtain the {\it formal} solution \begin{align} \phi[A](x) & = {\int\kern-1.00em{\int}}^A \varpi(x) \label{eq:120} \end{align} and $\hat \psi \simeq \exp\left({\int\kern-1.00em{\int}}^A \varpi \right)\psi$ is the formal general solution to the demands imposed by Dirac's dressing. This construction provides an independent motivation for the introduction of the SdW connection form $\varpi$---even though, at this level, its connection-form properties \eqref{eq:varpi_def}, and in particular its covariance property, are {\it not} manifest. But with hindsight knowledge of the connection-form nature of $\varpi$, we introduce the following expression, gauge-covariant even under field-{\it dependent} gauge transformations, involving a {\it field-space Wilson line}: we call this the dressing factor:\footnote{This is a 1-dimensional integral along a curve embedded in an infinite-dimensional space. It is the latter property that the double-struck face of the symbol $\mathbb{Pexp} {\int\kern-1.00em{\int}}$ is meant to emphasize. Cf. equation \eqref{eq:Wilson}, where a Wilson line in space---rather than in configuration space---is considered.
} \begin{equation} \hat \psi(x) = \underbrace{ \mathbb{Pexp}\left( {\int\kern-1.00em{\int}}^A \varpi(x) \right) }_{\text{=:dressing factor $e^{\phi[A]}$}}\psi(x). \label{eq:dressedquark} \end{equation} (see below and especially \cite[Sec. 9]{GomesHopfRiello} for details and crucial subtleties regarding the dressing factor's gauge-covariance and the associated choice of field-space path). In EM, the SdW connection is Abelian and flat (cf. theorem \ref{thm:EM}), i.e. $\varpi = {\mathbb{d}} \varsigma$ for \begin{equation} \begin{cases} \nabla^2 \varsigma = {\mathrm{D}}^i A_i & \text{in } R,\\ {\partial}_s \varsigma = A_s & \text{at } {\partial} R. \end{cases} \end{equation} Therefore, both the path ordering and the choice of path in ${\mathcal{A}}$ are inessential. In particular, one can choose the trivial configuration $A^\star=0$ as a starting point for the field-space line integral, so that the resulting expression (seemingly) depends only on the final configuration $A$. Indeed, using the fact that in EM the Green's function $G$ does not depend on $A$, we can perform the integral explicitly and readily find---cf. \eqref{eq:dressedquark}: \begin{equation} \hat \psi(x) = e^{\varsigma(x)} \psi(x) = e^{i \int_R \sqrt{g(y)}\, G(x,y){\partial}^i A_i(y) }\psi(x). \end{equation} This provides a generalization to finite and bounded regions of the Dirac dressing, which in $\mathbb R^{D=3}$ with isotropic boundary conditions at infinity reads (see \cite{Dirac:1955uv}): \begin{equation} \hat \psi(x) = e^{i \int \frac{\d^3 y}{4\pi} \frac{{\partial}^i A_i(y)}{|x-y|} }\psi(x). \end{equation} In the non-Abelian setting, we first proposed the expression \eqref{eq:dressedquark} (without reference to the derivation presented here) in \cite{GomesHopfRiello}. There, this formula was framed in relation to the work on dressings by Lavelle and McMullan \cite{Lavelle:1994rh, Lavelle:1995ty, bagan2000charges, bagan2000charges2}, and also to the Gribov-Zwanziger framework \cite{Zwanziger82, Zwanziger:1989mf} (see \cite{Vandersickel:2012tz} for a review and relation to confinement), and, finally, to the geometric approach to the quantum effective action by Vilkovisky and DeWitt \cite{Vilkovisky:1984st, vilkovisky1984gospel, Rebhan1987, DeWitt_Book, Pawlowski:2003sk, Branchina:2003ek}. In particular, in \cite{GomesHopfRiello}, we studied in detail the properties and limitations of \eqref{eq:dressedquark} and we related these limitations to certain obstructions appearing in the previous works \cite{Vilkovisky:1984st, vilkovisky1984gospel, Rebhan1987, DeWitt_Book, Zwanziger82, Zwanziger:1989mf, Lavelle:1995ty}. More specifically, we showed the following: the obstructions found previously come from the curvature of the connection form, which induces a path-dependence ambiguity in the dressing; this ambiguity can be fixed in a neighbourhood of a given reference configuration $A^\star$ (using field-space geodesics with respect to the so-called Vilkovisky connection \cite{Vilkovisky:1984st, vilkovisky1984gospel, DeWitt_Book}); and, nonetheless, all expressions will still depend on the (gauge-dependent) choice of the reference configuration $A^\star \in {\mathcal{A}}$ \cite{Pawlowski:2003sk,Branchina:2003ek}. Finally, note that global existence and uniqueness of the Vilkovisky geodesics from $A^\star=0$ to a generic $A$---a question related to the non-perturbative existence and uniqueness of the dressing factor---are expected to fail in view of the Gribov problem.
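As an aside, the explicit Abelian dressing above is simple enough to be checked numerically in a finite-dimensional caricature. The following Python sketch---our own lattice toy, with conventions (edge-based gauge potential, graph Laplacian) that are not the paper's---solves the lattice analogue of the boundary value problem for $\varsigma$ and verifies that the dressed field $\hat\psi = e^{i\varsigma}\psi$ is invariant under an arbitrary gauge transformation up to the constant (stabilizer) phase generated by the total charge, in line with the discussion of section \ref{sec:charges}.
\begin{verbatim}
import numpy as np

# A lattice-EM toy (ours): verify that the dressed field is gauge invariant
# up to the constant stabilizer phase. A lives on the edges of a grid graph
# over a bounded region R; the graph Laplacian L = d^T d automatically
# encodes the (Neumann) SdW boundary condition, since no flux can leave R.
rng = np.random.default_rng(0)
nx, ny = 6, 5
nodes = [(i, j) for i in range(nx) for j in range(ny)]
idx = {n: k for k, n in enumerate(nodes)}
edges = [(idx[(i, j)], idx[(i + 1, j)]) for i in range(nx - 1) for j in range(ny)] \
      + [(idx[(i, j)], idx[(i, j + 1)]) for i in range(nx) for j in range(ny - 1)]
N, E = len(nodes), len(edges)

d = np.zeros((E, N))               # lattice gradient: (d xi)_e = xi_b - xi_a
for e, (a, b) in enumerate(edges):
    d[e, a], d[e, b] = -1.0, 1.0

def dressing_phase(A):
    # lattice analogue of the boundary value problem for varsigma:
    # solve (d^T d) varsigma = d^T A, then fix the constant zero mode
    s = np.linalg.lstsq(d.T @ d, d.T @ A, rcond=None)[0]
    return s - s.mean()

A = rng.normal(size=E)                                # gauge potential
psi = rng.normal(size=N) + 1j * rng.normal(size=N)    # charged matter field
xi = rng.normal(size=N)                               # arbitrary gauge parameter

dressed = np.exp(1j * dressing_phase(A)) * psi
gauged = np.exp(1j * dressing_phase(A + d @ xi)) * (np.exp(-1j * xi) * psi)

# invariance up to the CONSTANT phase generated by the total charge:
print(np.allclose(gauged, np.exp(-1j * xi.mean()) * dressed))   # True
\end{verbatim}
The residual constant phase is precisely the lattice counterpart of the stabilizer transformation discussed above: it survives because the Neumann problem defines $\varsigma$ only up to a constant.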
We refrain from further dissecting the topics listed above (path dependence, Vilkovisky geodesics, the Gribov problem), and refer to \cite[Sec. 9]{GomesHopfRiello} for a thorough discussion. Instead, we limit ourselves to the following observations: although the notion of a full-blown nonperturbative dressing is not viable in YM due to the involved geometry of ${\mathcal{A}}$, an infinitesimal version thereof is precisely provided by the SdW horizontal differential. Indeed, {\it formally}, the total differential of the (gauge invariant) dressed matter field, ${\mathbb{d}} \hat \psi$, is directly related to the SdW horizontal differential of the bare matter field, modulo the dressing factor:\footnote{Once again this can be made precise in the Abelian case, where $\varpi = {\mathbb{d}} \varsigma$ and the following is an actual, i.e. not merely formal, equality (formally, $\phi = \varsigma - \varsigma_\star$---in the following we set $\varsigma_\star=0$): $${\mathbb{d}} \hat \psi = e^\varsigma ({\mathbb{d}} \psi + {\mathbb{d}} \varsigma \psi) = e^\varsigma {\mathbb{d}}_\perp \psi.$$} \begin{equation} {\mathbb{d}} \hat \psi \simeq e^\phi ({\mathbb{d}} \psi + \varpi \psi) = e^\phi {\mathbb{d}}_\perp \psi. \end{equation} Since the SdW horizontal differential has a natural place in any Abelian as well as non-Abelian YM theory, it follows that, in this sense, {\it the SdW horizontal differential constitutes the closest analogue to the Dirac dressing that generalizes to the non-Abelian YM theory}. In particular, the discussion of symmetry charges of section \ref{sec:charges} shows that the dressed fields (or better, their differentials) do carry charges despite being fully gauge invariant (resp. horizontal covariant) objects. Although uncharged, the photon can also be made gauge invariant by dressing it with the same dressing factor. Not surprisingly, this gives rise to the transverse photon. In the non-Abelian setting, a dressed, covariantly-transverse gluon can be defined with the same caveats as for the dressed quark and with a completely analogous relation to the horizontal differentials. We conclude by directing the reader to \cite{francoisthesis, Francois_review} for a more algebraic take on dressings and their consequences, e.g. for the interpretation of spontaneous symmetry breaking, also known as the Higgs mechanism. In relation to the Higgs mechanism within our formalism, we refer to the field-space ``Higgs connection'' discussed in\footnote{See in particular {\it ibid.} point (\textit{x}) of Sec. 9.2.} \cite[Sec. 7 and 9]{GomesHopfRiello}. \section{Gluing \label{sec:gluing}} So far we have analyzed the definition of quasilocal degrees of freedom and charges within a given region with boundaries. The goal of this section is to study how these notions behave with regard to the composition, or gluing, of regions. The first result of this section concerns the gluing of field configurations or, more precisely, of horizontal field-space vectors---i.e. either horizontal perturbations of $A$ or radiative electric fields. The second result builds on the first and concerns the gluing of the symplectic structures. In other words, we first show that---with knowledge of the choice of connection---two horizontal field configurations can be glued {\it un}ambiguously. Then, in section \ref{sec:gluing_symp_pot}, we apply this result to the gluing of the symplectic structure. We will show that the horizontal symplectic structure does {\it not} factorize, even though the total symplectic structure can be unambiguously reconstructed from the regional ones.
In doing so we will precisely identify {\it what} the new dof emerging upon gluing are. These results apply, strictly speaking, only when considering the gluing of (trivial bundles over) simply connected regions into a larger (trivial bundle over a) simply connected region. That is, we neglect topological effects, such as the emergence upon gluing of new Aharonov--Bohm dof. This possibility is briefly discussed in the simplest possible context in section \ref{sec:topology}. Nonetheless, despite the simplified context, this result is nontrivial: new dof {\it do} emerge upon gluing, and can be {\it uniquely} identified. The fact that new dof emerge upon gluing will not surprise those whose intuition is built through lattice considerations. However, the fact that these dof can be {\it uniquely} identified, i.e. that no gauge ``slippage'' is allowed at the interface, might defy their intuition. For this reason, we will sidestep the problem of providing a thorough set of conditions for the existence part of the (smooth) gluing problem.\footnote{A thorough discussion of this problem should be set up along the following lines: an appropriate functional space (e.g. the space of smooth functions, or a certain Sobolev space) must be chosen for the configurations $A\in{\mathcal{A}}$ and their fluctuations $\mathbb X\in\mathrm T{\mathcal{A}}$ (more generally, analogous choices must be made also for the electric and matter fields). Once this space is given, the existence within the same functional space (restricted over $R^\pm$) of the regional horizontal projections $h^\pm$ according to the SdW boundary value problem must be checked. Finally, conditions on the regional horizontal projections must be provided so that the resulting glued, global, and horizontal fluctuation $H=H(h^+,h^-)$ also belongs to the originally chosen functional space. If the functional space of choice is the space of smooth functions, $C^\infty$, then the main difficulty is ensuring that the glued, global, and horizontal fluctuation $H$ is smooth across $S$. Cf. the next section for the notation.} In this topologically trivial context we will also discuss how the presence of matter fields (on top of regionally-reducible configurations) can introduce ambiguities in the gluing. We relate these ambiguities to 't Hooft's beam splitter thought experiment and to the concept of Direct Empirical Significance for gauge symmetries (DES). \subsection{Mathematical statement of gluing}\label{sec:gluingthm} In the following, we will first state and then prove the theorem at the root of all our results on the composition of both the electric field $E$ and the perturbations $\mathbb X\in\mathrm T{\mathcal{A}}$ of the gauge potential $A$. Here we are not interested in global, topological dof, and will focus on a boundary-less, self-contained model of the universe as a whole. Therefore, we consider for simplicity a global region $\Sigma \cong S^D$, ${\partial} \Sigma =\emptyset$, which is split into two hemispheres, $R^\pm \cong \mathbb B^D$ (the $D$-dimensional ball), such that $R^+\cap R^-=S $ coincides with the equator up to orientations, $S = \pm {\partial} R^\pm \cong S^{D-1}$. That is, \begin{equation} \Sigma = R^+ \cup_S R^-. \end{equation} The two hemispheres serve also as charts over $\Sigma$. Since we are not interested in studying topological dof, we consider the transition function for the gluing of the $A$'s across $S$ to be fixed.
For simplicity we will fix this transition function to be the identity, which will allow us to introduce quantities corresponding to global electric fields and global perturbations of the gauge potentials. At the end of section \ref{sec:GGTproof}, we will comment on the extension to the case of ${\partial} \Sigma \neq \emptyset$. To formally encode the separation of regions, we introduce $\Theta_\pm$ as the characteristic functions of ${R^\pm}$. Denoting by $s_i$ the outgoing co-normal at $S$ with respect to the region ${R^+}$, one has \begin{equation} {\partial}_i \Theta_\pm =\mp s_i \delta_S \label{eq:dTheta} \end{equation} where $\delta_S$ is a $(D-1)$-dimensional delta function supported on the interface $S$. Having assumed a trivial bundle over $\Sigma$, a generic Lie-algebra-valued vector in $\mathrm T {\mathcal{A}}$ can be written as $\mathbb Y = \int_\Sigma Y \frac{\delta}{\delta A} \in \mathrm { T \mathcal{A}}$. It is useful to introduce the following notation for the regional decomposition of $\mathbb Y$ supported on $\Sigma$: \begin{equation} \mathbb Y = \mathbb Y^+ \oplus \mathbb Y^-, \label{eq:heavi_dec} \end{equation} where $Y^\pm := Y \Theta_\pm$ and $\mathbb Y^\pm = \int_{R^\pm} Y^\pm \frac{\delta}{\delta A}$. With this notation, we can state the following (see figure \ref{fig4}): \begin{Thm}[General Gluing Theorem]\label{thm:GGT} Given $\Sigma = R^+ \cup_S R^-$ as above, and given $\mathbb Y = \mathbb Y^+ \oplus \mathbb Y^-$ as above; consider three field-space connections $(\varpi,\varpi_\pm)$, associated to $\Sigma$ and $R^\pm$ respectively, defining the three horizontal/vertical decompositions \begin{equation} Y = H + {\mathrm{D}} \Lambda \qquad\text{and}\qquad Y^\pm = h^\pm + {\mathrm{D}} \lambda^\pm, \label{eq:regs_Y_comps} \end{equation} where $\Lambda := \varpi(\mathbb Y)$ and $\lambda^\pm := \varpi_\pm(\mathbb Y^\pm)$, and $\varpi(\mathbb H) = 0 = \varpi_\pm (\mathbb h^\pm)$. Then, formally, $H$ is uniquely determined by $h^\pm$. \end{Thm} \begin{figure}[t] \begin{center} \includegraphics[width=6cm]{fig_new_6} \caption{ The two subregions of $\Sigma$, i.e. $R^\pm$, with the respective horizontal perturbations $\mathbb h^\pm$ on each side, along with the separating surface $S$. } \label{fig4} \end{center} \end{figure} Notice that $h^\pm \neq H \Theta_\pm$ and $\lambda^\pm \neq \Lambda \Theta_\pm$, i.e. that {\it regional restrictions and horizontal projections fail to commute}. This is a consequence of the nonlocality of the functional connections $(\varpi, \varpi_\pm)$ and the reason why the Gluing Theorem is nontrivial. Under the hypotheses of the theorem, the ``commutator'' between these two operations is provided by the regional vertical adjustments $\xi^\pm := \lambda^\pm - \Lambda \Theta_\pm$: \begin{equation} H(h^+,h^-) = ( h^+ + {\mathrm{D}} \xi^+)\Theta_++( h^- + {\mathrm{D}} \xi^-)\Theta_- . \label{eq:reconstructed_H} \end{equation} The precise form of these vertical adjustments depends on the specific choice of connections $(\varpi,\varpi_\pm)$. If all three connections are SdW connections, then the $\xi^\pm$'s can be determined through explicit formulas.
Indeed, as it turns out, the General Gluing Theorem will be proven in the following section as an immediate consequence of the analogous statement for the SdW decompositions, the SdW Gluing Lemma: \begin{Lemma}[SdW Gluing Lemma]\label{Lem:SdWGL} Consider the premises of the General Gluing Theorem \ref{thm:GGT} in the case where $(\varpi,\varpi_\pm)$ are all SdW connections, so that \begin{equation} {\mathrm{D}}^i H_i = 0 \qquad\text{and}\qquad \begin{dcases} {\mathrm{D}}^i h^\pm_i=0 & \text{in } {R^\pm}\\ s^i h^\pm_i = 0 & \text{at }S \end{dcases} \qquad \text{(SdW)}. \label{eq:hor_conds} \end{equation} Assume also that $A$, $A^\pm := A\Theta_\pm$ and $^SA:=\iota_S^* A$ are irreducible as gauge potentials over $\Sigma$, $R^\pm$ and $S$ respectively. Then, if $H$ is $C^1$ across $S$, the vertical adjustments $\xi^\pm = \xi^\pm(h^+,h^-)\in\Omega^0(R^\pm,\mathrm{Lie}(G))$ of formula \eqref{eq:reconstructed_H} are uniquely determined by the regional SdW boundary value problems \begin{equation} \begin{cases} {\mathrm{D}}^2 \xi^\pm = 0 & \text{in }R^\pm\\ {\mathrm{D}}_s \xi^\pm = \Pi & \text{at }S \end{cases}\label{eq:gluing1} \qquad \text{(SdW)}, \end{equation} for $\Pi \in \Omega^0(S,\mathrm{Lie}(G))$ given by \begin{equation} \Pi = - \big( \mathcal R^{-1}_+ + \mathcal R^{-1}_-\big)^{-1} \mu \label{eq:RRPi} \end{equation} and $\mu\in \Omega^0(S,\mathrm{Lie}(G))$ determined by the following SdW boundary value problem \emph{intrinsic to the boundary} $S$ (${\partial} S = \emptyset$): \begin{equation} {{}^S{\D}}{}^2 \mu = {{}^S{\D}}{}^a \iota_S^*( h^+ - h^-)_a. \label{eq:mu} \end{equation} Here, ${{}^S{\D}}_a := (\iota_S^* {\mathrm{D}})_a$ is the covariant derivative intrinsic to $S$ and ${{}^S{\D}}{}^2 = h^{ab}\; {{}^S{\D}}_a\; {{}^S{\D}}_b$ is the covariant Laplace operator on $S$, with $h_{ab}=(\iota_S^* g)_{ab}$ the induced metric there. Finally, the operators $\mathcal R_\pm$ appearing in \eqref{eq:RRPi} are the `generalized Dirichlet-to-Neumann pseudo-differential operators' attached to $S$, but geometrically associated to each region (see the next section for the precise definitions of these operators, equation \eqref{eq:DTN}). \end{Lemma} Notice that {\it neither} the General Gluing Theorem {\it nor} the SdW Gluing Lemma will attempt an in-depth analysis of the conditions that, from smooth $h^\pm$, allow us to reconstruct a global $H$ which is smooth across the interface $S$. Indeed, as emphasized above, our main interest is to prove that the gluing procedure is---whenever meaningful---fully unambiguous (at irreducible configurations and modulo topological ambiguities). We make the following remark on the conditions of Lemma \ref{Lem:SdWGL} in the {\it non-Abelian case}: as emphasized by our discussion of global charges, section \ref{sec:charges_YM}, the generic, and physically most natural, condition is that of an irreducible \textit{bulk} configuration. Then, if the bulk configuration is irreducible, the requirement of irreducibility for $^SA=\iota_S^*A$ is also generic: a choice of $S$ such that $^SA$ is irreducible always exists, and perturbations in the position of $S$ preserve this property. In the Abelian case, on the other hand, all configurations are reducible by constant ``gauge transformations.'' This reducibility moreover carries physical significance, since it is---in all cases---related to the total charge contained in the region.
In section \ref{sec:gluing_matter} we will explore the physical meaning of a {\it bulk} stabilizer ambiguity due to a reducible bulk configuration $A$ in the presence of matter. \paragraph*{Road map of the proof} Let us chart a road map of the proof of the two propositions above, which will be given in greater detail in the next section. The proof consists of four steps. The first three focus on reconstructing the $\xi^\pm$ in the SdW case; the last one bootstraps the general case from the SdW case: \begin{enumerate} \item First, assuming that a global $H$ which is at least $C^1$ at $S$ exists, we will deduce restrictions on the difference $ h_+- h_-$ at $S$. Combined with the horizontality of the regional $h^\pm$, the requirement of smoothness gives us conditions on the longitudinal and transverse derivatives of $(\xi^+-\xi^-)$ at the boundary: \begin{enumerate} \item In the absence of boundary stabilizers, the longitudinal condition allows us to solve for the difference \begin{equation} \mu := - (\xi^+-\xi^-)_{|S}, \end{equation} in terms of the interface \emph{mismatch} $( h^+ - h^-)_{|S}$, which is parallel to the boundary due to \eqref{eq:hor_conds}. This leads to equation \eqref{eq:mu}. \item The transverse condition states the equality of the derivatives normal to the boundary, allowing us to introduce \begin{equation} \Pi:= {\mathrm{D}}_s \xi^+{}_{|S} = {\mathrm{D}}_s \xi^-{}_{|S} \qquad \text{(SdW)}. \label{eq:Pi} \end{equation} \end{enumerate} \item Second, the SdW horizontality of the global $H$ provides us with one extra condition on the bulk part of the $\xi^\pm$'s, stating that the $\xi^\pm$ must be (covariantly) harmonic in their own regional domains. Together with \eqref{eq:Pi}, this leads to the SdW boundary value problem for the $\xi^\pm$'s \eqref{eq:gluing1}. \item Third, we show that an equation fixing $\Pi$ in terms of $\mu$ exists \eqref{eq:RRPi}, thus allowing us to prove that the information above suffices to uniquely and fully reconstruct the $\xi^\pm$ in terms of $ h^+$ and $h^-$. The relationship between $\Pi$ and $\mu$ can be loosely understood as a conversion of Dirichlet ($\mu$) to Neumann ($\Pi$) boundary conditions for the $\xi^\pm$'s entering the SdW boundary value problem \eqref{eq:gluing1}. This is why \eqref{eq:RRPi} involves a combination of generalized Dirichlet-to-Neumann pseudo-differential operators $\mathcal R_\pm$ attached to the boundary (but geometrically associated to each region). The details on the nature of these operators are postponed to the next section. \item Finally, we show that the proof of the general case can always be reduced to the proof of the SdW case, thus showing that the General Gluing Theorem follows from the SdW Gluing Lemma. \end{enumerate} The next two sections are devoted to the proof (and explanation) of the above formulas. \subsubsection{Formal proof of the SdW Gluing Lemma \ref{Lem:SdWGL}} \begin{proof} Assuming that a global $H$ which is $C^1$ across $S$ exists, from \eqref{eq:reconstructed_H} we deduce that the following relation must hold at the interface: \begin{equation} (h_i^+-h_i^-){}_{|S} = - {\mathrm{D}}_i (\xi^+ - \xi^-) {}_{|S}. \label{eq:continuity} \end{equation} This equation not only imposes a series of conditions on our unknown $\xi^\pm$, but also demands the interface mismatch of our variables $h_i^\pm$ to be of a pure-gradient form. This condition is restrictive and will be discussed in more detail in section \ref{sec:int_h}.
For now, we take it for granted and focus on the consequences of this equation for $\xi^\pm$. We now decompose \eqref{eq:continuity} into its transverse and longitudinal components with respect to $S$. Since the component of $ h^\pm$ transverse to $S$ vanishes because of regional horizontality \eqref{eq:hor_conds}, contracting \eqref{eq:continuity} with $s^i$ we obtain that the normal derivatives of $\xi^\pm$ at $S$ must match: \begin{equation} {\mathrm{D}}_s(\xi^+ - \xi^-)_{|S} = 0. \label{eq:normalmatchingxi} \end{equation} Then, taking the boundary divergence of the pullback of \eqref{eq:continuity} to $S$ (i.e. effectively contracting its pullback with ${{}^S{\D}}{}^a$) we find that the difference \begin{equation}\label{def:mu} \mu:= - (\xi^+-\xi^-)_{|S} \end{equation} is solely determined, in the absence of stabilizers $\chi_S$ of $^SA = \iota_S^* A$, by the mismatch of the two horizontals at the boundary, according to the following SdW boundary value problem \emph{intrinsic to the boundary} $S$ (${\partial} S = \emptyset$): \begin{equation} {{}^S{\D}}{}^2\mu = {{}^S{\D}}{}^a \iota_S^*( h^+ - h^- )_a. \label{eq:Shoriz} \end{equation} Assuming that $^SA = \iota_S^* A$ is boundary-{\it irreducible}, this equation has a unique solution, and this concludes step 1 of the proof outlined above. Now we move to step 2: assuming that the global region $\Sigma$ has no boundaries, by smearing the global horizontality condition, ${\mathrm{D}}^{i} H_{i}=0$, with $ H$ given by \eqref{eq:reconstructed_H}, we obtain: \begin{equation} \int_\Sigma \,\mathrm{Tr}\Big[\sigma\, {\mathrm{D}}^{i}\Big(( h_{i}^+ + {\mathrm{D}}_{i}\xi^+)\Theta_++( h_{i}^-+{\mathrm{D}}_{i}\xi^-)\Theta_-\Big)\Big]=0 \end{equation} for any smooth test function $\sigma\in C^\infty(\Sigma,\mathrm{Lie}(G))$. Now, thanks to the identity ${\partial}_{i}\Theta_\pm=\mp s_{i}\delta_S$ \eqref{eq:dTheta} and to the regional horizontality conditions ${\mathrm{D}}^{i} h_{i}^\pm=0=s^{i} h_{i}^\pm{}_{|S}$ \eqref{eq:hor_conds}, we get: \begin{equation} \int_{R^+} \mathrm{Tr}\Big( \sigma\,{\mathrm{D}}^2\xi^+ \Big) + \int_{R^-}\mathrm{Tr}\Big( \sigma\,{\mathrm{D}}^2\xi^-\Big) - \oint_S \mathrm{Tr}\Big(\sigma\, s^{i}{\mathrm{D}}_{i}\left(\xi^+-\xi^- \right)\Big)=0, \end{equation} where the last term above already vanishes due to \eqref{eq:normalmatchingxi}. From the arbitrariness of $\sigma$, we obtain the last, bulk condition mentioned in step 2 of the outline above. We thus deduce that, if a global $H\in C^1(\Sigma, {\mathrm{Lie}(\G)})$ exists, then the $\xi^\pm$ must satisfy the following elliptic boundary value problem \begin{equation}\label{lambda} \begin{dcases} {\mathrm{D}}^2 \xi^\pm = 0 &\text{in }R^\pm\\ s^i{\mathrm{D}}_i\left(\xi^+-\xi^- \right)=0 &\text{at }S\\ (\xi^+-\xi^-) = -\mu(h^+,h^-) & \text{at }S \end{dcases} \end{equation} where $\mu(h^+,h^-)$ is the unique solution to \eqref{eq:Shoriz}. This concludes step 2. Now we must use the appropriate PDE tools to show that this boundary value problem determines $\xi^\pm$ in terms of the regional horizontal perturbations $\mathbb h^\pm$. This is step 3. For step 3, we proceed as follows: start by setting \begin{equation} \Pi := s^i({\mathrm{D}}_i \xi^\pm){}_{|S}, \label{eq:Pidef} \end{equation} from the second equation in \eqref{lambda}. Note that, in possession of $\Pi$, we can determine $\xi^\pm$ by solving the boundary value problem given by \eqref{eq:Pidef} and the first equation of \eqref{lambda}.
Then step 3 can be reformulated as the problem of fixing $\Pi$ in terms of $\mu(h^+,h^-)$. Notice that, given $\Pi$, $\xi^\pm$ are uniquely determined up to stabilizers, i.e. up to elements $\chi^\pm\in C^\infty(R^\pm,\mathrm{Lie}(G))$ such that ${\mathrm{D}}_i\chi^\pm=0$, which are nontrivial only at reducible configurations. In the topologically simple case that we have analyzed so far, this is the only ambiguity present in the determination of $\xi^\pm$. We postpone the discussion of reducible configurations and of nontrivial topologies until sections \ref{sec:gluing_matter} and \ref{sec:topology}, respectively. Now, to determine $\Pi$, we introduce generalized \textit{Dirichlet-to-Neumann operators} $\mathcal{R}_\pm$ (see e.g. \cite{BFK} and references therein). In each region, such operators map Dirichlet conditions for a (gauge-covariantly) harmonic function to the corresponding (gauge-covariant) Neumann conditions. In brief, for a given bounded region, $\mathcal R$ functions as follows: a given harmonic function with Dirichlet conditions---these conditions are the input of $\mathcal R$---will possess a certain normal derivative at the boundary; i.e. it will induce certain Neumann conditions there---these conditions are the output of $\mathcal R$. But let us be more explicit. In general, for a manifold with boundary $S$ and outgoing normal $s^i$, we define the Dirichlet-to-Neumann operator $\mathcal R\in\mathrm{Aut}(\Omega^0(S, {\mathrm{Lie}(\G)}))$ by \begin{equation} \label{eq:DTN} \mathcal R(u):=s^i{\mathrm{D}}_i (\zeta_{u})_{|S} \end{equation} where $\zeta_u$ is the unique (gauge-covariantly) harmonic Lie-algebra-valued function defined by the elliptic Dirichlet boundary value problem: ${\mathrm{D}}^2 \zeta_{u}=0$ with $(\zeta_{u})_{|S } = u$. Notice that the subscript $u$ encodes the {\it Dirichlet} boundary condition employed. Using superscripts to denote (gauge-covariant) Neumann boundary conditions, we would have by definition $\zeta^{\mathcal{R}(u)} \equiv \zeta_u$. Moreover, since (in the absence of stabilizers) the corresponding Neumann problems also have unique solutions, $\cal R$ is invertible, i.e. $\zeta^\Pi \equiv \zeta_{\mathcal R^{-1}(\Pi)}$. At irreducible configurations, we can thus define $\cal R_\pm$ associated to $R^\pm$ with boundaries ${\partial} R^\pm = S$, and their inverses $\mathcal{ R}_\pm^{-1}$. Now, from \eqref{eq:Pidef} and the fact that the $\xi^\pm$ are themselves (gauge-covariantly) harmonic by the first equation of \eqref{lambda}, we have \begin{equation}\label{eq63} \xi^\pm= {^{(\pm)}\zeta}{}^{ \pm\Pi}\equiv{^{(\pm)}\zeta}{}_{\mathcal{R}_\pm^{-1}(\pm \Pi)} \end{equation} where the back-superscript $(\pm)$ indicates whether the respective covariantly harmonic functions ${}^{(\pm)}\zeta$ are defined over $R^+$ or $R^-$, respectively. We will now use the last equation of \eqref{lambda} to fix $\Pi$ uniquely. Once this is done, \eqref{eq63} contains all the information we sought for the gluing. Notice that there is a $\pm$ sign in the argument of $\mathcal{R}_\pm^{-1}$ in \eqref{eq63}. This sign is due to the fact that, at $S$, $s^i{\mathrm{D}}_i \xi^+ = s^i{\mathrm{D}}_i\xi^- $ but $s^i$ is the outgoing normal on one side and the ingoing normal on the other, so the conditions $s^i{\mathrm{D}}_i \xi^\pm = \Pi$ fix opposite Neumann conditions on the two sides. By the linearity of $\mathcal{R}$ we have \begin{equation} \mathcal{R}^{-1}_\pm({ \pm}\Pi) = {\pm} \mathcal{R}^{-1}_\pm(\Pi).
\label{eq:signimportant} \end{equation} Hence, since by definition $(\zeta_{u})_{|S } = u$, equations \eqref{eq63} and \eqref{eq:signimportant} give \begin{equation}\label{eq:xis_bdary} (\xi^+-\xi^-)_{|S}=\mathcal{R}^{-1}_+(\Pi)- \mathcal{R}^{-1}_-(-\Pi)=\Big(\mathcal{R}^{-1}_+ + \mathcal{R}^{-1}_-\Big)(\Pi). \end{equation} This gives us a relation between the (gauge-covariant) Neumann boundary condition $\Pi$ and the difference of the Dirichlet boundary conditions $\xi^\pm{}_{|S}$. This relation finally allows us to provide a formula that fixes $\Pi$ in terms of the boundary discrepancy of the regional horizontals $( h^+ - h^-)_{|S}$. That is, we insert \eqref{eq:xis_bdary} into the last of the equations \eqref{lambda} to obtain: \begin{equation} \Big( \mathcal R^{-1}_+ + \mathcal R^{-1}_-\Big)(\Pi) = - \mu(h^+,h^-) \label{eq:111} \end{equation} This is the equation that fixes $\Pi$ in terms of $ \mu(h^+,h^-)$. Since its solution is unique---as we will discuss in a moment---it also fixes $\xi^\pm$ uniquely through \eqref{eq63}, thus subsuming the entire set of equations \eqref{lambda}. This concludes step 3. For the uniqueness statement for $\Pi$ to be meaningful, it is important to check that the operator $(\mathcal{R}^{-1}_+ + \mathcal{R}^{-1}_-)$ is invertible. That this is (formally) the case follows from $\cal R_\pm$ being positive self-adjoint operators, and from the relative sign appearing on the left-hand side of \eqref{eq:111}---a consequence of the sign in \eqref{eq:signimportant}. To show that the generalized Dirichlet-to-Neumann operators $\cal R_\pm$ are self-adjoint and have positive spectrum, we proceed as follows. Consider again $\zeta_u\neq0$, the unique solution to the problem ${\mathrm{D}}^2 \zeta_u = 0$ in the bulk and $(\zeta_u)_{|S}=u$ at the boundary. Then, for any Lie-algebra valued functions $u, v$ on the boundary, one has \begin{align}\label{eq:self_adj} \int_{R^+}\sqrt{g}\, g^{ij}\mathrm{Tr}( {\mathrm{D}}_i \zeta_u {\mathrm{D}}_j \zeta_v ) & = - \int_{R^+}\sqrt{g}\, \mathrm{Tr}( \zeta_u {\mathrm{D}}^2 \zeta_v )+ \oint_{S}\sqrt{h}\, s^i \mathrm{Tr}( \zeta_u {\mathrm{D}}_i\zeta_v )\notag\\ & = \oint_{S}\sqrt{h}\, \mathrm{Tr} (u \mathcal R_+(v)) = \oint_{S}\sqrt{h}\, \mathrm{Tr} (\mathcal R_+(u) v) . \end{align} Notice that the first step in \eqref{eq:self_adj} follows from an integration by parts and properties of the commutator under the trace (i.e. from the ad-invariance of the Killing form).\footnote{The following identity is valid for any smearing $\sigma \in C^\infty(\Sigma,\mathrm{Lie}(G))$: $$ \mathrm{Tr}\Big( - \sigma\, {\partial}^i{\mathrm{D}}_i \zeta + g^{ij}[A_i,\sigma] {\mathrm{D}}_j \zeta \Big) = \mathrm{Tr}\Big( - \sigma\, {\partial}^i{\mathrm{D}}_i \zeta - g^{ij} \sigma [A_i, {\mathrm{D}}_j \zeta] \Big) = \mathrm{Tr}\Big( - \sigma\, {\mathrm{D}}^2 \zeta \Big) . $$} The last line of \eqref{eq:self_adj} proves the self-adjointness of $\cal R_+$ with respect to the natural inner product $\langle u,v\rangle_S = \oint_S \sqrt{h}\,\mathrm{Tr}(uv)$, while setting $u=v$ in \eqref{eq:self_adj} gives positivity: \begin{equation} \oint_{S} \sqrt{h}\,\mathrm{Tr} (u \mathcal R_+(u))\geq0. \end{equation} At {\it irreducible} configurations, the equality holds if and only if $\zeta_u=0$, and therefore if and only if $u=0$. Similar manipulations lead to the analogous conclusion for $\mathcal R_-$.
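The structure of $\mathcal R_\pm$ is easy to visualize in a discretized setting. The following Python sketch---our own finite-dimensional toy, not a construction from the text---realizes the Dirichlet-to-Neumann map \eqref{eq:DTN} of a flat (Abelian) graph Laplacian as a Schur complement, and verifies the three properties just established: symmetry, positivity, and the fact that the kernel consists exactly of the constant (stabilizer) mode, which is why irreducibility is needed for invertibility.
\begin{verbatim}
import numpy as np

# A discretized toy (ours) of the Dirichlet-to-Neumann operator: for the
# flat graph Laplacian of a grid, R arises as a Schur complement. We check
# symmetry, positivity, and that the kernel is the constant mode.
nx, ny = 7, 5
nodes = [(i, j) for i in range(nx) for j in range(ny)]
idx = {n: k for k, n in enumerate(nodes)}
edges = [(idx[(i, j)], idx[(i + 1, j)]) for i in range(nx - 1) for j in range(ny)] \
      + [(idx[(i, j)], idx[(i, j + 1)]) for i in range(nx) for j in range(ny - 1)]
N = len(nodes)
L = np.zeros((N, N))
for a, b in edges:
    L[a, a] += 1; L[b, b] += 1; L[a, b] -= 1; L[b, a] -= 1

S = [idx[(nx - 1, j)] for j in range(ny)]      # "boundary" nodes
intr = [k for k in range(N) if k not in S]     # "bulk" nodes
L_II, L_IS = L[np.ix_(intr, intr)], L[np.ix_(intr, S)]
L_SS, L_SI = L[np.ix_(S, S)], L[np.ix_(S, intr)]

# harmonic extension of Dirichlet data u, then read off the boundary flux:
#   zeta_I = -L_II^{-1} L_IS u,   R u = L_SS u + L_SI zeta_I
R = L_SS - L_SI @ np.linalg.solve(L_II, L_IS)

print(np.allclose(R, R.T))                       # self-adjoint
print(np.linalg.eigvalsh(R).min() > -1e-12)      # positive semi-definite
print(np.allclose(R @ np.ones(len(S)), 0.0))     # kernel = constant mode
\end{verbatim}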
We have thus shown that: {\it if} a global SdW horizontal $H\in C^1(\Sigma,{\mathrm{Lie}(\G)})$ exists, then it is uniquely determined by the SdW regional horizontals $h^\pm$ via \eqref{eq:reconstructed_H}, \eqref{eq:gluing1} and \eqref{eq:RRPi}. This concludes the proof of the SdW Gluing Lemma \ref{Lem:SdWGL}. \end{proof} \paragraph*{Summary} Here is what our gluing theorem means, in words. In a given region, say $R_+$, the vertical $\xi_+$ which translates between the global and regional horizontals, $H_{|R_+} = h_+ + {\mathrm{D}} \xi_+$, is defined as a harmonic function with Neumann boundary conditions (with respect to the \textit{covariant} differential operator ${\mathrm{D}}$). The Neumann conditions are implicitly defined by the difference of horizontals at the boundary; but since this difference would only give a Dirichlet boundary condition, one must apply the Dirichlet-to-Neumann operator $\mathcal R_+$. In sum: the $\xi_\pm$ are the unique harmonic functions with Neumann conditions defined by the difference of horizontals at the boundary, and each doublet of horizontals $(h_+, h_-)$ thereby identifies a unique global horizontal $H$ compatible with it. \\ In the next section we show that the SdW connection is just a crutch: the gluing theorem holds more generally. \subsubsection{Proof of the General Gluing Theorem \ref{thm:GGT}}\label{sec:GGTproof} The proof of the SdW Gluing Lemma of course relies on the particular choice of the SdW connection. But as long as there exists a 1-1 correspondence between the horizontal vectors of one connection and those of another, uniqueness will go through. To avoid confusion, in this section we denote the global and regional SdW connections as $(\varpi_{\text{SdW}}, \varpi_{\text{SdW}}^\pm)$ and the arbitrary connections of the statement of the General Gluing Theorem \ref{thm:GGT} as $(\varpi', \varpi'_\pm)$, so that the corresponding horizontal/vertical decompositions are also primed. Unprimed decompositions refer to the SdW connection. Let us emphasize once again that the three connections $(\varpi', \varpi'_\pm)$ can be completely unrelated: unlike $\varpi_{\text{SdW}}$ and $\varpi_{\text{SdW}}^\pm$, they might not all descend from the same geometric criterion. Now, according to the primed horizontal/vertical decomposition, equation \eqref{eq:heavi_dec} stays the same, whereas \eqref{eq:regs_Y_comps} and \eqref{eq:reconstructed_H} are rewritten with primes. E.g. \begin{equation}\label{eq:primed_reconstruct} H' = ( h'_+ + {\mathrm{D}} \xi'_+)\Theta_++( h'_- + {\mathrm{D}} \xi'_-)\Theta_- . \end{equation} Our goal is to show that, given $h'_\pm$, $H'$ is uniquely determined. We start by SdW-decomposing $h_\pm'$, thus obtaining: \begin{equation} h'_\pm = h_\pm + {\mathrm{D}} \lambda_\pm, \end{equation} where $\lambda_\pm := \varpi^\pm_{\text{SdW}}(\mathbb h'_\pm)$ and $\varpi_{\text{SdW}}^\pm(\mathbb h_\pm) = 0$. Now, from the SdW Gluing Lemma, we formally compute the unique SdW-horizontal $H$ such that \begin{equation} H = ( h_+ + {\mathrm{D}} \xi_+)\Theta_++( h_- + {\mathrm{D}} \xi_-)\Theta_-; \end{equation} here, $\varpi_{\text{SdW}}(\mathbb H)= 0$ and the $\xi_\pm=\xi_\pm(h_+,h_-)$ are given by the SdW Gluing Lemma. Now, decomposing $H$ according to $\varpi'$ we obtain: \begin{equation} H = H' + {\mathrm{D}} \Lambda' \end{equation} where $\Lambda' := \varpi'(\mathbb H)$.
Hence, combining all formulas, we find that the $\xi'_\pm$'s of equation \eqref{eq:primed_reconstruct} are given by: \begin{equation} \xi'_\pm = \xi_\pm - \lambda_\pm - \Lambda'\Theta_\pm. \end{equation} Therefore, if $H'$ is to be a horizontal field according to $\varpi'$, we can find unique vertical adjustments $\xi_\pm'$ entering \eqref{eq:primed_reconstruct}. This concludes the formal proof of the General Gluing Theorem.\\ Finally, let us comment on the role of the condition ${\partial} \Sigma = \emptyset$. This condition ensures that the only boundary of $R^\pm$ is the interface $S = R^+ \cap R^-$. Relaxing this condition by e.g. introducing ``radial gluing'' of spherical shells introduces only minor variations on the above construction. This is the case unless boundaries intersect at corners, as e.g. in the case of two topological balls glued to form a larger ball. This is most clearly highlighted from the perspective of the SdW Gluing Lemma, in which case one must require further (corner) boundary conditions for the equation determining the mismatch $\mu := - (\xi^+ - \xi^-){}_{|S}$. We will not attempt an analysis of this situation here beyond the preliminary observations offered at the end of the next section. \subsection{Continuity at $S$: towards a dimensional tower of conditions on $h^\pm$}\label{sec:int_h} Recall that, whereas the normal component of the continuity condition for $H$ \eqref{eq:continuity} is a condition on $(\xi^+ - \xi^-)_{|S}$ only, its component parallel to $S$ not only encodes a relation between $(\xi^+ - \xi^-)_{|S}$ and $(h^+ - h^-)_{|S}$, but also requires $(h^+ - h^-)_{|S}$ to be a pure gradient parallel to $S$. This is a necessary and sufficient condition on $h^\pm$ for there to exist a continuous global horizontal field $H$ corresponding to their composition. In this section we will discuss a more constructive procedure to understand this condition on $(h^+ - h^-)_{|S}$. This procedure can be iteratively applied to the ``boundaries of the boundaries'', opening a door to the discussion of more general gluing schemes involving corners. In a gauge theory, the space of the pullbacks to $S$ of the fields in ${\mathcal{A}}$ defines a new ``boundary configuration space'', $^S{\mathcal{A}}$, which is isomorphic to the space of gauge fields intrinsic to $S$: \begin{equation} {}^S A := \iota_S^* A \in {}^S{\mathcal{A}} . \end{equation} Moreover, the induced metric on $S$ defines a supermetric $^S\mathbb G$ on $^S{\mathcal{A}}$. From this, one can define an SdW connection $\varpi_{S}$ on $^S{\mathcal{A}}$ and hence, via pullback, on the ``phase space'' of boundary fields $\mathrm T^*({}^S{\mathcal{A}})$. Now, thanks to the second of the equations \eqref{eq:hor_conds}, i.e. $s^i h^\pm_i=0$, the difference between two {\it generic}\footnote{I.e. that do {\it not} have to necessarily satisfy the continuity condition \eqref{eq:continuity}.} horizontal perturbations $\mathbb h^\pm$ defines, without any loss of information, a vector field intrinsic to the boundary: \begin{equation} \label{eq:bbx} ^S\mathbb Y := \oint_S \iota^*_S( h^+ - h^-) \frac{\delta}{\delta({}^SA)}\in \mathrm T_{({}^SA)} (^S{\mathcal{A}}).
\end{equation} Now, the boundary field-space vector $^S\mathbb Y$ can be decomposed via $\varpi_{S}$ into its horizontal and vertical parts \textit{within} $^S{\mathcal{A}}$: \begin{equation} \label{dec_S} ^S\mathbb Y= \;{}^S \mathbb H+ \left({}^S\xi \right)^{\#_S}, \end{equation} where the $\cdot^{\#_S}$ operation is the $S$-intrinsic analogue of $\cdot^\#$. Given equations \eqref{eq:bbx} and \eqref{dec_S}, it becomes clear that the parallel component of the continuity condition for a fiducial boundary \eqref{eq:continuity}\footnote{Fiducial interfaces are interfaces at which no fixed boundary condition is imposed.} is equivalent to demanding that $^S \mathbb Y$ has no horizontal component, i.e. $^S\mathbb H=0$. Of course, in this case, the ${}^S\xi$ of \eqref{dec_S} is identified with the $(\xi^- - \xi^+)$ of \eqref{eq:continuity}. From these observations we conclude that {\it the parallel continuity condition is satisfied if and only if $^S\mathbb Y$ is purely vertical, that is if and only if $^S \mathbb Y = \big(\varpi_{S}({}^S\mathbb Y)\big)^{\#_S}$.} If this is the case, this last equation is only a more formal way to write \eqref{eq:continuity}, with $(\xi^+-\xi^-)_{|S}=-\varpi_{S}({}^S\mathbb Y)$ being a rewriting of \eqref{eq:Shoriz}. We conclude this section by observing that the parallel continuity condition bears an interesting possibility. Note that if $S$ itself had corners, i.e. if it were subdivided into regions $S^\pm$ sharing a boundary, we could have repeated the same treatment for two possible horizontal differences, $({}^Sh)^+ - ({}^Sh)^-$, themselves arising from the difference of horizontals in a manifold of one higher dimension, as expressed in \eqref{dec_S}. This chain of descent to the boundaries of boundaries might become useful in discussions of more complex gluing patterns involving corners; a necessary extension for building general manifolds from fundamental building blocks. Finally, we notice that this chain of descent is reminiscent of the BV-BFV formalism \cite{cattaneo2014classical, cattaneo2016bv, Mnev:2019ejh}, but we will leave an investigation of these matters to future work. \subsection{Gluing of gauge potential fluctuations\label{sec:glueA}} We are now ready to apply the above results to the gluing of the perturbations of the gauge potential $A$. We include matter in the next section, and apply the construction to the electric field in section \ref{sec:electric_gluing}. Therefore, we consider \begin{equation} \mathbb X = \int X \frac{\delta}{\delta A} \in \mathrm T_A {\mathcal{A}}, \end{equation} and decompose it, and its regional restrictions, into their SdW-horizontal and SdW-vertical components \begin{equation} X = H + {\mathrm{D}} \Lambda \quad\text{and}\quad X^\pm = h^\pm + {\mathrm{D}} \lambda^\pm. \end{equation} Physically, whereas $\Lambda$ and $\lambda^\pm$ encode the ``pure gauge'' components of $X$ in $\Sigma$ and $R^\pm$ respectively, $H$ and $h^\pm$ encode its physical components. Therefore, the gluing question can be rephrased as follows: given only the regional gauge invariant perturbations $h^\pm$, is the global gauge invariant perturbation\footnote{Notice that the theorem involves the perturbations of $A$ (elements of $\mathrm T{\mathcal{A}}$) over a globally smooth, fixed, background configuration $A$. } $H$ {\it uniquely} reconstructed, provided it can be reconstructed at all?
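Before turning to the answer, it may be instructive to watch the mechanism at work in a finite-dimensional caricature. The following Python sketch---our own toy model, with lattice conventions that are not the paper's---plays the gluing game in the Abelian case on an octahedron graph (a discrete ``sphere'' split along its equator into two disks): it computes the global and regional lattice analogues of the SdW horizontals of a random perturbation, and then reconstructs the global horizontal from the regional ones alone, imposing only agreement on the shared equator edges together with global divergence-freeness.
\begin{verbatim}
import numpy as np

# A toy gluing check (ours), Abelian case: Sigma = octahedron graph (a
# discrete "sphere"), split along the equator S into a north disk R+ and a
# south disk R-, which share the four equator edges.
rng = np.random.default_rng(0)
polarN  = [(0, i) for i in (1, 2, 3, 4)]
polarS  = [(5, i) for i in (1, 2, 3, 4)]
equator = [(1, 2), (2, 3), (3, 4), (4, 1)]
E_all, E_P, E_M = polarN + polarS + equator, polarN + equator, polarS + equator

def incidence(es, n=6):
    d = np.zeros((len(es), n))
    for e, (a, b) in enumerate(es):
        d[e, a], d[e, b] = -1.0, 1.0
    return d                     # lattice gradient: (d xi)_e = xi_b - xi_a

def horizontal(es, X):
    # lattice SdW projection: solve L lam = div X, then subtract d lam
    d = incidence(es)
    lam = np.linalg.lstsq(d.T @ d, d.T @ X, rcond=None)[0]
    return X - d @ lam

restr = lambda es, X: np.array([X[E_all.index(e)] for e in es])
X  = rng.normal(size=len(E_all))            # a global perturbation of A
H  = horizontal(E_all, X)                   # global horizontal
hP = horizontal(E_P, restr(E_P, X))         # regional horizontals
hM = horizontal(E_M, restr(E_M, X))

# Reconstruct H from (hP, hM) ALONE. Unknowns u = (xiP, xiM) in R^12;
# constraints: agreement on the shared equator edges + global div-freeness.
dP, dM, dA = incidence(E_P), incidence(E_M), incidence(E_all)
rP = {e: k for k, e in enumerate(E_P)}
rM = {e: k for k, e in enumerate(E_M)}
c0, B = np.zeros(len(E_all)), np.zeros((len(E_all), 12))
for r, e in enumerate(E_all):               # glued field = c0 + B u
    if e in rP:
        c0[r], B[r, :6] = hP[rP[e]], dP[rP[e]]
    else:
        c0[r], B[r, 6:] = hM[rM[e]], dM[rM[e]]
Meq = np.zeros((len(equator), 12)); beq = np.zeros(len(equator))
for r, e in enumerate(equator):             # equator agreement
    Meq[r, :6], Meq[r, 6:] = dP[rP[e]], -dM[rM[e]]
    beq[r] = hM[rM[e]] - hP[rP[e]]
Msys = np.vstack([Meq, dA.T @ B])
bsys = np.concatenate([beq, -dA.T @ c0])
u = np.linalg.lstsq(Msys, bsys, rcond=None)[0]

print(np.allclose(c0 + B @ u, H))           # True: H is uniquely recovered
\end{verbatim}
Note that the toy exploits the simply connected (``disk-plus-disk'') splitting: on a lattice torus, the same reconstruction would acquire precisely the Aharonov--Bohm ambiguity excluded by the hypotheses of the theorem.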
The theorem of the previous sections states that---whenever possible---{\it the reconstruction of a continuous $H$ from $h^\pm$ is indeed unique}, and no additional information is needed to perform the gluing. In particular, the theorem provides an explicit formula \eqref{eq:RRPi} for the reconstruction of the gauge transformations $\xi^\pm$ that relate the regional and global horizontals according to \begin{equation} H = (h^+ + {\mathrm{D}} \xi^+)\Theta_+ + (h^- + {\mathrm{D}} \xi^-) \Theta_-, \end{equation} where the $\xi^\pm$ were fully determined in \eqref{eq:111} and \eqref{eq63}, i.e. by covariant Laplace equations with boundary conditions determined in terms of the mismatch $\iota_S^*(h^+ - h^-)$. However, the derivation assumed the mismatch $\iota_S^*(h^+ - h^-)$ to be a pure (gauge-covariant) gradient intrinsic to $S$. As explained in the previous section, whether this is the case can be checked by considering an SdW connection $\varpi_S$ intrinsic to $S$, and verifying whether $\iota_S^*(h^+ - h^-)$ is purely vertical with respect to $\varpi_S$. If this mismatch is not purely boundary-vertical, then there is a physical discontinuity in the magnetic flux across $S$, i.e. in $F_{ab}$ ($a,b$ being tangential indices over $S$).\footnote{More precisely, in a neighbourhood of $S$, the relation between the curvature and the perturbation is: $F_{ab}(A+X) - F_{ab}(A) = [F_{ab}(A),\Lambda]+{}^S{\mathrm{D}}_{[b}H_{a]}+\mathcal{O}(X^2)$, where the first term on the right-hand side is an inconsequential perturbation in the gauge (vertical) direction and the second is the physical perturbation. Thus, only if $ (h^+-h^-)_{a} = {\mathrm{D}}_{a}\Xi$ does $^S{\mathrm{D}}_{[b}(h^+-h^-)_{a]} = [F_{ab}, \Xi]$ feed into the gauge ambiguity; otherwise, a physical discontinuity in the parallel curvature will emerge. In this case, existence fails. But, once again, we do not aim here to give a complete characterization of existence.} With reference to EM, it is interesting to observe that such a discontinuity is {\it not} the consequence of a distributional surface current density on $S$, which would rather contribute a discontinuity in $s^iF_{ia}$ corresponding to the tangential magnetic field. Rather, it is the consequence of a distributional surface density of magnetic monopole charges. Indeed, in the same way a discontinuity in the electric flux across a surface is due to a nonvanishing surface density of electric charges, a discontinuity in the magnetic flux is due to a nonvanishing surface density of magnetic monopoles. But, postulating the configuration space of Yang-Mills theory to be fundamentally given by the space of smooth (or at least once-differentiable) connections ${\mathcal{A}}$, we are implicitly excluding this possibility from the outset: the algebraic validity of the Bianchi identities ${\mathrm{D}} F=0$ excludes the existence of magnetic monopoles\footnote{Notice that the discontinuity in the components $s^iF_{ia}$ of the magnetic field at $S$ induced by the presence of surface currents is more subtle from a gluing perspective, since it does not necessarily stem from a discontinuity of $h_i$ (it could also be due to a discontinuity in its normal derivative).
Given any vector field $u$ in a neighbourhood of $S$ that is tangent to $S$, and recalling that $h_s = 0$ by the horizontality condition, one has that the perturbation of $ F^\pm_{su} \equiv s^iu^jF^{\pm}_{ij}$ is given by $ s^i u^j {\mathrm{D}}_i h_j^\pm={\mathrm{D}}_s h^\pm_u - h^\pm_j (\pounds_u s)^j $.}---and thus guarantees that a physically allowed $H$ is continuous across $S$. \subsection{Gluing with matter: reducible configurations and charges\label{sec:gluing_matter}} In this section, we will briefly discuss the caveats of our gluing theorem due to reducibility. First, we briefly comment on the changes brought about by the presence of a reducibility condition on the boundary that does not extend into the region. In that case, $\mu$, defined in \eqref{def:mu} and solved for in \eqref{eq:Shoriz} in terms of the difference of boundary horizontals, is determined only up to the boundary stabilizers: $\mu\rightarrow \mu+ {}^S\chi$. This degeneracy propagates to the determination of $\Pi$, in \eqref{eq:111}---sending $\Pi\mapsto \Pi+\big( \mathcal R^{-1}_+ + \mathcal R^{-1}_-\big)^{-1}({}^S\chi)$---and thereby to the final solution of the $\xi^\pm$ in \eqref{lambda}. Thus the total solution $(\xi_+, \xi_-)$ acquires a physically significant degeneracy labeled by the boundary stabilizers. The degeneracy is physically significant since, for each choice of $h_\pm$ and $^S\chi$, we obtain a distinct global $H$. That is, we obtain an $H(^S\chi, h_\pm)$ that is not gauge-related to $H(^S\chi', h_\pm)$. The presence of a boundary stabilizer that is not extendible into the bulk is typical of asymptotic boundaries.\footnote{ As such, even at finite boundaries, it can possibly be interpreted as a (kinematical) {\it isolation} condition between two subsystems. With this interpretation, the above gluing ambiguity is perhaps less surprising: if two subsystems are properly isolated there could be more ways of gluing them together.} At finite boundaries, and in non-Abelian theories, this condition is only slightly ``less generic'' than the presence of a bulk stabilizer. This latter case is the one we will now focus on. It is most relevant in the Abelian theory, where such a bulk stabilizer is always present and there is no mismatch between bulk and boundary stabilizers. In vacuum, the difference due to $\chi$ will then have no effect on the physical states. In the presence of matter, gluing is more subtle. Let us see how this goes. First, some notation: let $\mathbb h^\pm = \mathbb h^\pm_A + \mathbb h^\pm_\psi$ and $\mathbb H = \mathbb H_A + \mathbb H_\psi$ be horizontals, which decompose according to \begin{equation} \begin{dcases} H_A = ( h_A^+ + {\mathrm{D}} \xi^+) \Theta_+ + ( h_A^- + {\mathrm{D}} \xi^-) \Theta_- \\~\\ H_\psi = ( h_\psi^+ - \xi^+\psi) \Theta_+ + ( h_\psi^- - \xi^-\psi) \Theta_- \end{dcases},\label{eq:gluing_Apsi} \end{equation} and e.g. \begin{equation} \mathbb H = \mathbb H_A \oplus \mathbb H_\psi = \int H_A \frac{\delta}{\delta A} + \int H_\psi \frac{\delta}{\delta \psi}. \label{eq:HApsinotation} \end{equation} As above, we are here implicitly using the SdW connection to assess horizontality. It is important to note that the matter horizontal components $\mathbb h^\pm_\psi$ are then, in a sense, parasitic on the gauge field: they are just the matter perturbations corrected by the vertical displacement provided by the gauge sector.
Namely, for a fermion field in the fundamental representation of ${\mathcal{G}}$ \cite{GomesHopfRiello}, \begin{equation}\label{eq:corotation} H_\psi= X_\psi-\varpi(\mathbb X_A)\, \psi, \end{equation} where $\mathbb X_\psi$ and $\mathbb X_A$ denote arbitrary (not necessarily horizontal) matter and gauge-potential perturbations respectively. In other words, $\mathbb H_\psi$ and $\mathbb h^\pm_\psi$ do not satisfy horizontality conditions of their own. In section \ref{sec:dressing}, we provided an interpretation of this in terms of Dirac dressings. Then, we see that $\mathbb H$ (and $\mathbb h^\pm$) is horizontal (regionally horizontal, respectively) if and only if $\mathbb H_A$ ($\mathbb h^\pm_A$, respectively) is. This means in particular that the above procedure aimed at the determination of $\xi^\pm$ is completely insensitive to the presence of matter, and can be applied in the same way. Now, all previous results on gluing go through seamlessly unless either one of the {\it regional} configurations of the gauge potential, i.e. $A^\pm= A{}_{|R^\pm}$, is reducible. On such configurations, a modification of the connection-form, $\tilde\varpi$, must be employed, and this comes with certain added difficulties and obstructions to the usual properties of $\varpi$---see sections \ref{sec:charges_EM} and \ref{sec:charges_YM}. For what concerns this section, the main point is that at a reducible configuration $\tilde A$ an ambiguity is present in defining a pure-gauge transformation $\xi^+$ from a fluctuation of $\tilde A$ (parallel to the given stratum). If, say, $A^+=\tilde A^+$ is reducible, then the resulting ambiguity in the reconstruction of $ \xi^+$ will have no effect on the reconstruction of the global horizontal gauge potential $H_A$, but it \textit{will} generically render the reconstruction of the horizontal matter field $H_\psi$ ambiguous. This is always the case in QED: we can add constants $\chi_\text{EM}^\pm$ to the reconstructed $\xi^\pm$, and a constant phase shift will affect the Dirac fermions, unless the latter vanish. In a non-Abelian theory, the zoology of the solution is more complicated, and will depend on the gauge group as well as the type of matter fields (fundamental, adjoint, etc). For definiteness and simplicity, we will henceforth suppose that only the regional configuration $A^+=\tilde A^+ $ is reducible by a single reducibility parameter, i.e. $\chi^+$ such that $\tilde{\mathrm{D}}\chi^+=0$, while $A^- $ is not reducible. The hypothesis that the stabilizer of $\tilde A^+$ is one-dimensional is quite strong, and it would be interesting to explore its relaxation (cf. the last paragraph of section \ref{sec:charges_YM}). Anyway, with these restrictions, we see that the solution $\xi^+$ to the gluing boundary value problem \eqref{lambda} is defined only up to the addition of terms proportional to $\chi^+$. That is, there is a continuous 1-parameter family of solutions for $\xi^+$ that we write, by choosing an arbitrary origin $\xi^+_o$ and introducing the parameter $r$ (depending on the charge group), as \begin{equation} \xi^+_r := \xi^+_o + r \chi^+ \qquad r \in \mathbb R \text{ or } \mathbb C. \end{equation} Then, two distinct possibilities are given: either $\psi$ vanishes at $S$ or it does not. The second case allows us to glue the two perturbations together if and only if we can find an $r$ such that \begin{equation}\label{eq:bdary_stabilizers} \xi^+_r \psi{}_{|S} = \xi^- \psi{}_{|S}.
\end{equation} With the continuity hypothesis for the original global field perturbation $\mathbb X = \mathbb X_A + \mathbb X_\psi$, this equation would then fix the global ambiguity, but for generic values of $\psi{}_{|S}$ no solution exists.\footnote{These compatibility requirements between $\chi^+$ and $\psi^+$ could be further formalized in terms of the kernel of the Higgs functional connection introduced in \cite{GomesHopfRiello}. However, the presence of distributional charged matter at $S$---as manifested over e.g. an idealized conducting plate---generally blocks the possibility of a smooth gluing \textit{of the electric field}, $E$, discussed in the following section.} If no solution exists, it means that the two perturbations are not glueable, i.e. they do not descend from a global smooth perturbation. Conversely, in the first case, which is realized if $\psi^+$ vanishes at $S$, the gluing of the two perturbations $\mathbb h^\pm$ is possible for any $r$ but will give rise to {\it distinct horizontal global perturbations}. These should be interpreted as physically distinct alternatives, thus leading---for the first time in our analysis so far---to an actual ambiguity in the gluing procedure. This ambiguity is due to the concomitant presence of a reducible gauge potential and of charged matter. To see how this comes about, we observe that in the presence of this stabilizer there exists a 1-parameter family of global horizontal perturbations $\mathbb H^r = \mathbb H_A^r \oplus \mathbb H_\psi^r$, one for each of the $\xi^+_r$, given by \begin{equation}\label{eq:Hpsi_r} H_A^r \equiv H^o_A \qquad\text{and}\qquad H_\psi^r = H^o_\psi + r \chi^+ \psi \,\Theta_+, \end{equation} where the same notation as in \eqref{eq:HApsinotation} was used. Now, two situations are possible: either $\chi^+$ stabilizes $\psi^+$ throughout $R^+$, or it does not. If $\psi^+$ is also stabilized,\footnote{For matter fields $\psi\neq0$ throughout $R^+$ which are in the fundamental representation, $G={\mathrm{SU}}(N\geq3)$ is needed; see \cite[Sec.7]{GomesHopfRiello}.} then uniqueness of the reconstructed global radiative mode is untouched: even if the regional gauge transformations $\xi^\pm$ are ambiguous, $\mathbb H$ of \eqref{eq:gluing_Apsi} will not be, since in this case $\mathbb H^r\equiv\mathbb H^o$. The generally quite restrictive condition of $\chi^+$ stabilizing $\psi^+$ trivially applies if matter is absent from $R^+$, in which case $\mathbb H=\mathbb H_A$ is clearly unaffected by $\chi^+$ such that $\tilde{\mathrm{D}}\chi^+=0$. If $\psi^+$ is {\it not} stabilized by $\chi^+$, on the other hand, the resulting $\mathbb H^r$ are indeed {\it distinct} from one another. This setup formalizes the beam splitter thought experiment devised by 't Hooft \cite{thooft} (see also \cite{Brading2004}), and can be used to provide a concrete example for the considerations of Greaves and Wallace, characterizing ``symmetries with direct empirical significance'' (DES) \cite{GreavesWallace}. \begin{figure}[t] \begin{center} \includegraphics[width=5.5cm]{fig_tHooft.png} \caption{We consider the gluing of two regions $R^+$ and $R^-$ containing two charged (point) particles $\psi^+$ and $\psi^-$ connected by a Wilson line $\gamma$. In $\Sigma$, the observable $W$ \eqref{eq:Wilson} is gauge invariant. Nonetheless, its gluing is ambiguous in the presence of a regional stabilizer \eqref{eq:deltaW}.
This ambiguity is closely related to 't Hooft's beam splitter thought experiment and highlights the fact that global phase transformations (which are part of the stabilizer) are physical transformations distinguished from pure gauge transformations. Our formalism allows us to make this distinction also within a finite and bounded region, i.e. quasi-locally.} \label{fig:tHooft} \end{center} \end{figure} As a proof of concept that the ensuing states are regionally indistinguishable but globally distinct, let us consider the following simplified scenario in the Abelian theory, closely related to 't Hooft's beam splitter (figure \ref{fig:tHooft}): let $R^\pm$ contain one charged particle each, located at $x^\pm\in R^\pm$, and thus $A^+$ admits a reducibility parameter $\chi^+ = \mathrm{const}$. Denoting the particles' spinorial configurations\footnote{The bra-ket notation is employed to ease the writing of the following formulae; it does not refer to any quantum treatment.} by $|\psi^\pm\rangle$, we then consider the following global {\it gauge-invariant} Wilson-line observable between two charged particles (with obvious notation): \begin{equation} W = \langle \psi^- | \mathrm{exp} \left(\int_\gamma A\right) | \psi^+ \rangle , \label{eq:Wilson} \end{equation} where $\gamma$ is some path connecting across $S$ the positions $x^\pm\in R^\pm$ of the charged particles $|\psi^\pm\rangle$. Now, if we ``unglue'' the two regions, perform the (infinitesimal) charge transformation $\chi^+$ in $R^+$ and glue back (which, as we saw above, is a seamless operation), we find the following: whereas $\mathrm{exp} \left(\int_\gamma A\right)$ and $|\psi^-\rangle$ have not changed at all, $|\psi^+\rangle$ has changed by the (infinitesimal) amount $\delta_{\chi^+} |\psi^+\rangle = - \chi^+ |\psi^+\rangle$; in turn this means that the global, gauge invariant, observable $W$ is able to distinguish the two global states, since generically \begin{equation} \delta_{\chi^+} W = - \langle \psi^- | \mathrm{exp} \left(\int_\gamma A\right) \chi^+ | \psi^+ \rangle \neq 0. \label{eq:deltaW} \end{equation} In sum: in the presence of matter, the Wilson line $W$ is a gauge-invariant functional that is sensitive to the ambiguity present in the gluing procedure, i.e. in the determination of the vertical adjustments $\xi^\pm(h^+,h^-)$. As per our gluing theorem, this ambiguity is in one-to-one correspondence with choices of regional stabilizer, $\chi_\pm$, and is only relevant in the presence of matter (since $\delta_{\chi^+} A^+ = {\partial}\chi^+ \equiv 0 $, but in general $\delta_{\chi^+}\psi^+ = -\chi^+ \psi^+ \neq 0$). Of course, this construction is strictly related to the ability to define a charge for $|\psi^+\rangle$ on the reducible background $A^+=\tilde A^+$, and is in line with the claim that {\it (regional) stabilizers must be attributed a different status than generic gauge transformations}, as discussed in section \ref{sec:charges} (see in particular the last two paragraphs of \ref{sec:charges_EM} before the Remark). \subsection{Gluing of the electric field\label{sec:electric_gluing}} We now turn our attention to the gluing of the electric field $E$. We start by recalling the representation of the electric field as a configuration-space vector $\mathbb E$: \begin{equation} \mathbb E = \int E_i \frac{\delta}{\delta A_i} \in \mathrm T_A{\mathcal{A}} \subset \Phi.
\end{equation} In this section, for ease of notation, we shall treat $E$ as a one-form, i.e.---consistently with the above---$E$ will stand for $E_i := g_{ij}E^j$. Thus the (components of the) global and regional SdW decompositions of $\mathbb E$ (see equation \eqref{eq:Erad} in Section \ref{sec:splitsympl}) are \begin{equation} E = E_{\text{rad}} + {\mathrm{D}} \varphi \quad\text{and }\quad E^\pm = E_{\text{rad}}^\pm + {\mathrm{D}} \varphi^\pm. \end{equation} We emphasize that, as was the case with $h^\pm$ and $\lambda^\pm$ in relation to $H$ and $\Lambda$ in \eqref{eq:hor_conds}, $\varphi^\pm$ is {\it not} the regional restriction\footnote{Having run out of symbols, we could not use the same capitalized vs. lower-case variables to indicate that relationship.} of $\varphi$ to $R^\pm$, and similarly $E_{\text{rad}}^\pm$ is {\it not} the regional restriction of $E_{\text{rad}}$ to $R^\pm$; instead, \begin{equation} \begin{cases} \varphi = ( \varphi^+ - \eta^+)\Theta_+ + (\varphi^- - \eta^- ) \Theta_- \\ E_{\text{rad}} = (E_{\text{rad}}^+ + {\mathrm{D}} \eta^+) \Theta_+ + (E_{\text{rad}}^- + {\mathrm{D}} \eta^-) \Theta_- \end{cases} \label{eq:El_glue} \end{equation} where, according to the theorem of section \ref{sec:gluingthm}, the $\eta^\pm$ are fully determined by the mismatch of $( E_{\text{rad}}^+ - E_{\text{rad}}^-)_{|S}$; the appropriate behaviour of $\varphi$ merely follows. Notice also that the electric flux $f$ through $S$ corresponds precisely to $\Pi=f$ of \eqref{eq:RRPi} in our main gluing theorem in section \ref{sec:gluingthm}. In the case of the electric field, we do not interpret the SdW vertical component $\varphi$ of $\mathbb E$ as a pure-gauge quantity, but as a Coulombic component of the electric field, while $E_{\text{rad}}$ is what we called its radiative component. Therefore, equation \eqref{eq:El_glue} states that the Coulombic/radiative split of the electric field depends on the choice of region in which the split is performed. To understand this phenomenon, it is particularly instructive to consider first the case without matter. We also recall that we have assumed, for simplicity, that the simply connected global region $\Sigma=R^+\cup_S R^-$ has no boundary, i.e. ${\partial}\Sigma=\emptyset$. From the above equations, it then follows that $\varphi\equiv 0$, and therefore, according to \eqref{eq:El_glue}, $ \varphi^\pm = \eta^\pm$. Since $\eta^\pm$ are entirely functions of $E_{\text{rad}}^\pm{}_{|S}$, it follows that all components of the global electric field are determined solely by its regional radiatives. Indeed, for a globally radiative electric field (i.e. no global boundary and no charges), $E=E_{\text{rad}}$ and $E{}_{|R^\pm}=E_{\text{rad}}^\pm+{\mathrm{D}}\eta^\pm$ with $\eta^\pm$ functionals of $(E_{\text{rad}}^+-E_{\text{rad}}^-){}_{|S}$ only.\footnote{Again, as already stressed, it is important to note that regional restriction and horizontal projection do not commute, thus e.g.: $E_{\text{rad}}^\pm\neq E_{\text{rad}}{}_{|R^\pm}$.} Thus, in this case, once \textit{both} regional radiatives are known, even the \textit{regional} Coulombic components are completely determined---including the electric flux $f$ through $S$, which is thus no longer an independent degree of freedom once the radiative modes are accessible in {\it both} regions.
Thus, in this case---when the larger (glued) region $\Sigma$ has no boundary---the \textit{regional radiative modes encode the totality of the dof in the joint system.} In particular, the conclusion reached in section \ref{sec:QLSymplRed} from a regional viewpoint, that $f$ through $S$ must be superselected, is---as expected---a mere artifact of excluding observables in the complement of that region. The addition of charged matter does not change this conclusion. In sum, once the radiative modes are given in both regions, the role of the flux $f$ at $S$---i.e. to regionally fix $\varphi^\pm$---is taken over by $(E_{\text{rad}}^+-E_{\text{rad}}^-){}_{|S}$. Thus $f$---which is often claimed to embody the ``new boundary degrees of freedom'' \cite{AronWill} or their momenta \cite{DonnellyFreidel}---also constitutes a piece of redundant information for the final result of the gluing. Heuristically, we could say that $f$ only shows up when encoding one subregion's ignorance of the other, i.e. when we do not have access to both radiatives, $E_{\text{rad}}^\pm$. Explicitly, playing the role of $\Pi$ in the theorem of section \ref{sec:gluingthm}, the flux is given by \begin{equation} f =\Big( \mathcal R^{-1}_+ + \mathcal R^{-1}_-\Big)^{-1} \left({{}^S{\D}}{}^2 \right)^{-1} \;{{}^S{\D}}{}^a \iota_S^*( E_{\text{rad}}^+ -E_{\text{rad}}^-)_a. \label{eq:fluxreconstr} \end{equation} This conclusion is only challenged in the presence of nontrivial cohomological 1-cycles in the Cauchy surface, a point exemplified in section \ref{sec:topology}. Concerning the analogues of the continuity conditions explored in section \ref{sec:int_h} for the gauge potential, we observe that on-shell the electric field is continuous across $S$ if and only if there is no \textit{distributional} charge density there. Such a charge density would create a discontinuity in the fluxes $E^\pm_s{}_{|S}\equiv f^\pm$. No analogous physical discontinuity can be found in the components of the electric field parallel to $S$. Moreover, if there is no charge density and therefore $E$ is continuous, the difference $(E_{\text{rad}}^+-E_{\text{rad}}^-){}_{|S}$ is the same as the difference $({\mathrm{D}}_i\varphi^--{\mathrm{D}}_i\varphi^+){}_{|S}$. Since the latter is always of the pure-gradient form, the radiative parts of a continuous electric field satisfy (on-shell of Gauss) the analogue of \eqref{eq:continuity}. \subsection{On the energy of radiative and Coulombic modes\label{sec:energy}} The radiative/Coulombic split of $E$ satisfies a monotonicity property, which roughly states that \textit{in a composite region $\Sigma=R^+\cup_S R^-$, a larger portion of the energy is attributed to the radiative part of the electric field than it is in the disjoint union of $R^+$ and $R^-$; the converse holds for its Coulombic part}. This section is devoted to establishing and interpreting this result. Let us start by writing the energy $\mathcal H$ contained in $\Sigma$. We decompose this energy into its electric (kinetic) and magnetic (potential) parts, \begin{equation} \mathcal H = \mathcal E + \mathcal B = \int_{\Sigma} \sqrt{g} \, \mathrm{Tr}(E^iE_i) + \int_\Sigma \sqrt{g}\, \tfrac{1}{2} \mathrm{Tr}(F^{ij} F_{ij} ). \end{equation} Since $F$ is fully determined by the background value of $A$ (which undergoes no SdW splitting), we will henceforth focus on the electric contribution.
This can be written more abstractly as \begin{equation} {\cal E} = || \mathbb E ||^2 =|| \mathbb E ||_+^2 + || \mathbb E||_-^2, \label{eq:Eadditivity} \end{equation} with $||\cdot||$ and $||\cdot||_\pm$ the $\mathbb G$-norms over $\Sigma$ and $R^\pm$ respectively. E.g. $|| \mathbb E||_+^2 = \mathbb G_{R^+}(\mathbb E, \mathbb E) = \int_{R^+} \sqrt{g} \, g^{ij}\mathrm{Tr}(E_i E_j)$. Consider now the radiative/Coulombic decomposition of $E$, and recall that it corresponds to a horizontal/vertical orthogonal decomposition with respect to the $\mathbb G$ supermetric. Then, \begin{equation} || \mathbb E ||^2 = || \mathbb E_{\text{rad}}||^2 + || \varphi^\# ||^2 =: \mathcal E_{\text{rad}} + \mathcal E_{\text{Coul}}, \end{equation} and similarly on $R^\pm$. Applying the same decomposition to the second gluing formula of \eqref{eq:El_glue} gives \begin{align} || \mathbb E_{\text{rad}} ||^2 & = || \mathbb E_{\text{rad}}^+ + (\eta^+)^\# ||^2_+ + || \mathbb E_{\text{rad}}^- + (\eta^-)^\# ||^2_- \notag\\ & = || \mathbb E_{\text{rad}}^+||^2_+ + || \mathbb E_{\text{rad}}^-||^2_- + ||(\eta^+)^\# ||^2_+ + ||(\eta^-)^\# ||^2_-\notag\\ & \geq || \mathbb E_{\text{rad}}^+||^2_+ + || \mathbb E_{\text{rad}}^-||^2_-. \end{align} From the additivity of $\mathcal E$ \eqref{eq:Eadditivity}, the gluing formula \eqref{eq:El_glue} and the equation above, it follows that the total Coulombic contribution correspondingly decreases by the same amount:\footnote{ This follows from the comparison of the following two expressions: $$\begin{dcases} || \mathbb E ||^2 & = || \mathbb E_{\text{rad}} + \varphi^\# ||^2 = || \mathbb E_{\text{rad}} ||^2 + || \varphi^\# ||^2 = || \mathbb E_{\text{rad}}^+||^2_+ + || \mathbb E_{\text{rad}}^-||^2_- + ||(\eta^+)^\# ||^2_+ + ||(\eta^-)^\# ||^2_- + || \varphi^\# ||^2\notag\\ || \mathbb E ||^2 & = || \mathbb E ||_+^2 + || \mathbb E ||_-^2 = || \mathbb E_{\text{rad}}^+ + (\varphi^+)^\# ||^2_+ + || \mathbb E_{\text{rad}}^- + (\varphi^-)^\# ||^2_- = || \mathbb E_{\text{rad}}^+||^2_+ + || (\varphi^+)^\# ||^2_+ + || \mathbb E_{\text{rad}}^- ||^2_- + || (\varphi^-)^\# ||^2_- .\notag \end{dcases} $$ } \begin{align} || \varphi^\# ||^2 & = || (\varphi^+)^\# ||^2_+ + ||(\varphi^-)^\# ||^2_- - ||(\eta^+)^\# ||^2_+ - ||(\eta^-)^\# ||^2_-\notag\\ &\leq || (\varphi^+)^\# ||^2_+ + ||(\varphi^-)^\# ||^2_-. \end{align} We have thus proved (and qualified) our statement above. So, if to the radiative part of $E$ we ascribe the kinetic energy of the radiative modes, the following question arises: which new radiative field strengths are included in $\Sigma$ that are not present in the disjoint union of $R^+$ and $R^-$? The answer lies at the interface $S$: the regional Coulombic and vertical adjustments, $\eta^\pm$ and $\xi^\pm$ respectively, are---from the global perspective---additions to the radiative sector of $\Sigma$ with respect to the radiative sectors of $R^\pm$. Although supported on the whole regions $R^\pm$ respectively, these new components are completely determined by the mismatch at $S$ of the two regional radiative modes, $\mathbb E_{\text{rad}}^\pm{}_{|S}$ (or $h^\pm{}_{|S}$, respectively). In other words, the new global radiative field strength that emerges on $\Sigma$ upon gluing $R^\pm$ is entirely determined by the standard regional radiative modes at the boundary.
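The mechanism behind this monotonicity is that the global vertical (Coulombic) subspace is contained in the direct sum of the regional ones, so the global horizontal projection removes less of $\mathbb E$ than the two regional projections do. The following finite-dimensional toy sketch (ours; the subspaces and data are random stand-ins, not objects of the theory) verifies the resulting inequality numerically.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, n_plus = 12, 6          # dims of the total space and of the "R^+" block

def projector(B):
    """Orthogonal projector onto the column span of B."""
    Q, _ = np.linalg.qr(B)
    return Q @ Q.T

# Regional vertical (Coulombic) subspaces, one per block:
Wp = np.vstack([rng.normal(size=(n_plus, 2)), np.zeros((n - n_plus, 2))])
Wm = np.vstack([np.zeros((n_plus, 2)), rng.normal(size=(n - n_plus, 2))])

# Global vertical subspace: a *subspace* of span(Wp) + span(Wm), mimicking
# the fact that a globally smooth parameter restricts to regional ones:
W = Wp @ rng.normal(size=(2, 1)) + Wm @ rng.normal(size=(2, 1))

P_glob = np.eye(n) - projector(W)                   # global horizontal proj.
P_reg = np.eye(n) - projector(np.hstack([Wp, Wm]))  # regional horizontal proj.

E = rng.normal(size=n)
E_rad_global = P_glob @ E
E_rad_regional = P_reg @ E   # blockwise: the two regional radiatives

# Monotonicity: the global radiative energy dominates the sum of the
# regional radiative energies (squared norms add blockwise):
print(E_rad_global @ E_rad_global >= E_rad_regional @ E_rad_regional)  # True
\end{verbatim}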
In formulas, the energy balance reads: \begin{equation} \mathcal E = \mathcal E_{\text{rad}} = \mathcal E_{\text{rad}}^+ + \mathcal E_{\text{rad}}^- + \oint_S \sqrt{h}\, \mathrm{Tr}\Big( f \big(\mathcal R_+^{-1} + \mathcal R_-^{-1} \big) f\Big), \label{eq:radenergydiff} \end{equation} where $f$ should be understood as given by \eqref{eq:fluxreconstr}, and where we used the relation $ \mathcal E^\pm_\text{Coul} = || (\varphi^\pm)^\# ||^2_\pm = \oint_S \sqrt{h} \mathrm{Tr}( f \mathcal R_\pm^{-1} f )$, which is easily deducible from the definitions and results of section \ref{sec:gluingthm} (a similar computation will be carried out in more detail in the next section). We summarize these results in the following\footnote{The last statement follows from the positivity of $\mathcal R$, see the proof of the SdW Gluing Lemma \ref{Lem:SdWGL}.} \begin{Prop} Assuming the same geometrical setting relevant for the General Gluing Theorem \ref{thm:GGT}, the following radiative/Coulombic energy balance holds: $$ \mathcal E_{\text{rad}} - ( \mathcal E_{\text{rad}}^+ + \mathcal E_{\text{rad}}^-) = ( \mathcal E_{\text{Coul}}^+ + \mathcal E_{\text{Coul}}^-) - \mathcal E_{\text{Coul}} = \oint_S \sqrt{h}\, \mathrm{Tr}\Big( f \big(\mathcal R_+^{-1} + \mathcal R_-^{-1} \big) f\Big) \geq 0, $$ with $f=f(E_{\text{rad}}^+, E_{\text{rad}}^-)$ as in \eqref{eq:fluxreconstr}. The equality sign holds if and only if $f=0$. \end{Prop} It is important to stress that the new global contribution to the radiative energy is not encoded in either region, since it depends on the mismatch at $S$ of the two regional components \eqref{eq:fluxreconstr}. Thus, in this precise sense, we can claim that there is an additional component to the global radiative field strength: it results from the gluing and arises from the relation of the two subsystems at their common boundary. \subsection{Gluing of the symplectic potentials\label{sec:gluing_symp_pot}} It is now straightforward to study the gluing of the SdW-horizontal symplectic potential. As above, we focus on the situation where a $D$-dimensional simply connected hypersurface without boundary $\Sigma\cong S^D$ is split into two regions $R^\pm \cong B^D$ glued at $S={\partial} R^\pm \cong S^{D-1}$, i.e. $\Sigma = R^+ \cup_S R^-$. In this case, from \eqref{eq:theta_HV}, the total symplectic potential reads \begin{equation} \theta = \int_\Sigma \sqrt{g} \left\{ \mathrm{Tr}\Big( E^i {\mathbb{d}} A_i\Big) - \bar \psi \gamma^0 {\mathbb{d}} \psi \right\} \approx \int_\Sigma \sqrt{g} \left\{ \mathrm{Tr}\Big( E_{\text{rad}}^i {\mathbb{d}}_\perp A_i\Big) - \bar \psi \gamma^0 {\mathbb{d}}_\perp \psi \right\} = \theta^\perp , \end{equation} where $\theta \approx \theta^\perp$ since ${\partial} \Sigma =\emptyset$. Now, $\theta$ can also be decomposed into $\theta = \theta^+ + \theta^-$ simply by factorizing the integration domain in the first expression above, \begin{equation} \theta^\pm = \int_{R^\pm} \sqrt{g} \left\{ \mathrm{Tr}\Big( E^i {\mathbb{d}} A_i\Big) - \bar \psi \gamma^0 {\mathbb{d}} \psi \right\}. \end{equation} Each of these regional contributions can be written in the SdW decomposition following \eqref{eq:theta_HV}: \begin{equation} \theta^\pm \approx \int_{R^\pm} \sqrt{g} \left\{ \mathrm{Tr}\Big( E_{\text{rad}}^{\pm}{}^i {\mathbb{d}}_{\perp (\pm)} A_i\Big) - \bar \psi \gamma^0 {\mathbb{d}}_{\perp (\pm)} \psi \right\} \pm \oint_S \sqrt{h}\, \mathrm{Tr}\Big( f \varpi_\pm \Big) ,
\end{equation} where $\perp\!(\pm)$ denotes that the SdW decomposition intrinsic to $R^\pm$ has been respectively used, and the sign of the last term depends on the fact that, in $f = s^i E_i{}_{|S}$, the normal $s^i$ to $S$ is outgoing for $R^+$ and ingoing for $R^-$. Thus, we find \begin{equation} \theta \approx \theta^{\perp(+)} + \theta^{\perp(-)} + \oint_S \sqrt{h}\, \mathrm{Tr}\Big( f (\varpi_+ - \varpi_-) \Big) . \end{equation} The results of section \ref{sec:gluingthm}, and in particular equation \eqref{eq:Shoriz}, can be applied\footnote{This is entirely compatible with the standard definition of $\varpi$, which can be seen by noticing that given a vector $\mathbb Y\in\mathrm T_A{\mathcal{A}}$: $\mathbb i_{\mathbb Y} {\mathbb{d}}_\perp A_i = H_i$, $\mathbb i_{\mathbb Y} \varpi =\Lambda$, and similarly $\mathbb i_{\mathbb Y} {\mathbb{d}}_{\perp(\pm)} A_i = h^\pm_i$, $\mathbb i_{\mathbb Y} \varpi_\pm = \lambda^\pm$.} to $\varpi_\pm$ to obtain \begin{equation} (\varpi_+ - \varpi_-)_{|S} = - ({{}^S{\D}}{}^2)^{-1} {{}^S{\D}}{}^a \iota_S^*( {\mathbb{d}}_{\perp(+)} A - {\mathbb{d}}_{\perp(-)} A )_a \equiv -\frac{{{}^S{\D}}[ {\mathbb{d}}_\perp A]_S^\pm}{{{}^S{\D}}{}^2}. \end{equation} Here, $(\varpi_\pm){}_{|S}$ means that the connection---which is valued in $\mathrm{Lie}({\mathcal{G}}) = C(R^\pm, \mathrm{Lie}(G))$---is evaluated at the boundary $S = {\partial} R^\pm$, i.e. at points $x\in {\partial} R^\pm$. We have also introduced a new short-hand symbol for the interface mismatch of a given regional quantity $\bullet$, namely $[\bullet]^\pm_S$. For more compact notation, we have also schematically denoted the inverse operator by a fraction $({{}^S{\D}}{}^2)^{-1}(\bullet):=\frac{\bullet}{({{}^S{\D}}{}^2)}$. Similarly, we recall\footnote{The notation used for \eqref{eq:fluxreconstr} has been here (slightly) adapted to fit with the notation used in the rest of this section. We apologize to the reader for the inconvenience.} \eqref{eq:fluxreconstr} \begin{equation} \Big( \mathcal R^{-1}_+ + \mathcal R^{-1}_-\Big)(f) = - ({{}^S{\D}}{}^2)^{-1} {{}^S{\D}}{}^a \iota_S^*( E_{\text{rad}}^{+} - E_{\text{rad}}^{-} )_a \equiv-\frac{{{}^S{\D}}[ E_{\text{rad}}]_S^\pm}{{{}^S{\D}}{}^2}. \end{equation} Hence, combining these results, and remembering that $\mathcal R_\pm$ is self-adjoint, we find the following result for the gluing of the symplectic potential: \begin{Thm}[Gluing of the symplectic potential] Consider the same geometrical setting relevant for the General Gluing Theorem \ref{thm:GGT}. Denote the YM symplectic potentials associated to $\Sigma$ and $R^\pm$ by $\theta$ and $\theta^{(\pm)}$ respectively, and their SdW-horizontal counterparts by $\theta^\perp$ and $\theta^{\perp(\pm)}$ respectively. Then, as a corollary of the SdW Gluing Lemma, \begin{equation} \theta \stackrel{ {\partial}\Sigma=\emptyset}{\approx} \theta^\perp= \theta^{\perp(+)} + \theta^{\perp(-)} + \oint_S \sqrt{h} \, \mathrm{Tr}\left( \frac{{{}^S{\D}}[ E_{\text{rad}}]_S^\pm}{{{}^S{\D}}{}^2}\; \Big( \mathcal R^{-1}_+ + \mathcal R^{-1}_-\Big)^{-1} \;\frac{{{}^S{\D}}[ {\mathbb{d}}_\perp A]_S^\pm}{{{}^S{\D}}{}^2} \right). \label{eq:GlueTheta} \end{equation} Since both $\Omega^\perp := {\mathbb{d}} \theta^\perp$ and $\Omega^{\perp(\pm)}:= {\mathbb{d}}\theta^{\perp(\pm)}$ are basic and closed, each of the terms on the rhs above is projectable onto the reduced phase space ${\mathcal{A}}/{\mathcal{G}}$. Since $\Omega^\perp$ projects to the full reduced symplectic structure (cf.
section \ref{sec:QLSymplRed} and \cite{AldoNew}), each of the terms of the rhs encodes a ``pair'' of reduced canonical dof. The last term, in particular, encodes the ``new'' radiative dof emerging upon gluing. \end{Thm} In other words, $\theta \approx \theta^\perp$ is a functional \emph{only} of the regional \emph{radiative} electric fields $E_{\text{rad}}^\pm$ and the regional \emph{SdW-horizontal} differentials ${\mathbb{d}}_{\perp(\pm)} A$---i.e. it does {\it not} require any knowledge of the regional Coulombic or pure-gauge dof. Nonetheless, $\theta^\perp$ does not factorize in terms of its regional SdW-horizontal counterparts, $\theta \approx \theta^\perp \neq \theta^{\perp(+)} + \theta^{\perp(-)}$. This non-factorizability has its root in the nonlocality of the horizontal/vertical decomposition. Its physical consequence is the emergence of new radiative dof upon gluing, which express the relational nature of the gauge theory across the interface. As discussed in section \ref{sec:energy}, the {\it mismatch} of the horizontal/radiative modes at the interface $S$ plays---from the global perspective---the role of a new horizontal/radiative dof which is not present in {\it either} region. As emphasized in the previous section, upon gluing, we are---consistently---no longer required to superselect, or otherwise fix or refer to, the electric flux $f$ through $S$ (which was conjugate to the pure-gauge part of the gauge potential): $f$ can now be reconstructed from the mismatch of the {\it electric} radiative modes \eqref{eq:fluxreconstr}. In sum, the horizontal symplectic structure of Yang-Mills theory fails, as expected, to factorize into regional horizontal symplectic structures. This is because there are global horizontal modes that can only be reconstructed from the two regional radiative modes as functionals of their nontrivial {\it mismatch} at the common interface $S$. \subsection{Example: 1-dimensional gluing and the emergence of topological modes}\label{sec:topology} In this final section we work out a simple example, implementing the gluing of 1-dimensional intervals. Two cases are given: two closed intervals are glued into a larger interval, and one interval is glued on itself to form a circle. This second case falls outside the simply-connected setup we adopted for the rest of the paper. Nonetheless, this case allows us to easily discuss, without introducing a host of new technologies, the emergence of new global (or ``topological'') degrees of freedom associated to the nontrivial cohomology of the circle. \subsubsection{Gluing into an interval} Let us start by considering two closed intervals $I^+=[0,1]$ and $I^-=[-1,0]$, which we shall glue together to form a new closed interval $I = [-1,1]$. We shall see that, since on the interval the gauge potential must be pure gauge, the regional horizontal perturbations must vanish---a fact consistently encoded by our gluing formula. Although somewhat trivial, this example helps us set the stage for the gluing into a circle. We first characterize the 1-dimensional gauge fields and their horizontal perturbations. One-dimensional gauge fields are always locally pure gauge, \begin{equation} A^\pm = g_\pm^{-1} \d g_\pm \end{equation} for $g_+(x) = \mathrm{Pexp}\int_0^x A$ on $I^+$ and similarly on $I^-$, where we choose $g_-$ such that $g_-(0)=\mathbb 1$ too ($x=0$ is where the gluing takes place).
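This pure-gauge parametrization is easy to realize numerically. The following hypothetical sketch (ours; the sample field and lattice are illustrative assumptions) computes the path-ordered exponential $g(x)=\mathrm{Pexp}\int_0^x A$ on a 1-dimensional lattice for $G={\mathrm{SU}}(2)$, with the ordering convention that realizes $A = g^{-1}{\partial} g$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Pauli matrices, so that A(x) = i a(x).sigma is su(2)-valued:
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])

N, dx = 200, 1.0 / 200
xs = np.arange(N) * dx
coeffs = np.array([np.cos(2 * np.pi * xs), np.sin(4 * np.pi * xs), xs])

g = np.eye(2, dtype=complex)   # g(0) = identity
gs = [g]
for n in range(N):
    A_n = 1j * np.einsum('a,aij->ij', coeffs[:, n], sigma)  # A(x_n)
    g = g @ expm(A_n * dx)     # path-ordered product: g' = g A
    gs.append(g)

# Check that g^{-1} (dg/dx) reconstructs A at a sample point,
# up to O(dx) discretization error:
n = N // 2
lhs = np.linalg.inv(gs[n]) @ (gs[n + 1] - gs[n]) / dx
rhs = 1j * np.einsum('a,aij->ij', coeffs[:, n], sigma)
print(np.allclose(lhs, rhs, atol=0.05))   # -> True
\end{verbatim}
With this parametrization at hand, we can turn to the horizontal perturbations.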
Since in one dimension $s^i h_i{}_{|S}=0$ implies $ h_{|S}=0$, SdW-horizontal perturbations $\mathbb h^\pm$ in $I^\pm$ must, according to \eqref{eq:hor_conds}, satisfy the equations \begin{equation} {\mathrm{D}}^\pm h^\pm =0 \qquad\text{and}\qquad h^\pm{}_{|{\partial} I^\pm} = 0, \end{equation} which can be rewritten in terms of $\tilde{ h}{}^\pm := g_\pm h^\pm g^{-1}_\pm$ as ${\partial}\tilde{ h}{}^\pm = 0$ and $\tilde{ h}{}^\pm{}_{|{\partial} I^\pm} = 0$. Now, these equations can be solved to give $\tilde{ h}{}^\pm = 0$ and hence \begin{equation} \mathbb h^\pm=0. \label{eq:1dhoriz} \end{equation} This is simply an immediate consequence of the pure-gauge character of all 1-dimensional configurations: all perturbations over topologically trivial regions must be purely vertical. Applying these results on the horizontal/vertical decomposition of fields on the interval to the electric field, we deduce that on the interval all electric fields are purely Coulombic. As per section \ref{sec:electric_gluing}, without any knowledge of regions outside of the interval $I^+\equiv R^+$, the electric field is entirely characterized by the charge content of the interval and by $f$ at its boundary $S$. The latter encodes our ignorance of the outside of the region. Let us now analyze the gluing. Again, the global horizontal vector is denoted by \begin{equation} H=( h^++{\mathrm{D}}\xi^+)\Theta_++( h^-+{\mathrm{D}}\xi^-)\Theta_- = {\mathrm{D}}\xi^+\Theta_++{\mathrm{D}}\xi^-\Theta_- \end{equation} as in \eqref{eq:reconstructed_H}. The relevant equations for gluing arise as in \eqref{lambda}, with a couple of new features: (\textit{i}) there is no analogue to the last equation of \eqref{lambda}, since $ h_i$ has only one component, which is transverse to the zero-dimensional gluing surface $S$; and (\textit{ii}) we have to add one equation per global boundary of the interval $I=[-1,1]$, since the total horizontal vector has now (two) endpoint boundaries, ${\partial}\Sigma \equiv {\partial} I= \{ -1\} \cup \{+1\}\neq \emptyset$. Thus, \begin{equation} \label{eq:gluing1d} \begin{dcases} {\mathrm{D}}^2 \xi^\pm = 0 &\text{in }I^\pm\\ {\mathrm{D}} \left(\xi^+-\xi^- \right)=0 &\text{at } {\partial} I^+ \cap {\partial} I^-=\{0\}\\ {\mathrm{D}} \xi^\pm=0 &\text{at } {\partial} I = \{\pm 1\} \end{dcases} \end{equation} Now, again, by defining $\tilde \xi{}^\pm := g_\pm \xi^\pm g_\pm^{-1}$, we can turn the covariant derivatives into ordinary ones. This allows us to readily solve these equations. In fact, the bulk equations (the first of \eqref{eq:gluing1d}) tell us that \begin{equation} \tilde \xi^\pm = \pm \tilde \Pi^\pm x + \tilde\chi^\pm, \end{equation} where $\tilde\chi^\pm$ are constant functions valued in $\mathrm{Lie}(G)$ corresponding to two arbitrary reducibility parameters of the vanishing configuration $\tilde A^\pm=0$. This is a concrete example of the discussion in the previous section. Now, the second equation of \eqref{eq:gluing1d} sets $\tilde\Pi^+ = - \tilde\Pi^-$, and the third one sets them equal to zero. Since the $\tilde \chi^\pm$ do not affect the value of the regional horizontal fields, we conclude that in this case the unique solution to the gluing problem at hand is $\xi^\pm = 0$, which readily leads to $\mathbb H =0$, consistently with the general regional result \eqref{eq:1dhoriz}. This concludes the gluing of two intervals $I^\pm$ into a larger one $I=[-1,1]$.
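This conclusion can also be checked numerically. The following hypothetical finite-difference sketch (ours; the discretization is an illustrative assumption) assembles the Abelian, tilde'd version of \eqref{eq:gluing1d}---where all covariant derivatives reduce to ordinary ones---and verifies that its solution space consists of the two regional constants $\tilde\chi^\pm$ alone, so that the reconstructed horizontal field $H = \xi'$ vanishes identically.
\begin{verbatim}
import numpy as np

M = 50   # grid points per interval; unknowns: xi- stacked over xi+

def row(i_minus=None, i_plus=None):
    """One linear equation over the 2M unknowns."""
    r = np.zeros(2 * M)
    for i, c in (i_minus or []):
        r[i] = c
    for i, c in (i_plus or []):
        r[M + i] = c
    return r

rows = []
for i in range(1, M - 1):                           # xi'' = 0 in both bulks
    rows.append(row(i_minus=[(i - 1, 1), (i, -2), (i + 1, 1)]))
    rows.append(row(i_plus=[(i - 1, 1), (i, -2), (i + 1, 1)]))
rows.append(row(i_minus=[(0, -1), (1, 1)]))         # Neumann: xi'(-1) = 0
rows.append(row(i_plus=[(M - 2, -1), (M - 1, 1)]))  # Neumann: xi'(+1) = 0
# derivative matching at x = 0 (last point of I-, first point of I+):
rows.append(row(i_minus=[(M - 2, 1), (M - 1, -1)],
                i_plus=[(0, -1), (1, 1)]))

K = np.array(rows)
_, s, Vt = np.linalg.svd(K)
null = Vt[np.sum(s > 1e-10):]   # basis of the solution (null) space

print(len(null))   # -> 2: one constant chi per region
print(max(np.abs(np.diff(v[:M])).max() for v in null),
      max(np.abs(np.diff(v[M:])).max() for v in null))  # -> ~0, ~0: H = 0
\end{verbatim}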
\subsubsection{Gluing into a circle} We now move on to the second case, where one interval, $I=[-\pi,\pi]\ni\phi$, has its ends glued to form a unit circle. To keep the two cases notationally distinct, we have denoted an element of the circle by $\phi$, as opposed to the $x$ of the interval in the previous case. This case requires a little more care. The idea is to split $I$ into two intervals which overlap around $\phi=0$, e.g. on the interval $U_\epsilon:=(-\epsilon,\epsilon)$. Thus we consider $I^- = \left[ - \pi , \epsilon\right)$ and $I^+ = \left(-\epsilon, \pi\right]$, so that we can glue at $\phi=\pm\pi$ according to the procedures of the above section, while matching the overlap of charts around $\phi=0$ to close the interval into a circle. This allows us to separate the problem of gluing from the problem of covering the circle. The latter is accomplished by overlapping open charts, with transition functions which appropriately match the gauge configuration. Let us start by analyzing the background configuration $A^\pm$ on $I^\pm$. We assume, as in the previous sections, that the configurations $A^\pm$ join smoothly at $\phi=\pm\pi$. As above, $A^\pm$ are pure gauge, i.e. $A^\pm = g^{-1}_\pm \d g_\pm$ with $g_+(\pi)=g_-(-\pi)$. On the other hand, on $U_\epsilon$, the configurations $A^\pm$ do not have to be equal; they need only be related by the action of a gauge transformation $\kappa$, the transition function. Since we are in one dimension, this does not constitute a restriction; one simply has $\kappa = g_-^{-1} g_+$. Now, we move on to consider the horizontal perturbations. We shall find that the relevant horizontality equations for $\mathbb h^\pm$ involve boundary conditions only at $\phi = \pm \pi$, and the one for $\mathbb H$ does not involve boundary conditions at all. In particular, no boundary conditions are imposed at the open extrema of the intervals $I^\pm$. This is not because the intervals are open, but rather because there are no boundaries from the perspective of the global $\mathbb H$. But let us be more precise. We start from the observation that on the overlap region $U_\epsilon$, generic perturbations $\mathbb X^\pm$ must be gauge related through $ X^+ = {\mathrm{Ad}}_{\kappa} X^-$. This means that, using the appropriate partitions of unity over $S^1$, there is no difficulty, nor ambiguity, in the patching of the SdW inner products over $I^+$ and $I^-$: we obtain an inner product over $ S^1$ between two perturbations $\mathbb X^\pm$ and $\mathbb Y^\pm$ that satisfy the overlap condition we have just described. Recalling that SdW-horizontality is the requirement of being orthogonal to any purely vertical vector with respect to the SdW supermetric, we see that the horizontality condition for $\mathbb H$ does {\it not} involve boundary conditions at the non-glued boundaries of $I^\pm$, i.e. at $\phi=\pm \epsilon$. Of course, this is an expected result, given the closed nature of the manifold on which $\mathbb H$ resides. Focusing now on horizontal perturbations, it is easy to see that this discussion does not change the fact that $\mathbb h^\pm=0$, since the manifold on which they reside still has boundaries at $\phi=\pm \pi$. Note moreover that $\mathbb h^\pm=0$ implies that their matching on $U_\epsilon$ is automatic. However, this discussion leads us to a horizontality condition for $\mathbb H$ that is distinct from the one found for the gluing into an interval \eqref{eq:gluing1d}.
Indeed, in the present case, we find \begin{equation} \label{eq:gluing1d-circle} \begin{dcases} {\mathrm{D}}^2 \xi^\pm = 0 &\text{in }I^\pm\\ {\mathrm{D}} \left(\xi^+-\xi^- \right)=0 &\text{at } \phi=\pm\pi\\ \end{dcases} \end{equation} with {\it no} extra conditions at $\phi=\pm\epsilon$. Hence, it is readily clear that the solutions for $\xi^\pm$ are here much less restricted than they were in the closed-interval case considered above: in this case we find that \begin{equation} \xi^\pm = {g_\pm^{-1}} (\tilde \Pi \phi + \tilde \chi^\pm) {g_\pm}, \label{eq:Pigluing} \end{equation} with the same, possibly non-vanishing, $\tilde \Pi$ for both the $\pm$ choices. From this we obtain \begin{equation} H = {g_\pm^{-1}} \tilde \Pi {g_\pm}. \end{equation} As for the background, matching the perturbed configurations in $U_\epsilon$ comes at no cost (since $\mathbb h^\pm =0$). In summary, we see that the gluing procedure has no unique solution in this case, as a consequence of the absence of a second ``outer'' boundary for the interval (which is glued into a circle). The second outer boundary is instead replaced by the chart matching.\footnote{ The decoupling of chart transitioning and horizontal gluing can be made into a more general feature. For instance, had we wished to cut up the circle into three segments, we would divide the interval $[0,2\pi]$ into three sets, $I_1=[0,2\pi/3], I_2=[2\pi/3, 4\pi/3], I_3=[4\pi/3, 2\pi]$, with $\mathbb h_i$ defined on $I_i$. Then we can cover the circle with three charts $U_{1,2,3}$, given in larger, but largely overlapping, domains: $D_1=[0,4\pi/3], D_2=[\pi/3, 2\pi], D_3=[4\pi/3, \pi/3]$. Then $\mathbb h_1$ and $\mathbb h_2$ glue entirely within the $U_1$ chart domain $D_1$; $\mathbb h_2$ and $\mathbb h_3$ similarly glue in $D_2$; and $\mathbb h_3, \mathbb h_1$ glue in $D_3$. In this way, one decouples the chart matching from the horizontal gluing; we can cyclically glue all $\mathbb h_i$'s first and find the appropriate chart transition later, independently. In that case, it is the cyclicity of the equations that yields one less condition. This type of concatenating construction can be extended to higher-dimensional manifolds.} We thus obtain a one-parameter family of solutions parametrized by an element $\tilde \Pi\in\mathrm{Lie}(G)$. This element constitutes the perturbation of the Wilson-loop observable around the circle (the Aharonov-Bohm phase), which is precisely the unique physical degree of freedom present there. The existence of this new topological mode is of course related to the non-contractibility ($\pi_1(S^1) = \mathbb Z \neq 0$) of the circle. Application of these results to the gluing of the electric field on the circle leads to the following analogous result: the Coulombic adjustments $\eta^\pm$---formally corresponding to the vertical adjustments $\xi^\pm$ in the gauge-potential case---encode the global {\it radiative} mode of the electric field on the circle. This global radiative mode is \textit{regionally} of a pure Coulombic form. Then the analogue of $\tilde\Pi$ in equation \eqref{eq:Pigluing} for $\eta^\pm$ is not free, but fixed by the electric flux $f = E_s{}_{|S}$ through the gluing interface. In other words, a locally Coulombic field can be supported by the topology of the circle without any charged source; this is the conjugate dof to the Aharonov-Bohm phase, and what the electric analogue of $\tilde\Pi$ physically stands for.
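For comparison with the interval sketch above, the following variation (again a hypothetical toy of ours) replaces the two outer Neumann conditions by a second derivative-matching condition, so that the two arcs close into a circle. The solution space then acquires the extra common-slope zero mode $\tilde\Pi$, whose $H=\xi'\neq0$ is precisely the Aharonov-Bohm perturbation.
\begin{verbatim}
import numpy as np

M = 50

def row(i_minus=None, i_plus=None):
    r = np.zeros(2 * M)
    for i, c in (i_minus or []):
        r[i] = c
    for i, c in (i_plus or []):
        r[M + i] = c
    return r

rows = []
for i in range(1, M - 1):                        # xi'' = 0 on both arcs
    rows.append(row(i_minus=[(i - 1, 1), (i, -2), (i + 1, 1)]))
    rows.append(row(i_plus=[(i - 1, 1), (i, -2), (i + 1, 1)]))
# derivative matching at *both* interface points of the circle:
rows.append(row(i_minus=[(M - 2, 1), (M - 1, -1)], i_plus=[(0, -1), (1, 1)]))
rows.append(row(i_minus=[(0, -1), (1, 1)], i_plus=[(M - 2, 1), (M - 1, -1)]))

K = np.array(rows)
_, s, Vt = np.linalg.svd(K)
null = Vt[np.sum(s > 1e-10):]

slope = max(max(np.abs(np.diff(v[:M])).max(),
                np.abs(np.diff(v[M:])).max()) for v in null)
# Three zero modes: the two regional constants chi^+- *and* the common
# slope Pi, which carries a nonvanishing H = xi':
print(len(null), slope > 1e-3)   # -> 3 True
\end{verbatim}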
In sum, this 1-dimensional example provides a proof of principle that topological dof of the Aharonov-Bohm kind are not lost in our formalism, but rather emerge as ambiguities in the gluing procedure---ambiguities which are absent in topologically trivial situations. This consideration only partly endorses the attribution of ``new edge mode degrees of freedom'' to boundaries \cite{AronWill, DonnellyFreidel}. Namely, it grants such a status only to those {\it finitely many} degrees of freedom which encode information about a (global!) nontrivial first cohomology.\footnote{Of course, this distinction and the ensuing identification of finitely many topological modes cannot be performed at the regional level.} \section{Outlook\label{sec:conclusions}} We conclude this article by mentioning a few physically relevant questions that we expect our quasilocal framework will address and clarify. We will also take the opportunity to briefly comment on the relationship of the present quasilocal framework with other formalisms proposed in the literature. \paragraph*{Comparison to edge modes} The protagonist of this study is the functional connection on field space, $\varpi$, characterized by its projection and covariance properties. In hindsight and to our knowledge, the first appearance of an object possessing those two properties in the context of the symplectic geometry of YM in the presence of boundaries is \cite{DonnellyFreidel} (see also \cite{Balachandran:1994up, Speranza:2017gxd, Geiller:2017xad, Camps, Freidel_2020} among others). In contrast to the present work, the connection of \cite{DonnellyFreidel} was built out of {\it new} gauge-covariant fields; that is, by enlarging the configuration space of the theory and with no field-space geometrical interpretation in mind. These new fields were called ``edge modes'' since their existence is arguably revealed only at ${\partial} R$. In the following, we will denote by $\varpi_\text{DF}$ the functional connection that corresponds to the construction of \cite{DonnellyFreidel}. In the case of YM theory (that work considers also the case of general relativity), edge modes were posited to be group-valued, i.e. of the form $\tilde g(x)\in G$, and to transform under gauge transformations as $\tilde g\mapsto \tilde g g$ (on the right). This meant that $\varpi_\text{DF} = \tilde g^{-1} {\mathbb{d}} \tilde g$ could serve as a (flat) field-space connection\footnote{The reader should be aware that many expressions used in this paragraph cannot be found in \cite{DonnellyFreidel}, which is not framed in terms of principal fiber bundles in field space: we are using our language and conventions to describe their results.} and that the following {\it extended} symplectic potential was horizontal and gauge-invariant: $\theta_\text{ext} = \theta_\text{YM} - \oint \mathrm{Tr}(f \varpi_\text{DF})$. Notice that $\theta_\text{ext}$ is---on-shell of the Gauss constraint---formally identical to our $\theta^H = \theta - \theta^V$. But the analogy stops there. Indeed, $\theta_\text{ext}$ is labeled {\it extended} with respect to $\theta$ because it contains the new fields $\tilde g$, whereas $\theta^H$ contains {\it fewer} modes than $\theta$ and is defined {\it intrinsically} to the phase space ${\mathrm{T}}^*{\mathcal{A}}$.
In many ways, the construction of $\theta_\text{ext}$ can be understood as a ``St\"uckelberg-ization'' of the gauge symmetry\footnote{In the fibre-bundle $P\to \Sigma$ description of YM, the edge modes $\tilde g(x)$ are nothing other than the bundle's fibre coordinates (in some arbitrary gauge)---and $\varpi_\text{DF}$ is the Maurer-Cartan form on the infinite-dimensional bundle provided by ${\mathcal{A}}$. This relationship between edge modes and coordinates is even clearer in general relativity, where the analogue of the fields $\tilde g(x)$ are maps $\tilde X: \Sigma \to \mathbb R^3$ (or $M \to \mathbb R^4$) which are actual coordinates in the sense of differential geometry.} (at the boundary), as can be inferred from the fact that the gauge charges $H_\xi$ (which have no place in $\theta^H$) reappear as charges associated to the ``global'' symmetries of the new fields $\tilde g$, i.e. $\tilde g \mapsto h^{-1} \tilde g$ (on the left). In fact, this simple observation can be made mathematically precise, thus revealing a hidden residual gauge-dependence of the edge mode construction. This analysis, as well as a detailed comparison of edge modes with the present geometric framework, is available in \cite{AldoNew}. In light of these considerations, it seems to us that the intrinsic geometric approach put forward in this article is more minimal and more insightful than the one based on group-valued edge modes. Indeed, it only relies on geometric properties that are already present inside standard Yang-Mills theory, and avoids introducing boundary conditions or new fields. This idea is taken to its logical conclusions in \cite{AldoNew}, where the geometric approach developed in this paper is used to show that the reduced phase space $\Phi/{\mathcal{G}}$ is foliated by canonically-defined symplectic spaces associated to superselection sectors of fixed electric flux $f$ (see section \ref{sec:QLSymplRed} for a brief review). Therefore, we take the position that there is no a priori reason to introduce the group-valued edge-modes of \cite{DonnellyFreidel} in YM theory for the study of quasilocal degrees of freedom, charges, or gluing---all of which we have been able to analyze in greater detail from a purely field-space geometrical standpoint. (Having said that, edge modes can nonetheless be useful to model the idealized coupling of a bulk YM theory with other, physical, degrees of freedom living on a codimension-1 surface.)\footnote{E.g. to a superconductor confined on a conducting surface \cite{SusskindMemory}. See also \cite{Wieland1} for a different coupling to boundary fields, this time represented by spinorial fields. In the literature one finds two other motivations for the introduction of edge modes that we haven't mentioned so far: the first is based on an analogy with Chern-Simons theory, the second one with (quantum) Lattice Gauge Theory. Their analysis is instructive.} \paragraph*{Edge modes and $\varpi$ in Chern-Simons theory} At the boundary of a bulk Chern-Simons theory (CS), it is well-known that a boundary Wess-Zumino-Witten theory (WZW) emerges, whose dof are analogous to the edge modes $\tilde g(x)$. But, in relation to gauge symmetry, the action and the symplectic structures of YM and CS are very different ($BF$ theory offers yet another example, in many ways more similar to CS than YM).
The Lagrangian density of YM theory is point-wise gauge invariant; the same is not true for CS. Moreover, in YM there exists a (natural) polarization of the symplectic potential which is gauge-invariant (under field-{\it in}dependent gauge transformations), whereas the same is not true for CS---this lack of invariance was used in \cite{Mnev:2019ejh} to derive the WZW from CS. These remarks suggest that it is totally conceivable that edge modes are required in CS but not in YM; \cite{GriffinSchiavina} make a similar point. However, to settle this point, it is necessary to give a treatment of CS theory through the formalism put forward in this paper; \cite{JFrancois19} might provide some useful tools for this purpose. \paragraph*{Comparison to Lattice Gauge Theory} The introduction of boundaries in (quantum) Lattice Gauge Theory (LGT) requires one to cut open a series of lattice links (see e.g. \cite{Casini_gauge, Delcamp:2016eya} where a second option---cutting along links---is also considered). At the 1-valent vertex of an open link, gauge invariance must necessarily be broken (unless the link carries a vanishing electric flux). This is most easily seen in the spin-network basis of lattice gauge theory \cite{Donnelly2008}. The result of this breaking of gauge symmetry, it is claimed, is that new would-be-gauge degrees of freedom have to be introduced at the 1-valent vertex of LGT. But let us consider two case studies: lattice $G = {\mathrm{SU}}(2)$ and $G = \mathrm U(1)$ gauge theories. Let us start with $G={\mathrm{SU}}(2)$. Then, the lattice links are associated with a spin $j\in\frac12\mathbb N$ (an irrep of $G$) which labels the eigenvalues of the modulus squared of the quantum electric flux through a surface dual to the link, $\mathrm{Tr}(f^2) = j(j+1)$; the vertices are labeled by ${\mathrm{SU}}(2)$ invariant tensors (intertwiners); and the 1-valent vertices at the end of an open link carry, as new dof, the ${\mathrm{SU}}(2)$ magnetic indices $m\in\{-j, -j +1 ,\dots, j\}$. This means that the boundary states at an open link are given by $|j,m\rangle \in \mathcal H_j$. These magnetic numbers are claimed to be a quantum version of the edge modes. Before coming back to this claim, let us discuss the other case, $\mathrm U(1)$. If $G=\mathrm U(1)$, lattice links are associated with an integer $n\in\mathbb Z$ (an irrep of $G$) which labels the eigenvalues of the quantum electric flux through a surface dual to the link, with the sign of $n$ encoding whether the flux is ingoing or outgoing (relative to the orientation of the link); at the vertices, gauge invariance means that the sum of these oriented flux quantum numbers must vanish (this is Gauss' law). Boundaries are where open (half) links end; if these half-links carry a nontrivial flux with $n\neq0$, then gauge invariance is manifestly broken there. However, since all irreps of $\mathrm U(1)$ are 1-dimensional, no extra dof (besides the magnitude of the flux) is present there. Therefore, in the $G=\mathrm U(1)$ case there is no analogous candidate for the quantum edge modes, which according to the construction of \cite{DonnellyFreidel} should always be present. Why? The issue is that the magnetic numbers in the $G={\mathrm{SU}}(2)$ case do {\it not} correspond to the edge modes of \cite{DonnellyFreidel}, but rather to the (quantum) direction that the electric flux is pointing towards in the internal (gauge) space.
This can be seen from the fact that the electric flux operator on a given link is proportional to the ${\mathfrak{su}}(2)$-generator: $\hat f^\alpha = s_i \hat E^{i\alpha} \propto J^\alpha$. This is why, in the $\mathrm U(1)$ case, no analogue of the magnetic numbers is necessary: the internal space is trivial.\footnote{The embedding of the link in $\Sigma$, i.e. the lattice discretization itself, projects the electric field $E^i$ in a particular spatial direction.} Moreover, as argued by \cite{Casini_gauge}, in this framework the value of the electric flux $n$ (or $j$) at the boundary is superselected. This means that these values (Poisson-)commute with any other observable in the theory, i.e. fluxes become nondynamical and should have no conjugate variables. This is clearly in contrast to what happens in the edge-mode framework, where the edge modes are the conjugate variables to the fluxes themselves. Now, according to Kirillov's coadjoint orbit method,\footnote{The name coadjoint orbit method comes from the following: $\eta_f = \mathrm{Tr}(f \, \cdot\, )$ is an element of the vector space dual to $\mathrm{Lie}(G)$ whose coadjoint orbit is parametrized by elements $\tilde g\in G$ according to ${\mathrm{Ad}}_{\tilde g}^* \eta_f = \mathrm{Tr}( (\tilde g^{-1} f \tilde g) \, \cdot\, ) $.} the Hilbert space $\mathcal H_j$ arises as the quantization of the canonically-given symplectic form associated to the coadjoint orbit fixed by $\mathrm{Tr}(f^2) = j(j+1)$ (at a given link) \cite{Kirillov}. Therefore the LGT computation nicely matches the classical and continuum construction of \cite{AldoNew} (summarized in section \ref{sec:QLSymplRed}). Once again, this construction requires no edge modes, and rather relies on the restriction of $\theta$ to sectors at fixed $\mathrm{Tr}(f^2)$ (in the Abelian case this is precisely $\theta^H$).\footnote{The DF extended symplectic structure, which includes the new edge modes, rather than relying on Kirillov's canonical symplectic structure associated to a coadjoint-orbit of a given flux $f \in \mathrm{Lie}(G)$, relies on the canonical symplectic structure associated to ${\mathrm{T}}^*G$. This is how new dof $\tilde g$ are introduced which are conjugate to $f$. See \cite{AldoNew} for details.} This interpretation is further confirmed by computations of entanglement entropy (see below). Although our construction nicely parallels the LGT phase space in the way presented above, it seems to us that relating gluing in the two pictures is less straightforward. We showed that in the continuum there is no ambiguity in the gluing procedure and that all dof can be reconstructed by solving certain elliptic boundary value problems. On the lattice, on the other hand, there is no true analogue of the elliptic boundary value problems that enter the gluing formula---which de facto require infinitely fine-grained knowledge of all the continuous modes of the fields involved. Moreover, gluing is highly ambiguous since one can in principle introduce a gauge ``slippage'' at the gluing of every open link: these missing modes are essentially new Aharonov-Bohm phases not present in the open lattice. And this leads us to the point of contact between the two formalisms: in our study of gluing in 1+1 dimensions (see section \ref{sec:topology}) we found that new Aharonov-Bohm dof do indeed appear when the glued manifold has a nontrivial topology (like a circle). Let us clarify our argument with a more concrete example.
Consider first electromagnetism in $\Sigma = R^+ \cup R^-$, and consider a Wilson loop $L$ which is cut in two by the interface $S={\partial} R^\pm$: $L = L^+ \cup L^-$ with $L^\pm\subset R^\pm$. Although there is no way to reconstruct the Aharonov-Bohm phase $\phi(L)$ around $L$ from gauge invariant information associated to its two open ``halves'' $L^\pm$, this information {\it is} gauge-invariantly encoded in $R^\pm$. Indeed, turning $L^\pm$ into closed loops $\bar L^\pm = L^\pm \cup \ell \subset R^\pm$ by closing $L^\pm$ with a common open Wilson line along the boundary, $\ell \subset S$ and $ {\partial} \ell = {\partial} L^\pm$, one finds $\phi(L) = \phi(\bar L^+) + \phi(\bar L^-) \; (\text{mod} \; 2\pi)$, since the two traversals of $\ell$ carry opposite orientations and their contributions cancel. Given the nonlinear nature of non-Abelian YM theory, this trick would not work there; however, our gluing result shows that (at least at the linearized level) having access to {\it all} the gauge invariant information in $R^\pm$ would allow unambiguous gluing even in the non-Abelian theory. However, this information is not available on the lattice, where knowledge of the field configuration is de facto limited to a finite number of Wilson loops. Therefore, it seems consistent to understand these results as saying the following: from the perspective of our framework, LGT behaves as a gauge theory defined on a topologically (highly) nontrivial 1-dimensional manifold, where gluing is {\it non}-unique and new dof {\it do} emerge. (This distinction in the quasilocal properties of continuum and lattice gauge theories might have consequences for approaches to quantum gravity, like Loops and Spinfoams, that maintain as fundamental both gauge-like variables and a polymerized, i.e. lattice-like, notion of quantum spacetime \cite{LQG30years}.) \paragraph*{Lorentz covariance of the horizontal symplectic form} Our formalism is founded on a $D+1$ decomposition of spacetime, which manifestly breaks Lorentz invariance. In this regard, it is crucial to appreciate a rather trivial point: prior to the formalism itself, it is the focus on a $(D-1)$-dimensional surface $S$ that breaks global Lorentz invariance. Indeed, the natural spacetime structure associated with $S$ is given by a pair of disconnected causal domains $J^\pm$ within the globally hyperbolic spacetime $M \cong \Sigma\times \mathbb R$. These are the domains of dependence of the regions $R^\pm \subset \Sigma$. But whereas different choices of $R^\pm$ might determine the same $J^\pm$, all these equivalent choices share the same boundary $S={\partial} R^\pm$---which means we should write $S=S(J^\pm)$. Thus, even if the spacetime $M$ is a flat Minkowski space, Lorentz invariance is manifestly broken by our focus on $S$, which indeed picks a privileged rest frame (provided $\Sigma \supset R^\pm$ is a simultaneity hypersurface). More generally, the above causal spacetime geometry suggests that a better notion of spacetime covariance is given by the freedom to foliate the causal domains $J^\pm$. In this regard, we think that an interesting future direction consists in studying the quasilocal dynamics within $J^\pm$ by means of the horizontal (and in particular the SdW) decomposition of the gauge fields. This is also the right (covariant) framework to talk about entanglement entropy---discussed below.
We notice that this type of study requires a straightforward generalization of the present formalism to more general foliations with nontrivial lapse (and possibly shift, see \cite{RielloSoft}), as well as a way to deal with the divergences associated with a vanishing lapse at $S$. \paragraph*{Superselection Sectors and the Asymptotic Limit} It has been argued that, in the asymptotic limit ${\partial} R \to\infty$, $f=f_\infty$ is superselected and that its superselection has highly nontrivial and somewhat puzzling consequences such as the spontaneous breaking of Lorentz symmetry in Quantum Electrodynamics on a Minkowski spacetime (or, indeed, an asymptotically flat one) \cite{StrocchiCSR1974, FrohlichMorchioStrocchi1979, Buchholz1986, BalachandranVaidya2013}. In the works dealing with the asymptotic case, the superselection of $f_\infty$ follows from the remark that $f_\infty = E_s{}_{|\infty}$ at {\it infinity} is spacelike separated from, and hence commutes with, {\it all} the local operators of the theory (since they must have a finite support). In the case of a finite region, we argued that $f$ is also superselected (see section \ref{sec:QLSymplRed} for a summary, \cite{Casini_gauge} for a lattice perspective, and \cite{AldoNew} for a complete treatment in the continuum). However, whereas in the standard argument for the asymptotic superselection the latter follows from an argument of complete knowledge (one has that $f_\infty$ commutes with {\it all} local observables), in the finite case we argued for the superselection of $f$ on the basis of our {\it ignorance}: adopting quantum lingo, we are ``tracing over'' all observables in the complement of the region of interest. This interpretation is supported by our results on the gluing problem presented in section \ref{sec:gluing}. There we have shown that from a global perspective (one that is not intrinsic to $R$), the flux $f$ at a finite ${\partial} R$ functionally depends on the quasilocal radiative dof supported {\it both} on $R$ and on its complement. Therefore, we conclude, the physical origin of the {\it regional} superselection of $f$ is indeed the ``tracing'' over the dof contained in the complementary region to $R$ in $\Sigma$. From this stance, the Lorentz symmetry breaking in Quantum Electrodynamics---which follows from the superselection of $f_\infty$---appears as a consequence (an artifact?) of taking the idealized limit ${\partial} R\to\infty$ too seriously: i.e. not merely as a large-distance expansion, but as a limit that ``pushes'' the complementary region to $R$ out of existence. We find it compelling that this observation resonates with the previous one, on the breaking of Lorentz invariance: in the finite case, it is the presence of a finite boundary (and the tracing out of dof outside it) that {\it directly} causes {\it both} the superselection of $f$ and the breaking of Lorentz invariance. A detailed discussion of the finite-region superselection of $f$ is provided in \cite{AldoNew}. However, to fully bridge with the asymptotic case, a detailed study of the role played by the boundary (and fall-off) conditions for the (asymptotic) fields is needed---see e.g. \cite{Harlow_cov, brown1986}. This work began in \cite{RielloSoft}, where null-infinity was analyzed, but we leave a more detailed analysis of these ideas to future work. \paragraph*{Entanglement Entropy} Another question that we expect our formalism can help clarify concerns the nonstandard properties of entanglement entropy of gauge systems \cite{Kabat}.
In gauge theories, the entanglement entropy turns out to quantify not only the standard, ``distillable'', (quantum and classical) correlations between local excitations, but also a more exotic ``edge'' (or ``contact'') component. The latter component is classical, and descends from the probability distribution for finding the superselected flux $f$ in a certain configuration---i.e. in a certain superselection sector \cite{Polikarpov, Donnelly_entanglement, Casini_gauge, Donnelly:2014fua, AronWill}. Given our understanding of the interplay between fiducial\footnote{Fiducial interfaces---i.e. interfaces at which no fixed boundary condition is imposed---are crucial to the generic definition of entanglement entropy, but for gauge theories they were not easily implementable in previous set-ups (see e.g. the ``brick wall'' of \cite{Donnelly:2014fua, AronWill}).} interfaces and gauge symmetry, it is clear that the present formalism will shed light on the interpretation and computation of the edge component of the entanglement entropy. Indeed, it turns out that the probability distribution of a superselection sector of $f$, as computed in \cite{Donnelly:2014fua, AronWill}, comes precisely from a (Euclidean spacetime) analogue of formula \eqref{eq:radenergydiff} for the Coulombic contribution to the energy (there, the Euclidean action) of an $f$-superselection sector. It is also worth noticing that the Euclidean action featured in the computation of the entanglement entropy by the replica trick is the Euclideanization of the Lorentzian action in the Rindler causal domain. In \cite{Agarwal}, a computation of the contact term is proposed which starts from a comparison between a globally gauge-fixed path integral and its regional counterparts. The main ingredient of this computation is the Forman-BFK formula for the factorization of (zeta-regularized, Faddeev-Popov) functional determinants of Laplacians \cite{Forman, BFK, KirstenBFK} (the relevance of this ingredient to calculations of black-hole entropy was already identified\footnote{See also \cite{CarlipDellaPietra} for an even earlier application of Forman's results to the gluing, or ``sewing'', of string amplitudes.} by Carlip \cite{Carlip1995}). This formula features precisely the Abelian analogue of the operator $(\mathcal R_+^{-1} + \mathcal{R}^{-1}_-)$ that is central to our gluing formula. Indeed, interpreting horizontal modes as corresponding to the perturbatively gauge-fixed ones, our gluing formula gives a precise non-degenerate\footnote{However, subtleties are expected to arise for non-simply-connected manifolds and at reducible background configurations.} Jacobian for the transformation of the global radiatives to the regional radiatives, whose determinant yields the relevant factor in the factorization of the path integrals. More generally, we notice that our formalism is well-suited not only for a broad generalization of the ideas of \cite{Agarwal} on the computation and interpretation of the contact term of the (3d Abelian) Yang-Mills theory, but also for inscribing them in a larger theoretical landscape, viz. in the geometry of the Yang-Mills field space.
The first evidence that this is the right direction comes from an analysis of the LGT entanglement entropy computed in \cite{Donnelly_entanglement} with our framework: the non-distillable part of the entropy precisely reflects the foliation of the reduced phase space by symplectic superselection sectors analyzed in \cite{AldoNew} and summarized in section \ref{sec:QLSymplRed}. \paragraph*{Corners and Gluing} So far we have considered only gluing patterns in which two regions are glued along their {\it whole} boundaries. More generally, one should consider cases in which the gluing happens on portions of the boundaries bounded by corner surfaces, and on the boundaries of those corners, and so on, until the descent terminates at 0-dimensional boundaries. In particular, these more general gluing patterns are necessary to build topologically nontrivial manifolds from topologically simple building blocks (e.g. in the case of triangulated manifolds, or of the trinion decomposition of Riemann surfaces). This is therefore an important topic that deserves deeper study. In section \ref{sec:int_h}, we noticed that the continuity condition parallel to the interface $S$ is a verticality condition {\it in the space of boundary fields}, where the boundary field in question is the difference of the {\it pull-backs} of the regional horizontals onto $S$. In this scenario, it seems that a chain of descent could apply to horizontal/vertical decompositions at boundaries of boundaries, etc., with analogies to the nested structures featured in the BV-BFV formalism (when interfaces of multiple codimensions are considered) \cite{cattaneo2014classical, cattaneo2016bv, Mnev:2019ejh}. More generally, it would be valuable to have a precise mapping between, on one side, our reduction and gluing formalisms, which are based on ideas of symplectic reduction, and, on the other, those formalisms which are instead based on the ``opposite'' ideas of (homological, BRST) resolution of the gauge symmetry---such as the BV-BFV formalism of \cite{cattaneo2014classical, cattaneo2016bv, Mnev:2019ejh} and the theory of factorization algebras of \cite{CostelloGwilliam}. \paragraph*{The symplectic flow of non-Abelian stabilizer charges} We have argued, in section \ref{sec:conservation}, that the only (nontrivial) geometrically-determined set of quasilocal charges that survives symplectic reduction is given by the stabilizer charges $Q[\chi_A]$. These charges are only defined at reducible configurations and, in YM, only special configurations are reducible. Reducible configurations constitute ``meager'' submanifolds of the configuration space ${\mathcal{A}}$ and are organized along geometrical structures called strata (this is in analogy to metrics admitting Killing vector fields in general relativity, see \cite{Ebin, Palais, isenberg1982slice}, and \cite{kondracki1983} for the same constructions in YM). Hence, in YM, the study of the symplectic flow associated with these symmetries must be performed intrinsically to these lower strata of ${\mathcal{A}}$, where the charges are defined (physically, this corresponds to a restriction to a sector of the gauge-field configurations determined by a given symmetry property---e.g. rotationally invariant solutions in general relativity). However, in the non-Abelian case, a definition of a connection-form in these strata is not forthcoming, as discussed in section \ref{sec:charges_YM}.
Moreover, in the non-Abelian case, the stabilizers are necessarily field-dependent, and thus the relationship between the flow of the stabilizer transformations and the stabilizer charges, as their would-be Hamiltonian generators, is potentially obstructed. A more detailed study of the geometry of the strata is needed to better characterize this obstruction and fully clarify its relation to the curvature of an associated connection, if one can be defined there. \section*{Acknowledgements} We are thankful to Florian Hopfm\"uller as well as to Ali Seraj and Hal Haggard for valuable comments and feedback on an earlier version of this work. We also thank William Donnelly for encouraging us to study nontrivial topologies and in particular the 1+1 dimensional example. Finally, our gratitude goes to an anonymous referee whose insightful observations and questions allowed us to considerably improve this manuscript to its present form. \paragraph{Author contributions} All authors contributed equally to the present article. \paragraph{Funding information} HG was supported by the Cambridge International Trust. During the course of this work, AR was supported first by the Perimeter Institute and then by the European Union’s Horizon 2020 programme. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Economic Development, Job Creation and Trade. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 801505. \begin{appendix} \section{A quick translation into common notation for \eqref{eq:OmegaH-pp}\label{app:translation}} To conclude, we provide a quick bridge to a more common notation for \eqref{eq:OmegaH-pp} (e.g. \cite{AshtekarStreubel} or \cite{Lee:1990nz}). Let $\mathbb X_{1,2}=\int (X_i^\alpha)_{1,2}\frac{\delta}{\delta A_i^\alpha}$ be two tangent vectors on configuration space. Interpreting them as two infinitesimal variations, we denote their components with the more common notation $(X_i^\alpha)_{1,2} \equiv \delta_{1,2} A_i^\alpha$. Then, the horizontal-vertical decomposition of $\delta_{1,2} A$ is given by \begin{equation} \delta_{1,2} A_i = (h_{1,2})_i + {\mathrm{D}}_i \eta_{1,2} \end{equation} where $h_{1,2}$ is the horizontal part of $\delta_{1,2} A$ and ${\mathrm{D}} \eta_{1,2}$ its vertical part. On-shell of the Gauss constraint and in vacuum, to obtain a complete basis of variations over $\mathrm T^*{\mathcal{A}}$, we define the field space vectors \begin{equation} \delta_{1,2} := ( \delta_{1,2} A, \delta_{1,2} E ) = (h_{1,2}, \eta_{1,2}, \varepsilon^{\text{rad}}_{1,2}, \delta_{1,2} f), \end{equation} where we denoted $\varepsilon^{\text{rad}}_{1,2} = \delta_{1,2} E_{\text{rad}}$ (written simply $\varepsilon_{1,2}$ below), and traded the variation of the Coulombic part of the electric field for that of $f$. Then, \begin{equation} \begin{dcases} \Omega^H(\delta_1 , \delta_2 ) = \int \sqrt{g} \, \mathrm{Tr}\Big( (\varepsilon_1)^i (h_2)_i - (\varepsilon_2)^i (h_1)_i\Big),\\ \Omega^{\partial}(\delta_1 , \delta_2 ) \approx \oint \sqrt{h} \,\mathrm{Tr}\Big( f[\eta_1 , \eta_2] + \delta_1 f \, \eta_2 -\delta_2 f \, \eta_1 \Big). \end{dcases} \end{equation} \section{A brief overview of the slice theorem\label{app:slice}} Denote by $\tilde A$ a reducible configuration and by $\tilde \chi$ or $\tilde \chi_{\tilde A}$ one of its reducibility parameters.
Let us start with the simple observation that since $ (\mathbb i_{\tilde \chi^\#} {\mathbb{d}} A)_{|\tilde A} = \delta_{\tilde \chi} \tilde A = 0 $, it follows from the definition \eqref{eq:hash} that at these configurations of ${\mathcal{A}}$, ${\tilde \chi}^\#{}_{|\tilde A}\in \mathrm T_{\tilde A}{\mathcal{A}}$ vanishes, thus establishing the degeneracy of the gauge orbit $\mathcal O_{\tilde A}\subset {\mathcal{A}}$. Therefore, ${\mathcal{A}}$ is not quite a bona fide fibre bundle, and its base manifold is in fact a stratified manifold, see figure \ref{fig8}. \begin{figure}[t] \begin{center} \includegraphics[scale=0.17]{fig_new_3} \caption{In this representation ${\mathcal{A}}$ is the page's plane and the orbits are given by concentric circles. The field $A$ is generic, and has a generic orbit, $\mathcal{O}_A$. The configuration $\tilde A$ has a nontrivial stabilizer group (i.e. it has non-trivial reducibility parameters), and its orbit $\mathcal{O}_{\tilde A}$ is of a different dimension than $\mathcal{O}_A$. The projection of $\tilde A$ on ${\mathcal{A}}/{\mathcal{G}}$ therefore sits at a qualitatively different point than that of $A$ (a lower-dimensional stratum of ${\mathcal{A}}/{\mathcal{G}}$). Exclusion of the reducible configuration $\tilde A$ gives rise to a fibre bundle structure over ${\mathcal{A}}\setminus \{\tilde A\}$; here $\sigma$ represents a section of ${\mathcal{A}}\setminus \{\tilde A\}$. Whereas $\sigma$ defines a slice through $A$, the slice through $\tilde A$ (not depicted) is an open disk centred at $\tilde A$.} \label{fig8} \end{center} \end{figure} To be more precise, the definition of a fibre bundle requires a local product structure, and while ${\mathcal{A}}$ does not have that product structure, it can be decomposed into submanifolds that do. These manifolds are called the {\it strata} of ${\mathcal{A}}$, and this result is known as a \textit{slice theorem} for ${\mathcal{A}}$ \cite{Ebin, Palais, isenberg1982slice, kondracki1983}. A stratum $\mathcal N_A \subset {\mathcal{A}}$ consists of those connections that have the same stabilizer as $A$, up to conjugacy by ${\mathcal{G}}$. E.g. generic configurations are irreducible, i.e. have $\mathcal{I}_A=\{\mathrm{id}\}$, and therefore belong to the same (top) stratum; the (bottom) stratum of the vacuum configuration ${\cal N}_{A=0}$ is instead constituted by those configurations of maximal stabilizer\footnote{Contrary to general relativity, there is only one configuration (up to gauge) with maximal stabilizer in YM---at least over a simply connected space, e.g. $R\cong \mathbb R^D$, for $G$ a semisimple Lie group. Indeed, suppose $A$ is maximally symmetric, and denote $\{\chi^{(\ell)}_A\}_{\ell = 1}^n$ a basis of $\mathrm{Lie}(\mathcal{I}_{A})\cong \mathrm{Lie}(G)$, that is ${\mathrm{D}}\chi_A^{(\ell)} = 0$. This implies $[F_A,\chi_A^{(\ell)}]=0$ at every point in space. Now, since $\dim(\mathrm{Lie}(\mathcal{I}_A)) = \dim (\mathrm{Lie}(G))$, and since the $\chi^{(\ell)}$ are all linearly independent at every point in space (this is because the equation ${\mathrm{D}}\chi=0$ is first order), we conclude that $[F_A(x), \mathrm{Lie}(G)]=0$. If $G$ is semisimple, this means $F_A=0$, and hence, using that $R$ is simply connected, one concludes that $A = g^{-1}\d g = 0^g \in {\cal O}_{A=0}$. \label{fnt:vac_stratum}} $\mathcal{I}_{A=0} \cong G$, i.e. ${\cal N}_{A=0}={\cal O}_{A=0} = \{ g^{-1}\d g, g\in {\mathcal{G}}\}$.
Intermediate strata have an increasing degree of symmetry, $\{ \mathrm{id} \} \subset \mathcal{I}_A \subset G$. The slice theorem shows that ${\mathcal{A}}$ is regularly stratified by the action of ${\mathcal{G}}$. In particular, all the strata are smooth submanifolds of ${\mathcal{A}}$. A ``slice'' is a notion that reduces to the usual definition of a section of a fibre bundle in the absence of stabilizers, but in their presence it differs in important ways. More precisely, a slice $\mathscr S_A$ at $A\in{\mathcal{A}}$ is an open submanifold of ${\mathcal{A}}$ containing $A$ such that \cite[Def. 1.1]{isenberg1982slice}: (\textit{i}) the \textit{entire} $\mathscr S_A$ is invariant under $\mathcal I_{A}$, i.e. for all $g\in{\cal I_A}$, $R_g\mathscr S_A = \mathscr S_A$; (\textit{ii}) an orbit will intersect $\mathscr S_A$ only for the stabilizers, i.e. $(R_{g}\mathscr{S}_A)\cap\mathscr{S}_A\neq \emptyset$ iff $g\in \mathcal I_{A}$; (\textit{iii}) most importantly, the part of the group that is not in the stabilizer heuristically provides the fibres of its own kind of sub-bundle; namely, for an open neighbourhood around the identity coset ${\cal U}_A\subset {\mathcal{G}}_A := {\mathcal{G}}/\mathcal{I}_{A}$, and a section $\kappa: \mathcal{U}_A\rightarrow {\mathcal{G}}$, the following map $\Gamma$ is a local diffeomorphism: \begin{equation} \Gamma: \mathcal{U}_A\times \mathscr{S}_A\rightarrow {\mathcal{A}},\qquad ([g], s)\mapsto R_{\kappa([g])}s. \label{eq:Gamma-map} \end{equation} So-called ``slice theorems'' ensure that slices exist at all $A\in{\mathcal{A}}$ (cf. \cite{YangMillsSlice, kondracki1983}, see also \cite{fischermarsden}).\footnote{The difficulties in proving the slice theorem all stem from the infinite-dimensional nature of field space: one must show that the orbits are embedded manifolds, that they are ``splitting'' (i.e. the total tangent space splits into the tangent to the orbit plus a closed complement), and that the Riemann exponential map (for some auxiliary gauge-compatible supermetric $\mathbb G$) is a local diffeomorphism. One then constructs the slice---whose tangent complements the vertical directions at the given configuration---by exponentiating some neighbourhood of the zero section of the normal bundle to the orbit (the subbundle of ${\mathrm{T}}{\mathcal{A}}$ which is $\mathbb G$-normal to the orbit in question). The ${\mathcal{G}}$-invariance of $\mathbb G$ guarantees that the slice has the necessary properties above. All of this must be done with due consideration of the relevant convergence properties for spaces with the appropriate H\"older and Sobolev norms, within a given differentiability class. It is beyond the scope of this paper to exhibit these details (cf. \cite{kondracki1983}). This appeal to a super-metric shows once again the naturalness of the SdW notion of horizontality. \label{fnt:slicethm} } At a non-reducible configuration $A$, the stabilizer is $\mathcal I_A = \{ \mathrm{id} \}$ and the definition of a slice collapses to that of a local section. The space of such configurations is open and dense inside ${\mathcal{A}}$, i.e. it is generic. At a reducible configuration ${\tilde A}$, however, new features emerge. Call $\mathcal V_{\tilde A}$ a small open neighbourhood of $\tilde A$ (the image of $\Gamma$).
Then, the demand (\textit{i})---that the entire $\mathscr S_{\tilde A}$ is stable under the action of $\mathcal I_{\tilde A}$---takes two different meanings depending on whether we focus on neighbouring configurations which take the stratum as the ambient manifold or which take the entire ${\mathcal{A}}$ as the ambient manifold, i.e. on $\mathcal V_{\tilde A} \cap\mathcal N_{\tilde A}$ or on $\mathcal V_{\tilde A}$. On the one hand, off the stratum, condition (\textit{i}) means that a suitable $\mathscr S_{\tilde A}$ contains also the non-trivial orbit of a generic $A\in\mathcal V_{\tilde A}\setminus {\mathcal N}_{\tilde A}$ with respect to $\mathcal I_{\tilde A}$. So the slice of a reducible configuration is of ``higher dimension'' than that of a generic configuration.\footnote{ In finite dimensions, one would have dim$(\mathscr S_{ A})=\text{dim}({\mathcal{A}})-\text{dim}(\mathcal{O}_A)$ and dim$(\mathcal{O}_A)=\text{dim}({\mathcal{G}})-\text{dim}(\mathcal I_A)$. In the present context, however, all these dimensions are actually infinite except that of $\mathcal{I}_A$ which is finite and bounded from above by $\mathrm{dim}(G)$. \label{fnt:slicedim}} This phenomenon ultimately underlies our identification of charges. On the other hand, on the stratum, condition (\textit{i}) means that the slice cuts through the orbits within $\mathcal N_{\tilde A}$ in a non-generic manner, that is, so as to ensure that for $\tilde A' \in \mathscr S_{\tilde A} \cap\mathcal N_{\tilde A}$, $\mathcal I_{\tilde A'}$ is equal to $\mathcal I_{\tilde A}$ and not just conjugate to it. The existence of $\mathscr S_{\tilde A}$ means that these ``special'' cuts exist; and indeed they are usually constructed by exponentiating an orthogonality condition with respect to a gauge-compatible supermetric $\mathbb G$ on ${\mathcal{A}}$ (cf. \cite{YangMillsSlice, kondracki1983}, see also \cite{fischermarsden}).\footnote{The exponential is equivariant, and ``transports'' the relevant properties above at $A$ to any other $A'$ in the slice. We discussed a completely analogous construction of transverse sections, this time at generic configurations, in \cite[Sect. 9]{GomesHopfRiello} under the name of Vilkovisky-DeWitt dressing. See there for details.} \section{List of Symbols}\label{app:symbols} \textbf{Space and time}\vspace{-1em} \begin{longtable}{cp{0.8\textwidth}} $\d$ & de Rham differential on $\Sigma$\\ $g_{ij}$, $\sqrt{g}$ & the space-like metric on $\Sigma$ and the square-root of its determinant\\ $h_{ab}$, $\sqrt{h}$ & the induced metric on ${\partial} R$, $h_{ab}:=(\iota_{{\partial} R}^*g)_{ab}$, and the square-root of its determinant\\ $N$ & a time-neighbourhood of $R$, $N = R\times(t_0,t_1)$ \\ $R$ & a (compact) subregion of $\Sigma$, possibly with boundary.
It is assumed to have trivial topology, $\mathring R \cong \mathbb R^D$, and smooth boundary\\ $s^i$ & the outgoing normal to ${\partial} R$\\ $\Sigma$ & a $D$-dimensional Cauchy hypersurface of spacetime \\ $\int$ & integral over $R$\\ $\oint$ & integral over ${\partial} R$\\ $\nabla$ & the space(-time) Levi-Civita connection\\ $\wedge$ & the wedge product between differential forms (often omitted) \end{longtable} \noindent\textbf{Yang-Mills and matter fields}\vspace{-1em} \begin{longtable}{cp{0.8\textwidth}} $A$ & a $\mathrm{Lie}(G)$-valued gauge field configuration, $A\in{\mathcal{A}} = \Omega^1(R, \mathrm{Lie}(G))$ (a gauge potential over $\Sigma$, in temporal gauge)\\ ${\mathcal{A}}$ & the space of all field configurations $A$\\ $(A,E)$ & coordinates on the cotangent bundle ${\mathrm{T}}^*{\mathcal{A}}$\\ ${\mathrm{D}}$ & the gauge-covariant differential ${\mathrm{D}} = \d + A$\\ $E$ & the electric field (the momentum conjugate to $A$). In temporal gauge, $E= \dot A$. See ``symplectic geometry'' below for the definition of $E_{\text{rad}}$ and $E_{\text{Coul}}$\\ $F$ & the field-strength of $A$ (magnetic field), $F = \d A + A \wedge A$ \\ $f$ & the electric flux through ${\partial} R$, $f = E_s\equiv s_i E^i$\\ $g$ & a (finite) gauge transformation, i.e. an element of ${\mathcal{G}}$\\ $G$ & the charge group (finite dimensional, e.g. $G={\mathrm{SU}}(N)$)\\ ${\mathcal{G}}$ & the gauge group (infinite dimensional, ${\mathcal{G}} = \mathcal C^\infty(R, G)$)\\ ${\mathsf{G}}$ & the Gauss constraint ${\mathsf{G}} := {\mathrm{D}}_i E^i - \rho \approx 0$ (see also \eqref{eq:Gauss-Coul} for ${\mathsf{G}}^\text{tot}_f$ and ${\mathsf{G}}_f^{\partial}$)\\ $G_{\alpha,x}(y)$ & the Green's function of the ``SdW boundary-value problem'' (see \eqref{eq:Green}) \\ $J^\mu $ & the $\mathrm{Lie}(G)$-valued current $J^\mu_\alpha = \bar\psi \gamma^\mu\tau_\alpha\psi$, $J^\mu= (\rho,J^i)$\\ $R_g$ & the action of ${\mathcal{G}}$ on ${\mathcal{A}}$ (or $\Phi$, see below)\\ $\mathrm{Tr}$ & a short-hand for the appropriately normalized Killing form on $\frak g$\\ $\gamma^\mu$ & Dirac's gamma matrices\\ $\xi,\eta,\dots$ & infinitesimal ``field-dependent'' gauge transformations, i.e. elements of $\Omega^0({\mathcal{A}}, \mathrm{Lie}({\mathcal{G}}))$ (more generally elements of $\Omega^0(\Phi,\mathrm{Lie}({\mathcal{G}}))$, see below). One says $\xi$ is field-{\it in}dependent if ${\mathbb{d}} \xi = 0$, i.e. if $\xi$ is constant over ${\mathcal{A}}$ and can thus be identified with an element of $\mathrm{Lie}({\mathcal{G}})$.
If ${\mathbb{d}}\xi\neq0$, $\xi$ is said to be field-dependent\\ $\rho$ & the $\mathrm{Lie}(G)$-valued matter charge density (see $J^\mu$)\\ $\tau_\alpha$ & a basis of $\mathrm{Lie}(G)$ normalized so that $\mathrm{Tr}(\tau_\alpha \tau_\beta)=\delta_{\alpha\beta}$\\ $\Phi $ & the total phase space: $\Phi = {\mathrm{T}}^*{\mathcal{A}} \times(\Psi\times\bar\Psi)$ \\ $\psi$ & a matter field; for definiteness, often taken to be a charged Dirac spinor in the fundamental representation of $G$\\ $\bar\psi$ & the conjugate spinor, $\bar\psi = i \psi^\dagger \gamma^0$\\ $\Psi, \bar \Psi$ & the spaces of $\psi$'s and $\bar\psi$'s respectively\\ $[\cdot,\cdot]$ & the Lie bracket in $\frak g$\\ \end{longtable} \noindent\textbf{Field space geometry}\vspace{-1em} \begin{longtable}{cp{0.8\textwidth}} ${\mathbb{d}}$ & the (formal) exterior differential over ${\mathcal{A}}$ (it commutes with $\d$, and satisfies ${\mathbb{d}}^2 \equiv 0$)\\ ${\mathbb{d}}_H$ & the horizontal differential adapted to a covariant horizontal distribution $H=\ker(\varpi)$. Heuristically, it is given by the ``covariant'' differential ${\mathbb{d}}_H = {\mathbb{d}} + \varpi$\\ ${\mathbb{d}}_\perp$ & the horizontal differential specific to $\varpi=\varpi_{\text{SdW}}$\\ $\mathbb E$ & the field-space vector on ${\mathcal{A}}$ built out of $E$ by means of $\mathbb G$, $\mathbb E = \int g_{ij} E^i \frac{\delta}{\delta A_j}$. The vectors $\mathbb E_{\text{rad}}$ and $\mathbb E_{\text{Coul}}$ are similarly defined from $E_{\text{rad}}$ and $E_{\text{Coul}}$. See ``symplectic geometry'' below for their definition\\ $\mathcal F$ & the vertical foliation, i.e. $\mathcal F = \{ \mathcal O_A\}$\\ $\mathbb F$ & the curvature of $\varpi$; it encodes the anholonomicity of the horizontal distribution $H\subset {\mathrm{T}}{\mathcal{A}}$\\ $\mathbb F_{\text{SdW}}$ & the curvature of $\varpi_{\text{SdW}}$\\ $\mathbb G$ & the ``kinetic super-metric'' on ${\mathcal{A}}$ built through the natural $L^2$ metric on $\Sigma$ together with the Killing form $\mathrm{Tr}$\\ $H $ & a transverse complement to $V$ in ${\mathrm{T}}{\mathcal{A}}$, $H\oplus V = {\mathrm{T}}{\mathcal{A}}$. For brevity, it often stands for $H_{\mathbb G}$ (see below)\\ $ H_{\mathbb G}$ & the orthogonal complement to $V$ with respect to $\mathbb G$, $H_{\mathbb G} = V^\perp$\\ $\hat H, \hat V$ & projectors from ${\mathrm{T}}{\mathcal{A}}$ to $H$ and $V$ respectively\\ $\mathbb h$ & a field-space horizontal vector (field), $\mathbb h_A \in H_A$\\ $\mathbb i$ & the field-space inclusion operator, e.g. $\mathbb i_{\mathbb X} {\mathbb{d}} \phi = \mathbb X(\phi)$ for all $\phi\in \Omega^0({\mathcal{A}})$\\ $\mathbb L$ & the field-space Lie derivative. Acting on field-space forms, $\mathbb L_{\mathbb X} = \mathbb i_{\mathbb X} {\mathbb{d}} + {\mathbb{d}} \mathbb i_{\mathbb X}$ (Cartan's formula)\\ $\mathcal O_A$ & the orbit of $A$ under the action of ${\mathcal{G}}$, a subspace of ${\mathcal{A}}$\\ $V$ & the vertical subspace of ${\mathrm{T}}{\mathcal{A}}$, i.e. $V = {\mathrm{T}}\mathcal F$ \\ $\hat V$ & see $\hat H$\\ $\mathbb X, \mathbb Y, \dots$ & a field-space vector (field), $\mathbb X \in \mathfrak{X}^1({\mathcal{A}})$, e.g. $\mathbb X = \int \d x X_i^\alpha(x) \frac{\delta}{\delta A_i^\alpha(x)}$\\ $\varpi$ & a connection 1-form on ${\mathcal{A}}$, $\varpi \in \Omega^1({\mathcal{A}}, \mathrm{Lie}({\mathcal{G}}))$. It is adapted to a choice of decomposition ${\mathrm{T}}{\mathcal{A}} = H\oplus V$ in the sense that $H = \ker(\varpi)$ and $\varpi^\# = \hat V $.
It satisfies the defining properties \eqref{eq:varpi_def}. It can also stand for the pull-back of $\varpi$ to $\Phi$. Often, after section \ref{sec:SdW}, $\varpi$ can stand for $\varpi_{\text{SdW}}$\\ $\varpi_{\text{SdW}}$ & the Singer-DeWitt connection 1-form, i.e. the connection 1-form uniquely adapted to the orthogonal decomposition of ${\mathcal{A}}$ with respect to $\mathbb G$\\ $\varsigma$ & a ``potential'' for $\varpi$, i.e. $\varpi = {\mathbb{d}} \varsigma$. This potential exists only under restrictive hypotheses ($G$ Abelian and $\mathbb F = 0$) \\ $\cdot^\#$ & the infinitesimal version of $R_g$; it maps a field-independent $\xi \in \mathrm{Lie}({\mathcal{G}})$ to a vertical field-space vector, $\xi^\#_A\in V_A$. This map extends canonically to field-dependent gauge transformations\\ $\llbracket \cdot, \cdot\rrbracket$ & the field space Lie bracket between vector fields, $\mathbb L_{\mathbb X} \mathbb Y = \llbracket \mathbb X, \mathbb Y \rrbracket$\\ $\curlywedge$ & the formal antisymmetric tensor (wedge) product between field-space differential forms\\ \end{longtable} \noindent\textbf{Symplectic geometry}\vspace{-1em} \begin{longtable}{cp{0.8\textwidth}} $E_{\text{rad}}, E_{\text{Coul}}$ & the (functional) components of $E$ entering $\theta^H$ and $\theta^V$ respectively\\ $H_\xi$ & the (naive) Noether charge, defined as $H_\xi = \mathbb i_{\xi^\#}\theta$\\ $\mathcal S, \mathcal T, \dots$ & bulk-supported real-valued function(al)s on $\Phi$, i.e. function(al)s on $\Phi$ which do not depend on the value of the fields in an (arbitrary) collar neighbourhood of ${\partial} R$\\ $\mathbb X_{\cal S}, \mathbb X_{\cal T}, \dots$ & the Hamiltonian vector fields associated with $\mathcal S, \mathcal T, \dots$\\ $\theta$ & the sum $\theta = \theta_\text{YM} + \theta_\text{Dirac} \in \Omega^1(\Phi)$. It is the off-shell symplectic potential of YM theory with matter\\ $\theta_\text{Dirac}$ & the off-shell symplectic potential of the matter sector of Yang-Mills theory, a 1-form on $\Psi \times \bar \Psi$\\ $\theta_\text{YM}$ & the tautological 1-form on ${\mathrm{T}}^*{\mathcal{A}}$, it is the off-shell symplectic potential of pure Yang-Mills theory\\ $\varphi$ & for the SdW decomposition of $E$ into $E_{\text{rad}}$ and $E_{\text{Coul}}$, one finds $E^i_{\text{Coul}} = g^{ij} {\mathrm{D}}_j \varphi$\\ $\Omega$ & the off-shell symplectic form of Yang-Mills theory with matter, $\Omega \in \Omega^2(\Phi)$. Obvious variations are $\Omega_\text{YM} = {\mathbb{d}} \theta_\text{YM}$ and $\Omega_\text{Dirac} ={\mathbb{d}}\theta_\text{Dirac} $\\ $\theta^H, \theta^V$ & respectively the horizontal and vertical parts of $\theta$ with respect to a given decomposition ${\mathrm{T}}{\mathcal{A}} = H\oplus V$. Clearly $\theta = \theta^H + \theta^V$\\ $\Omega^H$ & the differential $\Omega^H := {\mathbb{d}} \theta^H$ (it is necessarily horizontal, but it is not necessarily the horizontal part of $\Omega$)\\ $\Omega^{\partial}$ & the differential $\Omega^{\partial} := {\mathbb{d}} \theta^V$ (on-shell of the Gauss constraint it is a pure-boundary term)\\ \end{longtable} \noindent\textbf{Reducible configurations}\vspace{-1em} \begin{longtable}{cp{0.8\textwidth}} ${\mathcal{A}}_\text{EM}$ & the configuration space of the electromagnetic theory taken as the prototypical example of an Abelian YM theory\\ ${\mathcal{G}}_A$ & the quotient ${\mathcal{G}}_A = {\mathcal{G}}/\mathcal{I}_A$.
It is not a group unless $\mathcal{I}_A$ is a normal subgroup of ${\mathcal{G}}$ (which is a non-generic property)\\ ${\mathcal{G}}_\text{EM}$ & the quotient ${\mathcal{G}}_\text{EM} = {\mathcal{G}}/\mathcal{I}_\text{EM}$. It is a group\\ $\mathcal G_\ast$ & a subgroup of ${\mathcal{G}}$ homomorphic to ${\mathcal{G}}_\text{EM}$. In this case, $\kappa: {\mathcal{G}}_\text{EM}\to \mathcal G_\ast \subset {\mathcal{G}}$ is a group homomorphism. The choice of ${\mathcal{G}}_\ast\subset {\mathcal{G}}$ is not unique\\ $\mathfrak{G}_A$ & the quotient of vector spaces $\mathfrak{G}_A = {\mathrm{Lie}(\G)}/\mathrm{Lie}(\mathcal{I}_A)$\\ $\mathcal{I}_A$ & the stabilizer (or ``isotropy'') group of $A$, i.e. the subgroup of ${\mathcal{G}}$ given by those $g\in{\mathcal{G}}$ such that $A^g=A$\\ $\mathcal{I}_\text{EM}$ & the subgroup of constant gauge transformations in electromagnetism, with $\mathrm{Lie}(\mathcal{I}_\text{EM})\cong i \mathbb R$. These transformations stabilize all $A\in{\mathcal{A}}_\text{EM}$\\ $\mathcal N_{\tilde A}$ & the subspace of $\cal N$ composed of all configurations $A$ with stabilizer conjugate to that of $\tilde A$. This space is called a (lower) ``stratum'' of ${\mathcal{A}}$ (the ``top'' stratum, which is dense in ${\mathcal{A}}$, is given by the set of generic configurations with trivial stabilizer; conversely, the ``bottom'' stratum has maximal stabilizer $\mathcal{I}_A \cong G$ and is given by the single orbit $\mathcal O_{A=0}$)\\ $Q[\chi_A]$ & the stabilizer charge, $Q[\chi_A] = \int \sqrt{g} \, \mathrm{Tr}( \rho \chi_A)$, which is defined at reducible $A\in\cal N$\\ $\mathscr S_A$ & a ``slice'' through $A$. The notion of ``slice'' generalizes the notion of section at reducible configurations\\ $Q_\text{EM}[\chi_\text{EM}]$ & the stabilizer charge in electromagnetism. $Q_\text{EM}[\chi_\text{EM}] = \chi_\text{EM} \int \sqrt{g} \, \rho$ is the total electric charge in $R$ (times the constant $\chi_\text{EM}$)\\ $\kappa$ & a section $\kappa:\mathcal U_A \to {\mathcal{G}}$ where $\mathcal U_A$ is a neighbourhood of the identity coset $[\mathrm{id}]\in{\mathcal{G}}_A$. With an abuse of notation, we use the same symbol for what is actually the tangent map ${\mathrm{T}}\kappa : \mathfrak{G}_A \to {\mathrm{Lie}(\G)} $\\ $[\xi]_A, [\eta]_A,\dots$ & an element of $\mathfrak{G}_A$, $[\xi]_A = [\xi + \chi_A]_A$.
It is often simply denoted by $[\xi]$\\ $\chi_A$ & an element of $\mathrm{Lie}(\mathcal{I}_A)$\\ $\chi_\text{EM}$ & an element of $\mathrm{Lie}(\mathcal{I}_\text{EM})$\\ \end{longtable} \noindent\textbf{Gluing}\vspace{-1em} \begin{longtable}{cp{0.8\textwidth}} ${}^S {\mathrm{D}}_a$ & the gauge-covariant Levi-Civita derivative on $S$ associated to $h_{ab}$\\ ${}^S {\mathrm{D}}^2$ & the gauge-covariant Laplace operator on $S$, ${}^S{\mathrm{D}}^2 := h^{ab} {}^S {\mathrm{D}}_a \,{}^S {\mathrm{D}}_b$\\ $\mathbb H_A, \, \mathbb H_\psi$ & the components along the gauge-potential and matter-field directions respectively of a SdW-horizontal field-space vector $\mathbb H \in{\mathrm{T}}\Phi$\\ $H_A,\, H_\psi$ & the components of $\mathbb H_A$ and $\mathbb H_\psi$\\ $h_{ab}$ & the induced metric on $S$, $h_{ab} := (\iota^*_S g)_{ab}$\\ $\mathcal R_\pm$ & the (generalized) Dirichlet-to-Neumann pseudo-differential operator associated with the SdW boundary value problem\\ $R^\pm$ & the two complementary regions in which $\Sigma$ is split, $\Sigma = R^+ \cup_S R^-$\\ $S$ & the common boundary $S = \pm {\partial} R^\pm$\\ $\Sigma$ & the whole Cauchy surface, assumed simply-connected and boundary-less\\ $\bullet^\pm, \bullet_\pm, {}^\pm\bullet$ & indicate which of the regions $R^\pm$ the given object $\bullet$ is associated to (this should {\it not} be confused with the restriction to a certain region of a globally defined object)\\ $\bullet_{|R^\pm}$ & restriction of a globally defined object $\bullet$ to the region $R^\pm$\\ $[\bullet]^\pm_S$ & the boundary mismatch, defined on regional one-form-valued objects (typically $\bullet \in \Omega^1(R^\pm, {\mathrm{Lie}(\G)})$) as $\iota_S^*(\bullet^+ - \bullet^-)$ \end{longtable} \end{appendix} \footnotesize \bibliographystyle{SciPost_bibstyle}
\section{Introduction} Ferroelectric (FE) materials are of great fundamental and applied interest. They are currently used in many technologies such as electric capacitors, piezoelectric sensors and transducers, pyroelectric detectors, non-volatile memory devices, or energy converters~\cite{Lines:1977, Scott:1989, Scott:2007, Garcia:2009, Bowen:2014, Martin:2017,Kim:2018, Chanthbouala:2012}. For decades, most applications have relied on ferroelectric oxide perovskites. However, the need to combine ferroelectricity with other properties such as visible light absorption~\cite{Huang:2010,Li:2017} or long-range magnetic order~\cite{Spaldin:2019, Spaldin:2020} is driving the search for materials and structural classes beyond perovskites. High-throughput (HT) computational screening is a promising approach to search for materials meeting specific properties. It has been successfully used in a wide variety of fields from thermoelectrics~\cite{Chen:2016,Ricci:2017} to topological insulators~\cite{Li:2018,Zhang:2018,Choudhary:2019}. Different HT computing approaches have also been used to identify new ferroelectrics~\cite{Kroumova:2002,Bennett:2012,Garrity:2018, Smidt:2020}. Inspired by these previous studies and using a recently developed large phonon database, we searched for materials exhibiting dynamically unstable polar phonon modes, a signature of potential ferroelectricity. Our HT search identifies a new family of (anti-)ferroelectric materials: the series of anti-Ruddlesden-Popper phases of formula A$_4$X$_2$O, with A a +2 alkali-earth or rare-earth element and X a $-$3 anion (Bi, Sb, As or P). We survey how (anti-)ferroelectricity subtly depends on the chemistry of A$_4$X$_2$O and unveil the physical origin of the polar distortion. Interestingly, the discovered ferroelectrics belong to the new class of hyperferroelectrics~\cite{Garrity:2014} in which spontaneous polarization is maintained under open-circuit boundary conditions. The anti-Ruddlesden-Popper phases also lead to unique combinations of properties, for instance a rare combination of ferroelectricity with ferromagnetism in Eu$_4$Sb$_2$O. \section{Results} A HT database of phonon band structures was recently built for more than 2,000 materials present in the Materials Project and mostly originating from the experimental Inorganic Crystal Structure Database (ICSD)~\cite{Petretto:2018,Petretto:2018a,Jain:2013,Bergerhoff:1987,Zagorac:2019}. Using this database, we searched for non-polar structures presenting unstable phonon modes that could lead to a polar structure. This is the signature of a potential ferroelectric material~\cite{Garrity:2018}. We identified Ba$_4$Sb$_2$O (space group $I4/mmm$) to be such a ferroelectric candidate. Its crystal structure and phonon band structure are shown in Figs.~\ref{fig:distrortions}a and~\ref{fig:phonons}a, respectively. This phase was reported experimentally by Röhr \textit{et al.}~\cite{Rohr:1996} and its crystal structure can be described as analogous to a Ruddlesden-Popper K$_2$NiF$_4$ phase, a naturally layered structure alternating rocksalt (KF) and perovskite (KNiF$_3$) layers, but for which cations and anions have been switched. Inspired by the terminology used for anti-perovskites~\cite{Krivovichev:2008,Bilal:2015}, we will refer to it as an anti-Ruddlesden-Popper phase. In Ba$_4$Sb$_2$O, the large instability of a polar phonon at $\Gamma$ is compatible with ferroelectricity.
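To make this screening step concrete, the following is a minimal sketch of the filter underlying such a search, assuming the phonon data of a candidate material are available as plain numpy arrays. The array layout, tolerance and function name are illustrative choices of ours, not the actual interface of the database of Ref.~\cite{Petretto:2018}. \begin{verbatim}
import numpy as np

FREQ_TOL = -0.1  # THz; modes softer than this count as genuine instabilities


def polar_instabilities_at_gamma(freqs, eigdisps, born_charges):
    """Return the unstable Gamma-point modes carrying a nonzero polarity.

    freqs        : (n_modes,) frequencies at Gamma in THz, with imaginary
                   frequencies stored as negative numbers (a common convention)
    eigdisps     : (n_modes, n_atoms, 3) real eigendisplacements at Gamma
    born_charges : (n_atoms, 3, 3) Born effective charge tensors Z*
    """
    hits = []
    for nu, freq in enumerate(freqs):
        if freq > FREQ_TOL:
            continue  # stable (or only marginally soft) mode
        # Mode polarity p_nu = sum_alpha Z*_alpha . Delta_(alpha,nu);
        # it is nonzero only for infrared-active, i.e. polar, modes.
        p = np.einsum('aij,aj->i', born_charges, eigdisps[nu])
        if np.linalg.norm(p) > 1e-3:
            hits.append((nu, freq, p))
    return hits
\end{verbatim} Applied to the $I4/mmm$ phase of Ba$_4$Sb$_2$O, such a filter would flag the unstable polar mode at $\Gamma$ discussed here.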
Relaxing the structure along this unstable mode confirms the existence of a lower-energy stable phase ($\Delta E = -6.58$~meV/atom) with a non-centrosymmetric space group $I4mm$ and a spontaneous polarization of 9.55 $\mu$C/cm$^{2}$. The parent $I4/mmm$ structure consists of the periodic repetition of alternating rocksalt BaSb and anti-perovskite Ba$_3$SbO layers along what we will refer to as the $z$ direction. In this structure, O atoms are at the center of regular Ba octahedra (see Fig.~\ref{fig:distrortions}b). The polar distortion appearing in the $I4mm$ phase has an overlap of 90\% with the unstable polar mode. When keeping the center of mass of the system fixed, the related atomic displacement pattern, illustrated in Fig.~\ref{fig:distrortions}c, is dominated by the movement along $z$ of the O anion ($\eta_{O}=0.212$) against the apical Ba cations, which move the opposite way ($\eta_{Ba}=-0.029$)\footnote{In the $I4mm$ ground state, the motion of the top apical O atom has been reduced by anharmonic couplings with other modes.}. This cooperative movement of Ba and O atoms is responsible for the spontaneous polarization along $z$, while Sb and the other Ba atoms play a negligible role (reducing the polarization by only 4\%). Contrary to regular Ruddlesden-Popper compounds, which can show incipient in-plane ferroelectricity, the polarization is here along the stacking direction. Also, Ba$_4$Sb$_2$O does not show the antiferrodistortive instabilities ubiquitous in traditional Ruddlesden-Popper phases~\cite{Freedman:2009, Xu:2017b,Zhang:2017b}. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{figures/figure1_crystal_structure/figure1_Ba4Sb2O_structure_combined.pdf} \caption{\label{fig:distrortions} (a) Conventional unit cell representing the anti-Ruddlesden-Popper structure of A$_4$X$_2$O. The A cation atoms (in green) form an octahedral cage with an O atom (in red) in its center. The X anion atoms (in violet) occupy the voids surrounding the cages. Adopting a schematic representation with two neighboring octahedra surrounded by X atoms, we label three potentially metastable phases. In the reference non-polar phase (b), the two O atoms are located in the middle of the octahedral cages of A cations (shaded green), being equidistant from the two apical A cations. Upon the polar distortion (c), the O atoms move upwards in the direction of the apical A cations moving downwards, as indicated by the red and green arrows respectively. This results in a loss of centrosymmetry and, thus, leads to a finite polarization along this direction. In the case of an anti-polar distortion (d), the O and A cation atoms in neighboring cages move in opposite directions, canceling out the polarization.
In the plots, the displacements of the atoms have been amplified compared to their actual values (see text) in order to make them easily visible.} \end{figure} \begin{figure} \center \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=\textwidth]{figures/figure2_phonons/figure2_phonon_dispersion_Ba4Sb2O_parent_longitudinal.pdf} \label{fig:ph1} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=\textwidth]{figures/figure2_phonons/figure2_phonon_dispersion_Sr4Sb2O_parent_longitudinal.pdf} \label{fig:ph2} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=\textwidth]{figures/figure2_phonons/figure2_phonon_dispersion_Ca4Sb2O_parent_longitudinal.pdf} \label{fig:ph3} \end{subfigure} \caption{\label{fig:phonons} Phonon dispersion curves of the $I4/mmm$ A$_4$Sb$_2$O parent structures with the A cation being Ba, Sr or Ca. Unstable phonon modes are highlighted in red. Changing the cation from the heavy Ba atom to the lighter Ca atom leads to the stabilization of the paraelectric parent structure. On top of the phonon dispersion of Ba$_4$Sb$_2$O we plot the longitudinal character $L_{\mathbf{q},\nu}$ to distinguish between longitudinal and transverse optical modes and highlight a discontinuity at $\Gamma$.} \end{figure} Besides Ba$_4$Sb$_2$O, compounds with other alkali-earth atoms such as Ca and Sr have been reported to form in the same structure~\cite{Hadenfeldt:1991,Limartha:1980}. To further explore the role of chemistry in ferroelectricity, we plot in Fig.~\ref{fig:phonons} the phonon band structures of the A$_4$Sb$_2$O series, with A = Ca, Sr and Ba, in their $I4/mmm$ phase. All compounds are insulating. We observe that the polar instability is reduced in Sr$_4$Sb$_2$O in comparison to Ba$_4$Sb$_2$O and is totally suppressed in Ca$_4$Sb$_2$O. The existence of a polar instability is not enough to guarantee a ferroelectric ground state. Other competing phases (e.g., anti-polar distortions) could be more stable than the polar phase. The presence of phonon instabilities at points other than $\Gamma$ (e.g., X or L) indicates the possibility of such competing phases (see Fig.~\ref{fig:phonons}). By following the eigendisplacements of individual and combined unstable modes, we confirm that the lowest energy phase is polar for Ba$_4$Sb$_2$O. Combined with its insulating character (the HSE direct band gap is 1.22 eV) and the moderate energy difference between non-polar and polar states, this confirms a ferroelectric ground state. In Sr$_4$Sb$_2$O, we find that the ground state is instead an anti-polar $C2/m$ phase, as illustrated in Fig.~\ref{fig:distrortions}d (see Fig.~S2 for the entire crystal structure of the anti-polar distortion of Sr$_4$Sb$_2$O). This anti-polar phase is however only $\Delta E =$ 0.57~meV/atom lower in energy than the polar phase. So, the polar phase could be stabilized under moderate electric fields, making Sr$_4$Sb$_2$O a potential antiferroelectric compound~\cite{Rabe:2013}. Using $E_c \approx \Delta E / (\Omega_0 P_s)$, we estimate the critical field $E_c$ in Sr$_4$Sb$_2$O to be $81$ kV/cm, which could be easily accessible in experiment. Turning to the atomic pattern of the anti-polar distortion, we see that it corresponds to a simple modulation of the polar distortion, with O atoms in neighboring octahedra moving in opposite directions and canceling out the macroscopic polarization (see Fig.~\ref{fig:distrortions}d).
Sr$_4$Sb$_2$O would thus appear as a rare example of a Kittel-type antiferroelectric~\cite{Kittel:1951,Rabe:2013,Milesi:2020}. We note an intriguing discontinuity at $\Gamma$ in the unstable phonon branch of Ba$_4$Sb$_2$O (Fig.~\ref{fig:phonons}). We rationalize this discontinuity by noting that the unstable optical mode is polarized along the $z$ axis, so that it is transverse (TO) along $\Gamma$-X and $\Gamma$-Y and longitudinal (LO) along $\Gamma$-Z. This is further illustrated in Fig.~\ref{fig:phonons} by the grey smearing on top of the phonon dispersion curves that indicates the longitudinal character $L_{\mathbf{q},\nu}$. The latter is defined as $L_{\mathbf{q},\nu} = \frac{\hat{\mathbf{q}}\cdot\sum_\alpha Z^{*}_{\alpha}\cdot\mathbf{\Delta}_{\alpha,\nu}}{|\sum_\alpha Z^{*}_{\alpha}\cdot\mathbf{\Delta}_{\alpha,\nu}|}$, where $\hat{\mathbf{q}}$ is the normalized phonon wavevector, $Z^{*}_{\alpha}$ is the Born effective charge matrix and $\mathbf{\Delta}_{\alpha,\nu}$ is the eigendisplacement of atom $\alpha$ for phonon mode $\nu$. Interestingly, we notice that the overlap between the lowest LO and TO mode eigendisplacements is 90\% and that the LO-TO splitting is rather small, so that the longitudinal mode remains strongly unstable. Such a feature was previously reported in LiNbO$_3$~\cite{Veithen:2002} and in hexagonal $ABC$ ferroelectrics, and is the fingerprint of so-called hyperferroelectricity~\cite{Garrity:2014}. This demonstrates that Ba$_4$Sb$_2$O is not only ferroelectric but belongs to the interesting subclass of hyperferroelectrics, in which a spontaneous polarization is maintained even under open-circuit boundary conditions (electrical boundary conditions with vanishing electric displacement field, $D=0$), i.e. even when the unscreened depolarizing field tries to cancel out the bulk polarization. \begin{table}[t] \renewcommand{\arraystretch}{1.3} \centering \setlength\tabcolsep{3.5pt} \begin{tabular}{|c|c|c|c|c|} \hline & \textbf{Bi} & \textbf{Sb} & \textbf{As} & \textbf{P} \\ \hline \textbf{Ba} & \specialcell{anti-ferroelectric \\ $C2/m$, -7.37~$\frac{\text{meV}}{\text{atom}}$} & \specialcell{ferroelectric \\ $I4mm$, -6.58~$\frac{\text{meV}}{\text{atom}}$} & \specialcell{ferroelectric \\ $I4mm$, -5.93~$\frac{\text{meV}}{\text{atom}}$} & \specialcell{anti-ferroelectric \\ $Cmce$, -21.24~$\frac{\text{meV}}{\text{atom}}$}\\ \hline \textbf{Sr} & \specialcell{anti-ferroelectric \\ $C2/m$, -1.74~$\frac{\text{meV}}{\text{atom}}$} & \specialcell{anti-ferroelectric \\ $C2/m$, -0.83~$\frac{\text{meV}}{\text{atom}}$} & \specialcell{paraelectric \\ $C2/m$, -0.65~$\frac{\text{meV}}{\text{atom}}$} & \specialcell{anti-ferroelectric \\ $Cmce$, -2.87~$\frac{\text{meV}}{\text{atom}}$} \\ \hline \textbf{Ca} & \specialcell{paraelectric \\ $I4/mmm$, 0.0~$\frac{\text{meV}}{\text{atom}}$} & \specialcell{paraelectric \\ $I4/mmm$, 0.0~$\frac{\text{meV}}{\text{atom}}$} & \specialcell{paraelectric \\ $I4/mmm$, 0.0~$\frac{\text{meV}}{\text{atom}}$} & \specialcell{paraelectric \\ $I4/mmm$, 0.0~$\frac{\text{meV}}{\text{atom}}$} \\ \hline \end{tabular} \caption{\label{table:classification} Classification of the A$_4$X$_2$O family according to their electric state. Paraelectric refers to a stable structure or a structure with non-polar transitions only; ferroelectric refers to a material with a non-polar to polar transition; anti-ferroelectric refers to a material with a non-polar to non-polar transition whose polar phase is only slightly higher in energy than the lowest phase.
The energy difference between the parent and the lowest child phase, as well as the space group of the ground-state phase, are shown. The parent phase has space group $I4/mmm$; the polar and anti-polar phases have space groups $I4mm$ and $C2/m$ respectively. For A$_4$P$_2$O another, orthorhombic ($Cmce$) anti-polar phase emerges.} \end{table} The chemical versatility of the anti-Ruddlesden-Popper phases is high. Beyond the A$_4$Sb$_2$O oxo-antimonides, syntheses of oxo-phosphides, oxo-arsenides and oxo-bismuthides have been reported (see SI). We have systematically explored computationally the entire range of A$_4$X$_2$O structures (A = Ca, Sr, Ba; X = Sb, P, As, Bi). The phonon band structures are all plotted in Fig.~S1 and the results of the relaxation along all unstable phonon modes are presented in Table~\ref{table:classification}. More information on the phases competing for each chemistry is available in the SI. We found that all Ca-based compounds are paraelectric. Only Ba$_4$As$_2$O and Ba$_4$Sb$_2$O show a polar ground state. The ground states are most of the time anti-polar. We note that we found only few instabilities involving octahedral rotations and tilts in the anti-Ruddlesden-Popper phase, while they are common in standard Ruddlesden-Popper structures such as (Ca,Sr)$_3$Ti$_2$O$_7$~\cite{Zhang:2016a}, Ca$_3$Zr$_2$S$_7$~\cite{Zhang:2017}, La$_2$SrCr$_2$O$_7$~\cite{Zhang:2016}. One of the appeals of perovskites is their strong chemical tunability, as many different chemical substitutions can be performed to tune the ferroelectric properties~\cite{Benedek:2015, Zhang:2020}. It appears that similar tunability could be available for A$_4$X$_2$O. Moreover, as the anti-Ruddlesden-Popper structure described here corresponds to $n=1$ in the traditional series A$_{3n+1}$X$_{n+1}$O$_{n}$, one could consider tuning properties by varying $n$ to higher values, possibly by thin-film growth~\cite{Sharma:1998, Nie:2014}. We now turn to the origin of the polar distortion in A$_4$X$_2$O. We especially focus on the A$_4$Sb$_2$O series, which shows a transition in the nature of the ground state from strongly polar for Ba to anti-polar for Sr and non-polar for Ca. The anti-Ruddlesden-Popper structure shows a polar displacement of an anion in an octahedral cationic cage, and it is natural to make the analogy with traditional ferroelectric perovskites such as BaTiO$_3$, where a cation moves in an anionic octahedral cage. However, the analysis of the Born effective charges hints at a very different physical mechanism in the two situations. While ferroelectric oxide perovskites can show anomalously high Born effective charges ($Z^*_{Ti}$=+7.25 $e$, $Z^*_{O}$=$-$5.71 $e$)~\cite{Ghosez:1998}, the Born effective charges in Ba$_4$Sb$_2$O are closer to the nominal charges ($Z^*_{Ba}$=+2.67 $e$, $Z^*_{O}$=$-$2.71 $e$). This indicates a more ionic bonding between the O and alkali-earth atoms and that dynamical charge transfer is not as important as in oxide perovskites~\cite{Ghosez:1996}. This conclusion is further confirmed by the crystal orbital Hamilton population (COHP) analysis~\cite{Dronskowski:1993,Maintz:2016,Nelson:2020} showing the rather weak, ionic character of the Ba-O bonds in Ba$_4$Sb$_2$O, in contrast to the strong covalent character of the Ti-O bonds in BaTiO$_3$, whose ICOHP energy is one order of magnitude higher than that in Ba$_4$Sb$_2$O.
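As a back-of-envelope check that such nearly nominal charges can still sustain the reported polarization, one can sum the dominant dipoles per unit volume, $P \approx (e/\Omega)\sum_\kappa Z^*_\kappa \Delta u_\kappa$. The sketch below uses the Born charges quoted above and the displacements quoted just below; the primitive-cell volume $\Omega \approx 270$~\AA$^3$ is a rough assumption of ours, not a value taken from this work. \begin{verbatim}
# Back-of-envelope polarization of Ba4Sb2O from the two dominant atoms.
E_CHARGE = 1.602e-19            # elementary charge in C
Z_O,  DU_O  = -2.71, +0.40e-10  # Born charge (e) and displacement (m) of O
Z_BA, DU_BA = +2.67, -0.20e-10  # apical Ba moving the opposite way
OMEGA = 270e-30                 # assumed primitive-cell volume (~270 A^3)

dipole = E_CHARGE * (Z_O * DU_O + Z_BA * DU_BA)  # dipole along z, in C.m
P = abs(dipole) / OMEGA                          # polarization in C/m^2
print(P * 1e2)  # ~9.6 uC/cm^2, close to the computed 9.55 uC/cm^2
\end{verbatim}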
In passing, we note that, while the Born effective charges are lower in anti-Ruddlesden-Popper structures, the large atomic displacements (e.g., 0.40 \AA\ for O and $-$0.20 \AA\ for one of the Ba atoms in Ba$_4$Sb$_2$O) maintain a reasonable polarization. This analysis points to a ferroelectric distortion driven by a geometrical effect, with the simple picture of an O atom relatively free to move in an oversized cationic cage. To further confirm this picture, we study the interatomic force constants (IFCs) in real space. We observe that the on-site IFC of the O atom, quantifying the restoring force that it feels when displaced with respect to the rest of the crystal, is close to zero in Ba$_4$X$_2$O along the $z$ (out-of-plane) direction, and one order of magnitude smaller than in-plane. This highlights that the O atoms are almost free to move along $z$ in the $I4/mmm$ phase. The close-to-nominal Born effective charges and the very low on-site IFC are both characteristics of geometrically-driven ferroelectricity as described in fluoride perovskites~\cite{Garcia:2014}. The geometric nature of the instability naturally explains why going from Ba to Sr and Ca weakens the polar instability. Indeed, the on-site IFC of the O atom along $z$ increases as we go from Ba to Ca (0.17, 0.99, 1.79 eV/\AA$^2$) and as the cation-to-O distance along $z$ progressively decreases ($d_{AO} =$ 3.08, 2.88, 2.66 \AA). The smaller room for the O movement weakens the polar instability for Sr compared to Ba and suppresses it for Ca. The local character of the structural instability in real space is confirmed by its fully delocalized character in reciprocal space (Fig.~\ref{fig:phonons}). The local and geometric nature of the structural instability is also consistent with the hyperferroelectric character~\cite{Khedidji:2020} and the possible emergence of antiferroelectricity. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{figures/figure3_strain/figure3_deltaE_and_polarization_vs_strain.pdf} \caption{\label{fig:pdos} Energy difference between the child and parent phases (solid lines) and polarization (dashed lines) of Ba$_4$Sb$_2$O (black curves) and BaO (red curves) as a function of in-plane strain, computed with the PBE functional. The data were fitted with a linear function for the polarization and a 4th-order polynomial for the energy difference. Regular BaO and strained, elongated Ba$_4$Sb$_2$O octahedra are shown.} \end{figure} In A$_4$X$_2$O anti-Ruddlesden-Popper compounds, the O atoms are surrounded by an octahedron of A atoms, a local environment similar to that experienced in the AO rocksalt phases. The latter are constituted of regular octahedral units and are paraelectric. However, it has been predicted theoretically~\cite{Bousquet:2010} and recently confirmed experimentally~\cite{Goian:2020} that rocksalt alkali-earth oxides can become ferroelectric beyond a critical compressive epitaxial strain. Fig.~\ref{fig:pdos} shows the energy difference between the paraelectric and ferroelectric phases, together with the polarization, as a function of compressive strain for BaO (red) and Ba$_4$Sb$_2$O (black). The ferroelectric phase becomes favored for BaO above a compressive strain of 1\%. On the other hand, the unstrained Ba$_4$X$_2$O has Ba$_6$O octahedra distorted to the equivalent of around $-$6\% strain in BaO. Applying a tensile strain on Ba$_4$X$_2$O moves the octahedral geometry towards that of unstrained rocksalt BaO and lowers the polar instability.
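The fits quoted in the caption of Fig.~\ref{fig:pdos} are plain least-squares polynomial fits, and the critical strain at which the ferroelectric phase becomes favored corresponds to a zero crossing of the fitted energy difference. A minimal sketch with placeholder values (the actual DFT data are those plotted in the figure):
\begin{verbatim}
import numpy as np

# Placeholder (strain %, Delta E meV/atom) samples; the real DFT
# values are those plotted in Fig. 3.
eta = np.array([-6.0, -4.0, -2.0, 0.0, 2.0])
dE = np.array([-30.0, -15.0, -7.5, -6.6, -2.0])

# 4th-order polynomial fit of the energy difference, as in the caption.
p_dE = np.poly1d(np.polyfit(eta, dE, deg=4))

# Critical strain = real zero crossings of the fitted curve.
roots = p_dE.roots
print(roots[np.isreal(roots)].real)

# Linear fit of the polarization (placeholder values, C/m^2).
pol = np.array([0.40, 0.32, 0.24, 0.16, 0.08])
slope, intercept = np.polyfit(eta, pol, deg=1)
\end{verbatim}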
Additionally, the $c/a$ ratio describing the octahedron elongation is $\sim$1.2, close to that of the ferroelectric phase of BaO at that strain. This highlights that in Ba$_4$X$_2$O the surrounding atoms impose an internal, chemical strain on the Ba$_6$O cages. This natural strain induces ferroelectricity, as previously highlighted in strained BaO. We note that such a level of strain (6\%) would be very difficult to reach in epitaxial rocksalt films. In Ba$_4$Sb$_2$O, however, the polarization increases much more slowly with strain than in BaO, due to the presence of Sb atoms, which limit the deformation of the octahedra in the ferroelectric phase (see SI). While we focused on the X=Sb antimonide series here, the Ba$>$Sr$>$Ca trend in polar distortion is present across all chemistries X=P, As, Bi and Sb (see SI). Compared to traditional perovskite-related structures, the A$_4$X$_2$O family offers opportunities to achieve properties that have traditionally been difficult to combine with ferroelectricity in perovskites. Anti-Ruddlesden-Popper materials typically show smaller band gaps than oxide perovskites. While tetragonal $P4mm$ BaTiO$_3$ shows an indirect optical band gap of about 3.2 eV (1.67~eV in GGA, between O 2p and Ti 3d states), we estimated the band gap of Ba$_4$Sb$_2$O to be 1.22~eV using the HSE hybrid functional (0.67 eV in PBE). The band structure of Ba$_4$Sb$_2$O is shown in Fig.~\ref{fig:bandstructure}, highlighting a direct gap at Z between Ba 5d and O 2p states. Other A$_4$X$_2$O compounds show similar band gaps, in the range from 0.57 to 1.00~eV in PBE (see Fig.~S4 in the SI). Such ferroelectrics, with small band gaps compatible with visible light, could be very useful in the field of ferroelectricity-driven photovoltaics~\cite{Huang:2010,Li:2017,Young:2012,Peng:2017,Grinberg:2013,He:2016}. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{figures/figure4_electronic_band_structure/figure4_Ba4Sb2O_child107_bandstructure_VASP_PBE.pdf} \caption{\label{fig:bandstructure} Electronic band structure of Ba$_4$Sb$_2$O in its $I4mm$ polar phase along the high-symmetry directions, computed with the PBE functional and a scissor correction of 0.55 eV. The direct band gap at the Z point (1.22 eV) is marked by red and green points for the conduction and valence bands.} \end{figure} Another grand challenge has been to combine ferroelectricity with magnetic long-range order in magnetoelectric multiferroics. The traditional mechanism of polar instability on the B site of a perovskite has been deemed difficult to combine with magnetism, since the non-magnetic d$^0$ character of the B-site transition metal is often necessary to favor ferroelectricity~\cite{Hill:2000,Spaldin:2007}. Combining a polar distortion on one site with magnetism on another site, as in EuTiO$_3$ or BiFeO$_3$~\cite{Shvartsman:2010,Spaldin:2019}, or moving towards improper ferroelectricity as in YMnO$_3$, have led to magnetoelectric multiferroics~\cite{Fennie:2005,Varignon:2013,Varignon:2019}. The geometrically driven polar instability demonstrated in the anti-Ruddlesden-Popper structure offers an alternative route to multiferroicity. Magnetic +2 rare-earth atoms often substitute for alkali-earth atoms, and Eu$_4$Sb$_2$O has been experimentally reported to form in the anti-Ruddlesden-Popper structure~\cite{Schaal:1998}. Computing the phonon band structure and relaxing the structure along the unstable modes, we found that Eu$_4$Sb$_2$O is ferroelectric.
Similar to Ba$_4$Sb$_2$O, the geometric polar instability in Eu$_4$Sb$_2$O directly involves the movement of the non-magnetic O against the magnetic apical Eu$^{2+}$. This is likely to couple magnetism and ferroelectricity. Magnetic ordering computations show that Eu$_4$Sb$_2$O exhibits a ferromagnetic ground state with an easy axis pointing along the $c$ direction, i.e., along the polarization. We estimate the magnetic Curie temperature to be $\sim$24 K (see Methods). Most magnetoelectric multiferroic materials, including the most studied one, BiFeO$_3$, are antiferromagnetic. Despite their technological importance, there are very few examples of materials combining ferromagnetic and ferroelectric order~\cite{Spaldin:2005}, and the few known ones are double perovskites (e.g., Pb$_2$CoWO$_6$~\cite{Brixel:1988} or the R$_2$NiMnO$_6$/La$_2$NiMnO$_6$ heterostructures~\cite{Zhao:2014}) where magnetism and ferroelectricity come from different sites. Eu$_4$Sb$_2$O, like its parent rocksalt EuO, is a ferromagnetic insulating oxide~\cite{Wei:2019}. The coexistence of ferromagnetism and ferroelectricity has just been confirmed experimentally in epitaxially strained EuO films~\cite{Goian:2020}, and it appears naturally in the Eu$_4$Sb$_2$O anti-Ruddlesden-Popper phase. The magnetic space group $I4m'm'$ is compatible with linear magnetoelectric coupling, and the magnetoelectric tensor has the following form~\cite{Gallego:2019}: \begin{equation} \alpha_{ME} = \begin{pmatrix} \alpha_{xx} & 0 & 0 \\ 0 & \alpha_{xx} & 0 \\ 0 & 0 & \alpha_{zz} \end{pmatrix} \end{equation} More quantitatively, the computation of the linear magnetoelectric tensor in Eu$_4$Sb$_2$O confirms that a coupling is present with a non-negligible value: $\alpha_{xx} = 0.1$ ps/m (ionic contribution 0.08 ps/m, electronic contribution 0.02 ps/m) and $\alpha_{zz}=0.016$ ps/m (ionic contribution 0.006 ps/m, electronic contribution 0.01 ps/m). We note that other rare-earth-based anti-Ruddlesden-Popper phases are known to exist: Eu$_4$As$_2$O~\cite{Wang:1977}, Eu$_4$Bi$_2$O~\cite{Honle:1998}, Yb$_4$As$_2$O~\cite{Burkhardt:1998}, Yb$_4$Sb$_2$O~\cite{Klos:2018} and Sm$_4$Bi$_2$O~\cite{Nuss:2011}. It is possible that, in addition to Eu$_4$Sb$_2$O, other anti-Ruddlesden-Popper compounds are magnetoelectric multiferroics. \section{Conclusions} Following a data-driven approach based on a high-throughput search within a database of phonons, we have identified a family of A$_4$X$_2$O (A=Ba, Sr, Ca, Eu and X=Bi, Sb, As, P) materials forming in an anti-Ruddlesden-Popper structure and showing (anti-)ferroelectric properties. The new mechanism of polar distortion involves the movement of an anion in a cation octahedron. This distortion is geometrically driven and controlled by the natural strain present in the cation octahedron. This mechanism leads to hyperferroelectricity, but also offers the possibility to combine ferroelectricity with properties uncommon in traditional perovskite-based structures, such as small band gaps or magnetism. More specifically, we show that Eu$_4$Sb$_2$O exhibits a rare combination of ferromagnetic and ferroelectric order coupled through a linear magnetoelectric coupling. The wide range of chemistries forming in the anti-Ruddlesden-Popper structure offers a tunability similar to that of perovskite structures in terms of strain, chemistry and heterostructures, and opens a new avenue for ferroelectrics research. \section{Methods} The high-throughput search for novel ferroelectrics was performed using a recently published phonon database~\cite{Petretto:2018}.
We first selected the unstable materials presenting at least one phonon mode $m$ with imaginary frequencies $\omega_m(\mathbf{q})$ within a $\mathbf{q}$-point region of the Brillouin zone. For each of these materials and modes, we focused on the high-symmetry $\mathbf{q}$-points commensurate with a 2$\times$2$\times$2 supercell. We generated a set of new structures by moving the atoms in that supercell according to the displacements corresponding to the different modes and $\mathbf{q}$-points. The symmetry of each new structure was analyzed using the spglib library~\cite{Togo:2018} with a tolerance of $10^{-6}$~\AA\ on positions and 1$^\circ$ on angles. Then, the new structures were categorized as polar or non-polar depending on their point group. Finally, after relaxing all the structures in the set, we classified the materials as paraelectric (when all the structures in the set are non-polar and the polarization is thus always zero), ferroelectric (when the ground state is polar and hence possesses a finite polarization), or anti-ferroelectric (when the ground state is non-polar but there exists at least one polar phase in the set slightly higher in energy). In the latter case, the material can be driven to the polar phase upon application of a strong enough electric field and thus acquire a non-zero polarization. DFT calculations were performed with the ABINIT~\cite{Gonze:2020} and VASP~\cite{Kresse:1996,Kresse:1996a} codes. The PBEsol exchange-correlation functional was used throughout, unless otherwise noted. PseudoDojo norm-conserving scalar-relativistic pseudopotentials [ONCVSP v0.3]~\cite{Hamann:2013,vanSetten:2018} were used in ABINIT. The Brillouin zone was sampled using a density of approximately 1500 points per reciprocal atom. All the structures were relaxed with strict convergence criteria, i.e., until all the forces on the atoms were below $10^{-6}$~Ha/Bohr and the stresses below $10^{-4}$~Ha/Bohr$^3$~\cite{Petretto:2018}. The phonon band structures were computed within the DFPT formalism as implemented in ABINIT~\cite{Gonze:1997,Gonze:1997a}, using a $\mathbf{q}$-point sampling density similar to the $\mathbf{k}$-point one, though with $\Gamma$-centered grids. The polarization was computed with both the Berry-phase and Born effective charge approaches. GGA-PBE PAW pseudopotentials were used in VASP~\cite{Kresse:1999}. The structures were relaxed until the forces were below $10^{-3}$~eV/\AA. The cut-off energy was set to 520~eV and the electronic convergence criterion to $10^{-7}$~eV. The $\mathbf{k}$-point sampling was similar to the one used in ABINIT. Both codes yield essentially the same results in the identification of the ground-state phase; using the PBE exchange-correlation functional does not change the identified ground-state phase either. The Lobster calculations were performed on top of the VASP DFT calculations~\cite{Dronskowski:1993,Maintz:2016,Nelson:2020}. We used the following basis functions from pbeVaspFit2015 for the projections: Ca (3p, 3s, 4s), Sr (4p, 4s, 5s), Ba (5s, 5p, 6s), Sb (5p, 5s), O (2p, 2s), Ti (3d, 3p, 4s). The $\mathbf{k}$-point grids for these calculations were at least 12$\times$12$\times$3 for A$_4$X$_2$O and 13$\times$13$\times$13 for BaTiO$_3$. The magnetic structure calculations for Eu$_4$Sb$_2$O were performed with the VASP code. The Eu pseudopotential includes 17 valence electrons. For the DFT+$U$ calculations, the parameters were set to $U$=6.0~eV and $J$=0.0~eV to accurately describe the localized Eu $f$ orbitals.
Electronic convergence to $10^{-8}$~eV was obtained with an energy cut-off of 600~eV and a 6$\times$6$\times$3 $\mathbf{k}$-point grid; the results were double-checked with a 12$\times$12$\times$6 $\mathbf{k}$-point grid. The Curie temperature was estimated using the random-phase approximation~\cite{Pajda:2001,Wei:2019}. The phonon band structure of Eu$_4$Sb$_2$O was computed through the finite-displacement method as implemented in Phonopy~\cite{Togo:2018} using a 2$\times$2$\times$2 supercell. The electronic and ionic parts of the magneto-electric tensor were computed with the magnetic field~\cite{Bousquet:2011} and finite displacements~\cite{Iniguez:2008} approaches, respectively. The magnetic symmetries and the form of the magneto-electric tensor were identified via the Bilbao Crystallographic Server. \section*{Acknowledgements} This work was funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DE-AC02-05-CH11231: Materials Project program KC23MP. H.~P.~C.~M. acknowledges financial support from F.R.S.-FNRS through the PDR Grant HTBaSE (T.1071.15). J.G. acknowledges funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 837910. The authors thank the Consortium des Équipements de Calcul Intensif en Fédération Wallonie Bruxelles (CÉCI) for computational resources. Additionally, the present research benefited from computational resources made available on the Tier-1 supercomputer of the F\'ed\'eration Wallonie-Bruxelles, infrastructure funded by the Walloon Region under grant agreement n\textsuperscript{o}~1117545.
\section{Introduction} \label{sec:intro} In graph theory, a network refers to a collection of vertices linked together via edges. It provides a powerful tool for modeling real-world entities as well as their complex interactions \cite{Newman:2010,EasleyKleinberg:2010}. Many problems can be investigated from this viewpoint. In scientific and engineering disciplines in particular, transportation networks, communication networks, the World Wide Web, biological patterns, social connections, neural networks, metabolic networks, and pathological networks are all representative examples that can be modeled and studied from a network point of view \cite{JankowskiLorekJOW:2016}. Networks in various domains exhibit diverse forms. To understand the dynamics of these apparently different networks, the study of their invariant characteristics is necessary. Among these characteristics, community structures are widely believed to be common and important in real networks \cite{GirvanNewman:2002}: the vertices fall naturally into groups with close intra-group relations but estranged inter-group relations. Identifying community structures helps distinguish the pairs of vertices that are more likely to be connected from those that are less likely to be connected, which has both theoretical value and tremendous practical applications \cite{Li:INS:2015,Li:NEUCOM:2015}, and it has become an imperative research topic. To detect the underlying community structure of a given network, the ``modularity'' measure, developed by Girvan and Newman \cite{GirvanNewman:2002}, is routinely applied. The measure is expressed as the difference between the observed fraction of edges connecting vertices within each community and the expected fraction if the edges were distributed uniformly at random. Extensive studies on both empirical and simulated networks have shown that larger modularity values usually lead to better vertex partitions. Maximizing the modularity measure thereby provides a mathematically well-posed approach to revealing the underlying community structures of networks \cite{MuchaRMPO:2010,FacchettiIA:2011,SzellaLT:2010,Li:INS:2013}. Unfortunately, optimizing the modularity measure is mathematically difficult. It is known that exactly maximizing the measure is an NP-complete problem over all graphs of a given size, and exact optimization is only feasible for networks with up to a few hundred vertices \cite{Brandes06}. For larger networks, approximate solutions have to be sought to ensure scalability, often at the price of losing accuracy. Despite the success achieved by state-of-the-art methods, it remains highly desirable to develop a method that is applicable to large networks with high accuracy. In this work we develop a novel spectral relaxation based method to maximize the modularity measure. Coupled with an iterative rounding strategy and a simple constrained power method, it provides a fast, high-quality solution for detecting communities in large networks. Another key benefit of the method is that it mainly involves basic matrix-vector operations, which can easily be performed in parallel with high efficiency. The method was implemented and tested on a parallel computing cluster with $128$ CPU cores. A nearly linear improvement in running speed was observed when increasing the number of computing nodes, which strongly verifies the potential of the method for partitioning very large networks. The rest of this paper is structured as follows.
Section 2 introduces the background, including the modularity maximization model and related work. Section 3 describes our successive spectral relaxation based approach in detail. Section 4 reports the empirical evaluation results, followed by the conclusion in Section 5. \section{Background} \label{sec:background} \subsection{Graph partition and community detection} Graph partition and community detection are two related problems, yet with significant differences. Graph partition often arises in computer science, mathematics and physics. The problem is well defined and has been studied since the 1960s \cite{KernighanLin:1970}. It usually refers to the task of splitting the vertices of a network into a fixed number of groups, or into groups of given sizes, with the objective of minimizing the number of edge connections between groups. In comparison, community detection is a much newer problem that has been studied mainly in the recent decade, but it appears in much wider areas of the natural and social sciences, including physics, chemistry, biology, social networks, and so on. In community detection, the number of groups and the size of each group are not specified in advance, but are determined by the network itself. The objective is to find a ``natural fault line'' along which a given network divides into partitions \cite{Newman:2010}. A number of community detection models and computational methods have been developed in the literature \cite{Newman:2010}. These models address different aspects of networks and lead to different computer algorithms. In this paper, our work focuses on the modularity model and proposes an effective algorithm that runs efficiently on parallel computing platforms. \subsection{Modularity maximization on undirected networks} \label{sec:modularity:undirected} For a candidate division of a network, the \emph{modularity} measure is routinely applied to quantify the quality of the partition. Good divisions, with high modularity values, have dense intra-community connections (edges between vertices in the same group) but sparse inter-community connections (edges between vertices in different groups). The modularity measure expresses the concentration of edges inside each group compared with a uniform distribution of edges between each pair of vertices regardless of group partitions. Let us start the discussion from a simplified case. Assume $G=\left( V,E \right)$ is an undirected network, with a set of vertices $V=\left\{ v_1,v_2,\cdots,v_n \right\}$ and a set of undirected edges $E$. Let $a_{ij}=1$ if there is an edge connecting $v_i$ and $v_j$, and $a_{ij}=0$ otherwise. For each vertex $v_i$, denote by $d_{i}=\sum_{j=1}^{n}a_{ij}$ its degree. Also denote by $m=\frac{1}{2}\sum_{i=1}^{n}d_{i}$ the total number of edges in the network. Given a candidate assignment of network vertices into groups, the modularity model assumes that the degree associated with each vertex is preserved. Under the uniform random selection principle, the expected number of connections between any two vertices $v_i$ and $v_j$ is $\frac{d_{i}d_{j}}{2m}$. Therefore the observed value minus the expectation is given by $a_{ij}-\frac{d_{i}d_{j}}{2m}$. Summing this quantity over all pairs of vertices in the same group gives the modularity measure.
The modularity measure, denoted by $Q$, is defined by% \begin{equation} Q=\frac{1}{2m}\sum_{i,j=1}^{n}\left[ a_{ij}-\frac{d_{i}d_{j}}{2m}\right] \delta_{ij} \label{equ:modularity:undirected} \end{equation}% with $\delta_{ij}=1$ if the vertices $v_i$ and $v_j$ are assigned to the same group and $\delta_{ij}=0$ otherwise. The modularity value has the range $\left[-\frac{1}{2},1\right)$. It is positive when the observed number of intra-group edges is greater than the expectation on the basis of chance. It has been verified through numerous real and simulated studies that larger modularity values are correlated with better community structures in networks. Therefore, optimizing the modularity measure provides a practical and principled way of partitioning networks. By searching for the partition with the largest modularity value, one can detect community structures in networks precisely. \subsection{Modularity maximization on directed networks} \label{sec:modularity:directed} With trivial modification, the modularity model on undirected networks can be applied to directed networks as well \cite{LeichtNewman:2008}, in which the vertices typically have different in-degrees and out-degrees. Consider a directed network with $n$ vertices $\left\{ v_1,v_2,\cdots,v_n \right\}$. Under the null model, the expected number of edges from vertex $v_{j}$ to vertex $v_{i}$ is $\frac{d^{in}_{i}d^{out}_{j}}{m}$, where $d^{in}_{i}$ is the in-degree of $v_i$ and $d^{out}_{j}$ is the out-degree of $v_j$. Let $a_{ij}=1$ if there is a directed edge from $v_j$ to $v_i$ and $a_{ij}=0$ otherwise. Similarly denote by $m$ the number of directed edges in the network. Then the modularity measure on the directed network is defined by: \begin{equation} Q=\frac{1}{m}\sum_{i,j=1}^{n}\left[ a_{ij}-\frac{d^{in}_{i}d^{out}_{j}}{m}\right] \delta_{ij}. \label{equ:modularity:directed} \end{equation}% Note that, different from the modularity measure on undirected networks, there is no factor of $2$ in the denominator of the model. The modularity models on both undirected and directed networks can be extended trivially to weighted networks, by taking $a_{ij}$ to be the weight of the edge between $v_i$ and $v_j$. Here we omit the detailed discussion. \subsection{Modularity maximization methods} \label{sec:modularity:methods} Partitioning networks by exactly maximizing the modularity measure is a known NP-complete problem \cite{Brandes06}. The required computation grows exponentially with the size of the network. Despite the challenge of NP-completeness, a number of exact methods for exhaustive optimization were developed, such as the integer programming approach and the column generation method \cite{AgarwalKempe:2008,AloiseCCHPL:2010}; these achieved limited success on networks with up to a few hundred vertices on conventional computing platforms. To ensure tractability on large networks, approximate algorithms have to be sought. Through relaxation, Agarwal \& Kempe designed a linear programming method \cite{AgarwalKempe:2008}. On small networks, the method reported very accurate results. Unfortunately, it is still computationally demanding and does not scale to large networks. The simulated annealing method was also investigated for this problem \cite{GuimeraAmaral:2005}. Simulated annealing treats the quantity of interest as an energy and simulates the cooling process of solids until the system reaches the state with the lowest energy.
The method had excellent empirical performance and reported the best known results on many real networks. Unfortunately, although it partially lessens the computational requirement, the burden is still prohibitive for very large networks. Greedy heuristics were investigated on large networks. A straightforward way is to start with each vertex in a group of its own. The method then successively combines pairs of groups into one group. At each step it chooses the two groups whose combination gives the largest modularity increase, or the smallest decrease if no choice gives an increase. Eventually all vertices are merged into a single group. Then we go back over all the intermediate steps, select the state with the highest modularity value and obtain the partition result \cite{CalusetNM:2004,WakitaTsurumi:2007}. A related heuristic is based on edge \textit{betweenness} \cite{GirvanNewman:2002}. For an edge, its betweenness is given by the number of shortest paths between all pairs of vertices that pass through the edge. The heuristic recursively seeks and removes, one by one, the edges with the highest betweenness, until the network breaks up into single vertices. The procedure therefore generates a dendrogram with hierarchical divisions from a single group down to all isolated vertices, from which the intermediate division possessing the highest modularity value is chosen. Overall these greedy methods run fast and give moderately good divisions of networks. But in practice the two simple heuristics have been superseded by alternatives that often find higher modularity values \cite{Newman:2010}. More sophisticated search heuristics were specially designed for the modularity maximization problem. Recently, Noack \& Rotta developed a multi-level search method \cite{RottaNoack:2011}, which involves coarsening- and refinement-based heuristics. Another method, the Louvain method \cite{BlondelGLL:2008}, uses a two-phase iterative search strategy: it first looks for small communities by optimizing modularity locally, and then aggregates the nodes of each community to construct a new network. These two search heuristics have reported very accurate results on many benchmark networks and are regarded as state-of-the-art solutions for partitioning large networks \cite{AynaudBGL:2013}. \section{A Successive Spectral Relaxation Method} \label{sec:sar} \subsection{Conventional spectral relaxation} \label{sec:sar:conventional} The idea of spectral relaxation can be applied to community detection problems \cite{Newman:2006,WhiteSmyth:2005}. To illustrate the method, let us start from the special case of dividing an undirected network into just two groups. Use $s_{i}=\pm 1$ to denote the group membership of vertex $v_i$. Then we have $\sum_{i}s^{2}_{i}=n$ and $\delta_{ij}=\frac{1}{2}\left(s_{i}s_{j}+1\right)$, so that \begin{equation} Q=\frac{1}{4m}\sum_{i,j=1}^{n}\left[a_{ij}-\frac{d_{i}d_{j}}{2m}\right]\left(s_{i}s_{j}+1\right)=\frac{1}{4m}s^{T}Bs \label{equ:modularity:binary:undirected1} \end{equation} where $B$ is an $n\times n$ \textit{modularity} matrix with elements $b_{ij}=a_{ij}-\frac{d_{i}d_{j}}{2m}$. The sums of the elements in each row and in each column of $B$ are all zero, which implies that the modularity value of an undivided network is always zero. Label all eigenvalues of the modularity matrix in non-increasing order $\lambda_{1}\geq \lambda_{2}\geq \cdots \geq \lambda_{n}$, and let $u_{i}$ be the unit eigenvector associated with eigenvalue $\lambda_{i}$.
Then any vector can be expanded as $s=\sum_{i}a_{i}u_{i}$, where $a_{i}=u_{i}^{T}s$, and we have \begin{equation} Q=\frac{1}{4m}\sum_{i=1}^{n}a_{i}u_{i}^{T}B\sum_{j=1}^{n}a_{j}u_{j} =\frac{1}{4m}\sum_{i=1}^{n}\left(u_{i}^{T}s\right)^{2}\lambda_{i} \label{equ:modularity:binary:undirected2} \end{equation} To maximize the value of $Q$, the vector $s$ should obviously be chosen such that as much weight as possible is concentrated on the term involving the largest eigenvalue $\lambda_{1}$. Correspondingly, the best choice of $s$ would be proportional to the first eigenvector $u_1$. Unfortunately, under the constraint that each element of $s$ only takes the value $1$ or $-1$, such proportionality is generally infeasible, which makes the optimization a hard problem. A simple rounding strategy is often applied and found effective in practice, with which the vertices are divided into two groups based on the signs of the elements of the eigenvector $u_1$. That is, $s_{i}=+1$ if $u_{1i}>0$ and $s_{i}=-1$ otherwise, where $u_{1i}$ is the $i$-th element of $u_1$. The network partition problem is thus reduced to estimating the eigenvector $u_{1}$ of the modularity matrix $B$. The eigenvector can be calculated efficiently by the power iteration method. Starting with a random vector $v^0$, the power iteration method updates the vector through matrix-vector multiplication and normalization: \begin{equation} v^{i+1}=\frac{Bv^{i}}{\left\| Bv^{i} \right\|}, \end{equation} with $\left\| \cdot \right\|$ denoting the $\ell_2$-norm of a vector. After a number of iterations, the process gradually approaches the dominant eigenvector $v$, i.e., the eigenvector associated with the dominant eigenvalue $\lambda$ of largest magnitude. If the dominant eigenvalue $\lambda>0$, it is the first eigenvalue $\lambda_1$ and the dominant eigenvector $v$ is just the desired eigenvector $u_1$. The vertices are then divided into two communities based on the signs of the elements of $u_1$. If the dominant eigenvalue $\lambda<0$, however, it is $\lambda_n$ and the dominant eigenvector $v$ is $u_n$ instead of the desired $u_1$. In this case, we can shift the matrix $B$ to $B^{\prime}=B+\left| \lambda \right|I$, where $I$ is the identity matrix of the same size as $B$. $B^{\prime}$ has the eigenvalues $\lambda_{1}+\left| \lambda \right|\ge \lambda_{2}+\left| \lambda \right|\ge\cdots \ge\lambda_{n}+\left| \lambda \right|$ and the same eigenvectors $u_1,u_2,\cdots,u_n$ as $B$, so applying the power iteration method to $B^{\prime}$ returns the desired eigenvector $u_1$. When a directed network needs to be divided into two communities, we again define $s_{i}=+1$ if vertex $v_{i}$ is to be assigned to one community and $s_{i}=-1$ otherwise, which similarly leads to the maximization of \begin{equation} Q=\frac{1}{2m}\sum_{i,j=1}^{n}s_{i}b_{ij}s_{j}=\frac{1}{2m}s^{T}Bs \label{equ:modularity:binary:directed} \end{equation} with respect to $s\in \left\{-1,+1\right\}^n$, where the matrix $B=\left(b_{ij}\right)_{i,j=1}^{n}$ and $b_{ij}=a_{ij}-\frac{d^{in}_{i}d^{out}_{j}}{m}$. The modularity matrix $B$ in the case of directed networks is, in general, not symmetric. To restore symmetry, we maximize \[ Q=\frac{1}{4m}s^{T}\left(B+B^{T}\right)s \] instead. The spectral relaxation method can then be applied based on the first eigenvector of the matrix $B+B^{T}$.
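To make the above concrete, the following is a minimal sketch of the two-way spectral division for an undirected network (not the parallel implementation evaluated in Section \ref{sec:evaluation}). It never forms the dense modularity matrix: since $B=A-\frac{1}{2m}dd^{T}$, the product $Bv=Av-\frac{d^{T}v}{2m}d$ only needs the adjacency matrix $A$ and the degree vector $d$, and the Gershgorin-style bound $2\max_i d_i$ on the spectral radius of $B$ is used as a safe shift so that the power iteration converges to $u_1$ rather than $u_n$:
\begin{verbatim}
import numpy as np

def spectral_bipartition(A, iters=200, seed=0):
    """Two-way split from the leading eigenvector of
    B = A - d d^T / (2m), computed matrix-free.
    A may be a dense NumPy array or a SciPy sparse matrix."""
    d = np.asarray(A.sum(axis=1)).ravel().astype(float)
    two_m = d.sum()
    shift = 2.0 * d.max()            # Gershgorin bound on |lambda_n|
    v = np.random.default_rng(seed).standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v - d * (d @ v) / two_m + shift * v   # (B + shift*I) v
        v /= np.linalg.norm(v)
    return np.where(v > 0, 1, -1)    # conventional sign-based rounding

def modularity(A, s):
    """Modularity Q (Section 2.2) of a two-group assignment
    s in {+1,-1}^n, using Q = (s^T A s - (d^T s)^2 / 2m) / 4m."""
    d = np.asarray(A.sum(axis=1)).ravel().astype(float)
    two_m = d.sum()
    return (s @ (A @ s) - (d @ s) ** 2 / two_m) / (2.0 * two_m)
\end{verbatim}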
For network partitions into more than two groups, this two-way division scheme is performed recursively on each group \cite{Newman:2010,LiSchuurmans:2011}. The division process repeats until there is no further increase in the value of $Q$, which happens when the modularity matrix has no positive eigenvalues. \subsection{Successive relaxation and the constrained power method} \label{sec:sar:cpm} The conventional spectral relaxation method discussed in Section \ref{sec:sar:conventional} divides network vertices into two partitions according to the signs of the elements of the first eigenvector of the modularity matrix, while completely ignoring their magnitudes. However, the magnitudes contain important information. It is evident from Equ. (\ref{equ:modularity:binary:undirected2}) that a large magnitude contributes significantly to the modularity value and therefore gives strong confidence in deciding the group membership of the corresponding vertex. In contrast, a small magnitude makes it difficult to set the membership of the vertex due to its trivial influence on the modularity value. Considering the important information carried by the magnitudes, it is intuitively desirable and technically feasible to take them into consideration and design a \textit{successive relaxation} method for network partition. Initially, the successive relaxation method proceeds like the conventional relaxation approach and applies the power iteration method to compute the first eigenvector of the modularity matrix. The difference is that, rather than making the division decision in a single batch, we only set the group membership of the vertices with large magnitudes. The decisions for the remaining vertices with small magnitudes are postponed. In the forthcoming iterations, a residual problem is generated; its structure is roughly the same as that of the first problem, but with fewer un-partitioned vertices, and we can deal with it in a similar way. The process is repeated until no vertices are left un-partitioned. Mathematically, in the first iteration the spectral relaxation method solves $\max s^{T}Bs$, the same problem as in Section \ref{sec:sar:conventional}. Again we apply the classical power method to obtain the first eigenvector $u_1$ of the modularity matrix $B$. Then, instead of deploying the conventional rounding strategy, partition decisions are only made on those elements with sufficiently large magnitudes, i.e., \begin{equation} s_{i}=\left\{ \begin{array}{l} +1 \\ -1 \\ unknown% \end{array}% \right. \begin{array}{l} if \enspace u_{1i}\ge\sigma \\ if \enspace u_{1i}\le-\sigma \\ otherwise% \end{array}% \label{equ:ssr:threshold} \end{equation} where $\sigma$ is a positive threshold value, often set to one. Denote by $s_+$ the rounded elements whose values have been held fixed in the first iteration, and by $s_-$ the remaining elements awaiting assignment. Re-organize $s=\left( \begin{array}{c} s_{+} \\ s_{-}% \end{array}% \right)$. The new optimization objective becomes% \begin{equation} \left( \begin{array}{c} s_{+} \\ s_{-}% \end{array}% \right) ^{T}\left( \begin{array}{cc} B_{++} & B_{+-} \\ B_{-+} & B_{--}% \end{array}% \right) \left( \begin{array}{c} s_{+} \\ s_{-}% \end{array}% \right) \end{equation}% where $B_{++},B_{+-},B_{-+}$ and $B_{--}$ are the four submatrices of $B$. Note that the value of $s_{+}^{T}B_{++}s_{+}$ remains constant and can thus be ignored.
The objective thus becomes, equivalently, the maximization of% \begin{equation} L=s_{-}^{T}B_{--}s_{-}+2s_{-}^{T}B_{-+}s_{+} \label{equ:cpm:objective} \end{equation}% with respect to $s_{-}$, subject to the length constraint $\left\Vert s_{-}\right\Vert = \sqrt{k}$, where $k$ denotes the number of elements in $s_{-}$. To solve the new problem, we designed a \textit{constrained power method} with a simple update rule:% \begin{equation} s_{-}^{i+1}=\frac{B_{--}s_{-}^{i}+B_{-+}s_{+}}{\left\Vert B_{--}s_{-}^{i}+B_{-+}s_{+}\right\Vert } \times \sqrt{k}. \label{equ:cpm:update} \end{equation} The update rule can be explained intuitively from the viewpoint of gradients. Maximizing $L$ requires updating $s_{-}$ along the gradient direction and re-normalizing the vector to satisfy the norm constraint. The update is guaranteed to converge, which happens when the gradient direction $\nabla L$ is parallel (proportional) to the current estimate of $s_{-}$: $\nabla L \varpropto s_{-}$. By taking the derivative of $L$ with respect to $s_{-}$, we also know $\nabla L \varpropto B_{--}s_{-}+B_{-+}s_{+}$. Therefore, it holds that $s_{-} \varpropto B_{--}s_{-}+B_{-+}s_{+}$. Considering that $s_{-}$ has length $\sqrt{k}$ and $\frac{B_{--}s_{-}^{i}+B_{-+}s_{+}}{\left\Vert B_{--}s_{-}^{i}+B_{-+}s_{+}\right\Vert }$ has unit length, we have $s_{-}=\pm \frac{B_{--}s_{-}+B_{-+}s_{+}}{\left\Vert B_{--}s_{-}+B_{-+}s_{+}\right\Vert } \times \sqrt{k}$. Taking the positive sign, we obtain the update rule in Equ. (\ref{equ:cpm:update}). As in the first iteration, given the relaxed solution of $s_{-}$, a similar partial rounding procedure is adopted: only those elements with sufficiently large magnitudes are rounded and fixed. In this way, the iterative rounding procedure and the constrained power method are performed successively to determine the group membership of the vertices. The process stops when $s_{-}$ becomes empty, i.e., when all vertices have been allocated into two groups. \subsection{Complexity} \label{sec:sar:complexity} For a given network with $n$ vertices and $m\le kn$ edges, where $k$ is a constant, a known result, based on the work of \cite{Newman:2010}, is that the classical power method needs $O\left(n \right)$ matrix-vector multiplications to calculate the leading eigenvector of the modularity matrix, and each multiplication needs $O\left(n \right)$ floating point operations when the sparsity of the network is taken into consideration. In total, the spectral relaxation method therefore needs $O\left(n^2\right)$ operations to partition a network into two groups based on the modularity model. The complexity of the successive spectral relaxation method can be analyzed similarly, with a small modification. Instead of using a threshold value $\sigma$ as in Equ. (\ref{equ:ssr:threshold}) to determine the borderline of rounding, we assume that a fraction $\epsilon \left(0<\epsilon <1\right)$ of the unrounded vertices get their group membership decided in each iteration. As for the power method, in the first iteration the constrained power method has a complexity of $O\left(n^2\right)$ for a sparse network with $n$ variables. In the subsequent iteration the residual problem has $n\left(1-\epsilon \right)$ unrounded vertices, and the constrained power method therefore requires $O\left(n^2\left(1-\epsilon\right)^2\right)$ floating point operations to converge.
Repeating the argument, the complexity of the successive spectral relaxation method is given by: \[ n^2+n^2\left(1-\epsilon\right)^2+n^2\left(1-\epsilon\right)^4+\cdots = \frac{1}{2\epsilon-\epsilon^2}n^2, \] which lies between $\frac{1}{2\epsilon}n^2$ and $\frac{1}{\epsilon}n^2$. So we have the following result: \begin{lemma} To bipartition a network with $n$ vertices and $m\le kn$ edges, where $k$ is a constant, the successive spectral relaxation method has a complexity of $O\left(\frac{1}{\epsilon}n^2 \right)$, where $\epsilon$ is the fraction of variables rounded in each iteration. \end{lemma} \subsection{Relationship with the projected power method} \begin{figure}[!t] \centering \includegraphics[width=3.2in]{ppm.pdf} \caption{A graphical illustration of the projected power method.} \label{fig:ppm} \end{figure} The proposed constrained power method can be derived rigorously from the projected power method \cite{XuLS:2009}, which investigates the generic optimization problem \begin{equation} \max_v v^{T}Av \quad \mbox{ subject to } \quad \left\Vert v \right\Vert=r, Gv=c. \label{equ:ppm:problem} \end{equation} where $A$ is a positive definite matrix, $r$ is a positive value, and $Gv=c$ denotes the linear constraints exerted on $v$. As shown in Fig. \ref{fig:ppm}, all feasible solutions $v$ of the maximization problem are vectors starting from the origin and ending on the sphere $\left\Vert v \right\Vert=r$. Let $w$ be the vector from the origin to its projection point on the hyperplane $Gv=c$. It is easily seen that every feasible solution $v$ can be written as $v=u+w$, where the vector $u$ lies on the hyperplane $Gu=0$ and $\left\Vert u \right\Vert=\sqrt{r^2-w^{T}w}$. The projection of a vector onto the subspace $Gu=0$ is given by $Pv$, where $P=I-G^{T}\left(GG^{T} \right)^{-1}G$ is the projection matrix and $I$ denotes the identity matrix of appropriate size. In each iteration, given the current $v^{i}$, the projected power method stretches the vector by multiplying with $A$, projects the stretched vector $Av^{i}$ onto the subspace $Gu=0$, and re-normalizes the projection to length $\sqrt{r^2-w^{T}w}$ to obtain $u^{i+1}$. Finally we obtain $v^{i+1}$ by summing $u^{i+1}$ and $w$. That is, the projected power method has the update rule: \begin{equation} v^{i+1}=\frac{PAv^i}{\left\Vert PAv^i \right\Vert}\times \sqrt{r^2-w^{T}w} + w. \label{equ:ppm:update} \end{equation} It can be proved that at each step the estimate of $v$ gets nearer to the maximum stretching direction of $A$ while staying feasible. The convergence is theoretically guaranteed, and the convergence speed is usually very fast in practice \cite{XuLS:2009}. The update rule of the constrained power method in Equ. (\ref{equ:cpm:update}) can be derived rigorously from the update rule of the projected power method in Equ. (\ref{equ:ppm:update}). Without loss of generality, we assume $B_{--}$ is a positive definite matrix\footnote{If $B_{--}$ is not positive definite, its diagonal elements can be shifted by a positive value to provide the positive definiteness, as shown in Section \ref{sec:sar:conventional}.}. We then re-write the optimization objective in Equ. (\ref{equ:cpm:objective}) as \begin{equation} L=v^{T}Av-z \label{equ:maxobj2} \end{equation} where $A=\left[ \begin{array}{cc} B_{--} & B_{-+}s_{+} \\ \left( B_{-+}s_{+}\right) ^{T} & z% \end{array}% \right] $ and $v=\left[ \begin{array}{c} s_- \\ 1% \end{array}% \right]$. With a sufficiently large value of $z$, the positive definiteness of the matrix $A$ can be ensured.
When $z$ is given, the objective becomes, equivalently, the maximization of $v^{T}Av$ subject to $\left\Vert v \right\Vert=\sqrt{k+1}$ and $v_{k+1}=1$. By exploiting the structure of the problem in Equ. (\ref{equ:maxobj2}), we are able to get a simple solution by applying the update rule of the projected power method in Equ. (\ref{equ:ppm:update}). Decompose a feasible solution $v$ into $v=u+w$, where $w=\left[0,\cdots,0,1 \right]^{T}$ and $u$ is a vector satisfying $u_{k+1}=0$ and $\left\Vert u \right\Vert=\sqrt{k}$. Given the estimate in the $i$-th iteration $v^i=\left[ \begin{array}{c} s_-^i \\ 1% \end{array}% \right]$, we stretch it to $Av^{i}=\left[ \begin{array}{c} B_{--}s_{-}^{i}+B_{-+}s_{+} \\ \left( B_{-+}s_{+}\right) ^{T}s_{-}^{i}+z % \end{array}% \right]$. Projecting the vector onto the hyperplane $u_{k+1}=0$ gives $\left[ \begin{array}{c} B_{--}s_{-}^{i}+B_{-+}s_{+} \\ 0 % \end{array}% \right]$. Re-normalizing the result and summing with $w$, we obtain the new estimate $v^{i+1}= \left[ \begin{array}{c} s_{-}^{i+1}\\ 1 \end{array} \right]= \left[ \begin{array}{c} \frac{B_{--}s_{-}^{i}+B_{-+}s_{+}}{\left\Vert B_{--}s_{-}^{i}+B_{-+}s_{+}\right\Vert} \times \sqrt{k}\\ 1 % \end{array}% \right]$, which exactly gives the update rule of the constrained power method in Equ. (\ref{equ:cpm:update}). \subsection{Parallelizability} \label{sec:sar:parallel} Parallel computing refers to the type of computation in which calculations are carried out simultaneously on multiple computing nodes \cite{Quinn:1994}. It has been employed for decades, mainly in high-performance computing, and has helped solve many difficult problems that cannot be tackled by conventional serial computing models. Nowadays, parallel computing is becoming more and more important in handling large-scale data processing applications. A key concern for the success of parallel computing is to divide the execution of an algorithm into parallel portions that can be distributed and solved independently. In practice, algorithms differ greatly in their level of parallelizability, varying from easily parallelizable to not parallelizable at all. Another concern lies in the communication and synchronization costs between computing nodes, which also affect the parallelizability of an algorithm significantly. The constrained power method proposed in this paper can be parallelized easily and effectively. The method runs iteratively, and each iteration mainly involves matrix-vector multiplication and addition operations. The matrix operands are easily split into smaller blocks so that the operations on each block can be executed simultaneously on different computing nodes. The final result is obtained by merging the results from all blocks, with communication and synchronization costs small enough to be neglected. As a result, the proposed method has high efficiency in parallel execution. Empirically, in our evaluation, a nearly linear improvement in running speed was observed when increasing the number of computing nodes. \section{Evaluation} \label{sec:evaluation} We evaluated the proposed method thoroughly on both real and synthetic networks, with three objectives: to evaluate the partition quality (i.e., the modularity values) on real networks, to evaluate the method's sensitivity towards changes of structure on synthetic networks, and to evaluate the method's running speed and parallel execution efficiency on a very large network.
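Before presenting the results, we condense the procedure of Section \ref{sec:sar} into a short serial sketch (the parallel implementation distributes the matrix-vector products across nodes). Two details below are pragmatic choices of this sketch rather than parts of the derivation: the relaxed vector is scaled to norm $\sqrt{n}$ before thresholding so that $\sigma=1$ is meaningful, and at least one element is rounded per iteration to guarantee termination.
\begin{verbatim}
import numpy as np

def ssr_bipartition(B, sigma=1.0, iters=100, seed=0):
    """Successive spectral relaxation for a two-way split.

    B     : (n, n) symmetric modularity matrix, shifted if needed so
            that the power iteration converges to u_1 (Section 3.1).
    sigma : partial-rounding threshold of Equ. (7).
    Returns s in {+1, -1}^n.
    """
    n = B.shape[0]
    s = np.zeros(n)                      # 0 marks "not yet decided"
    free = np.arange(n)                  # unrounded vertex indices

    # First iteration: classical power method on B.
    v = np.random.default_rng(seed).standard_normal(n)
    for _ in range(iters):
        v = B @ v
        v /= np.linalg.norm(v)
    v *= np.sqrt(n)                      # so |v_i| ~ 1 on average

    while free.size > 0:
        # Partial rounding: fix only the confident entries.
        conf = np.abs(v) >= sigma
        if not conf.any():               # guard against stalling
            conf[np.argmax(np.abs(v))] = True
        s[free[conf]] = np.sign(v[conf])
        free = free[~conf]
        if free.size == 0:
            break
        # Residual problem, solved by the constrained power method
        # s_- <- sqrt(k) * (B__ s_- + B_+ s_+) / ||...||  (Equ. (10)).
        Bmm = B[np.ix_(free, free)]
        bias = B[np.ix_(free, np.flatnonzero(s))] @ s[s != 0]
        k = free.size
        v = np.ones(k)
        for _ in range(iters):
            v = Bmm @ v + bias
            v = v / np.linalg.norm(v) * np.sqrt(k)
    return s
\end{verbatim}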
\subsection{Modularity values on real networks} \label{sec:evaluation:modularity} Eighteen networks were used to evaluate and compare the empirical performance of different partition methods in terms of modularity values. These networks, listed in Table \ref{tab:networks}, are from two collections publicly available on the Internet: Mark Newman's website\footnote{http://www-personal.umich.edu/$\sim$mejn/netdata/} and the Stanford large network dataset collection\footnote{https://snap.stanford.edu/data/} \cite{LeskvecKrevl:2014}. The two collections include directed/undirected and weighted/unweighted networks and cover a wide range of real applications including social networks, co-purchase networks, email networks, cooperation networks, citation networks, product networks, etc. The sizes of the networks vary significantly, from less than one hundred vertices and edges to over three million vertices and sixteen million edges. In the literature, these networks have been widely used as benchmarks in evaluating community detection algorithms. \begin{table*}[!t] \caption{Benchmark networks from Mark Newman's personal website and the Stanford large network dataset collection.} \label{tab:networks} \begin{center} \begin{small} \begin{tabular}{lccl} \hline Networks & $\#(vertices)$ & $\#(edges)$ & Description \\ \hline karate & $34$ & $78$ & Friendship relations of members in a karate club \\ dolphins & $62$ & $159$ & Frequent associations of dolphins \\ lesmis & $77$ & $254$ & Character interactions from \textit{Les Mis\'erables} \\ polbooks & $105$ & $441$ & Co-purchase of politics books from \textit{Amazon.com} \\ adjnoun & $112$ & $425$ & Adjacency of adjectives and nouns in \textit{David Copperfield} \\ football & $115$ & $613$ & American college football games network (2000) \\ jazz & $198$ & $2,742$ & Jazz musicians network \\ email & $1,133$ & $5,451$ & An email communication network \\ ca-GrQc & $5,242$ & $28,980$ & Collaboration network of arxiv general relativity \\ ca-HepTh & $9,877$ & $51,971$ & Collaboration network of arxiv high energy physics theory \\ ca-HepPh & $12,008$ & $237,010$ & Collaboration network of arxiv high energy physics \\ ca-AstroPh & $18,772$ & $396,160$ & Collaboration network of arxiv astro physics \\ ca-CondMat & $23,133$ & $186,936$ & Collaboration network of arxiv condensed matter \\ cit-HepTh & $27,770$ & $352,807$ & Paper citation network of arxiv high energy physics theory \\ cit-HepPh & $34,546$ & $421,578$ & Paper citation network of arxiv high energy physics \\ com-DBLP & $317,080$ & $1,049,886$ & DBLP collaboration network \\ com-Amazon & $334,863$ & $925,872$ & Amazon product network \\ cit-Patents & $3,774,768$ & $16,518,948$ & US patent citation network (1975-1999) \\ \hline \end{tabular} \end{small} \end{center} \vskip -0.1in \end{table*} \begin{table}[!t] \caption{Comparison of modularity values obtained by different methods.
(For computational reasons, SA used an annealing parameter value of $0.99$ on networks with less than $5,000$ vertices, and a value of $0.90$ on networks with more than $5,000$ vertices.)} \label{tab:qvalues} \begin{center} \begin{small} \begin{tabular}{lcccccc} \hline Networks & CG & LP & SA & MLS & LOU & SSR \\ \hline karate & $.420$ & $.420$ & $.420$ & $.420$ & $.420$ & $.420$ \\ dolphins & $.529$ & $.529$ & $.527$ & $.528$ & $.527$ & $.527$ \\ lesmis & $.560$ & $.560$ & $.556$ & $.557$ & $.560$ & $.560$ \\ polbooks & $.527$ & $.527$ & $.527$ & $.527$ & $.527$ & $.527$ \\ adjnoun & $.308$ & $.308$ & $.308$ & $.308$ & $.308$ & $.308$ \\ football & $.605$ & $.605$ & $.604$ & $.605$ & $.605$ & $.605$ \\ jazz & $.445$ & $.445$ & $.445$ & $.445$ & $.445$ & $.445$ \\ email & $-$ & $-$ & $.575$ & $.575$ & $.576$ & $.576$ \\ ca-GrQc & $-$ & $-$ & $.853$ & $.861$ & $.863$ & $.863$ \\ ca-HepTh & $-$ & $-$ & $.765$ & $.770$ & $.770$ & $.770$ \\ ca-HepPh & $-$ & $-$ & $.640$ & $.657$ & $.658$ & $.663$ \\ ca-AstroPh & $-$ & $-$ & $.609$ & $.627$ & $.622$ & $.630$ \\ ca-CondMat & $-$ & $-$ & $.712$ & $.729$ & $.730$ & $.734$ \\ cit-HepTh & $-$ & $-$ & $.630$ & $.656$ & $.659$ & $.658$ \\ cit-HepPh & $-$ & $-$ & $.709$ & $.725$ & $.726$ & $.729$ \\ com-DBLP & $-$ & $-$ & $-$ & $-$ & $.822$ & $.819$ \\ com-Amazon & $-$ & $-$ & $-$ & $-$ & $.925$ & $.927$ \\ cit-Patents & $-$ & $-$ & $-$ & $-$ & $.810$ & $.813$ \\ \hline \end{tabular} \end{small} \end{center} \vskip -0.1in \end{table} The performance of the proposed successive spectral relaxation method (denoted by SSR) was compared with several state-of-the-art algorithms, including the linear programming method (LP) of Agarwal and Kempe \cite{AgarwalKempe:2008}, the simulated annealing method (SA) of Guimer{\`a} and Amaral \cite{GuimeraAmaral:2005}, the multi-level search method (MLS) of Noack \& Rotta \cite{NoackRotta:2008}, and the Louvain method (LOU) of Blondel et al. \cite{BlondelGLL:2008}. Besides, the optimal results obtained from the column generation method (CG) of Aloise et al. \cite{AloiseCCHPL:2010}, which finds the optimal solution but only works on networks with up to a few hundred vertices, are also included as a reference when available. Table \ref{tab:qvalues} compares the modularity values obtained by the different methods. On small networks with fewer than $1,000$ vertices, for which the optimal modularity values are known from the CG method, all methods reported highly effective results, equal or at least very close to the optimal values. On networks with more than $1,000$ vertices, there are no known optimal modularity values due to the prohibitive computation required by the exact methods. Besides, some approximation methods may also require huge amounts of computation. For example, on most of the networks the LP method could not finish execution within $24$ hours on our platform, and the corresponding results were therefore left blank in Table \ref{tab:qvalues}. Among all results available on networks with more than $1,000$ vertices, the MLS, LOU and SSR methods reported very similar modularity values. In comparison, the results of the SA method were inferior. One possible reason is that, to lessen the computation, the SA method used an annealing parameter value of $0.90$ when partitioning networks with more than $5,000$ vertices, rather than the value of $0.99$ used when partitioning smaller networks.
\subsection{Sensitivity on Synthetic Networks} \label{sec:evaluation:sensitivity} \begin{figure}[!t] \centering \begin{subfigure}[t]{2.5in} \centering \includegraphics[width=2.5in]{fig_nmi01} \caption{$n=1000, \bar{d}=10, \Delta=25$}\label{fig:nmi:a} \end{subfigure} \begin{subfigure}[t]{2.5in} \centering \includegraphics[width=2.5in]{fig_nmi02} \caption{$n=1000, \bar{d}=20, \Delta=50$}\label{fig:nmi:b} \end{subfigure}\\ \begin{subfigure}[t]{2.5in} \centering \includegraphics[width=2.5in]{fig_nmi03} \caption{$n=5000, \bar{d}=10, \Delta=25$}\label{fig:nmi:c} \end{subfigure} \begin{subfigure}[t]{2.5in} \centering \includegraphics[width=2.5in]{fig_nmi04} \caption{$n=5000, \bar{d}=20, \Delta=50$}\label{fig:nmi:d} \end{subfigure} \caption{Comparison of NMI values by different methods. Horizontal: mixing parameter values (0.1--0.5). Vertical: NMI values (0.5--1.0). (For computational reasons, SA used an annealing parameter value of $0.99$ on networks with $1,000$ vertices, and a value of $0.90$ on networks with $5,000$ vertices.)} \label{fig:nmi} \end{figure} Besides the modularity values, our second goal concerns the method's sensitivity to changes in network structure. We synthesized artificial networks under different structural settings. With known network structures, we are able to evaluate the performance of the different methods in revealing the communities by comparing their results with the ground truth. In our experiments, twenty networks were generated with the LFR method \cite{LancichinettiFR:2008}. The networks have various numbers of vertices ($n=1000, 5000$), average degrees ($\bar{d}=10,20$), maximum degrees ($\Delta=25,50$) and mixing parameters ($0.1, 0.2, \cdots, 0.5$). A mixing parameter gives the ratio of inter-community edges over all edges. A parameter value of $0.5$ is the border beyond which the network community structures are no longer significant, in the sense that the vertices have fewer intra-community connections than inter-community connections \cite{RadicchiCCLP:2004}. The normalized mutual information measure, or $NMI$, is routinely applied to quantify the quality of community detection results when the true structure is known \cite{DanonDDA:2005}. Given the true partition $P_{A}$ and a candidate partition $P_{B}$, let $r_{a}$ be the number of communities in $P_{A}$ and $r_{b}$ the number of communities in $P_{B}$. Let $n_{kk^{\prime}}$ be the number of vertices that appear in community $k$ of $P_A$ and are also found in community $k^{\prime}$ of $P_B$. Denote $n_{k.}=\sum_{k^{\prime}}n_{kk^{\prime}}$ and $n_{.k^{\prime}}=\sum_{k}n_{kk^{\prime}}$. The $NMI$ measure quantifies the quality of the partition $P_{B}$ by: \begin{equation} NMI_{A,B} =\frac{-2\sum_{k=1}^{r_{a}}\sum_{k^{\prime}=1}^{r_{b}}n_{kk^{% \prime }}\log \left( \frac{n_{kk^{\prime }}n}{n_{k.}n_{.k^{\prime }}}\right) }{\sum_{k=1}^{r_{a}}n_{k.}\log \left( \frac{n_{k.}}{n}\right) +\sum_{k^{\prime }=1}^{r_{b}}n_{.k^{\prime }}\log \left( \frac{n_{.k^{\prime }}}{n}\right) }. \end{equation} The $NMI$ value lies in the range $\left[0,1 \right]$. A larger $NMI$ value indicates a higher quality of the candidate partition in complying with the true partition. If the two partitions are identical, the $NMI$ value reaches $1$. If they are completely independent, the $NMI$ value approaches $0$.
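The $NMI$ can be computed directly from the contingency table $n_{kk^{\prime}}$; the following is a minimal sketch, which assumes integer community labels starting at zero:
\begin{verbatim}
import numpy as np

def nmi(labels_a, labels_b):
    """NMI between two partitions, following the formula above.

    labels_a, labels_b : integer community labels (0-indexed),
    one per vertex."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    n = a.size
    c = np.zeros((a.max() + 1, b.max() + 1))
    np.add.at(c, (a, b), 1)          # contingency table n_{kk'}
    row = c.sum(axis=1)              # n_{k.}
    col = c.sum(axis=0)              # n_{.k'}
    nz = c > 0
    num = -2.0 * np.sum(c[nz] * np.log(c[nz] * n / np.outer(row, col)[nz]))
    den = (np.sum(row[row > 0] * np.log(row[row > 0] / n)) +
           np.sum(col[col > 0] * np.log(col[col > 0] / n)))
    return num / den
\end{verbatim}
We compared the $NMI$ values of the SA, MLS, LOU and SSR methods on the different networks. Fig. \ref{fig:nmi} shows the results.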
It can be seen that on these synthetic networks all methods produced divisions of high quality on most of the networks. The SSR method had sensitivity comparable to the state-of-the-art approaches towards changes in network structure. Evidently the mixing parameter plays a key role in the partition quality. All four methods showed similar sensitivity patterns towards changes of this parameter. When its value is less than or equal to $0.4$, the $NMI$s are very near to $1$. When its value approaches $0.5$, however, there is an evident drop of the $NMI$ value as the community structure becomes too weak. The observed pattern is consistent with the trend revealed in a previous study \cite{LancichinettiFR:2008}. \subsection{Parallel execution efficiency} \label{sec:evaluation:speed} \begin{figure}[!t] \centering \includegraphics[width=2.5in]{fig_speed} \caption{Horizontal: number of CPU cores; Vertical: Running time in seconds.} \label{fig:speed} \end{figure} Besides the qualities and the sensitivities, we also investigated the running time of the proposed method with different numbers of computing units. A large network, \textit{cit-Patents}, which has over three million vertices and sixteen million edges, was used in this evaluation. The results are shown in Figure \ref{fig:speed}, where the horizontal axis gives the number of computing nodes (CPU cores), from a single node to $128$ nodes, and the vertical axis shows the running time in seconds (log-scale). It can be seen that the execution time of the SSR method drops nearly linearly with the increase of computing nodes, from around $500$ seconds with one node to less than $30$ seconds with $128$ nodes, which verifies the high efficiency of the proposed SSR method on parallel computing platforms. Comparatively, the LOU method spent around $350$ seconds, which is slightly faster than the SSR method (implemented in MATLAB) with one computing node. Unfortunately, the execution of the LOU method is not readily parallelized and benefits little from multiple computing nodes. A closer inspection shows that the LOU method is an iterative method in which each iteration has two phases: a local search phase and a network building phase. The two phases are highly dependent and cannot be executed simultaneously. Besides, the major computation comes from the local search phase, within which a local exchange heuristic is repeated by moving one vertex from one community to another, in a way similar to the Kernighan-Lin algorithm \cite{LinKernighan:1973}. The heuristic has strong dependence between consecutive exchanges, and thus the operations in the first phase cannot be executed simultaneously either. \section{Conclusion} \label{sec:conclusion} With invaluable theoretical values and tremendous practical applications, the study of community detection and modularity maximization in complex networks has attracted much research attention recently. Unfortunately, the inherent NP-completeness of the problem poses a non-trivial challenge and makes it difficult for most computational approaches to scale to large networks. To address the issue, we proposed a successive spectral relaxation based method to optimize the modularity measure. The key component of the proposed method is an algorithm that effectively finds the leading eigenvector of the modularity matrix while satisfying the required linear constraints. The method is simple and easy to implement.
In benchmark evaluations, it produced high-quality results comparable to those of the state-of-the-art approaches. A highly notable feature of the proposed method is that it involves only basic matrix-vector multiplication, addition and normalization operations, and runs on parallel computing platforms with very high efficiency. Empirically, the proposed method shows a nearly linear speed-up with the increase of computing nodes. It divides a network with millions of vertices in tens of seconds with $128$ CPU cores, a significant improvement over other approaches. The proposed method thereby provides a highly promising and practical solution for detecting communities in very large networks. \section{Acknowledgments} \label{sec:acknowledgments} This work is supported by the Shenzhen Fundamental Research Fund under Grant No. KQTD2015033114415450 and Grant No. JCYJ20170306141038939. \bibliographystyle{abbrv}
\section{Introduction} Image object detection and segmentation can be defined as a procedure to localize a region of interest (ROI) in an image and separate the image foreground from its background using image processing and/or machine learning approaches. Cell detection and segmentation are the primary and critical steps in microscopic image analysis. These processes play an important role in estimating the number of cells, initializing cell segmentation, tracking, and extracting features necessary for further analysis. We categorize the segmentation methods as 1) traditional, feature- and machine learning (ML)-based methods and 2) deep learning (DL)-based methods. \subsection{Traditional cell segmentation methods} Traditional segmentation methods have achieved impressive results in cell boundary detection and segmentation, with efficient processing times~\cite{Rojas-Moraleda2017,Tang2015}. These methods include low-level pixel processing approaches. The region-based methods are more robust than the threshold-based segmentation methods~\cite{Tang2015}. However, in low-contrast images, cells placed close together or flat cell regions can be segmented as blobs. Rojas-Moraleda et al.~\cite{Rojas-Moraleda2017} proposed a region-based method built on the principles of persistent homology, with an overall accuracy of 94.5\%. The iterative morphological and ultimate erosion methods \cite{Wang2016, Fan2013} suffer from poor segmentation performance on small and low-contrast objects. Guan et al.~\cite{Guan2011} detected rough circular cell boundaries using the Hough transform and the exact cell boundaries using fuzzy curve tracing. Compared with the watershed-based method~\cite{zhou2009}, this method was more robust to noise and to uneven brightness in the cells. Winter et al.~\cite{Winter2019} combined the image Euclidean distance transformation with the Gaussian mixture model to detect elliptical cells. This method requires solid objects for computing the distance transform. Large holes or extreme internal irregularities in the target objects make the distance transform unreliable and reduce the method's performance. Buggenthin et al. \cite{Buggenthin2013} identified nearly all cell bodies and rapidly segmented multiple cells in bright-field time-lapse microscopy images by a fast, automatic method combining Maximally Stable Extremal Regions (MSER) with the watershed method. The main challenges for this method remain oversegmentation and poor performance on out-of-focus images. Machine learning methods have expanded due to the complexity of microscopic images and the low performance of the earlier methods in detecting and segmenting cells. The ML methods can be classified into two groups: supervised and unsupervised. The supervised methods produce a mathematical function or model from the training data to map a new data sample~\cite{Stuart2010}. Mualla et al.~\cite{MuallaF2013} utilized the Scale Invariant Feature Transform (SIFT) as a feature extractor and the Balanced Random Forest as a classifier to calculate the descriptive cell keypoints. The SIFT descriptors are invariant to illumination conditions, cell size, and orientation. Tikkanen et al. \cite{Tikkanen2015} developed a method based on the Histogram of Oriented Gradients (HOG) and the Support Vector Machine (SVM) to extract feature descriptors and classify them as cell or non-cell in bright-field microscopic data.
That method is sensitive to the number of iterations in the training process, a crucial step for eliminating false-positive detections. The unsupervised ML algorithms require no pre-assigned labels or scores for the training data~\cite{Hinton1999}. The best-known unsupervised methods are clustering methods. Mualla et al.~\cite{MuallalF2014-c} segmented unstained cells in bright-field micrographs using a combination of SIFT for keypoint extraction, self-labelling, and two clustering methods. This method is fast and accurate but sensitive to the feature selection step needed to avoid overfitting. \subsection{Deep Learning cell segmentation methods} In the last decade, Deep Learning has emerged as a new area of machine learning. The DL methods form a class of ML techniques that exploit many layers of non-linear information processing for supervised or unsupervised feature extraction and transformation for pattern analysis and classification. Deep convolutional networks have exhibited impressive performance in many visual recognition tasks \cite{Girshick2014}. Song et al.~\cite{Song2016} used a multiscale convolutional network (MSCN) to extract scale-invariant features and a graph-partitioning method for accurate segmentation of cervical cytoplasm and nuclei. This method significantly improved the Dice metric and standard deviation compared with similar methods. Xing et al.~\cite{Xing2016} also proposed an automated nucleus segmentation method based on a deep convolutional neural network (DCNN) to generate a probability map. However, the associated mitosis counting remains laborious and observer-dependent. Among the most popular models for semantic segmentation are the Fully Convolutional Network (FCN) architectures. The FCN combines deep semantic information with shallow appearance information to achieve satisfactory segmentation results. These convolutional networks can take input images of arbitrary size, train end-to-end, pixel-to-pixel, and produce an output of the corresponding size with efficient inference and learning to achieve semantic segmentation in complex images, including microscopic and medical images \cite{Long2015,Ben-Cohen2016}. Ronneberger et al.~\cite{Ronneberger2015} proposed a training strategy that relies on the strong use of data augmentation by applying the U-Net neural network, with a contracting path to capture context and a symmetrically expanding path to achieve precise localization. This method was optimized with a small number of labelled training samples and efficiently performed electron microscopic image segmentation. As described above, traditional ML methods are not very efficient at segmenting cells in microscopic images with complex backgrounds, particularly tiny cells in bright-field microscopy \cite{Buggenthin2013, Tikkanen2015, MuallalF2014-c}. These methods cannot build sufficient models for big datasets. On the other hand, some Convolutional Neural Networks (CNNs) require a vast number of manually labelled training samples and higher computational costs compared with the ML methods~\cite{Long2015,Liu2015}. Deep learning-based methods have delivered better outcomes in segmentation tasks than other methods. Therefore, the main objective of our research is to propose a highly accurate deep learning-based method with reasonable computational cost to segment human HeLa cells in unique telecentric bright-field transmitted light microscopic images.
We chose the U-Net since it is one of the most promising methods used in semantic segmentation~\cite{Ronneberger2015}. To find the most suitable architecture for our datasets, we examined different U-Net architectures, such as the Attention and Residual Attention U-Net. \section{Materials and methods} \subsection{Cell preparation and microscope specification} \label{microscopy} The human HeLa cell line was grown to low optical density overnight at 37$^{\circ}$C, 5\% CO$_2$, and 90\% RH. The nutrient solution consisted of DMEM (87.7\%) with high glucose ($>$1 g L$^{-1}$), fetal bovine serum (10\%), antibiotics and antimycotics (1\%), L-glutamine (1\%), and gentamicin (0.3\%; all purchased from Biowest, Nuaille, France). The HeLa cells were maintained in a Petri dish with a cover glass bottom and lid at a temperature of 37$^{\circ}$C while we ran the different time-lapse experiments during the data collection phase. We captured time-lapse image series of living human HeLa cells using a high-resolution bright-field light microscope for observation of sub-microscopic objects and cells. This microscope was designed by the Institute of Complex Systems (ICS, Nov\'{e} Hrady, Czech Republic) and built by Optax (Prague, Czech Republic) and ImageCode (Brloh, Czech Republic) in 2021. The microscope has a simple construction of the optical path. The light from two light-emitting diodes CL-41 (Optika Microscopes, Ponteranica, Italy) was passing through a sample to reach a telecentric measurement objective TO4.5/43.4-48-F-WN (Vision \& Control GmbH, Suhl, Germany) and an Arducam AR1820HS 1/2.3-inch 10-bit RGB camera with a chip of 4912$\times$3684 pixel resolution. The images were captured as a primary (raw) signal with a theoretical pixel size (size of the object projected onto the camera pixel) of 113 nm. The software (developed by the ICS) controlled the capture of the primary signal with a camera exposure of 2.75 ms. All the experiments were performed in time-lapse mode to observe the cells' behaviour over time. \subsection{Data acquisition}\label{dataprep} We completed different time-lapse experiments on the HeLa cells under the bright-field microscope (Sect.~\ref{microscopy}). The algorithm proposed in~\cite{Platonova2021} was fully automated and implemented in the microscope control software to calibrate the microscope optical path and correct all image series to avoid image background inhomogeneities and noise. After the image calibration, we converted the raw image representations to quarter-resolved 8-bit colour (RGB) mode using quadruplets of Bayer mask pixels \cite{Stys2016}: we adopted the red and blue camera filter pixels into the relevant image channels and averaged each pair of green camera filter pixels to create the green image channel. Then, we rescaled the images to 8 bits after creating the image series intensity histogram and omitting unoccupied intensity levels. This bit reduction ensured maximal information preservation and mutual comparability of the images through the time-lapse series. The non-local means denoising method \cite{Buades2005} minimized the background noise in the constructed RGB images while preserving the texture details. Afterwards, we cropped the image series to the $1024\times1024$ pixel size. We obtained 500 images from different time-lapse experiments by the steps described above.
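A minimal Python sketch of the Bayer-quadruplet conversion described above is given below; the RGGB ordering of the mask is our assumption here, and the actual layout of the camera's Bayer mask may differ:
\begin{verbatim}
import numpy as np

def bayer_quadruplets_to_rgb(raw):
    # Quarter-resolution RGB from a raw Bayer frame: R and B pixels are
    # adopted directly into their channels; the two G pixels of each
    # 2x2 quadruplet are averaged (RGGB layout assumed).
    r  = raw[0::2, 0::2].astype(np.float32)
    g1 = raw[0::2, 1::2].astype(np.float32)
    g2 = raw[1::2, 0::2].astype(np.float32)
    b  = raw[1::2, 1::2].astype(np.float32)
    return np.stack([r, 0.5 * (g1 + g2), b], axis=-1)

# A 10-bit raw frame becomes a quarter-resolution RGB image; the
# histogram-based rescaling to 8 bits then follows as in the text.
raw = np.random.randint(0, 1024, (3684, 4912))
print(bayer_quadruplets_to_rgb(raw).shape)   # (1842, 2456, 3)
\end{verbatim}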
The cells in the images were labelled manually in MATLAB (MathWorks Inc., Natick, Massachusetts, USA) as single-class ground-truth (GT) masks with the dimension of $1024\times1024$ (Fig.~\ref{fig1}). We used the labelled images as training, testing, and evaluation sets for the proposed U-Net networks (with images of the size of $512\times512$). \begin{figure} \graphicspath{ {./images/} } \centering \includegraphics[width=\textwidth]{Fig1.jpg} \caption{Examples of the training set and their ground truths. The image size is $512\times512$.} \label{fig1} \end{figure} \subsection{U-Net Model Architectures} The U-Net \cite{Ronneberger2015} is a semantic segmentation method built on the FCN architecture. The FCN consists of a typical encoder-decoder convolutional network. This architecture includes several feature channels to combine shallow and deep features. The deep features are used for positioning, whereas the shallow features are utilized for precise segmentation. We chose the simple U-Net architecture (Fig.~\ref{fig2}) for training the model with the specific size of input images. \begin{figure}[htbp] \graphicspath{ {./images/} } \centering \includegraphics[width=1\textwidth]{Fig2.png} \caption{Architecture of the proposed simple U-Net model.} \label{fig2} \end{figure} The first layer of the encoder part is the input layer, which accepts RGB images of the size $512\times512$. Each level in the five-level U-Net structure includes two 3$\times$3 convolutions, each followed by batch normalization and a LeakyReLU activation. In the down-sampling (encoder) part (Fig.~\ref{fig2}, left part), each level consists of a $2\times2$ max pooling operation with a stride of two. The max-pooling process extracts the maximal value in each $2\times2$ area. After each down-sampling step in the encoder part, the convolutions double the number of feature channels. In the up-sampling (decoder) section (Fig.~\ref{fig2}, right part), the height and width of the existing feature maps are doubled in each level from bottom to top. Then, the high-resolution deep semantic and shallow features are combined and concatenated with the feature maps from the encoder section. After concatenation, the output feature maps have twice as many channels as the input feature maps. The output decoder layer at the top uses a $1\times1$ convolution to predict the probabilities of the pixels. We use padding in the convolutions so that the input and output layers have the same size. The computational result, combined with the binary focal loss function, becomes the energy function of the U-Net. \begin{figure}[htbp] \graphicspath{ {./images/} } \centering \captionsetup{justification=centering} \includegraphics[width=\textwidth]{Fig3.png} \caption{$A$) Architecture of the proposed Attention U-Net model, $B$) the attentive module mechanism. The size of each feature map is shown in $H\times W\times D$, where $H$, $W$, and $D$ indicate height, width, and number of channels, respectively.} \label{fig3} \end{figure} Between each encoder-decoder layer in the simple U-Net (Fig.~\ref{fig2}), there is a skip connection combining the down-sampling path with the up-sampling path to recover the spatial information. At the same time, however, this process also brings many irrelevant feature representations from the initial layers.
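The structure just described translates almost line by line into Keras; the following is a minimal sketch (the filter counts and the two-level depth shown here are illustrative, not the exact configuration of Fig.~\ref{fig2}):
\begin{verbatim}
from tensorflow.keras import Input, Model, layers

def conv_block(x, filters):
    # Two 3x3 convolutions, each followed by batch normalization and
    # a LeakyReLU activation, with "same" padding as in the text.
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.LeakyReLU()(x)
    return x

def encoder_level(x, filters):
    skip = conv_block(x, filters)
    return skip, layers.MaxPooling2D(pool_size=2, strides=2)(skip)

def decoder_level(x, skip, filters):
    x = layers.UpSampling2D(size=2)(x)        # double height and width
    x = layers.Concatenate()([x, skip])       # channels doubled here
    return conv_block(x, filters)

# Minimal two-level assembly; the actual model uses five levels.
inp = Input((512, 512, 3))
s1, x = encoder_level(inp, 64)
s2, x = encoder_level(x, 128)
x = conv_block(x, 256)                        # bottleneck
x = decoder_level(x, s2, 128)
x = decoder_level(x, s1, 64)
out = layers.Conv2D(1, 1, activation="sigmoid")(x)  # pixel probabilities
model = Model(inp, out)
\end{verbatim}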
To suppress these irrelevant representations and improve on the semantic segmentation results of the standard U-Net, we applied the Attention U-Net architecture (Fig.~\ref{fig3}-$A$), which has shown impressive performance in medical imaging \cite{Oktay2018}. As an extension to the standard U-Net model architecture, the attention gate at the skip connections between encoder and decoder layers highlights the salient features and suppresses activations in the irrelevant regions. In this way, the attention gate improves model sensitivity and performance without requiring complicated heuristics. The attention gate (Fig.~\ref{fig3}-$B$) has two inputs, $x$ and $g$. Input $x$ comes from the skip connection from the encoder layers; since it comes from the early layers, it contains better spatial information. Input $g$--a gating signal--comes from a deeper network layer and contains a better feature representation. The attention gate weights different parts of the image: weights are assigned to the pixels based on their relevance during the training steps, so the relevant parts of the image receive larger weights than the less relevant parts. These weights are themselves trained in the training process and make the trained model more attentive to the relevant regions. \begin{figure}[htbp] \graphicspath{ {./images/} } \centering \captionsetup{justification=centering} \includegraphics[width=\textwidth]{Fig4.png} \caption{($A$) Architecture of the Residual Attention U-Net model. ($B$) Each U-Net layer structure. ($C$) A sample of the residual block process. $BN$ refers to Batch Normalization.} \label{fig4} \end{figure} Another architecture used in this study and developed based on the U-Net models (originally for nuclei segmentation \cite{Alom2018}) is the Residual U-Net. The simple U-Net architecture is built from repetitive convolutional blocks in each level (Fig.~\ref{fig4}-$B$). Each of these convolutional blocks consists of the input, two convolution operations each followed by an activation function, and the output. On the other hand, we face the vanishing gradient problem when dealing with very deep convolutional networks. We applied the residual step to update the weights in each convolutional block incrementally and continuously (Fig.~\ref{fig4}-$C$), enhancing the U-Net architecture's performance by overcoming the vanishing gradient problem. In traditional neural networks, each convolutional block feeds only the next block. Another problem in a DCNN-based network built by stacking convolutional layers is that a deeper structure of this kind of network degrades the generalization ability. To overcome this problem, the skip connections--the residual blocks--improve the network performance, with each layer feeding both the next layer and layers about two or three steps apart (Fig.~\ref{fig4}--$C$). We combined the Residual and Attention U-Net architectures to build more effective, high-performance models from our datasets and improve the segmentation results.
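A minimal Keras sketch of such an additive attention gate is shown below; it follows the general scheme of \cite{Oktay2018}, with the gating signal assumed to be upsampled to the spatial size of $x$ beforehand (the intermediate channel count is illustrative):
\begin{verbatim}
from tensorflow.keras import layers

def attention_gate(x, g, inter_channels):
    # x: skip-connection features (fine spatial detail).
    # g: gating signal from a deeper layer (richer semantics), assumed
    #    already upsampled to the spatial size of x.
    theta_x = layers.Conv2D(inter_channels, 1)(x)
    phi_g   = layers.Conv2D(inter_channels, 1)(g)
    f       = layers.Activation("relu")(layers.Add()([theta_x, phi_g]))
    alpha   = layers.Activation("sigmoid")(layers.Conv2D(1, 1)(f))
    return layers.Multiply()([x, alpha])   # per-pixel weights in [0, 1]
\end{verbatim}
In the decoder of the Attention U-Net, the plain skip tensor would then be replaced by \texttt{attention\_gate(skip, g, ...)} before concatenation, so that irrelevant regions of the skip features are down-weighted.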
\begin{table}[htbp] \scriptsize \centering \caption{Number of the trainable parameters and the run time for each U-Net model.} \label{tab:Exp_Time} \begin{tabular}{ccc} \hline \textbf{Network} & \textbf{Run time} & \textbf{Trainable parameters} \\ \hline \textbf{U-Net} & 3:42':18'' & 31,402,501 \\ \textbf{Attention U-Net} & 4:04':23'' & 34,334,665 \\ \textbf{Residual Att U-Net} & 4:11':24'' & 39,090,377 \\ \hline \end{tabular} \end{table} After completion of the semantic segmentation by the U-Net methods described above, we applied the watershed algorithm based on morphological reconstruction \cite{Zhang2011}. We first transformed the U-Net semantic segmentation result into a binary image using the Otsu method \cite{Otsu1979}. After that, we determined the sure background regions by binary image dilation. Then, the distance transform was applied to define the eroded foreground cell regions. The unknown region was obtained by subtracting the sure foreground regions from the sure background. The watershed method applied to the unknown regions separated the cell borders. The watershed segmentation further helped us resolve the over- and under-segmented regions and characterize each individually separated cell by, e.g., cell diameter, solidity, or mean intensity. We optimized the segmentation results using the marker images: wrongly detected residual connections between different cell regions were cut off, which improved the method's accuracy. Figure \ref{fig5} presents a general diagram of the proposed U-Net based methods. \begin{figure}[htbp] \graphicspath{ {./images/} } \centering \captionsetup{justification=centering} \includegraphics[width=\textwidth]{Fig5.png} \caption{Flowchart of methodology applied in this study.} \label{fig5} \end{figure} \subsection{Training Models} The computation was implemented in Python 3.7. The framework for deep learning was Keras, and the backend was Tensorflow \cite{Abadi2015}. The whole method, including the deep learning framework, was transferred and executed on a Google Colab Pro account with P100 and T4 GPUs, 24 GB of RAM, and 2 vCPUs \cite{GoogleColabPro}. After data preprocessing (Sect.~\ref{dataprep}), we divided the primary dataset into training (80\%) and test (20\%) sets. A part (20\%) of the training set was used for model validation in the training process to avoid over-fitting and achieve higher performance. From the 500-image dataset containing a mixture of under-focused, over-focused, and focused images, 320 images were randomly selected to train the model, and 80 images were chosen randomly to validate the process. The remaining 100 images of the dataset were reserved for testing and evaluating the model after training. Before the training, the images were normalized: the pixel values were rescaled to the range from 0 to 1. Since all designed network architectures work with a specific input image size, all datasets were resized to the $512\times512$ pixel size. We also applied data augmentation for training all three U-Net architectures. The optimized values of the hyperparameters used in the training process are listed in Tab. \ref{tab:Hyper_Param}. The "rotation range" represents the angle of the random rotation, "width shift range" the amplitude of the random horizontal offset, "height shift range" the amplitude of the random vertical offset, "shear range" the degree of the random shear transformation, and "zoom range" the magnitude of the random scaling of the image.
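These augmentation settings map directly onto Keras' \texttt{ImageDataGenerator}; a minimal sketch follows (the paper's exact training code is not shown, and the fill mode is our assumption). Identical transforms must be applied to the images and their GT masks, e.g., by sharing a seed:
\begin{verbatim}
from tensorflow.keras.preprocessing.image import ImageDataGenerator

aug = dict(rotation_range=90, width_shift_range=0.3,
           height_shift_range=0.3, shear_range=0.5,
           zoom_range=0.3, fill_mode="reflect")   # fill mode assumed
image_gen = ImageDataGenerator(**aug)
mask_gen  = ImageDataGenerator(**aug)

# image_gen.flow(images, batch_size=8, seed=1) and
# mask_gen.flow(masks, batch_size=8, seed=1)
# then yield synchronized augmented image/mask batches.
\end{verbatim}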
We applied the early stopping hyperparameter to avoid over-fitting during the model training, and the patience value was set to 15. The activation function was set to LeakyReLU, and the batch size was set to 8. To optimize the network, we chose the Adam optimizer and set the learning rate to 10$^{-3}$. \begin{table}[htbp] \scriptsize \centering \caption{Hyperparameter settings for all three U-Net models.} \label{tab:Hyper_Param} \begin{tabular}{@{}ll@{}} \toprule \textbf{Parameter name} & \textbf{Value} \\ \midrule Activation function & LeakyReLU \\ Learning rate & 10$^{-3}$ \\ Batch size & 8 \\ Epochs number & 100 \\ Early stop & 15 \\ Steps per epoch & 100 \\ Rotation range & 90 \\ Width shift range & 0.3 \\ Height shift range & 0.3 \\ Shear range & 0.5 \\ Zoom range & 0.3 \\ \bottomrule \end{tabular} \end{table} We can consider semantic image segmentation as a classification of each pixel into either the cell or the background class. The Dice loss was used to compare the segmented cell image with the GT and to minimize the difference between them as much as possible in the training process. One of the best-known loss functions used for semantic segmentation is the binary focal loss (Eq.~\ref{Eq1}) \cite{Lin2017}: \begin{equation} \label{Eq1} \mbox{Focal Loss} = -\alpha_t(1 - p_t)^\gamma \log(p_t), \end{equation} where $p_t \in [0, 1]$ is the model's estimated probability for the GT class with label $y = 1$; $\alpha_t \in [0, 1]$ is a weighting factor for class 1, with $1-\alpha_t$ for class $-1$; and $\gamma \geq 0$ is a tunable focusing parameter. The focal loss is enhanced by the contribution of hard-to-segment regions (e.g., cells with vanishing borders), helping to distinguish the background from cells with unclear borders. The second benefit of the focal loss is that it controls and limits the contribution of the easily segmented pixel regions (e.g., sharp and clearly visible cells) to the loss of the model. In the final step, the gradient direction is updated by the training algorithm according to the model loss. \subsection{Evaluation metrics} \label{Evaluation metrics} We used different metrics (Eqs. \ref{Eq2}--\ref{Eq6}, where TP, FP, FN and TN are the true positive, false positive, false negative, and true negative counts, respectively) to evaluate our proposed semantic segmentation based models \cite{Pan2017}. The metrics were computed for all test sets and reported as mean values (Tab. \ref{tab:Exp_Res}). Overall pixel accuracy (Acc) represents the percentage of correctly classified image pixels. Precision (Pre) is the proportion of the cell pixels in the segmentation results that match the GT. The Pre, known as the positive predictive value, is a valuable metric for the segmentation performance because it is sensitive to over-segmentation. Recall (Recl) represents the proportion of cell pixels in the GT correctly identified through the segmentation process. This metric says what proportion of the objects annotated in the GT was captured as a positive prediction. The Pre and Recl together give an important metric--the F1 score--to evaluate the segmentation result. The F1 score, or Dice similarity coefficient, states how well the predicted segmented region matches the GT in location and level of detail, and considers each class's false alarms and misses. This metric determines the accuracy of the segmentation boundaries \cite{Csurka2013} and has a higher priority than the Acc.
Another common and essential evaluation metric for semantic image segmentation is the Jaccard similarity index, known as Intersection over Union (IoU). This metric correlates the prediction with the GT \cite{Long2015,Vijay2015} and represents the ratio of the overlap area to the union area of the predicted and GT segmentations. \begin{equation} \label{Eq2} \mbox{Acc} =\frac{\mbox{Correctly Predicted Pixels}}{\mbox{Total Number of Image Pixels}} = \frac{\mbox{TP + TN}}{\mbox{TP + FP + FN + TN}} \end{equation} \begin{equation} \mbox{Pre} = \frac{\mbox{Correctly Predicted Cell Pixels}}{\mbox{Total Number of Predicted Cell Pixels}} = \frac{\mbox{TP}}{\mbox{TP + FP}} \end{equation} \begin{equation} \mbox{Recl} =\frac{\mbox{Correctly Predicted Cell Pixels}}{\mbox{Total Number of Actual Cell Pixels}} = \frac{\mbox{TP}}{\mbox{TP + FN}} \end{equation} \begin{equation} \mbox{Dice} =\frac{\mbox{2 $\times$ Pre $\times$ Recl}}{\mbox{Pre + Recl}} = \frac{\mbox{2 $\times$ TP}}{\mbox{2 $\times$ TP + FP + FN}} \end{equation} \begin{equation} \label{Eq6} \mbox{IoU} = \frac{\mid y_t \cap y_p \mid}{\mid y_t \mid + \mid y_p \mid - \mid y_t \cap y_p \mid} = \frac{\mbox{TP}}{\mbox{TP + FP + FN}} \end{equation} \begin{sidewaysfigure} \graphicspath{ {./images/} } \includegraphics[width=1\textwidth]{Fig6.png} \captionsetup{justification=centering} \caption{Training/validation plots for Simple U-Net (left column), Attention U-Net (middle column), and Residual Attention U-Net (right column).} \label{fig6} \end{sidewaysfigure} \section{Results} All three models were well trained and converged after running 100 epochs, based on the training/validation loss and Jaccard plots per epoch (Fig.~\ref{fig6}). After tuning all hyperparameters for the best performance and training stability (Tab.~\ref{tab:Hyper_Param}), we selected the parameter values that maximally improved the performance of the constructed models. Then, we evaluated the achieved models with the test datasets. We assessed all trained models (Tab.~\ref{tab:Exp_Res}) using the metrics in Eqs.~\ref{Eq2}--\ref{Eq6}. \begin{sidewaysfigure} \graphicspath{ {./images/} } \centering \captionsetup{justification=centering} \includegraphics[width=1.0\textwidth]{Fig7.png} \caption{Segmentation results for $A$) the simple U-Net (the black circle highlights the non-segmented, vanishing cell borders), $B$) the Attention U-Net (the yellow circle highlights the under-segmentation problem), and $C$) the Residual Attention U-Net (the red circle shows the successful segmentation of the cell borders). The image size is $512 \times 512$.} \label{fig7} \end{sidewaysfigure} Training the model with the simple U-Net method took the shortest run time with the lowest number of trainable parameters (Tab.~\ref{tab:Exp_Time}). Compared with the Attention U-Net and Residual Attention U-Net, the run-time difference is not large given the increase in trainable parameters. The computational cost also did not increase dramatically relative to the improvement in model performance. Figure \ref{fig7} presents the segmentation results achieved by the three different U-Net models. The simple U-Net segmentation result did not distinguish some vanished cell borders (Fig. \ref{fig7}--$A$, black circle). The Attention U-Net (Fig. \ref{fig7}--$B$) detected cells with vanishing borders more efficiently than the simple U-Net. However, the Attention U-Net segmentation suffers from under-segmentation in some regions (visualized by the yellow circle). The outcome of the Residual Attention U-Net method (Fig.
\ref{fig7}--$C$, red circle) achieved a more accurate segmentation of the vanishing cell borders. The watershed binary segmentation after the Residual Attention U-Net separated and identified the cells with the highest performance (Fig. \ref{fig7}). As seen in the Mean-IoU, Mean-Dice, and Accuracy metrics (Tab.~\ref{tab:Exp_Res}), our Attention U-Net model showed better segmentation performance than the simple U-Net model in the same situation. After incorporating the residual step into the Attention U-Net, we further improved our segmentation results. \begin{table}[htbp] \scriptsize \centering \captionsetup{justification=centering} \caption{Results for metrics evaluating the U-Net models. Green values represent the highest segmentation accuracy for the related metric.} \label{tab:Exp_Res} \begin{adjustbox}{width=\textwidth} \begin{tabular}{cccccc} \hline \multicolumn{1}{c}{\textbf{Network}} & \textbf{Accuracy} & \textbf{Precision} & \textbf{Recall} & \textbf{m-IoU} & \textbf{m-Dice} \\ \hline \textbf{U-Net} & 0.957418 & 0.988269 & 0.961264 & 0.950501 & 0.974481 \\ \textbf{Attention U-Net} & 0.959448 & 0.985663 & 0.965736 & 0.952471 & 0.975511 \\ \textbf{Residual Att U-Net} & 0.960010 & 0.986510 & 0.965574 & \cellcolor{green!10}0.953085 & \cellcolor{green!10}0.975840 \\ \hline \end{tabular} \end{adjustbox} \end{table} \section{Discussion} The analysis of bright-field microscopic image sequences is challenging due to living cells' complexity and temporal behaviour. We have to face (1) irregular shapes of the cells, (2) very different sizes of the cells, (3) noise blobs and artefacts, and (4) vast sizes of the time-lapse datasets. Traditional machine learning methods, including random forests and support vector machines, cannot deal with some of these difficulties, suffering from higher computational costs and longer run times for huge time-lapse datasets. The traditional methods show low performance in detecting and segmenting vanishing and tightly packed cells and are sensitive to the training steps \cite{SommerC2011,Tikkanen2015}. The DL methods have been rapidly developed to overcome these problems. The U-Net is one of the most effective semantic segmentation methods for microscopic and biomedical images \cite{Ronneberger2015}. This method is based on the FCN architecture and consists of encoder and decoder parts with many convolution layers. The image data used to train the Residual Attention model are specific due to the way in which we obtained them. Firstly, we calibrated the optical path to obtain the number of photons that reaches each camera pixel with increasing illumination light intensity. This gave a calibration curve (image pixel intensity vs. the number of photons reaching the relevant camera pixel) to correct the digital image pixel intensity. This step ensured homogeneity in digital image intensities to improve the quality of cell segmentation by the neural networks. We deal with low-compressed, high-pixel-count telecentric transmitted light bright-field microscopy images. The bright-field light microscope allows us to observe living cells in their most physiological state. Due to the object-side telecentric objective, the final digital raw image of the observed cells is highly resolved and low-distorted, with no light interference halos around objects. The procedure that compressed the raw colour images ensured the least information loss at the quarter-resolution decrease.
Despite the further decrease in pixel resolution, the final pixel resolution of the images input into the neural network ($512\times512$) is higher than in many other neural network datasets. While we try to maintain the high image resolution as much as possible, this raises the requirements on the neural network's computing memory and performance. As our microscope and the acquired microscopic data are unique and were not used before in similar research, it is hard to compare the results with other works. Despite this, we tried to compare the performances of our U-Net-based models with similar microscopic and medical works (Tab.~\ref{tab:comparision}). We trained and evaluated the first model on a simple U-Net structure and achieved a Mean-IoU score of 0.9505. We attribute our best value of the mean IoU to the hyperparameter optimization (Tab.~\ref{tab:Hyper_Param}). Ronneberger et al. \cite{Ronneberger2015} achieved Mean-IoU scores of 0.920 and 0.775 for the U373 cell line in phase-contrast microscopy and the HeLa cell line in Nomarski contrast, respectively. Pan et al. \cite{Pan2019} segmented nuclei from medical, pathological MOD datasets with a segmentation IoU accuracy score of 0.7608 using the U-Net. To improve the U-Net model performance further, we implemented an attention gate into the U-Net structure (the so-called Attention U-Net) to weight the relevant image pixels containing the target object. In this way, we improved the Mean-IoU metric to 0.9524. The achieved IoU score represents a noticeable improvement in the trained model performance compared with the simple U-Net model. To the best of our knowledge, not many researchers have applied the Attention U-Net to microscopic datasets; recent papers are prevalently about its application to medical datasets. Microscopic and medical datasets each have their own complexity and structure, complicating the comparison of method performances. Applying the Attention U-Net, pancreas \cite{Oktay2018} and liver tumour \cite{Wang2021} medical datasets showed Dice segmentation accuracies of 0.840 and 0.948, respectively. \begin{table}[htbp] \scriptsize \centering \captionsetup{justification=centering} \caption{Performances of the proposed networks and other networks proposed for microscopic and medical applications. The green-highlighted values represent the highest segmentation accuracy in terms of the mentioned metric.} \label{tab:comparision} \begin{tabular}{cccc} \hline \textbf{Models} & \textbf{IoU} & \textbf{Dice} & \textbf{Acc} \\ \hline \textbf{proposed U-Net} & 0.9505 & 0.9744 & 0.9574 \\ \textbf{proposed Att U-Net} & 0.9524 & 0.9755 & 0.9594 \\ \textbf{proposed Res\_Att\_U-Net} & \cellcolor{green!10}0.9530 & \cellcolor{green!10}0.9758 &\cellcolor{green!10} 0.9600 \\ U-Net \cite{Ronneberger2015} & 0.9203 & 0.9019 & 0.9554 \\ U-Net \cite{Pan2019} & 0.7608 & - & 0.9235 \\ Segnet \cite{Pan2019} & 0.7540 & - & 0.9225 \\ Attention U-Net \cite{Oktay2018} & - & 0.840 & 0.9734 \\ Residual Attention U-Net \cite{Wang2021} & - & 0.9081 & 0.9557 \\ Residual U-Net \cite{patel2019} & - & 0.8366 & - \\ Residual Attention U-Net \cite{Qiangguo2020} & - & 0.9655 & 0.9887 \\ \hline \end{tabular} \end{table} We improved our model by one more step and obtained the Residual Attention U-Net to overcome the vanishing gradient problem and improve the generalization ability. As a result, we have improved the segmentation accuracy, reaching a Mean-IoU of 0.953.
The Residual Attention U-Net showed a Dice coefficient of 0.9655 in the testing phase of medical image segmentation \cite{Qiangguo2020}. The Recurrent Residual U-Net (R2U-Net) achieved a Dice coefficient of 0.9215 in the testing phase of nuclei segmentation \cite{Alom2018}. Patel et al. \cite{patel2019} applied the Residual U-Net to bright-field absorbance images and achieved a Mean-Dice coefficient of 0.8366. \section{Conclusion} Microscopic image analysis via deep learning methods can be a convenient solution due to the complexity and variability of this kind of data. This research aimed to detect and segment living human HeLa cells in images acquired using an original custom-made bright-field transmitted light microscope. We employed three types of deep learning U-Net architectures: the simple U-Net, the Attention U-Net, and the Residual Attention U-Net. The simple U-Net (Tab.~\ref{tab:Exp_Time}) has the fastest training time. On the other hand, the Residual Attention U-Net architecture achieved the best segmentation performance (Tab.~\ref{tab:Exp_Res}) with a run time similar to the other two U-Net models. The Attention U-Net is a method to highlight only the relevant activations during the training process. This method can reduce the computational resources wasted on irrelevant activations and generate more efficient models. Due to the integration of the residual learning structure (to overcome gradient vanishing) together with the attention gate mechanism (to integrate low- and high-level feature representations) into the U-Net architecture, we achieved the best segmentation performance. After extracting the semantic segmentation binary results (Tab.~\ref{tab:Exp_Res}), we applied the watershed segmentation method to separate the cells from each other, avoid over-segmentation, label the cells individually, and extract vital information about the cells (e.g., the total number of the segmented cells, cell equivalent diameter, mean intensity and solidity). Nevertheless, future work is still essential to expand the knowledge on multi-class semantic segmentation with different, efficient CNN architectures and to combine the constructed CNN models in the prediction process to achieve the most accurate segmentation results. \section*{FUNDING} This work was supported by the Ministry of Education, Youth and Sports of the Czech Republic -- project CENAKVA (LM2018099) -- and from the European Regional Development Fund in frame of the project ImageHeadstart (ATCZ215) in the Interreg V-A Austria–Czech Republic programme. The work was further financed by the project GAJU 017/2016/Z. \section*{DECLARATION OF COMPETING INTEREST} The authors declare no conflict of interest, known competing financial interests, or personal relationships that could have appeared to influence the work reported in this paper. \section*{ACKNOWLEDGEMENT} The authors would like to thank their lab colleagues Šárka Beranová and Pavlína Tláskalová (all from ICS USB) and Mohammad Mehdi Ziaei for their support of this study. \bibliographystyle{elsarticle-num}
\section{Introduction} As the first reaction of the proton-proton ($pp$) chain, $pp$ fusion is the predominant process in the conversion of hydrogen to helium in light stars like the Sun. Its rate is an essential ingredient in understanding stellar nucleosynthesis. However, the reaction cross section is difficult to measure in terrestrial laboratories; therefore, a reliable theoretical prediction of it is often needed as one of the inputs for stellar models. In the present paper, we examine the power counting of weak currents involved in this process in chiral effective field theory (ChEFT), with renormalization-group (RG) invariance as the guideline. At the hadronic level, the total cross section of $pp$ fusion consists of two essential elements: nuclear wave functions and axial current operators. Near the threshold, it can be schematically written as \begin{equation} \sigma\left(E\right) \propto \sum_{M} \lvert \langle \psi_d^M\rvert \vec{A}_- \lvert\psi_{pp} \rangle\rvert^2, \label{eqn:approx_cross_section} \end{equation} where $\vec{A}_-$ denotes the axial current, $\psi_d^M$ ($\psi_{pp}$) the deuteron bound state ($pp$ scattering state), $M$ the $z$-component of the deuteron spin, and $E$ the center-of-mass (CoM) energy. In the early investigations, both strong and weak interactions were phenomenologically constructed~\cite{Bethe:1938yy, Salpeter:1952ffc, Bahcall:1968wz, Kamionkowski:1993fr, Schiavilla:1998je}. As ChEFT developed, interest in $pp$ fusion was revived due to the prospect of quantifying its theoretical uncertainty in an EFT framework~\cite{Park:1998wq, Park:2002yp}. In the so-called hybrid approach, current operators were derived from ChEFT and various potential models were used to construct the nuclear wave functions. Full EFT calculations were first carried out in pionless EFT~\cite{Kong:2000px, Butler:2001jj, Ando:2008va, Chen:2012hm, Behzadmoghaddam:2020pqr}. At next-to-leading order (NLO), however, a low-energy constant (LEC) is needed: $L_{1,A}$, which parametrizes the two-body axial current. Several means to determine $L_{1,A}$ were proposed in Refs.~\cite{Butler:2002cw, Savage:2016kon, De-Leon:2016wyu}. Applications of ChEFT to both potentials and currents were performed in Refs.~\cite{Marcucci:2013tda, Acharya:2016kfl}. In these ChEFT calculations, the power counting of potentials and currents is based on naive dimensional analysis (NDA). NDA has been shown to be inconsistent with RG invariance, and various power counting schemes of nuclear forces have been proposed to meet the requirement of RG invariance~\cite{Birse:2005um, Birse:2007sx, Birse:2009my, Valderrama:2009ei, PavonValderrama:2011fcz, Long:2011qx, Long:2011xw, Long:2012ve, PavonValderrama:2019lsu, vanKolck:2020llt, Zhou:2022loi}. RG analysis of the nuclear currents in ChEFT was pioneered by Ref.~\cite{PavonValderrama:2014zeq}, based on the short-range behavior of two-nucleon wave functions. The strategy of using the RG for power counting nuclear currents was also applied in studying beyond-Standard-Model physics in nuclei~\cite{Cirigliano:2018hja, Oosterhof:2019dlo, Yao:2020olm}. For different points of view towards RG invariance in the context of chiral nuclear forces, we refer to Refs.~\cite{Epelbaum:2009sd, Epelbaum:2006pt, Epelbaum:2018zli, Gasparyan:2021edy}. We examine the power counting of axial current operators especially for the process of $pp$ fusion.
Besides using RG invariance as a guideline, we treat higher-order potentials in perturbation theory in the same manner as they were studied in Refs.~\cite{Valderrama:2009ei,PavonValderrama:2011fcz,Long:2011qx, Long:2011xw, Long:2007vp, Long:2012ve, SanchezSanchez:2017tws, Wu:2018lai, Peng:2020nyz, Peng:2021pvo}, as opposed to lumping them altogether with the leading-order (LO) potential in the Schr\"odinger equation~\cite{vanKolck:2020llt}. The paper is organized as follows. In Sec.~\ref{sec:ppscat}, we demonstrate how to deal with the $pp$ interaction by calculating the $pp$ $\cs{1}{0}$ phase shifts up to next-to-next-to-leading order (N$^2$LO). We then discuss the nuclear matrix element of $pp$ fusion in Sec.~\ref{sec:rme}, including relevant axial current operators and the deuteron wave function. This is followed by results and discussions in Sec.~\ref{sec:results}. Finally, a summary is offered in Sec.~\ref{smry}. \section{Proton-proton scattering\label{sec:ppscat}} We describe near-threshold $pp$ scattering, where the energy is so low that the Coulomb potential must be fully iterated. For discussions on perturbative treatment of the Coulomb potential in the context of pionless and cluster EFTs, we refer to Refs.~\cite{Konig:2015aka, Kirscher:2015zoa, Konig:2016iny}. The full $T$ matrix in the presence of the strong and Coulomb interactions can be divided into two parts: the pure Coulomb part $T_{c}$ and the modified strong amplitude $\widetilde{T}_{sc}$~\cite{Goldberger:1964ny}. We begin by introducing the Coulomb propagator: \begin{equation} G_c^{\pm}(E) = \frac{1}{E - H_0 - V_c \pm i\epsilon}, \end{equation} where $H_0$ is the free Hamiltonian, $V_c$ the Coulomb potential, and the CoM energy $E = p^2/m_N$ with the nucleon mass $m_N = 939$ MeV. The Coulomb amplitude $T_c(\vec{p}, \vec{p}\,')$ is defined as~\cite{Kong:1999sf} \begin{equation} T_{c}(\vec{p}\,', \vec{p}) = \langle\vec{p}\,'\left\vert V_c \right\vert \psi_c^+(\vec{p})\rangle \, . \end{equation} Here, the incoming ($\psi_c^{-}$) and outgoing ($\psi_c^{+}$) Coulomb wave functions are given by \begin{equation} \vert \psi_c^{\pm}(\vec{p})\rangle = \left(1 + G_c^{\pm} V_c \right)\vert \vec{p}\rangle \, . \end{equation} The operator $T_{sc}$ is defined by iterating the strong potential $V_\text{str}$ through $G_c(E)$: \begin{equation} T_{sc} = V_\text{str} + V_\text{str} G_c(E) T_{sc} \, , \label{eqn:TscDef} \end{equation} and $\widetilde{T}_{sc}(\vec{p}\,', \vec{p})$ is the matrix element of $T_{sc}$ between $\psi_c^+$ and $\psi_c^-$: \begin{equation} \widetilde{T}_{sc}(\vec{p}\,', \vec{p}) \equiv \langle \psi_c^-(\vec{p}\,')\left\vert T_{sc} \right\vert \psi_c^+(\vec{p})\rangle \, . \label{Tsc:expression} \end{equation} $T_c$ and $T_{sc}$ can be projected onto partial waves in a fashion similar to their strong-interaction counterparts. More specifically, $\widetilde{T}_{sc}(p, p)$ for $\cs{1}{0}$ is related to the strong phase shift $\delta_{sc}(p)$ as follows: \begin{equation} \widetilde{T}_{sc}(p, p) = -\frac{4\pi}{m_N}e^{2i\delta_c(p)}\frac{e^{2i\delta_{sc}(p)}-1}{2ip}\, , \end{equation} where $p$ is the CoM momentum and $\delta_c(p)$ the Coulomb phase shift. We will restrict ourselves to the $^1S_0$ channel of the $pp$ interaction because the $P$-wave contribution to near-threshold $pp$ fusion is smaller than the $S$-wave one by several orders of magnitude~\cite{Marcucci:2013tda,Acharya:2019zil}. We drop the subscript of orbital angular momentum to simplify the notation.
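As a small numerical aside, the Coulomb phase shift entering the relation above can be evaluated from the standard $S$-wave result $\delta_c = \arg\Gamma(1+i\eta)$, with the $pp$ Sommerfeld parameter $\eta = \alpha m_N/(2p)$ (reduced mass $m_N/2$); these are textbook formulas rather than expressions quoted in this paper. A minimal Python sketch:
\begin{verbatim}
import numpy as np
from scipy.special import loggamma

ALPHA, M_N = 1 / 137.036, 939.0        # values used in the text (MeV)

def sommerfeld_eta(p):
    # eta = alpha * mu / p with the pp reduced mass mu = m_N / 2.
    return ALPHA * M_N / (2.0 * p)

def coulomb_phase_shift(p):
    # S-wave Coulomb phase shift: delta_c = arg Gamma(1 + i*eta).
    return loggamma(1.0 + 1j * sommerfeld_eta(p)).imag

print(coulomb_phase_shift(5.0))        # radians, at p = 5 MeV
\end{verbatim}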
The technique presented in Refs.~\cite{Vincent:1974zz, Walzl:2000cx} is adopted to calculate the strong phase shift $\delta_{sc}(p)$. An artificial infrared cutoff in coordinate space, $R_p$, is introduced, beyond which the strong potential is neglected. One expects $\delta_{sc}$ to be independent of $R_p$ as long as $R_p$ is much larger than the range of $V_\text{str}$. We have verified that when $R_p$ is chosen to be $10$ fm, the relative errors of the phase shifts $\delta_{sc}(p)$ are smaller than $10^{-3}$. The $pp$ scattering wave function $\psi_{pp}(\vec{r}; \vec{p}\,)$ will be constructed by this method. It is useful to show the spin and isospin structure of $\psi_{pp}(\vec{r}; \vec{p}\,)$~\cite{Schiavilla:1998je}: \begin{equation} \psi_{pp}(\vec{r}; \vec{p}) = 4\pi\sqrt{2}e^{i\delta_{sc}}\frac{\chi_0(r;p)}{pr}Y^*_{00}(\hat{p})Y_{00}(\hat{r})\eta^0_0\zeta^1_1 \label{pp:function}, \end{equation} where $\chi_0(r;p)$ is the radial wave function and $\eta_S^{M_S}$ ($\zeta_{T}^{M_T}$) the spin (isospin) piece of the wave function, with $z$-component $M_S$ ($M_T$). The power counting for the neutron-proton ($np$) $\cs{1}{0}$ interaction explained in Ref.~\cite{Long:2012ve} is our starting point for the strong potentials. Later, other schemes were proposed to improve the convergence of ChEFT in $\cs{1}{0}$~\cite{Long:2013cya, SanchezSanchez:2017tws, Peng:2021pvo, Mishra:2021luw, Ren:2017yvw}, but they aim at momenta much higher than those of concern in the present paper. Following Ref.~\cite{Long:2012ve}, we expand the $\cs{1}{0}$ potential $V_\text{str}$ up to N$^2$LO: \begin{align} V_\text{str}^{(0)}(p',p) &= V_{1\pi}(p\,',p) + C^{(0)},\\ V_\text{str}^{(1)}(p',p) &= C^{(1)} + \frac{1}{2}D^{(0)}(p'^2+p^2),\\ V_\text{str}^{(2)}(p',p) &= V_{2\pi}(p',p) + C^{(2)} + \frac{1}{2}D^{(1)}(p'^2+p^2) + \frac{1}{2}E^{(0)}p'^2p^2, \end{align} where the LECs $C$ and $D$ are formally expanded at each order: $C = C^{(0)} + C^{(1)} + C^{(2)}$ and $D = D^{(0)} + D^{(1)}$. To regularize the ultraviolet part of the potentials, we use a separable Gaussian regulator: \begin{equation} V^{\Lambda}(p',p) \equiv \exp\left(-\frac{p'^{\,4}}{\Lambda^4}\right)V(p',p)\exp\left(-\frac{p^{\,4}}{\Lambda^4}\right). \end{equation} Unlike in the $np$ sector, the $pp$ contact interactions are renormalized by the Coulomb force at short distances. On the other hand, the one-pion-exchange (OPE) potential--- the long-range part of the strong interactions--- is unchanged from $np$ to $pp$. Because OPE behaves similarly to the Coulomb force for $r \to 0$, where $r$ is the inter-nucleon distance, one expects the addition of the Coulomb force only to change the renormalization of the contact terms modestly, and the power counting for the $\cs{1}{0}$ $pp$ contact terms to follow the same pattern as that for $np$. In addition to this argument, we will check the power counting against RG invariance by verifying numerically that $\delta_{sc}$ is independent of the cutoff value at each order. The perturbative treatment of higher-order potentials may be most conveniently explained by a generating function. We introduce an auxiliary parameter $x$ and define a potential in the form of $x$ polynomials, with $V_\text{str}^{(n)}$ as the coefficient of $x^n$: \begin{equation} V_\text{str}(p',p;x) = V_\text{str}^{(0)}(p',p) + xV_\text{str}^{(1)}(p',p) + x^2V_\text{str}^{(2)}(p',p) + \mathcal{O}(x^3) \, .
\label{eqn:VstrExpan} \end{equation} This potential results in an $x$-dependent amplitude $\widetilde{T}_{sc}(p', p; x)$ whose Taylor expansion in $x$ leads to the desired corrections to the LO amplitude $\widetilde{T}_{sc}^{(0)}(p', p)$: \begin{equation} \widetilde{T}_{sc}(p',p;x) = \widetilde{T}_{sc}^{(0)}(p',p) + x\widetilde{T}_{sc}^{(1)}(p',p) + x^2\widetilde{T}_{sc}^{(2)}(p',p) + \cdots \, . \label{eqn:TscExpan} \end{equation} One can follow suit to relate the EFT expansion of $\delta_{sc}$ to that of $\widetilde{T}_{sc}$. In the numerical calculations, the following values are taken for the various parameters: the fine-structure constant $\alpha$ = 1/137.036, the axial vector coupling constant $g_A$ = 1.29, the pion decay constant $f_{\pi}$ = 92.4 MeV, and the pion mass $m_{\pi}$ = 138 MeV. To determine the LECs of the $pp$ contact interactions, we fit $\widetilde{T}_{sc}$ to the empirical values of the $pp$ phase shifts provided by the partial-wave analysis (PWA) in Ref.~\cite{NNonline}. At LO, the phase shift at CoM momentum $p = 5.0$ MeV is used as the input. At NLO and N$^2$LO, $p = 68.5$ and 153.2 MeV are added. The $^1S_0$ phase shifts up to N$^2$LO are shown in Fig.~\ref{fig:pp1s0phase}. The convergence of the EFT expansion and the cutoff-variation bands are similar to those for $np$ scattering presented in Ref.~\cite{Long:2012ve}. The smaller shift from $\Lambda =$ 1.5 GeV to 3.2 GeV than from 0.5 GeV to 1.5 GeV indicates cutoff convergence for large $\Lambda$'s at N$^2$LO. \begin{figure}[htbp] \centering \includegraphics[scale=0.8]{phase_shift_vs_k_from_cutoff_500_N2LO.pdf} \caption{The $pp$ $^1S_0$ phase shift as a function of the CoM momentum $p$. The solid circles represent the empirical values from Ref.~\cite{NNonline}. The red and green bands represent the results at LO and NLO, respectively, from $\Lambda$ = 0.5 to 3.2 GeV. At {N$^2$LO}, $\Lambda$ = 0.5, 1.5, and 3.2 GeV are represented by, respectively, dashed, dotted, and dot-dashed curves.} \label{fig:pp1s0phase} \end{figure} \section{Axial currents and matrix elements\label{sec:rme}} We first use NDA to take stock of the axial current operators to be used in the paper. The weak current $\vec{A}$ for the two-nucleon system can be written in the plane-wave basis as: \begin{equation} \langle \vec{P}\,'\; \vec{p}\,'\vert \vec{A}\vert\vec{P}\; \vec{p}\,\rangle=\vec{A}_{1B}(\vec{p}\,',\vec{p}; \vec{q}\,)(2\pi)^3\delta^{(3)}(\vec{p}\,' - \vec{p} - \frac{\vec{q}}{2})+\vec{A}_{2B}(\vec{p}\,',\vec{p};\vec{q}\,) \end{equation} where $\vec{p}$ ($\vec{p}\,'$) denotes the initial (final) relative momentum, $\vec{P}$ ($\vec{P}\,'$) the initial (final) total momentum, $\vec{q} = \vec{P}\,'-\vec{P}$ the momentum carried by the current, and $\vec{A}_{1B}$ ($\vec{A}_{2B}$) the one-body (two-body) current operators. Up to {N$^2$LO} in NDA, only one-body axial current operators contribute to the $pp$ fusion rate. When there is no ambiguity, we drop the momentum-conserving delta function for one-body current operators. With these conventions, the LO axial current takes the following form: \begin{equation} \vec{A}_-^{(0)}(\vec{p}\,',\vec{p}\,) = -g_A\sum_i\vec{\sigma}_i\tau_{i,-}\, , \label{eqn:LOCurrent} \end{equation} where $\vec{\sigma}_i$ is the spin Pauli matrix of nucleon $i$ and $\tau_- \equiv (\tau_x - i\tau_y)/2$ acts on the isospin. By NDA, the NLO axial currents vanish. At {N$^2$LO}, there are two types of contributions. One comes from the {N$^2$LO} correction to the nucleon axial form factor, which is proportional to $\langle r_A^2 \rangle q^2$.
With the axial mean-square radius $\langle r_A^2 \rangle \simeq 0.4\, \text{fm}^2$ and the lepton-deuteron momentum transfer $q \sim 1$ MeV, $\langle r_A^2 \rangle q^2 \sim 10^{-5}$; therefore, this part, although nominally {N$^2$LO}, is negligible. The other part is what we will take into account: the $1/m_N^2$ correction to the nucleon axial vector coupling~\cite{Park:1993jf, Long:2010kt, Baroni:2015uza}, \begin{equation} \vec{A}_-^{(2)}(\vec{p}\,',\vec{p}\,) = \frac{g_A}{2m_N^2}\sum_i \left[\vec{K}^2\vec{\sigma}_i - (\vec{\sigma}_i \cdot \vec{K})\vec{K}\right]\tau_{i,-}, \label{eqn:N2LOCurrent} \end{equation} where $\vec{K} = \frac{1}{2}(\vec{p}+\vec{p}\,')$. Here the $\vec{q}$-dependent terms have been neglected due to the smallness of $q$ in near-threshold reactions. An equivalent expression for $\vec{A}_-^{(2)}$ can be found in Ref.~\cite{Krebs:2016rqz}. For expressions of the axial currents in coordinate space, we refer to Refs.~\cite{Park:2002yp, Baroni:2018fdn}. We will find that the following two-body contact axial current operator, as predicted in Ref.~\cite{PavonValderrama:2014zeq}, is enhanced in comparison with NDA: \begin{equation} \vec{A}_{ct}(\vec{p}\,',\vec{p}\,) = \hat{d}_R\,\vec{\sigma}_1\times\vec{\sigma}_2 \left(\pmb{\tau}_1\times\pmb{\tau}_2\right)_-\, . \label{eqn:Act} \end{equation} The LEC $\hat{d}_R$ is usually determined by fitting to observables of the three-nucleon system, e.g., the tritium $\beta$-decay~\cite{Park:2002yp} or binding energy~\cite{Marcucci:2013tda}, making use of the relation between $\hat{d}_R$ and the LEC $c_D$ that appears in three-nucleon forces, as demonstrated in Ref.~\cite{Gazit:2008ma}. The deuteron wave function is yet another essential ingredient. In coordinate space it has the following form: \begin{equation} \psi_d^M(\vec{r}\,) = \sum_{L=0, 2}\frac{u_L(r)}{r}\mathcal{Y}_{1L1}^{M}(\hat{r})\zeta_0^0 \, , \label{deuteron:function} \end{equation} where $\mathcal{Y}_{JLS}^{M}(\hat{r})$ are the normalized spin-angle wave functions~\cite{1979Theoretical2}. The $S$- and $D$-wave components of the wave function, $u_0(r)$ and $u_2(r)$, are normalized so that \begin{equation} \int_0^\infty dr \left[u_0^2(r)+u_2^2(r)\right] = 1\, . \end{equation} We follow Ref.~\cite{Long:2011xw} regarding the power counting of the chiral forces in the coupled channel ${\cs{3}{1}-\cd{3}{1}}$, which actually coincides with NDA up to {N$^2$LO}. The procedure spelled out in Ref.~\cite{Shi:2022blm} is followed to determine the values taken by the contact LECs in the potentials. We also use the cutoff values adopted in Ref.~\cite{Shi:2022blm}, discarding some cutoff ranges where the numerical accuracy may suffer. The matrix element of the axial current between the $pp$ scattering and deuteron states is usually parametrized as \begin{align} \left< \psi_d^M\vert A_-^i\vert \psi_{pp} \right> = \delta_{Mi}\sqrt{\frac{32\pi}{\gamma^3}}g_AC_0\Lambda_R(p)\, , \label{ME:parametrized} \end{align} where $\gamma$ = 45.7 MeV is the deuteron binding momentum, $C_0 = \sqrt{2\pi\eta/(e^{2\pi\eta}-1)}$ the Gamow penetration factor (not to be confused with the contact coupling constants of the chiral potentials), and $\Lambda_R(p)$ the radial matrix element at the $pp$ relative momentum $p$.
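The Gamow penetration factor defined above is easily evaluated numerically; in the sketch below the Sommerfeld parameter is taken as $\eta = \alpha m_N/(2p)$, the standard expression for $pp$ rather than one quoted in the text:
\begin{verbatim}
import numpy as np

ALPHA, M_N = 1 / 137.036, 939.0        # MeV units, as in the text

def gamow_C0(p):
    # C0 = sqrt(2*pi*eta / (exp(2*pi*eta) - 1)), eta = alpha*m_N/(2p).
    x = 2.0 * np.pi * ALPHA * M_N / (2.0 * p)
    return np.sqrt(x / np.expm1(x))    # expm1 keeps small-x accuracy

# The steep fall-off at low momenta reflects the Coulomb suppression
# that makes near-threshold pp fusion so slow.
for p in (2.17, 10.0, 100.0):
    print(p, gamow_C0(p))
\end{verbatim}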
The contribution to $\Lambda_R$ from the LO one-body axial current operator \eqref{eqn:LOCurrent} reduces to the following integral~\cite{Schiavilla:1998je}: \begin{equation} \Lambda_R( p\, \vert \vec{A}^{(0)}_{-} ) = \sqrt{\frac{\gamma^3}{2p^2}}\frac{e^{i\delta_{sc}}}{C_0}\int_0^\infty dr\, u_0(r)\chi_0(r;p)\, . \label{eqn:LambdaROfA0} \end{equation} The contribution from the {N$^2$LO} axial current~\eqref{eqn:N2LOCurrent} is given by \begin{equation} \begin{split} \Lambda_R( p\, \vert \vec{A}^{(2)}_{-} ) &= \frac{1}{12m_N^2}\sqrt{\frac{\gamma^3}{2p^2}}\frac{e^{i\delta_{sc}^{(0)}}}{C_0} \\ &\quad \times \int_0^\infty dr \left[ u_0'' \chi_0 + u_0\chi_0'' - 2\left(u_0' - \frac{u_0}{r}\right)\left(\chi_0' - \frac{\chi_0}{r}\right) \right]\, . \end{split} \end{equation} Besides the matrix element of the {N$^2$LO} axial current between the LO wave functions, subleading corrections also include the matrix elements of the LO axial current operator $\vec{A}^{(0)}_-$ between the higher-order wave functions. In much the same way the $pp$ scattering amplitude was expanded (see Eqs.~\eqref{eqn:VstrExpan} and \eqref{eqn:TscExpan}), we can obtain the potential-corrected $\Lambda_R$ through numerical Taylor expansions. First, an auxiliary potential is defined by introducing a dummy parameter $x$: \begin{equation} V(x) = V^{(0)} + x V^{(1)} + x^2 V^{(2)} \, . \end{equation} Second, a generating function, $\Lambda_R(p; x |\vec{A}^{(0)}_-)$, is calculated through Eq.~\eqref{eqn:LambdaROfA0}. Its Taylor series around $x = 0$ yields the desired corrections: \begin{equation} \Lambda_R(p; x |\vec{A}^{(0)}_-) = \Lambda_R^{(0)}(p) + x\Lambda_R^{(1)}(p) + x^2\Lambda_R^{\text{pot}}(p) + \cdots\,, \label{ME:expansion} \end{equation} where $\Lambda_R^{(1)}$ ($\Lambda_R^{\text{pot}}$) denotes the correction contributed by the NLO ({N$^2$LO}) potential. In practice, the construction of the auxiliary potential $V(x)$ can be tweaked if higher numerical accuracy can be achieved or more information is needed. For instance, one can use instead \begin{equation} V(x, y, z) = V^{(0)} + x V^{(1)}_{\cs{1}{0}} + y V^{(2)}_{\cs{1}{0}} + z V^{(2)}_{{\cs{3}{1}-\cd{3}{1}}} \, , \end{equation} which separates the iterated contribution of $V^{(1)}$ from the first-order perturbations of $V^{(2)}$ in the two $S$ waves. This breakdown of contributions is unambiguous up to {N$^2$LO}, where different partial-wave potentials do not mix. We will come back to this in Sec.~\ref{sec:results}. \section{Results and Discussions\label{sec:results}} Electroweak reactions can reveal rich structure in nuclei. But multiple low-energy scales often coexist in these reactions, which may call for additional care in EFT analysis. The characteristic scales in $pp$ fusion include the $pp$ initial relative momentum, the deuteron binding momentum $\gamma \simeq 46$ MeV, and the inverse Bohr radius $\alpha m_N \simeq 7$ MeV. At energies of solar-physics interest, $p \lesssim \alpha m_N$, so the Coulomb potential must be treated nonperturbatively. In what follows, we use a conservative estimate of $M_{\text{hi}} \simeq \delta \simeq 300$ MeV--- the delta isobar-nucleon mass splitting. Therefore, the acceptable upper bound for the EFT truncation error at the $\nu$-th order will be $(\gamma/\delta)^{\nu + 1}$. The NDA estimation of the current operators could be upset by enhancement of nonperturbative nuclear dynamics in the initial or final states. We can be alerted to this sort of enhancement by RG analysis as a diagnostic tool.
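Before turning to the numerical tests, we note that the Taylor expansions in the auxiliary parameter $x$ used in Sec.~\ref{sec:rme} (Eqs.~\eqref{eqn:TscExpan} and \eqref{ME:expansion}) reduce in practice to finite-difference extractions of the leading coefficients. A minimal Python sketch follows, in which \texttt{solve\_lambda\_R} stands for a hypothetical full solver that iterates $V(x)$ in the scattering equation and returns $\Lambda_R(p;x)$:
\begin{verbatim}
def taylor_coeffs(f, h=1e-3):
    # Leading Taylor coefficients of f(x) = c0 + c1*x + c2*x^2 + ...
    # at x = 0, by central finite differences; h trades truncation
    # error against round-off error.
    f0, fp, fm = f(0.0), f(h), f(-h)
    c1 = (fp - fm) / (2.0 * h)
    c2 = (fp - 2.0 * f0 + fm) / (2.0 * h * h)
    return f0, c1, c2

# Hypothetical usage: with V(x) = V0 + x*V1 + x^2*V2,
# c1 and c2 are the NLO and N2LO potential corrections, respectively.
# c0, c1, c2 = taylor_coeffs(solve_lambda_R)
\end{verbatim}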
Our strategy for testing the NDA of the axial current operators against RG invariance is similar to that of Ref.~\cite{Shi:2022blm}. Long-range physics, i.e., the contributions from one-body and pion-exchange currents, is assumed to follow NDA, and we study whether those contributions are independent of the cutoff value $\Lambda$. Choosing the initial relative momentum $p = 2.17$ MeV, we illustrate in Fig.~\ref{fig:LONLOcutoff} the cutoff variation of the radial matrix element $\Lambda_R(p)$ at LO and NLO. Cutoff independence is evidently achieved at LO for large cutoff values. The NLO correction oscillates with a decaying magnitude; from peak to trough it amounts to $(2.68 - 2.64)/2.65 \simeq 1.5\%$, comparable to or smaller than the theoretical uncertainty expected of a legitimate NLO, $(\gamma/\delta)^2 \simeq 3\%$. Therefore, we conclude that both LO and NLO are sufficiently insensitive to the cutoff value. \begin{figure}[htbp] \centering \includegraphics[scale=1.2]{lo_and_nlo.pdf} \caption{The LO ($\Lambda_R^{(0)}$) and NLO ($\Lambda_R^{(0)} + \Lambda_R^{(1)}$) radial matrix elements for $p=2.17$ MeV as functions of the cutoff value $\Lambda$. } \label{fig:LONLOcutoff} \end{figure} We compare our NLO result with the rates calculated previously in the literature by choosing $p = 0$. With $\Lambda = 1$ GeV and the aforementioned truncation error of $3\%$, the value of the radial matrix element is $2.65 \pm 0.08$. A potential-model calculation gives $\Lambda^2_R(0)=7.052 \pm 0.007$~\cite{Schiavilla:1998je}, translating to $\Lambda_R(0) = 2.656$. The pionless EFT calculation of Ref.~\cite{Chen:2012hm} gives $\Lambda_R(0) = 2.648$, and the NDA-based ChEFT calculation of Ref.~\cite{Acharya:2016kfl} gives $\Lambda_R(0) = 2.662$. Our NLO result agrees with these calculations within the uncertainty. At {N$^2$LO}, the cutoff variation is much more significant. We break down the {N$^2$LO} corrections at $p = 2.17$ MeV in Fig.~\ref{fig:N2LOBreakdown} according to the source that generates them: ``$\cs{3}{1}$'' is generated by the {N$^2$LO} deuteron wave function, ``$\cs{1}{0}$'' by the {N$^2$LO} $pp$ scattering wave function, and ``$\vec{A}^{(2)}$'' by the {N$^2$LO} axial current operator acting on the LO wave functions. The largest of these variations is due to the {N$^2$LO} ${\cs{3}{1}-\cd{3}{1}}$ potential, showing a deviation with respect to LO as large as $40\%$, based on the values of $\Lambda_R$ for $\Lambda = 1.3$ and 1.6 GeV. The $\cs{1}{0}$ potential causes an appreciable variation too, with the fluctuation amounting to an uncertainty of $5\%$, based on the values of $\Lambda_R$ for $\Lambda=1.7$ and 2.7 GeV. $\vec{A}^{(2)}$ only probes the cutoff variation of the LO wave functions, which is negligible in comparison with the other two contributions. We note that the variations of both $\cs{3}{1}$ and $\cs{1}{0}$ are much larger than the acceptable {N$^2$LO} uncertainty $(\gamma/\delta)^3 \simeq 0.4\%$. \begin{figure}[htbp] \centering \includegraphics[scale=1.2]{N2LO_E5keV.pdf} \caption{The {N$^2$LO} corrections to the radial matrix element $\Lambda_R$ as functions of the cutoff $\Lambda$ at $p=2.17$ MeV. } \label{fig:N2LOBreakdown} \end{figure} The sensitivity to the cutoff value at {N$^2$LO} suggests that a modification of the NDA-based power counting is in order. More specifically, we need to assign the contact axial current $\vec{A}_{ct}$ to {N$^2$LO} instead of the NDA counting of {N$^3$LO}.
This is in approximate agreement with the conclusion of Ref.~\cite{PavonValderrama:2014zeq}, where $\vec{A}_{ct}$ was found to be N$^{7/4}$LO based on an analysis using the asymptotic wave functions at short distances. We now demonstrate that $\vec{A}_{ct}$ indeed renormalizes $\Lambda_R$ at {N$^2$LO}. To determine $\hat{d}_R$, we require the recommended value $\Lambda_R(p = 0) = 2.652$ of Ref.~\cite{Adelberger:2010qa} to be reproduced at {N$^2$LO}. The prediction of $\Lambda_R$ at other relative momenta is then made for various cutoff values. As shown in Fig.~\ref{fig:RenormalizedNNLO}, $\Lambda_R$ is evidently renormalized. \begin{figure} \centering \includegraphics[scale=1.2]{renorm_N2LO_ME_vs_cutoff.pdf} \caption{Renormalized {N$^2$LO} $\Lambda_R$ ($\Lambda_R^{(0)} + \Lambda_R^{(1)} + \Lambda_R^{(2)}$) for various CoM momenta $p$ as a function of the cutoff value. The solid circles, squares, and triangles correspond to $p = 2.17$, 10, and 100 MeV, respectively. } \label{fig:RenormalizedNNLO} \end{figure} \section{Summary\label{smry}} We continue the RG-based analysis of nuclear electroweak currents initiated in Ref.~\cite{Shi:2022blm}. Proton-proton fusion is the focus of the present paper. We have calculated the nuclear matrix element of the axial current for this process up to {N$^2$LO}. The chiral forces responsible for the $pp$ $S$-wave interactions and for the deuteron wave function were constructed according to the power counting laid out in Refs.~\cite{Long:2012ve, Long:2011xw}. Because the incoming $pp$ state is near threshold, the Coulomb force is fully iterated at LO. We have verified numerically that the inclusion of the Coulomb potential does not spoil RG invariance, although the $\cs{1}{0}$ contact terms need to be re-determined by fitting to $pp$ phase shifts. The novelty of our calculation is the perturbative treatment of subleading chiral nuclear forces, as opposed to an indiscriminate summation of LO and higher orders. Thanks in large part to strictly perturbative calculations, we were able to isolate the contributions from different partial waves and to investigate their cutoff dependence individually. At LO and NLO, no significant cutoff variations were found, and our NLO value of the radial matrix element agrees with previous calculations within the EFT uncertainty. At {N$^2$LO}, the chiral force in ${\cs{3}{1}-\cd{3}{1}}$ was found to generate the most cutoff-sensitive contribution. (Interestingly, this parallels Ref.~\cite{Shi:2022blm}, where the ${\cs{3}{1}-\cd{3}{1}}$ potential at {N$^2$LO} was also found to drive a significant cutoff variation.) As a result, we concluded that one of the two-body contact axial current operators, defined as $\vec{A}_{ct}$ in Eq.~\eqref{eqn:Act}, must appear no later than {N$^2$LO}, one order lower than assessed by NDA. Renormalized by $\vec{A}_{ct}$, $\Lambda_R$ was shown to fulfill RG invariance at {N$^2$LO}. Our finding partly echoes the analysis of contact electroweak currents in Ref.~\cite{PavonValderrama:2014zeq}, where $\vec{A}_{ct}$ was assigned N$^{7/4}$LO. The most immediate consequence of promoting $\vec{A}_{ct}$ concerns the theoretical uncertainty of $pp$ fusion in chiral EFT. Without a reliable input for its LEC $\hat{d}_R$, the $pp$ fusion cross section can be predicted only up to NLO, with an uncertainty conservatively estimated to be $(\gamma/\delta)^2 \simeq 3\%$. \acknowledgments We thank Chen Ji for useful discussions.
This work was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 11735003 (BL) and the Fundamental Research Funds for the Central Universities (LS).
\section{Introduction} \label{s:introduction} A variety of general and domain-specific knowledge graphs have been proposed to represent (scholarly) knowledge in a structured manner~\cite{spacecraftknowledgegraph,Zhao2018Architecture}. General purpose knowledge graphs include DBpedia\footnote{\url{https://www.dbpedia.org}}~\cite{dbpedia}, Wikidata\footnote{\url{https://www.wikidata.org/wiki/Wikidata:Main\_Page}}~\cite{wikidata}, YAGO~\cite{yago}, etc., whereas domain-specific infrastructures include approaches in Cultural Heritage~\cite{domainspecific}, KnowLife in Life Sciences~\cite{knowlife}, Hi-Knowledge in Invasion Biology\footnote{\url{https://hi-knowledge.org}}~\cite{heger2013conceptual,enders2020conceptual}, COVID-19 Air Quality Data Collection\footnote{\url{https://covid-aqs.fz-juelich.de}}, Papers With Code in Machine Learning\footnote{\url{https://paperswithcode.org}}, and Cooperation Databank in Social Sciences\footnote{\url{https://cooperationdatabank.org}}~\cite{spadaro2020cooperation}, among others. In addition, knowledge graph technologies have also been employed to describe software packages in a structured manner~\cite{kelly,Abdelaziz_toolkit}. Extending the state of the art, we propose an approach for scholarly knowledge extraction from published software packages by static analysis of package contents, i.e., (meta-)data and software (in particular, Python scripts), and we represent the extracted knowledge in a knowledge graph. The main purpose of this knowledge graph is to capture information about the materials and methods used in scholarly work described in research articles. We address the following research question: Can structured scholarly knowledge be automatically extracted from published software packages? Our approach consists of the following steps: \begin{enumerate} \item \textit{Mining software packages} deposited in Zenodo\footnote{\url{https://zenodo.org}} using its REST API\footnote{\url{https://developers.zenodo.org}} and analyzing the API response to extract the linked metadata, i.e., associated scholarly articles. We complement the approach by leveraging the Software Metadata Extraction Framework (SOMEF) to parse the README files and extract further related metadata (e.g., software name, description, programming languages used). \item \textit{Performing static code analysis} to extract information about the procedures performed on data. We utilize Abstract Syntax Tree (AST) representations to statically analyze program code and identify the operations performed on data. \item \textit{Identifying scholarly knowledge} by performing a keyword-based search of the extracted information in article full texts. Thus, among all the information extracted from software packages, we identify that which is scholarly knowledge. \item \textit{Constructing a knowledge graph} of the scholarly knowledge extracted from software packages. For this purpose, we leverage the Open Research Knowledge Graph (ORKG)\footnote{\url{https://www.orkg.org/orkg/}}~\cite{orkg}, a production research infrastructure that supports producing and publishing machine actionable scholarly knowledge. \end{enumerate} \section{Related Work} \label{s:related-work} Several approaches have been suggested to retrieve metadata from software repositories. Mao et al.~\cite{mao} proposed the Software Metadata Extraction Framework (SOMEF) to extract metadata from software packages published on GitHub.
Specifically, the framework employs machine learning-based methods to extract the repository name, software description, citations, reference URLs, etc., from README files and to represent the metadata in structured formats (JSON-LD, JSON, and RDF). SOMEF was later extended to extract additional metadata and auxiliary files (e.g., notebooks, Dockerfiles) from software packages~\cite{kelly}. Moreover, the extended work also supports creating a knowledge graph of the parsed metadata, thus improving the search for software deposited in repositories. Abdelaziz et al.~\cite{Abdelaziz2020ADO} proposed CodeBreaker, a knowledge graph with information about 1.3 million Python scripts published on GitHub. The graph was embedded in an IDE to recommend code functions while writing software. Similarly, GraphGen4Code~\cite{Abdelaziz_toolkit} is a knowledge graph with information about software included in GitHub repositories. It was generated by analyzing the functionalities of Python scripts and linking them with natural language artefacts (documentation and forum discussions on StackOverflow and StackExchange). The knowledge graph contains 2 billion triples. Several other machine learning-based approaches have been proposed for searching software scripts~\cite{husain2020codesearchnet} and for code summarization~\cite{ahmad-etal-2020-transformer,iyer-etal-2016-summarizing}. The Pydriller~\cite{PyDriller} and GitPython\footnote{\url{https://github.com/gitpython-developers/GitPython}} frameworks were proposed to mine information from GitHub repositories, including source code, commits, branch differences, etc. Similarly, ModelMine~\cite{modelmine} mines and analyzes models included in repositories. Vagavolu et al.~\cite{Vagavolu} presented an approach that leverages Code2vec~\cite{le2014distributed} and includes semantic graphs with Abstract Syntax Trees (ASTs) for performing different software engineering tasks. The authors of~\cite{allamanis2017learning} presented an AST-based approach for code representation and considered code data flow mechanisms to suggest code improvements. \section{Methodology} \label{s:methodology} In this section, we present our methodology for automatically extracting \emph{scholarly} knowledge from software packages and building a knowledge graph from the extracted (meta)data. Figure~\ref{fig1} provides an overview of the key components. \begin{comment} \begin{enumerate} \item \textit{Mining software packages}: This step includes two tasks: (i) identifying relevant data sources and retrieving scientific software packages from these sources via APIs; and (ii) retrieving metadata from the software packages with machine learning-based services. The extracted metadata is analyzed to find the research articles associated with software packages. \item \textit{Static code analysis}: In this step, we generate AST-based structured code representations and extract information of interest, in particular the list of data manipulation processes. \item \textit{Identifying scholarly knowledge}: The purpose of this step is to match information extracted in the previous step with the full text of linked articles and thus constrain the extracted information to scholarly knowledge. \item \textit{Building the knowledge graph}: In this last step, we represent scholarly knowledge in machine actionable form.
The resulting knowledge graph includes information about the mined software metadata and code semantics, with links among software packages, linked articles, and the semantics of code describing published research contributions. \end{enumerate} Further details of these components are elaborated in the following sections. \end{comment} \begin{figure*}[t!] \includegraphics[width=\textwidth]{fig2.png} \caption{Pipeline for constructing a knowledge graph of scholarly knowledge extracted from software packages: 1) Mining software packages from the Zenodo repository using its REST API; 2) Extracting software metadata by analyzing the Zenodo API results as well as the GitHub API, using SOMEF; 3) Performing static code analysis using AST representations of software to extract code semantics, in particular operations on data; 4) Performing keyword-based search in article full texts to identify scholarly knowledge; 5) Knowledge graph construction with scholarly knowledge extracted from software packages.} \label{fig1} \end{figure*} \subsection{Mining Software Packages} We mine software packages from the Zenodo repository by leveraging its REST API. The metadata of each package is analyzed to retrieve its DOI and metadata about related versions and associated scholarly articles. The versions of software packages are retrieved by interpreting the \texttt{relation: isVersionOf} metadata, whereas the DOI of the linked article, if available, is fetched using the \texttt{relation: cites} or \texttt{relation: isSupplementTo} metadata. We also leverage the Software Metadata Extraction Framework (SOMEF) and the GitHub API to extract additional metadata from software packages, in particular the software name, description, programming languages used, and GitHub URL. Since not all software packages include the \texttt{cites} or \texttt{isSupplementTo} relations in their metadata, we utilize SOMEF to parse the README files of software packages as an additional approach to extract the DOI of the related scholarly article. \begin{figure*}[t!] \includegraphics[width=\textwidth]{fig3.png} \caption{Static code analysis: Exemplary Python script (shortened) included in a software package. The script lines highlighted with the same color show the different procedural changes that a particular variable has undergone.} \label{fig3} \end{figure*} \subsection{Static Code Analysis} We utilize Abstract Syntax Tree (AST) representations for the static analysis of Python scripts included in software packages. An AST provides a structured representation of a script, omitting unnecessary syntactic details (e.g., semicolons, commas, and comments). Our goal is to extract information about the data used in scripts and the procedures performed on that data. Our Python-based module sequentially reads the scripts contained in software packages and generates the AST. The implemented procedures and variables are tokenized and represented as nodes in the tree, which facilitates the analysis of the code flow. Thus, by traversing the tree we extract information about the data used in the scripts, the procedures performed on the data and, if available, the output data (a minimal sketch of this step is shown at the end of this section). \begin{figure*}[t!] \includegraphics[width=\textwidth]{fig6.png} \caption{Abstract Syntax Tree (AST) of the script shown in Fig.~\ref{fig3}. For simplicity, the AST is shown only for Lines 1, 10 and 16.} \label{fig6} \end{figure*} Fig.~\ref{fig3} shows a Python script included in a software package\footnote{\url{https://zenodo.org/record/5874955}}.
The script shows an example in which \texttt{Sample.csv} and \texttt{Reference.csv} are used as input data, the operation \texttt{LinearSVR} is then performed on the data, and finally the resulting data \texttt{score.csv} is generated. Fig.~\ref{fig6} shows the AST of the Python script (Fig.~\ref{fig3}) created using the Python standard library's \texttt{ast} module\footnote{\url{https://docs.python.org/3/library/ast.html}}. For simplicity, we show the AST of lines 1, 10, and 16. In the tree structure, the name of a node represents the functionality of the corresponding line of the script. For example, line 1 performs a task that reads data and assigns it to a variable; therefore, the relevant node in the tree is labelled \texttt{Assign}. We retrieve all leaf nodes since they represent variables, their values, and procedures. Analyzing these script semantics, we can then find the flow of data between procedures. We investigate the flow of the variables that contain the input data, i.e., we examine which operations used a particular variable as a parameter. \subsection{Identifying Scholarly Knowledge} Not all information extracted from software packages and AST-analyzed program code is scholarly knowledge. Information is scholarly knowledge if it is included in a scholarly article. Hence, we filter the information extracted from software packages for information referred to in the article citing the software package. For this, we employ keyword-based search. Specifically, we search for the terms extracted from AST-analyzed program code in the related article's full text. Assuming that the DOI of the related article has been identified, we fetch the PDF version of the article by utilizing the Unpaywall REST API\footnote{\url{https://api.unpaywall.org/v2/10.1186/s12920-019-0613-5?email=unpaywall\[email protected]}}. We make use of the Unpaywall API because, contrary to DOI metadata, it provides the URL of the PDF version of scholarly articles. In our example (Fig.~\ref{fig3}), the extracted terms (\texttt{Sample}, \texttt{Reference}, \texttt{read\_csv}, \texttt{LinearSVR}, \texttt{svr.fit}, and \texttt{to\_csv}) are searched in the PDF, and we find that \texttt{Sample}, \texttt{Reference} and \texttt{LinearSVR} are cited in the scholarly article. We thus assume that the extracted information is scholarly knowledge. \subsection{Knowledge Graph Construction} We now construct the knowledge graph with the scholarly knowledge obtained in the analysis of software packages. For this, we leverage the Open Research Knowledge Graph (ORKG)~\cite{orkg}. The ORKG aims to represent scholarly articles in a machine actionable and structured form. Abstractly speaking, the ORKG represents research contributions describing key results, the materials and methods used to obtain the results, and the addressed research problem. \begin{figure*}[t!] \includegraphics[width=\textwidth]{fig5.png} \caption{Knowledge graph depicting the scholarly knowledge extracted from a software package related to an article, describing key aspects (e.g., method used) of a research contribution of the work described in the article.} \label{fig4} \end{figure*} The scholarly information extracted from software packages is organized in triples and ingested into ORKG using its REST API. Fig.~\ref{fig4} shows the resulting knowledge graph for a paper and its research contribution\footnote{\url{https://orkg.org/paper/R209873}}. The figure also shows the metadata of the corresponding software package\footnote{\url{https://orkg.org/content-type/Software/R209880}}.
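To make the static-analysis step concrete, the following simplified Python sketch shows how the \texttt{ast} module can recover the data flow of the example in Fig.~\ref{fig3}. It is illustrative only; our module additionally handles scoping, variable aliases, and the flow between procedures:
\begin{verbatim}
import ast

def extract_data_flow(source):
    """Collect called operations plus .csv input/output files."""
    inputs, calls, outputs = [], [], []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            name = func.attr if isinstance(func, ast.Attribute) \
                   else getattr(func, "id", "?")
            calls.append(name)
            for arg in node.args:  # string args ending in .csv = data files
                if isinstance(arg, ast.Constant) and \
                   isinstance(arg.value, str) and arg.value.endswith(".csv"):
                    (outputs if name == "to_csv" else inputs).append(arg.value)
    return inputs, calls, outputs

code = ("import pandas as pd\n"
        "df = pd.read_csv('Sample.csv')\n"
        "df.to_csv('score.csv')\n")
print(extract_data_flow(code))
# (['Sample.csv'], ['read_csv', 'to_csv'], ['score.csv'])
\end{verbatim}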
\section{Results and Discussion} \label{s:discussion} At the time of writing, more than 80,000 software packages are available on Zenodo. To expedite the execution process, we discard packages larger than 400 MB, which leaves 52,236 software packages. We further process only those software packages that are also available on GitHub, i.e., 40,239 packages. We analyze the metadata of the software packages and the respective README files and find a total of 6,221 research articles, of which 642 are associated with the related software packages in metadata through the \texttt{cites} or \texttt{isSupplementTo} relations. The remaining 5,579 articles are extracted by analyzing the README files of the software packages using SOMEF. We only analyze software packages that include Python scripts and have linked scholarly articles, i.e., 2,172 packages. Table~\ref{table1} summarizes these statistics. \begin{table*}[] \caption{Statistics about the (scholarly) information extracted from software packages.} \centering \begin{tabular}{|p{5cm}|p{5cm}|} \hline \textbf{Entity} & \textbf{Total} \\ \hline \textit{Software packages} & 52,236 \\ \hline \textit{Papers} & Explicit links in metadata: 642; SOMEF-based link extraction: 5,579 (Total: 6,221) \\ \hline \textit{GitHub URLs} & 40,239 \\ \hline \textit{Python-based software packages, linked with articles} & 2,172 \\ \hline \textit{Analyzed Python scripts} & 67,936 \\ \hline \end{tabular} \label{table1} \end{table*} Out of the 6,221 articles, 4,328 are described in ORKG; for the remaining articles, the DOIs in the README files were not parsed correctly. The articles added to ORKG include ORKG research contribution descriptions linking the software package and including information about the computational methods and data used in the research, extracted by analyzing the software packages. \paragraph{Software semantics and Named Entity Recognition (NER) models} Numerous approaches exist for the extraction of scholarly knowledge from articles using machine learning and natural language processing, including scientific named entity recognition~\cite{Jiang,Coreference} and sentence classification~\cite{Crossdomain}. These approaches process the entire text to extract the essential entities in scholarly articles, which is costly in terms of data collection and training. Moreover, such approaches require large amounts of training data to achieve acceptable performance. We argue that extracting scholarly knowledge from software packages as proposed here is a significant step towards the automated and cheap construction of scholarly knowledge graphs. Instead of extracting scholarly entities from full texts using machine learning models, the scholarly knowledge is extracted from related software packages with more structured data. \paragraph{Future directions} In future work, we aim to develop a pipeline that automatically executes the software packages that contain scholarly knowledge. Such an approach can be integrated into software repositories (Zenodo, Figshare) to automatically execute the published software and determine whether the (extracted) scholarly knowledge is reproducible. \section{Conclusions} \label{s:conclusion} Our work is an important step towards the automated and scalable mining of scholarly knowledge from published software packages and the construction of a knowledge graph from the extracted data.
The resulting knowledge graph holds the links between articles and software packages as well as, most interestingly, descriptions of the computational methods and materials used in the research work presented in the articles. Evaluated on Zenodo, our approach can be extended to other repositories, e.g., Figshare, as well as to software in languages other than Python, e.g., R, Java, JavaScript, and C++, potentially further increasing the number of articles and related scholarly knowledge added to ORKG. \section*{Acknowledgment} This work was co-funded by the European Research Council for the project ScienceGRAPH (Grant agreement ID: 819536) and TIB--Leibniz Information Centre for Science and Technology. \bibliographystyle{splncs04}
\section{Introduction} The generic Minimal Supersymmetric Standard Model (MSSM) contains a plethora of new sources of flavour violation, which reside in the supersymmetry-breaking sector. Especially the additional flavour violation in the squark sector can be dangerously large, because the squark-quark-gluino vertex, which involves the strong coupling constant, is in general not flavour diagonal. This potential failure of the MSSM to describe the small flavour violation observed in experiment is known as the ``SUSY flavour problem''. Experiment gives us only partial information about the quark mass matrices and therefore also about the Yukawa matrices: not the whole matrices are known, but only their singular values (the physical masses) and the misalignment between the rotations of the left-handed fields needed to obtain the mass eigenbasis (the CKM matrix). This is the reason why it is useful to work in the so-called super-CKM basis. We arrive at the super-CKM basis by applying to the squark fields the same rotations which are needed to diagonalize the quark mass matrices: \begin{equation} \tilde q^{int} = \left( {\begin{array}{*{20}c} {\tilde q_L^{{\mathop{\rm int}} } } \\ {\tilde q_R^{{\mathop{\rm int}} } } \\ \end{array}} \right) \to \tilde{q}^{SCKM}=\left( {\begin{array}{*{20}c} {U_L^q } & {0} \\ {0} & {U_R^q } \\ \end{array}} \right)\cdot\left( {\begin{array}{*{20}c} {\tilde q_L^{{\mathop{\rm int}} } } \\ {\tilde q_R^{{\mathop{\rm int}} } } \\ \end{array}} \right) = \left( {\begin{array}{*{20}c} {\tilde q_L^{SCKM} } \\ {\tilde q_R^{SCKM} } \\ \end{array}} \right) \end{equation} Here the superscript ``int'' denotes interaction eigenstates, and the matrices $U_{L,R}^q$ are determined by the requirement that they diagonalize the tree-level quark mass matrices: \begin{equation} \renewcommand{\arraystretch}{1.4} \begin{array}{c} U_L^{u\dag } {\bf{m}}_u^{(0)} U_R^{u} = {\bf{m}}_u^{\left( D \right)},\qquad\qquad U_L^{d\dag } {\bf{m}}_d^{(0)} U_R^{d} = {\bf{m}}_d^{\left( D \right)} \end{array} \label{defrot} \end{equation} In the super-CKM basis the squark mass matrices in the down and in the up sector contain the bilinear terms ${\rm M}_{\tilde {q}}^{2}$, ${\rm M}_{\tilde {u}}^{2}$ and ${\rm M}_{\tilde {d}}^{2}$ as well as the trilinear terms ${{A}}^{u,d}$, which originate from the soft SUSY breaking and are in general flavour non-diagonal. All other terms are flavour diagonal in the super-CKM basis and originate from the spontaneous breakdown of ${SU(2)}_L$.
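The rotations in \eq{defrot} are nothing but the factors of a singular value decomposition of the mass matrix. As a purely numerical illustration (a minimal Python sketch, with a random toy matrix in place of a physical ${\bf{m}}_q^{(0)}$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
m = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))  # toy matrix

U, s, Vh = np.linalg.svd(m)     # m = U @ diag(s) @ Vh
U_L, U_R = U, Vh.conj().T       # so that U_L^dag @ m @ U_R = diag(s)

assert np.allclose(U_L.conj().T @ m @ U_R, np.diag(s))
\end{verbatim}
The singular values play the role of the physical masses, and the misalignment $U_L^{u\dag} U_L^{d}$ of the two left-handed rotations is the CKM matrix.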
Since the squark mass matrices are hermitian, they can be diagonalised by a unitary transformation of the squark fields: \begin{equation} \tilde{q}^{SCKM}\to\tilde{q}^{mass}=W^{\tilde q}\cdot\tilde{q}^{SCKM},\qquad\qquad M_{\tilde q}^{2\,(D)}=W^{\tilde q \dagger} M_{\tilde q}^2 W^{\tilde q} \end{equation} In the conventions of Ref.~\cite{Gabbiani:1996} the full $6\times 6$ mass matrix is parametrized by \begin{equation} \renewcommand{\arraystretch}{1.4} M_{\tilde q}^2 = \left( {\begin{array}{*{20}c} {\left(M_{1L}^{\tilde d}\right)^2} & {\Delta _{12}^{\tilde{d}\,LL} } & {\Delta _{13}^{\tilde{d}\,LL} } & {\Delta _{11}^{\tilde{d}\,LR} } & {\Delta _{12}^{\tilde{d}\,LR} } & {\Delta _{13}^{\tilde{d}\,LR} } \\ {{\Delta _{12}^{\tilde{d}\,LL}}^* } & {\left(M_{2L}^{\tilde d}\right)^2 } & {\Delta _{23}^{\tilde{d}\,LL} } & {{\Delta _{12}^{\tilde{d}\,RL}}^* } & {\Delta _{22}^{\tilde{d}\,LR} } & {\Delta _{23}^{\tilde{d}\,LR} } \\ {{\Delta _{13}^{\tilde{d}\,LL}}^* } & {{\Delta _{23}^{\tilde{d}\,LL} }^*} & {\left(M_{3L}^{\tilde d}\right)^2 } & {{\Delta _{13}^{\tilde{d}\,RL}}^* } & {{\Delta _{23}^{\tilde{d}\,RL}}^* } & {\Delta _{33}^{\tilde{d}\,LR} } \\ {{\Delta _{11}^{\tilde{d}\,LR}}^* } & {\Delta _{12}^{\tilde{d}\,RL} } & {\Delta _{13}^{\tilde{d}\,RL} } & {\left(M_{1R}^{\tilde d}\right)^2 } & {\Delta _{12}^{\tilde{d}\,RR} } & {\Delta _{13}^{\tilde{d}\,RR} } \\ {{\Delta _{12}^{\tilde{d}\,LR}}^* } & {{\Delta _{22}^{\tilde{d}\,LR}}^* } & {\Delta _{23}^{\tilde{d}\,RL} } & {{\Delta _{12}^{\tilde{d}\,RR}}^* } & {\left(M_{2R}^{\tilde d}\right)^2 } & {\Delta _{23}^{\tilde{d}\,RR} } \\ {{\Delta _{13}^{\tilde{d}\,LR}}^* } & {{\Delta _{23}^{\tilde{d}\,LR}}^* } & {{\Delta _{33}^{\tilde{d}\,LR}}^* } & {{\Delta _{13}^{\tilde{d}\,RR}}^* } & {{\Delta _{23}^{\tilde{d}\,RR}}^* } & {\left(M_{3R}^{\tilde d}\right)^2 } \\ \end{array}} \right)\label{massmatrix} \end{equation} Anticipating the smallness of the off-diagonal elements $\Delta _{ij}^{\tilde q\,XY}$ (with $X,Y=L$ or $R$), it is possible to treat them perturbatively as squark mass insertions~\cite{Gabbiani:1996,Hall:1985dx,Misiak:1997ei,Buras:1997ij}. It is customary to define the dimensionless quantities \begin{equation} \delta^{q \,XY} _{ij} = \frac{\Delta^{\tilde q\, XY}_{ij}}{\frac{1}{6}\sum\limits_s {\left[M_{\tilde q}^2\right]_{ss}}} .\label{defde} \end{equation} Note that the chirality-flipping entries $\delta^{q \,XY} _{ij}$ with $X\neq Y$, even though they are dimensionless, do not stay constant if all SUSY parameters are scaled by a common factor $a$, but rather decrease like $1/a$. In the current era of precision flavour physics, stringent bounds on the parameters $\delta^{q \,XY} _{ij}$ have been derived from FCNC processes by requiring that the gluino-squark loops do not exceed the measured values of the considered observables~\cite{Gabbiani:1996,Hagelin:1992,Ciuchini:1998ix,Borzumati:1999,Becirevic:2001,Silvestrini:2007,Ciuchini:2007cw}. We will show in the next section that even more stringent bounds on these quantities can be obtained by applying a fine-tuning argument which assumes the absence of large accidental cancellations between different contributions to the CKM matrix. \section{Renormalization of the CKM matrix} The simplest diagram (and, at least for our discussion, the most important one) in which this new flavour and chirality violation induced by the squark mass matrices enters is a potentially flavour-changing self-energy with a squark and a gluino as virtual particles.
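Such a self-energy involves the flavour-changing squark propagator. To first order in the off-diagonal elements (the mass-insertion expansion implicit in \eq{defde}), it reads schematically (up to index conventions)
\begin{displaymath}
\left\langle \tilde q^{X}_i \, \tilde q^{Y\,*}_j \right\rangle \;=\; \frac{i\,\delta_{ij}\,\delta_{XY}}{k^2 - m_{\tilde q_i}^2} \;+\; \frac{i\,\Delta^{\tilde q\,XY}_{ij}}{\left(k^2 - m_{\tilde q_i}^2\right)\left(k^2 - m_{\tilde q_j}^2\right)} \;+\; O\!\left(\Delta^2\right) ,
\end{displaymath}
so that each off-diagonal element appears exactly once in the leading flavour-changing amplitude; this first-order step underlies the bounds on the $\delta^{q\,XY}_{ij}$ discussed below.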
Since the SUSY particles are much heavier than the five lightest quarks, it is possible in the calculation of these diagrams to expand in the external momentum, unless one external quark is the top. In the following we consider the self-energies with only light external quarks; the case with a top quark as an external quark is discussed in Ref.~\cite{Crivellin:2008mq}. Direct computation of the diagram gives: \begin{eqnarray} \Sigma^{q\,LR}_{fi} (p^2=0) = \frac{{2m_{\tilde g} }}{{3\pi }}\alpha _s (M_{\rm SUSY}) \sum_{s = 1}^6 W_{f + 3,s}^{\tilde q} W_{is}^{\tilde{q}*} B_0 \left( {m_{\tilde g} ,m_{\tilde q_s } } \right) \\ \Sigma^{q\,RL}_{fi} (p^2=0) = \frac{{2m_{\tilde g} }}{{3\pi }}\alpha _s (M_{\rm SUSY}) \sum_{s = 1}^6 W_{f,s}^{\tilde q} W_{i+3,s}^{\tilde{q}*} B_0 \left( {m_{\tilde g} ,m_{\tilde q_s } } \right) \label{selbstenergie} \end{eqnarray} For our definition of the loop function $B_0$ see the appendix of Ref.~\cite{Crivellin:2008mq}. This self-energy has several important properties: \begin{itemize} \item It is finite and independent of the renormalization scale. \item It is always chirality-flipping. \item It does not decouple but rather converges to a constant if all SUSY parameters go to infinity. \item It satisfies $\Sigma^{q\,LR}_{fi}=\Sigma^{q\,RL\,*}_{if}$. \item It is chirally enhanced by an approximate factor of $\frac{\left|A^q_{fi}\right|}{M_{SUSY} \left|Y^q_{fi}\right|}$ or $\frac{v \tan\beta}{M_{SUSY}}$ compared to the tree-level quark coupling. These factors may compensate for the loop suppression factor of $1/(16 \pi^2)$. \end{itemize} In the case when this self-energy is flavour conserving, it renormalizes the corresponding quark mass in a rather trivial way: \begin{equation} m_{q_i}^{(0)}\to m_{q_i}=m_{q_i}^{(0)}+\Sigma^{q\,LR}_{ii} \end{equation} Since the self-energy is finite, the introduction of a counter-term is optional. In minimal renormalization schemes the counter-term is absent, and in the on-shell scheme it just equals $-\Sigma^{q\,LR}_{ii}$. Since we will later consider the possibility that the light quark masses are generated exclusively via these loops, meaning $m_{q_i}=\Sigma^{q\,LR}_{ii}$, it is most natural and intuitive to choose a minimal renormalization scheme like $\overline{\rm MS}$. \begin{figure} \includegraphics[width=1\textwidth]{W-Diagramm.eps} \caption{One-loop corrections to the CKM matrix from the down and up sectors contributing to $\Delta U_L^d$ and $\Delta U_L^u$ in \eq{physv}, respectively. \label{fig:W}} \end{figure} The renormalization of the CKM matrix is a bit more involved. There are two possible contributions, the self-energy diagrams and the proper vertex correction. The vertex diagrams involving a $W$ coupling to squarks are not chirally enhanced and moreover suffer from gauge cancellations with non-enhanced pieces of the self-energies. Therefore we only need to consider self-energies, just as in the case of the electroweak renormalization of $V$ in the SM~\cite{Denner:1990}. The two diagrams shown in Fig.~\ref{fig:W} contribute at the one-loop level. According to Ref.~\cite{Logan:2000iv} they can be treated in the same way as one-particle-irreducible vertex corrections.
Computing these diagrams, we obtain the following corrections to the CKM matrix: \begin{equation} V^{(0)} \to V = \left(1+\Delta U_L^{u\dag}\right)V^{(0)}\left(1+\Delta U_L^d\right) \label{physv} \end{equation} with \begin{equation} \renewcommand{\arraystretch}{1.4} \Delta U_L^q \,=\, \left( {\begin{array}{*{6}c} 0 & {\frac{1}{{m_{q_2 }}} {\Sigma _{12}^{q\,LR} } } & {\frac{1}{{m_{q_3 }}} {\Sigma _{13}^{q\,LR} } } \\ {\frac{{ - 1}}{{m_{q_2 }}} {\Sigma _{21}^{q\,RL} } } & 0 & {\frac{1}{{m_{q_3 }}} {\Sigma _{23}^{q\,LR} } } \\ {\frac{{ - 1}}{{m_{q_3 }}} {\Sigma _{31}^{q\,RL} } } & {\frac{{ - 1}}{{m_{q_3 }}}{\Sigma _{32}^{q\,RL} } } & 0 \end{array}} \right) \label{DeltaU} \end{equation} In \eq{DeltaU} we have discarded small quark-mass ratios. Just as in the case of the mass renormalization, we choose a minimal renormalization scheme which complies with the use of the super-CKM basis (see Ref.~\cite{Crivellin:2008mq} for details). It is easily seen from \eq{DeltaU} that the corrections are antihermitian, which is in agreement with the required unitarity of the CKM matrix at one loop. Our corrections are independent of the renormalization scale $\mu$. The choice $\mu=M_{SUSY}$ avoids large logarithms in $\Sigma_{ij}^{q\,LR}$, so we evaluate the self-energies at this scale. This means we must also evaluate the quark masses appearing in \eq{DeltaU} at the scale $M_{SUSY}$. We can now obtain constraints on the off-diagonal elements of the squark mass matrices from \eq{physv} by applying a fine-tuning argument. Large accidental cancellations between the SM and supersymmetric contributions are, as already mentioned in the introduction, unlikely and from the theoretical point of view undesirable. Requiring the absence of such cancellations is a commonly used fine-tuning argument, which is also employed in standard FCNC analyses of the $\delta^{q \,XY} _{ij}$'s~\cite{Gabbiani:1996,Hagelin:1992,Ciuchini:1998ix,Borzumati:1999,Becirevic:2001,Silvestrini:2007,Ciuchini:2007cw}. Analogously, we assume that the corrections due to flavour-changing SQCD self-energies do not exceed the experimentally measured values of the CKM matrix elements quoted in the Particle Data Table (PDT)~\cite{Amsler:2008zzb}. To this end we set the tree-level CKM matrix $V^{(0)}$ equal to the unit matrix and generate the measured values radiatively. For $m_{\tilde q}=m_{\tilde g}=1000\,\mbox{GeV}$ we obtain the constraints quoted in Table~1.
\begin{table}[t] \caption{Comparison of our constraints on $\delta^{q \,XY} _{ij}$ with the constraints obtained from FCNC processes and vacuum stability bounds.} \vspace{0.6cm} \begin{center} \renewcommand{\arraystretch}{1.3} \begin{tabular}{|c|l|l l|l|} \hline quantity & our bound & \multicolumn{2}{|c|}{bound from FCNC's} & bound from VS~\cite{Casas:1995}\\ \hline $|\delta^{d\,LR}_{12} |$ & $\leq 0.0011$ & $\leq 0.006$ & $K$ mixing ~\cite{Ciuchini:1998ix} & $\leq 1.5\, \times \,10^{-4}$ \\ $|\delta^{d\,LR}_{13} |$ & $\leq 0.0010$ & $\leq 0.15$ & $B_d$ mixing ~\cite{Becirevic:2001} & $\leq 0.05$ \\ $|\delta^{d\,LR}_{23} |$ & $\leq 0.010$ & $\leq 0.06$ & $B\rightarrow X_s\gamma; X_s l^+l^-$ ~\cite{Silvestrini:2007} & $\leq 0.05$ \\ $|\delta^{d\,LL}_{13} |$ & $\leq 0.032$ & $\leq 0.5$ & $B_d$ mixing ~\cite{Becirevic:2001} & $-$ \\ $|\delta^{u\,LR}_{12} |$ & $\leq 0.011$ & $\leq 0.016$ & $D$ mixing ~\cite{Ciuchini:2007cw} & $\leq 1.2\, \times \,10^{-3}$ \\ $|\delta^{u\,LR}_{13} |$ & $\leq 0.062$ & \multicolumn{2}{|c|}{---} & $\leq 0.22$ \\ $|\delta^{u\,LR}_{23} |$ & $\leq 0.59$ & \multicolumn{2}{|c|}{---} & $\leq 0.22$ \\ \hline \end{tabular} \end{center} \end{table} Note that our constraints are all much stronger than the FCNC bounds. The FCNC bounds in addition decouple, i.e., they vanish like $1/a^2$ if all SUSY masses are scaled with $a$. The vacuum stability bounds are stronger than ours for the $\delta^{q\,LR}_{12}$ elements, and they are non-decoupling like our bounds. However, the analysis of Ref.~\cite{Casas:1995} only takes tree-level Yukawa couplings into account, and the small Yukawa couplings are modified by the very same loop effects which enter $\Delta U_{L,R}^q$ in \eq{DeltaU}. \section{The Model} The smallness of the Yukawa couplings of the first two generations suggests that these couplings are generated through radiative corrections~\cite{Weinberg:1972ws}. In the context of supersymmetric theories these loop-induced couplings arise from diagrams involving squarks and gluinos. Although the B factories have confirmed the CKM mechanism of flavour violation with very high precision, leaving little room for new sources of FCNCs, the possibility of a radiative generation of quark masses and of the CKM matrix remains valid, as proven in Section~2, even for SUSY masses well below 1 TeV, if the sources of flavour violation are the trilinear terms~\cite{Crivellin:2008mq}. Of course, the heaviness of the top quark requires a special treatment of $Y^t$, and the successful bottom-tau Yukawa unification suggests keeping tree-level Yukawa couplings for the third generation. At large $\tan \beta$, this idea gets even more support from the successful unification of the top and bottom Yukawa couplings, as suggested by some GUT models. Radiative Yukawa interactions from SUSY-breaking terms have been considered earlier in Refs.~\cite{Buchmuller:1982ye,Ferrandis:2004,Borzumati:1997bd}. In the modern language of Refs.~\cite{D'Ambrosio:2002ex,cg}, the global $[U(3)]^3$ flavour symmetry of the gauge sector (here we do not consider neutrinos) is broken to $[U(2)]^3 \times [U(1)]$ by the Yukawa couplings of the third generation. Here the three $U(2)$ factors correspond to rotations, in flavour space, of the left-handed doublets and the right-handed singlets of the first two generations of quarks, respectively.
This means we have \begin{equation} Y^{q} = \left( {\begin{array}{*{20}c} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & y^q \\ \end{array}} \right),\;\;\;V^{(0)} = \left( {\begin{array}{*{20}c} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array}} \right) \end{equation} in the tree-level Lagrangian. We next assume that the soft breaking terms ${\Delta _{ij}^{\tilde{q}\,LL} }$ and ${\Delta _{ij}^{\tilde{q}\,RR} }$ possess the same flavour symmetry as the Yukawa sector, which implies that ${\bf{M}}_{\tilde q}$, ${\bf{M}}_{\tilde d}$ and ${\bf{M}}_{\tilde u}$ are diagonal matrices with the first two entries being equal. For transitions involving the third generation the situation is different, because flavour violation can occur not only through a misalignment between $A^u$ and $A^d$ but also through a misalignment with the Yukawa matrix. So the elements $A^{u,d}_{j3}$ not only generate the CKM matrix at one loop, they also act as a source of non-minimal flavour violation and can thus be constrained by FCNC processes. This model has several advantages compared with the generic MSSM: \begin{itemize} \item Flavour universality holds for the first two generations. Thus our model is minimally flavour violating according to the definition of Ref.~\cite{D'Ambrosio:2002ex} with respect to the first two generations, since the quark and the squark mass matrices are diagonal in the same basis. This provides an explanation of the precise agreement between theory and experiment in K and D physics. \item The SUSY flavour problem is reduced to the quantities $\delta^{q\,RL}_{13,23}$. However, these flavour-changing elements are less constrained from FCNCs and might even explain a possible new CP phase indicated by recent data on $B_s$ mixing. \item The flavour symmetry of the Yukawa sector protects the quarks of the first two generations from a tree-level mass term. \item The model is economical: flavour violation and SUSY breaking have the same origin. Small quark masses and small off-diagonal CKM elements are explained by a loop suppression. \item The SUSY CP problem is substantially alleviated by an automatic phase alignment~\cite{Borzumati:1997bd}. In addition, the phase of $\mu$ does not enter the EDMs at the one-loop level, because the Yukawa couplings of the first two generations are zero. \end{itemize} \section{Conclusions} We have computed the renormalization of the CKM matrix by chirally-enhanced flavour-changing SQCD effects in the MSSM with generic flavour structure~\cite{Crivellin:2008mq}. We have derived upper bounds on the flavour-changing off-diagonal elements $\Delta _{ij}^{\tilde{q}\,XY}$ of the squark mass matrices by requiring that the supersymmetric corrections do not exceed the measured values of the CKM elements. For $M_{\rm SUSY}\geq 500\,\mbox{GeV}$ our constraints on \emph{all}\ elements $\Delta _{ij}^{\tilde{d}\,LR}$, $i<j$, are stronger than the constraints from FCNC processes. As an important consequence, we conclude that it is possible to generate the observed CKM elements entirely through finite supersymmetric loop diagrams~\cite{Buchmuller:1982ye,Ferrandis:2004} without violating present-day data on FCNC processes. In this scenario the Yukawa sector possesses a higher flavour symmetry than the trilinear SUSY-breaking terms. Additional applications to charged Higgs and chargino couplings are considered in Ref.~\cite{Crivellin:2008mq}. \section*{Acknowledgments} This work is supported by BMBF grant 05 HT6VKB and by the EU Contract No.~MRTN-CT-2006-035482, \lq\lq FLAVIAnet''.
I am grateful to the organizers for inviting me to this conference. I would like to thank Lars Hofer for reading the manuscript and for many useful discussions. I am grateful to Ulrich Nierste for the collaboration on the presented work~\cite{Crivellin:2008mq}. \section*{References}
\section{ \label{sec:introd} Introduction} Polarized lepton-nucleon deep inelastic scattering (DIS) has been studied in the last decades by several experiments which have measured spin asymmetries over a wide kinematic range~\cite{Ash89,Sti96}. These experiments have determined the spin structure functions of the proton and the neutron and have tested the related sum rules. When interpreted in the framework of the quark-parton model, the experimental results show that the quark spins account for only a rather small fraction of the nucleon spin, thus implying an appreciable contribution either of gluons or possibly of orbital angular momentum. These data also indicate a large positive contribution of \uq~quarks, a negative contribution of \dq~quarks, and, surprisingly, a small negative contribution of \sq~quarks to the proton spin~\cite{Ash89}. A general introduction to this subject can be found for example in~\cite{Ans95,Lea96}. Inclusive polarized DIS measurements do not allow one to distinguish the role of each individual partonic component. A further separation of the contributions of different constituents to the nucleon spin, like $\Delta {\sf s}$ or $\Delta G$, requires additional input from the study of semi-inclusive DIS, for which only limited data have been obtained so far~\cite{Ade96}. In these experiments, in addition to the scattered lepton, one also detects one or more hadrons produced in the interaction. For instance, the study of polarized open-charm lepto-production allows one to access the gluon polarization $\Delta G$ in a polarized nucleon~\cite{comp}. In this work we present {\tt POLDIS}, a program designed to simulate polarized DIS experiments, with particular emphasis on semi-inclusive DIS and Heavy Flavor lepto-production in the quasi-real photo-production limit ($Q^2 \rightarrow 0$). For these processes, {\tt POLDIS} generates the spin-dependent cross section asymmetries between parallel and antiparallel configurations of the incident lepton beam and target nucleon (or proton beam) polarizations, which are measured in these experiments. Since these asymmetries are ratios of cross sections, no absolute normalization is needed in their evaluation. The spin-dependent cross sections can be extracted from these spin asymmetries and the spin-independent cross section. In this program electromagnetic processes mediated by one-photon exchange are implemented. The present code can be used over a wide kinematical range where the effects of the weak interaction can be neglected, {\it i.e.} for $Q^2 \leq 100~{\rm GeV}^2 \ll M_{Z^0}^2$. The implementation of the polarization in {\tt POLDIS} can be summarized in the following steps: \begin{itemize} \item[1 --] generation of an unpolarized event, \item[2 --] calculation of the partonic level hard-scattering spin asymmetry for this event, \item[3 --] evaluation of the final spin asymmetry and of the spin-dependent cross sections. \end{itemize} The unpolarized event generation is performed with the {\tt LEPTO}~\cite{lepto} Monte Carlo and the {\tt AROMA}~\cite{aroma} code for Heavy Flavor (HF) production. The hadronization is based on the LUND string model, which is known to reproduce fairly accurately the final hadronic state in a variety of processes, and is performed with {\tt JETSET}~\cite{jetset}.
The hard-scattering spin asymmetries are calculated for each generated event to order $\alpha_s$\footnote{At present, the spin-dependent hard cross sections are calculated to order $\alpha_s$ only.} and are convoluted with the ratio between the corresponding polarized and unpolarized parton densities ({\it i.e.} the parton polarization). A {\it polarization asymmetry weight} is thus obtained, and the average of these {\it weights} over the generated sample gives the polarized cross section asymmetry. The spin-dependent cross sections can be obtained from this asymmetry and the spin-independent cross section. These calculations are performed in a set of subroutines to be linked with the existing unpolarized lepto-production event generators {\tt LEPTO}~\cite{lepto} for DIS and {\tt AROMA}~\cite{aroma} for HF production, and with {\tt JETSET}~\cite{jetset} for the hadronization. No modification of these programs is required. The unpolarized parton densities are obtained from the {\tt PDFLIB} library~\cite{pdflib}; various polarized parton densities can be selected from a collection of existing parametrizations provided with this program, although in a less standardized form. {\tt POLDIS} has been tested with the most recent versions of these programs ({\tt LEPTO 6.5}, {\tt AROMA 2.2}, and {\tt JETSET 7.4}); it is also backward compatible with older versions. A Monte Carlo code for polarized DIS with similar aims, {\tt PEPSI}, was presented a few years ago~\cite{pepsi}. Similar results could, in principle, be obtained with {\tt PEPSI}; however, no Heavy Flavor generation is included in that program. Moreover, in order to generate the spin asymmetries, which are indeed the measured quantities, it requires separate runs for opposite spin configurations in addition to a run without polarization. This results in a less convenient usage compared to that adopted in {\tt POLDIS}. Additionally, {\tt POLDIS} can also generate simultaneously different asymmetry values using different polarized parton densities. In the next Section we present the kinematics, formalism, and formulae for polarized DIS. The partonic level hard-scattering spin-independent and spin-dependent cross sections, calculated to first order in $\alpha_s$, are summarized in Appendices~A and~B. Section~3 describes the structure of the program and the implementation of the physics presented in Section~2. The usage of the program is explained in Section~4. We conclude in Section~5 with a comparison between the scattering asymmetries simulated by our program for some reactions and experimental data. \section{ \label{sec:asym} Polarized cross sections} \subsection{ \label{sec:kinema} Kinematics} Figure~\ref{fig:kinema} depicts a deep inelastic scattering (DIS) event. The four-vectors $k^\mu = (E,\vec{k})$ and $k^{\prime\mu} = (E^\prime,\vec{k}^\prime)$ represent the momenta of the incoming and scattered lepton, respectively, and $q^\mu = k^\mu - k^{\prime\mu}$ is the momentum transfer from the lepton to the hadron ($\gamma^\ast$ four-momentum). The target nucleon of mass $M$ has four-momentum $p^\mu$, and $p^\mu_i$ is the four-momentum of the $i^{th}$ hadron produced in the interaction. The interaction is usually described with the following variables: \begin{equation} \begin{array}{lclcl} Q^2 & = & - q^2 & \approx & 2 E E^\prime (1 - \cos \theta) \\ [6pt] \nu & = & p \cdot q / M & = & E - E^\prime \\ [6pt] y & = & p \cdot q / p \cdot k & = & \nu / E \\ [6pt] x & = & Q^2 / 2 p \cdot q & = & Q^2 / 2 M \nu \; .
\end{array} \label{eq:kinema} \end{equation} The right-hand side of each equation is valid only in the laboratory frame, where the target nucleon is at rest, and $\theta$ is the lepton scattering angle. The kinematics for inclusive scattering, integrated in azimuth, is completely described by two of the four variables given above, say $x$ and $Q^2$. When describing semi-inclusive scattering, three additional variables are needed for each measured hadron; a common choice for these variables is the energy fraction of the hadron with respect to the $\gamma^\ast$ energy \begin{equation} z_i = p_i \cdot p / p \cdot q = E_i / \nu \; , \label{eq:sikinema} \end{equation} the hadron transverse momentum $p_T$ with respect to the $\gamma^\ast$ direction, and the azimuthal angle between the scattered lepton and the outgoing hadron. \begin{figure} \vspace*{-10mm} \begin{center} \mbox{\epsfxsize=12cm\epsffile{kinema.eps}} \end{center} \vspace*{-5mm} \caption{Deep inelastic scattering event.} \label{fig:kinema} \end{figure} \subsection{ \label{sec:xsecasymm} Cross section asymmetries} In a polarized DIS experiment one measures the asymmetries \begin{equation} A_\parallel = \frac{d\sigma^{\uparrow\downarrow} - \, d\sigma^{\uparrow\uparrow}} {d\sigma^{\uparrow\downarrow} + \, d\sigma^{\uparrow\uparrow}} \; \; \; \; \; {\rm and} \; \; \; \; \; A_\perp = \frac{d\sigma^{\downarrow\rightarrow} - \, d\sigma^{\uparrow\rightarrow}} {d\sigma^{\downarrow\rightarrow} + \, d\sigma^{\uparrow\rightarrow}} \label{eq:asym1} \end{equation} for longitudinal and transverse configurations of the incident lepton and target polarizations. The spin orientations in Eq.~\ref{eq:asym1} refer to the laboratory frame, where the target nucleon is at rest. These asymmetries are directly related to the polarized structure functions $g_1$ and $g_2$. In this paper only the longitudinal asymmetry $A_\parallel$ is discussed (and included in {\tt POLDIS}). In the next pages, we will use the following notation for this asymmetry: $A_{LL} \equiv A_\parallel$. Usually the scattering asymmetry results are presented in terms of the virtual photon asymmetries $A_1$ and $A_2$ \begin{equation} A_1 = \frac{\sigma^T_{1/2}-\sigma^T_{3/2}}{\sigma^T_{1/2}+\sigma^T_{3/2}} \; \; \; \; \; \; {\rm and} \; \; \; \; \; \; A_2 = \frac{2 \sigma^{TL}}{\sigma^T_{1/2}+\sigma^T_{3/2}} \label{eq:asym2} \end{equation} where $\sigma^T_J$ is the virtual photon absorption cross section in a configuration with total angular momentum $J$ along the incident photon direction, and $\sigma^{TL}$ is the interference term between transverse and longitudinal virtual photon nucleon scattering. $A_1$ and $A_2$ are related to the measured asymmetry $A_{LL}$ by \begin{equation} A_{LL} = D \, (A_1 + \eta A_2) \label{eq:asym3} \end{equation} where \begin{equation} D \approx \frac{y(y-2)}{y^2 + 2(1-y)(1+R)} \; , \; \; \; \; \; \; \; \; \; \; \eta \approx \frac{2(1-y)}{y(2-y)} \frac{\sqrt{Q^2}}{E} \label{eq:depol} \end{equation} in the high energy limit (large $\nu$) and neglecting the incident lepton mass. $D$ is the depolarization factor of the virtual photon with respect to the incident lepton, and \begin{equation} R = \frac{\sigma^L}{\sigma^T} \label{eq:rpar} \end{equation} is the ratio between the unpolarized cross section for the longitudinal and transverse virtual photon components. The ratio $R = R(x,Q^2)$ can be obtained from the QCD analysis of unpolarized inclusive DIS data. 
In practice, however, one uses parametrizations of $R$ obtained directly from DIS experiments (see Section~\ref{sec:rpar}). The virtual photon asymmetries are bounded by the positivity relations \begin{equation} |A_1| \leq 1 \; \; \; \; \; \; {\rm and} \; \; \; \; \; \; |A_2(x)| \leq \sqrt{R(x)} \; . \label{eq:pos} \end{equation} Since also $\eta \ll 1$ in the kinematic range of most high energy experiments, the term proportional to $A_2$ can be neglected, and \begin{equation} A_1 \simeq \frac{A_{LL}}{D} \; . \label{eq:asym4} \end{equation} In {\tt POLDIS} both asymmetries, $A_{LL}$ and $A_1$, are generated. \subsection{ \label{sec:formulae} Partonic cross sections} Owing to factorization, the unpolarized (polarized) DIS cross section can be written as a convolution of the unpolarized (polarized) parton distribution function $F$ ($\Delta F$) with the partonic hard-scattering cross sections ${\rm d} {\hat \sigma}$ (${\rm d} \Delta {\hat \sigma}$): \begin{equation} {\rm d} \sigma^\lambda \sim F \otimes {\rm d} {\hat \sigma} \, + \lambda \, \Delta F \otimes {\rm d} \Delta {\hat \sigma} \; . \label{eq:cs1} \end{equation} Here $\lambda$ refers to the parallel $\uparrow\uparrow$ ($\lambda = + 1$) and antiparallel $\uparrow\downarrow$ ($\lambda = - 1$) spin configurations of the incoming lepton and target nucleon in the $\gamma^\ast - N$ c.m.~frame. The terms ${\rm d} {\hat \sigma}$ and ${\rm d} \Delta {\hat \sigma}$ are the spin-independent \begin{equation} {\rm d} {\hat \sigma} = \frac{1}{2} \, ({\rm d} {\hat \sigma}^{\uparrow\uparrow} + {\rm d} {\hat \sigma}^{\uparrow\downarrow}) \label{cs3} \end{equation} and spin-dependent \begin{equation} {\rm d} \Delta {\hat \sigma} = \frac{1}{2} \, ({\rm d} {\hat \sigma}^{\uparrow\uparrow} - {\rm d} {\hat \sigma}^{\uparrow\downarrow}) \label{cs4} \end{equation} parts of the partonic hard cross section. One also introduces the partonic asymmetry $\widehat{a}_{LL}$ for the hard-scattering process \begin{equation} \widehat{a}_{LL} = \frac{{\rm d} \Delta {\hat \sigma}}{{\rm d} {\hat \sigma}} \; . \label{cs5} \end{equation} The scattering asymmetry $A_{LL}$ for the reaction is obtained from \begin{equation} A_{LL} = \frac{\sum \int {\rm d} \Delta {\hat \sigma} \, \Delta F} {\sum \int {\rm d} {\hat \sigma} \, F} \label{eq:asym5} \end{equation} where the sum runs over the hard-scattering sub-processes calculated to first order in $\alpha_s$ (the corresponding Feynman diagrams for ${\rm d} {\hat \sigma}$ are shown in Fig.~\ref{fig:feydia}), and the integral extends over the accessible phase space. Using the partonic scattering asymmetry $\widehat{a}_{LL}$, the asymmetry in Eq.~\ref{eq:asym5} can be rewritten as \begin{equation} A_{LL} = \frac{\sum \int {\rm d} {\hat \sigma} F \,\, \widehat{a}_{LL} \,\, \Delta F / F} {\sum \int {\rm d} {\hat \sigma} F} \label{eq:asym6} \end{equation} In the Monte Carlo calculation, ${\rm d} {\hat \sigma} \, F$ can be viewed as the {\it cross section weight}, and $\widehat{a}_{LL} \,\Delta F / F$ as the {\it asymmetry weight} for the generated event. The integrals in Eq.~\ref{eq:asym6} are performed by summing these quantities for all events in the generated sample. \subsection{ \label{sec:formulaeb} Hard cross sections at order $\alpha_s$ in QCD} \begin{figure} \vspace*{-10mm} \begin{center} \mbox{\epsfxsize=16cm\epsffile{feydia.eps}} \end{center} \vspace*{-5mm} \caption{Lowest order Feynman diagrams for DIS: a) leading order, b) Compton, c) Photon-gluon fusion.} \label{fig:feydia} \end{figure} The leading order (L.O.)
parton level process is the virtual photo-absorption $\gamma^\ast + q \rightarrow q$ (Fig.~\ref{fig:feydia}a). At first order in QCD the gluon radiation (Compton diagram) $\gamma^\ast + q \rightarrow q+G$ (Fig.~\ref{fig:feydia}b) and the photon-gluon fusion (PGF) $\gamma^\ast + G \rightarrow q + \bar{q}$ (Fig.~\ref{fig:feydia}c) also contribute to the DIS cross section. Since for the latter two processes there are two partons in the final state, the first order matrix elements involve three new degrees of freedom in addition to the two variables, say $x$ and $Q^2$, needed for the L.O. DIS diagram (Fig.~\ref{fig:feydia}a). These three new degrees of freedom correspond to the energy, polar angle, and the azimuthal angle $\phi$ between the lepton and QCD scattering plane of one of the final partons (the other is fixed by the kinematics). A suitable choice for these new degrees of freedom is~\cite{Pec80}: \begin{equation} x_p = \frac{x}{\xi} \; , \; \; \; \; z_q = \frac{p \cdot p_q}{p \cdot q} \; , \; \; \; \; \phi = \frac{(\vec{p} \times \vec{l}) \cdot (\vec{p} \times \vec{p_q})} {|\vec{p} \times \vec{l}| |\vec{p} \times \vec{p_q}|} \label{eq:var} \end{equation} where $\xi$ is the momentum fraction of the incoming parton, and $p_q$ is the momentum of the final quark, and the cross sections are five-fold differential \begin{equation} \frac{{\rm d}^5 {\hat \sigma} (x, Q^2, x_p, z_q, \phi)} {{\rm d}x \, {\rm d}Q^2 \, {\rm d}x_p \, {\rm d}z_q \, {\rm d}\phi} \; . \label{eq:fivefold} \end{equation} In the virtual boson-parton c.m.~frame the unpolarized cross section ${\rm d} {\hat \sigma}$ can be decomposed as~\cite{Pec80}: \begin{equation} {\rm d} {\hat \sigma} = {\rm d} {\hat \sigma}_0 + \cos\phi \; {\rm d} {\hat \sigma}_1 + \cos2\phi \; {\rm d} {\hat \sigma}_2 \label{eq:dec1} \end{equation} (note ${\rm d} {\hat \sigma}_i = {\rm d} {\hat \sigma}_i (x, Q^2, x_p, z_q)$) and the polarized cross section ${\rm d} \Delta {\hat \sigma}$ as: \begin{equation} {\rm d} \Delta {\hat \sigma} = {\rm d} \Delta {\hat \sigma}_0 + \cos\phi \; {\rm d} \Delta {\hat \sigma}_1 \; . \label{eq:dec2} \end{equation} The $\cos 2 \phi$ term does not appear in Eq.~\ref{eq:dec2}, because it enters only in the cross section for the virtual photon longitudinal component, and therefore cancels in $\Delta {\hat \sigma}$. After integration over the azimuthal angle $\phi$, only the first term on the right-hand side of Eqs.~\ref{eq:dec1} and~\ref{eq:dec2} remains ({\it i.e.} ${\rm d} {\hat \sigma}_0$ and ${\rm d} \Delta {\hat \sigma}_0$). When studying HF production, the masses of the quarks must be taken into account. In this case the helicity does not coincide with the quark spin: $q_{\pm1/2} = q_{R/L} + O \left( m / \sqrt{\hat{s}} \right)$ (the subscript $\pm 1/2$ denotes the quark helicity, and $q_{R/L} = 1/2 (1 \pm \gamma_5) q$). The HF photon-gluon fusion spin asymmetry $\widehat{a}_{LL}$ reaches the value $-1$ of the massless case only in the asymptotic limit of very high energies, while at threshold $\widehat{a}_{LL} = +1$. In the HF lepto-production via the PGF the relevent scales of the process are set by the HF quark mass $m_Q$, and, therefore, this process can be studied also in the quasi-real photo-production limit of $Q^2 \rightarrow 0$. In Appendix~A we summarize these partonic cross sections calculated for one photon exchange following the cross section decomposition of Eqs.~\ref{eq:dec1} and~\ref{eq:dec2}. 
In Appendix~B we summarize the same cross sections expressed in terms of the Mandelstam variables ${\hat s}$, ${\hat t}$, and ${\hat u}$ integrated over the azimuthal angle $\phi$. The unpolarized cross sections have been derived in~\cite{Pec80} for the massless case and in~\cite{Sch88} for HF. The polarized ones were re-derived by us, extending also the results of~\cite{pepsi,Rat83,Wat82}. The partonic scattering asymmetry $\widehat{a}_{LL}$ is obtained from Eq.~\ref{cs5} by adding up the various terms of the cross sections in Eqs.~\ref{eq:dec1} and~\ref{eq:dec2}, which are summarized in Appendix~A. For instance, the L.O. scattering asymmetry is \begin{equation} \widehat{a}_{LL}^{\gamma^\ast q \rightarrow q} \, = \, \frac{1-(1-y)^2}{1+(1-y)^2} \; . \label{eq:all} \end{equation} For the other Feynman diagrams shown in Fig.~\ref{fig:feydia}, the partonic asymmetries result in much more complicated expressions. In Figure~\ref{fig:asymm} we plot the partonic scattering asymmetries $\widehat{a}_{LL}$ of order $\alpha_s$ as a function of the c.m. scattering angle $\vartheta^\ast$ between the incoming and outgoing partons for various values of the $\gamma^\ast$ momentum transfer $Q^2$ and of the c.m. energy $\hat{s}$. To be noted the $Q^2$ dependence of these asymmetries. The angle $\vartheta^\ast$ is given by \begin{equation} \cos \vartheta^\ast = 1 - 2 z_q \; . \label{eq:cos0} \end{equation} \begin{figure} \vspace*{-10mm} \begin{center} \mbox{\epsfxsize=16cm\epsffile{asymm.eps}} \end{center} \vspace*{-5mm} \caption{The scattering asymmetry $\widehat{a}_{LL}$ for $\gamma^\ast + q \rightarrow q + G$, $\gamma^\ast + G \rightarrow q + \bar{q}$, and $\gamma^\ast + G \rightarrow Q + \bar{Q}$ (HF) as a function of the c.m. scattering angle $\vartheta^\ast$ at fixed $y = 0.7$ for different values of $Q^2$ and for two values of the c.m. energy $\hat{s}$.} \label{fig:asymm} \end{figure} \section{ \label{sec:MC} Structure of the program} {\tt POLDIS} consists of a set of subroutines for the handling of the polarization, which are linked with the unpolarized lepto-production event generetor {\tt LEPTO}~\cite{lepto} or {\tt AROMA}~\cite{aroma}. The hadronization is performed with {\tt JETSET}~\cite{jetset} using the LUND string model. No modifications to these event generators is required. The {\tt PDFLIB} library~\cite{pdflib} is used as a source for the unpolarized parton distribution functions. We assume familiarity with these programs. Various polarized parton distribution functions, obtained from their authors, are also included, although in a less standardized form. The general structure of {\tt POLDIS} is similar to a typical Monte Carlo program using {\tt LEPTO} or {\tt AROMA} with the addition of calls to some subroutines for the polarization calculations. {\tt POLDIS} produces in output, in addition to the standard {\tt LEPTO} output, the polarized scattering asymmetries $A_{LL}$ and $A_1$, and the spin-dependent cross sections. {\tt POLDIS} (as {\tt LEPTO}) is a {\it slave} program, in the sense that the main {\it steering} code for the administration of the event generation and the subsequent analysis of these events, has to be provided by the user. The various relevant {\tt POLDIS} program components are summarized in Tab.~\ref{tab:comp}. Most of names start with POL. These are the only components that may be accessed by the user. The program settings and parameters are listed in Tab.~\ref{tab:param}. 
These parameters contains the values of different spin asymmetries, which need to be accessed by the user. \begin{table} \begin{center} \begin{tabular}{|l|p{12cm}|} \hline POLINI (S) & initializes the (un)polarized parton distribution functions \\ POLASYM (S) & calculates the {\it polarization weight} for each generated event with calls to the functions for the calculation of the spin asymmetries calls to the subroutines for the extraction of unpolarized and polarized parton densities, and the calculation of $R$ \\ POLSTR (S) & returns the values of the polarized parton densities at given $x$ and $Q^2$ (PDG flavor code convention) \\ POLINTL (S) & contains the internal set of polarized parton densities \\ POLEND (S) & gives the spin-dependent cross sections (Monte Carlo estimate) \\ ALLQ (F) & calculates $\widehat{a}_{LL}$ for $\gamma + q \rightarrow q$ \\ ALLQG (F) & calculates $\widehat{a}_{LL}$ for $\gamma + q \rightarrow q + G$ \\ ALLQQ (F) & calculates $\widehat{a}_{LL}$ for $\gamma + G \rightarrow q + \bar{q}$ \\ ALLQQHF (F) & calculates $\widehat{a}_{LL}$ for $\gamma + G \rightarrow Q + \bar{Q}$ \\ RPAR (F) & gives the value of $R$ at given $x$ and $Q^2$ \\ POLDISU (C) & contains the {\tt POLDIS} settings and parameters which include also the asymmetry values \\ \hline \end{tabular} \end{center} \caption{Relevant {\tt POLDIS} program components: subroutines~(S), functions~(F), and common blocks~(C).} \label{tab:comp} \end{table} \begin{itemize} \item At the {\bf initialization stage} standard {\tt LEPTO} and/or {\tt AROMA} parameters and switches are selected. Additionally, a parametrization for the polarized distribution functions and for $R = \sigma^L / \sigma^T $ are chosen, and a kinematical interval is defined for the {\it simulation}. In the subroutine {\bf polini} the selected polarized distribution functions are read from the corresponding ASCII file(s). Immediately after that the unpolarized event generator is initialized with a call to {\bf linit} ({\tt LEPTO}) or {\bf arinit} ({\tt AROMA}). \item In the {\bf event loop} unpolarized events are generated with calls to {\bf lepto} (or {\bf aroma}). The asymmetry is calculated for each event in the subroutine {\bf polasym}, and the asymmetry results are stored in the common block {\bf /poldisu/}. At this point the user can perform additional analysis on the generated event: for instance, he can select a binning {\it e.g.} in $x$ or $y$ for the asymmetries $A_{LL}$ and $A_1$, or study semi-inclusive asymmetries by requiring in the event a $\pi^+$ with $z > 0.2$. \item In the {\bf ending stage} the spin-dependent cross sections are estimated in the subroutine {\bf polend}. The relevant results are printed in the form of a table. 
\end{itemize} \begin{table} \begin{center} \begin{tabular}{|l|l|} \hline POLLST(1) & polarized parton distribution function \\ POLLST(2,3) & unpolarized parton distribution function (in {\tt PDFLIB} format) \\ POLLST(4) & parametrization of $R$ \\ POLLST(5-9) & unused at present \\ POLLST(10) & number of generated events used in the asymmetry calculation \\ \hline POLPAR(1) & $\widehat{a}_{LL}$ for current event \\ POLPAR(2) & $R$ ($\sigma^L / \sigma^T$) for current event \\ POLPAR(3) & $D$ (depolarization) for current event \\ POLPAR(4-6) & $\Delta F / F$ for current event for the subsets \\ POLPAR(11-13) & $A_{LL}$ for current event \\ POLPAR(14-16) & $A_1$ for current event \\ POLPAR(17-19) & $A_{LL}$ for the generated sample \\ POLPAR(20-22) & $A_1$ for the generated sample \\ POLPAR(23-28) & unused at present \\ POLPAR(29-31) & $\sigma^{\uparrow\uparrow}$ in pb -- Monte Carlo estimate associated with generated event sample \\ POLPAR(32-34) & $\sigma^{\uparrow\downarrow}$ in pb \\ POLPAR(35-40) & unused at present \\ \hline \end{tabular} \end{center} \caption{{\tt POLDIS} parameters in common block {\bf /poldisu/}: POLLST is an array of integers, and POLPAR an array of double precision real numbers.} \label{tab:param} \end{table} \subsection{ \label{sec:asymmcalc} Asymmetry evaluation} The scattering asymmetry $\widehat{a}_{LL}$ is calculated for each generated event according to the underlying sub-process and the kinematic variables given by the unpolarized event generator. The unpolarized ($F$) parton densities are evaluated at the given $x$ and $Q^2$. The polarized ($\Delta F$) parton densities are also evaluated at the given $x$ and $Q^2$ for 2 or 3 subsets of the selected polarized parton distribution functions set (see Section~\ref{sec:polpdf}), and the parton polarizations, $\Delta F / F$, are thus obtained. The {\it polarization weight} \begin{equation} \widehat{w}_{LL} = \widehat{a}_{LL} \times \frac{\Delta F}{F} \end{equation} is finally calulated for all subsets. The values of the asymmetries $A_{LL}$ and $A_1$ are updated event by event: \begin{equation} A_{LL} = \frac{N-1}{N} A_{LL} + \frac{1}{N} \, \widehat{w}_{LL} \; \; \; \; \; {\rm and} \; \; \; \; \; A_1 = \frac{N-1}{N} A_1 + \frac{1}{N} \frac{\widehat{w}_{LL}}{D} \; . \end{equation} $N$ is the number of events generated so far, and $D$ is the virtual photon depolarization (Eq.~\ref{eq:depol}). The values of the asymmetries corresponding to different $\Delta F$'s from the same subset are stored in the common block {\bf /poldisu/}. The statistical accuracy on the asymmetries depends on the number of generated events $N$ and goes as $1 / \sqrt{N}$. To study, for instance, the $x$ and/or $Q^2$ behavior of the asymmetry, the values of $A_{LL}$ and $A_1$ can be binned as a function of $x$ and/or $Q^2$. 
The spin-dependent cross sections are obtained from the unpolarized cross section $\sigma^0$ and the scattering asymmetry $A_{LL}$: \begin{eqnarray} \sigma^{\uparrow \uparrow} & = & \frac{1}{2} \, \sigma^0 \, (1 + A_{LL}) \\ \sigma^{\uparrow \downarrow} & = & \frac{1}{2} \, \sigma^0 \, (1 - A_{LL}) \end{eqnarray} \subsection{ \label{sec:polpdf} Polarized parton distribution functions} The following polarized parton distribution functions for the proton are presently included (in parenthesis are shown the corresponding unpolarized parton densities, which should be used for consistency and are automatically selected, and the $x / Q^2$ range of validity of the parametrization; the notation adopted is that of {\tt PDFLIB}): \begin{itemize} \item[1 ] GS-95 (unpolarized: DO 1.1; range: $x > 10^{-5}$, $4 < Q^2 < 4.5 \times 10^5~{\rm GeV}^2$)~\cite{GS95} \item[2 ] GS-96LO (unpolarized: GRV-94LO; range: $x > 10^{-5}$, $1 < Q^2 < 10^6~{\rm GeV}^2$)~\cite{GS96} \item[3 ] GS-96NLO (unpolarized: MRSA$^\prime$; range: $x > 10^{-5}$, $1 < Q^2 < 10^6~{\rm GeV}^2$)~\cite{GS96} \item[4 ] GRSV-96LO (unpolarized GRV-94LO; range: $x > 10^{-4}$, $0.4 < Q^2 < 10^4~{\rm GeV}^2$)~\cite{GRSV} \item[5 ] GRSV-96NLO (unpolarized GRV-94HOMS; range: $x > 10^{-4}$, $0.4 < Q^2 < 10^4~{\rm GeV}^2$)~\cite{GRSV} \end{itemize} Typically, each set of polarized parton densities contain two or three different parametrizations, obtained from the same analysis. The GS polarized parton distribution functions contain three subsets, referred as {\it set~A}, {\it set~B}, and {\it set~C}. The GRSV polarized parton distribution functions contain two subsets, referred as {\it standard} and {\it valence scenario}. In {\tt POLDIS} all subsets are used simultaneously, thus giving an output with two or three values for the spin asymmetries $A_{LL}$ and $A_1$ corresponding to the used subsets. The parton distribution functions are stored on a $x / Q^2$ grid for each parton component and stored in ASCII files. These polarized parton distribution functions have been obtained from the corresponding authors~\cite{GS95,GS96,GRSV}. An internal set of polarized distribution functions is also included, mainly for debugging and apparatus studies. Different parametrizations can be implemented by the user by simply editing the subroutine {\bf polintl}, which contains this internal set. This internal set can be also used, for instance, for evaluating the effects of a large negative sea polarization $\Delta {\sf s} < 0$ on the scattering asymmetry for a particular channel. It is assumed that these polarized parton densities scale as the corresponding unpolarized distribution functions (no dynamical generation of the sea and gluon polarization is performed). For instance, the following parametrization, based on the SU(6) spin structure of the proton combined with a {\it soft} gluon can be used (and it is included): $\Delta {\sf u_v} = {\sf u _v} - \frac{2}{3}{\sf d _v}, \, \Delta {\sf d _v} = - \frac{1}{3}{\sf d _v}, \, \Delta {\sf q_s} = 0, \, \Delta G = x G $ For {\it complex} targets ({\it i.e.} containing several protons and neutrons) full isospin symmetry is assumed between protons and neutrons: $\Delta {\sf u_v^p} = \Delta {\sf d_v^n}$, $\Delta G^{\sf p} = \Delta G^{\sf n}$, etc., and possible nuclear effects are neglected. 
The selection of a polarized parton distribution function set is performed by setting the corresponding switch to the desired value: 1 to 5 for the polarized parton densities listed above, 0 for the internal set, and $-1$ for no polarization. At the initialization stage in subroutine {\bf polini}, in addition to the polarized parton density, the corresponding unpolarized one is also selected. Our default polarized parton distribution functions set is the GS-96LO, combined with the unpolarized ones of GRV-94LO. \subsection{ \label{sec:rpar} Parametrizations of R} In most polarized DIS experiments the virtual photon asymmetry $A_1$ is obtained from the measured asymmetry $A_{LL}$ (Eq.~\ref{eq:asym4}, using a parametrization of $R = \sigma^L / \sigma^T$ determined from unpolarized DIS data. A similar approach is also adopted in {\tt POLDIS}. The following parametrizations are included (in parenthesis is shown the kinematic range over which $R$ was estimated, and corresponds also to the region, where the parametrization can be used safely): \begin{itemize} \item[1 ] SLAC (range: $x > 0.03$, $Q^2 > 0.35~{\rm GeV}^2$)~\cite{slac} \item[2 ] NMC-97 (range: $x > 0.003$, $Q^2 > 0.30~{\rm GeV}^2$)~\cite{nmc97} \item[3 ] BKS-97 (range: $4 \times 10^{-5} < x < 0.1$, $0.01 < Q^2 < 360~{\rm GeV}^2$)~\cite{bks97} \end{itemize} The selection of the desired parametrization of $R$ is obtained by setting the corresponding switch to the appropriate value: 1 to 3 for the parametrizations listed above, and $0$ for $R = 0$. Our default is the NMC-97 parametrization. \subsection{ \label{sec:comp} {\tt POLDIS} parameters} The common block {\bf /poldisu/} contains the program settings and parameters as illustrtated in Tab.~\ref{tab:param}. \begin{center} COMMON / {\bf POLDISU} / POLLST(10), POLPAR(40) \end{center} In this common block different asymmetry values are stored. For each spin asymmetry and spin-dependent cross section there are three different values, which correspond to the three subsets of the selected polarized parton distribution functions set. \subsection{ \label{sec:leptoint} Interface with {\tt LEPTO}} The relevant kinematics, program parameters, settings, and switches, used also for the asymmetry calculations, are stored in the {\tt LEPTO} common block {\bf /leptou/}: \begin{center} COMMON / {\bf LEPTOU} / CUT(14), LST(40), PARL(30), X, Y, W2, Q2, U \end{center} The correspondence between the kinematical variables stored in the common block {\bf /leptou/} to the ones discussed in the previous pages is: \begin{center} ${\rm X} \equiv x$, ${\rm Q2} \equiv Q^2$, ${\rm Y} \equiv y$, ${\rm U} \equiv \nu$, and \\ PARL(28) = $x_p$, PARL(29) = $z_q$, PARL(30) = $\phi$. \end{center} Additionally, the following {\tt LEPTO} switches are used for the asymmetry evaluation: \noindent LST(22) specifies the struck nucleon: $1=$ proton, $2=$ neutron. \noindent LST(24) specifies the hard-scattering sub-process: $1=q$, $2=qG$, $3=q\bar{q}$, $5=Q\bar{Q}-$event (HF). \noindent LST(25) specifies the struck quark: $1={\sf d}$, $2={\sf u}$, $3={\sf s}$, $-1={\sf \bar{d}}$, $-2={\sf \bar{u}}$, $-3={\sf \bar{s}}$. The unpolarized cross sections measured in pb are stored in PARL(23) (numerical integration at the initialization stage) and in PARL(24) (Monte Carlo estimate). The kinematics of the interaction and of all produced particles is stored in the {\tt JETSET} common block {\bf /lujets/}. 
\section{ \label{sec:program} How to run {\tt POLDIS}} \begin{figure} \begin{center} \mbox{\epsfxsize=17cm\epsffile{incl.eps}} \end{center} \vspace*{-10mm} \caption{a) {\it Simulated} inclusive asymmetry $A_1^{\sf p}$ compared to SMC data from polarized protons~\protect\cite{smcp93} using the GS-96LO {\it set~A} (full line) and the GRVS-96LO {\it standard scen.} (dashed line) polarized parton densities and the NMC-97 parametrization of $R$. b) Same as (a) for $A_1^{\sf d}$ compared to SMC data from a polarized deuteron target~\protect\cite{smcd95}.} \label{fig:inc} \end{figure} In addition to the standard {\tt LEPTO} (and {\tt AROMA}) input parameters and switches, such as the beam energy, target material, etc. (we assume the user to be familiar with them), two additional input switches are required (see previous Section): \noindent POLLST(1) = 0 to 5 for the polarized parton densities, and \noindent POLLST(4) = 0 to 3 for the parametrization of $R$. As already mentioned above, the user must provide a {\it steering} code for the administration of the event generation and analysis. Before the initialization of {\tt POLDIS} and {\tt LEPTO} the relevant parameters, switches, etc. must be set to the corresponding values. At the end of the event generation loop POLPAR contains various asymmetry values and the spin-dependent cross sections associated with the subsets of the selected polarized parton density set. \section{ \label{sec:test} Results of test runs} \begin{figure} \begin{center} \mbox{\epsfxsize=17cm\epsffile{semip.eps}} \end{center} \vspace*{-10mm} \caption{{\it Simulated} semi-inclusive asymmetry $A_{1, +}^{\sf p}$ on polarized protons (a) and $A_{1, +}^{\sf d}$ on polarized deuterons (b) for positive hadrons with $z > 0.2$, compared to SMC experimental data~\protect\cite{Ade96} (GS-96LO {\it set~A} full line, and GRVS-96LO {\it standard scen.} dashed line).} \label{fig:sincp} \end{figure} The inclusive asymmetry $A_1$, obtained with the GS-96LO {\it set~A} and the GRVS-96LO {\it standard scen.} polarized parton densities and the NMC-97 parametrization of $R$, is compared in Fig.~\ref{fig:inc} to the SMC data from polarized proton~\cite{smcp93} and deuteron~\cite{smcd95} targets. A fairly accurate agreement between the {\it simulated} and the real data can be observed in these plots. It has to be noted, however, that polarized DIS data were used for the evaluation of the polarized parton distribution functions used here, and therefore this agreement can not be considered as a meaningful physics result. On the other hand, such an agreement shows the validity of the procedure adopted in the simulation and the correctness of the calculations. Figures~\ref{fig:sincp} and~\ref{fig:sincn} show the semi-inclusive asymmetries for positive and negative hadrons generated with {\tt POLDIS}, respectively, compared to the SMC data~\cite{Ade96} from polarized protons and deuterons. Also for this {\it simulation} we used the GS-96LO {\it set~A} and the GRVS-96LO {\it standard scen.} polarized parton densities and the NMC-97 parametrization of $R$. In the Monte Carlo simulation a separation between charged pions and kaons can also be made. 
\begin{figure} \begin{center} \mbox{\epsfxsize=17cm\epsffile{semin.eps}} \end{center} \vspace*{-10mm} \caption{{\it Simulated} semi-inclusive asymmetry $A_{1, -}^{\sf p}$ on polarized protons (a) and $A_{1, -}^{\sf d}$ on polarized deuterons (b) for negative hadrons with $z > 0.2$ (GRVS-96LO {\it standard} AND NMC-97), compared to SMC experimental data~\protect\cite{Ade96} (GS-96LO {\it set~A} full line, and GRVS-96LO {\it standard scen.} dashed line).} \label{fig:sincn} \end{figure} \section*{ \label{sec:ack} Acknowledgments} We would like to acknowledge G. Ingelman for discussions on the LUND event generators used for this work, and G.K.~Mallot for useful discussions on DIS. We would like to thank T.~Gehrmann and V.~Vogelsang for providing us with their polarized parton distribution functions (GS-95LO, GS-96LO, GS-96NLO, and GRSV-96LO, GRSV-96NLO, respectively), and B.~Badelek for providing us with the BKS-97 parametrization of $R$. We would like also to thank E.~Rondio for using the preliminary versions of this program. This work is partially supported by KBN SPUB/P03/114/96.
1,108,101,566,101
arxiv
\section{Introduction} In the last years there has been vast progress in understanding four--dimensional F-theory compactifications on elliptically fibered Calabi-Yau fourfolds. These compactifications can admit non-Abelian gauge groups which arise from stacks of 7-branes. Geometrically this corresponds to a degeneration of the elliptic fiber over divisors in the base wrapped by the seven-branes \cite{Denef:2008wq,Weigand:2010wm}. At the intersection of two such divisors matter fields are localized. The presence of a 7-brane flux can lift part of the matter spectrum such that a net number of chiral fields remain in the $\mathcal{N}=1$ low energy effective supergravity theory. The aim of this paper is to determine formulas for the net number of such fields depended on the global geometric data of resolved Calabi-Yau fourfolds and the specification of four-form fluxes. To justify the proposed chirality formulas we will exploit the duality between M-theory and F-theory, including one-loop corrections to the Chern-Simons terms in the effective theories obtained from M-theory. A first approach in the derivation of a chirality formula for the charged matter fields along the intersection of two 7-branes was to use the local data of the geometry and gauge bundle \cite{Donagi:2008ca, Beasley:2008dc, Hayashi:2008ba}. The required data included the classes of the matter curves and the local two-form flux components on the 7-branes. For these consideration the input mainly came from two directions. In \cite{Donagi:2008ca,Hayashi:2008ba} a spectral cover construction, more familiar from the heterotic string \cite{Friedman:1997yq}, has been used. In contrast, the formulas in \cite{Beasley:2008dc} intensively use the local 7-brane gauge theory and more closely resemble the Type IIB analogs known from a weak coupling picture with D-branes. Despite these successes it is in general hard to obtain a global picture and develop the tools to study more complicated 7-brane configurations. In addition to the construction of compact Calabi-Yau fourfolds a complete global treatment has also to capture the flux data, which requires a deep understanding of the local singular geometry near the 7-branes and its global embedding. An analysis of the global geometry is particularly crucial when additional $U(1)$ symmetries are present \cite{Hayashi:2010zp,Grimm:2010ez}. Extensions of the spectral cover techniques to global constructions have been proposed in \cite{Marsano:2009gv,Blumenhagen:2009yv,Grimm:2009yu,Marsano:2010ix,Marsano:2011hv}. In particular, in \cite{Blumenhagen:2009yv,Grimm:2009yu,Marsano:2011hv} consistency checks have been given to argue for the applicability of the spectral cover techniques to specific compact settings. In the present work we will take a different route, since we will specify the flux data directly on the resolved Calabi-Yau fourfolds with no reference to a spectral cover construction. Let us summarize our general strategy to derive chirality formulas in F-theory. Firstly, note that to study F-theory compactifications one inevitably has to address the fact that non-Abelian gauge groups arise from singular Calabi-Yau geometries $X_4$ for which the geometrical data determining the spectrum and couplings cannot be determined directly. However, a natural way to deal with the singularities is to perform a resolution of the singularities and work with the fully resolved Calabi-Yau fourfold $\tilde X_4$. 
Such resolutions can be performed using toric methods as in \cite{Candelas:1996su,Candelas:1997eh,Blumenhagen:2009yv, Grimm:2009yu,Cvetic:2010rq,Chen:2010ts,Krause:2011xj,Braun:2011ux}, or stepwise as recently shown in \cite{Esole:2011sm,Marsano:2011hv}. The physics induced by the resolution process can be addressed in the M-theory description of F-theory \cite{Denef:2008wq,Weigand:2010wm}. M-theory compactification on an resolved elliptically fibered Calabi-Yau fourfold yields a specific three-dimensional effective theory with a couplings which can arise from a four-dimensional $\mathcal{N}=1$ supergravity compactified on a circle \cite{Grimm:2010ks}. For the resolved Calabi-Yau fourfolds the theory will be in the Coulomb branch of the three-dimensional gauge theory. In the M-theory picture of F-theory the 7-brane fluxes correspond to four-form fluxes $G_4$ of the field-strength of M-theory three-form potential. Not all $G_4$ fluxes will lift to a four-dimensional F-theory compactification. Crucially one has to impose that the allowed fluxes preserve four-dimensional Poincar\'e invariance in the F-theory limit \cite{Dasgupta:1999ss}. Further restriction are imposed by demanding the existence of an unbroken four-dimensional gauge theory. It was argued in \cite{Marsano:2011hv} that there are $G_4$ fluxes that satisfy these conditions, and reproduce the four-dimensional chirality formulas known from the spectral cover construction of an $SU(5)$ model. The allowed $G_4$ fluxes crucially involve the wedges of two-forms Poincar\'e-dual to the exceptional resolution divisors. In this work we will give a physical interpretation of this fact. To link the $G_4$ flux with the four-dimensional chirality it is crucial to point out that the M-theory reduction on a Calabi-Yau fourfold with $G_4$ flux induces terms in the three-dimensional gauge theory which are not obtained by a classical circle reduction of a general four-dimensional gauge theory. This can be inferred from the explicit reduction of \cite{Haack:2001jz,Grimm:2010ks}. In fact, the M-theory reduction will induce Chern-Simons terms for the $U(1)$ gauge-fields in the Coulomb branch. We note that in the reduction of the four-dimensional theory such terms must arise from one-loop corrections with charged fermions running in the loop. This links the charged matter spectrum of the four-dimensional theory with the $G_4$-flux corrections of the M-theory reduction. We will show that this provides us with an interpretation how the chiral matter spectrum can be determined from the flux data. If the flux indeed encodes the net number of chiral fermions on the intersection curves of two 7-branes, the chiral index has to be of the form $\chi({\bf R}) = \int_{S_{\bf R}} G_4$, as already anticipated in \cite{Donagi:2008ca,Hayashi:2008ba}, and studied recently in \cite{Braun:2011zm,Marsano:2011hv,Krause:2011xj}. Here ${\bf R}$ is the representation of the four-dimensional gauge group in which the fermions transform. The intersection curve of the two 7-brane in the base of $\tilde X_4$ will be called matter curve $\Sigma_{\bf R}$ if matter fields in the representation $\bf R$ are located along this curve. The difficulty in evaluating the expression for $\chi({\bf R})$ is to give a global and universal definition of the surface $S_{\bf R}$. Using heterotic/F-theory duality one expects that $S_{\bf R}$ is obtained by fibering the resolution $\mathbb{P}^1$'s over the matter curve. 
In fact, in the M-theory picture the charged matter fields arise from M2-branes wrapping the $\mathbb{P}^1$-fibers of the resolved geometry. The group theory matches this geometric interpretation since the resolution $\mathbb{P}^1$'s over the matter curves can be associated to the weights of the representation~${\bf R}$ \cite{Intriligator:1997pq,Katz:1997eq}. These states are massive on the resolved space $\tilde X_4$ and become massless in the singular F-theory limit. To construct the matter surfaces for a given resolved Calabi-Yau fourfold we propose to exploit the data encoded by the cone of effective curves, i.e.~the Mori cone. It will be crucial to select a subcone of the full Mori cone, the relative Mori cone, consisting of curves in $\tilde X_4$ which shrink when going to the singular space $X_4$. This cone will be completed into the extended relative Mori cone by including other effective curves in the elliptic fiber, which intersect the exceptional resolution divisors. In simple cases this simply amounts to including the pinched elliptic fiber over the 7-brane. We will argue that the intersection of these curves with the exceptional divisors allows us to identify a pairing between generators of the extended relative Mori cone and weights of the four-dimensional gauge group. The exceptional divisors correspond to the simple roots of the gauge group. The identification of roots and weights with the geometric data has been proposed for local Calabi-Yau threefolds in \cite{Intriligator:1997pq,Katz:1997eq,Marsano:2011hv}. Note that a detailed analysis of which weights correspond to the effective curves in the extended relative Mori cone also allows us to stepwise reconstruct the resolution process along the co-dimension two and three singularity loci in the base of $X_4$. In this process, we make two assumptions. One is that the representation which can appear along the co-dimension two singularity loci are the same as the one of the matter fields localized along the curve. The second is that the degeneration of weights at the co-dimension three singularity points obeys the algebra $G_p$ when the singularity is enhanced to a type $G_p$. These assumptions are exploited already in \cite{Katz:1996xe,Donagi:2008ca,Beasley:2008dc,Hayashi:2009ge} and have been studied for compact settings in \cite{Marsano:2011hv, Krause:2011xj}. With these assumptions and the extended relative Mori cone at hand, we can generally determine the resolution process along the singularity loci. In this work we also include that case where additional geometrically massless $U(1)$ gauge fields are in the four-dimensional spectrum of the F-theory compactification. The methods to determine the resolution structure using the extended Mori cone naturally generalize to this situations, and one is able to explicitly construct the matter surfaces $S_{\bf R}$ also if distinguishing $U(1)$-charges of the representation $\bf R$ are present. However, one can generalize the Ansatz for the $G_4$ flux if one permits a gauging of the four-dimensional $U(1)$-symmetries. Such extra fluxes render the $U(1)$ massive, but allow to keep its global selection rules. An explicit example how such extra $U(1)$'s can be consistently induced in a Calabi-Yau fourfold compactification was given in \cite{Grimm:2010ez}, and termed $U(1)$-restricted Tate model. The construction of fluxes in this model have been recently given in \cite{Braun:2011zm,Krause:2011xj}. 
In reference \cite{Braun:2011zm} a direct link to the chirality formulas for D7-branes and O7-planes was established. For $SU(5)$ models and their extensions it was shown in \cite{Krause:2011xj} that the chirality formula can be evaluated using the ambient fivefold geometry in which the Calabi-Yau fourfold is embedded. These techniques also allowed to reduce the $G_4$ fluxes, using the ambient fivefold, to a two-form flux on the base $\mathcal{B}$, and reproducing the correct group theoretical factors as required for a valid chirality formula. In our formalism this detour is not required, and the $U(1)$ case appears as natural part of a more general construction. To illustrate the derivation of the net chiralities we will consider two explicit examples of hypersurfaces in toric ambient spaces. The gauge theory will be $SU(5)$ and $SU(5) \times U(1)$ and we perform an explicit resolution of all co-dimension singularities as in \cite{Blumenhagen:2009yv,Grimm:2009yu,Chen:2010ts,Grimm:2010ez,Krause:2011xj} by modifying the toric ambient space. We compute the net chiralities induced by a general $G_4$ flux compatible with four-dimensional Poincar\'e invariance and the preservation of the $SU(5)$ gauge symmetry in both cases. Our results are compared to the spectral cover and split spectral cover constructions \cite{Tatar:2009jk,Marsano:2009gv,Blumenhagen:2009yv}, and we find match of the chirality formulas for matter being localized near the $SU(5)$-brane as expected. \section{F-theory chirality and three-dimensional Chern-Simons theories} In this section we give a derivation of the F-theory chirality formulas by using one-loop corrections in a dual three-dimensional Chern-Simons theory. More precisely, we will exploit the description of F-theory via M-theory to show that a four-dimensional chiral spectrum can be induced by a special class of $G_4$-form fluxes on a resolved Calabi-Yau fourfold $\tilde X_4$. This will lead to a derivation of formulas of the form \begin{equation} \label{eq:chirality2} \chi({\bf R}) = n_{\bf R} - n_{\bf R^*} = \int_{S_{\bf R}} G_4 \ , \end{equation} where $S_{\bf R}$ is a four-cycle in $\tilde X_4$. Here we have denoted by $\chi({\bf R})$ the chiral index of $n_{\bf R}$ matter fields in the representation ${\bf R}$ minus $n_{\bf R^*}$ matter fields in the representation ${\bf R^*}$. In order to interpret chirality formulas involving $G_4$ we first have to summarize the properties of a fully resolved Calabi-Yau fourfold $\tilde X_4$ in section \ref{resolving-4folds}. In the compactification of M-theory on $\tilde X_4$ one can allow for $G_4$ fluxes in the reduction. We describe the M-theory and F-theory constraints on these fluxes in section \ref{introducingG4}. It is argued in section \ref{3dCS} that a certain class of M-theory fluxes induces Chern-Simons couplings in the three-dimensional effective theory. The matching these M-theory couplings with one-loop corrections of an F-theory setup compactified on a circle leads to chirality formulas of the form \eqref{eq:chirality2}. For completeness we establish a similar analysis for F-theory compactifications to six dimensions on elliptically fibered Calabi-Yau threefolds in appendix \ref{5dCS}. We include explicit formulas for the $SU(N)$ case. A more elaborated discussion of this duality including gravity can be found in \cite{BonettiGrimm}. 
\subsection{Resolving Calabi-Yau fourfolds} \label{resolving-4folds} Let us consider an elliptically fibered Calabi-Yau fourfold $X_4$ with fibers which can be singular over each complex co-dimension of the base $\mathcal{B}$. We further demand that these singularities can be consistently resolved while still preserving the Calabi-Yau condition. Numerous Calabi-Yau three- and fourfold examples with various gauge groups have been constructed in refs.~\cite{Candelas:1996su,Candelas:1997eh,Blumenhagen:2009yv,Grimm:2009yu,Cvetic:2010rq,Chen:2010ts,Krause:2011xj} as hypersurfaces and complete intersections inside a toric ambient space. One can show that the singularities are resolved by adding new blow-up divisors to the ambient toric space. This can be done systematically as argued in \cite{Candelas:1996su,Candelas:1997eh}. Note that only on the resolved Calabi-Yau manifolds one can straightforwardly compute the topological data of the geometry. These are required to determine the spectrum and couplings of the F-theory compactification \cite{Grimm:2010ks}. The toric resolutions are equivalent, at least at co-dimension one and two relevant here, to the small resolutions performed for an $SU(5)$ gauge group in \cite{Esole:2011sm,Marsano:2011hv}.\footnote{We like to thank D.~Klevers for explicitly checking this equivalence.} For simplicity let us focus on geometries with a single gauge group $G$ over a divisor $S_{\rm b} = S\cdot \mathcal{B}$ in the base $\mathcal{B}$ of the Calabi-Yau manifold. Here the dot denotes the intersection of the divisors $S$ and $\mathcal{B}$. The resolved Calabi-Yau fourfold will be named $\tilde X_4$ in the following. We denote the set of inequivalent exceptional resolution divisors and the Poincar\'e-dual two-forms by \begin{equation} \label{def-E} D_i,\ \omega_i \ , \qquad i =1,\ldots, \text{rank}(G)\, . \end{equation} In addition we have divisors and Poincar\'e-dual two-forms \begin{equation} \label{def-omegaalpha} D_\alpha, \ \omega_\alpha \ , \qquad \alpha =1,\ldots, h^{1,1}(\mathcal{B})\, , \end{equation} The divisors $D_\alpha$ are obtained from divisors in the base $\mathcal{B}$ as pre-image of the projection $\pi: X_4\rightarrow \mathcal{B} $ if there is no gauge group located along divisors $D_\alpha \cdot \mathcal{B}$. However, after the blow-up one has to modify the divisor $S$ in $X_4$ which hosts the gauge group $G$. One introduces the redefinition \begin{equation} S = \hat S + \sum_i a_i D_i \ , \label{eq:shift} \end{equation} where $a_i$ are the Dynkin labels of the group $G$. Note that this modification has to be taken into account when introducing a basis $D_\alpha$ on $\tilde X_4$. In such a basis one has the expansion \begin{equation} \label{S-expansion} S = C^\alpha D_\alpha\ . \end{equation} The simplest situation is that $S$ is one of the divisors $D_\alpha$. Finally, if the elliptic fibration only has a single section, we introduce the two-form $\omega_0$ Poincar\'e-dual to the base $\mathcal{B}$ itself. There are various generalizations to this setup. In particular, the geometry can induce additional $U(1)$ factors due to its fibration structure or additional singularities over curves in $\mathcal{B}$. The number of extra $U(1)$'s is counted by \begin{equation} \label{def-nU(1)} n_{U(1)} = h^{1,1}(\tilde X_4) - h^{1,1}(\mathcal{B}) - \text{rank}(G)\ . \end{equation} A particular example with an extra $U(1)$ is the $U(1)$-restricted Tate model discussed in~\cite{Grimm:2010ez}. 
The geometry is in this case restricted such that the discriminant locus develops an additional singularity over a curve, which after resolution induces a new two-form $\omega_X$. In general, each extra $U(1)$ comes with a new element $\tilde \omega_m$ of $H^{1,1}(\tilde X_4)$, and can be represented by a divisor $\tilde D_m$. Note that the two-forms $\tilde \omega_m$ have intersection properties similar to the $\omega_i$ introduced above. Hence, it will be useful to introduce the combined notation \begin{equation} \label{def-DomegaLambda} D_\Lambda = (D_i , \tilde D_m)\ , \quad \omega_\Lambda = (\omega_i,\tilde \omega_m) \ , \qquad \Lambda =1,\ldots, \text{rank}(G) +n_{U(1)}\, . \end{equation} As we will recall below, the $D_\Lambda$ have to have special intersection properties such that the corresponding gauge-fields are well-defined in four dimensions. This will allow to select an appropriate basis for $D_\Lambda$. It is important to stress that in F-theory the resolution $\tilde X_4$ is not physical. In fact, the F-theory compactification to four space-time dimensions has to be carried out on the singular space $X_4$ where the full non-Abelian gauge symmetry is present. However, the space $\tilde X_4$ can be used in the dual M-theory compactification. Recall that it is natural to describe F-theory via M-theory \cite{Denef:2008wq,Weigand:2010wm}. Starting with M-theory this interpretation requires to perform a T-duality along one of the one-cycles of the elliptic fiber of $X_4$ after going to Type IIA by shrinking the size of the elliptic fiber. Note that in the dual Type IIB setup the shrinking of the elliptic fiber corresponds to a decompactification to four space-time dimensions. The compactification on the resolved space $\tilde X_4$ is thus only possible in the M-theory picture, before shrinking the sizes of the elliptic fiber and the resolution divisors. In such a generic point in the K\"ahler moduli space of $\tilde{X}_4$, one is in the Coulomb branch of the three-dimensional gauge theory obtained by the M-theory compactification. The gauge group is \begin{equation} \label{Coulomb-Group} U(1)^{{\rm rank}(G)}\ \times\ U(1)^{n_{U(1)}}\ . \end{equation} The $U(1)$ gauge bosons arise from the expansion of the M-theory three-form $C_{3}$ into the two-form $\omega_\Lambda$ introduced in \eqref{def-DomegaLambda} as \begin{equation} \label{C3expansion} C_3 = A^\Lambda \wedge \omega_\Lambda \ ,\qquad \quad \Lambda = 1,\ldots , \text{rank}(G)+n_{U(1)}\ . \end{equation} Only in the limit in which the exceptional divisors $D_{i}$ shrink to the holomorphic surface $S$ one recovers the non-Abelian gauge symmetry $G$ present in the four-dimensional F-theory compactification. Having a fully resolved Calabi-Yau fourfolds $\tilde{X}_{4}$, one can compute the complete set of intersection numbers, and other topological data such as Chern classes. Let us here summarize the structure of intersection numbers. For a hypersurface or complete intersection in a toric ambient space they can be determined explicitly by inducing the intersection structure of the ambient space. The intersections depend on the `triangulation' as we will make more precise for the examples below. This implies that there will be various topological phases associated to an ambient space and its Calabi-Yau manifold \cite{Witten:1993yc}. 
We introduce the quadruple intersections as \begin{equation} \label{def-KABCD} \mathcal{K}_{ABCD} = \int_{\tilde X_4} \omega_A \wedge \omega_B \wedge \omega_C \wedge \omega_D\, , \end{equation} where $\omega_A = (\omega_0,\omega_\alpha,\omega_\Lambda)$. For resolved elliptically fibered Calabi-Yau fourfolds one has several vanishing conditions for the intersection numbers. Firstly, recall that for four divisors inherited from the base $\mathcal{B}$ one obviously has \begin{equation} \label{vanish_intersect1} \mathcal{K}_{\alpha \beta \gamma \delta} =0 \, . \end{equation} More subtle are the vanishing intersections involving the blow-up divisors $D_\Lambda$. The following vanishing conditions hold: \begin{equation} \label{vanish_intersect2} \mathcal{K}_{\Lambda \alpha \beta \gamma} = 0 \, , \quad \mathcal{K}_{0 \Lambda AB} = 0 \ , \end{equation} where $A,B$ run over all possible indices as in \eqref{def-KABCD}. To justify this recall that $\omega_\Lambda$ parameterizes the $U(1)$'s in \eqref{Coulomb-Group} through the expansion \eqref{C3expansion}. However, these are three-dimensional gauge fields in an M-theory compactification on $\tilde X_4$. In order that they lift to four-dimensional gauge fields the two conditions \eqref{vanish_intersect2} have to be satisfied \cite{Grimm:2010ks}. In fact, for the explicit resolutions performed below, this condition is satisfied for an appropriate basis $D_\Lambda$. Clearly, the conditions \eqref{vanish_intersect2} are consequences of the geometry of resolved elliptic fibrations. Let us now turn to the non-vanishing intersections. For a single gauge group $G$ with resolution divisors $D_i$ one finds \begin{equation} \label{dynkin_intersect} \mathcal{K}_{ij \alpha \beta} = - C_{ij} \, C^\gamma \, \mathcal{K}_{0 \alpha \beta \gamma}\ , \end{equation} where $C^\alpha$ has been introduced in \eqref{S-expansion}. $C_{ij}$ is the Cartan matrix of the algebra associated to the gauge group $G$. Note that the conditions \eqref{vanish_intersect1}, \eqref{vanish_intersect2} and \eqref{dynkin_intersect} are independent of the phase, or triangulation, of the resolution part of $\tilde X_4$.\footnote{In might be necessary to reorder the divisors $D_i$ to keep the same form of \eqref{dynkin_intersect}.} Of crucial importance for the chirality formulas will be the intersection numbers: \begin{equation} \mathcal{K}_{\alpha \Lambda \Sigma \Gamma} \ , \qquad \mathcal{K}_{\Lambda \Sigma \Gamma \Delta}\ , \end{equation} with three or four exceptional divisors $D_\Lambda$ introcuded in \eqref{def-DomegaLambda}. These crucially depend on the phase as we will see below. Let us note that the basis used for the computation of these intersection numbers is adapted to the structure of the elliptic fibration. A basis adapted to the K\"ahler cone, measuring positve volumes in the Calabi-Yau manifold, will be discussed in section \ref{KahlerMori}. \subsection{$G_4$-form fluxes and their F-theory interpretation} \label{introducingG4} In this section we introduce the $G_4$ fluxes on the resolved Calabi-Yau fourfold $\tilde X_4$. The $G_4$ fluxes have to be considered in the M-theory picture of F-theory and correspond to a non-trivial field strength of the M-theory three-form $C_3$. Together with the results of section \ref{3dCS}, this will allow us to find the set of fluxes which induce a net chiral matter spectrum along the intersection curves of the 7-branes in the F-theory limit. Let us first summarize some of the key properties of $G_4$. 
The flux is an element of the fourth cohomology group $H^{4}(\tilde X_4,\mathbb{R})$. It can be split into a horizontal and vertical part $H^{4}_V \oplus H^{4}_H$, where $H^{4}_V$ is obtained by wedging two forms of $H^{2}(\tilde X_4,\mathbb{R})$, and $H^{4}_H$ are the four-forms which can be reached by a complex structure variation of the holomorphic $(4,0)$-form on $\tilde X_4$. In the following we will be concerned with fluxes in $H^{4}_V(\tilde X_4,\mathbb{R})$, which can be written as \begin{equation} G_4 = m^{AB} \omega_A \wedge \omega_B\ , \end{equation} where $\omega_A$ is the basis introduced in section \ref{resolving-4folds}. There are constraints on $G_4$, both from an M-theory and an F-theory perspective. Firstly, M-theory anomalies demand that $G_4$ is properly quantized \cite{Witten:1996md} \begin{equation} \label{quantization} G_4 + \tfrac12 c_2(\tilde X_4)\ \in \ H^{4}_V(\tilde X_4,\mathbb{Z})\ . \end{equation} This condition is crucial for fluxes in $H^4_V$ since the second Chern class $c_2(\tilde X_4)$ is in this component of $H^4$. The quantization condition has recently been discussed in \cite{Collinucci:2010gz,Krause:2011xj} for specific gauge groups or specific geometries. However, let us stress that in general it is a hard question to determine a minimal integral basis of $H^4_V(\tilde X_4, \mathbb{Z})$.\footnote{In particular, even if one shows that a component of $c_2(\tilde X_4)$ can be written as $a\, \omega \wedge \tilde \omega$ for the effective $\omega, \tilde \omega$, the integrality of the coefficient $a$ does not imply that $\frac{a}{2} \omega \wedge \tilde \omega$ is non-integral. A fancy way to determine an integral basis is by using mirror symmetry \cite{Grimm:2009ef}.} Let us now turn to the constraints on $G_4$ imposed in the F-theory perspective. In order that the M-theory fluxes $G_4$ actually lift to F-theory fluxes without breaking four-dimensional Poincar\'e invariance and keeping the whole group $G$ unbroken, we have to enforce that various components of $G_4$ vanish. In order to do that we define\footnote{Note that we changed the definition of $\Theta_{AB}$ compared with \cite{Grimm:2011tb,Grimm:2011sk}. The chosen definition will be convenient in the match with the four-dimensional result.} \begin{equation}\label{def-theta_gen} \Theta_{AB} = \int_{\tilde X_4} G_4 \wedge \omega_A \wedge \omega_B\ . \end{equation} The fluxes relevant for our F-theory compactifications have to satisfy \begin{eqnarray} \Theta_{0\alpha} &=& 0 \ ,\qquad \Theta_{\alpha \beta} =0\ , \nonumber\\ \Theta_{i\alpha} &=& 0\ . \label{eq:G-condition} \eea Let us comment on these various constraints. The first two constraints are conditions on the existence of a Poincar\'e invariant four-dimensional theory. $\Theta_{0\alpha}$ correspond in the M-theory to F-theory limit to fluxes along the circle when performing the 4d/3d compactification as discussed in detail in \cite{Grimm:2011sk}. The fluxes $ \Theta_{\alpha \beta}$ are mapped to non-geometric fluxes in F-theory and make the existence of a four-dimensional effective theory questionable. Note that the fluxes $\Theta_{\Lambda 0}, \Theta_{00}$ are automatically vanishing due to \eqref{vanish_intersect2}, and the fact that $\Theta_{00} = \Theta_{0\alpha} K^\alpha$ with a vector $K^\alpha$ parameterizing the first Chern class of $\mathcal{B}$. The second line in \eqref{eq:G-condition} are conditions on an unbroken gauge group $G$. $\Theta_{i\alpha}$ is readily interpreted in the M-theory to F-theory limit. 
These fluxes have a four-dimensional interpretation and would induce gaugings of the axionic parts of the complexified K\"ahler moduli. This yields a breaking of the group $G$, which we demand to be unbroken in our considerations. In summary, we find that the only non-vanishing components of $\Theta_{AB}$ are given by \begin{equation} \label{def-theta} \Theta_{\Lambda \Sigma} = \int_{\tilde X_4} G_4 \wedge \omega_\Lambda \wedge \omega_\Sigma\ , \qquad \Theta_{\alpha m} = \int_{\tilde X_4} G_4 \wedge \omega_\alpha \wedge \tilde \omega_m\ . \end{equation} where $\omega_i,\omega_j$ are the two-forms Poincar\'e dual to the resolution divisors, and $\tilde \omega_m$ are the forms parameterizing extra $U(1)$'s as introduced in \eqref{def-DomegaLambda}. Let us make some further comments on the significance of $\Theta_{\alpha m}$. In \eqref{eq:G-condition} we have demanded $\Theta_{\alpha i} = 0$ to prevent breaking the gauge group by a gauging involving the Cartan generators only. For the extra $U(1)$'s such a gauging is precisely induced by $\Theta_{\alpha m}$, and we did not restrict to the case where it has to vanish. In fact the gauge invariant derivatives are \begin{equation} \label{DT-gauging} D T_{\alpha} = d T_\alpha + i \Theta_{\alpha m} A^m\ . \end{equation} Here $T_\alpha$ are the complexified K\"ahler volumes of the divisors in the base $\mathcal{B}$. The precise definition of $T_\alpha$ as well as the lift of \eqref{DT-gauging} from M-theory to F-theory can be found in \cite{Grimm:2010ks,Grimm:2011tb}. The presence of the gauging \eqref{DT-gauging} implies that the $U(1)$ can become massive by a Higgs effect. In fact, $A^m$ can `eat' the imaginary part of $T_\alpha$ and gain a new degree of freedom as required for a massive $U(1)$. Due to supersymmetry such a gauging induces also a D-term, which gives a mass to the real part of $T_\alpha$. This massive scalar appropriately combines with $A^m$ into a massive four-dimensional $\mathcal{N}=1$ vector multiplet. \subsection{Four-dimensional chirality formula from three-dimensional loops} \label{3dCS} Recall that in order to find a well-defined framework to deal with fluxes in F-theory we have used the fact that F-theory can be obtained as a limit of M-theory. In this limit four-dimensional F-theory compactifications on a singular Calabi--Yau fourfold $X_4$ are obtained from an M-theory compactification on the resolved fourfold $\tilde X_4$ in the limit of shrinking elliptic fiber and shrinking exceptional divisors. The two setups are best compared in three dimensions where the M-theory compactification on $\tilde X_4$ has to match a circle compactification of the four-dimensional F-theory effective action \cite{Grimm:2010ks}. In the following we will argue that the M-theory compactification with $G_4$ induces additional Chern-Simons terms which are not induced by a classical Kaluza-Klein reduction of a four-dimensional $\mathcal{N}=1$ gauge theory on a circle. The match is achieved only after including one-loop corrections with charged matter fermions running in the loop. Let us start by recalling some crucial facts about M-theory on a Calabi-Yau fourfold $\tilde X_4$ \cite{Haack:2001jz,Grimm:2010ks}. As in \eqref{Coulomb-Group} the three-dimensional gauge group is broken to $U(1)^{{\rm rk}G} \times U(1)^{n_{U(1)}}$ when performing the reduction on a resolved Calabi--Yau fourfold. Hence, the M-theory effective theory will be in the Coulomb branch in three-dimensional gauge theory. 
Note that the three-dimensional $\mathcal{N}=2$ vector multiplets contain as bosonic fields \begin{equation} \label{N=2vectors} (\xi^\Lambda,A^\Lambda)\ , \qquad \Lambda = 1,\ldots, {\rm rk}G+n_{U(1)}\ , \end{equation} where the $\xi^\Lambda$ are real scalars. The $A^\Lambda$ are the $U(1)$ gauge fields from the dimensional reduction of the M-theory three-form as in \eqref{C3expansion}, while the $\xi^\Lambda$ parameterize the size of the blow-ups in the M-theory compactification on $\tilde X_4$. The $\xi^\Lambda$ arise in the expansion of the normalized K\"ahler form $\tilde J= J\cdot \mathcal{V}^{-1}$, where $\mathcal{V}$ is the overall volume of $\tilde X_4$. Explicitly, one expands \begin{equation} \label{Kaehlerexpand} \tilde J = \xi^\Lambda \omega_\Lambda + L^\alpha \omega_\alpha + R \omega_0 \ , \end{equation} where $\omega_\alpha,\omega_\Lambda$ are the two-forms introduced in \eqref{def-omegaalpha}, \eqref{def-DomegaLambda}, and $\omega_0$ is the Poincar\'e dual to the base $\mathcal{B}$. The key observation is that the inclusion of $G_4$ fluxes in the M-theory reduction induces a Chern-Simons term in the three-dimensional effective action. In particular, for the vector multiplets \eqref{N=2vectors} one finds a Chern-Simons term \begin{equation} \label{Chern-Simons3d} S^{(3)}_{\rm CS} = \frac14 \int_{\mathbb{M}^{2,1}} \Theta_{\Lambda \Sigma} \, A^\Lambda \wedge F^\Sigma \end{equation} where $\Theta_{\Lambda \Sigma}$ is given in terms of the $G_4$ flux as in \eqref{def-theta}, and $F^\Lambda$ is the field strength of $A^\Lambda$. Due to the $\mathcal{N}=2$ supersymmetry of the three-dimensional theory $\Theta_{\Lambda \Sigma}$ has to be constant, which is consistent with \eqref{def-theta}. We now turn to the F-theory picture, and consider a general $\mathcal{N}=1$ gauge theory compactified to three dimensions on a circle. The four-dimensional theory is identified with the low energy effective theory obtained by reducing F-theory on a Calabi-Yau fourfold $X_4$. In the four-dimensional theory the charged fermions $\chi^s$ appear with a kinetic term \cite{Wess:1992cp} \begin{equation} \label{ferm-kinetic-term} K_{r \bar s} \bar \chi^s \displaystyle{\not}{\mathcal{D}} \chi^r \ , \end{equation} where $\mathcal{D}_\mu$ is the covariant derivative under the four-dimensional gauge group, and $K_{r \bar s}$ is the K\"ahler metric for the matter multiplets. After compactification on $S^1$, the terms \eqref{ferm-kinetic-term} will induce a coupling of the fermions to the $S^1$-component of the four-dimensional vectors. These components are identified with the $\xi^\Lambda$ if one move to the Coulomb branch of the gauge theory \cite{Grimm:2010ks}, where the vector fields also span the Abelian group \eqref{Coulomb-Group}. The resulting three-dimensional coupling is a mass term for the $\chi^r$ with mass parameter $\xi^\Lambda$. We aim to compare the three-dimensional theories of M-theory and F-theory after Kaluza-Klein reduction. 
In a general framework of three-dimensional Abelian gauge theories, the quantum-corrected coupling of the Chern-Simons term $\frac12\int (k_{\Lambda \Sigma})_{{\rm eff}} A^\Lambda \wedge F^\Sigma$ can be written as \cite{Aharony:1997bx} \begin{equation} (k_{\Lambda \Sigma})_{{\rm eff}} = (k_{\Lambda \Sigma})_{{\rm class}} + \frac{1}{2}\sum_{f}(q_f)_{\Lambda}(q_f)_{\Sigma}\ {\rm sign} \Big(\sum_{\Gamma=1}^{{\rm rk}G}(q_{f})_{\Gamma} \xi^{\Gamma} + \tilde m_{f} \Big), \label{eq:cs_3d} \end{equation} where the second term arises from a one-loop diagram with charged fermions running in the loop. Here, $f$ runs over all charged fermions, $(q_f)_\Lambda$ is the $U(1)_\Lambda$ charge and $\tilde m_f$ is the classical mass of the fermion $f$. Since the three-dimensional theories we consider originate from the dimensional reduction of four-dimensional $\mathcal{N}=1$ supersymmetric gauge theories, the classical Chern-Simons term with these indices is absent \cite{Grimm:2010ks,Grimm:2011sk}, i.e.~$ (k_{\Lambda \Sigma})_{{\rm class}} =0$. Furthermore, since the fermions are massless in the F-theory limit $\xi^\Lambda \rightarrow 0$ one also has to set $\tilde m_f=0$. Therefore, comparing the Chern-Simons couplings \eqref{Chern-Simons3d} of the M-theory reduction with the general one-loop expression \eqref{eq:cs_3d} we find the relation\footnote{In the match of M-theory with F-theory a factor $1/2$ has to be taken into account. This has been discussed in \cite{Grimm:2011tb,Grimm:2011sk} for the gauge coupling function $f_{\rm M} = 1/2 f_{\rm F}$, where $f_{\rm F}$ is the three-dimensional gauge coupling obtained after circle reduction.} \begin{equation} \Theta_{\Lambda \Sigma} = \frac{1}{2}\sum_{f}(q_f)_{\Lambda}(q_f)_{\Sigma}\ {\rm sign}\Big(\sum_{\Gamma=1}^{{\rm rk}G}(q_{f})_{\Gamma}\xi^{\Gamma}\Big). \label{eq:chirality3d} \end{equation} This expression gives the link between the $G_4$ fluxes on $\tilde X_4$ and the number of fermions running in the loop, once the charges $(q_{f})_{\Gamma}$ and the sign-factors are given. We will now determine these data using the geometric M-theory setting. To see how the right-hand side of \eqref{eq:chirality3d} can be expressed in terms of the geometric data we have to recall how the fermionic states arise in M-theory. Recall that in F-theory the matter fields arise from strings stretching between two 7-branes intersecting over a matter curve $\Sigma_{\bf R}$. Here we will indicate by ${\bf R}$ the representation of the four-dimensional gauge group in which the matter fields localized on this curve transform. In the M-theory picture these string states correspond to M2-branes. More precisely, in the resolved phase $\tilde X_4$ the matter multiplets arise from M2-branes wrapped on the resolution $\mathbb{P}^1$'s fibered over the matter curve \cite{Weigand:2010wm}. Crucially, one can establish a map between the weights of the representation $\bf R$ and the resolution $\mathbb{P}^1$'s fibered over the matter curves $\Sigma_{\bf R}$ \cite{Intriligator:1997pq,Katz:1997eq,Marsano:2011hv}. Therefore we will denote the resolution curve associated to a weight ${\bf w}$ by $\mathcal{C}_{\bf w}$. We will discuss this identification in much more detail in section \ref{Strategy}. Using this map one can give a geometric formula for the $U(1)_\Lambda$ charge of an M2-brane wrapping a curve $\mathcal{C}_{{\bf w}}$.
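To illustrate the structure of \eqref{eq:cs_3d}, the following minimal sketch evaluates the one-loop Chern-Simons levels for a toy spectrum. It is only meant as an illustration: the charges, Coulomb-branch scalars and classical masses are hypothetical input data, not derived from any fourfold geometry.
\begin{verbatim}
import numpy as np

def k_eff(charges, xi, masses):
    """One-loop CS levels of eq. (eq:cs_3d) with k_class = 0:
    sum over fermions of 0.5 * q q^T * sign(q.xi + m)."""
    charges = np.asarray(charges, dtype=float)
    k = np.zeros((charges.shape[1], charges.shape[1]))
    for q, m in zip(charges, masses):
        k += 0.5 * np.outer(q, q) * np.sign(q @ xi + m)
    return k

# toy spectrum: one fermion of charge (1,0) on the Coulomb branch
print(k_eff([[1, 0]], xi=np.array([0.3, 0.1]), masses=[0.0]))
# -> [[0.5 0. ]
#     [0.  0. ]]
\end{verbatim}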
The charge $(q_f)_\Lambda$ for these fields is given by \begin{equation} \label{def-qfi} (q_f)_\Lambda = q^{\bf w}_\Lambda = \int_{\mathcal{C}_{{\bf w}}}\omega_\Lambda \ , \end{equation} where we have used that the $U(1)$ charge of a fermion depends only on the weight to which it corresponds. The real scalar $\xi^\Lambda$ is obtained from the expansion of the K\"ahler form as in \eqref{Kaehlerexpand}. Hence, we can rewrite the sign part of \eqref{eq:chirality3d} as \begin{equation} {\rm sign} \sum_{\Gamma=1}^{{\rm rk}G}(q_{f})_{\Gamma}\xi^{\Gamma} = {\rm sign} \int_{\mathcal{C}_{{\bf w}}} \tilde J \equiv {\rm sign} ({\bf w}) \ , \label{eq:sign_kahler} \end{equation} for a matter field with weight ${\bf w}$. Here we have used the abbreviation ${\rm sign} ({\bf w})$ to indicate when a curve is positive or negative, i.e.~we introduce the notation \begin{eqnarray} \label{wsmallbig} {\bf w} >0 \qquad &\Leftrightarrow& \qquad \int_{\mathcal{C}_{{\bf w}}} \tilde J > 0 \ ,\\ {\bf w} < 0 \qquad &\Leftrightarrow& \qquad \int_{\mathcal{C}_{{\bf w}}} \tilde J < 0 \ . \nonumber \eea Motivated by the appearance of this sign-factor in \eqref{eq:chirality3d} we will introduce in detail the notion of the relative Mori cone in section \ref{Strategy}. Roughly speaking, the curves in the relative Mori cone are precisely the resolution curves $\mathcal{C}_{{\bf w}}$ for which the sign \eqref{eq:sign_kahler} is positive. Therefore, providing the techniques to determine the relative Mori cone of a compact Calabi--Yau fourfold will determine the signs in \eqref{eq:chirality3d}. Let us denote by $n_{\bf r}$ the number of fermions in the effective three-dimensional theory transforming in a representation ${\bf r}$. Using \eqref{def-qfi} and \eqref{eq:sign_kahler} we can rewrite \eqref{eq:chirality3d} as \begin{equation} \label{Theta_wweights} \Theta_{\Lambda \Sigma} = \frac12 \sum_{\bf r} n_{\bf r} \sum_{{\bf w} \in {W({\bf r})}} q^{\bf w}_\Lambda q^{\bf w}_\Sigma\ {\rm sign}( {\bf w})\ , \end{equation} where the sum runs over all representations for which $n_{\bf r}$ fermions appear in the spectrum. From the expression \eqref{Theta_wweights} one can see that vector-like pairs drop out of the contribution to the Chern-Simons term. If there is a vector-like pair, we always have a pair of weights ${\bf w}$ and $-{\bf w}$, and their $U(1)$ charges are opposite, $q_\Lambda^{-{\bf w}} = -q_\Lambda^{{\bf w}}$. Then, the contribution from the vector-like pair is \begin{eqnarray} q^{\bf w}_\Lambda q^{\bf w}_\Sigma \ {\rm sign}( {\bf w}) + (q^{-{\bf w}}_\Lambda) (q^{-{\bf w}}_\Sigma)\ {\rm sign}( -{\bf w}) &=& \nonumber \\ q^{\bf w}_\Lambda q^{\bf w}_\Sigma \ {\rm sign}( {\bf w}) + (-q^{{\bf w}}_\Lambda) (-q^{{\bf w}}_\Sigma)\ (-{\rm sign}( {\bf w}))&=&0\ . \eea Therefore, only the chiral indices $\chi({\bf R}) = n_{\bf R} - n_{{\bf R}^*}$, with some numerical factors, appear on the right-hand side of \eqref{Theta_wweights}. Clearly, for a given setup one can simply compute the $q_{\Lambda}^{\bf w}$ and determine the signs \eqref{eq:sign_kahler}. This allows one to read off $\chi({\bf R})$. Formally, one can write this as \begin{equation} \chi({\bf R}) = t^{\Lambda \Sigma}_{\bf R} \Theta_{\Lambda \Sigma}\ , \end{equation} where $t^{\Lambda \Sigma}_{\bf R}$ is a matrix associated to the representation $\bf R$. In fact, $t^{\Lambda \Sigma}_{\bf R}$ determines the matter surface $S_{\bf R}$ appearing in \eqref{eq:chirality2}. In the next section we will present a formalism to compute $t^{\Lambda \Sigma}_{\bf R}$ explicitly for a given Calabi-Yau geometry.
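As a small cross-check of \eqref{Theta_wweights} and of the cancellation just described, one can tabulate a toy spectrum and evaluate the sum directly. The multiplicities, charge vectors and signs below are invented for illustration and do not correspond to a specific geometry.
\begin{verbatim}
import numpy as np

def theta(spectrum):
    """Evaluate eq. (Theta_wweights): spectrum is a list of
    (n_r, weights), weights a list of (q_vector, sign(w)) pairs."""
    dim = len(spectrum[0][1][0][0])
    th = np.zeros((dim, dim))
    for n_r, weights in spectrum:
        for q, sgn in weights:
            th += 0.5 * n_r * np.outer(q, q) * sgn
    return th

chiral = [(3, [([1, -1], +1)])]                  # 3 chiral fermions
pair   = [(1, [([2, 0], +1), ([-2, 0], -1)])]    # weights w and -w
print(theta(chiral))   # nonzero
print(theta(pair))     # zero: the vector-like pair drops out
\end{verbatim}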
Let us stress that in the evaluation of \eqref{Theta_wweights} one uses the three-dimensional fermion spectrum. However, since this three-dimensional theory is obtained as an $S^{1}$ compactification of a four-dimensional theory arising from an F-theory compactification, the zero mode spectrum of the three-dimensional theory should match that of the four-dimensional theory. In other words, \eqref{eq:chirality2} equally determines the chirality in F-theory compactifications on the Calabi--Yau fourfold $X_4$. One could suspect that there are other modes running in the loop which arise from the Kaluza-Klein tower in the circle compactification. It will be shown in \cite{BonettiGrimm} that such modes generate other Chern-Simons couplings but do not contribute in \eqref{Theta_wweights}. In summary, we need the following data in order to evaluate \eqref{Theta_wweights} and to determine the chiral index: \begin{itemize} \item A detailed identification of the weights ${\bf w}$ with the resolution curves $\mathcal{C}_{\bf w}$ for a given resolved compact Calabi-Yau fourfold. \item The information about the sign of the K\"ahler form $\tilde J$ integrated over the curves $\mathcal{C}_{\bf w}$. \end{itemize} Both of these data will be introduced in detail in section \ref{Strategy}, and evaluated for specific examples in section \ref{Examples}. To end this section, let us point out that there is an elegant way to encode the match \eqref{eq:chirality3d} by a single auxiliary function $\mathcal{T}$. One defines $\mathcal{T}$ such that its second derivative with respect to $\xi^\Lambda$ will generate \eqref{eq:chirality3d}. Hence, one has \begin{equation} \label{ddT} \Theta_{\Lambda \Sigma} = 2\, \partial_{\xi^\Lambda} \partial_{\xi^\Sigma} \mathcal{T}\ . \end{equation} From the M-theory perspective a natural definition of $\mathcal{T}$ is \begin{equation} \label{def-cT} \mathcal{T} = \frac14 \int_{\tilde X_4} \tilde J \wedge \tilde J \wedge G_4\ , \end{equation} which indeed satisfies \eqref{ddT}. Let us isolate the part $\mathcal{T}^{\rm c}$ of $\mathcal{T}$ which encodes the data about the fermionic spectrum running in the loop by defining $\mathcal{T}^{\rm c} = \frac12 \xi^\Lambda \xi^\Sigma \partial_{\xi^\Lambda} \partial_{\xi^\Sigma} \mathcal{T}$. Then the condition \eqref{Theta_wweights} translates into \begin{equation} \mathcal{T}^{\rm c} = \frac{1}{8}\sum_{\bf r} n_{\bf r} \sum_{{\bf w} \in W({\bf r})} \int_{\mathcal{C}_{\bf w}}\tilde J \big| \int_{\mathcal{C}_{\bf w}}\tilde J \big|\ . \end{equation} Note that the real function $\mathcal{T}$ as defined in \eqref{def-cT} is well-known in the M-theory and F-theory reductions \cite{Haack:2001jz,Grimm:2010ks,Grimm:2011sk}. It encodes not only data about the spectrum, as argued here, but also the three-dimensional scalar potential. In fact, after performing the F-theory limit, the four-dimensional D-terms can also be read off from this real function \cite{Grimm:2010ks}. For example, the complete expansion of \eqref{def-cT} also includes the components $\Theta_{m\alpha}$ appearing in the $U(1)$-gaugings \eqref{DT-gauging}, which generate the corresponding D-terms. \section{Strategy to derive chirality formulas on resolved fourfolds} \label{Strategy} In this section we will describe our strategy to explicitly evaluate the formula \eqref{eq:chirality2} to determine the four-dimensional chiral spectrum induced by non-trivial $G_4$ flux.
The section is divided into several parts which stepwise introduce the geometrical tools to perform the computations. A particular focus will be on the determination of the matter surfaces using the Mori cone generators of the resolved Calabi-Yau fourfold. In outlining the tools we will also explain how details of the resolution process at co-dimension two and three in the base $\mathcal{B}$ can be inferred from the compact geometry using the Mori cone. The discussion of this section will be kept rather general. Examples for which these computations can be carried out explicitly are postponed to section~\ref{Examples}. \subsection{The relative K\"ahler and Mori cone} \label{KahlerMori} We have seen in the previous section \ref{3dCS} that the evaluation of the one-loop corrections \eqref{Theta_wweights} requires a detailed knowledge of the positivity of the resolution curve classes. In the following we want to formalize this further. We therefore endow the space of divisors of the resolved fourfold $\tilde X_4$ with a cone structure by singling out positive K\"ahler forms. This will allow us to define the relative K\"ahler cone and the relative Mori cone. Recall that the K\"ahler cone is spanned by K\"ahler forms $J$ satisfying the positivity conditions $\int_{\Sigma^k} J^k > 0 $, where $\Sigma^k$ are $k$-dimensional holomorphic submanifolds of $\tilde X_4$. The K\"ahler cone can be spanned by a basis of two-forms or, equivalently, a basis of divisors. The cone dual to the K\"ahler cone is known as the Mori cone. It is spanned by effective curves with non-negative coefficients. In the following we want to introduce the \textit{relative K\"ahler and Mori cones}, which parameterize the fields that are driven to a special limit when blowing down the resolutions. Note that the exceptional divisors $D_{i}$ correspond to the simple roots of $G$. Since the weights are elements of the dual space to the simple roots, the weights correspond to holomorphic curves $\Sigma_{\mathcal{I}}$ inside the Mori cone, such that \begin{equation} D_i \ \Rightarrow \ \text{roots}\ ,\qquad \Sigma_\mathcal{I} \ \Rightarrow \ \text{weights} \ . \end{equation} The intersections $D_i \cdot \Sigma_{\mathcal{I}}$ correspond to the natural dual pairing of weights and roots. We will discuss the precise identification of a given curve with a weight in the next subsection. In the following we want to first give the definitions of the relative cones. In the shrinking limit of the exceptional divisors $D_{\Lambda}$, there are holomorphic curves $\Sigma$ which are contained in $D_{\Lambda}$ and map to points in $X_4$. We will call the space of all such shrinking curves the relative Mori cone: \begin{equation} M(\tilde{X}_4/X_4)=\{ \Sigma \ |\ \Sigma\ \, {\rm effective\;curve\;mapping\;to\;a\;point\;in\;X_4}\}. \label{eq:relative_mori} \end{equation} In the M-theory interpretation of the F-theory compactification the charged matter fields arise from M2-branes wrapping the holomorphic curves $\Sigma$, as discussed in section \ref{3dCS}. We have seen in \eqref{Theta_wweights} that the evaluation of the chirality requires knowledge of the positivity of the curves. Later on we will also argue that the relative Mori cone plays a crucial role in identifying the resolution process of higher co-dimension singularities. The dual cone to the relative Mori cone is called the relative K\"ahler cone $K(\tilde{X}_4/X_4)$.
Hence, the relative K\"ahler cone can be defined as \begin{equation} K(\tilde{X}_4/X_4)=\{ D=\sum s_\Lambda D_\Lambda \ | \ D\cdot \Sigma > 0{\rm \;for\;all\;}\ \Sigma \in M(\tilde X_4/X_4) \}. \end{equation} Note that the relative K\"ahler cone for the Cartan generators of $G$ realized on a singular Calabi-Yau threefold was already introduced in \cite{Intriligator:1997pq}. In this case the negative relative K\"ahler cone is identified with the sub-wedge of the Weyl chamber of $G$ in five-dimensional gauge theories. Let us introduce a natural extension of the relative Mori cone. We will add additional generators to $M(\tilde X_4/X_4)$ which are effective curves in the elliptic fiber, i.e.~elements of the Mori cone, that intersect the generators of the relative K\"ahler cone. In simple cases this amounts to including the pinched elliptic fiber over the 7-brane. We will call the resulting cone $\widehat M(\tilde X_4/X_4)$ the \textit{extended relative Mori cone}. Clearly, we can similarly introduce the \textit{extended relative K\"ahler cone} $\widehat K(\tilde{X}_4/X_4)$ dual to $\widehat M(\tilde X_4/X_4)$. The cone $\widehat K(\tilde{X}_4/X_4)$ will contain one more generator $D_0 = \hat S$ which corresponds to the extended node of the Dynkin diagram of $G$. This generator allows us to extend \eqref{dynkin_intersect} to \begin{equation} \mathcal{K}_{IJ \alpha \beta} = - C_{IJ} \, C^\gamma \, \mathcal{K}_{0 \alpha \beta \gamma}\ , \end{equation} where $I=(0,i)$ and $C_{IJ}$ is the extended Cartan matrix. \subsection{Mori cone, singularity resolution, and connection with group theory} \label{subsec:mori} Having determined the relative Mori and K\"ahler cone, we now want to make contact with the group theory of the seven-brane gauge theory with gauge group $G$. Our key point will be the precise association of some weights of a representation of the gauge group with the elements of the relative Mori cone. \subsubsection{General discussion} We start more generally and introduce the charge vectors $\ell_{\mathcal{I}, A}$ given by intersecting curves $\Sigma_\mathcal{I}$ in the Mori cone with divisors $D_A$ in $\tilde X_4$ as \begin{equation} \label{ell-def} \ell_{\mathcal{I},A} = \int_{\Sigma_\mathcal{I}} \omega_A = \Sigma_\mathcal{I} \cdot D_A\ . \end{equation} We will determine the $\ell_{\mathcal{I},A}$ for specific examples in section \ref{Examples}. Let us make here some general comments, denoting henceforth by $\ell_\mathcal{I}$ the vector with entries \eqref{ell-def}. For Calabi-Yau fourfold examples which are realized as hypersurfaces or complete intersections in a toric ambient space one determines the vectors $\ell_{\mathcal{I}}$ in two steps \cite{Berglund:1995gd,Berglund:1996uy,Braun:2000hh}. First, one uses the set of toric divisors $D_A$ of the ambient space and derives the $\ell_\mathcal{I}$ using the Mori cone generators of the ambient space. Since the ambient space can admit many triangulations, i.e.~topological phases connected by flop transitions, one obtains for a given geometry several sets of vectors $\ell_\mathcal{I}^{\rm (I)},\ell_\mathcal{I}^{\rm (II)}, \ell_\mathcal{I}^{\rm (III)},\ldots $, each set associated to a phase. Restricted to the Calabi-Yau manifold $\tilde X_4$ it can happen that different triangulations of the ambient space are connected by flops of curves which are not in $\tilde X_4$.
Second, this implies that several sets of the ambient space $\ell$-vectors have to be combined to describe the $\ell$-vectors of the Calabi-Yau manifold $\tilde X_4$.\footnote{By abuse of notation we have used the same symbols and indices for the $\ell$-vectors of the ambient space and the Calabi-Yau manifold $\tilde X_4$. Let us stress that even the number of $\ell$-vectors can differ for the two geometries.} Clearly, it will be our task to determine these vectors $\ell_\mathcal{I}$ for $\tilde X_4$ itself in section \ref{Examples}. For completeness a brief account of the general procedure to determine the $\ell$-vectors for a Calabi-Yau hypersurface is given in appendix \ref{sec:mori_cone}. Let us now make contact with the gauge theory on the 7-branes. We recall that we are working with the resolved fourfold $\tilde X_4$ and hence are on the Coulomb branch in the M-theory compactification to three dimensions. The geometrically massless gauge fields then parameterize the Abelian group \eqref{Coulomb-Group}, $U(1)^{\text{rank}(G)} \times U(1)^{n_{U(1)}}$. In connection with this gauge group it will be crucial to analyze the $\ell$-vectors associated to the $U(1)$-charges for the divisors $D_\Lambda$. These are given by \begin{equation} \label{U(1)charges} \ell_{\mathcal{I}, \Lambda} = \Sigma_\mathcal{I} \cdot D_\Lambda\ , \end{equation} where $D_{\Lambda} = (D_i,\tilde D_m) $ as in \eqref{def-DomegaLambda}. In particular, for the Cartan $U(1)$'s in $G$ one has the Cartan charges $\ell_{\mathcal{I},i}=\Sigma_\mathcal{I} \cdot D_i$, where $D_i$ are the resolution divisors corresponding to the Cartan generators of $G$. One realizes that a curve $\Sigma_\mathcal{I}$ will be in the \textit{relative} Mori cone if it has negative intersection with one of the $D_\Lambda$: \begin{equation} \ell_{\mathcal{I},\Lambda} < 0 \quad \Rightarrow \quad \Sigma_\mathcal{I} \in M(\tilde X_4 /X_4)\ . \label{eq:relative_mori_condition} \end{equation} In fact, if the curve $\Sigma_\mathcal{I}$ has negative intersection with $D_\Lambda$, then $\Sigma_\mathcal{I}$ is contained in $D_\Lambda$ and shrinks to a point in $X_4$. Note that if a curve $\Sigma_\mathcal{I}$ is in the base $\mathcal{B}$ itself, the intersection with the $D_\Lambda$ vanishes due to the intersection structure \eqref{vanish_intersect2}. This is consistent with the fact that such curves have no $U(1)$-charges under the group \eqref{Coulomb-Group}, and are not in the relative Mori cone. By computing the $U(1)$-charges of a curve $\Sigma_\mathcal{I}$ with respect to the $D_\Lambda$, one can next determine a weight which reproduces the same $U(1)$ charges, and associate the weight to the curve $\Sigma_\mathcal{I}$ or the $\ell$-vector $\ell_\mathcal{I}$. This leads to the identification \begin{equation} \ell_\mathcal{I} \ \cong \ \text{weight of a representation of}\ G\ . \end{equation} Using this method one can associate a weight of $G$ to each generator of the relative Mori cone. This allows us to determine which weights $\bf{w}$ correspond to effective curves and which weights do not. Since we know the weights which correspond to the generators of the relative Mori cone, any other weight corresponding to an effective curve has to be realized as a linear combination of the weights in the relative Mori cone with non-negative integer coefficients. Applying this process, we determine the complete correspondence between the weights and the effective curves.
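This process is easily automated. The following minimal sketch decides whether a given weight lies in the cone spanned by the identified generators; the generator data anticipates the $SU(5)$ example of section \ref{following_SU(5)}, the search bound is an ad hoc cutoff, and weights are compared modulo the $SU(5)$ singlet $e_1+\cdots+e_5$, as will also be done in the text below.
\begin{verbatim}
from itertools import product

def canon(v):
    # compare SU(5) weights modulo the singlet e1+...+e5
    s = sum(v)
    return tuple(5 * x - s for x in v)

def is_effective(weight, gens, max_coeff=3):
    """True iff weight is a non-negative integer combination
    of the relative Mori cone generators gens."""
    for c in product(range(max_coeff + 1), repeat=len(gens)):
        combo = [sum(ci * g[k] for ci, g in zip(c, gens))
                 for k in range(5)]
        if any(c) and canon(combo) == canon(weight):
            return True
    return False

gens = [(0, 0, -1, 0, 0),   # -e3
        (0, 0, 1, 1, 0),    # e3+e4
        (-1, 1, 0, 0, 0),   # -e1+e2
        (0, -1, 0, -1, 0)]  # -e2-e4
print(is_effective((0, 0, 0, 1, 0), gens))  # e4 -> True
print(is_effective((1, 0, 0, 0, 0), gens))  # e1 -> False
\end{verbatim}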
Consistent with \eqref{wsmallbig} we use the following simplifying notation \begin{eqnarray} \label{notation_effective} {\bf w} >0 \quad &\Leftrightarrow& \quad {\bf w}\ \text{corresponds to an effective curve}\\ {\bf w} <0 \quad &\Leftrightarrow& \quad {\bf w}\ \text{does not correspond to an effective curve}\ . \nonumber \eea We argue in the following that details of the resolution process are contained in this information. Before turning to the resolution process, let us briefly comment on how one can also represent the roots as curves. In fact, due to the intersection numbers \eqref{dynkin_intersect}, we can always introduce a curve $\mathcal{C}_{-\alpha_i}$ associated to the negative of a simple root $\alpha_i$ by the triple intersection of three divisors \begin{equation} \mathcal{C}_{-\alpha_i} = D_i \cdot \tilde D \cdot \mathcal{D}, \label{simple-roots} \end{equation} where $\tilde D = v^\alpha D_\alpha$ and $\mathcal{D} = s^\alpha D_\alpha$ are linear combinations of the divisors $D_\alpha$ inherited from the base. To ensure the correct normalization of the simple root $\alpha_i$ these divisors have to satisfy the condition \begin{equation} \mathcal{B} \cdot S \cdot \tilde D \cdot \mathcal{D} = 1 \ . \end{equation} Hence, for $D_i, \tilde D, \mathcal{D}$ being holomorphic hypersurfaces of $\tilde X_4$ the curves which correspond to the negative simple roots are effective curves and elements of the relative Mori cone. The situation is different for the weights: some of the weights do not correspond to effective curves. However, if one finds a weight which corresponds to an effective curve, one can construct other weights corresponding to effective curves from a linear combination of the original weight and the negative simple roots with positive integer coefficients. This does not mean that all the weights correspond to effective curves, since some weights need negative coefficients for their construction. The co-dimension one singularities in $\mathcal{B}$ over the surface $S_{\rm b}$ determine the gauge group $G$ on the 7-branes. Generically, there are also co-dimension two and co-dimension three singularities. Physically, charged matter fields are localized on the co-dimension two singularities, and the Yukawa interactions between these fields are generated at the co-dimension three singularities. Focusing on $G$ we realize that the Cartan divisors $D_i$ are $\mathbb{P}^{1}$-fibrations over the generic points of the surface $S_{\rm b}$. However, the $\mathbb{P}^{1}$-fibers may degenerate into smaller irreducible components along the singularity enhancement loci where the matter and Yukawa couplings are localized. The resolution of the co-dimension one singularity generates the extended Dynkin diagram of $G$. The resolution of the higher co-dimension singularities will generate other Dynkin diagrams which may have a rank larger than ${\rm rank}(G)$. We propose rules to determine these Dynkin diagrams from the resolution of the higher co-dimension singularities by exploiting the relative Mori cone. Let us consider a situation where the charged matter fields in the representations ${\bf R}$ and ${\bf R}^{\ast}$ of $G$ are localized along the co-dimension two singularity enhancement locus $\Sigma_{\bf R}$. {}From the relative Mori cone, one can determine whether a weight of ${\bf R}$ or ${\bf R}^{\ast}$ corresponds to an effective curve or not.
Then, the rule to determine the degeneration of the $\mathbb{P}^{1}$'s along $\Sigma_{{\bf R}}$ is that the negative of a simple root decomposes into a weight of ${\bf R}$ and a weight of ${\bf R}^{\ast}$ if both of them correspond to effective curves. If a curve corresponding to a weight lies in the relative Mori cone it is an effective curve. In this decomposition process one has to use the generators of the extended relative Mori cone as much as possible. In particular, one checks if the weight of ${\bf R}$ found in the decomposition can be further decomposed into a weight of ${\bf R}$ and the negative of a simple root, and if either of them is an element of the relative Mori cone or corresponds to the extended node. In this evaluation one should not mix in the weights of the other representation. Also, since the negative of a simple root is a generator of the relative Mori cone, it does not need to be decomposed further. By collecting all the irreducible components along $\Sigma_{{\bf R}}$ plus a curve corresponding to the extended Dynkin node, one can construct the Dynkin diagram generated from the resolution along the co-dimension two singularity locus $\Sigma_{\bf R}$. To make this algorithm clearer without introducing all the details of the global geometry we give a simple $SU(5)$ example in subsection \ref{following_SU(5)}. The co-dimension three singularity enhancement occurs at a point $p$ where at least two co-dimension two singularity loci intersect: \begin{equation} \label{def-p} p=\Sigma_{\bf R} \cdot \Sigma_{\bf R'} \ \subset S_{\rm b}\ . \end{equation} Here we suppose that the charged matter fields in the representations ${\bf R}$ and ${\bf R}^{\ast}$ are localized on one curve $\Sigma_{{\bf R}}$, and other charged matter fields in the representations ${\bf R}^{\prime}$ and ${\bf R}^{\prime \ast}$ are localized along the other curve $\Sigma_{{\bf R}^{\prime}}$. Although the Dynkin diagram obtained from the resolution along the locus $\Sigma_{{\bf R}}$ consists of some of the weights of ${\bf R}$, weights of ${\bf R}^{\ast}$ and the negative simple roots, the weights of ${\bf R}^{\prime}$ and the weights of ${\bf R}^{\prime \ast}$ can also form nodes of the Dynkin diagram obtained from the resolution at the co-dimension three singularity point $p$ defined in \eqref{def-p}. Hence, a weight of ${\bf R}$, of ${\bf R}^{\prime \ast}$, or the negative of a simple root of $G$ decomposes further at $p$ if it decomposes into effective curves which correspond to any weights of ${\bf R}, {\bf R}^{\ast}, {\bf R}^{\prime}, {\bf R}^{\prime \ast}$ or the negative simple roots of $G$. When the singularity is enhanced to $G_{p} \supset G$ at $p$, this decomposition has to obey the algebra of $G_{p}$. From this decomposition rule, one can obtain all the weights and simple roots which form the Dynkin diagram obtained from the resolution of the co-dimension three singularity at $p$. \subsubsection{A simple $SU(5)$ example} \label{following_SU(5)} Since this explanation is rather abstract, let us illustrate the above procedure in a simple example with gauge group $SU(5)$. The representations are ${\bf R} = {\bf 5}$ and ${\bf R} = {\bf 10}$, localized along the enhancement curves $\Sigma_{\bf 5}$ and $\Sigma_{\bf 10}$ respectively. We will not introduce the complete geometry here, but rather focus on the determination of the Dynkin nodes over the enhancement loci.
In other words, we assume that the $\ell$-vectors have been determined for a given geometry, and that the association of the generators of the relative Mori cone with the weights of $SU(5)$ has been performed. We consider the following identification: \begin{equation} \label{id_weightsandell} \tilde \ell_{1} \cong -e_{3}, \qquad \tilde \ell_{2} \cong e_{3}+e_{4}, \qquad \tilde \ell_3 \cong -e_{1}+e_{2},\qquad \tilde \ell_4 \cong -e_{2}-e_{4} \ , \end{equation} where $\tilde \ell_i$ are the $\ell$-vectors generating the relative Mori cone, and $e_i$ are an orthonormal basis of $\mathbb{R}^5$ allowing us to represent the roots and weights of $SU(5)$ and its representations. A compact Calabi-Yau fourfold which exactly yields the identification \eqref{id_weightsandell} can be found in section \ref{Example1}, see equation \eqref{eq:weight1}. We first want to determine the weights which appear as curves in the relative Mori cone. For the weights of the {\bf 5} representation, $-e_3$ is one of the generators of the relative Mori cone \eqref{id_weightsandell} and hence corresponds to an effective curve. Then, it is straightforward to see \begin{eqnarray} e_4 &=& (-e_3) + (e_3+e_4),\\ -e_2 &=& (-e_2-e_4) + (-e_3) + (e_3+e_4),\\ -e_1 &=& (-e_1+e_2) + (-e_2-e_4) + (-e_3) + (e_3+e_4). \end{eqnarray} Hence, $e_4, -e_1$, and $-e_2$ correspond to effective curves. To determine whether $e_5$ or $-e_5$ corresponds to an effective curve, we use the fact that $e_1+e_2+e_3+e_4+e_5$ is a singlet of $SU(5)$. Then, we have \begin{eqnarray} e_5&=&e_1+e_3+e_5+ (-e_1) + (-e_3),\\ &=&(-e_2-e_4) + (-e_1) + (-e_3). \end{eqnarray} Therefore, $e_5$ corresponds to an effective curve. To summarize, the correspondence between the effective curves and the ${\bf 5}$ weights is given by \begin{equation} \label{effective5} e_{1} < 0,\qquad e_2 < 0,\qquad e_3 < 0,\qquad e_4 > 0,\qquad e_5 > 0\ , \end{equation} and has to be interpreted using the notation \eqref{notation_effective}. A similar analysis can be carried out for the weights of the $\bf 10$ representation \begin{eqnarray} \label{effective10} && e_1 + e_2 < 0,\qquad e_1+e_3<0,\qquad e_2 + e_3 < 0, \qquad e_2 +e_4 < 0, \\ && e_3 +e_4 > 0, \qquad e_3 + e_5>0,\qquad e_4 + e_5 > 0. \nonumber \eea This concludes the identification of weights with effective curves. Using this information we can now determine how the negative simple roots degenerate over the enhancement curves $\Sigma_{{\bf 10}}$ and $\Sigma_{\bf 5}$. Let us start with $\Sigma_{{\bf 10}}$, along which some of the negative simple roots degenerate into ${\bf 10}$ and $\overline{{\bf 10}}$ weights. First, we consider the decomposition of the negative simple roots into smaller components along $\Sigma_{{\bf 10}}$ \begin{equation} -({\rm simple}\;{\rm root}) = \overline{{\bf 10}}\;{\rm weight} + {\bf 10}\;{\rm weight}. \label{eq:decomposition1-10} \end{equation} Then, if both the $\overline{{\bf 10}}$ weight and the {\bf 10} weight correspond to effective curves in the relative Mori cone, this degeneration occurs along $\Sigma_{{\bf 10}}$. The check of effectiveness of the curves corresponding to all the {\bf 10} weights was given in \eqref{effective10}.
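The splitting rule itself is mechanical and can be automated. As a toy illustration, anticipating the analogous analysis along $\Sigma_{\bar{\bf 5}}$ below (which is completely fixed by the sign assignments \eqref{effective5}), the following sketch scans the negative simple roots for decompositions into an effective $\bar{\bf 5}$ weight plus an effective ${\bf 5}$ weight; the sign data is hard-coded from \eqref{effective5}.
\begin{verbatim}
# sign of e_i from eq. (effective5): +1 <-> e_i effective,
# -1 <-> -e_i effective (an effective 5bar weight)
sign = {1: -1, 2: -1, 3: -1, 4: +1, 5: +1}

# negative simple roots -alpha_i = -e_i + e_{i+1}; candidate
# split: (-e_i) + (e_{i+1}), cf. eq. (eq:decomposition1-5)
for i in range(1, 5):
    if sign[i] == -1 and sign[i + 1] == +1:
        print(f"-e{i}+e{i+1} = (-e{i}) + (e{i+1})")
# -> only -e3+e4 = (-e3) + (e4) splits over Sigma_5bar
\end{verbatim}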
Hence, the degenerations of the negative simple roots along $\Sigma_{{\bf 10}}$ are \begin{eqnarray} -e_{2}+e_{3} &=& (-e_{2} - e_{4}) + (e_{3} + e_{4}),\label{eq:decomposition_D5-1}\\ -e_{4}+e_{5} &=& (-e_{1} - e_{4}) + (e_{1} + e_{5}) =(-e_{1} + e_{2}) + (-e_{2} -e_{4}) + (e_{1} + e_{5}).\label{eq:decomposition_D5-2} \end{eqnarray} To summarize, along $\Sigma_{\bf 10}$ the negative simple roots of $SU(5)$ plus the extended node $e_1-e_5$ split into \begin{equation} e_{1}-e_{5},\quad e_{1}+e_{5},\quad -e_{1}+e_{2},\quad -e_{2}-e_{4},\quad -e_{3}+e_{4},\quad e_{3}+e_{4}. \label{eq:D5gens_text} \end{equation} The resolution curves associated to the weights \eqref{eq:D5gens_text} form the extended Dynkin diagram of $D_{5}$ as depicted in Figure \ref{fig:Dynkin_phase1}. Note that the well-known form of the $D_5$ Dynkin diagram is not directly visible by simply looking at the group-theoretic intersections of the elements \eqref{eq:D5gens_text}. However, this structure can be inferred by a local analysis as presented in appendix~\ref{sec:direct_comp}. Let us stress that this information is not needed in the evaluation of the chirality formulas and hence will not play a major role in this work. We next turn to the singularity enhancement locus $\Sigma_{{\bf \bar{5}}}$. In this case, some of the negative simple roots decompose into a {\bf 5} weight and a $\bar{{\bf 5}}$ weight, \begin{equation} -({\rm simple}\;{\rm root}) = \bar{{\bf 5}}\;{\rm weight} + {\bf 5}\;{\rm weight} . \label{eq:decomposition1-5} \end{equation} Since $-e_3$ and $e_4$ correspond to effective curves by \eqref{effective5}, the decomposition of the negative simple roots along $\Sigma_{\bar{{\bf 5}}}$ is \begin{equation} -e_{3} + e_{4} = (-e_{3}) + (e_{4}). \end{equation} Then, the negative simple roots of $SU(5)$ plus the extended Dynkin node become \begin{equation} \label{eq:A5gens_text} e_{1}-e_{5},\quad -e_{1}+e_{2},\quad -e_{2}+e_{3},\quad -e_{3},\quad e_{4},\quad -e_{4}+e_{5}. \end{equation} The curves associated to these weights form the extended $A_{5}$ Dynkin diagram as depicted in Figure \ref{fig:Dynkin_phase1}. Once again, in the derivation of the chirality formulas we will only need the identification of \eqref{eq:A5gens_text} with effective curves and not the precise match with the Dynkin diagram. \subsection{Matter surfaces and the chiral index} Having discussed how the relative Mori cone can determine the resolution process of the higher co-dimension singularities, we next want to include the $G_4$ fluxes on $\tilde X_4$ and evaluate the chirality formula \eqref{eq:chirality2}. Recall from section \ref{introducingG4} that F-theory fluxes have to satisfy the conditions \eqref{quantization} and \eqref{eq:G-condition}. The non-vanishing components of $G_4$ are captured by the matrices $\Theta_{\Lambda \Sigma}$ and $\Theta_{m\alpha}$ introduced in \eqref{def-theta}. Let us turn to the determination of the matter surfaces $S_{\bf R}$ appearing in \eqref{eq:chirality2} by using the extended relative Mori cone. As discussed in \cite{Donagi:2008ca,Hayashi:2008ba,Braun:2011zm,Marsano:2011hv,Krause:2011xj} this matter surface should be obtained by fibering the resolution $\mathbb{P}^1$'s over the matter curve $\Sigma_{\bf R}$ supporting matter in the representations ${\bf R}$ and ${\bf R^*}$. A relation of the matter surfaces with the weights of ${\bf R}$ was stressed in \cite{Marsano:2011hv}. The fiber $\mathcal{C}_{\bf w}$ corresponds to a weight ${\bf w}$ of the representation ${\bf R}$.
The curves $\mathcal{C}_{{\bf w}}$ are identical to the ones introduced in the resolution of the co-dimension two singularity locus $\Sigma_{{\bf R}}$. Hence, each curve $\mathcal{C}_{\bf w}$ can be determined from the relative Mori cone as discussed above. Such effective curves can be written as the triple intersection of divisors. We make the Ansatz \begin{equation} \label{P1ansatz} \mathcal{C}_{\bf w} = t^{A \Sigma}_{{\bf w}}\, D_A \cdot D_{\Sigma} \cdot \mathcal{D}\, ,\qquad \quad \mathcal{D}=s^{\alpha}D_{\alpha}\ , \end{equation} with some real coefficients $t^{A \Sigma}_{{\bf w}} = t^{A}_{{\bf w}} v^\Sigma$ which generally depend on the $s^\alpha$. Here $v^\Sigma D_{\Sigma}$ is an exceptional divisor which contains the curve $\mathcal{C}_{\bf w}$. Let us note that we checked this Ansatz for our examples, and showed that it can always be satisfied. This includes the observation that there exists a divisor $\mathcal{D}$, intersecting the base $\mathcal{B}$ in a divisor, which can be separated off as in \eqref{P1ansatz}. For weights of representations of the gauge group $G$ on $S_{\rm b}$, $\mathcal{D}$ intersects $S_{\rm b}$ in a curve. It would be desirable to give a geometric proof that there always exists a representation of the class of $\mathcal{C}_{\bf w}$ of the form \eqref{P1ansatz}. At least for $SU(N)$ gauge theories with matter in the fundamental and anti-symmetric representations, one can show that the curve $\mathcal{C}_{\bf w}$ can always be written as \eqref{P1ansatz} using a group theory argument. In the fundamental or anti-symmetric representation, all the Cartan charges of the highest weight are non-negative. On the other hand, the other weights have at least one negative Cartan charge. Hence, the Ansatz \eqref{P1ansatz} could only fail if the highest weight appears as a generator of the relative Mori cone. For the other weights one can always choose a component $D_{\Lambda}$ in the Ansatz \eqref{P1ansatz} which has negative intersection number with the curve $\mathcal{C}_{\bf w}$. If the highest weight were a generator of the relative Mori cone, all the weights would correspond to effective curves in the relative Mori cone, since the negative simple roots are always effective curves by \eqref{simple-roots}. When one sums up all the weights $e_i,\; (i=1, \cdots, N)$ of the fundamental representation, or all the weights $e_i + e_j,\; (1 \leq i < j \leq N)$ of the anti-symmetric representation, one obtains $e_1+ \cdots + e_N$ or $(N-1)(e_1 + \cdots + e_N)$ respectively. Namely, the singlet of $SU(N)$ would correspond to an effective curve in the relative Mori cone. However, if the curve corresponding to the singlet is in the relative Mori cone, the relative K\"ahler cone cannot be defined since $\int_{\mathcal{C}_{{\rm singlet}}}\sum s^{i}D_{i} = 0$. Therefore, the highest weight cannot be a generator of the relative Mori cone, and the generators of the relative Mori cone have at least one negative Cartan charge. This negative Cartan charge indicates that the curve is contained in an exceptional divisor. Hence, one can always make the Ansatz \eqref{P1ansatz} for $SU(N)$ gauge theories with matter in the fundamental and anti-symmetric representations. Since our final interest is in the matter surface $S_{{\bf R}}$ we still have to extract a surface out of the curve \eqref{P1ansatz}. In order to do that we propose to pull out the divisor $\mathcal{D}$, which becomes a curve in $S_{\rm b}$.
In order to fix the normalization of the $t^{A \Sigma}_{{\bf w}}$ in \eqref{P1ansatz} we demand that the curve $\mathcal{D}\cdot S \cdot \mathcal{B}$ intersects the matter curve $\Sigma_{{\bf R}}$ in $S_{\rm b}$ exactly once. In other words, we normalize $\mathcal{D}$ such that \begin{equation} \Sigma_{{\bf R}} \cdot \mathcal{D} =1\ . \label{eq:multiplicity} \end{equation} The condition \eqref{eq:multiplicity} fixes the normalization of $\mathcal{D}$, and via \eqref{P1ansatz} the normalization of the $t^{A \Sigma}_{\bf w}$. The class of the matter surface $S_{\bf R}$ is then fixed and given by \begin{equation} \label{matter-surfaces} S_{\bf R} = t^{A \Sigma}_{{\bf w}}\, D_A \cdot D_\Sigma\, , \end{equation} with $t^{A \Sigma}_{\bf w} = t^{A}_{\bf w} v^\Sigma$ as in \eqref{P1ansatz}. For a fixed $\mathcal{D}$ the parameters $t^{A \Sigma}_{{\bf w}}$ are determined from the intersection numbers between the curve $\mathcal{C}_{\bf w}$ and the divisors $D_{A}$. These intersection numbers are already known as the entries of the $\ell$-vectors. This procedure does not determine the parameters uniquely, but it fixes the class of the curve $\mathcal{C}_{\bf w}$ and the matter surface $S_{\bf R}$. Curves are in the same class if their intersection numbers with the divisors are identical. Strictly speaking one should note that $S_{\bf R}$ depends on the chosen weight for the representation ${\bf R}$. However, as will become more clear momentarily, this ambiguity drops out of the chirality formula \eqref{eq:chirality2}. Using the non-vanishing components of the $G_4$ flux \eqref{def-theta}, the chirality formula \eqref{eq:chirality2} together with the explicit form of the matter surfaces $S_{\bf R}$ \eqref{matter-surfaces} yields \begin{equation} \chi({\bf R}) = t^{A \Sigma}_{\bf w} \Theta_{A\Sigma}\ . \label{chirality} \end{equation} {}From this expression one can infer that the chirality is indeed independent of the weight ${\bf w}$ and only depends on the representation ${\bf R}$. Suppose that a curve corresponding to another weight ${\bf w}^{\prime}$ of the representation ${\bf R}$ also appears along the locus $\Sigma_{{\bf R}}$. Since any two weights can be related by a linear combination of simple roots, one can write ${\bf w}^{\prime} = {\bf w} - \sum_{i}u^{i}\alpha_i$. Since the negative simple roots can always be written as \eqref{simple-roots}, we can expand $\tilde{D} = v^{\alpha}D_{\alpha}$, and identify $\mathcal{D}$ of \eqref{simple-roots} and \eqref{P1ansatz}. Hence, adding or subtracting the negative simple roots to or from the weight ${\bf w}$ corresponds to adding or subtracting $D_i \cdot \tilde{D}$ to or from $t_{\bf w}^{A \Sigma }D_{A} \cdot D_{\Sigma}$. Then, the expression \eqref{chirality} evaluated for the two different weights ${\bf w}$ and ${\bf w}^{\prime}$ yields \begin{eqnarray} \label{independent_of_weight} t_{{\bf w}^{\prime}}^{A \Sigma}\Theta_{A \Sigma} &=& t_{{\bf w}}^{A \Sigma}\Theta_{A\Sigma} + u^{i}v^{\alpha}\Theta_{i\alpha} \\ &=& t_{{\bf w}}^{A \Sigma}\Theta_{A\Sigma}\ ,\nonumber \end{eqnarray} where we used \eqref{eq:G-condition}. Therefore, the chirality formula \eqref{chirality} does not depend on the weight ${\bf w}$ but only on the representation ${\bf R}$. In geometrical terms this also implies that \eqref{chirality} does not depend on the topological phase of $\tilde X_4$, i.e.~on the choice among the different Calabi-Yau resolutions of $X_4$.
In fact, in different phases other weights of the same representation are associated to the matter curves, and \eqref{independent_of_weight} ensures independence of the resolution phase. \section{Examples} \label{Examples} In this section we discuss two illustrative examples of explicitly resolved Calabi-Yau hypersurfaces realized in a toric ambient space. Our first example will admit an $SU(5)$ singularity over a divisor in the base, as in the torically realized GUT models of \cite{Blumenhagen:2009yv,Grimm:2009yu}. In the second example an additional $U(1)$ will be present, such that $n_{U(1)}=1$ in \eqref{def-nU(1)}. The toric construction will correspond to the $U(1)$-restricted Tate model of \cite{Grimm:2010ez}. The toric methods required to perform the computations of this subsection have been explained in~\cite{Candelas:1996su,Candelas:1997eh,Blumenhagen:2009yv, Grimm:2009yu}, and were recently reviewed in~\cite{Knapp:2011ip}. The determination of the Mori cone can be performed using the methods of \cite{Berglund:1995gd,Berglund:1996uy,Braun:2000hh} as reviewed in appendix \ref{sec:mori_cone}. \subsection{A Calabi-Yau hypersurface with $SU(5)$ gauge group} \label{Example1} As the first example, we consider a Calabi--Yau fourfold $\tilde{X}_4$ which has a K3 fibration. The K3 fibration itself admits an elliptic fibration, such that the fourfold can be used in an F-theory compactification. Such a Calabi--Yau fourfold can be obtained from a hypersurface in the ambient toric space whose points on the edges of the polyhedron are \begin{eqnarray} \begin{array}{|ccccc|rl|} \hline \multicolumn{5}{|c|}{\text{points}} &\ \text{divisor}& \hspace*{.1cm} \text{basis}\hspace*{.3cm} \\ \hline \hline -1 & 0 & 0 & 0 & 0 & D_{1} & \\ 0 & -1 & 0 & 0 & 0 & D_{2} & \\ 3 & 2 & 0 & 0 & 0 & D_{3} &= \mathcal{B} \\ 3 & 2 & 1 & 0 & 0 & D_{4} & =\hat S\\ 3 & 2 & -1 & 0 & 0 & D_{5} & \\ 3 & 2 & 0 & 1 & 1 & D_{6} & \\ 3 & 2 & 0 & -1 & 0 & D_{7} & \\ 3 & 2 & 0 & 0 & -1 & D_{8} & =H \\ 2 & 1 & 1 & 0 & 0 & D_{9} & =B_1 \\ 1 & 1 & 1 & 0 & 0 & D_{10} & =B_2\\ 1 & 0 & 1 & 0 & 0 & D_{11} & =B_3 \\ 0 & 0 & 1 & 0 & 0 & D_{12} & =B_4 \\ \hline \end{array} \label{eq:toric3} \end{eqnarray} Note that we have introduced a basis of independent toric divisors $\mathcal{B},\hat S,H$, and $B_i$. These $7$ divisors will span a basis of independent divisors on a generic hypersurface $\tilde X_4$ embedded in the class $K = \sum_i D_i$, such that $h^{1,1}(\tilde X_4)=7$.\footnote{This is a consequence of the Lefschetz hyperplane theorem.} One realizes from \eqref{eq:toric3} that $S_{\rm b} = \mathcal{B} \cdot \hat S$ is the $\mathbb{P}^2$ base of the K3-fibration. The normal bundle of $S_{\rm b}$ in $\mathcal{B}$ is trivial, $N_{S_{\rm b}|\mathcal{B}} = \mathcal{O}_{\mathbb{P}^{2}}$. In \eqref{eq:toric3} we also introduced the blow-up divisors $B_i$ for the resolution of an $A_{4}$ singularity over $S_{\rm b}$. The divisor $S$ of $\tilde X_4$ associated to $S_{\rm b}$ is given by $S = \hat S + B_1 + B_2 + B_3 + B_4$, the $SU(5)$ version of \eqref{eq:shift}. The Hodge numbers of the Calabi--Yau fourfold can be computed as \begin{equation} h^{1,1}(\tilde{X}_{4}) = 7,\qquad h^{2,1}(\tilde{X}_{4})=0,\qquad h^{3,1}(\tilde{X}_{4})=2148,\qquad \chi(\tilde{X}_{4}) = 12978. \end{equation} \subsubsection{Mori cone, resolutions and group theory} \label{sec:degeneration} The generators of the Mori cone for the Calabi--Yau fourfold $\tilde{X}_{4}$ can be obtained by the method described in \cite{Berglund:1995gd,Berglund:1996uy,Braun:2000hh}.
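Before listing the Mori cone generators, a quick cross-check of the divisor counting is possible. The following minimal sketch computes the number of independent toric divisor classes directly from the points in \eqref{eq:toric3}, using that each of the five lattice coordinates yields one linear relation among the twelve toric divisors; this is only an illustrative consistency check of $h^{1,1}(\tilde X_4)=7$, not part of the Mori cone algorithm itself.
\begin{verbatim}
import numpy as np

# points on edges of the polyhedron, eq. (eq:toric3)
pts = np.array([
    [-1, 0, 0, 0, 0], [0, -1, 0, 0, 0], [3, 2, 0, 0, 0],
    [3, 2, 1, 0, 0], [3, 2, -1, 0, 0], [3, 2, 0, 1, 1],
    [3, 2, 0, -1, 0], [3, 2, 0, 0, -1], [2, 1, 1, 0, 0],
    [1, 1, 1, 0, 0], [1, 0, 1, 0, 0], [0, 0, 1, 0, 0]])

# linear relations: sum_i <m, v_i> D_i ~ 0 for each lattice
# direction m; independent classes = #points - rank
print(len(pts) - np.linalg.matrix_rank(pts))   # -> 7
\end{verbatim}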
Note that the generators of the Mori cone for a toric ambient space are generally different from the generators of the Mori cone for a Calabi--Yau fourfold hypersurface. In general, some of the triangulations of the ambient space are connected by a flop of a curve which is not inside the Calabi--Yau fourfold. In our case, we have 54 star-triangulations of the polyhedron \eqref{eq:toric3} with respect to the origin. However, some of them are connected by flops of curves which are not contained in $\tilde{X}_{4}$. If the defining equations of a curve and of the Calabi--Yau hypersurface $\tilde{X}_4$ cannot be satisfied simultaneously, because the corresponding coordinates are elements of the Stanley-Reisner ideal, the flop of this curve is not a true flop in the Calabi--Yau fourfold $\tilde{X}_4$. This can be confirmed from the intersection numbers of $\tilde{X}_4$, since genuinely different phases give different intersection numbers. In the derivation of the intersection numbers we use the star-triangulations ignoring the interior points in the facets. The presence of these points indicates the existence of point-like singularities in the ambient space. Since the Calabi--Yau hypersurface generically does not intersect these singularities, a star-triangulation of the points in the polyhedron \eqref{eq:toric3} yields a smooth Calabi--Yau hypersurface. In our example \eqref{eq:toric3}, we find that the true number of triangulations for $\tilde{X}_{4}$ is three. The generators of the Mori cone for the three phases are: \begin{eqnarray} \begin{array}{|ccc;{2pt/2pt}cccc|c|ccc;{2pt/2pt}cccc|c|ccc;{2pt/2pt}cccc|} \cline{1-7} \cline{9-15} \cline{17-23} \multicolumn{7}{|c|}{\text{phase I}} & &\multicolumn{7}{c|}{\text{phase II}} & & \multicolumn{7}{c|}{\text{phase III}}\\ \cline{1-7} \cline{9-15} \cline{17-23} \ell_{1} &\ell_{2} &\ell_{3} & \ell_{4} & \ell_{5} & \ell_{6} & \ell_{7} & & \ell_{1} &\ell_{2} &\ell_{3} & \ell_{4} &\ell_{5} &\ell_{6} &\ell_{7} & & \ell_{1} &\ell_{2} &\ell_{3} & \ell_{4} &\ell_{5} &\ell_{6} &\ell_{7} \\ \cline{1-7} \cline{9-15} \cline{17-23} 0 & 0 & 0 & 1 & 0 & 0 & 0 && 0 & 0 & 0 & 1 & 0 & 0 & 0 && 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 && 0 & 0 & 0 & 0 & 1 & 0 & 0 && 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ -2 & -3 & 1 & 0 & 0 & 0 & 0 && -2 & -3 & 1 &0 &0 &0 & 0 && -2 & -3 & 1 &0 &0 & 0 & 0 \\ 1 & 0 & -2 & 0 & 0 & 1 & 0 && 1 & 0 & -2 & 0 &0 & 1 & 0 && 1 & 0 &-1 & 1 & 0 & 1 &-1 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 && 1 & 0 & 0 & 0 & 0 & 0 & 0 && 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 && 0 & 1 & 0 & 0 & 0 & 0 & 0 && 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 && 0 & 1 & 0 & 0 & 0 & 0 & 0 && 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 && 0 & 1 & 0 & 0 & 0 & 0 & 0 && 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & -2 & 1 && 0 & 0 & 1 & 1 & 1 & -1 &-1 && 0 & 0 & 0 & 0 & 1 &-2 & 1 \\ 0 & 0 & 1 & 0 & 1 & 0 & -1 && 0 & 0 & 1 & -1 &0 & -1 & 1 && 0 & 0 & 0 & -2 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & -1 & 1 & -1 && 0 & 0 & 0 & 0 &-2 & 0 & 1 && 0 & 0 & 0 & 0 & -2 & 1 & 0 \\ 0 & 0 & 0 & -1 & 0 & 0 & 1 && 0 & 0 & 0 & 0 & 1 & 1 &-1 && 0 & 0 & 1 & 1 & 1 & 0 &-1 \\ \cline{1-7} \cline{9-15} \cline{17-23} \end{array} \hspace*{.5cm} \label{Mori-generators1} \end{eqnarray} In the following we will indicate the phase by writing $\ell^{\rm (I)}_i$, $\ell^{\rm (II)}_i$, and $\ell^{\rm (III)}_i$ for the Mori vectors of the three phases respectively.
Note that \begin{equation} \ell^{\rm (I)}_1 = \ell^{\rm (II)}_1 = \ell^{\rm (III)}_1\ ,\qquad \ell^{\rm (I)}_2 = \ell^{\rm (II)}_2 = \ell^{\rm (III)}_2\ ,\qquad \ell^{\rm (I)}_3 = \ell^{\rm (II)}_3 =\ell^{\rm (III)}_3 + \ell^{\rm (III)}_7 \ . \end{equation} {}From this identification we already realize that phase III will be special, since its $\ell$-vectors appear more non-trivially in the last identification. A subset of the generators of the Mori cone for each phase corresponds to effective curves and can be identified with weights in a representation of $SU(5)$ in the way described in section \ref{subsec:mori}. One can read off the Cartan matrix $C_{ij}$ from the intersection numbers \eqref{dynkin_intersect}. By comparing it with the Cartan matrix of $SU(5)$, one can deduce that the blow-up divisors $B_1, B_2, B_3, B_4$ correspond to the simple roots $e_{1}-e_{2}, e_{4}-e_{5}, e_{2}-e_{3}, e_{3}-e_{4}$ respectively. Here $e_{i}$ denotes an orthonormal basis of $\mathbb{R}^{5}$. Then, we can identify the generators of the Mori cone \eqref{Mori-generators1} with the weights of some representations of $SU(5)$ from the Cartan charges \eqref{U(1)charges}. Since we are interested in the extended relative Mori cone, the relevant generators for phase I are $\ell_3^{\rm (I)}, \ell_4^{\rm (I)}, \ell_5^{\rm (I)}, \ell_6^{\rm (I)}, \ell_7^{\rm (I)}$. Among them, the generators of the relative Mori cone \eqref{eq:relative_mori} have negative intersection numbers with the Cartan divisors. Hence, these are $\ell_4^{\rm (I)}, \ell_5^{\rm (I)}, \ell_6^{\rm (I)}, \ell_7^{\rm (I)}$, while $\ell_3^{\rm (I)}$ corresponds to the extended Dynkin node. The weights of the generators of the relative Mori cone for phase I are \begin{equation} \ell_{4}^{\rm (I)} \cong -e_{3}, \qquad \ell_{5}^{\rm (I)} \cong e_{3}+e_{4}, \qquad \ell_{6}^{\rm (I)} \cong -e_{1}+e_{2},\qquad \ell_{7}^{\rm (I)} \cong -e_{2}-e_{4}. \label{eq:weight1} \end{equation} The extended node $\ell_3^{\rm (I)}$ corresponds to $e_1-e_5$. For phase II, the generators of the relative Mori cone are $\ell_4^{\rm (II)}, \ell_5^{\rm (II)}, \ell_6^{\rm (II)}, \ell_7^{\rm (II)}$. They correspond to the following weights \begin{equation} \ell_{4}^{\rm (II)} \cong e_{1}+e_{5}, \qquad \ell_{5}^{\rm (II)} \cong-e_{2}+e_{3}, \qquad \ell_{6}^{\rm (II)} \cong-e_{1} - e_{4}, \qquad \ell_{7}^{\rm (II)} \cong e_{2}+e_{4}. \label{eq:weight2} \end{equation} Similarly, $\ell_{3}^{\rm (II)}$ corresponds to the extended Dynkin node $e_{1}-e_{5}$, and $\ell_3^{\rm (II)}, \ell_4^{\rm (II)}, \ell_5^{\rm (II)}, \ell_{6}^{\rm (II)}, \ell_7^{\rm (II)}$ are the generators of the extended relative Mori cone. For phase III, the generators of the relative Mori cone and their correspondence to the weights are \begin{equation} \ell_{4}^{\rm (III)} \cong -e_{4}+e_{5}, \qquad \ell_{5}^{\rm (III)} \cong-e_{2}+e_{3}, \qquad \ell_{6}^{\rm (III)} \cong -e_{1}+e_{2}, \qquad \ell_{7}^{\rm (III)} \cong e_{1}+e_{4}. \label{eq:weight3} \end{equation} In this phase, we have a generator $\ell_{3}^{\rm (III)}$ which corresponds to a weight $-e_4 - e_5$ and does not shrink to a point in $X_4$. Hence $\ell_3^{\rm (III)}, \ell_4^{\rm (III)}, \ell_{5}^{\rm (III)}, \ell_6^{\rm (III)}, \ell_7^{\rm (III)}$ are the generators of the extended relative Mori cone. So far we have determined the generators of the extended relative Mori cone. This implies that the weights \eqref{eq:weight1}--\eqref{eq:weight3} correspond to effective curves.
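The identification \eqref{eq:weight1} can be reproduced mechanically. Given the intersection numbers of a Mori generator with the Cartan divisors $B_1,\ldots,B_4$, one simply scans the $SU(5)$ weights for matching Cartan charges; the following sketch does this for the $\ell_4^{\rm (I)}$ column of \eqref{Mori-generators1}, whose intersections with $(B_1,B_2,B_3,B_4)$ are $(0,0,1,-1)$.
\begin{verbatim}
from itertools import combinations

ROOTS = [(1, -1, 0, 0, 0),  # B1 <-> e1-e2
         (0, 0, 0, 1, -1),  # B2 <-> e4-e5
         (0, 1, -1, 0, 0),  # B3 <-> e2-e3
         (0, 0, 1, -1, 0)]  # B4 <-> e3-e4

def charges(w):
    # Cartan charges <w, alpha_i> in the orthonormal e-basis
    return tuple(sum(a * b for a, b in zip(w, r)) for r in ROOTS)

cands = {}
for s, tag in ((1, '+'), (-1, '-')):
    for i in range(5):                      # weights of 5 / 5bar
        w = [0] * 5; w[i] = s
        cands[f"{tag}e{i+1}"] = tuple(w)
    for i, j in combinations(range(5), 2):  # weights of 10 / 10bar
        w = [0] * 5; w[i] = s; w[j] = s
        cands[f"{tag}(e{i+1}+e{j+1})"] = tuple(w)

target = (0, 0, 1, -1)      # l_4^(I) . (B1, B2, B3, B4)
print([n for n, w in cands.items() if charges(w) == target])
# -> ['-e3'], in agreement with eq. (eq:weight1)
\end{verbatim}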
Using the strategy of section \ref{subsec:mori} one can also determine from the relative Mori cone whether other weights correspond to effective curves. Comparing \eqref{eq:weight1} and \eqref{id_weightsandell} we note that section~\ref{following_SU(5)} precisely discusses phase I of the resolved Calabi-Yau fourfold \eqref{eq:toric3}. The identification of the effective curves was given in \eqref{effective5}. Following the same strategy also for phases II and III, one shows that for all three phases one has \begin{equation} e_{1} < 0,\qquad e_2 < 0,\qquad e_3 < 0,\qquad e_4 > 0,\qquad e_5 > 0\ , \label{eq:5phase1} \end{equation} where we use the notation \eqref{notation_effective}. In section \ref{following_SU(5)} also the determination of the effectiveness of the $\bf{10}$ weights has been given for phase I. The result was given in \eqref{effective10}. Repeating the same analysis for phases II and III one finds the result summarized in Figure~\ref{fig:10phase1}. \begin{figure}[tb] \begin{center} \begin{tabular}{c} \includegraphics[width=100mm]{10phase1.eps} \\ \end{tabular} \caption{The effective curves corresponding to the weights of the {\bf 10} representation. A negative sign means that the negative of the weight corresponds to an effective curve.} \label{fig:10phase1} \end{center} \end{figure} As also explained in section \ref{subsec:mori}, one next determines the weights which describe the curves fibered over the matter curves $\Sigma_{\bf 5}$ and $\Sigma_{\bf 10}$. This allows one to determine the degeneration structure of the curves in the Cartan resolution divisors $D_i$ from the relative Mori cone. One considers the splits \begin{eqnarray} \Sigma_{\bf 10}: &\qquad& -({\rm simple}\;{\rm root}) = \overline{{\bf 10}}\;{\rm weight} + {\bf 10}\;{\rm weight}\ ,\\ \Sigma_{\bf 5}: &\qquad& -({\rm simple}\;{\rm root}) = \bar{{\bf 5}}\;{\rm weight} + {\bf 5}\;{\rm weight}\ . \eea For phase I this was explained in detail in section \ref{following_SU(5)}. The result was that the weights corresponding to the matter curves $\Sigma_{\bf 10}$ and $\Sigma_{\bf 5}$ are \begin{eqnarray} \Sigma_{\bf 10}: &\quad &e_{1}-e_{5},\;e_{1}+e_{5},\;-e_{1}+e_{2},\;-e_{2}-e_{4},\;-e_{3}+e_{4},\;e_{3}+e_{4} \ , \label{eq:D5gens}\\ \Sigma_{\bf 5}: &\quad& e_{1}-e_{5},\;-e_{1}+e_{2},\;-e_{2}+e_{3},\;-e_{3},\;e_{4},\;-e_{4}+e_{5}, \label{eq:A5gens} \eea as shown in \eqref{eq:D5gens_text} and \eqref{eq:A5gens_text}. The weights \eqref{eq:D5gens} form the extended Dynkin diagram of $D_{5}$ in Figure \ref{fig:Dynkin_phase1}. Similarly, the weights \eqref{eq:A5gens} form the extended $A_{5}$ Dynkin diagram also depicted in Figure \ref{fig:Dynkin_phase1}. The intersection numbers for both Dynkin diagrams can be calculated by the direct computation in appendix \ref{sec:direct_comp}. We will not need these intersection numbers in the following. \subsubsection{Yukawa couplings at co-dimension three} We have studied the degeneration along the co-dimension two singularity loci. The singularity further enhances at the $E_{6}$ and $D_{6}$ Yukawa points. At the Yukawa points, the curves of \eqref{eq:D5gens} and \eqref{eq:A5gens} further degenerate into smaller irreducible components. In this case, the degeneration generates the curves corresponding to {\bf 10}, $\overline{{\bf 10}}$ weights and also {\bf 5}, $\bar{{\bf 5}}$ weights from one Yukawa point.
In general, when the singularity is enhanced to $G_p$ at a co-dimension three point, our proposal is that the degeneration of the curves obeys the algebra of $G_p$. Namely, the further degeneration is possible only if the decompositions of the weights at the $E_6$ and $D_6$ points obey the $E_6$ and $D_6$ algebra respectively. First, let us consider the degeneration of the extended $D_{5}$ Dynkin diagram at the $E_{6}$ enhancement point. Since $e_{1}-e_{5},\; -e_{1}+e_{2},\; -e_{2}-e_{4},\; e_{3}+e_{4}$ correspond to the generators of the relative Mori cone, they do not degenerate further. Since ${\bf 5}$ or $\bar{{\bf 5}}$ weights can appear at the $E_6$ enhancement point, the negative simple root $-e_3+e_4$ in \eqref{eq:D5gens} can decompose as \begin{equation} -e_{3} + e_{4} = (-e_{3}) + (e_{4}). \label{eq:decomposition_E6_2} \end{equation} Moreover, we have the decomposition \begin{eqnarray} e_{1} + e_{5} &=& -e_{2} - e_{3} - e_{4},\nonumber \\ &=& (-e_{2}-e_{4}) + (-e_{3}), \label{eq:decomposition_E6} \end{eqnarray} where we use the fact that $e_{1} + e_{2} + e_{3} + e_{4} + e_{5}$ is a singlet of $SU(5)$. The decomposition \eqref{eq:decomposition_E6_2} obviously obeys the $E_6$ algebra since the adjoint weight decomposes into a vector-like pair. In order to see that the decomposition \eqref{eq:decomposition_E6} obeys the algebra of $E_{6}$ but does not obey the algebra of $D_{6}$, one can consider the following decompositions \begin{eqnarray} E_{6} &\supset& SU(5) \times U(1)_{1} \times U(1)_{2} \label{eq:E6}\\ {\bf 78} &\rightarrow& {\bf 1}_{0,0} + {\bf 1}_{0,0} + {\bf 1}_{-5,-3} + {\bf 1}_{5,3} + {\bf 24}_{0,0} \nonumber \\ &&+ {\bf 5}_{-3,3} + \bar{\bf 5}_{3,-3} + {\bf 10}_{-1,-3} + \overline{\bf 10}_{1,3} + {\bf 10}_{4,0} + \overline{\bf 10}_{-4,0}, \nonumber \\[.1cm] D_{6} &\supset& SU(5) \times U(1)_{1} \times U(1)_{2} \label{eq:SO(12)}\\ \bf{66} &\rightarrow& {\bf 1}_{0,0} + {\bf 1}_{0,0} + {\bf 24}_{0,0} \nonumber \\ &&+ {\bf 5}_{2,2} + {\bf 5}_{2,-2} + \bar{\bf 5}_{-2,2} + \bar{\bf 5}_{-2,-2} + {\bf 10}_{4,0} + \overline{\bf 10}_{-4,0}. \nonumber \end{eqnarray} {}From the $E_{6}$ decomposition \eqref{eq:E6}, one can associate the $E_{6}$ algebra relation \begin{equation} {\bf 10}_{4,0} \rightarrow \overline{\bf 10}_{1,3} + \bar{\bf 5}_{3,-3} \end{equation} to \eqref{eq:decomposition_E6}. However, one cannot associate the $D_{6}$ algebra to \eqref{eq:decomposition_E6} since the $U(1)$ charges are not conserved under the decomposition. Hence, this degeneration corresponds to the $E_{6}$ enhancement point. To summarize, we have the weights \begin{equation} e_{1} - e_{5},\quad -e_{1}+e_{2},\quad -e_{2}-e_{4},\quad e_{3}+e_{4},\quad -e_{3},\quad e_{4}, \label{eq:E6weights} \end{equation} at the $E_6$ Yukawa point. The weights \eqref{eq:E6weights} form the $E_{6}$ Dynkin diagram depicted in Figure \ref{fig:Dynkin_phase1}. As noted in \cite{Esole:2011sm}, \eqref{eq:E6weights} does not form the `extended' $E_{6}$ diagram and the rank does not enhance at the $E_{6}$ Yukawa point. One can also perform the same analysis for the degeneration of the extended $D_{5}$ Dynkin diagram at a $D_{6}$ enhancement point. At the $D_{6}$ Yukawa point, the degeneration \eqref{eq:decomposition_E6} is impossible since it does not satisfy the $D_{6}$ algebra. Hence, $e_{1} + e_{5}$ remains irreducible at the $D_{6}$ Yukawa point. On the other hand, $-e_{3} + e_{4}$ can decompose differently from \eqref{eq:decomposition_E6_2} as \begin{equation} -e_{3} + e_{4} = (-e_{3}) + (-e_{3}^{\prime}) + (e_{3} + e_{4}).
\label{eq:decomposition_D6} \end{equation} The decomposition \eqref{eq:decomposition_D6} is possible at the $D_{6}$ enhancement point since one has two $\bar{{\bf 5}}$ representations with different charges under $U(1)_{1}\times U(1)_{2}$ in the decomposition of $SO(12)$ displayed in \eqref{eq:SO(12)}. The $U(1)$-charge-conserving decomposition corresponding to \eqref{eq:decomposition_D6} is \begin{equation} {\bf 24}_{0,0} \rightarrow \bar{\bf 5}_{-2,2} + \bar{\bf 5}_{-2,-2} + {\bf 10}_{4,0}, \end{equation} which is not allowed in the $E_6$ algebra. Note that our proposal is that one has to decompose the weights into the generators of the extended relative Mori cone as far as possible. Therefore, the degeneration at the $D_6$ Yukawa point should not stop at \eqref{eq:decomposition_E6_2} but proceeds further to \eqref{eq:decomposition_D6}, since both $-e_3$ and $e_3+e_4$ are generators of the extended relative Mori cone. To summarize, we have the weights \begin{equation} e_{1}-e_{5},\quad e_{1}+e_{5},\quad -e_{1}+e_{2},\quad -e_{2}-e_{4},\quad e_{3}+e_{4},\quad -e_{3},\quad -e_{3}^{\prime} \label{eq:D6gens} \end{equation} at the $D_6$ Yukawa point. The curves associated to \eqref{eq:D6gens} form the extended $D_{6}$ Dynkin diagram in Figure \ref{fig:Dynkin_phase1}. \begin{figure}[tb] \begin{center} \begin{tabular}{c} \includegraphics[width=100mm]{Dynkin_phase1.eps} \\ \end{tabular} \caption{The chain of the Dynkin diagrams for phase I. The numbers in the nodes denote the multiplicities. The intersection structure cannot be inferred by simple group-theoretic arguments about the weights, but requires an inspection of the resolution geometry.} \label{fig:Dynkin_phase1} \end{center} \end{figure} The chains of the Dynkin diagrams for phases II and III can be computed in a similar manner and are depicted in Figures \ref{fig:Dynkin_phase2} and \ref{fig:Dynkin_phase3}. \subsubsection{$G_4$-flux and chirality} In this section we test the chirality formula \eqref{eq:chirality2} for the matter fields in the ${\bf 10}$ and $\bar{{\bf 5}}$ representations for the F-theory compactification on the Calabi--Yau fourfold \eqref{eq:toric3}. The necessary information is the $G_4$ flux and the matter surfaces $S_{{\bf R}}$ for the ${\bf 10}$ and $\bar{{\bf 5}}$ matter fields. Hereafter, we again focus on phase I. A construction of the $G_4$ flux and matter surfaces for $SU(5)$ examples can also be found in \cite{Marsano:2011hv}. First we determine the $G_4$ flux. We consider the $G_4$ flux constructed from the intersections of the divisors of $\tilde{X}_4$. In order to preserve four-dimensional Poincar\'e invariance and the $SU(5)_{GUT}$ symmetry, the $G_4$ flux should satisfy the conditions \eqref{eq:G-condition}. We find such a $G_4$ flux from the expansion in the general intersections of the divisors. Without any constraint, we have $7\times 8/2 = 28$ generators of such surfaces. However, not all of them are independent. First of all, we have the constraints from the Stanley-Reisner ideal of the toric ambient space for $\tilde X_{4}$. For phase I, the Stanley-Reisner ideal is \begin{eqnarray} SR&=&\{D_{2}D_{10}, D_{3}D_{9}, D_{3}D_{10}, D_{3}D_{11}, D_{3}D_{12}, D_{4}D_{5}, D_{4}D_{11}, D_{4}D_{12}, D_{1}D_{9}, D_{1}D_{11}, D_{5}D_{9}, \nonumber\\ &&D_{5}D_{10}, D_{5}D_{11}, D_{5}D_{12},D_{9}D_{12}, D_{1}D_{2}D_{3}, D_{1}D_{2}D_{4}, D_{6}D_{7}D_{8}\}. \label{eq:SR1} \end{eqnarray} Hence the quadratic elements of \eqref{eq:SR1} give 15 constraints for the surfaces, all of which are independent.
There are further constraints coming from the incompatibility between the Stanley-Reisner ideal and the Calabi--Yau hypersurface equation. Those constraints are \begin{equation} D_{1}D_{3}, D_{1}D_{4}, D_{2}D_{3}, D_{2}D_{4}, D_{2}D_{9}. \label{eq:SR2} \end{equation} However, not all the constraints from \eqref{eq:SR1} and \eqref{eq:SR2} are independent. There are actually 19 independent constraints in total, so the number of true generators for the expansion of the surfaces is $28 - 19 = 9$. We choose the basis \begin{equation} B_4^{2},\;B_3 \cdot B_4,\; B_3^{2},\; B_2\cdot B_3,\; B_2^{2},\; B_4\cdot H,\; H^{2},\; H\cdot \hat{S},\; \mathcal{B}\cdot H\ . \end{equation} Then, the general expansion of the $G_4$ flux in the nine independent surfaces is \begin{equation} G_{4} = \alpha_{1}B_4^{2} + \alpha_{2}B_3\cdot B_4 + \alpha_{3}B_3^{2} + \alpha_{4}B_2\cdot B_3 + \alpha_{5}B_2^{2} + \alpha_{6}B_4\cdot H + \alpha_{7}H^{2} + \alpha_8 H \cdot \hat{S} + \alpha_{9}\mathcal{B}\cdot H. \label{eq:G} \end{equation} The condition \eqref{eq:G-condition} reduces the nine parameters to one parameter, \begin{equation} G_{4}=\beta(8B_2 \cdot B_3 -4 B_3\cdot B_3 - 2 B_3 \cdot B_4 +3 B_4^{2} + 9 B_4 \cdot H), \label{eq:G-1} \end{equation} where $3\beta=\alpha_2$. We choose $\beta$ such that all the coefficients are integers. Let us turn to the matter surfaces $S_{{\bf 10}}$ and $S_{\overline{{\bf 10}}}$. {}From Figure \ref{fig:Dynkin_phase1}, the matter surface $S_{{\bf 10}}$ corresponds to the weights $e_{1}+e_{5}, e_{3}+e_{4}$, and $S_{\overline{{\bf 10}}}$ corresponds to the weight $-e_{2}-e_{4}$. The classes of the curves corresponding to $e_{1}+e_{5}, e_{3}+e_{4}$ and $-e_{2}-e_{4}$ can be determined from the intersection numbers \eqref{Mori-generators1}. For example, let us consider the curve $\ell_7$ corresponding to the weight $-e_{2}-e_{4}$. Since our final interest is the matter surface, we have to pull out a divisor, which intersects $S_{\rm b}$ in a curve, from the triple intersection representing the curve. We make the Ansatz $(\mu H + \nu S )$ for this divisor. Furthermore, when $\ell_i$ has a negative intersection number with a divisor $B_i$, the triple intersection representing $\ell_i$ has a component $B_i$. Hence, we can make a general Ansatz for $\ell_7$, $\ell_7=a B_2 \cdot B_3 \cdot (\mu H + \nu S)$. The parameter $a$ can be determined from the intersection numbers and the result is \begin{equation} {\bf\overline{10}}: \quad \ell_7^{\rm (I)} \cong -e_{2} - e_{4}\quad \rightarrow \quad \tfrac{1}{3\mu} B_2 \cdot B_3 \cdot (\mu H + \nu S). \end{equation} Note that we solve only for $a$ and not for $\mu$ and $\nu$ in this process. In order to obtain a matter surface corresponding to the weight $-e_{2}-e_{4}$, one has to pull out a divisor in $\mathcal{B}$ with the correct multiplicity. The correct multiplicity can be determined from the intersection with the matter curve $\Sigma_{{\bf 10}}$ \eqref{eq:multiplicity}. Since the {\bf 10} matter curve lies in the class $c_{1}(\mathcal{B})|_{S_{\rm b}}$, \eqref{eq:multiplicity} becomes \begin{equation} S_{\rm b} \cdot_{\mathcal{B}} c_{1}(\mathcal{B}) \cdot_{\mathcal{B}} \mathcal{D} = 1. \label{eq:pullout1} \end{equation} In this case, the divisor can be chosen to be $\mathcal{D}=\frac{1}{3\mu}(\mu H + \nu S)$. Hence the matter surface $S_{{{\bf \overline{10}}}}$ corresponding to the weight $-e_{2}-e_{4}$ is \begin{equation} S_{{\bf \overline{10}}} = B_2 \cdot B_3\ .
\label{eq:matter_surface10-1} \end{equation} {}From this expression of the matter surface $S_{\overline{{\bf 10}}}$ and the $G_4$ flux \eqref{eq:G-1}, one can compute the chirality for the ${\bf 10}$ matter fields. The chirality formula \eqref{eq:chirality2} becomes \begin{equation} \chi(\overline{{\bf 10}}) = \int_{S_{ \overline{\bf 10}}} G_{4} = 3 \cdot 54 \beta\ , \label{eq:chirality2-ex1} \end{equation} where we have used the intersection numbers of $\tilde X_4$. For the chirality formula of the $\bar{{\bf 5}}$ matter, one has to determine the matter surface $S_{\bar{{\bf 5}}}$. We consider a curve corresponding to the weight $-e_{3}$, which is one of the generators of the relative Mori cone for phase I. From the intersections one can determine the class of the curve $\ell_4$ corresponding to the weight $-e_{3}$. We again make an Ansatz such that the triple intersection has a component $(\mu H + \nu S )$, which is then dropped in the determination of the matter surface $S_{\bar{{\bf 5}}}$. Moreover, since $\ell_4$ has a negative intersection number with $B_4$, we can make a general Ansatz $\ell_4 = (\sum_A a^A D_A) \cdot B_4 \cdot (\mu H + \nu S)$, where $\sum_A a^A D_A$ is a general linear combination of the divisors in \eqref{eq:toric3}. The parameters $a^A$ can be determined from the intersection numbers, such that $\ell_4$ is represented by \begin{equation} {\bf\overline{5}}: \quad \ell_4^{\rm (I)} \cong -e_{3} \quad \rightarrow \quad \tfrac{1}{24 \mu}\big( B_2 -B_3 + 9H \big)\cdot B_{4} \cdot (\mu H + \nu S), \end{equation} where we choose a special representative among the solutions just for the simplicity of the expression. In order to obtain the matter surface $S_{\bar{{\bf 5}}}$, one has to pull out a divisor $\mathcal{D}$ which satisfies the condition \begin{equation} S \cdot_{\mathcal{B}} (8c_{1}(\mathcal{B}) - 5S) \cdot_{\mathcal{B}} \mathcal{D} = 1, \label{eq:pullout2} \end{equation} where $(8c_{1}(\mathcal{B}) - 5S)|_{S}$ is the class of the $\bar{{\bf 5}}$ matter curve. By using the condition \eqref{eq:pullout2}, one finds that $\mathcal{D}$ is of the form $\mathcal{D}=\frac{1}{24\mu}(\mu H + \nu S)$. Hence the matter surface $S_{\bar{{\bf 5}}}$ is \begin{equation} \label{Sbar5} S_{\bar{{\bf 5}}} = (B_2 - B_3 + 9 H)\cdot B_{4}. \end{equation} Therefore, the chirality formula for the $\bar{{\bf 5}}$ matter becomes \begin{equation} \chi({\bf \bar 5}) = \int_{S_{\bf \bar 5}} G_{4} = - 3 \cdot 54 \beta \ , \label{eq:chirality2-ex1-2} \end{equation} where we have inserted the matter surface \eqref{Sbar5} and the flux \eqref{eq:G-1}, and used the intersection numbers of $\tilde X_4$. {}From \eqref{eq:chirality2-ex1} and \eqref{eq:chirality2-ex1-2} we find the relation $\chi({{\bf 10}}) = -\chi(\overline{{\bf 10}}) = \chi({{\bf \bar 5}})$, which is consistent with the anomaly conditions for $SU(5)$ gauge theories. Note that we did not discuss the quantization of $\beta$ appearing in \eqref{eq:chirality2-ex1} and \eqref{eq:chirality2-ex1-2}. This can be done by investigating the integrality properties of the basis used in \eqref{eq:G-1}, and satisfying the constraint \eqref{quantization}. \subsubsection{Relation to three-dimensional Chern-Simons term} One can also see that the chiralities \eqref{eq:chirality2-ex1} and \eqref{eq:chirality2-ex1-2} can be obtained from the formula \eqref{eq:chirality3d}. The sign in \eqref{eq:sign_kahler} is determined from the relative Mori cone.
The effectiveness of the curves corresponding to the {\bf 10} weights is depicted in the first column of Figure \ref{fig:10phase1}. The effectiveness of the curves corresponding to the ${\bf 5}$ weights is given in \eqref{eq:5phase1}. The $U(1)_i$ charges $(q_f)_i$ of each weight can be determined from \eqref{U(1)charges}. Inserting all this information, the formula \eqref{eq:chirality3d} for the Calabi--Yau fourfold \eqref{eq:toric3} in phase I becomes \begin{align} \label{eq:3d-ex1} &\Theta_{23} = -\chi({\bf 10}),& \qquad & \Theta_{24} = \tfrac{1}{2} \chi({\bf 10}) + \tfrac{1}{2}\chi(\bar{{\bf 5}}), &\\ &\Theta_{33} = \chi(\bar{{\bf 5}}),&\qquad & \Theta_{44} = -\chi({\bf 10}),& \nonumber \\ &\Theta_{13} = \tfrac{1}{2}\chi({\bf 10}) - \tfrac{1}{2}\chi(\bar{{\bf 5}}),&\qquad & \Theta_{34} = \tfrac{1}{2}\chi({\bf 10}) - \tfrac{1}{2}\chi(\bar{{\bf 5}}), & \nonumber \\ &\Theta_{11} = -\chi({\bf 10}) + \chi(\bar{{\bf 5}}),& \qquad & \Theta_{22} = \chi({\bf 10}) - \chi(\bar{{\bf 5}})& \nonumber \end{align} and the other components are zero. Using the intersection numbers with the $G_4$-flux \eqref{eq:G-1}, one can explicitly compute the components $\Theta_{ij}$. One finds \begin{equation} \Theta_{23} = 3 \times 54 \beta,\qquad \Theta_{24} = -3 \times 54 \beta, \qquad \Theta_{33} = -3 \times 54 \beta, \qquad \Theta_{44} = 3 \times 54 \beta, \label{eq:theta-ex1-1} \end{equation} with all the others being zero. By inserting the explicit numbers \eqref{eq:theta-ex1-1} into \eqref{eq:3d-ex1}, one obtains \begin{equation} \chi({\bf 10}) = -3 \times 54 \beta,\qquad \chi(\bar{{\bf 5}}) = -3 \times 54 \beta. \label{eq:3d_matching1} \end{equation} This precisely matches the chirality obtained from integrating the $G_4$-flux over the matter surfaces, \eqref{eq:chirality2-ex1} and \eqref{eq:chirality2-ex1-2}. \subsubsection{Comparison with spectral cover} We compare the results of the proposed chirality formula, \eqref{eq:chirality2-ex1} and \eqref{eq:chirality2-ex1-2}, with the spectral cover computation. The chirality formula for the matter in the {\bf 10} representation is \cite{Curio:1998vu, Diaconescu:1998kg} \begin{equation} n_{{\bf 10}} = -\lambda \eta \cdot (5K_{S_{\rm b}} + \eta), \label{eq:chirality} \end{equation} where $S_{\rm b}$ is the surface wrapped by the $SU(5)$ brane. The divisor $\eta$ in $S_{\rm b}$ is related to the normal bundle $N_{S_{\rm b}|\mathcal{B}}$ by the equation \begin{equation} c_{1}(N_{S_{\rm b}|\mathcal{B}}) = 6 K_{S_{\rm b}} + \eta\, . \label{eq:normal_eta} \end{equation} Finally, $\lambda$ is related to the $G_4$ flux and takes values in $\mathbb{Z}+\frac{1}{2}$ \cite{Curio:1998bva}. Note that the chirality formula \eqref{eq:chirality} is a local expression on the surface $S_{\rm b}$. In contrast, the chirality formula \eqref{eq:chirality2} is defined by integration over the whole resolved Calabi--Yau fourfold $\tilde{X}_4$. In the current example, we chose $S_{\rm b} = \mathbb{P}^{2}$ and $N_{S_{\rm b}|\mathcal{B}} = \mathcal{O}_{\mathbb{P}^{2}}$. Hence, $\eta = 18 H_{\mathbb{P}^{2}}$, where $H_{\mathbb{P}^{2}}$ is the hyperplane class of $\mathbb{P}^{2}$. The chirality formula \eqref{eq:chirality} becomes \begin{equation} n_{{\bf 10}} = - \lambda (18H_{\mathbb{P}^{2}} \cdot_{\mathbb{P}^{2}} 3H_{\mathbb{P}^{2}}) = -54 \lambda\ . \label{eq:chirality1-ex1} \end{equation} One also finds the same number for the matter in the $\bar{{\bf 5}}$ representation.
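Both the three-dimensional Chern-Simons matching and this spectral cover number can be verified independently with a few lines of computer algebra. The following sketch, added here purely as an illustration using sympy, solves the nonvanishing relations of \eqref{eq:3d-ex1} with the values \eqref{eq:theta-ex1-1} inserted, and then evaluates the spectral cover formula \eqref{eq:chirality} on $S_{\rm b} = \mathbb{P}^{2}$:
\begin{verbatim}
import sympy as sp

beta, lam = sp.symbols('beta lam')
chi10, chi5b = sp.symbols('chi10 chi5b')

# Nonzero relations of eq. (3d-ex1), with the values of eq. (theta-ex1-1):
eqs = [sp.Eq(-chi10, 3*54*beta),                              # Theta_23
       sp.Eq(sp.Rational(1, 2)*(chi10 + chi5b), -3*54*beta),  # Theta_24
       sp.Eq(chi5b, -3*54*beta),                              # Theta_33
       sp.Eq(-chi10, 3*54*beta)]                              # Theta_44
print(sp.solve(eqs, [chi10, chi5b]))
# {chi10: -162*beta, chi5b: -162*beta}, i.e. eq. (3d_matching1)

# Spectral cover on S_b = P^2: K = -3H, eta = 18H, normalized to H.H = 1:
K, eta = -3, 18
print(-lam * eta * (5*K + eta))   # -54*lam, i.e. eq. (chirality1-ex1)
\end{verbatim}
The overdetermined linear system is consistent (the remaining components of \eqref{eq:3d-ex1} vanish identically for these values), and the last line reproduces $n_{\bf 10} = -54\lambda$.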
Comparing the spectral cover result \eqref{eq:chirality1-ex1} with the result \eqref{eq:chirality2-ex1}, we find agreement if we identify $3\beta = \lambda$. Since the chirality of the {\bf 10} matter fields and the chirality of the $\bar{{\bf 5}}$ matter fields are the same, the same identification also holds for the comparison between \eqref{eq:chirality1-ex1} and \eqref{eq:chirality2-ex1-2}. \subsection{A $U(1)$-restricted hypersurface with $SU(5) \times U(1)$ gauge group} In the previous subsection we have exemplified the use of the Mori cone generators and their connection to a non-Abelian gauge group $SU(5)$ on a single stack of 7-branes. We now present a second example where an additional geometrically massless $U(1)$ is present. In an $SU(5)$ model this $U(1)$ can be identified with the $U(1)_X$ in $SO(10)$. The geometric construction presented here corresponds to the $U(1)$ restricted Tate model introduced in~\cite{Grimm:2010ez}. The Calabi-Yau fourfold which we will construct has as its base a $\mathbb{P}^3$ blown up along a curve, with the exceptional divisor giving the surface $S_{\rm b}$. The gauge group on $S_{\rm b}$ is engineered to be $SU(5)$. The points on the edges of the polyhedron for the $U(1)_{X}$ restricted Tate model for such a setup are: \begin{eqnarray} \begin{array}{|ccccc|rl|} \hline \multicolumn{5}{|c|}{\text{points}} &\ \text{divisor}& \hspace*{.1cm} \text{basis}\hspace*{.3cm} \\ \hline \hline -1 & 0 & 0 & 0 & 0 & D_{1} & \\ 0 & -1 & 0 & 0 & 0 & D_{2} & \\ 3 & 2 & 0 & 0 & 0 & D_{3} &= \mathcal{B} \\ 3 & 2 & 1 & 1 & 1 & D_{4} & =H \\ 3 & 2 & -1 & 0 & 0 & D_{5} & \\ 3 & 2 & 0 & -1 & 0 & D_{6} & \\ 3 & 2 & 0 & 0 & -1 & D_{7} & \\ 3 & 2 & 1 & 1 & 0 & D_{8} & =\hat{S} \\ 2 & 1 & 1 & 1 & 0 & D_{9} & =B_1 \\ 1 & 1 & 1 & 1 & 0 & D_{10} & =B_2\\ 1 & 0 & 1 & 1 & 0 & D_{11} & =B_3 \\ 0 & 0 & 1 & 1 & 0 & D_{12} & =B_4 \\ -1 & -1 & 0 & 0 & 0& D_{13} & =X \\ \hline \end{array} \label{eq:toric4} \end{eqnarray} Note that the inclusion of the last vertex corresponding to $X= D_{13}$ enforces $a_6 = 0$ in the standard Tate constraint. Here $a_6$ is the coefficient of the $z^6$ term, with $z=0$ being the base $\mathcal{B}$. This method can be quite generally applied to obtain geometrically massless $U(1)$s in the four-dimensional spectrum \cite{Grimm:2010ez}. In \eqref{eq:toric4} we have introduced the independent divisors $\mathcal{B},H,\hat S,B_i$ and $X$. Note that $B_{1}, \cdots, B_{4}$ are the exceptional divisors resolving the $A_{4}$ singularity. $X$ originates from the resolution of an $SU(2)$ singularity along a curve outside $S_{\rm b}$. As in \eqref{eq:shift} we introduce the divisor $S = \hat{S} + B_1 + B_2 + B_3 +B_4$. The Hodge numbers of the Calabi--Yau fourfold $\tilde{X}_{4}$ are \begin{equation} h^{1,1}(\tilde{X}_{4})=8,\;\; h^{2,1}(\tilde{X}_{4})=0,\;\; h^{3,1}(\tilde{X}_{4})=1020,\;\; \chi(\tilde{X}_{4})=6216. \end{equation} In the following we will determine the Mori vectors and follow the resolution process. This will allow us to determine the net chirality induced by an F-theory compatible flux. \subsubsection{Mori cone, resolutions and group theory} \label{sec:degeneration2} We begin our analysis with the determination of the Mori cone and its relation to the group theory of $SU(5) \times U(1)$. Restricting to star-triangulations of the polyhedron \eqref{eq:toric4} including the origin, and using the method described in appendix \ref{sec:mori_cone}, we find twelve phases for the hypersurface.
As in the previous example, we use the star-triangulations ignoring the interior points in the facets. For simplicity we will focus on one phase in the following. The generators of the Mori cone for this phase are \begin{eqnarray} \begin{array}{|c c c;{2pt/2pt} c c c c c|} \hline \ell_1 & \ell_2 & \ell_3 & \ell_4 & \ell_5 & \ell_6 & \ell_7 & \ell_8\\ \hline 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 0 & -1 & 1 \\ -3 & -1 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & -1 & -2 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 & -2 & 0 &0 \\ 0 & 0 & 1 & 1 & -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & -1 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 &0 & 0 & 0 & 1 & -1 \\ \hline \end{array} \label{eq:mori4} \end{eqnarray} The singular limit $\tilde{X}_4 \rightarrow X_4$ is the limit in which $B_{1},\ldots, B_{4}$ collapse to the surface $S_{\rm b}$ and $X$ collapses to a curve. In this limit, each of the curves $\ell_{4}, \ell_{5}, \ell_{6}, \ell_{7}, \ell_{8}$ shrinks to a point in $X_4$. Hence, they are the generators of the relative Mori cone. Note that the Cartan charges of $SU(5)$ are the same as the ones of \eqref{Mori-generators1}. Furthermore, $\ell_3$ intersects the Cartan divisors. Hence, the generators of the extended relative Mori cone are $\ell_{3}, \ell_{4}, \ell_{5}, \ell_{6}, \ell_7, \ell_{8}$. Some of the weights are charged under the new $U(1)$ which originates from the reduction along the Poincar\'e dual two-form of $X$. As for the Cartan divisor of the other $U(1)$, we use the divisor \cite{Grimm:2010ez} \begin{equation} B_{5} = X - B -[c_1(\mathcal{B})], \label{eq:U(1)} \end{equation} where $[c_1(\mathcal{B})] = H + D_5 + D_6 + D_7 + S$.\footnote{Here we use $[c_1(\mathcal{B})]$ including $S$ instead of $\hat S$. More precisely, one could also write $\pi^* c_{1}(\mathcal{B})$. Any modification by a shift with blow-up divisors will just result in a change of basis in the following discussion.} This redefinition is required to ensure that the intersection numbers satisfy the vanishing condition \eqref{vanish_intersect2} in this basis. The Poincar\'e dual two-form $\omega_5$ of $B_5$ is used in the dimensional reduction to obtain the $U(1)_X$ gauge field $A_X$ as $C_3 = A_X \wedge \omega_5$. From the intersections between $B_i$ with $i=1,\cdots, 4$, and $B_\Lambda$ with $\Lambda=1,\cdots, 5$, one obtains a part of the Cartan matrix of $SO(10)$ as \cite{Krause:2011xj} \begin{eqnarray} B_i \cdot B_\Lambda \cdot D_\alpha \cdot D_\beta = \left( \begin{array}{ccccc} -2 & 1 & 0 & 0 & 0 \\ 1 & -2 & 1 & 0 & 0 \\ 0 & 1 & -2 & 1 & 1 \\ 0 & 0 & 1 & -2 & 0 \end{array} \right) \mathcal{B} \cdot S \cdot D_\alpha \cdot D_\beta \ . \label{eq:SO(10)_Cartan} \end{eqnarray} Since $B_5 \cdot B_5$ does not localize on the surface $S$, the component $C_{55}$ of \eqref{dynkin_intersect} does not reproduce the $5$-$5$ component of the $SO(10)$ Cartan matrix. This is consistent with a four-dimensional theory with gauge group $SU(5)_{GUT} \times U(1)_X$ rather than $SO(10)$, and matter representations originating from an underlying $SO(10)$. This precisely occurs in the $U(1)_X$ restricted model as a global realization of the $(4+1)$ split spectral cover model considered in \cite{Marsano:2009gv, Blumenhagen:2009yv}.
Indeed, we can identify the weights corresponding to the generators of the Mori cone \eqref{eq:mori4} with weights in representations of $SO(10)$ once we identify the root corresponding to the Cartan divisor $B_5$ with $e_4+e_5$, which is one of the simple roots of $SO(10)$. Motivated by the intersections \eqref{eq:SO(10)_Cartan}, we identify the Cartan divisors $B_{\Lambda}, (\Lambda = 1, \cdots, 5)$ with $e_1-e_2, e_4-e_5, e_2-e_3, e_3-e_4, e_4+e_5$, which are the simple roots of $SO(10)$. From the Cartan charges in \eqref{eq:mori4}, one can then identify the curves $\ell_{4}, \ell_{5}, \ell_{6}, \ell_{7}, \ell_{8}$ with weights in representations of $SO(10)$, \begin{eqnarray} \ell_4 &\cong& -\tfrac{1}{2}e_1-\tfrac{1}{2}e_2 + \tfrac{1}{2}e_3+\tfrac{1}{2}e_{4}-\tfrac{1}{2}e_5 = e_3+e_4-e_{\Sigma},\nonumber\\ \ell_5 &\cong& \tfrac{1}{2}e_1-\tfrac{1}{2}e_2 + \tfrac{1}{2}e_3-\tfrac{1}{2}e_{4}+\tfrac{1}{2}e_5 =-e_2-e_4+e_{\Sigma},\nonumber\\ \ell_6 &\cong& -e_1+e_2,\label{eq:gen_mori4}\\ \ell_7 &\cong& \tfrac{1}{2}e_1+\tfrac{1}{2}e_2 - \tfrac{1}{2}e_3+\tfrac{1}{2}e_{4}+\tfrac{1}{2} e_5 =-e_3+e_{\Sigma},\nonumber\\ \ell_8 &\cong& -\tfrac{1}{2}e_1-\tfrac{1}{2}e_2 - \tfrac{1}{2}e_3 - \tfrac{1}{2}e_{4}-\tfrac{1}{2}e_5 =-e_{\Sigma},\nonumber \end{eqnarray} where $e_{\Sigma}$ denotes $\tfrac{1}{2}(e_1+e_2+e_3+e_4+e_5)$. In terms of $SU(5)$, $\ell_4$ corresponds to a weight in the ${\bf 10}$ representation, $\ell_5$ corresponds to a weight in the ${\bf \overline{10}}$ representation, and $\ell_7$ corresponds to a weight in the ${\bf \bar{5}}$ representation. They are identical to the ones in phase I of the first example \eqref{eq:weight1}. In the $U(1)_X$ restricted model, we can further understand their $SO(10)$ origins. The weights corresponding to $\ell_4$ and $\ell_7$ come from the ${\bf 16}^{\prime}$ representation and the weight corresponding to $\ell_5$ comes from the ${\bf 16}$ representation of $SO(10)$. We can also consider the weight corresponding to the curve $\ell_7 + \ell_8$, \begin{equation} \ell_7 + \ell_8 \cong -e_3. \end{equation} In $SU(5)$ this curve corresponds to a weight of the ${\bf \bar{5}}$ representation. Its $SO(10)$ origin is a weight of the ${\bf 10}$ representation, since $\pm e_a \; (a=1, \cdots, 5)$ are the weights of the ${\bf 10}$ representation of $SO(10)$. Therefore, we have two types of ${\bf \bar{5}}$ representations, originating from the ${\bf 16}^{\prime}$ and ${\bf 10}$ representations of $SO(10)$. We also have a singlet field associated with the weight $-e_{\Sigma}$, which corresponds to the curve $\ell_8$. In addition, the generator $\ell_3$ corresponds to the extended weight $e_1-e_5$ up to a term $e_{\Sigma}$, which is a singlet in $SU(5)$. This curve does not shrink in the singular limit and is an additional generator of the extended relative Mori cone. {}From the relative Mori cone, one can determine the resolution structure. Since the $SU(5)$ Cartan charges of $\ell_3, \ell_{4}, \ell_{5}, \ell_{6}, \ell_{7}$ are the same as the ones of phase I in the first example \eqref{eq:weight1}, the resolutions of the chains $A_4 \rightarrow D_5 \rightarrow E_6, D_6$ and $A_4 \rightarrow A_5 \rightarrow E_6, D_6$ are essentially the same except for the singlet term $e_{\Sigma}$. However, we have the additional chains $A_4 \rightarrow A_5 \rightarrow A_6$ and $A_4 \rightarrow A_5^{\prime} \rightarrow A_6$. The $A_6$ singularity enhancement appears at a point where the two $A_5$ and $A_5^{\prime}$ singularity enhancement loci meet.
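The identifications \eqref{eq:gen_mori4} can be checked directly against the Cartan charges in \eqref{eq:mori4}: with the simple-root assignment for $B_{1}, \ldots, B_{4}$ given above, the intersection of $B_{i}$ with a curve must equal the inner product of the assigned root with the corresponding weight. A minimal sketch of this check, added here for illustration (the Mori charges are copied from the rows $D_{9}, \ldots, D_{12}$ of \eqref{eq:mori4}):
\begin{verbatim}
import numpy as np

e = np.eye(5)                        # e_1 .. e_5
eS = e.sum(axis=0) / 2               # e_Sigma

# Simple roots of SO(10) assigned to B_1..B_4 (see text):
roots = [e[0]-e[1], e[3]-e[4], e[1]-e[2], e[2]-e[3]]

# Weight identifications of eq. (gen_mori4):
weights = {'l4': e[2]+e[3]-eS, 'l5': -e[1]-e[3]+eS, 'l6': -e[0]+e[1],
           'l7': -e[2]+eS,     'l8': -eS}

# B_1..B_4 charges read off from rows D9..D12 of eq. (mori4):
mori = {'l4': [0, 1, -1, 0], 'l5': [1, -1, -1, 1], 'l6': [-2, 0, 1, 0],
        'l7': [0, 0, 1, -1], 'l8': [0, 0, 0, 0]}

for name, wt in weights.items():
    assert [int(round(wt @ a)) for a in roots] == mori[name], name
print("Cartan charges of l4..l8 agree with eq. (mori4)")
\end{verbatim}
The charge under the additional $U(1)$ involves $X$ through \eqref{eq:U(1)} and can be checked in the same way.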
Let us focus on the resolution along these chains. From the generators of the relative Mori cone, we have identified the two ${\bf \bar{5}}$ matter fields with $\ell_7$ and $\ell_7+\ell_8$. Therefore, the decompositions of the negative simple root along the two $A_5$ and $A_5^{\prime}$ singularity enhancement loci are \begin{eqnarray} -e_3 + e_4 &=& \big(-e_3 + e_{\Sigma}\big) + \big(e_4 - e_{\Sigma}\big),\label{eq:SU(6)_1}\\ -e_3+e_4 &=& (-e_3) + (e_4). \label{eq:SU(6)_2} \end{eqnarray} Indeed, one can reconstruct the weights $e_4 - e_{\Sigma}$ and $e_4$ from the generators of the relative Mori cone, \begin{eqnarray} e_4 - e_{\Sigma} &\cong& \ell_4 + \ell_7 + \ell_8,\\ e_4 &\cong& \ell_4 + \ell_7\ . \end{eqnarray} Hence, both weights correspond to effective curves in the relative Mori cone. At the $A_6$ singularity enhancement points a further degeneration occurs, in accord with the $SU(7)$ algebra. The ${\bf 5}$ and ${\bf \bar{5}}$ weights arising from different $SO(10)$ representations are located at the $A_6$ points. In the decomposition \eqref{eq:SU(6)_1}, $-e_3 + e_{\Sigma}$ cannot decompose further into smaller pieces since it is already a generator of the relative Mori cone. However, $e_4 - e_{\Sigma}$ can decompose as \begin{equation} e_4 - e_{\Sigma} = (e_4) + \big(- e_{\Sigma} \big)\ . \label{eq:SU(7)_1} \end{equation} The consistent degeneration requires that \eqref{eq:SU(6)_2} decomposes as \begin{equation} -e_3 = (-e_3 + e_{\Sigma}) + \big(-e_{\Sigma} \big)\ , \label{eq:SU(7)_2} \end{equation} while $e_4$ remains unchanged. One can check that the decompositions \eqref{eq:SU(7)_1} and \eqref{eq:SU(7)_2} also obey the $SU(7)$ algebra. To summarize, we have the curves corresponding to the weights \begin{equation} -e_1+e_2,\qquad -e_2+e_3, \qquad -e_3+e_{\Sigma},\qquad -e_{\Sigma}, \qquad e_4, \qquad -e_4+e_5 \end{equation} at the $A_6$ singularity enhancement points. The degeneration chain is depicted in Figure \ref{fig:Dynkin_A7-1}. \begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[width=100mm]{Dynkin_A7-1.eps} \\ \end{tabular} \caption{The chain of the Dynkin diagrams for $A_4 \rightarrow A_5 \rightarrow A_6$ and $A_4 \rightarrow A_5^{\prime} \rightarrow A_6$. $e_\Sigma$ denotes the singlet weight $\frac{1}{2}(e_1+e_2+e_3+e_4+e_5)$. The numbers in the nodes denote the multiplicities.} \label{fig:Dynkin_A7-1} \end{center} \end{figure} \subsubsection{$G_4$-flux and chirality} In the $U(1)_X$ restricted model, we turn on a $G_4$-flux of the following form\footnote{In general, one can also include an additional flux $G_4 = m^{\Sigma \Lambda} \omega_{\Sigma} \wedge \omega_\Lambda$; we have checked that \eqref{eq:G-condition} restricts this Ansatz to a one-parameter family. In order to keep the analysis simple, we will not include this flux in the following.} \begin{equation} G_4 = F_X \wedge \omega_{\tilde{X}}\, , \label{eq:G4_res} \end{equation} where $F_X = n^\alpha \omega_\alpha$, and $\omega_{\tilde{X}}$ is the Poincar\'e dual two-form to a linear combination of the exceptional divisors $B_{\Lambda},\;(\Lambda=1,\cdots, 5)$. This means that we turn on a gauge flux in the direction of $\omega_{\tilde{X}}$. Since we preserve the $SU(5)_{GUT}$ symmetry for all $n^\alpha$, the condition \eqref{eq:G-condition}, given by $\Theta_{i\beta}=0$, reduces to \begin{equation} \int_{\tilde{X}_4} \omega_\alpha \wedge \omega_{\tilde{X}} \wedge \omega_{i} \wedge \omega_{\beta} = 0.
\label{eq:orthogonality2} \end{equation} Then, $\omega_{\tilde{X}}$ can be determined up to an overall constant. The Poincar\'e dual of $\omega_{\tilde{X}}$ is \begin{equation} B_{\tilde{X}} = \alpha(-2 B_1 -3 B_2 -4 B_3 -6 B_4 -5 B_{5}). \label{eq:U(1)_X} \end{equation} We later fix the overall constant $\alpha$ by requiring that the $U(1)$ direction $\omega_{\tilde{X}}$ matches the $U(1)_X$ considered in the $(4+1)$ split spectral cover model. Furthermore, the $G_4$-flux \eqref{eq:G4_res} should satisfy the conditions \eqref{eq:G-condition}. These can be satisfied when $F_X$ is an element of $H^{1,1}(\mathcal{B})$ because of \eqref{vanish_intersect2}. Hence the Poincar\'e dual of $F_X$ can be chosen as \begin{equation} F_X = a H + b S. \end{equation} The first constraint of \eqref{eq:G-condition} is also satisfied due to the requirement \eqref{eq:orthogonality2}. Let us determine the charges of the matter fields under the $U(1)_X$ defined by \eqref{eq:U(1)_X}. The Cartan charges can be computed from \eqref{U(1)charges}. We have seen in the previous subsection that the curve corresponding to the $\overline{{\bf 10}}$ matter field is $\ell_5$ and the curves corresponding to the two $\bar{{\bf 5}}$ matter fields are $\ell_7$ and $\ell_7 + \ell_8$. One can determine their curve classes from the intersection numbers \eqref{eq:mori4}. Note that the curve classes in the Calabi--Yau fourfold can be expressed as triple intersections of divisors. When $\ell_i$ has a negative intersection number with $B_i$, $B_i$ can be chosen as a component in the triple intersection. Hence, we make the Ans\"atze \begin{align} &\ell_5 = a B_2 \cdot B_3 \cdot (\mu H + \nu S)\; ,& \quad &\ell_7=(\sum_A a^A D_A) \cdot B_4 \cdot (\mu H + \nu S)\;,&\\ & \ell_7+\ell_8=(\sum_A b^A D_A) \cdot B_4 \cdot (\mu H + \nu S)\;,& \quad & \ell_8 = (\sum_A c^A D_A) \cdot X \cdot (\mu H + \nu S)\;, & \nonumber \end{align} where $\sum_A a^A D_A$ etc.~are general linear combinations of the divisors in \eqref{eq:toric4}. The parameters $a$, $a^A$, $b^A$ and $c^A$ are determined from the intersection numbers \eqref{eq:mori4}. Then, the curve classes for $\ell_5, \ell_7, \ell_7+\ell_8$ and $\ell_8$ are \begin{eqnarray}\label{eq:curve10_res} \overline{{\bf 10}}&:&\quad \ell_5 \cong -e_2-e_4 + e_{\Sigma} \quad \rightarrow \quad \tfrac{1}{3\mu - 2 \nu}B_2\cdot B_3 \cdot (\mu H + \nu S),\\ \bar{{\bf 5}}&:&\quad \ell_7 \cong -e_3 + e_{\Sigma} \quad \rightarrow \quad \tfrac{1}{(\mu -2\nu)(7\mu - 2\nu)}\Big(2\mu B_2 + (\mu+2\nu)B_3 \nonumber \\ &&\hspace{4 cm}+ (\mu + 2\nu)B_4 - (\mu -2\nu) B_5 \Big) \cdot B_4 \cdot (\mu H + \nu S), \label{eq:curve5bar1_res}\\ \bar{{\bf 5}}&:&\quad \ell_7+ \ell_8 \cong - e_3 \quad \rightarrow \quad \tfrac{1}{4(\mu-2 \nu)(3\mu -\nu)} (2(3\mu-2\nu) B_2+ (5\mu - 2\nu) B_3 \nonumber \\ &&\hspace{4 cm}+ 2(3\mu -2\nu) B_4 + (\mu -2\nu) B_5) \cdot B_4 \cdot (\mu H + \nu S), \label{eq:curve5bar2_res}\\ {\bf 1}&:&\quad \ell_8 \cong -e_{\Sigma} \quad \rightarrow \quad \tfrac{1}{63\mu + 94\nu}(6B_4 + 8H + B_5 ) \cdot X \cdot (\mu H + \nu S),\label{eq:curvesinglet_res} \end{eqnarray} where we choose special representatives among the solutions just for simplicity. From the explicit forms \eqref{eq:curve10_res}--\eqref{eq:curvesinglet_res}, one can determine the charge under $U(1)_X$ for each weight, \begin{equation} \ell_5 \rightarrow \overline{{\bf 10}}_{-\alpha}\, ,\qquad \ell_7 \rightarrow \bar{{\bf 5}}_{-3\alpha}\, , \qquad \ell_7+ \ell_8 \rightarrow \bar{{\bf 5}}_{2\alpha}\, , \qquad \ell_8 \rightarrow {\bf 1}_{5\alpha}.
\end{equation} For the comparison to the $(4+1)$ split spectral cover model, we choose the direction of the $U(1)$ from $\omega_{\tilde{X}}$ to be the $U(1)_X$ of the $(4+1)$ split spectral cover model. Hence, we take $\alpha=1$ hereafter. Note that we have determined the matter representation curves directly from the generators of the extended relative Mori cone. This approach is different from the one discussed in \cite{Krause:2011xj}. In \cite{Krause:2011xj}, the matter representation curves are determined from the direct computation of the degeneration of the Tate form, as done in appendix \ref{sec:direct_comp}, by exploiting the Stanley-Reisner ideal. The remaining ingredients necessary for the computation of the chirality \eqref{eq:chirality2} are the matter surfaces $S_{{\bf 10}}$ and $S_{\bar{{\bf 5}}}$. They can be determined from the curves in the extended relative Mori cone, which were already obtained in \eqref{eq:curve10_res}--\eqref{eq:curve5bar2_res}. Then, we pull out $\mu H + \nu S$ with the correct multiplicity. The correct multiplicity can be computed from the intersection between $\mu H + \nu S$ and the matter curves $\Sigma_{{\bf 10}}$, $\Sigma_{\bar{{\bf 5}}}$. The condition for the $\Sigma_{{\bf 10}}$ matter curve is the same as \eqref{eq:pullout1}. However, the $\Sigma_{\bar{{\bf 5}}}$ curve splits into the two components $a_{3,2}=w=0$ and $a_1a_{4,3}-a_{2,1}a_{3,2}= w=0$, where $w=0$ defines the surface $S_{\rm b}$ and the definitions of the $a_{i,j}$ can be found in \eqref{eq:Tate}, \eqref{eq:SU(5)}. The matter fields in the $\bar{{\bf 5}}_{-3}$ and $\bar{{\bf 5}}_{2}$ representations are localized along $a_{3,2}=w=0$ and $a_1a_{4,3}-a_{2,1}a_{3,2}= w=0$, respectively. The matter fields in the singlet ${\bf 1}_{5}$ are localized along the curve $a_{3,2}=a_{4,3}=0$. Hence, the condition \eqref{eq:multiplicity} becomes \begin{eqnarray} \mathcal{B} \cdot S \cdot c_{1}(\mathcal{B}) \cdot (\mu H + \nu S)&=& 3\mu - 2\nu,\\ \mathcal{B} \cdot S \cdot (3c_{1}(\mathcal{B}) - 2c_{1}(N_{S|\mathcal{B}})) \cdot (\mu H + \nu S) &=& 7\mu -2\nu,\\ \mathcal{B} \cdot S \cdot (5c_1(\mathcal{B}) -3 c_1(N_{S|\mathcal{B}})) \cdot (\mu H + \nu S) &=& 12\mu - 4\nu,\\ \mathcal{B} \cdot (3c_1(\mathcal{B}) -2c_1(N_{S|\mathcal{B}})) \cdot (4c_1(\mathcal{B}) - 3c_1(N_{S|\mathcal{B}})) \cdot (\mu H + \nu S) &=& 63\mu + 94\nu. \end{eqnarray} Therefore, the matter surfaces are \begin{eqnarray} S_{\overline{{\bf 10}}_{-1}} &=& B_2 \cdot B_3,\label{eq:matter10_res}\\ S_{\bar {{\bf 5}}_{-3}} &=& \tfrac{1}{\mu -2\nu} \big(2\mu B_2 + (\mu+2\nu)B_3+ (\mu + 2\nu)B_4 - (\mu -2\nu) B_5 \big) \cdot B_4,\label{eq:matte5bar1_res}\\ S_{\bar {{\bf 5}}_{2}} &=& \tfrac{1}{\mu-2 \nu} \big(2(3\mu-2\nu) B_2+ (5\mu - 2\nu) B_3+ 2(3\mu -2\nu) B_4 + (\mu -2\nu) B_5\big) \cdot B_4, \label{eq:matter5bar2_res}\\ S_{{\bf 1}_{5}} &=& (6B_4 + 8H + B_5) \cdot X. \label{eq:mattersinglet_res} \end{eqnarray} With the $G_4$ flux \eqref{eq:G4_res} and the matter surfaces \eqref{eq:matter10_res}--\eqref{eq:mattersinglet_res}, we compute the chirality of the matter fields in each representation by using \eqref{eq:chirality2}.
The intersections between the matter surfaces \eqref{eq:matter10_res}--\eqref{eq:mattersinglet_res} and the $G_4$ flux \eqref{eq:G4_res} yield the numbers \begin{eqnarray} \chi(\overline{{\bf 10}}_{-1}) &=& \int_{S_{\overline{{\bf 10}}_{-1}}} \hspace*{-.3cm} G_4 = - 3a + 2b\,, \qquad \quad \chi(\bar{{\bf 5}}_{-3}) = \int_{S_{\bar{{\bf 5}}_{-3}}} \hspace*{-.3cm} G_4 = -21a+6b\, , \nonumber \\ \chi(\bar{{\bf 5}}_{2}) &=& \int_{S_{\bar{{\bf 5}}_{2}} } \hspace*{-.1cm} G_4 = 24a -8b\,,\qquad \qquad \chi({\bf 1}_{5}) = \int_{S_{{\bf 1}_{5}}} \hspace*{-.1cm} G_4 = 315a+470b.\label{eq:chi_ex2} \end{eqnarray} Note that, consistent with anomaly cancellation in the four-dimensional gauge theory, one has $\chi({\bf 10}_{1}) = \chi(\bar{{\bf 5}}_{-3})+\chi(\bar{{\bf 5}}_{2})$; indeed, $3a-2b = (-21a+6b)+(24a-8b)$. It is important to stress that we did not address the quantization of the real scalars $a,b$ defining the $G_4$ flux \eqref{eq:G4_res}. In order to do that one has to evaluate the condition \eqref{quantization}. It is a trivial task to compute $c_2(\tilde X_4)$ for a torically realized hypersurface. The complication lies in the question of whether the basis elements chosen in \eqref{eq:G4_res} are actually part of a minimal integral basis of $H^{4}_{\rm V}(\tilde X_4,\mathbb{Z})$. Furthermore, \eqref{quantization} can imply that we have to switch on another component of $G_4$ to compensate for the half-integrality of $c_2(\tilde X_4)/2$. \subsubsection{Relation to three-dimensional Chern-Simons term} The chiralities \eqref{eq:chi_ex2} can also be obtained from the three-dimensional Chern-Simons term by using \eqref{eq:chirality3d}. Since the $SU(5)$ Cartan charges of the generators of the relative Mori cone for the $U(1)_X$ restricted model are the same as the ones in the first example, the effectiveness of the curves corresponding to the ${\bf 10}$ weights and the ${\bf 5}$ weights is essentially the same. However, these weights actually descend from the ${\bf 16}$ or ${\bf 10}$ representations of $SO(10)$, and the Cartan charge for the $U(1)$ obtained from $B_5$ is affected by the singlet term $e_{\Sigma}$. Hence, we also have to take the singlet term into account when evaluating the formula \eqref{eq:chirality3d}. Since the ${\bf 10}$ weights of $SU(5)$ come from the ${\bf 16}^{\prime}$ weights of $SO(10)$, the weight $e_i + e_j$ is in fact the weight $e_i+e_j -e_{\Sigma}$. For the ${\bf \bar{5}_{-3}}$ weights coming from the ${\bf 16}^{\prime}$, the shift for the weight $-e_i$ is $-e_i+e_{\Sigma}$. The ${\bf \bar{5}_{2}}$ weight coming from the ${\bf 10}$ weight of $SO(10)$ remains unchanged. This can also be seen explicitly by constructing the effective curves from the generators of the relative Mori cone.
Then, all the $U(1)$ charges for $B_\Lambda,\;(\Lambda=1,\cdots, 5)$ can be obtained from the weights of $SO(10)$, and the formula \eqref{eq:chirality3d} becomes \begin{align} \label{eq:theta-ex2} &\Theta_{23} = -\chi({\bf 10}), & \qquad & \Theta_{24} = \tfrac{1}{2} \chi({\bf 10}) + \tfrac{1}{2}(\chi(\bar{{\bf 5}}_{-3})+ \chi(\bar{{\bf 5}}_{2})), &\\ &\Theta_{33} = \chi(\bar{{\bf 5}}_{-3})+ \chi(\bar{{\bf 5}}_{2}),& &\Theta_{44} = -\chi({\bf 10}), & \nonumber \\ &\Theta_{45}=\tfrac{1}{2}\chi({\bf 10}) - \tfrac{1}{2}\chi(\bar{{\bf 5}}_{-3}) + \tfrac{1}{2}\chi(\bar{{\bf 5}}_{2}), & & \Theta_{55}= - \chi({\bf 10}) + \tfrac{3}{2} \chi(\bar{{\bf 5}}_{-3}) - \chi(\bar{{\bf 5}}_{2}) + \tfrac12 \chi({\bf 1}_{5}) , & \nonumber \\ &\Theta_{13} = \tfrac{1}{2}\chi({\bf 10}) - \tfrac{1}{2}(\chi(\bar{{\bf 5}}_{-3}) + \chi(\bar{{\bf 5}}_{2})),& &\Theta_{34} = \tfrac{1}{2}\chi({\bf 10}) - \tfrac{1}{2}(\chi(\bar{{\bf 5}}_{-3}) + \chi(\bar{{\bf 5}}_{2})), \nonumber \\ &\Theta_{11} = -\chi({\bf 10}) + \chi(\bar{{\bf 5}}_{-3}) + \chi(\bar{{\bf 5}}_{2}),&& \Theta_{22} = \chi({\bf 10}) - (\chi(\bar{{\bf 5}}_{-3}) + \chi(\bar{{\bf 5}}_{2})),& \nonumber \end{align} and the other components are zero. From the explicit intersection numbers with the $G_4$-flux \eqref{eq:G4_res}, the $\Theta_{\Lambda\Sigma}$ are computed to be \begin{eqnarray} \label{ev_theta} \Theta_{23} &=& -3a+2b\, ,\qquad \Theta_{24}= 3a-2b\, , \qquad \Theta_{33}=3a-2b\, ,\\ \Theta_{44} &=& -3a+2b\, , \qquad \Theta_{45} = 24a-8b\, , \qquad \Theta_{55} = 99a+254b\, , \nonumber \end{eqnarray} with the other components vanishing. The chiralities \eqref{eq:chi_ex2} are precisely reproduced by comparing \eqref{eq:theta-ex2} with the explicit expressions \eqref{ev_theta}. \subsubsection{Comparison with split spectral cover} Having obtained the chiralities \eqref{eq:chi_ex2} from the formula \eqref{eq:chirality2}, we compare the results with the ones from the $(4+1)$ split spectral cover model. As the $U(1)_X \subset SU(5)_{\perp}$, we consider the generator $(1,1,1,1,-4) \in SU(5)_{\perp}$. Then, we have the matter fields localized on the GUT surface $S_{\rm b}$ \begin{equation} {\bf 10}_{1}, \;\; \bar{{\bf 5}}_{-3}, \;\; \bar{{\bf 5}}_{2}, \label{eq:matters} \end{equation} where the subscript denotes the charge under the $U(1)_X$. The $\bar{{\bf 5}}_{-3}$ representation matter fields are localized along $a_{3,2}=w=0$ and the $\bar{{\bf 5}}_{2}$ matter fields are localized along $a_1a_{4,3}-a_{2,1}a_{3,2}=w=0$. The chirality formulas for the ${\bf 10}$ and $\bar{{\bf 5}}$ matter are \cite{Marsano:2009gv,Blumenhagen:2009yv} \begin{eqnarray} \chi_{{\bf 10}} &=& (-\lambda \tilde{\eta} + \frac{1}{4}\zeta) \cdot (\tilde \eta - 4c_{1}(S_{\rm b})),\label{eq:chirality_res10}\\ \chi_{\bar{{\bf 5}}_{-3}} &=& \lambda(-\tilde{\eta}^{2}+6\tilde{\eta}\, c_{1}(S_{\rm b})-8c_{1}^{2}(S_{\rm b})) + \frac{1}{4}\zeta(-3\tilde{\eta}+6c_{1}(S_{\rm b})),\label{eq:chirality_res5bar1}\\ \chi_{\bar{{\bf 5}}_{2}} &=& \lambda(-2\tilde{\eta}\, c_1(S_{\rm b})+8c_{1}^{2}(S_{\rm b})) + \frac{1}{4}\zeta(4\tilde{\eta}-10c_{1}(S_{\rm b})),\label{eq:chirality_res5bar2} \end{eqnarray} where $\tilde{\eta} = \eta-c_{1}(S_{\rm b})$ and $\eta$ is related to the first Chern class of the normal bundle as in \eqref{eq:normal_eta}. $\zeta$ is the flux part on $S_{\rm b}$, namely $\zeta \in H^{2}(S_{\rm b},\mathbb{Z})$. In the current example \eqref{eq:toric4}, $\zeta$ can be chosen as \begin{equation} \frac{1}{4} \zeta = (a H + b S)|_{S_{\rm b}}.
\end{equation} Then, the chirality formulas \eqref{eq:chirality_res10}--\eqref{eq:chirality_res5bar2} evaluate to \begin{equation} \chi_{{\bf 10}} = 3a -2b -38\lambda, \qquad \chi_{\bar{{\bf 5}}_{-3}} = -21a+6b-22\lambda,\label{eq:res5bar1} \qquad \chi_{\bar{{\bf 5}}_{2}} = 24a-8b -16\lambda. \end{equation} For the comparison with the results \eqref{eq:chi_ex2}, note that we turn on the $G_4$ flux only in the direction of the $U(1)_X$, cf.~\eqref{eq:G4_res}. This corresponds to the case $\lambda=0$ in the $(4+1)$ split spectral cover model. Setting $\lambda=0$, \eqref{eq:res5bar1} exactly reproduces the chiralities \eqref{eq:chi_ex2} when we identify $\frac{1}{4}\zeta$ with $F_X$. \section{Conclusions} In this paper we discussed the determination of the net chiral matter spectrum of a four-dimensional F-theory compactification on a singular Calabi-Yau manifold $X_4$. We argued that the description of F-theory as a limit of M-theory allows one to extract these data on the resolved fourfold $\tilde X_4$ with $G_4$ flux. The resolution is physical in the effective three-dimensional theory obtained from M-theory on $\tilde X_4$, and corresponds to moving to the Coulomb branch of the gauge theory. Due to the $G_4$ fluxes the resulting theory contains Chern-Simons couplings, proportional to $\Theta_{\Lambda \Sigma} A^\Lambda \wedge F^\Sigma$, for the $U(1)$ vector fields $A^\Lambda$. In contrast, such couplings are not induced by a classical circle reduction of a general four-dimensional $\mathcal{N}=1$ theory which arises as the low energy limit of F-theory on $X_4$. However, upon reduction to three dimensions the charged matter becomes massive in the Coulomb branch of the gauge theory. This precisely corresponds to the resolution process taking $X_4$ to $\tilde X_4$. The Chern-Simons couplings are then induced by one-loop corrections with the massive charged matter running in the loop. Matching the Chern-Simons couplings of the fluxed M-theory reduction with the one-loop corrections in the F-theory reduction, we argued that the map between $G_4$ fluxes and net chiral matter can be inferred. The study of one-loop corrections in the three-dimensional Chern-Simons theory requires the knowledge of the $U(1)$ charges, as well as some positivity properties of the scalars in the three-dimensional vector multiplets. Geometrically this corresponds to the fact that curves associated to the singularity resolution can have positive or formally negative volume. This led us to introduce the relative Mori cone, which contains all effective curves of $\tilde X_4$ that shrink to points in $X_4$. A detailed map between these curves and the weights of different representations of the matter fields allowed a deeper understanding of the resolution process at co-dimensions two and three, where matter and Yukawa couplings are localized. With these data at hand the one-loop Chern-Simons couplings can be evaluated and matched with the $G_4$ flux result $\Theta_{\Lambda \Sigma}$. The expressions manifestly depend on the number of charged fermions and led to a computation of the chiral index $\chi({\bf R})$. While we have shown this quite generally for single non-Abelian gauge groups, it would be interesting to find a more group-theoretic argument that the index $\chi({\bf R})$ can always be extracted. An analysis of the extended relative Mori cone using the $\tilde X_4$-intersection numbers resulted in a detailed map between weights and resolution curves.
We then proposed a formalism to determine the matter surfaces $S_{\bf R}$, which extract the chiral index directly from viable $G_4$-fluxes via $\chi({\bf R}) = \int_{S_{\bf R}} G_4$. We have shown that $S_{\bf R}$ can be constructed for a chosen weight of the representation ${\bf R}$. Only the integral $\chi({\bf R})$ is independent of the weight and of the topological phase of the resolution. From the Chern-Simons analysis one realizes that $\chi({\bf R})$ can be written as $\chi({\bf R}) = t^{\Lambda \Sigma}_{\bf R} \Theta_{\Lambda \Sigma}$, where $t^{\Lambda \Sigma}_{\bf R}$ is either determined from the matter surface, or from the charges and positivity properties of the curve classes. It would be nice to work out more details of the map from the resolution geometry to~$t^{\Lambda \Sigma}_{\bf R}$. In the last part of the paper we have evaluated the net chirality for two specific examples with $SU(5)$ and $SU(5)\times U(1)_X$ gauge group. We have found that in the first example there is a single parameter encoding $G_4$ flux which preserves four-dimensional Poincar\'e invariance and does not break the $SU(5)$ gauge symmetry. This is consistent with a spectral cover construction. However, our construction does not depend on the existence of a globally valid spectral cover description. In the second example we only included the $U(1)$-flux gauging the $U(1)_X$, and determined the induced net chirality. The result was successfully matched with the split spectral cover construction for states localized on the $SU(5)$ brane. The number of singlets localized away from the $SU(5)$ brane can equally be determined using our construction. Both the Chern-Simons analysis as well as the explicit construction of the matter surfaces led to matching answers. It would be interesting to generalize the flux in this configuration. It can indeed be checked that there exists a one-parameter family of $G_4$ fluxes which do break $SU(5)$ and induce new chiral fields. Matching with a global split spectral cover is hard in this case, since this universal flux is not entirely localized on the $SU(5)$ brane, and our model has no heterotic dual. So while it is straightforward to evaluate the chirality using our formalism, there are not many results with which we can compare the answer. One other issue, which we addressed only briefly, is the determination of the correct quantization conditions on the parameters determining the flux $G_4$. It is straightforward to compute the second Chern class for our examples, which is required to evaluate \eqref{quantization}. The complication lies in the determination of a minimal integral basis of $H^4(\tilde X_4,\mathbb{Z})$. It would be interesting to do this for both the $SU(5)$ and the $SU(5) \times U(1)_X$ model, e.g.~by using the explicit hypersurface equation and resolution. One also expects an interpretation of these quantization conditions in the three-dimensional Chern-Simons theory. An interesting extension of this work would be the systematic study of more complicated examples with varying gauge groups and representations. This includes cases with multiple non-Abelian factors, to which our formalism has to be extended. Furthermore, already the exceptional gauge group cases might yield some interesting new properties which can be studied for a given $\tilde X_4$. Much of our formalism can be algorithmically implemented in a computer search. One of the main challenges will remain the systematic implementation of the quantization conditions.
\vspace*{1.2cm} \noindent {\bf Acknowledgments}: We would like to thank Federico Bonetti, I\~naki Garc\'ia-Etxebarria, Kenji Hashimoto, Denis Klevers, Albrecht Klemm, Seung-Joo Lee, Noppadol Mekareeya, Raffaele Savelli, Gary Shiu, Wati Taylor, and Timo Weigand for discussions. HH would like to thank the Hong Kong Institute for Advanced Study at HKUST and the Max-Planck-Institut f\"ur Physik for hospitality and financial support during part of this work. The work of TG~was supported by a research grant of the Max Planck Society. The work of HH was supported in part by JSPS Research Fellowships for Young Scientists. \newpage
\section*{Summary} Observations of infrared and optical light curves of hot Jupiters have demonstrated that the peak brightness is generally offset eastward from the substellar point [1,2]. This observation is consistent with hydrodynamic numerical simulations that produce fast, eastward directed winds which advect the hottest point in the atmosphere eastward of the substellar point [3,4]. However, recent continuous Kepler measurements of HAT-P-7 b show that its peak brightness offset varies significantly in time, with excursions such that the brightest point is sometimes westward of the substellar point [5]. These variations in brightness offset require wind variability, with or without the presence of clouds. While such wind variability has not been seen in hydrodynamic simulations of hot Jupiter atmospheres, it has been seen in magnetohydrodynamic (MHD) simulations [6]. Here we show that MHD simulations of HAT-P-7 b indeed display variable winds and corresponding variability in the position of the hottest point in the atmosphere. Assuming the observed variability in HAT-P-7 b is due to magnetism, we constrain its minimum magnetic field strength to be 6\,G. Similar observations of wind variability on hot giant exoplanets, or lack thereof, could help constrain their magnetic field strengths. Since dynamo simulations of these planets do not exist and theoretical scaling relations [7] may not apply, such observational constraints could prove immensely useful. \begin{multicols}{2} \section*{Main Text} To demonstrate magnetic effects on the winds of HAT-P-7 b, we simulate the atmosphere of a hot giant exoplanet with parameters similar to HAT-P-7 b using a spherical, three-dimensional (3D), anelastic MHD code [8,6]. We start with a hydrodynamic simulation that matches HD209458 b in terms of gravity, radius and rotation, but has the mean temperature (2200K) and day-night temperature differential (1000K) of HAT-P-7 b (temperature and magnetic diffusivity profiles are shown in Supplementary Figure~1). The large day-night temperature differential drives strong eastward atmospheric winds, consistent with previous simulations [9,10]. This simulation is run for $\sim$100 rotation periods before a magnetic field is added, after which both the hydrodynamic and MHD simulations are run for an additional $\sim$280 rotation periods. Details of the numerical code and simulation can be found in the Methods section. The extreme temperatures of HAT-P-7 b give rise to significant thermal ionization of alkali metals [11,12], which leads to coupling of the atmosphere to the deep-seated magnetic field [13] and could also lead to an atmospheric dynamo [14]. The Lorentz force arising from this magnetic interaction disrupts the strong eastward directed atmospheric winds typically seen in hydrodynamic simulations, leading to variable and even oppositely directed winds [6]. Figure~1 shows a time snapshot of magnetic field lines in the simulation looking onto the east-side terminator (a corresponding video of its complex evolution and variability is available in the Supplementary Material). The zonal-mean zonal wind, averaged within 17$^{\circ}$ of the equator and over the upper 1 mbar of the simulated domain, is shown as a function of time in Figure~2, along with the position of the hottest point in the atmosphere (also determined by an average over the same latitudes and height).
There we see that the hydrodynamic model retains a strong, eastward jet and associated positive hot-spot displacement throughout the simulation (dotted line in Figures 2a and 2b). When a magnetic field is added, the zonal winds slow dramatically, reverse and then settle into an oscillatory pattern with a timescale of $\sim$$10^6$\,s, which is consistent with the Alfven time ($\tau_A=\sqrt{4\pi\rho}\lambda/B$) of the imposed 10\,G field and is of the same order as the timescale of variability observed in HAT-P-7 b [5]. Variability in the hot-spot displacement, including negative offsets, is seen on a similar timescale. \end{multicols}\begin{figure}[h!]\centering \includegraphics[width=0.9\textwidth]{test10-100} \caption{Magnetic field lines in the atmosphere of a hot giant exoplanet. Time snapshot of magnetic field lines in the numerical simulation of a hot Jupiter atmosphere (a model of HD209458 b but with a temperature structure similar to HAT-P-7 b). Magnetic field lines are color-coded to represent the azimuthal (toroidal) magnetic field, with blue representing negative directed field (saturated at -50\,G) and magenta positive (saturated at 50\,G), and with green and yellow ranging from -5\,G to 5\,G, respectively. The vantage point is looking onto the east-side terminator.} \end{figure}\begin{multicols}{2} \end{multicols}\begin{figure}[h!]\centering \includegraphics[width=0.9\textwidth]{vp+hotspotdisp} \caption{Atmospheric dynamics of the simulated hot giant exoplanet. (a) Zonal-mean zonal wind in the hot Jupiter atmosphere averaged over 17$^{\circ}$ around the equator and over the upper 1 mbar of the simulated domain. The dotted line shows the winds in the hydrodynamic model while the solid line shows the winds in the MHD model. (b) Displacement of the hottest point of the atmosphere from the substellar point, at the same location and averaged as in (a). Similar to the mean winds, the hot-spot displacement in the MHD model (solid line) shows strong variability, with excursions to points west of the substellar point. Hydrodynamic models show a stable, positive offset.} \end{figure}\begin{multicols}{2} Both the hydrodynamic and MHD models have more positive hot-spot displacements than the observations. This is expected given that the waves that force super-rotation can propagate further in HD209458 b than in HAT-P-7 b before being damped [15]. Therefore, we expect that a hydrodynamic model with HAT-P-7 b's gravity and rotation rate would show reduced hot-spot displacements compared to HD209458 b, and we indeed find this (see Figure~3). While this magnetic model has some uncertainties (enhanced viscosity, crude radiative transfer), it naturally explains the bright-spot excursions as due to changes in the thermal structure of the planet caused by variable winds. In this model clouds may not be necessary, as HAT-P-7 b is hot enough that even the optical signal could be dominated by thermal emission. Moreover, this model may also explain the timescale of the observed fluctuations as due to Alfven waves. At the very least, it can provide the wind variability needed for models requiring clouds [5]. The effect of magnetism on zonal winds depends on the ratio of the magnetic to inertial terms in the momentum equation, which can be approximated as the ratio of magnetic to wave timescales $\tau_{mag}/\tau_{wave}$, where $\tau_{mag}=4\pi\rho\eta/B^2$ and $\tau_{wave}=L/\sqrt{gH}$.
Here $\rho$ is the density, $\eta$ is the magnetic diffusivity, $B$ is the magnetic field strength, $g$ is the gravity, $L$ is the characteristic length scale of the horizontal flow and $H$ is the depth of the atmosphere [11,15]. As magnetic effects are increased, either through increased magnetic field strength or increased conductivity, their effect on atmospheric zonal winds progresses from little or no effect (when $\tau_{mag} > \tau_{wave}$), to oscillatory winds (when $\tau_{mag} \sim \tau_{wave}$), to completely reversed (westward) winds (when $\tau_{mag} < \tau_{wave}$) [6]. Assuming the variable winds observed on HAT-P-7 b are due to magnetism and applying the oscillatory wind condition, we find $B\sim \sqrt{4\pi\eta\rho/\tau_{wave}}$. Using the nightside value of $\eta$, we find that HAT-P-7 b must have a minimum field strength of $\sim$6\,G. This value is consistent with the theoretical scaling relation based on the Elsasser number [16] ($\Lambda= B^{2}/(2\rho\Omega\mu_0 \eta) \sim 1$) and with the upper limit placed on WASP-12 b [17], if we were to assume it had a similar field strength. To check this constraint, we ran additional models of HAT-P-7 b, with the appropriate rotation, gravity, size and temperature [18]. The temperature and magnetic diffusivity profiles for this model can be seen in Supplementary Figure 2. After running a hydrodynamic model for $\sim$140 rotation periods, a magnetic field was added and the simulation was run for an additional 15 rotation periods. We show the hot-spot displacement for those models in Figure 3. We see that the hydrodynamic model (black line) has a steady hot-spot displacement of 2.8$^{\circ}$. The MHD model with a 3\,G field (red line) shows a similar, stable hot-spot displacement. However, both the 10\,G (blue line) and 20\,G (orange line) models show wind variability, with displacements ranging from $\sim$$-15^{\circ}$ to $20^{\circ}$. This range of displacement is more consistent with the observed brightness variations (which range from $\sim$$-25^{\circ}$ to $25^{\circ}$). However, clouds could also play a role in enhancing the large displacements observed by Kepler [5]. In our simulations wind variability sets in between 3 and 10\,G, consistent with the 6\,G lower limit based purely on a simple timescale analysis. If we had used the dayside magnetic diffusivity in the estimate, the lower limit would have been $\sim$0.6\,G, inconsistent with our follow-up models, which show no variability at 3\,G. This estimate depends only on the winds being variable and is independent of whether or not clouds are needed to explain the exact range of variability seen. Although these models are consistent with the 6\,G lower limit, on this timescale we see no completely reversed winds and we therefore conclude that we can only place a \textit{lower} limit on the field strength. While it may be possible to hone this constraint with more simulations, it is likely not worthwhile given the other limitations of these simulations. The continuous observations of HAT-P-7 b [5] were unique in that previous optical and infrared observations have generally only provided this measurement at a single epoch. The exception is the multiple-epoch Spitzer observations of HD189733 b [19]. That work showed a fairly stable, positive offset. This lack of variability is consistent with little or no magnetic effects in HD189733 b, a plausible conclusion given that the low temperatures of HD189733 b would require unrealistic magnetic field strengths of $\sim$100--1000\,G to cause variability.
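The timescale criterion above is simple enough to encode directly. The following sketch is illustrative only: the input quantities are placeholders, since the simulated night-side $\rho$ and $\eta$ are not listed here, and the width of the oscillatory band is our own choice rather than a result of the simulations. It classifies the expected wind behaviour and evaluates the minimum field for variability:
\begin{verbatim}
import math

def tau_mag(B, rho, eta):
    """Magnetic timescale tau_mag = 4*pi*rho*eta/B^2 (cgs, B in gauss)."""
    return 4.0 * math.pi * rho * eta / B**2

def tau_wave(L, g, H):
    """Wave timescale tau_wave = L/sqrt(g*H) (cgs units)."""
    return L / math.sqrt(g * H)

def wind_regime(B, rho, eta, L, g, H, band=3.0):
    """Classify winds by tau_mag/tau_wave; 'band' sets an (illustrative)
    width for the oscillatory regime around tau_mag ~ tau_wave."""
    r = tau_mag(B, rho, eta) / tau_wave(L, g, H)
    if r > band:
        return "little or no magnetic effect (eastward jet)"
    if r < 1.0 / band:
        return "reversed (westward) winds"
    return "oscillatory winds"

def B_min(rho, eta, L, g, H):
    """Minimum field for variability, from tau_mag ~ tau_wave."""
    return math.sqrt(4.0 * math.pi * rho * eta / tau_wave(L, g, H))
\end{verbatim}
Fed with the simulation's night-side values of $\rho$ and $\eta$ (not reproduced here), \texttt{B\_min} gives the $\sim$6\,G quoted above; with the day-side $\eta$ it gives the $\sim$0.6\,G that the follow-up models rule out.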
In general, we would expect to see wind variability in objects where field-flow coupling is strong (as measured by the ratio of magnetic and wave timescales). Therefore, we predict that variability may also be found in other hot giant exoplanets, such as WASP-19 b or WASP-12 b. While long-baseline or multiple-epoch observations of hot Jupiters' phase curves have not been carried out for many objects, such a campaign coupled with MHD models of those planets' atmospheres could be used to place constraints on the magnetic field strengths of hot Jupiters. Such constraints are rare [20] and would be useful for dynamo theory, planetary evolution and interpretations of star-planet magnetic interactions [17]. As recently shown [5], these types of constraints are already possible with Kepler but will become more readily available with upcoming space missions such as JWST, CHEOPS, TESS and PLATO. In particular, JWST will be able to measure infrared phase curves directly, thus testing this theory without the complication clouds might add to optical curves. \end{multicols} \begin{figure}\centering \includegraphics[width=0.9\textwidth,trim=13mm 52mm 10mm 67mm,clip=true]{newhatp7b-hsdisp} \caption{Hot-spot displacement of simulated HAT-P-7 b. Hottest point in the atmosphere, calculated after a latitudinal average over 17$^{\circ}$ around the equator and over the upper 1\,mbar of the simulated domain, as a function of time. The black line shows the hydrodynamic model (barely visible under the other lines), the red line is for a 3\,G field, the blue line for a 10\,G field and the orange line for a 20\,G field. The dotted line shows the sub-stellar point. The inset shows the time behavior after the magnetic field is added.} \end{figure} \section*{Addendum} \subsection*{Acknowledgments} T.~R. thanks J.~N.~McElwaine and G.~Glatzmaier for helpful discussions leading to this manuscript and J.~Vriesema for help with the graphics. Funding for this work was provided by NASA grant NNX13AG80G and computing was carried out on Pleiades at NASA Ames. \subsection*{Author Contributions} T.R. carried out all work related to this manuscript. \subsection*{Requests and Correspondence} Correspondence and requests for materials should be addressed to T.~M.~Rogers\\ \href{mailto:[email protected]}{[email protected]} \section*{Methods} We solve the magnetohydrodynamic (MHD) equations in three-dimensional (3D), spherical geometry in the anelastic approximation\cite{rk14}. The model solves the following equations: \vspace{0.15cm} \begin{eqnarray} \div (\rhobar \vvec) &=& 0 \\ \div \Bvec &=& 0 \\ \rhobar {\partial \vvec \over \partial t} +\div(\rhobar \vvec \vvec) &=& - \grad p - \rho \gbar \rhat \\ \nonumber && + 2 \rhobar \vvec \times \omegavec + \div \left[2 \rhobar \nubar (E - {1 \over 3} ( \div \vvec ) \mathbf{1})\right] + {1 \over \mu_0} ( \curl \Bvec ) \times \Bvec \\ \lefteqn{\dxdy{T}{t}+(\mathbf{v}\cdot\nabla){T}=-v_{r}\left[\dxdy{\overline{T}}{r}-(\gamma-1)\overline{T}h_{\rho}\right]+(\gamma-1)Th_{\rho}v_{r}+} \\ \nonumber && \gamma\overline{\kappa}\left[\nabla^{2}T+(h_{\rho}+h_{\kappa})\dxdy{T}{r}\right] + \frac{T_{eq}-T}{\tau_{rad}}+\frac{\eta}{\mu_{0}\rho c_{p}}|\nabla \times \mathbf{B}|^{2} \end{eqnarray} Equation (1) represents the continuity equation in the anelastic approximation [21,22]. This approximation allows some level of compressibility by allowing variation of the reference state density, $\rhobar$, which varies in this model by four orders of magnitude. Equation (2) represents the conservation of magnetic flux.
Equation (3) represents conservation of momentum, including Coriolis and Lorentz forces. Equation (4) represents the energy equation, including a forcing term to mimic stellar insolation (fourth term on the right-hand side, where T$_{eq}$ is the equilibrium temperature) and Ohmic heating (fifth term on the right-hand side). The radiative timescale in the Newtonian forcing term, $\tau_{rad}$, is a function that varies between 10$^4$\,s at the outermost layers and 10$^6$\,s at the lowest layers. All other variables take their usual meaning [6]. The magnetic diffusivity $\eta$ (inverse conductivity) is a function of all space. We separate the magnetic diffusivity into a mean ($\etabar$) and fluctuating ($\eta'$) component: \begin{equation} \eta\left(r,\theta,\phi\right)=\etabar\left(r \right)+\eta'\left(r,\theta,\phi \right) \end{equation} where r, $\theta$ and $\phi$ are the radius, colatitude and longitude, respectively. The magnetic induction equation then becomes \begin{equation} {\partial \Bvec \over \partial t} = \curl ( \vvec \times \Bvec-\eta'\curl\Bvec ) - \curl (\etabar \curl \Bvec) \end{equation} Equation (6) is solved along with Equations (1)-(4). The magnetic diffusivity (5) is calculated from the initial temperature profile, given by: \begin{equation} T_{eq}\left(r,\theta,\phi\right)=\Tbar(r)+\Delta T_{eq}(r) \cos\theta\cos\phi \end{equation} where $\Tbar(r)$ is the mean reference state temperature and $\Delta T_{eq}$ is the specified day-night temperature difference, here set to 1000\,K and extrapolated logarithmically from the surface to 10 bar. Using this temperature profile, the magnetic diffusivity is calculated as [23]: \begin{equation} \eta\left(r,\theta,\phi\right)=230\frac{\sqrt{T}}{\chi_e} \end{equation} where $\chi_e$ is the ionization fraction. The ionization fraction is calculated at each point using a form of the Saha equation taking into account all elements from hydrogen to nickel and typical elemental abundances [24]. The model presented in Figures~1 and~2 is the model for HD209458 b [25] with 800\,K added at each vertical level and with an imposed day-night temperature variation of 1000\,K. The rotation rate, radius and gravity are all those of HD209458 b. The temperature and diffusivity profiles of this model are shown in Supplementary Figure 1. While this model is clearly not HAT-P-7 b, it has a temperature (and thus conductivity) structure similar to that expected for HAT-P-7 b. Since it is the temperature (conductivity) structure that dominates the MHD behavior of the atmosphere, this model is probably a faithful, albeit imperfect, representation of the atmospheric dynamics in HAT-P-7 b. The model has a 10\,G poloidal field imposed at the bottom boundary. The model presented in Figure 3 uses an atmosphere model for HAT-P-7 b [18]. The temperature and magnetic diffusivity profiles can be seen in Supplementary Figure 2. Here, dipole fields of 3\,G, 10\,G and 20\,G are imposed at the bottom boundary to mimic the deep-seated dynamo field. Both of the models presented have more complex dynamics than those found previously [26,6] because they include a magnetic diffusivity (conductivity) that is a function of all space. This leads to more complex field-flow interactions, particularly at both terminators, and even led to an atmospheric dynamo [14]. Although it was not included here, a time-dependent conductivity could further complicate matters, particularly with regard to the thermal structure of the atmosphere.
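To make the chain from temperature to diffusivity concrete, the sketch below evaluates Eq. (8) with the ionization fraction from a strongly simplified, single-species (potassium) Saha equation; the calculation in the text includes all elements from hydrogen to nickel, so the abundance, ionization potential, density and combined statistical-weight factor used here are illustrative assumptions only.

\begin{verbatim}
import numpy as np

# Physical constants (cgs)
k_B = 1.380649e-16     # erg/K
m_e = 9.1093837e-28    # g
h   = 6.62607015e-27   # erg s
eV  = 1.602176634e-12  # erg

def ionization_fraction(T, n_tot, chi_ion=4.34, abundance=1e-7):
    """chi_e from a single-species (potassium) Saha equation.

    chi_ion [eV] and 'abundance' (K per nucleus) are assumptions;
    the combined statistical-weight factor is set to 2 for simplicity.
    """
    n_K = abundance * n_tot
    rhs = 2.0 * (2.0 * np.pi * m_e * k_B * T / h**2) ** 1.5 \
          * np.exp(-chi_ion * eV / (k_B * T))
    r = rhs / n_K                              # Saha RHS over n_K
    x = 0.5 * (-r + np.sqrt(r**2 + 4.0 * r))   # fraction of K ionized
    return abundance * x                       # electrons per nucleus

T = np.linspace(800.0, 2600.0, 7)      # K, night side to day side
chi_e = ionization_fraction(T, n_tot=1e18)  # ~mbar-level density, assumed
eta = 230.0 * np.sqrt(T) / chi_e       # Eq. (8), cm^2/s
for Ti, ei in zip(T, eta):
    print(f"T = {Ti:6.0f} K   eta ~ {ei:9.2e} cm^2/s")
\end{verbatim}

The steep exponential in the Saha factor is what produces the large day-night diffusivity contrast, and hence makes the choice of night-side versus day-side $\eta$ in the field-strength estimate above so consequential.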
Currently, we see more Ohmic heating on the night side of the planet, which leads to a reduction of the day-night temperature gradient. Naively, if we allowed this to react back on the flow and conductivity, we would expect decreased wind driving and increased field-flow coupling. That is, we might expect wind variability at even lower magnetic field strengths. \paragraph{Data Availability Statement} The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request. \bibliographystyle{naturemag}
\section{Introduction} Powerful active galactic nuclei (hereafter AGNs) can drive fast outflows which affect the properties and kinematics of the ambient gas in the nuclear and circumnuclear regions of their host galaxies. By heating the gas, and/or expelling it from the central regions, they may affect the star formation history of the host galaxy bulge (see e.g., \citealt{fluetsch2019cold}; \citealt{oosterloo2017properties}; \citealt{2017A&A...601A.143F}; \citealt{morganti2017many} and references therein). Even though the mechanisms that can produce molecular outflows are still uncertain (\citealt{morganti2017many}), cold molecular gas is known to be the phase that dominates the central regions, and is found to be the most massive outflow component in AGN (see e.g., \citealt{kanekar2008outflowing}; \citealt{cicone2014massive}; \citealt{dasyra2011turbulent}; \citealt{dasyra2012cold}; \citealt{2015A&A...580A...1M}; \citealt{feruglio2010quasar}; \citealt{rupke2013multiphase}; \citealt{garcia2015high}; \citealt{carniani2015ionised}; \citealt{2017A&A...601A.143F}; \citealt{morganti2017many}; \citealt{fluetsch2019cold}), making cold molecular gas the main tracer of AGN outflows. Several works aim at understanding the origin of non-circular motions, outflow driving mechanisms and the impact of AGN feedback on the interstellar medium (ISM) by imaging and modelling the kinematics of cold molecular gas in the central regions of AGN host galaxies. Both types of non-circular motions (inflows and outflows) are detected in the nuclear and circumnuclear regions of nearby star-forming and AGN host galaxies. Using ALMA observations of dense molecular gas tracers (CO(3-2), CO(6-5), HCN(4-3), HCO$^{+}$(4-3), and CS(7-6)), \citet{garcia2014molecular} studied the fueling and the feedback of star formation and nuclear activity in the nearby Seyfert 2 nucleus NGC 1068. \\ The authors confirm the detection of molecular line and dust continuum emission from different regions in the source. They also indicate the presence of both an inward radial flow in the starburst ring and the bar region, and a massive outflow in the inner region (r $\sim$ 50 pc out to r $\sim$ 400 pc), with a mass outflow rate an order of magnitude higher than the star formation rate. They interpret the inward flow as the combined action of the bar and the spiral arms; based on the tight correlation between the ionised gas outflow, the radio jet, and the occurrence of outward motions in the disc, the authors suggest that the outflow is likely AGN driven.\\ With a better spatial resolution of $\sim$4 pc towards the same galaxy, \citet{2016ApJ...823L..12G} studied the dust emission and the distribution and kinematics of molecular gas using ALMA observations of the dust continuum at 432 $\mu$m and the CO(6-5) molecular line emission in its circumnuclear disc. They conclude that the overall slow rotation pattern of the disc is perturbed by strong non-circular motions and enhanced turbulence \citep[see also a similar work in][]{2019A&A...632A..61G}. \citet{2014A&A...565A..97C} report ALMA observations of CO(3-2) emission in the nuclear region of the Seyfert 1 galaxy NGC 1566. They find a conspicuous nuclear trailing spiral and weak non-circular motions at the periphery of the nuclear spiral arms.
Recently, by analysing the nuclear kinematics of the same galaxy NGC 1566 via ALMA observations of the CO(2-1) emission, \citet{slater2019} show the presence of significant non-circular motions in the innermost 200 pc and along the spiral arms in the central kpc. They find a molecular outflow in the disc with velocities of $\sim 180\, \rm{km\,s^{-1}}$ in the nucleus. Using ALMA observations of CO(3-2) emission around the nucleus of NGC 1433, \citet{2013A&A...558A.124C} also find an intense high-velocity CO emission feature redshifted to $200\, \rm{km\,s^{-1}}$, with a blue-shifted counterpart, at 2$^{''}$ (100 pc) from the centre, interpreting the wide component as an outflow partly driven by the central star formation, but mainly boosted by the AGN through its radio jets. Similarly, using ALMA observations of the infrared-luminous merger NGC 3256, \citet{Sakamoto_2014} detect high-velocity molecular outflows from the northern and southern nuclei with two different large velocities, $> 750\, \rm{km\,s^{-1}}$ and 1000--2000$\, \rm{km\,s^{-1}}$, respectively, interpreting the northern outflow as a starburst-driven superwind and the southern one as an outflow driven by a bipolar radio jet from an AGN. In their ALMA CO(2-1) line observations with angular resolutions of 0.11$^{''}$-0.26$^{''}$ (9-21 pc), \citet{2018ApJ...859..144A} find strong non-circular motions in the central (0.2$^{''}$-0.3$^{''}$) regions of the nearby Seyfert galaxy NGC 5643, with velocities of up to $110\, \rm{km\,s^{-1}}$, explaining the motions as radial outflows in the nuclear disc in the absence of a nuclear bar. \citet{2020A&A...643A.127D} present a detailed analysis of the kinematics and morphology of cold molecular gas in the nuclear/circumnuclear regions of five nearby (19-58 Mpc) Seyfert galaxies: Mrk 1066, NGC 2273, NGC 4253, NGC 4388 and NGC 7465. The authors detect CO(2-1) emission in all galaxies, with disky or circumnuclear ring-like morphologies. Moreover, although the bulk of the gas is rotating in the plane of the galaxy in all cases, they find non-circular motions in four of the galaxies (Mrk 1066, NGC 4253, NGC 4388 and NGC 7465). They interpret the non-circular motions in NGC 4253, NGC 4388 and NGC 7465 as streaming motions due to the presence of a large-scale bar, whereas the non-circular motions in the nuclear regions of Mrk 1066 and NGC 4388 are outflows due to the interaction of the AGN wind with molecular gas in the galaxy disc. Fast and massive molecular outflows are capable of impacting the nearby ISM, and the molecular mass outflow rate is shown to be well correlated with the AGN bolometric luminosity \citep{2017A&A...601A.143F}. \citet{2017A&A...601A.143F} also report that in AGNs with bolometric luminosity up to $\sim$ 10$^{46}$ erg s$^{-1}$ the molecular mass outflow rate dominates its ionised counterpart. Moreover, \citet{2017A&A...601A.143F} find that the molecular gas depletion timescale and the molecular gas fraction of galaxies hosting powerful AGN-driven winds are 3-10 times shorter and smaller, respectively, than those of main sequence galaxies with similar star formation rate, stellar mass, and redshift, indicating that, at high AGN bolometric luminosity, the reduced molecular gas fraction may be due to the destruction of molecules by the wind, leading to a larger fraction of gas in the atomic ionised phase. Also using ALMA observations of CO(2-1), \citet{2015A&A...580A...1M} find that the gas kinematics in the Seyfert 2 galaxy IC 5063 are very complex.
The authors detect a fast cold molecular gas outflow with velocities up to $650\, \rm{km\,s^{-1}}$, indicating that the outflow can be driven by the central AGN and the radio jet. Similarly, using CO(2-1) and CO(3-2) ALMA observations of the nearby Seyfert 1.5 galaxy NGC 3227, \citet{2019A&A...628A..65A} show the presence of CO clumps with complex kinematics, dominated by non-circular motions in the central region (1$^{\prime\prime}$ $\sim$ 73 pc). \citet{2020A&A...633A.127F} also show the presence of a prominent jet-driven outflow of CO(2-1) molecular gas along the kinematic minor axis of the Seyfert 2 galaxy ESO 420-G13, at a distance of 340-600 pc from the nucleus. Several similar works have been published on constraining the kinematics/dynamics of molecular gas in the nuclear/circumnuclear regions of star-forming and AGN host galaxies (e.g., \citealt{Tadhunter2014}; \citealt{10.1093/mnras/stz1244}; \citealt{Salak_2020}; \citealt{refId0}; \citealt{comb+19}; \citealt{audibert2019}; \citealt{sirressi2019testing}; see also the review by \citealt{Veilleux2020}).\\ In this paper, we present the analysis of the CO molecular gas kinematics in the nuclear regions of three different types of Seyfert galaxies, NGC 4968, NGC 4845, and MCG-06-30-15, using ALMA observations of the bright CO(2-1) emission line as a tracer. These galaxies differ in AGN type, AGN luminosity, and morphology. Knowledge of the nature and origin of the cold molecular gas kinematics in the nuclear and circumnuclear regions of such different AGN host galaxies is essential to verify whether the driving mechanism(s) is the same, which in turn could be used as input to construct a universal model describing such physical mechanism(s). The three galaxies belong to the TWIST sample (\citealt{2020A&A...633A.127F} and Fern{\'a}ndez-Ontiveros et al., 2021, in prep.), which aims at studying 41 AGN extracted from the IRAS 12$\mu m$ flux-limited sample \citep{rush+93}, located close enough (10$<$D$<$50 Mpc) to ensure a good spatial resolution through ALMA observations and to resolve the morphological structures in the central $\sim$3$\times$3 kpc region, and lying above the ``knee'' of the Seyfert galaxy luminosity function (therefore being statistically representative to determine the impact of the outflow and inflow mechanisms for the bulk of the AGN population). Moreover, the sample has been extensively studied in the past 30 years and has a wealth of ancillary data from the X-rays to the radio (Fern{\'a}ndez-Ontiveros et al., 2021, in prep.). The three galaxies in this work extend the preliminary study of the TWIST sample initiated in \citet{2020A&A...633A.127F} for ESO 420-G13. They include one type 1 and two type 2 nuclei located at $\sim$20-40 Mpc, sampling three different bins in X-ray luminosity. Table~\ref{t1} lists the properties of the three objects under study. To study the molecular gas kinematics we use two different software packages, namely the 3D-Based Analysis of Rotating Objects from Line Observations, $^{3D}$BAROLO\footnote{https://bbarolo.readthedocs.io/en/latest/} (\citealt{teodoro20153d}), and DiskFit (\citealt{peters2017}). This paper is structured as follows. In \S~\ref{pppt} we present a brief description of the properties of the sources, the ALMA observations and the data reduction. We discuss the modelling of the CO molecular gas kinematics in \S~\ref{mod}. The properties and kinematics of the CO molecular gas, and the residuals in each galaxy, are presented in \S~\ref{res} and in \S~\ref{MH2}, \S~\ref{dust}.
We discuss the results in \S~\ref{dis}, and the main conclusions in \S~\ref{con}.\\ We adopt a flat $\Lambda$CDM cosmology with $\Omega_{\Lambda}$ = 0.7, $\Omega_{m}$ = 0.3, and $H_{0}$ = $70\, \rm{km\,s^{-1}}$ Mpc$^{-1}$. \begin{table} \caption{\label{t1} Physical and geometrical parameters of the galaxies. The SFR is computed from the PAH observations \citep{mor21}; the 12$\mu m$ luminosity is in [${\rm log_{10} L_{\odot}}$] \citep[from][]{rush+93}; the X-ray luminosity is from \citet{nanda}.} \centering \setlength{\tabcolsep}{2pt} \begin{tabular}{llccr} \hline\\ &NGC 4968&NGC 4845&MCG-06-30-15\\ \hline\\ AGN type&S2&S2&S1.2&\\ Redshift&0.00986&0.00411&0.008&\\ Classification&SB0&SABab&Sab&\\ ${\rm \log L_{2-10keV}} $ & 43.20 & 41.98 &42.74&\\ (erg/s) & & & & \\ ${\rm \log L_{12\mu m}} $ & 9.91 & 9.87 &9.52&\\ (L$_{\odot}$) & & & & \\ SFR (M$_{\odot}$/yr) &4.29& - &1.82&\\ Distance&42 Mpc&18 Mpc&37 Mpc&\\ RA&13h07m05.935s&12h58m01.187s&13h35m53.777s&\\ Dec&$-23^{o}40^{'}36.23^{''}$ &$1^{o}34^{'}32.526^{''}$&$-34^{o}17^{'}44.242^{''}$ &\\ INC ($^{o}$) &60&73&[65-68]&\\ PA ($^{o}$) &[234-256]&[62-81]&[115-122]&\\ \hline\\ \end{tabular}\\ \vspace{.05cm} \end{table} \section{Basic galaxy properties and observations}\label{pppt} NGC 4968 is a nearby Seyfert 2 spiral galaxy, classified as a barred spiral (SB0) according to the Hubble and de Vaucouleurs galaxy morphological classification, located at a redshift of $z$ = 0.00986 (\citealt{lamassa2017chandra}; \citealt{malkan1998hubble}), corresponding to a distance of 42 Mpc. The narrow-line region (NLR) of NGC 4968 was imaged in the [O III] $\lambda$ 5007 and [N II] $\lambda\lambda$ 6548, 6583 + H$\alpha$ emission lines, as well as in the adjacent continua (centered near 5500 and 8000 {\AA}), using HST by \citet{ferr2000}. From the continuum images these authors estimate a photometric major axis P.A. of 45$^{o}$ and an inclination angle of 60$^{o}$ at radii greater than 10$^{\prime\prime}$ (2.1 kpc) \citep[see also][]{2003ApJS..148..327S}. The NLR extends toward the south-east side of the nucleus and the [OIII] line emission has a wedge-shaped morphology filling the cavity inside the dusty ring. The presence of the dust may be responsible for this morphology, but the presence of an ionisation cone projecting against the far side of the galaxy disc cannot be excluded \citep{ferr2000}.\\ \cite{strong2004molecular} studied the properties of the molecular gas in NGC 4968 with single-dish (15-m) millimetre observations at the ESO-SEST telescope. They estimate the line luminosity (${\rm L_{CO}}$) and molecular mass (${\rm M({H_2})}$) from observations of CO(1-0) (6.0$\times$10$^{7}$ K km s$^{-1}$pc$^{2}$ and $21.0\times10^{7}$ M$_{\odot}$, respectively) and CO(2-1) (4$\times$10$^{7}$ K km s$^{-1}$pc$^{2}$ and $15.0\times10^{7}$ M$_{\odot}$, respectively), making use of a CO-to-H$_{2}$ conversion factor, $\alpha_{\rm CO}$, of 3.47. The galaxy NGC 4845 is a nearby Seyfert 2 with a clear starburst component \citep{Thomas}, classified as an intermediate barred spiral galaxy (SABab), at $z$ = 0.00411, located in the Virgo Southern Extension \citep[][and references therein]{Irwin_2015}. We adopt the Tully-Fisher distance of 18 Mpc. The inclination on the sky is almost edge-on, revealing contrasted dust lanes on the near side, and a peanut shape for the bulge. The galaxy contains a bright unresolved core with a surrounding weak central disc (1.8 kpc diameter).
The radio spectrum of the core has been known to evolve with time, which could be due to an adiabatic expansion (outflow), likely in the form of a jet or cone (\citealt{Irwin_2015}). Using spectral observations of NGC 4845 at different position angles (PA = 44$^{o}$, 78$^{o}$, 98$^{o}$, 118$^{o}$, 178$^{o}$), \cite{bertola1989evidence} studied the kinematics of the ionised gas in the central region (r $\leq$ 1.5 kpc) and revealed a regular but non-axisymmetric velocity field. Based on photometry, \cite{bertola1989evidence} also point out the presence of a possible slight twisting between the disc and bulge isophotes, interpreting this as an indication of a triaxial bulge in NGC 4845. This galaxy shows an ionisation cone with an opening angle of 120$^o$ perpendicular to the circumstellar disc of HII regions; the counter ionisation cone is faint, likely because of extinction \citep{Thomas}. The early-type Sab galaxy MCG-06-30-15 is a Seyfert 1.2 galaxy (\citealt{2014A&A...570A..13M}) located at a distance of 37 Mpc, $z$ = 0.008 (\citealt{2009ApJ...701..658W}). This active galaxy has a 400 pc diameter stellar kinematically distinct core (KDC) counter-rotating with respect to the main body of the galaxy (\citealt{raimundo2016tracing}). The molecular gas, traced by the H$_2$ 2.12 $\mu$m emission, is also counter-rotating with respect to the main stellar body of the galaxy, implying that the formation of the distinct core is associated with the inflow of external gas into the centre of MCG-06-30-15, and that the event that formed the counter-rotating core is also the main mechanism providing gas for the AGN fuelling. Moreover, this shows that external gas accretion is able to significantly replenish the fuelling reservoir in such active galaxies. The NLR of MCG-06-30-15 was imaged using HST by \citet{ferr2000} in the [O III] $\lambda$ 5007 and [N II] $\lambda\lambda$ 6548, 6583 + H$\alpha$ emission lines, as well as in their adjacent continua (centered near 5500 and 8000 {\AA}). \citet{ferr2000} estimate a photometric major axis P.A. and an inclination angle of 115$^{o}$ and 60$^{o}$, respectively. The [O III] image reveals a nuclear extension aligned parallel to the photometric major axis of the galaxy, which presumably represents gas coplanar with the stellar disc \citep[see also][]{2003ApJS..148..327S}.\\ \citet{Rosario} measure the CO(2-1) single-dish flux and discuss the properties of the molecular gas in relation to the AGN properties. These authors estimate the line luminosity (${\rm L_{CO}}$) and molecular mass (${\rm M({H_2})}$) from observations of CO(2-1) (1.3$\times$10$^{7}$ K km s$^{-1}$pc$^{2}$ and $1.4\times10^{7}$ M$_{\odot}$, respectively), using a CO-to-H$_{2}$ conversion factor, $\alpha_{\rm CO}$, of 1.1. \subsection{ALMA observations and data reduction}\label{obs} \subsubsection{ALMA observations} The observations of the bright CO(2-1) line at 230.5 GHz rest frequency (in Band 6) were carried out as part of ALMA Cycle 5 under project ID 2017.1.00236.S in December 2017 and January 2018. With a spatial resolution between 25 pc and 48 pc, the CO(2-1) maps are sensitive enough to detect molecular masses as low as $\sim$ 10$^{5}$ M$_{\odot}$ (5$\sigma$) per beam. A summary of the ALMA observations is given in Table \ref{t2}. \subsubsection{Data reduction and imaging} Data were calibrated and post-processed using the Common Astronomy Software Applications (CASA) package \citep{2007ASPC..376..127M}, applying the standard calibration recipes provided by the ALMA Observatory.
The data were calibrated using CASA version 4.7.0 and the calibration script provided by the observatory, while further post-processing was done using CASA version 5.4.0. In all cases, the \textit{hogbom} deconvolution algorithm was applied using \textit{briggs} weighting and a robustness value of 2.0 to reconstruct the final datacubes, optimising the value of the limiting flux threshold in each case. The spectral datacubes for the emission lines were produced with channel widths of $\sim 10$ and $\sim 30\, \rm{km\,s^{-1}}$ and a pixel size of about 1/5$^{th}$ to 1/4$^{th}$ of the synthesised beam size (see Table \ref{t2} for the synthesised beam size). The emission line regions were automatically masked during the cleaning process in the spectral cubes using the ``auto-multithresh'' algorithm in \textit{tclean}.\\ The continuum emission was then subtracted in the spatial frequency domain - i.e. prior to the image reconstruction - using a 1D polynomial interpolation between the adjacent continuum channels at both sides of the respective emission lines. Continuum maps were constructed using all the continuum channels in the four spectral windows, that is, discarding the channels with CO(2-1) emission. The masking procedure for the continuum data was run interactively during the cleaning process.\\ Finally, all datacubes were corrected for the primary beam attenuation pattern. Using the final CO(2-1) datacubes, the first three moments, corresponding to the integrated intensity map of the line, the average velocity field and the average velocity dispersion map, were computed numerically. In order to reduce the noise in the moment maps, for each spaxel only those channels with a signal-to-noise ratio (S/N) above $\sim$5 times the median absolute deviation were considered, following the approach in \citet{2020A&A...633A.127F}. \begin{table} \caption{\label{t2} ALMA CO(2-1) observation log} \centering \setlength{\tabcolsep}{1.8pt} \begin{tabular}{lllll} \hline\\ &Target line CO(2-1)\\ \hline\\ &NGC 4968&NGC 4845&MCG-06-30-15\\ \hline\\ $\nu_{rest}$ & 230.5 GHz&230.5 GHz&230.5 GHz&\\ Date & Dec 2017&Dec 2017&Dec 2017&\\ &Jan 2018&Jan 2018&Jan 2018&\\ Array configuration & C43-5 &C43-5&C43-5&\\ Spatial resolution & 48 pc&45 pc&25 pc&\\ Channel width &2.5 km s$^{-1}$ &2.5 km s$^{-1}$&2.5 km s$^{-1}$&\\ Rms sensitivity &1.4 mJy/beam& 7 mJy/beam&0.7 mJy/beam&\\ Synthesized beam &0.27$^{''}\times$0.22$^{''}$ &0.56$^{''}\times$0.48$^{''}$&0.16$^{''}\times$0.13$^{''}$&\\ \hline\\ \end{tabular}\\ \end{table} \section{Modelling the kinematics of the CO(2-1) emission line}\label{mod} \subsection{3D modelling of the main rotating disc} We use the CO(2-1) emission line as a tracer of the main kinematic features of the CO gas in the nuclear region of the galaxies. To constrain the gas kinematical perturbations we construct a 3D disc model using the 3D-Based Analysis of Rotating Objects from Line Observations ($^{3D}$BAROLO) software and fit the model to the CO(2-1) emission line datacube. This software automatically fits 3D tilted-ring models to emission-line datacubes. The model assumes that the emitting material at each radius is confined within a geometrically thin disc and that its kinematics are dominated by pure rotational motion (\citealt{teodoro20153d}). The model requires the geometrical parameters of the disc, namely the centre position coordinates, the inclination angle (INC) and the kinematic position angle (PA).
When the emission peaks at the centre, as in the case of NGC 4845, the coordinates can be determined by fitting a 2D Gaussian to the central part (where the peak emission is) of the intensity map (see \citealt{sirressi2019testing}), whereas for maps without a central emission peak (the case of NGC 4968 and MCG-06-30-15) the code uses a source-finder algorithm to identify the emission region and then calculates the geometrical centroid, weighted by the flux intensity. The centre positions thus determined are reported in Table \ref{t1}. The INC, the angle between the normal to the disc and the line of sight, can be inferred from the 0$^{th}$ moment map or from the 1$^{st}$ moment map, depending on how well these moments can be estimated. For NGC 4845 we determine the INC parameter from the ratio of the major and minor axes in the 1$^{st}$ moment (velocity) map (see e.g. \citealt{sirressi2019testing}). But for well-resolved data (the case of NGC 4968) it is better to use the 0$^{th}$ moment (intensity) map rather than the velocity field. The inclination angle INC turns out to be 60$^{o}$ (NGC 4968) by fitting an ellipse to the outer ring in the intensity map. For NGC 4845, INC = 73$^{o}$ is determined from the velocity field. For MCG-06-30-15, 3D-Barolo estimates that the INC angle varies from 65$^{o}$ to 68$^{o}$. \\ 3D-Barolo measures the PA as the angle between the North and the receding half of the major axis (= positive velocities in the velocity field), measured counterclockwise (see \citealt{teodoro20153d}). The PA varies from 234$^{o}$ to 256$^{o}$ (NGC 4968) and from 62$^{o}$ to 81$^{o}$ (NGC 4845). For MCG-06-30-15, 3D-Barolo estimates that the PA varies from 115$^{o}$ to 122$^{o}$. Using the estimated geometrical parameters and assuming a number of rings at different radial separations, we construct a disc model with the 3D-Barolo kinematic model and fit its emission to the continuum-subtracted ALMA CO(2-1) line datacube. We use the ALMA CO(2-1) line datacube with a channel width of $10\, \rm{km\,s^{-1}}$. \\ The number of rings can be determined from the spatial extension of the object. The model uses 10 rings with a radial separation of 0.217 arcsec. All rings are placed at $N \times RADSEP + RADSEP/2$, where $N$ is the ring index (starting from zero) and $RADSEP$ is the radial separation in arcsec. Since 3D-Barolo measures the noise at the spectral channel edges when it builds the mask, to avoid this edge noise, instead of using the continuum-subtracted ALMA CO(2-1) line datacube with velocity range = $(-1000, 990)\, \rm{km\,s^{-1}}$, we consider the central regions containing the emission of the galaxies, reducing the velocity range of the input NGC 4968 datacube to $(-340, 340)\, \rm{km\,s^{-1}}$. To construct the disc model for NGC 4968, we use 27 rings with a radial separation of 0.12 arcsec. \\ The velocity range of the input NGC 4845 datacube is reduced to $(-510, 240)\, \rm{km\,s^{-1}}$ and we use 22 rings with a radial separation of 0.55 arcsec. The $^{3D}$BAROLO fits of the velocity and the velocity dispersion for NGC 4845 are not optimal; however, the results do not improve when letting the inclination angle vary between 73$^{o}$ and 80$^{o}$. For MCG-06-30-15, we feed the continuum-subtracted line datacube to the model without reducing its velocity range. The model is able to find a best fit that reduces the residuals to a minimum. This could be due to the fact that the kinematics is dominated by a pure regular rotation pattern (i.e. a simple kinematics).
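As a bookkeeping aid for the ring setups just described, the minimal sketch below computes the ring radii $r_N = (N+1/2)\,RADSEP$ and converts them to parsecs for the distances adopted in Table~\ref{t1}; the helper function is purely illustrative and is not part of $^{3D}$BAROLO itself.

\begin{verbatim}
import numpy as np

ARCSEC_TO_RAD = np.pi / (180.0 * 3600.0)

def ring_radii(nrings, radsep_arcsec, distance_mpc):
    """Ring radii r_N = (N + 1/2)*RADSEP, in arcsec and pc."""
    n = np.arange(nrings)                  # ring index, from zero
    r_arcsec = (n + 0.5) * radsep_arcsec
    r_pc = r_arcsec * ARCSEC_TO_RAD * distance_mpc * 1e6
    return r_arcsec, r_pc

# Setups quoted in the text; distances from Table 1.
for name, nrings, radsep, dist in [("NGC 4968", 27, 0.12, 42.0),
                                   ("NGC 4845", 22, 0.55, 18.0)]:
    r_as, r_pc = ring_radii(nrings, radsep, dist)
    print(f"{name}: outermost ring at {r_as[-1]:.2f} arcsec"
          f" ~ {r_pc[-1]:.0f} pc")
\end{verbatim}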
A slight deviation appears between the blue and red contours in the $p-v$ diagram. The results from $^{3D}$BAROLO, the kinematic maps and $p-v$ diagrams, are given in Figs. \ref{Barolo-4968}, \ref{Barolo-4845} and \ref{Barolo-mcg} for NGC 4968, NGC 4845, and MCG-06-30-15, respectively. The intensity maps of NGC 4968 and MCG-06-30-15 reveal a ring-like morphology of the molecular gas distribution in the nuclear regions. The gas distribution in NGC 4845 appears close to edge-on. \subsection{DiskFit modelling} The DiskFit software fits simple axisymmetric and non-axisymmetric non-parametric models either to photometric images or to kinematic maps (velocity fields) of disc galaxies. DiskFit fits an entire velocity field with a physically motivated model. The underlying assumption is that the circular orbit of a region of gas is affected by higher-order perturbations (i.e. ``harmonics''). Physically, $m$ = 1 harmonics correspond to ``lopsided'' perturbations and $m$ = 2 harmonics correspond to bisymmetric (i.e. bar) perturbations. The model is given by \citet{peters2017}: \begin{equation} V_{m} = V_{s} + \sin i\, [V_{t}\cos \theta - V_{m, t}\cos (m\theta_{b})\cos \theta - V_{m, r}\sin (m\theta_{b})\sin \theta] \end{equation} where $V_{m}$ is the model line-of-sight velocity (with $m$ specifying the harmonic order, $m$ = 1 or $m$ = 2, in the disc plane), $V_{s}$ is the systemic velocity, $V_{t}$ is the mean orbital speed, $V_{m, t}$ and $V_{m, r}$ are the tangential and radial components of the non-circular motions, $\theta$ and $\theta_{b}$ are the azimuthal angles of a point in the disc measured relative to the major axis and to the non-circular (bar) flow axis, respectively, and $i$ is the disc inclination. DiskFit can also fit radial flows (with $m$ = 0 distortions to the flow in the disc plane) and symmetric warps in the outer disc. DiskFit requires a flat inner disc, but it allows for a symmetric warp in the outer disc. The disc is assumed to be flat out to a warp radius $r_{w}$, beyond which both the ellipticity and the position angle of the line of nodes vary in proportion to $(r - r_{w})^{2}$. In addition, DiskFit can find the best-fitting inner warp radius $r_{w}$ and the peak change in ellipticity and position angle of the line of nodes, or it can hold any combination of these parameters fixed. For an axisymmetric model, DiskFit estimates the circular speed from kinematic data. For a non-axisymmetric model, DiskFit provides quantitative estimates of the non-circular flow speeds and an estimate of the mean circular speed when run on velocity fields. \begin{figure} \centering \includegraphics[width=0.4\textwidth,angle=0]{4968_10kms_kin.pdf} \includegraphics[width=0.4\textwidth,angle=0]{4968_10kms_pv.pdf} \caption{{\bf Upper panels:} \textit{left panels (top to bottom)}: the $0^{th}$ (intensity), $1^{st}$ (velocity field) and $2^{nd}$ (velocity dispersion) moment maps of the ALMA CO(2-1) data of NGC 4968. \textit{central panels}: the same as the left panels but for the best-fit model constructed with $^{3D}$BAROLO. \textit{right panels}: same as the left panels but for the residual (data-model). A black cross marks the centre of the galaxy.
The major axis of the molecular disc is shown by a black-dashed line in the velocity map. The yellow ellipse in the bottom-left corner of the data velocity map shows the synthesized beam size (0.27$^{''}\times$0.22$^{''}$) with P.A. = 245$^{o}$. North and East directions correspond to top and left, respectively. {\bf Lower panels:} The $p-v$ diagrams extracted from the datacube (blue solid contours) and model cube (red solid contours) along the major axis (top panel) and along the minor axis (bottom panel). The contour levels of both the data and the model are at [1,2,4,8,16,32,64]*$l$, where $l$ = 0.0012. The rotation velocity of each ring of the best-fit disc model is represented by the yellow solid dots in the top panel.} \label{Barolo-4968} \end{figure} \begin{figure} \centering \includegraphics[width=0.4\textwidth,angle=0]{4845_10kms_kin.pdf} \includegraphics[width=0.4\textwidth,angle=0]{4845_10kms_pv.pdf} \caption{The same as in Fig. \ref{Barolo-4968} but for NGC 4845. The synthesized beam size (0.56$^{''}\times$0.48$^{''}$) is plotted in yellow in the bottom-left corner of the data velocity map with P.A. = 77$^{o}$. The contour levels of both the data and the model are at [1,2,4,8,16,32,64]*$l$, where $l$ = 0.00393635.} \label{Barolo-4845} \end{figure} \begin{figure} \centering \includegraphics[width=0.4\textwidth,angle=0]{mcg_10kms_kin.pdf} \includegraphics[width=0.4\textwidth,angle=0]{mcg_10kms_pv.pdf} \caption{The same as in Fig. \ref{Barolo-4968} but for MCG-06-30-15. The synthesized beam size (0.16$^{''}\times$0.13$^{''}$) is plotted in yellow in the bottom-left corner of the model velocity map with P.A. = 118$^{o}$. The contour levels of both the data and the model are at [1,2,4,8,16,32,64]*$l$, where $l$ = 0.00120941.} \label{Barolo-mcg} \end{figure} Different axisymmetric and non-axisymmetric DiskFit kinematic models, namely pure rotation (flat disc), pure rotation (flat disc with warp), rotation plus radial motion, lopsided, and bisymmetric models, are shown in Figs. \ref{Diskfit-4968}, \ref{Diskfit-4845}, and \ref{Diskfit-mcg} for NGC 4968, NGC 4845, and MCG-06-30-15, respectively. \section{The CO(2-1) residual emission}\label{res} \begin{figure} \centering \includegraphics[width=0.55\textwidth,angle=0]{map_pure_rotation_4968_flat.png} \includegraphics[width=0.55\textwidth,angle=0]{map_rot+rad_4968.png} \includegraphics[width=0.55\textwidth,angle=0]{bi_4968.png} \caption{DiskFit results for NGC 4968. \textit{Top panels}: Pure rotation DiskFit model (flat disc). \textit{Middle panels}: Rotation + radial motion DiskFit model. \textit{Bottom panels}: Bisymmetric DiskFit model. For every panel the top-left figure reports the data, the top-right the model, the lower-left the residual obtained by subtracting the model from the data, and the lower-right the dispersion.} \label{Diskfit-4968} \end{figure} \begin{figure} \centering \includegraphics[width=0.55\textwidth,angle=0]{map_pure_rotation_4845.png} \includegraphics[width=0.55\textwidth,angle=0]{4845_rot+rad.png} \includegraphics[width=0.55\textwidth,angle=0]{bi_4845.png} \caption{DiskFit results for NGC 4845. \textit{Top panels}: Pure rotation DiskFit model (flat disc). \textit{Middle panels}: Rotation + radial motion DiskFit model. \textit{Bottom panels}: Bisymmetric DiskFit model.
For every panel the top-left figure reports the data, the top-right the model, the lower-left the residual obtained by subtracting the model from the data, and the lower-right the dispersion.} \label{Diskfit-4845} \end{figure} \begin{figure} \centering \includegraphics[width=0.55\textwidth,angle=0]{map_pure_rotation_MCG63015.png} \includegraphics[width=0.55\textwidth,angle=0]{rad_mcg.png} \caption{DiskFit results for MCG-06-30-15. \textit{Top panels}: Pure rotation DiskFit model (flat disc). \textit{Bottom panels}: Rotation + radial motion DiskFit model. For every panel the top-left figure reports the data, the top-right the model, the lower-left the residual obtained by subtracting the model from the data, and the lower-right the dispersion.} \label{Diskfit-mcg} \end{figure} In the following sections, we present the properties and kinematics of the residual CO molecular gas emission (from both $^{3D}$BAROLO and DiskFit). \subsection{Residuals from $^{3D}$BAROLO}\label{resB} As explained in Sec. \ref{mod}, 3D-Barolo makes a tilted-ring rotating disc model of the input data; hence, if there are non-circular motions, such as outflows, the model will not be able to fully reproduce the observations. The non-circular motions could otherwise be misinterpreted as a tilt of the rings in the plane. Some non-circular motions can be isolated from the rotating disc simply by subtracting the regular rotation pattern from the observed kinematics (observation minus rotating-disc model). \\ To see whether there are significant residuals in the CO(2-1) motions, we calculate the residuals in the kinematic maps (see Fig. \ref{Barolo-4968} for NGC 4968, Fig. \ref{Barolo-4845} for NGC 4845, and Fig. \ref{Barolo-mcg} for MCG-06-30-15) by subtracting the 3D datacube produced by the model from the observations (the input continuum-subtracted CO(2-1) emission line datacube). The $^{3D}$BAROLO model reproduces the observations of NGC 4968 relatively well along the kinematic major axis, leaving important residuals mainly in the north-east direction of the kinematic minor axis, as revealed by the velocity map in Fig. \ref{Barolo-4968}. The residuals are blueshifted to the south and south-east of the minor axis, reaching a velocity of $\sim 90\, \rm{km\,s^{-1}}$, and redshifted to the north and north-west of the minor axis with a velocity of $\sim 160\, \rm{km\,s^{-1}}$. Therefore, we take as the maximum residual velocity the mean of the two components as read from the residual datacube, $\sim$ 125 $\rm{km\,s^{-1}}$. For NGC 4845, the model reproduces the observations reasonably well along the kinematic major axis, leaving almost no residuals. However, the model leaves small residuals in the nuclear region and on either side of the kinematic minor axis (see Fig. \ref{Barolo-4845}). The residuals are blueshifted to the north-east of the major axis and to the south and south-east of the kinematic minor axis, reaching a velocity of $\sim 60\, \rm{km\,s^{-1}}$, and redshifted to the north and north-west of the kinematic minor axis with a velocity of $\sim 50\, \rm{km\,s^{-1}}$. The maximum residual velocity turns out to be in this case $\sim 55\, \rm{km\,s^{-1}}$. For MCG-06-30-15, the model reproduces well the observed CO(2-1) kinematics in the nuclear region and along both the kinematic minor and major axes (see Fig.
\ref{Barolo-mcg}).\\ The residuals are blueshifted towards the east, reaching a velocity of $\sim 51\, \rm{km\,s^{-1}}$, and redshifted to the south and south-west of the minor axis with a velocity of $\sim 20\, \rm{km\,s^{-1}}$. The maximum residual velocity is then $\sim 35\, \rm{km\,s^{-1}}$. Comparing these residual velocities with the regular rotation velocities shows how important the residual velocity (which could be due to deviations from circular motion) is in each galaxy. For NGC 4968, the gas rotates at $\sim 200\, \rm{km\,s^{-1}}$ and there is a residual of $\sim 125\, \rm{km\,s^{-1}}$, which is a large fraction of the rotation velocity.\\ In NGC 4845, the rotational velocity of the gas is $\sim 200\, \rm{km\,s^{-1}}$ (taking the average of the two components using the input ALMA datacube), and a residual with a velocity of $\sim 55\, \rm{km\,s^{-1}}$ is detected. In MCG-06-30-15, the CO rotation velocity is $\sim 200\, \rm{km\,s^{-1}}$ and the residual velocity is $\sim 35\, \rm{km\,s^{-1}}$, which is small with respect to the regular rotation velocity. For all galaxies the residual velocities are smaller than the main rotation velocity, indicating that the non-circular kinematics are very unlikely to trace powerful outflows (whether of stellar or AGN origin). Of the three galaxies, NGC 4968 shows the largest residual velocity relative to the rotation velocity. The smallest residual velocity is found for MCG-06-30-15, indicating that the kinematics of the gas in the central 1 kpc scale of this galaxy is best described by circular motions. \subsection{Residuals from DiskFit}\label{resD} As shown by the kinematic maps in Figs. \ref{Diskfit-4968}, \ref{Diskfit-4845}, and \ref{Diskfit-mcg} for NGC 4968, NGC 4845, and MCG-06-30-15, respectively, different DiskFit models reproduce the kinematics of the molecular gas as outlined by the CO(2-1) emission. Bisymmetric DiskFit models reproduce the observations much better than pure rotation DiskFit models for NGC 4968, and we consider this model as the best fit. The $\chi^2$ is reduced by $\sim$50\% for the radial motion model and by $\sim$63\% for the bisymmetric model with respect to its value for the pure rotation (flat disc) model, indicating that the bisymmetric model is the best physical model for the kinematics of the gas in the central region of NGC 4968. This leads us to conclude that the residuals seen in the pure rotation models (see Fig. \ref{Diskfit-4968}) cannot be ascribed to additional velocity components not belonging to the rotational kinematics. However, there are still residuals left over from the best-fit model, mainly in the south direction of the minor kinematic axis and west of the major axis (see Fig. \ref{Diskfit-4968}). Also the rotation plus radial motion DiskFit model reduces the residuals observed in the pure rotation DiskFit model (see the rotation plus radial motion DiskFit model in Fig. \ref{Diskfit-4968}, middle panels), and indicates the presence of radial motions in the nuclear region of this galaxy. Similarly, for NGC 4845, the pure rotation model reproduces the observations well (see upper panels in Fig. \ref{Diskfit-4845}); however, the bisymmetric model performs slightly better, leaving comparatively smaller residuals along the kinematic minor axis (see lower panels in Fig. \ref{Diskfit-4845}). In this case the change in $\chi^2$ is small when bisymmetric flows are included, reducing $\chi ^{2}$ by $\sim$30\%.
We consider, however, that the bisymmetric model is the best-fit model. The pure rotation (flat disc) DiskFit model reproduces the observations of MCG-06-30-15 well, leaving only very small residuals (see upper panels in Fig. \ref{Diskfit-mcg}), and this can be considered the best-fit model for this galaxy. For MCG-06-30-15, unlike NGC 4968, incorporating a radial component into the pure rotation DiskFit model (rotation plus radial motion model) does not reduce the residuals observed in the pure rotation model, indicating the absence of significant radial motions in the nuclear region of this galaxy. \section{Line luminosity and molecular gas mass estimates}\label{MH2} \begin{table} \caption{\label{t3} Results of a Gaussian fit to the total CO(2-1) spectrum. \textit{Upper panel}: best-fit parameters of the Gaussian model components shown in Fig.~\ref{CO-Line}: central velocity ($\mu_{v}$) [km s$^{-1}$], flux amplitude (S$_{peak}$) [mJy], and velocity dispersion ($\sigma_{v}$) [km s$^{-1}$]. The number in parenthesis indicates the Gaussian component. \textit{Lower panel}: area over which the line emission is integrated on the collapsed image, total CO(2-1) line flux (S$_{CO}$) [Jy km s$^{-1}$], total CO(2-1) line luminosity ($L^\prime_{CO}$) [K km s$^{-1}$ pc$^{2}$] and molecular gas mass (M(${\rm H_2}$)) [M$_{\odot}$]. Here we report the molecular mass values corresponding to $\alpha_{\rm CO}$=0.8.} \centering \setlength{\tabcolsep}{4pt} \begin{tabular}{lllll} \hline\\ Parameters&NGC 4968&NGC 4845&MCG-06-30-15\\ \hline\\ $\mu_{\rm v}$(1)&-11$\pm$3&-21$\pm$4& 125$\pm$3&\\ $\mu_{\rm v}$(2)& &-238$\pm$4&-8$\pm$8&\\ $S_{\rm peak}$(1)&0.71$\pm$0.01 &4.85$\pm$0.22&0.26$\pm$0.03&\\ $S_{\rm peak}$(2)& &5.38$\pm$0.22&0.22$\pm$0.01&\\ $\sigma_{\rm v}$(1)&258$\pm$6&157$\pm$10&67$\pm$9&\\ $\sigma_{\rm v}$(2)& &146$\pm$9&171$\pm$21&\\ \hline\\ Area & $3^{\prime\prime} \times 1.4^{\prime\prime}$ & $13^{\prime\prime} \times 3^{\prime\prime}$ & $3.4^{\prime\prime} \times 1.2^{\prime\prime}$\\ S$_{\rm CO(2-1)}$& 36& 617 & 16&\\ L$^{\prime}_{\rm CO(2-1)}$&$4 \times 10^{7}$&$12 \times 10^{7}$&$ 1.3\times 10^{7}$&\\ M$_{\rm tot}({\rm H_2})$ &$ 3 \times 10^{7}$&$ 9 \times 10^{7}$&$ 1 \times 10^{7}$&\\ M$_{\rm res}({\rm H_2})$ &$ 1 \times 10^{7}$&$0.3 \times 10^{7}$ &$ 0.07\times 10^7$&\\ \hline\\ \end{tabular}\\ \vspace{.05cm} \end{table} To estimate the line luminosity ${\rm L^{\prime}_{CO}}$ and the molecular gas mass in the disc, M(H$_{2}$), we present the profile of the integrated CO(2-1) emission line in Fig. \ref{CO-Line}. The line profiles are generally smooth enough to be well fitted with a single (NGC 4968) or two (NGC 4845 and MCG-06-30-15) Gaussian profiles. We integrate the line emission over the moment-0 maps. The best-fit parameters are given in Table \ref{t3}, together with the size of the region over which the line flux is estimated. This region was chosen by selecting those pixels where the S/N is larger than 3, roughly corresponding to 10\% of the maximum values.
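Eqs.~\ref{eq1} and \ref{eq2} below convert these integrated fluxes into line luminosities and molecular gas masses. As a numerical cross-check on Table~\ref{t3}, the minimal sketch below applies them to NGC 4968; for simplicity it omits the CO(2-1)-to-CO(1-0) excitation correction discussed in the text, and the input flux, redshift and distance are taken from Tables~\ref{t1} and~\ref{t3}.

\begin{verbatim}
def co_line_luminosity(S_co, nu_rest_ghz, z, D_L_mpc):
    """L'_CO in K km/s pc^2 (Solomon & Vanden Bout 2005 relation).

    S_co: velocity-integrated flux [Jy km/s];
    the observed frequency is nu_obs = nu_rest / (1 + z) [GHz].
    """
    nu_obs = nu_rest_ghz / (1.0 + z)
    return 3.25e7 * S_co * nu_obs**-2 * D_L_mpc**2 * (1.0 + z)**-3

# NGC 4968: S_CO = 36 Jy km/s, z = 0.00986, D_L ~ 42 Mpc
L_co = co_line_luminosity(36.0, 230.538, 0.00986, 42.0)
M_H2 = 0.8 * L_co    # alpha_CO = 0.8, no excitation correction

print(f"L'_CO(2-1) ~ {L_co:.2e} K km/s pc^2")  # ~4e7, as in Table 3
print(f"M(H2)      ~ {M_H2:.2e} Msun")         # ~3e7 Msun
\end{verbatim}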
Then we derive the emission line luminosity ${\rm L^{\prime}_{CO}}$ using the relation given by \citet{solomon2005molecular}: \begin{equation}\label{eq1} L^{\prime}_{CO} = 3.25 \times 10^{7}\, S_{CO}\Delta v\, \nu_{obs}^{-2}\, D_{L}^{2}\, (1+z)^{-3}, \end{equation} where $L^{\prime}_{CO}$ is the CO line luminosity given in K km s$^{-1}$ pc$^{2}$, $S_{CO}\Delta v$ is the velocity-integrated flux in Jy km s$^{-1}$, $\nu_{obs}$ is the observed frequency in GHz, $z$ is the redshift, and $D_L$ is the luminosity distance in Mpc ($D_{L} = D_{A}(1+z)^{2}$, where $D_{A}$ is the angular size distance). The molecular mass, ${\rm M(H_2)}$, is estimated using the relation \citep{solomon2005molecular}: \begin{equation}\label{eq2} M(H_{2}) = \alpha_{CO} L^{\prime}_{CO}, \end{equation} where $\alpha_{\rm CO}$ is the CO-to-H$_{2}$ conversion factor. The value of $\alpha_{\rm CO}$ is highly uncertain: it depends on metallicity and environment \citep[see e.g.][]{bol+13} and may vary in the range 0.8-3.2. For simplicity we use here the lower value, which is the average value in active galaxies. Furthermore, strictly speaking the conversion factor entails the CO(1-0) line luminosity; this means that in the absence of similar CO(1-0) observations an assumption on the excitation status of the molecular gas must be made. We use the average CO spectral line distribution of \citet{kam+16} corresponding to the $\log ({\rm L_{FIR}})$ luminosity range of 10-10.5~${\rm \log L_\odot}$ to convert from the observed ${\rm L^{\prime}_{CO(2-1)}}$ to ${\rm L^{\prime}_{CO(1-0)}}$, which is used in Eq.~\ref{eq2}. The resulting ratio is $\frac{{\rm L^{\prime}_{CO(2-1)}}}{{\rm L^{\prime}_{CO(1-0)}}}=2/3$. For NGC 4968 we use the ratio measured by \citet{strong2004molecular}. The total gas mass M$_{\rm tot}({\rm H_2})$ in the disc of each of the three galaxies is listed in Table \ref{t3}; these values agree well with the typical values for other nearby and low-luminosity AGNs, are in line with the claim that Seyfert 2 galaxies seem to possess more molecular mass than Seyfert 1 galaxies (see \citet{strong2004molecular} and references therein), and agree with the single-dish values published by \citet{strong2004molecular} for NGC 4968 and MCG-06-30-15 and by \citet{Rosario} for MCG-06-30-15.\\ Similarly, we determine the line luminosity and molecular gas mass in the modelled disc of each galaxy. The molecular gas mass in the modelled disc of NGC 4968, NGC 4845 and MCG-06-30-15 corresponds to $\sim$69\%, $\sim$97\% and $\sim$92\% of the total molecular gas mass in each galaxy, respectively. In addition to the total molecular gas mass and the gas mass in the main rotating disc, we also estimate the line luminosity, and then the molecular mass, of the residuals for each galaxy using Eqs. \ref{eq1} and \ref{eq2}, and list them in Table~\ref{t3}. The residuals are the emission resulting from the subtraction of the modelled datacube from the observed one. \begin{figure} \centering \includegraphics[width=9cm,height=5cm]{N4968-MainLine-Fit.png} \includegraphics[width=9cm,height=5cm]{N4845-MainLine-Fit.png} \includegraphics[width=9cm,height=5cm]{MCG06-30-MainLine-Fit.png} \caption{The Gaussian fit of the line profile of NGC 4968 (top panel), NGC 4845 (middle panel), and MCG-06-30-15 (bottom panel).
Data are shown by the black line, and the fitted components by the blue and green lines (see details in Table~\ref{t3}).} \label{CO-Line} \end{figure} \section{Gas and continuum distribution}\label{dust} Figure~\ref{CO-dust} shows the overlay of the collapsed CO intensity map with the map of the continuum at the observing frequencies, derived from the flux averaged over all four spectral windows, neglecting the channels with line emission. CO emission is absent from the centre of NGC 4968, but not from the other two galaxies. Together with the kinematical analysis discussed above, we can argue that this result can be linked to the presence of the bar in the inner part (see Fig.~\ref{NGC4968torque}). Indeed, the torque is positive inside (meaning the gas moves outward) and negative outside (the gas flows inward); this could be the reason why there is no CO in the centre of NGC 4968. The continuum distribution is overall more compact than the CO emission, especially in NGC 4968 and MCG-06-30-15. One may argue that some flux could be lost because of the high resolution of these observations and the lack of additional compact-array observations. However, because the integrated CO flux agrees with that measured with single-dish observations \citep{strong2004molecular,Rosario}, at least in NGC 4968 and MCG-06-30-15, the amount of missing flux should not be significant. The majority of the continuum emission is very likely due to dust. The central nucleus is expected to emit synchrotron emission from the black hole jet or corona, which may contribute significantly to the radiation in the millimetre range. However, the three galaxies are strong far-IR emitters, as detected by infrared satellites \citep[see e.g.][]{grup+16,cor+14,mel+14}, and have a radio spectrum typical of a central synchrotron source, with a spectral index $\alpha$ (with $S_\nu \propto \nu^{\alpha}$) in the range $-0.6$ to $-0.8$ (radio data taken from \citet{con,mun}), which contributes a few percent at 1\,mm wavelength. \begin{figure} \centering \includegraphics[width=0.48\textwidth,angle=0]{n4968-cont.PNG} \includegraphics[width=0.48\textwidth,angle=0]{n4845-cont.PNG} \includegraphics[width=0.48\textwidth,angle=0]{mcg-cont.PNG} \caption{Overlay of CO(2-1) contours on the 1.2 mm continuum image of NGC 4968 with 2 CO contours at 4 and 16\% of the maximum (top panel), of NGC 4845 with 2 CO contours at 5 and 20\% of the maximum (middle panel), and of MCG-06-30-15 with 2 CO contours at 5 and 20\% of the maximum (bottom panel). The continuum scale (colour wedge) is in Jy/beam. The 3$\sigma$ level is 0.1 mJy/beam (0.0001 Jy/beam in the units of the figure) for NGC 4968, 0.5 mJy/beam (0.0005 Jy/beam) for NGC 4845, and 0.05 mJy/beam (0.00005 Jy/beam) for MCG-06-30-15.} \label{CO-dust} \end{figure} \section{Discussion}\label{dis} \subsection{Kinematical perturbations}\label{kinpert} In addition to the kinematic maps, the comparison of the $p-v$ diagrams of the host galaxy disc (blue contours) with the fitted $^{3D}$BAROLO model of a rotating disc (red contours) along the kinematic major and minor axes (see bottom panels in Figs. \ref{Barolo-4968}, \ref{Barolo-4845}, and \ref{Barolo-mcg}) further shows how well the model fits the data, and helps to reveal the presence of any deviation from circular motions.
For NGC 4968, along the kinematic major axis, the $^{3D}$BAROLO model with the rotating disc (red contours) fits the rotation curve of the host galaxy disc (blue contours) relatively well, leaving only small room for additional significant kinematical components (see the major axis in the bottom panel of Fig. \ref{Barolo-4968}). The $^{3D}$BAROLO rotating disc model also fits the observations well in the nuclear region and along the kinematic minor axis of the host galaxy disc; however, it leaves an important deviation at the end of the kinematic minor axis (see the minor axis in the bottom panel of Fig. \ref{Barolo-4968}), in agreement with what is observed in the corresponding velocity map in the same figure. In NGC 4845, the $^{3D}$BAROLO model with the rotating disc fits the rotation curve of the host galaxy disc along the kinematic major axis (see the $p-v$ diagram in Fig. \ref{Barolo-4845}). Along the kinematic minor axis the $^{3D}$BAROLO model also fits the observations relatively well; however, {\it around} the kinematic minor axis we see small deviations (see the $p-v$ diagram in Fig. \ref{Barolo-4845}). In MCG-06-30-15, the rotating disc model fits the rotation curve of the host galaxy disc quite well both along the kinematic major and minor axes, presenting only very small deviations (see the $p-v$ diagram in Fig. \ref{Barolo-mcg}), further strengthening what is observed in the corresponding kinematic map (Fig. \ref{Barolo-mcg}). Deviations from circular motions could be caused by different mechanisms, such as outflows driven by the AGN or by star formation. Also, it is known that non-circular motions of molecular gas in the nuclear regions of disc galaxies could be due to the existence of a bar-like structure, a warped circumnuclear disc, or radial motion due to other mechanisms. For example, using lower resolution CO(2-1) observations, \cite{Schinnerer2000} observed non-circular motions caused by a warped structure in the nuclear region (at radial distances of approximately 0.7-1$^{''}$) of NGC 3227. Furthermore, it has been shown that gas inflows could be due to gravity torques from non-axisymmetric potentials in the central regions of galaxies, such as streaming motions along a bar (see e.g., \citealt{ruffa2019agn}). Indeed, the presence of residuals/deviations could be due either to some gas components not included within the main rotating disc considered by the model (e.g. because of a difference in geometry with respect to the main rotating disc), or to a deviation of the gas kinematics from circular motion, or both. In the galaxies of this study the observed residuals could be an indication of the presence of deviations of the gas kinematics from circular motions, as shown by the kinematic maps and corresponding $p-v$ diagrams. The bisymmetric DiskFit model performs better in reproducing the observations than the other DiskFit models in NGC 4845 and NGC 4968, indicating that the kinematical perturbations in these two galaxies are most likely due to a bar pattern, whereas the pure rotation model is the best-fit model for MCG-06-30-15. Note that the bisymmetric DiskFit model describes an elliptical or bar-like flow, and it is not surprising that the bisymmetric model appears to be the best-fit model in NGC 4968 and NGC 4845, since these galaxies are shown to be barred (see more below).
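To visualise the kind of velocity field the bisymmetric ($m$ = 2) DiskFit model of Sect.~\ref{mod} describes, the sketch below evaluates the model on an in-plane grid. All parameter values (and their constancy with radius) are arbitrary, illustrative choices, not the fitted values for NGC 4968 or NGC 4845.

\begin{verbatim}
import numpy as np

def diskfit_bisymmetric(x, y, vsys=0.0, vt=200.0, v2t=40.0, v2r=30.0,
                        inc_deg=60.0, bar_deg=30.0):
    """Line-of-sight velocity of the m=2 DiskFit model.

    theta is the in-plane azimuth from the major axis and theta_b the
    azimuth from the bar axis; vt, v2t and v2r are held constant with
    radius here, whereas DiskFit fits them ring by ring.
    """
    theta = np.arctan2(y, x)
    theta_b = theta - np.radians(bar_deg)
    sini = np.sin(np.radians(inc_deg))
    return vsys + sini * (vt * np.cos(theta)
                          - v2t * np.cos(2.0 * theta_b) * np.cos(theta)
                          - v2r * np.sin(2.0 * theta_b) * np.sin(theta))

# Evaluate on a small in-plane grid (arbitrary length units)
x, y = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
print(np.round(diskfit_bisymmetric(x, y), 1))
\end{verbatim}

A bar-like distortion of this form leaves a characteristic symmetric residual pattern when a pure rotation model is subtracted, which is the signature identified above in NGC 4968 and NGC 4845.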
As revealed by both models, we argue that although circular motion is the dominant kinematics in the molecular disc of all three galaxies, there is clear evidence for the presence of non-circular motions in the nuclear regions of NGC 4968 and NGC 4845, mainly in NGC 4968, where the non-circular motions appear to be significant (see Sections~\ref{resB} and~\ref{resD}). However, the small amplitude of the residual velocities (compared to the circular velocity) indicates the absence of energetic feedback from both the central AGN and star formation in the nuclear regions of the galaxies. Also, the star formation rate in all galaxies is very small (see Table \ref{t1}). Table \ref{t1} also lists the morphological types from either NED or HyperLeda: NGC 4968 has a strong bar, which the torque analysis (see below) confirms. We show in Fig. \ref{NGC4968rotation} a possible model for the rotation curve in the central kpc of the galaxy, built from its various components: bulge, disc of stars and gas, black hole, and dark matter. The latter two components contribute negligibly; the dark matter is required, however, to explain a flat rotation curve in the outer parts. The bulge mass was taken from the decomposition of the red image, used in the potential calculation, and calibrated with the rotation curve. The black hole mass has been assumed to be 7$\times10^6$ M$_\odot$, about 0.2\% of the mass of the bulge, according to scaling relations (e.g. \citealt{kormendy2013coevolution}). The bulge dominates the region of interest for the observed molecular component. Thus the mass decomposition allows the computation of the precessing frequency $\Omega-\kappa/2$, and the determination of the Lindblad resonances. If we place the corotation just outside the bar, as is frequently observed in barred spirals (e.g. \cite{1996FCPh...17...95B}), then the pattern speed of the bar is $\Omega_b$ = $52\, \rm{km\,s^{-1}\,kpc^{-1}}$, putting corotation at 3.5 kpc and the inner Lindblad resonance (ILR) at $R$ = 300 pc, corresponding to the CO ring. \\ The morphology of NGC 4845 shows a clear peanut-shaped bulge, which is well known to be due to a bar (e.g., \citealt{1990A&A...233...82C}). From our analysis there is evidence of an additional weak kinematical component, but we cannot argue that this is due to the presence of the bar. Our kinematical analysis seems to confirm that, unlike the one in NGC 4968, the bar in NGC 4845 is unable to significantly change the regular rotation pattern of the molecular gas. This additional weak component (see Figs.~\ref{Barolo-4845} and~\ref{Diskfit-4845}) could be due to gas flowing into or out of the central region, but the present data provide no firm evidence for this.\\ \begin{figure} \centering \includegraphics[width=0.35\textwidth,angle=0]{4968_rotcurve.png} \caption{\textit{Top panel}: A possible model for the rotation curve in the central kpc of NGC 4968, built from its various components: bulge, disc of stars and gas, black hole, and dark matter. The latter contributes negligibly but is required to explain a flat rotation curve in the outer parts (see text). \textit{Bottom panel}: The precessing frequency of elongated orbits, $\Omega-\kappa/2$, used to model the orbits in the inner region of NGC 4968. The red line is the pattern speed of the bar (e.g.
\cite{1996FCPh...17...95B}).} \label{NGC4968rotation} \end{figure} \subsection{Torque computation for NGC 4968}\label{sec:torq} To determine whether the gas is driven outwards or inwards, a calculation of the bar torques is necessary \citep[e.g.][]{audibert2019}. Through the Poisson equation, the gravitational potential is derived from the stellar density, which is assumed to be mainly responsible for the gravitational forces in the plane of the galaxy inside the central kiloparsec. The dark matter is indeed negligible in the very centre (e.g. NGC 4968 in Fig.~\ref{NGC4968rotation}). The stellar density is best traced by the HST H-band image (F160W), since in the NIR the impact of dust is minimised (see \citet{comb+19} for details). The HST-NIC2 image reveals a central bulge and a stellar disc. To de-project the galaxy to face-on, we first isolated the bulge, assumed to be spherical, which should not be de-projected. We then de-projected the galaxy disc with the adopted inclination angle of 60$^\circ$ and PA of 250$^\circ$. The bulge was then added back to the de-projected image. We assumed a constant mass-to-luminosity ratio and calibrated it to retrieve the observed rotation curve, modeled as shown in Fig. \ref{NGC4968rotation} (top panel). The gravitational potential is derived from the stellar distribution, assuming a thin disc of scale ratio $h_z/h_r = 1/12$. Both the HST-NIR and CO(2-1) de-projected images of the galaxy have been resampled to the same pixel size of 0.03 arcsec = 6~pc \citep{comb+19}. We have computed the bar gravity torques following the method described in, e.g., \citet{garcia2005torq} and \citet{audibert2019}. The forces are computed at each pixel by differentiating the potential, and the torques on the gas are then computed taking into account the gas density at each pixel. The torque map is plotted in Fig. \ref{NGC4968torque}, together with the de-projected gas surface density for comparison. As shown in the top panel of the figure, the torque map reveals the expected butterfly diagram (four-quadrant pattern) in relation to the bar orientation (indicated by straight lines), with the torques changing sign from one quadrant to the next. This implies that the observed non-circular motions are well fitted by the gas flow in a bar potential. \begin{figure} \includegraphics[width=0.4\textwidth,angle=0]{torq-map.PNG} \caption{{\it Top panel:} Map of the gravitational torques exerted on the gas by the stellar potential in the centre of NGC\,4968. The map shows in each pixel the torque derived from the HST-NIR image, multiplied by the gas surface density. Both images have been de-projected to face-on (see text for details). The torques change sign as expected in a four-quadrant pattern (or butterfly diagram). The orientation of the quadrants follows the bar orientation. In this de-projected picture, the major axis of the galaxy is oriented parallel to the horizontal axis. {\it Bottom panel:} The de-projected image of the CO(2-1) emission, at the same scale and with the same orientation, for comparison.} \label{NGC4968torque} \end{figure} \begin{figure} \includegraphics[width=0.4\textwidth,angle=0]{DL-vsR.PNG} \caption{The radial distribution of the average torque exerted by the stellar bar on the gas of NGC 4968. The torque is normalised to show the fraction of the angular momentum transferred from the gas in one rotation, $dL/L$, estimated from the CO(2-1) de-projected map.
The curve is plotted only from a radius of 0.23 arcsec (or 46~pc), corresponding to the largest size of the beam.} \label{NGC4968avetorque} \end{figure} From the torque map we compute azimuthal averages and obtain the effective torque at each radius, per unit mass. This yields the derivative of the gas angular momentum at that radius. To derive the relative variation of angular momentum, we divide by the average angular momentum at that radius (from the rotation curve). This relative variation of angular momentum in one rotation is plotted in Fig. \ref{NGC4968avetorque}. The torque is positive in the very centre, meaning that the gas is driven outwards to the gas ring. Outside of the ring, the torque is negative and peaks at $dL/L = -0.3$ in one rotation: the gas is driven inwards, to accumulate in the ring. In this region the gas loses about 30\% of its angular momentum in one rotation.\\ The NLR dynamics do not uniquely indicate the presence of an outflow, because they might be affected by extinction \citep{ferr2000}. If an ionised outflow were present, at least in the central region, the outward motions of the gas (as revealed by the positive torque in the very centre) could be partly interpreted as the interaction between the ionised outflows and the gas. However, since the torque is negative outside the ring, the outflows could be counterbalanced by inward motions. \section{Summary and Conclusions}\label{con} We have analysed the properties and kinematics of the cold molecular gas in the nuclear and circumnuclear regions of three Seyfert galaxies, NGC 4968, NGC 4845 and MCG-06-30-15, using ALMA observations of their CO(2-1) emission line. We used the $^{3D}$BAROLO and DiskFit (both axisymmetric and non-axisymmetric) software packages to model the kinematics of the molecular gas.\\ The main findings are summarised as follows: \begin{itemize} \item The intensity maps reveal a ring-like morphology in the nuclear regions of NGC 4968 and MCG-06-30-15, whereas in NGC 4845 the disc is seen edge-on.\\ \item The gas kinematics in the molecular discs of NGC 4845 and MCG-06-30-15 is dominated by pure rotational motion, but there is evidence for non-circular motions in NGC 4968 and, weakly, in NGC 4845.\\ \item Unlike in NGC 4845, where the deviation from circular motion is small, significant non-circular motions are observed in NGC 4968, mainly along the kinematic minor axis, with velocity $\sim 115\, \rm{km\,s^{-1}}$; these are likely due to the bar.\\ \item Moreover, of all the DiskFit models, the bisymmetric model is found to be the best-fit model for NGC 4968 and NGC 4845, in agreement with a nuclear bar origin. \\ \item Regular rotation is shown to be the dominant kinematics of the gas in the nuclear region of MCG-06-30-15, and hence the pure rotation model is the best-fit model for this galaxy. \item The molecular mass, ${\rm M(H_{2}}$), in the nuclear disc is estimated to be $\sim 3-12\times 10^{7} ~{\rm M_\odot}$ (NGC 4968), $\sim 9-36\times 10^{7}~ {\rm M_\odot}$ (NGC 4845), and $\sim 1-4\times 10^{7}~ {\rm M_\odot}$ (MCG-06-30-15), adopting a CO-to-H$_{2}$ conversion factor $\alpha_{CO}$ between 0.8 and 3.2, typical of nearby galaxies of the same type.
\item The molecular gas mass of the modeled disc of each galaxy corresponds to $\sim$ 69\%, $\sim$97\% and $\sim$92\% of the total molecular gas mass in NGC 4968, NGC 4845 and MCG-06-30-15, respectively.\\ \item For the galaxy NGC~4968, placing the corotation just outside of the bar indicates a bar pattern speed of $\Omega_b$ = $52\, \rm{km\,s^{-1}\,kpc^{-1}}$, putting corotation at 3.5 kpc and the inner Lindblad resonance (ILR) ring at $R$ = 300 pc, which corresponds to the CO ring.\\ The computation of the torques exerted by the stellar bar on the gas shows that the torques are positive inside the molecular ring and negative outside, revealing that the gas is accumulating at the inner Lindblad resonance. Thus, the observed non-circular motions in the molecular disc of NGC 4968 could be due to the presence of the bar in the nuclear region. \end{itemize} In summary, in the studied galaxies the radiative feedback or winds, seen in the kinematics of emission lines from ionised gas (see \S~\ref{pppt}), do not substantially alter the environments within the molecular clouds that produce the bulk of the low-excitation CO emission. We do not find any strong evidence for gas kinematics close to the central AGN which might be unambiguously attributed to the effect of the AGN. The cold, star-forming molecular gas in the centre of the host galaxy is not strongly influenced by the presence of the AGN, despite the fact that these AGN are luminous enough to dynamically disturb this material.\\ We cannot claim any strong evidence in these sources of the long-sought feedback/feeding effect due to the coupling of mechanical energy from the nucleus with the cold star-forming phase. \section*{Acknowledgements} This paper makes use of the following ALMA data: ADS/JAO.ALMA$\#$2017.1.00236.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This publication has made use of data products from the NASA/IPAC Extragalactic Database (NED). We acknowledge the usage of the HyperLeda database (http://leda.univ-lyon1.fr). A. Bewketu Belete acknowledges the support by the Brazilian National Council for Scientific and Technological Development (CNPq). R.S. acknowledges the support provided by CONICYT through FONDECYT postdoctoral research grant No. 3200909. JAFO and LS acknowledge financial support by the Agenzia Spaziale Italiana (ASI) under the research contract 2018-31-HH.0. Research activities of the Observational Astronomy Board of the Federal University of Rio Grande do Norte (UFRN) are supported by continuous grants from the Brazilian agencies CNPq, CAPES and FAPERN. J.R.M. and A.B.B also acknowledge financial support from INCT INEspaço/CNPq/MCT. An anonymous referee is warmly thanked for their comments and suggestions, which have greatly improved the completeness of the paper.
\section{Introduction} \subsection{Unit resolution} A unit clause is a logical clause with only one literal, like $(a)$ or $(\overline b)$. Unit resolution (also called unit propagation) consists in repeatedly fixing the variables occurring in unit clauses in such a way as to satisfy these clauses. For example, if there is a clause $(\overline b)$ in the formula, then the variable $b$ is set to \texttt{false}, and the formula is simplified by removing all the clauses containing $\overline b$ as well as all the occurrences of $b$ in the other clauses. Sometimes, unit propagation produces the empty clause, meaning that the formula is not satisfiable. Because unit resolution is not a complete proof system, not all unsatisfiable formulae can be refuted in this way. In \textsc{sat} solvers, unit propagation is used to fix some variables in order to reduce the number of branches in the search tree. \subsection{The failed literal rule} This is an inference rule allowing \textsc{sat} solvers to fix some variables which cannot be fixed by using only unit propagation. As an example, let us consider the following \textsc{cnf} formula: \begin{equation} \label{eq} \sigma = (a \vee b) \wedge (\overline{b} \vee c) \wedge (\overline{b} \vee \overline{c}) \end{equation} Because there is no unit clause, applying unit resolution to this formula does not fix any variable. Applying the failed literal rule to the literal $\overline{a}$ consists in \emph{trying} to fix the variable $a$ to \texttt{false} and then applying unit propagation. Because the empty clause is produced, $\sigma \wedge \overline{a}$ is not satisfiable. Hence the variable $a$ \emph{must} be set to \texttt{true}. The failed literal rule can also be applied to the literal $b$, with the result that the variable $b$ must be set to \texttt{false}. \subsection{Contribution} We will show that applying unit propagation to a \textsc{cnf} formula $(\overline{l} \vee \overline{w}) \wedge \mathtt{reif}( \sigma \wedge w, l)$ has the same effect as applying the failed literal rule to a formula $\sigma$ with the literal $w$. $\sigma' = \mathtt{reif}( \sigma \wedge w, l)$ is said to be the \emph{reified counterpart} of $\sigma \wedge (w)$ in the sense that applying unit propagation to $\sigma'$ cannot produce the empty clause, but fixes $l$ to \texttt{true} if and only if applying unit propagation to $\sigma \wedge (w)$ would produce the empty clause. Although the size of the reified counterpart $\mathtt{reif}(\psi, l)$ of a formula $\psi$ is polynomially related to the size of $\psi$, the interest of the concept is rather theoretical. It sheds new light on the expressive power of unit resolution. \section{Reified unit resolution} The unit propagation process can be decomposed into several steps, where each step $i$ fixes the variables which occur in unit clauses after step $i-1$ (if applicable) is completed. Because each step fixes at least one variable, and because the empty clause is produced when the same variable is fixed both to \texttt{true} and \texttt{false}, the number of steps cannot exceed $n+1$, where $n$ is the number of variables in the formula. Let $\sigma$ be a \textsc{cnf} formula with $n$ variables, and $\psi$ its reified counterpart. The formula $\psi$ can be decomposed into $n+1$ sub-formulae $\psi_1, \ldots, \psi_{n+1}$, where each $\psi_i$ simulates the effect of step $i$ of unit propagation on $\sigma$. For each variable $v$ of $\sigma$, there are $2(n+1)$ variables, namely $v_1^+, v_1^-, \ldots, v_{n+1}^+, v_{n+1}^-$, in $\psi$. The formula $\psi$ is designed so that if $v$ is fixed to \texttt{true} (\texttt{false}, respectively) after $i$ propagation steps on $\sigma$, then $v_i^+$ ($v_i^-$, respectively) is fixed to \texttt{true} after $i$ propagation steps on $\psi$. As a manner of speaking, the assignments $v = \mathtt{true}$ and $v = \mathtt{false}$ are decoupled into $v_i^+ = \mathtt{true}$ and $v_i^- = \mathtt{true}$ in $\psi$, and no variable of $\psi$ can be set to \texttt{false} by unit propagation.
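Before presenting the construction of $\psi$, here is a minimal sketch of step-wise unit propagation and of the failed literal rule, to make the notion of propagation steps concrete. The signed-integer clause encoding and the function names are our own choices for this illustration, not part of the construction below.

\begin{verbatim}
# A clause is a list of integers: v stands for the literal v,
# and -v for its negation.

def unit_propagation(clauses):
    """Return ('UNSAT', assignment) or ('OK', assignment)."""
    assignment = {}                     # variable -> True/False
    while True:
        units = []
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                # clause already satisfied
            free = [l for l in clause if abs(l) not in assignment]
            if not free:                # all literals false: empty clause
                return 'UNSAT', assignment
            if len(free) == 1:          # unit clause found
                units.append(free[0])
        if not units:
            return 'OK', assignment     # fixed point reached
        for l in units:                 # one propagation step
            v, value = abs(l), l > 0
            if assignment.get(v, value) != value:
                return 'UNSAT', assignment  # v forced both ways
            assignment[v] = value

def failed_literal(clauses, lit):
    """True if assuming lit leads unit propagation to the empty clause."""
    return unit_propagation(clauses + [[lit]])[0] == 'UNSAT'

sigma = [[1, 2], [-2, 3], [-2, -3]]     # the example formula, a=1, b=2, c=3
print(failed_literal(sigma, -1))        # True, so a must be true
print(failed_literal(sigma, 2))         # True, so b must be false
\end{verbatim}

Run on the formula~(\ref{eq}), the sketch reports that both $\overline{a}$ and $b$ are failed literals, matching the discussion above.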
Let us present the construction of $\psi$ from the formula \begin{equation} \sigma = (\overline{a}) \wedge (a \vee b) \wedge (\overline{b} \vee c) \wedge (\overline{b} \vee \overline{c}) \end{equation} The sub-formula $\psi_1$ must allow unit propagation to fix $a_1^-$ to \texttt{true}, because at the first step of unit propagation on $\sigma$ the variable $a$ is fixed to \texttt{false}. Then \begin{equation} \psi_1 = (a_1^-) \end{equation} The sub-formula $\psi_2$ must allow unit propagation to fix $a_2^-$ to \texttt{true}, because $a$ remains \texttt{false} at the second step of unit propagation on $\sigma$. This can be obtained thanks to the clause $(\overline{a_1^-} \vee a_2^-)$, which will be called a \emph{propagation clause}.\ It must also allow unit propagation on $\psi$ to simulate the effect of unit propagation on $\sigma$ regarding the clause $(a \vee b)$, given that $a$ is set to \texttt{false}. This can be obtained thanks to the clause $(\overline{a_1^-} \vee b_2^+)$, which will be called a \emph{deduction clause}. Because the goal is to build the formula $\psi$ without knowing in advance which variables of $\sigma$ will be fixed by each unit resolution step, all the possible propagation and deduction clauses are added to each sub-formula $\psi_i, i>1$. \begin{equation} \begin{array}{ccl} \psi_i & = & \overbrace{(\overline{a_i^-} \vee a_{i+1}^-) \wedge (\overline{a_i^+} \vee a_{i+1}^+) \wedge (\overline{b_i^-} \vee b_{i+1}^-) \wedge (\overline{b_i^+} \vee b_{i+1}^+) \wedge (\overline{c_i^-} \vee c_{i+1}^-) \wedge (\overline{c_i^+} \vee c_{i+1}^+)}^{\mathrm{propagation\ clauses}}\\ & \wedge & \underbrace{(\overline{a_i^-} \vee b_{i+1}^+) \wedge (\overline{b_i^-} \vee a_{i+1}^+) \wedge (\overline{b_i^+} \vee c_{i+1}^+) \wedge (\overline{c_i^-} \vee b_{i+1}^-) \wedge (\overline{b_i^+} \vee c_{i+1}^-) \wedge (\overline{c_i^+} \vee b_{i+1}^-)}_{\mathrm{deduction\ clauses}}\\ \end{array} \end{equation} For example, the third propagation clause $(\overline{b_i^-} \vee b_{i+1}^-)$ says "if $b_i^- = \mathtt{true}$ at step $i$ of unit propagation on $\psi$, meaning that $b = \mathtt{false}$ at step $i$ of unit propagation on $\sigma$, then $b_{i+1}^-$ must be set to \texttt{true} at step $i+1$ of unit propagation on $\psi$, meaning that $b = \mathtt{false}$ at step $i+1$ of unit propagation on $\sigma$". As another example, the third deduction clause $(\overline{b_i^+} \vee c_{i+1}^+)$ says "according to the clause $(\overline{b} \vee c)$ of $\sigma$, if $b_i^+ = \mathtt{true}$ at step $i$ of unit propagation on $\psi$, meaning that $b = \mathtt{true}$ at step $i$ of unit propagation on $\sigma$, then $c_{i+1}^+$ must be set to \texttt{true} at step $i+1$ of unit propagation on $\psi$, meaning that $c = \mathtt{true}$ at step $i+1$ of unit propagation on $\sigma$".
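To make the construction concrete, the following minimal sketch mechanically generates the clauses of $\psi_i$. The naming scheme is our own illustration: the string atom \texttt{a1-} stands for $a_1^-$, and a leading \texttt{-} negates an atom.

\begin{verbatim}
NAMES = {1: "a", 2: "b", 3: "c"}

def pos(v, i): return "%s%d+" % (NAMES[v], i)   # atom for v_i^+
def neg(v, i): return "%s%d-" % (NAMES[v], i)   # atom for v_i^-

def reif(lit, i):
    """Atom asserting that the literal lit holds at step i."""
    return pos(lit, i) if lit > 0 else neg(-lit, i)

def sub_formula(clauses, variables, i):
    psi_i = []
    for v in variables:                  # propagation clauses
        psi_i.append(["-" + pos(v, i), pos(v, i + 1)])
        psi_i.append(["-" + neg(v, i), neg(v, i + 1)])
    for clause in clauses:               # deduction clauses
        for k, l in enumerate(clause):
            # if every other literal is falsified at step i,
            # then l must hold at step i+1
            body = ["-" + reif(-m, i)
                    for j, m in enumerate(clause) if j != k]
            psi_i.append(body + [reif(l, i + 1)])
    return psi_i

sigma = [[1, 2], [-2, 3], [-2, -3]]   # (a v b)(-b v c)(-b v -c)
for c in sub_formula(sigma, [1, 2, 3], 1):
    print(c)                          # 6 propagation, then 6 deduction clauses
\end{verbatim}

Applied with $i=1$ to the example formula, the sketch reproduces exactly the six propagation clauses and six deduction clauses displayed above; note that for clauses of $\sigma$ with $k$ literals the generated deduction clauses are $k$-ary.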
The production of the empty clause by unit propagation on $\sigma$ (if applicable) can be reified by adding a new variable $s$ and the following clauses to $\psi$ \begin{equation} (\overline{a_4^+} \vee \overline{a_4^-} \vee s) \wedge (\overline{b_4^+} \vee \overline{b_4^-} \vee s) \wedge (\overline{c_4^+} \vee \overline{c_4^-} \vee s) \end{equation} Clearly, unit propagation on $\psi$ will fix $s$ to \texttt{true} if and only if unit propagation on $\sigma$ produces the empty clause, i.e. implicitly fixes the same variable both to \texttt{true} and \texttt{false}. As it stands, the formula $\psi$ is of little interest because it only allows the simulation of one scenario of unit propagation on $\sigma$. It is much more useful to simulate the effects of unit propagation when some variables of $\sigma$ have been previously fixed (for example by other inference rules, or by branching rules while a \textsc{sat} solver is running). To this end, some (or all) of the variables of $\sigma$ can be injected into $\psi$ with the following clauses: \begin{equation} (\overline{a} \vee a_1^+) \wedge (a \vee a_1^-) \wedge (\overline{b} \vee b_1^+) \wedge (b \vee b_1^-) \wedge (\overline{c} \vee c_1^+) \wedge (c \vee c_1^-) \end{equation} Thanks to these additional clauses, unit propagation on $\psi$ can simulate the effect of unit propagation on $\sigma$ under any given partial truth assignment of the variables of $\sigma$. If $\sigma$ includes $n$ variables and $m$ clauses with at most $k$ literals per clause, then each sub-formula $\psi_i$ contains $2n$ binary propagation clauses and at most $km$ $k\mathtt{-ary}$ deduction clauses. It follows that $\psi$ contains $O(n^2+nkm)$ clauses. \section{Concluding remarks} We have shown that for any formula $\sigma$, there exists a \emph{satisfiable} formula $\psi$ such that unit propagation on $\psi$ can simulate the behavior of unit propagation on $\sigma$, even when the empty clause is produced. What does this tell us about the expressive power of unit propagation? Unit propagation can be seen as a way to compute functions mapping partial truth assignments to $\{ \mathtt{yes}, \mathtt{no} \}$, with two different approaches. In the first approach, the result \texttt{yes} corresponds to the assignment of a particular variable. In the second one, it corresponds to the production of the empty clause. The results presented in this report show that the \emph{same functions} can be computed using these two approaches, and that the required numbers of clauses are polynomially related. \label{pageend} \end{spacing} \end{document}
\section{INTRODUCTION} There is a strong interest in applying robots not only in the typical, well-structured environments they normally operate in, but also in less structured, real-world situations. While there has been a lot of progress in the fields of computer vision, path planning and robotic grasping, combining them into a reliably working system still proves to be challenging. The Amazon Picking Challenge (APC) is a competition in the field of warehouse logistics, in which objects need to be picked either from a shelf or from a tote. The system we built is based on a Universal Robots UR10 robot (Figure~\ref{fig:robot}). The system was outfitted with a custom vision sensor and vacuum gripper. The software ran on a single laptop and made use of the Robot Operating System (ROS) framework. In the following sections we provide more details on the vision system, object detection, path planning and object manipulation. \section{Perception} A 3D vision system was developed to detect and determine the 6D pose of the objects that need to be handled. \subsection{Time Multiplexed Structured Light} To be able to recognize objects and determine their pose with sufficient accuracy, while keeping system cost low, a custom 3D scanner (Figure~\ref{fig:sensor}) was built from off-the-shelf components. The system is based on triangulation between binary Gray code pattern sequences projected by a projector, and their image as acquired by a camera~\cite{inokuchi1984range}\cite{posdamer1982surface}. The camera and projector are synchronized and the patterns can be projected and acquired at 120 frames per second. Our choice of this custom sensor over a standard depth sensor such as a Time-of-Flight camera, the Microsoft Kinect, or the Intel RealSense was mostly based on its higher resolution (1140x912 pixels) and accuracy (0.1mm). The biggest downside is that it requires multiple images (42) and some extra processing, which results in a longer acquisition time (about one second for acquisition and processing). \begin{figure} \centering \includegraphics[width=0.35\textwidth]{img/robot.jpg} \caption{The developed picking system consists of a standard six-degree-of-freedom robot equipped with a custom-made 3D vision system and vacuum gripper.} \label{fig:robot} \end{figure} \subsection{Sensor Calibration} To be able to triangulate points, both the camera and the projector need to be calibrated. The camera is calibrated in the typical way, using a checkerboard pattern. Note that while the projector can be modeled as a camera, it is not possible to directly measure the location of the checkerboard corners in the projector reference frame. The method proposed by Moreno et al. \cite{moreno2012simple} was used to estimate the position of the corners in the projected image. This method uses the decoded pattern as observed by the camera and creates a local homography to estimate the code associated with each checkerboard corner. This allows the projector to be calibrated, and the camera-projector setup to be calibrated as a stereo camera.
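To illustrate the decoding step of such a scanner, the following minimal sketch converts per-pixel sequences of thresholded Gray code images into projector column indices; the array layout, bit depth and function names are our own assumptions for illustration, not the actual implementation. Once every camera pixel is labeled with a projector column, depth follows from standard camera--projector triangulation using the calibration described above.

\begin{verbatim}
import numpy as np

def gray_to_binary(gray):
    """Convert Gray-coded integers to plain binary integers."""
    binary = gray.copy()
    mask = gray >> 1
    while mask.any():
        binary ^= mask
        mask >>= 1
    return binary

def decode_columns(images):
    """images: (n_bits, H, W) thresholded patterns, MSB first.
    Returns an (H, W) map of projector column indices."""
    codes = np.zeros(images.shape[1:], dtype=np.int64)
    for bit in images.astype(np.int64):   # pack bits into Gray codes
        codes = (codes << 1) | bit
    return gray_to_binary(codes)

# toy example: 3-bit patterns over a 1x4 image strip
imgs = np.array([[[0, 0, 1, 1]],
                 [[0, 1, 1, 0]],
                 [[0, 1, 0, 1]]], dtype=bool)
print(decode_columns(imgs))               # [[0 2 4 6]]
\end{verbatim}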
\section{Detection} A set of 38 products of varying shape, appearance, material and weight was used in the challenge. To cope with this variety, we chose to employ multiple detection algorithms and apply the most suitable one for each object. One object was searched for at a time, starting with the object that was deemed easiest according to an ordered list that was created manually (taking into account ease of detection and ease of manipulation). For every object, a preferred detection method was also determined in advance. For picking from the shelf, three scans were taken from different angles. For picking from the tote, two scans were taken from different angles. After picking an item from the tote, it was placed on a table and an additional scan was taken in order to confirm that it was the correct object. \subsection{Pre-processing} As the shelf and tote geometry were known, it was possible to segment the objects from the background. While the exact position of the tote was known, the shelf was slightly moved before the start of the competition. A calibration routine was therefore run at the start of the competition, in which the shelf's top corners were scanned (Figure~\ref{fig:corner}) and their exact positions were measured, allowing the shelf pose to be determined. \begin{figure} \centering \includegraphics[width=0.35\textwidth]{img/scanner.jpg} \caption{The 3D vision system consists of a Digital Micromirror Device projector (TI DLP lightcrafter 4500) and a monochrome camera (Pointgrey Flea3 FL3-U3-13Y3) equipped with a C-mount lens (Computar 8mm F1.4 M0814-MP2). The mount was made from a bent steel plate and attaches directly to the robot wrist. A protective acrylic top shields the components. The projector and camera are synchronized to ensure the camera captures the correct projected patterns.} \label{fig:sensor} \end{figure} \subsection{Detection Algorithms} \subsubsection{Point Pair Features} Some of the objects have a distinctive geometric shape. Point pair features (PPF, Figure~\ref{fig:ppf}) can be used to describe this shape \cite{Abbeloos16ppf}\cite{choi2012voting}\cite{wahl2003surflet}; see also the sketch below. While a single feature is not very descriptive, the set of features over all combinations of points on the object's surface typically is. From a measured scene, a subset of points is selected, and the PPFs of all their combinations are calculated. If a similar PPF is present in the object's model, it votes for a 6D pose of the object. If a pose gets enough votes, it is accepted as a detection. The initial detection is followed by an Iterative Closest Point (ICP) procedure to obtain a refined object pose. \begin{figure} \centering \subfloat[Intensity image]{{\includegraphics[width=4cm]{img/frame0.png} }}% \subfloat[Range image]{{\includegraphics[width=4cm]{img/depthmap3.png} }}% \caption{A scan of the top corners of the shelf is used to measure the pose of the shelf with respect to the robot. The detected corner is shown in the range image with a green circle.}% \label{fig:corner}% \end{figure} \begin{figure} \centering \includegraphics[width=0.25\textwidth]{img/ppfNew-eps-converted-to.pdf} \caption{The four-dimensional point pair feature vector $\bf{f}$ of two 3D points $m_{i}$ on the surface of an object and their normals $n_{i}$. The first element, $f_{1}$, is a distance, while $f_{2}$, $f_{3}$ and $f_{4}$ are angles.} \label{fig:ppf} \end{figure}
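For concreteness, the feature of Figure~\ref{fig:ppf} can be computed as in the sketch below (our own illustration; in an actual detector the four components are additionally quantized into discrete bins before being used in the voting scheme).

\begin{verbatim}
import numpy as np

def angle(u, v):
    """Angle in [0, pi] between two vectors."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

def point_pair_feature(m1, n1, m2, n2):
    """f1 = distance between the points, f2/f3 = angles between
    each normal and the connecting vector, f4 = angle between
    the two normals."""
    d = m2 - m1
    return np.array([np.linalg.norm(d),
                     angle(n1, d),
                     angle(n2, d),
                     angle(n1, n2)])

# two oriented points 10 cm apart with perpendicular normals
f = point_pair_feature(np.zeros(3), np.array([0., 0., 1.]),
                       np.array([0.1, 0., 0.]), np.array([1., 0., 0.]))
print(f)   # approximately [0.1, pi/2, 0, pi/2]
\end{verbatim}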
\subsubsection{2D Features} Objects with distinctive texture can be detected using a local-feature-based approach \cite{collet2009object}\cite{grundmann2010robust}; in our case, SIFT is used as the feature descriptor. From the matches between the object template and the scene (Figure~\ref{fig:feature}), the pose can be estimated using a Perspective-n-Point algorithm; in this case EPnP~\cite{lepetit2009epnp} was used. Note that in our case, the exact 3D location of all features is measured with the structured light scanner. This additional information allows the pose to be estimated with much higher accuracy, and allows incorrect matches to be eliminated. \begin{figure*} \centering \subfloat[Grayscale image]{{\includegraphics[width=5.5cm]{img/bin.png} }}% \quad \subfloat[Filtered pointcloud]{{\includegraphics[width=5.5cm]{img/binptcloud.png} }}% \quad \subfloat[Matched object]{{\includegraphics[width=5.5cm]{img/binmatch.png} }}% \caption{Local appearance features of the objects are matched to the scene, allowing the object's pose to be estimated. Using the 3D locations of the feature points from the pointcloud results in higher accuracy and allows incorrect matches to be eliminated.}% \label{fig:feature}% \end{figure*} \subsubsection{Range Image Templates} \begin{figure} \centering \subfloat[Scene]{{\includegraphics[width=5cm]{img/rangeimg.png} }}% \subfloat[Selected template]{{\includegraphics[width=3cm]{img/range_template.png} }}% \caption{A set of 1024 templates (range images from different viewpoints) is aligned to the scene using an optimization function. The final pose of the template with the lowest cost is used if the cost is below a certain threshold.}% \label{fig:Range}% \end{figure} The measured pointcloud can also be represented as a range image (Figure~\ref{fig:Range}). If the above algorithms fail, a brute-force approach can be used in which templates obtained from the model are compared to this range image \cite{germann2007automatic}. We use 1024 templates per object. Each of the templates is aligned to the scene in an optimization procedure. To initialize the optimization, the scene is first segmented using Euclidean clustering, and the closest point of each cluster is used as a starting point, which is initially aligned to the closest point in the range template to be optimized. If a template with sufficiently low cost is found, an extra ICP step is used to refine the object pose. \section{Manipulation} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{img/grippers.png} \caption{\textbf{Top:} The gripper used for picking from the shelf. \textbf{Bottom:} The gripper used for picking from the tote has an extra degree of freedom. The 3D-printed joint is designed to have a bit of play, but seals itself when the vacuum is turned on, so that it does not leak air. The joint can be actuated by a servo motor via a bar linkage. The servo motor is controlled by a microcontroller with an accelerometer and has three modes: unactuated, fixed at a certain angle, or set to keep the head vertical.} \label{fig:gripper} \end{figure} Two suction-based grippers were custom-designed (Figure~\ref{fig:gripper}): one for picking from the shelf and one for picking from the tote. They both consist of pieces of standard aluminum tube and parts made using an additive manufacturing process (fused deposition modeling). Both grippers use high-flow vacuum generated by a modified 2000W vacuum cleaner. The air flow is guided through flexible tubing. A small circular piece of soft foam was glued to the grippers to provide a better seal between the gripper and the object. When grasping an object, the end effector was lowered until a certain force threshold was met. This increased the chance of grasping the object properly.
\begin{figure*} \centering \subfloat[Pointcloud and detected object]{{\includegraphics[width=5.5cm]{img/binsegmentation.png} }}% \quad \subfloat[Octomap approximation]{{\includegraphics[width=5.5cm]{img/bincollision.png} }}% \quad \subfloat[Planned path]{{\includegraphics[width=5.5cm]{img/binpath.png} }}% \caption{When an object is detected, a collision map is calculated using an octomap approximation of the pointcloud. Note that the points on the object are not included: these points are cut out of the pointcloud using a slightly expanded version of the convex hull of the object. A Cartesian path planning algorithm is used to search for a collision-free path to remove the object from the shelf. }% \label{fig:planning}% \end{figure*} \section{Planning} Once an object is detected, the robot needs to move towards it to grasp it. The grasping locations were manually defined per object. Most of the robot movements were planned in advance, offline, in joint space. Only the final part of the path towards the object and out of the shelf needs to be planned online. This was done using Cartesian path planning, which creates a tree structure of Cartesian movements with a resolution of 2 cm. A greedy search algorithm was used and a check for collisions was performed at every node (Figure~\ref{fig:planning}). For removing objects from the tote, no path planning was used and the object was simply lifted vertically. \section{Conclusions} Our approach to the challenges met in the APC was summarized in this paper. The proposed system has some unique features that were not employed by any other team. Our system also suffered from some drawbacks: one of the major limitations during the challenge was the 3D scan acquisition speed, which had to be lowered due to technical issues. Another limitation was that not all of the objects could be detected reliably. The vacuum detection did not work properly, and could not be used to tell whether an object had been successfully picked. Despite these issues, our system was able to pick multiple items correctly in both of the challenges. \bibliographystyle{IEEEtran}
\section{Introduction} \label{intro} We consider approximating the solution of the following distributed control problem. Let $\Omega\subset \mathbb{R}^{d} $ $ (d\geq 2)$ be a Lipschitz polyhedral domain with boundary $\Gamma = \partial \Omega$. The goal is to minimize \begin{align} J(u)=\frac{1}{2}\| y- y_{d}\|^2_{L^{2}(\Omega)}+\frac{\gamma}{2}\|u\|^2_{L^{2}(\Omega)}, \quad \gamma>0, \label{cost1} \end{align} subject to \begin{equation}\label{Ori_problem} \begin{split} -\Delta y&=f+u ~\quad\text{in}~\Omega,\\ y&=g\qquad\quad\text{on}~\partial\Omega. \end{split} \end{equation} It is well known that the optimal control problem \eqref{cost1}-\eqref{Ori_problem} is equivalent to the optimality system \begin{subequations}\label{eq_adeq} \begin{align} -\Delta y &=f+u\quad~\text{in}~\Omega,\label{eq_adeq_a}\\ y&=g\qquad~~~~\text{on}~\partial\Omega,\label{eq_adeq_b}\\ -\Delta z &=y_d-y\quad~\text{in}~\Omega,\label{eq_adeq_c}\\ z&=0\qquad\quad~~\text{on}~\partial\Omega,\label{eq_adeq_d}\\ z-\gamma u&=0\qquad\quad~~\text{in}~\Omega.\label{eq_adeq_e} \end{align} \end{subequations} Different numerical methods for optimal control problems governed by partial differential equations have been extensively studied by many researchers. Numerical methods that have been investigated for this kind of problem include approaches based on standard finite element methods \cite{MR2470142,MR2486088,MR3473693,MR1686151,MR2493560}, mixed finite elements \cite{MR2585589,MR3679859,MR3427830,MR3103238,MR2998296,MR2576747,MR2398768}, and discontinuous Galerkin (DG) methods \cite{MR3022208,MR2644299}. Recently, hybridizable discontinuous Galerkin (HDG) methods have been developed for many partial differential equations; see, e.g., \cite{MR2485455,MR2772094,MR2513831,MR2558780,MR2796169,MR3626531,MR3522968,MR3463051,MR3452794,MR3343926}. HDG methods keep the advantages of DG methods and mixed methods, while also having fewer globally coupled unknowns. HDG methods have now also been applied to many different optimal control problems \cite{MR3508834,HuShenSinglerZhangZheng_HDG_Dirichlet_control1,HuShenSinglerZhangZheng_HDG_Dirichlet_control2,HuShenSinglerZhangZheng_HDG_Dirichlet_control3}. The embedded discontinuous Galerkin (EDG) methods, originally proposed in \cite{MR2317378}, are obtained from HDG methods by replacing the discontinuous finite element space for the numerical traces with a continuous space. The number of degrees of freedom for the EDG method is much smaller than for the HDG method. This gain in computational efficiency can come with a loss: for the Poisson equation, convergence rates for the EDG method are one order lower than for the HDG method \cite{MR2551142}. However, for problems with strong convection the enhanced convergence properties of HDG methods are reduced \cite{FuQiuZhang15}. Therefore, EDG methods are competitive for such problems, and researchers have recently begun to thoroughly investigate EDG methods for various partial differential equations \cite{peraire2011embedded,MR3404541,fernandez2016,MR3528316,fu2017analysis}. Our long-term goal is to devise efficient and accurate methods for complicated optimal flow control problems. EDG methods have potential for such problems; therefore, as a first step, we consider an EDG method to approximate the solution of the above optimal control problem for the Poisson equation.
We use an EDG method with polynomials of degree $k$ to approximate all the variables of the optimality system \eqref{eq_adeq}, i.e., the state $y$, the dual state $z$, the numerical traces, and the fluxes $\bm q = -\nabla y $ and $ \bm p = -\nabla z$. We describe the method in \Cref{sec:EDG}, and in \Cref{sec:analysis} we obtain the error estimates \begin{align*} &\norm{y-{y}_h}_{0,\Omega} = O( h^{k+1}), & &\norm{z-{z}_h}_{0,\Omega} = O( h^{k+1}),\\ &\norm{\bm{q}-\bm{q}_h}_{0,\Omega} = O( h^{k}), & &\norm{\bm{p}-\bm{p}_h}_{0,\Omega} = O( h^{k}), \end{align*} and \begin{align*} &\norm{u-{u}_h}_{0,\Omega} = O( h^{k+1}). \end{align*} We present numerical results in \Cref{sec:numerics}, and then briefly discuss future work. \section{EDG scheme for the optimal control problem} \label{sec:EDG} \subsection{Notation}Throughout the paper we adopt the standard notation $W^{m,p}(\Omega)$ for Sobolev spaces on $\Omega$ with norm $\|\cdot\|_{m,p,\Omega}$ and seminorm $|\cdot|_{m,p,\Omega}$. We denote $W^{m,2}(\Omega)$ by $H^{m}(\Omega)$ with norm $\|\cdot\|_{m,\Omega}$ and seminorm $|\cdot|_{m,\Omega}$, and also $H_0^1(\Omega)=\{v\in H^1(\Omega):v=0 \;\mbox{on}\; \partial \Omega\}$. We denote the $L^2$-inner products on $L^2(\Omega)$ and $L^2(\Gamma)$ by \begin{align*} (v,w) &= \int_{\Omega} vw \quad \forall v,w\in L^2(\Omega),\\ \left\langle v,w\right\rangle &= \int_{\Gamma} vw \quad\forall v,w\in L^2(\Gamma). \end{align*} Furthermore, $ H(\text{div},\Omega) = \{\bm{v}\in [L^2(\Omega)]^d, \nabla\cdot \bm{v}\in L^2(\Omega)\} $. Let $\mathcal{T}_h$ be a collection of disjoint elements that partition $\Omega$. We denote by $\partial \mathcal{T}_h$ the set $\{\partial K: K\in \mathcal{T}_h\}$. For an element $K$ of the collection $\mathcal{T}_h$, let $e = \partial K \cap \Gamma$ denote the boundary face of $ K $ if the $(d-1)$-dimensional Lebesgue measure of $e$ is non-zero. For two elements $K^+$ and $K^-$ of the collection $\mathcal{T}_h$, let $e = \partial K^+ \cap \partial K^-$ denote the interior face between $K^+$ and $K^-$ if the $(d-1)$-dimensional Lebesgue measure of $e$ is non-zero. Let $\varepsilon_h^o$ and $\varepsilon_h^{\partial}$ denote the sets of interior and boundary faces, respectively. We denote by $\varepsilon_h$ the union of $\varepsilon_h^o$ and $\varepsilon_h^{\partial}$. We finally introduce \begin{align*} (w,v)_{\mathcal{T}_h} = \sum_{K\in\mathcal{T}_h} (w,v)_K, \quad\quad\quad\quad\left\langle \zeta,\rho\right\rangle_{\partial\mathcal{T}_h} = \sum_{K\in\mathcal{T}_h} \left\langle \zeta,\rho\right\rangle_{\partial K}. \end{align*} Let $\mathcal{P}^k(D)$ denote the set of polynomials of degree at most $k$ on a domain $D$. We introduce the discontinuous finite element spaces \begin{align} \bm{V}_h &:= \{\bm{v}\in [L^2(\Omega)]^d: \bm{v}|_{K}\in [\mathcal{P}^k(K)]^d, \forall K\in \mathcal{T}_h\},\\ {W}_h &:= \{{w}\in L^2(\Omega): {w}|_{K}\in \mathcal{P}^{k}(K), \forall K\in \mathcal{T}_h\},\\ {M}_h &:= \{{\mu}\in L^2(\varepsilon_h): {\mu}|_{e}\in \mathcal{P}^k(e), \forall e\in \varepsilon_h\}. \end{align} Let $M_h(o)$ and $M_h(\partial)$ denote the spaces defined in the same way as $M_h$, but with $ \varepsilon_h $ replaced by $ \varepsilon_h^o$ and $ \varepsilon_h^{\partial}$, respectively. Spatial derivatives of functions in these discontinuous finite element spaces are understood to be taken piecewise on each element $K\in \mathcal T_h$.
For EDG methods, we replace the discontinuous finite element space $M_h$ for the numerical traces with the continuous finite element space $\widetilde{M}_h$ defined by \begin{equation} \widetilde{M}_h:=M_h \cap \mathcal{C}^0 (\varepsilon_h). \end{equation} The spaces $\widetilde{M}_h(o)$ and $\widetilde{M}_h(\partial)$ are defined in the same way as $M_h(o)$ and $M_h(\partial)$. \subsection{The EDG Formulation} The mixed weak form of the optimality system \eqref{eq_adeq_a}-\eqref{eq_adeq_e} is given by \begin{subequations}\label{mixed} \begin{align} (\bm q,\bm r_1)-( y,\nabla\cdot \bm r_1)+\langle y,\bm r_1\cdot \bm n\rangle&=0,\label{mixed_a}\\ (\nabla\cdot\bm q, w_1)&= ( f+ u, w_1), \label{mixed_b}\\ (\bm p,\bm r_2)-(z,\nabla \cdot\bm r_2)+\langle z,\bm r_2\cdot\bm n\rangle&=0,\label{mixed_c}\\ (\nabla\cdot\bm p, w_2)&= (y_d- y, w_2), \label{mixed_d}\\ ( z-\gamma u,v)&=0,\label{mixed_e} \end{align} \end{subequations} for all $(\bm r_1, w_1,\bm r_2, w_2,v)\in H(\text{div},\Omega)\times L^2(\Omega)\times H(\text{div},\Omega)\times L^2(\Omega)\times L^2(\Omega)$. Note that the optimality condition \eqref{mixed_e} gives $ u = \gamma^{-1} z $. The EDG method seeks approximate fluxes ${\bm{q}}_h,{\bm{p}}_h \in \bm{V}_h $, states $ y_h, z_h \in W_h $, interior element boundary traces $ \widehat{y}_h^o,\widehat{z}_h^o \in \widetilde{M}_h(o) $, and a control $ u_h \in W_h$ satisfying \begin{subequations}\label{HDG_discrete2} \begin{align} (\bm q_h,\bm r_1)_{\mathcal T_h}-( y_h,\nabla\cdot\bm r_1)_{\mathcal T_h}+\langle \widehat y_h^o,\bm r_1\cdot\bm n\rangle_{\partial\mathcal T_h\backslash \varepsilon_h^\partial}&=-\langle I_hg,\bm r_1\cdot\bm n\rangle_{\varepsilon_h^\partial}, \label{HDG_discrete2_a}\\ -(\bm q_h, \nabla w_1)_{\mathcal T_h} +\langle\widehat {\bm q}_h\cdot\bm n,w_1\rangle_{\partial\mathcal T_h} - ( u_h, w_1)_{\mathcal T_h} &= ( f, w_1)_{\mathcal T_h}, \label{HDG_discrete2_b} \end{align} for all $(\bm{r}_1, w_1)\in \bm{V}_h\times W_h$, where $I_h g$ is a continuous interpolation of $g$ on $\varepsilon_h^\partial$, \begin{align} (\bm p_h,\bm r_2)_{\mathcal T_h}-(z_h,\nabla\cdot\bm r_2)_{\mathcal T_h}+\langle \widehat z_h^o,\bm r_2\cdot\bm n\rangle_{\partial\mathcal T_h\backslash\varepsilon_h^\partial}&=0,\label{HDG_discrete2_c}\\ -(\bm p_h, \nabla w_2)_{\mathcal T_h}+\langle\widehat{\bm p}_h\cdot\bm n,w_2\rangle_{\partial\mathcal T_h} + ( y_h, w_2)_{\mathcal T_h}&= (y_d, w_2)_{\mathcal T_h}, \label{HDG_discrete2_d} \end{align} for all $(\bm{r}_2, w_2)\in \bm{V}_h\times W_h$, the conservativity conditions \begin{align} \langle\widehat {\bm q}_h\cdot\bm n,\mu_1\rangle_{\partial\mathcal T_h\backslash\varepsilon^{\partial}_h}&=0\label{HDG_discrete2_e},\\ \langle\widehat{\bm p}_h\cdot\bm n,\mu_2\rangle_{\partial\mathcal T_h\backslash\varepsilon^{\partial}_h}&=0,\label{HDG_discrete2_f} \end{align} for all $\mu_1,\mu_2\in \widetilde{M}_h(o)$, and the optimality condition \begin{align} (z_h-\gamma u_h, w_3)_{\mathcal T_h} &= 0\label{HDG_discrete2_g}, \end{align} for all $ w_3\in W_h$. The EDG discrete optimality condition \eqref{HDG_discrete2_g} gives $ u_h = \gamma^{-1} z_h $.
The numerical traces on $\partial\mathcal{T}_h$ are defined by \begin{align} \widehat{\bm{q}}_h\cdot \bm n &=\bm q_h\cdot\bm n+h^{-1} (y_h-\widehat y_h^o) \quad \mbox{on} \; \partial \mathcal{T}_h\backslash\varepsilon_h^\partial, \label{HDG_discrete2_h}\\ \widehat{\bm{q}}_h\cdot \bm n &=\bm q_h\cdot\bm n+h^{-1} (y_h-I_hg) ~ \ \mbox{on}\; \varepsilon_h^\partial, \label{HDG_discrete2_i}\\ \widehat{\bm{p}}_h\cdot \bm n &=\bm p_h\cdot\bm n+h^{-1} (z_h-\widehat z_h^o)\quad \mbox{on} \; \partial \mathcal{T}_h\backslash\varepsilon_h^\partial,\label{HDG_discrete2_j}\\ \widehat{\bm{p}}_h\cdot \bm n &=\bm p_h\cdot\bm n+h^{-1} z_h\quad\quad\quad\quad\mbox{on}\; \varepsilon_h^\partial.\label{HDG_discrete2_k} \end{align} \end{subequations} Our implementation of the above EDG method and the local solver is similar to the implementation of an HDG scheme for a similar problem described in detail in \cite{HuShenSinglerZhangZheng_HDG_Dirichlet_control2}. \section{Error Analysis} \label{sec:analysis} Next, we provide a convergence analysis of the above EDG method for the optimal control problem. Throughout this section, we assume $ \Omega $ is a bounded convex polyhedral domain, the problem data satisfies $ f \in L^2(\Omega) $ and $ g \in \mathcal{C}^0(\partial \Omega) $, $ h \leq 1 $, and the solution of the optimality system \eqref{eq_adeq} is sufficiently smooth. Below, we prove our main convergence result: \begin{theorem}\label{main_res} We have \begin{align*} \|\bm q-\bm q_h\|_{\mathcal T_h}&\lesssim h^{k}(|\bm q|_{k+1}+|y|_{k+1}+|\bm p|_{k+1}+|z|_{k+1}),\\ \|\bm p-\bm p_h\|_{\mathcal T_h}&\lesssim h^{k}(|\bm q|_{k+1}+|y|_{k+1}+|\bm p|_{k+1}+|z|_{k+1}),\\ \|y-y_h\|_{\mathcal T_h}&\lesssim h^{k+1}(|\bm q|_{k+1}+|y|_{k+1}+|\bm p|_{k+1}+|z|_{k+1}),\\ \|z-z_h\|_{\mathcal T_h}&\lesssim h^{k+1}(|\bm q|_{k+1}+|y|_{k+1}+|\bm p|_{k+1}+|z|_{k+1}),\\ \|u-u_h\|_{\mathcal T_h}&\lesssim h^{k+1}(|\bm q|_{k+1}+|y|_{k+1}+|\bm p|_{k+1}+|z|_{k+1}). \end{align*} \end{theorem} \subsection{Preliminary material} \label{sec:Projectionoperator} The convergence analysis of the EDG method for the Poisson problem without control has been performed in \cite{MR2551142}. The authors of \cite{MR2551142} use a special projection to split the errors and prove the convergence. We do not use the special projection from \cite{MR2551142} in our analysis; instead, we use the standard $L^2$-orthogonal projection operators $\bm{\Pi}_V$ and $\Pi_W$ satisfying \begin{subequations} \label{def_L2} \begin{align} (\bm \Pi_V \bm q, \bm r)_{K}&=(\bm q,\bm r)_{K} \quad \forall \bm r\in \bm{\mathcal{ P}}_k(K),\\ (\Pi_W y,w)_{K}&=(y,w)_{K}\quad \forall w\in \mathcal{P}_{k}(K). \end{align} \end{subequations} In the conclusion, we briefly mention future work connected to the different EDG analysis approach taken here. We use the following well-known bounds: \begin{subequations}\label{classical_ine} \begin{align} \norm {\bm q -\bm\Pi_V \bm q}_{\mathcal T_h} &\le Ch^{k+1} \norm{\bm q}_{k+1,\Omega}, \ \, \norm {y -{\Pi_W y}}_{\mathcal T_h} \le C h^{k+1} \norm{y}_{k+1,\Omega},\\ \norm {y -{\Pi_W y}}_{\partial\mathcal T_h} &\le C h^{k+\frac 1 2} \norm{y}_{k+1,\Omega}, \ \norm {\bm q -\bm\Pi_V \bm q}_{\partial \mathcal T_h} \le C h^{k+\frac 12} \norm{\bm q}_{k+1,\Omega},\\ \norm {y -{ I_h y}}_{\partial\mathcal T_h} &\le C h^{k+\frac 1 2} \norm{y}_{k+1,\Omega}, \ \norm {w}_{\partial \mathcal T_h} \le C h^{-\frac 12} \norm {w}_{ \mathcal T_h}, \forall w\in W_h.
\end{align} \end{subequations} where $I_h $ is a continuous interpolation operator, and we have the same projection error bounds for $\bm p$ and $z$. Next, define the EDG operator $ \mathscr B$ by \begin{equation}\label{def_B1} \begin{split} \hspace{1em}&\hspace{-1em} \mathscr B( \bm v_h,w_h,\mu_h;\bm r_1,w_1,\mu_1) \\ &=(\bm v_h,\bm r_1)_{\mathcal T_h}-( w_h,\nabla\cdot\bm r_1)_{\mathcal T_h}+\langle \mu_h,\bm r_1\cdot\bm n\rangle_{\partial\mathcal T_h\backslash \varepsilon_h^\partial}-(\bm v_h, \nabla w_1)_{\mathcal T_h} \\ &\quad +\langle {\bm v}_h\cdot\bm n +h^{-1} w_h,w_1\rangle_{\partial\mathcal T_h}-\langle h^{-1}\mu_h,w_1 \rangle_{\partial \mathcal{T}_h\backslash \varepsilon_h^\partial}\\ &\quad-\langle {\bm v}_h\cdot\bm n+h^{-1}(w_h-\mu_h),\mu_1\rangle_{\partial\mathcal T_h\backslash\varepsilon^{\partial}_h}. \end{split} \end{equation} By the definition in \eqref{def_B1}, we can rewrite the EDG formulation of the optimality system \eqref{HDG_discrete2} as follows: find $({\bm{q}}_h,{\bm{p}}_h,y_h,z_h,u_h,\widehat y_h^o,\widehat z_h^o)\in \bm{V}_h\times\bm{V}_h\times W_h \times W_h\times W_h\times \widetilde{M}_h(o)\times \widetilde{M}_h(o)$ such that \begin{subequations}\label{EDG_full_discrete} \begin{align} \mathscr B(\bm q_h,y_h,\widehat y_h^o;\bm r_1,w_1,\mu_1)&=( f+ u_h, w_1)_{\mathcal T_h} + \langle I_hg, h^{-1} w_1-\bm r_1\cdot\bm n \rangle_{\varepsilon_h^\partial},\label{EDG_full_discrete_a}\\ \mathscr B(\bm p_h,z_h,\widehat z_h^o;\bm r_2,w_2,\mu_2)&=(y_d-y_h,w_2)_{\mathcal T_h},\label{EDG_full_discrete_b}\\ (z_h-\gamma u_h,w_3)_{\mathcal T_h}&= 0,\label{EDG_full_discrete_e} \end{align} \end{subequations} for all $\left(\bm{r}_1, \bm{r}_2,w_1,w_2,w_3,\mu_1,\mu_2\right)\in \bm{V}_h\times\bm{V}_h\times W_h \times W_h\times W_h\times \widetilde{M}_h(o)\times \widetilde{M}_h(o)$. Below, we present two fundamental properties of the operator $\mathscr B$, and then show that the EDG discretization of the optimality system \eqref{EDG_full_discrete} has a unique solution. The strategy of the proofs of these three results is similar to our earlier HDG work \cite{HuShenSinglerZhangZheng_HDG_Dirichlet_control2}; we include the proofs to make this paper self-contained. \begin{lemma}\label{property_B} For any $ ( \bm v_h, w_h, \mu_h ) \in \bm V_h \times W_h \times M_h(o) $, we have \begin{align*} \hspace{1em}&\hspace{-1em} \mathscr B(\bm v_h,w_h,\mu_h;\bm v_h,w_h,\mu_h)\\ &=(\bm v_h,\bm v_h)_{\mathcal T_h}+ \langle h^{-1} (w_h-\mu_h),w_h-\mu_h\rangle_{\partial\mathcal T_h\backslash \varepsilon_h^\partial}+\langle h^{-1} w_h,w_h\rangle_{\varepsilon_h^\partial}.
\end{align*} \end{lemma} \begin{proof} Compute: \begin{align*} \hspace{1em}&\hspace{-1em} \mathscr B(\bm v_h,w_h,\mu_h;\bm v_h,w_h,\mu_h)\\ &=(\bm v_h,\bm v_h)_{\mathcal T_h}-( w_h,\nabla\cdot\bm v_h)_{\mathcal T_h}+\langle \mu_h,\bm v_h\cdot\bm n\rangle_{\partial\mathcal T_h\backslash \varepsilon_h^\partial}-(\bm v_h, \nabla w_h)_{\mathcal T_h}\\ & \quad +\langle {\bm v}_h\cdot\bm n +h^{-1} w_h,w_h\rangle_{\partial\mathcal T_h}-\langle h^{-1} \mu_h,w_h\rangle_{\partial\mathcal T_h\backslash \varepsilon_h^\partial} \\ & \quad-\langle {\bm v}_h\cdot\bm n+ h^{-1} (w_h - \mu_h),\mu_h \rangle_{\partial\mathcal T_h\backslash\varepsilon^{\partial}_h}\\ &=(\bm v_h,\bm v_h)_{\mathcal T_h}+\langle h^{-1} w_h,w_h\rangle_{\partial\mathcal T_h}-\langle h^{-1} \mu_h,w_h\rangle_{\partial\mathcal T_h\backslash \varepsilon_h^\partial}\\ &\quad -\langle h^{-1} (w_h-\mu_h),\mu_h \rangle_{\partial\mathcal T_h\backslash \varepsilon_h^\partial}\\ &=(\bm v_h,\bm v_h)_{\mathcal T_h}+ \langle h^{-1} (w_h-\mu_h),w_h-\mu_h\rangle_{\partial\mathcal T_h\backslash \varepsilon_h^\partial}+\langle h^{-1} w_h,w_h\rangle_{\varepsilon_h^\partial}. \end{align*} \end{proof} \begin{lemma}\label{identical_equa} We have $$\mathscr B (\bm q_h,y_h,\widehat y_h^o;\bm p_h,-z_h,-\widehat z_h^o) + \mathscr B (\bm p_h,z_h,\widehat z_h^o;-\bm q_h,y_h,\widehat y_h^o) = 0.$$ \end{lemma} \begin{proof} By the definition of $ \mathscr B $, and integration by parts: \begin{align*} \hspace{1em}&\hspace{-1em} \mathscr B (\bm q_h,y_h,\widehat y_h^o;\bm p_h,-z_h,-\widehat z_h^o) + \mathscr B (\bm p_h,z_h,\widehat z_h^o;-\bm q_h,y_h,\widehat y_h^o)\\ &=(\bm{q}_h, \bm p_h)_{{\mathcal{T}_h}}- (y_h, \nabla\cdot \bm p_h)_{{\mathcal{T}_h}}+\langle \widehat{y}_h^o, \bm p_h\cdot \bm{n} \rangle_{\partial{{\mathcal{T}_h}}\backslash {\varepsilon_h^{\partial}}} \\ &\quad + (\bm{q}_h , \nabla z_h)_{{\mathcal{T}_h}} - \langle\bm q_h\cdot\bm n +h^{-1} y_h , z_h \rangle_{\partial{{\mathcal{T}_h}}} + \langle h^{-1} \widehat y_h^o, z_h \rangle_{\partial{{\mathcal{T}_h}}\backslash \varepsilon_h^{\partial}} \\ &\quad+ \langle\bm q_h\cdot\bm n +h^{-1} (y_h-\widehat y_h^o), \widehat z_h^o \rangle_{\partial{{\mathcal{T}_h}}\backslash\varepsilon_h^{\partial}}\\ &\quad-(\bm{p}_h, \bm q_h)_{{\mathcal{T}_h}}+ (z_h, \nabla\cdot \bm q_h)_{{\mathcal{T}_h}} -\langle \widehat{z}_h^o, \bm q_h \cdot \bm{n} \rangle_{\partial{{\mathcal{T}_h}}\backslash {\varepsilon_h^{\partial}}}\\ &\quad - (\bm{p}_h , \nabla y_h)_{{\mathcal{T}_h}} +\langle\bm p_h\cdot\bm n +h^{-1} z_h , y_h \rangle_{\partial{{\mathcal{T}_h}}} - \langle h^{-1} \widehat z_h^o, y_h \rangle_{\partial{{\mathcal{T}_h}}\backslash \varepsilon_h^{\partial}}\\ &\quad- \langle\bm p_h\cdot\bm n + h^{-1} (z_h-\widehat z_h^o), \widehat y_h^o \rangle_{\partial{{\mathcal{T}_h}}\backslash\varepsilon_h^{\partial}}\\ &=0. \end{align*} \end{proof} \begin{proposition}\label{ex_uni} There exists a unique solution of the EDG equations \eqref{EDG_full_discrete}. \end{proposition} \begin{proof} Since the system \eqref{EDG_full_discrete} is finite dimensional, we only need to prove that solutions are unique. To do this, we show zero is the only solution of the system \eqref{EDG_full_discrete} for problem data $y_d = f =g= 0$.
Take $(\bm r_1,w_1,\mu_1) = (\bm p_h,-z_h,-\widehat z_h^o)$, $(\bm r_2,w_2,\mu_2) = (-\bm q_h,y_h,\widehat y_h^o)$, and $w_3 = z_h -\gamma u_h $ in the EDG equations \eqref{EDG_full_discrete_a}, \eqref{EDG_full_discrete_b}, and \eqref{EDG_full_discrete_e}, respectively, and sum to obtain \begin{align*} \hspace{3em}&\hspace{-3em} \mathscr B (\bm q_h,y_h,\widehat y_h^o;\bm p_h,-z_h,-\widehat z_h^o) + \mathscr B (\bm p_h,z_h,\widehat z_h^o;-\bm q_h,y_h,\widehat y_h^o) \\ & = - (y_h,y_h)_{\mathcal T_h} - \gamma^{-1} (z_h,z_h)_{\mathcal T_h} \end{align*} Since $\gamma>0$, \Cref{identical_equa} implies $y_h = u_h = z_h= 0$. Next, take $(\bm r_1,w_1,\mu_1) = (\bm q_h,y_h,\widehat y_h^o)$ and $(\bm r_2,w_2,\mu_2) = (\bm p_h,z_h,\widehat z_h^o)$ in the EDG equations \eqref{EDG_full_discrete_a}-\eqref{EDG_full_discrete_b}. \Cref{property_B} gives $\bm q_h= \bm p_h= \bm 0 $ and $ \widehat y_h^o = \widehat z_h^o=0$. \end{proof} \subsection{Proof of Main Result} For our proof of the convergence results, we follow the strategy in \cite{HuShenSinglerZhangZheng_HDG_Dirichlet_control1} and consider the EDG discretization of the optimality system with the exact optimal control fixed. This results in the following auxiliary problem: find $$({\bm{q}}_h(u),{\bm{p}}_h(u), y_h(u), z_h(u), {\widehat{y}}_h^o(u), {\widehat{z}}_h^o(u))\in \bm{V}_h\times\bm{V}_h\times W_h \times W_h\times \widetilde{M}_h(o)\times \widetilde{M}_h(o)$$ satisfying \begin{subequations}\label{HDG_inter_u} \begin{align} \mathscr B(\bm q_h(u),y_h(u),\widehat{y}_h(u);\bm r_1, w_1,\mu_1)&=( f+ u,w_1)_{\mathcal T_h} \ \nonumber\\ % % % & \quad+ \langle I_hg, h^{-1} w_1-\bm r_1\cdot\bm n \rangle_{\varepsilon_h^\partial},\label{EDG_u_a} \\ \mathscr B(\bm p_h(u),z_h(u),\widehat{z}_h(u);\bm r_2, w_2,\mu_2)&=(y_d-y_h(u), w_2)_{\mathcal T_h},\label{EDG_u_b} \end{align} \end{subequations} for all $\left(\bm{r}_1, \bm{r}_2,w_1,w_2,\mu_1,\mu_2\right)\in \bm{V}_h\times\bm{V}_h \times W_h\times W_h\times \widetilde{M}_h(o)\times \widetilde{M}_h(o)$. We split our proof into seven steps, and estimate the errors between the solutions of the exact optimality system, the auxiliary problem, and the EDG discretization of the optimality system. We start with the auxiliary problem and the mixed formulation of the optimality system \eqref{mixed_a}-\eqref{mixed_d}. In Steps 1-3 below, we estimate the errors in the state $ y $ and the flux $ \bm{q} $. We split the errors with the $ L^2 $ projections and the continuous interpolation operator. We use the following notation: \begin{align}\label{notation} \begin{aligned \delta^{\bm q} &=\bm q-{\bm\Pi}_V\bm q,\\ \delta^y&=y- \Pi_W y,\\ \delta^{\widehat y} &= y-I_h y,\\ \widehat {\bm\delta}_1 &= \delta^{\bm q}\cdot\bm n+ h^{-1} (\delta^y- \delta^{\widehat{y}}), \end{aligned} && \begin{aligned \varepsilon^{\bm q}_h &= {\bm\Pi}_V \bm q-\bm q_h(u),\\ \varepsilon^{y}_h &= \Pi_W y-y_h(u),\\ \varepsilon^{\widehat y}_h &= I_h y-\widehat{y}_h(u),\\ \widehat {\bm \varepsilon }_1 &= \varepsilon_h^{\bm q}\cdot\bm n+h^{-1} (\varepsilon^y_h-\varepsilon_h^{\widehat y}). \end{aligned} \end{align} where $\widehat y_h(u) = \widehat y_h^o(u)$ on $\varepsilon_h^o$ and $\widehat y_h(u) = I_h g$ on $\varepsilon_h^{\partial}$, which implies $\varepsilon_h^{\widehat y} = 0$ on $\varepsilon_h^{\partial}$. 
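Before estimating the individual terms, we record for orientation (this remark is ours) that the auxiliary-problem errors split by the triangle inequality as
\begin{align*}
\|\bm q-\bm q_h(u)\|_{\mathcal T_h}\le \|\delta^{\bm q}\|_{\mathcal T_h}+\|\varepsilon_h^{\bm q}\|_{\mathcal T_h},\qquad
\|y-y_h(u)\|_{\mathcal T_h}\le \|\delta^{y}\|_{\mathcal T_h}+\|\varepsilon_h^{y}\|_{\mathcal T_h},
\end{align*}
and the $\delta$ terms are already of optimal order by \eqref{classical_ine}; hence the steps below only need to bound the projected errors $\varepsilon_h^{\bm q}$ and $\varepsilon_h^{y}$, and the same splitting applies to $\bm p$ and $z$.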
\subsubsection{Step 1: The error equation for part 1 of the auxiliary problem \eqref{EDG_u_a}.} \label{subsec:proof_step1} \begin{lemma} \label{lemma:step_1} We have \begin{align} \label{error_y} \hspace{3em}\hspace{-3em} \mathscr B(\varepsilon^{\bm q}_h,\varepsilon^y_h,\varepsilon^{\widehat{y}}_h;\bm r_1, w_1,\mu_1) =-\langle \delta^{\widehat{y}},\bm r_1\cdot\bm n \rangle_{\partial \mathcal{T}_h}-\langle\widehat{\bm \delta}_1,w_1\rangle_{\partial \mathcal{T}_h}+\langle \widehat{\bm \delta}_1,\mu_1 \rangle_{\partial \mathcal{T}_h\backslash\varepsilon_h^\partial}. \end{align} \end{lemma} \begin{proof} By the definition of the EDG operator $\mathscr B$ in \eqref{def_B1}, we have \begin{align*} \hspace{1em}&\hspace{-1em} \mathscr B ({\bm \Pi}_V\bm q,{\Pi}_W y,I_h y;\bm r_1,w_1,\mu_1) \\ % % % & =({\bm \Pi}_V\bm q,\bm r_1)_{\mathcal T_h}-({\Pi}_W y,\nabla\cdot\bm r_1)_{\mathcal T_h}+\langle I_h y,\bm r_1\cdot\bm n\rangle_{\partial\mathcal T_h\backslash\varepsilon_h^\partial}\\ % % % &\quad-({\bm \Pi}_V\bm q , \nabla w_1)_{\mathcal T_h} +\langle {\bm \Pi}_V\bm q \cdot\bm n +h^{-1} {\Pi}_W y ,w_1\rangle_{\partial\mathcal T_h}-\langle h^{-1} I_h y ,w_1\rangle_{\partial\mathcal T_h\backslash\varepsilon_h^\partial}\nonumber\\ % % % &\quad-\langle {\bm \Pi}_V\bm q \cdot\bm n+h^{-1} ( {\Pi}_W y -I_h y ),\mu_1\rangle_{\partial\mathcal T_h\backslash\varepsilon^{\partial}_h}. \end{align*} Using the properties of the $L^2$-orthogonal projections \eqref{def_L2} gives \begin{align*} \hspace{1em}&\hspace{-1em} \mathscr B ({\bm \Pi}_V\bm q,{\Pi}_W y,I_h y;\bm r_1,w_1,\mu_1) \\ &=(\bm q,\bm r_1)_{\mathcal T_h}-( y,\nabla\cdot\bm r_1)_{\mathcal T_h}+\langle y,\bm r_1\cdot\bm n\rangle_{\partial\mathcal T_h\backslash\varepsilon_h^\partial}-\langle \delta^{\widehat{y}},\bm r_1\cdot\bm n \rangle_{\partial\mathcal T_h\backslash\varepsilon_h^\partial}\\ % % % &\quad-(\bm q,\nabla w_1)_{\mathcal{T}_h}+\langle \bm q\cdot\bm n,w_1 \rangle_{\partial \mathcal{T}_h}+\langle h^{-1} y,w_1 \rangle_{\varepsilon_h^\partial}-\langle \delta^{\bm q}\cdot\bm n+h^{-1} \delta^y,w_1 \rangle_{\partial \mathcal{T}_h}\\ % % % &\quad+\langle h^{-1} \delta^{\widehat{y}},w_1 \rangle_{\partial\mathcal T_h\backslash\varepsilon_h^\partial}-\langle \bm q\cdot \bm n,\mu_1 \rangle_{\partial\mathcal T_h\backslash\varepsilon_h^\partial} +\langle \widehat{\bm \delta}_1,\mu_1 \rangle_{\partial\mathcal T_h\backslash\varepsilon_h^\partial}. % % % \end{align*} The exact solution $(\bm q, y)$ satisfies \begin{align*} (\bm q,\bm r_1)_{\mathcal T_h}-(y,\nabla\cdot\bm r_1)_{\mathcal T_h}+\langle y,\bm r_1\cdot\bm n\rangle_{\partial\mathcal T_h}&=0,\\ % % % -(\bm q , \nabla w_1)_{\mathcal T_h}+\langle {\bm q}\cdot\bm n,w_1\rangle_{\partial\mathcal T_h}&= (f+u, w_1)_{\mathcal T_h},\\ % % % \langle {\bm q}\cdot\bm n,\mu_1\rangle_{\partial\mathcal T_h\backslash\varepsilon^{\partial}_h}&=0, \end{align*} and therefore \begin{align*} \hspace{1em}&\hspace{-1em} \mathscr B ({\bm \Pi}_V\bm q,{\Pi}_W y,I_h y;\bm r_1,w_1,\mu_1) \\ &=-\langle y,\bm r_1\cdot\bm n \rangle_{\varepsilon_h^\partial}-\langle \delta^{\widehat{y}},\bm r_1\cdot\bm n \rangle_{\partial\mathcal T_h\backslash\varepsilon_h^\partial}+(f+u,w_1)_{\mathcal{T}_h}+\langle h^{-1} y,w_1 \rangle_{\varepsilon_h^\partial}\\ % % % &\quad-\langle \delta^{\bm q}\cdot\bm n+h^{-1} \delta^y,w_1 \rangle_{\partial \mathcal{T}_h}+\langle h^{-1} \delta^{\widehat{y}},w_1 \rangle_{\partial\mathcal T_h\backslash\varepsilon_h^\partial}+\langle \widehat{\bm \delta}_1,\mu_1 \rangle_{\partial\mathcal T_h\backslash\varepsilon_h^\partial}. \end{align*} Subtracting equation \eqref{EDG_u_a} from the above equation completes the proof. \end{proof} \subsubsection{Step 2: Estimate for $\varepsilon_h^{\bm q}$.} \label{subsec:proof_step2} \begin{lemma} \label{energy_norm_q} We have \begin{align} \|\varepsilon_h^{\bm q}\|_{\mathcal{T}_h}+h^{-\frac{1}{2}}\|\varepsilon_h^y-\varepsilon_h^{\widehat{y}}\|_{\partial \mathcal{T}_h} \lesssim h^{k} (|\bm q|_{k+1}+|y|_{k+1}). \end{align} \end{lemma} \begin{proof} Take $(\bm r_1,w_1,\mu_1)=(\varepsilon_h^{\bm q},\varepsilon_h^y,\varepsilon_h^{\widehat{y}})$ in equation \eqref{error_y} and use $\varepsilon_h^{\widehat{y}}=0$ on $\varepsilon_h^\partial$ to get \begin{align*} \hspace{3em}\hspace{-3em} \mathscr B(\varepsilon^{\bm q}_h,\varepsilon^y_h,\varepsilon^{\widehat{y}}_h;\varepsilon_h^{\bm q},\varepsilon_h^y,\varepsilon_h^{\widehat{y}}) &=-\langle \delta^{\widehat{y}},\varepsilon_h^{\bm q}\cdot\bm n \rangle_{\partial \mathcal{T}_h}-\langle\widehat{\bm \delta}_1,\varepsilon_h^y\rangle_{\partial \mathcal{T}_h}+\langle \widehat{\bm \delta}_1,\varepsilon_h^{\widehat{y}} \rangle_{\partial \mathcal{T}_h\backslash\varepsilon_h^\partial}\\ &=-\langle \delta^{\widehat{y}},\varepsilon_h^{\bm q}\cdot\bm n \rangle_{\partial \mathcal{T}_h}-\langle\widehat{\bm \delta}_1,\varepsilon_h^y-\varepsilon_h^{\widehat{y}}\rangle_{\partial \mathcal{T}_h}. \end{align*} Next, we have \begin{align*} -\langle \delta^{\widehat{y}},\varepsilon_h^{\bm q}\cdot\bm n \rangle_{\partial \mathcal{T}_h}&\le C\| \delta^{\widehat{y}} \|_{\partial \mathcal{T}_h}\|\varepsilon_h^{\bm q}\|_{\partial \mathcal{T}_h} \le C h^{-\frac{1}{2}} \| \delta^{\widehat{y}} \|_{\partial \mathcal{T}_h}\| \varepsilon_h^{\bm q}\|_{\mathcal{T}_h},\\ -\langle\widehat{\bm \delta}_1,\varepsilon_h^y-\varepsilon_h^{\widehat{y}}\rangle_{\partial \mathcal{T}_h}&=-\langle \delta^{\bm q}\cdot\bm n+h^{-1}(\delta^y-\delta^{\widehat{y}}),\varepsilon_h^y-\varepsilon_h^{\widehat{y}}\rangle_{\partial \mathcal{T}_h}\\ &\le (\|\delta^{\bm q}\|_{\partial \mathcal{T}_h}+h^{-1}\| \delta^y \|_{\partial \mathcal{T}_h}+h^{-1}\| \delta^{\widehat{y}} \|_{\partial \mathcal{T}_h})\| \varepsilon_h^y-\varepsilon_h^{\widehat{y}} \|_{\partial \mathcal{T}_h}\\ &=(h^\frac{1}{2}\|\delta^{\bm q}\|_{\partial \mathcal{T}_h}+h^{-\frac{1}{2}}\| \delta^y \|_{\partial \mathcal{T}_h}+h^{-\frac{1}{2}}\| \delta^{\widehat{y}} \|_{\partial \mathcal{T}_h})h^{-\frac{1}{2}}\| \varepsilon_h^y-\varepsilon_h^{\widehat{y}} \|_{\partial \mathcal{T}_h}. \end{align*} The energy property of the operator $\mathscr B$ in \Cref{property_B} gives \begin{align*} \|\varepsilon_h^{\bm q}\|_{\mathcal{T}_h}+h^{-\frac{1}{2}}\|\varepsilon_h^y-\varepsilon_h^{\widehat{y}}\|_{\partial \mathcal{T}_h} &\lesssim h^{-\frac{1}{2}}\| \delta^{\widehat{y}} \|_{\partial \mathcal{T}_h}+h^{-\frac{1}{2}}\|\delta^y\|_{\partial \mathcal{T}_h}+h^\frac{1}{2}\|\delta^{\bm q}\|_{\partial \mathcal{T}_h}\\ &\lesssim h^{k} (|\bm q|_{k+1}+|y|_{k+1}). \end{align*} \end{proof} \subsubsection{Step 3: Estimate for $\varepsilon_h^y$ by a duality argument.} \label{subsec:proof_step_3} Next, we introduce the dual problem for any given $\Theta$ in $L^2(\Omega)$: \begin{equation}\label{Dual_PDE} \begin{split} \bm\Phi+\nabla\Psi&=0\qquad~\text{in}\ \Omega,\\ \nabla\cdot\bm{\Phi}&=\Theta \qquad\text{in}\ \Omega,\\ \Psi&=0\qquad~\text{on}\ \partial\Omega.
\end{split} \end{equation} Since the domain $\Omega$ is convex, we have the regularity estimate \begin{align}\label{dual_esti} \norm{\bm \Phi}_{1,\Omega} + \norm{\Psi}_{2,\Omega} \le C_{\text{reg}} \norm{\Theta}_\Omega. \end{align} In the proof below for estimating $\varepsilon_h^{y}$, we use the following notation: \begin{align} \delta^{\bm \Phi} &=\bm \Phi-{\bm\Pi}_V\bm \Phi, \quad \delta^\Psi=\Psi- {\Pi}_W \Psi, \quad \delta^{\widehat \Psi} = \Psi-I_h \Psi.\label{notation_2} \end{align} \begin{lemma} \label{dual_y} We have \begin{align} \| \varepsilon_h^y \|_{\mathcal{T}_h} \lesssim h^{k+1} (|\bm q|_{k+1}+|y|_{k+1}). \end{align} \end{lemma} \begin{proof} First, take $(\bm r_1,w_1,\mu_1)=(\bm \Pi_V \bm \Phi, -\Pi_W \Psi,-I_h \Psi)$ in equation \eqref{error_y} to get \begin{align*} \hspace{1em}&\hspace{-1em} \mathscr B(\varepsilon^{\bm q}_h,\varepsilon^y_h,\varepsilon^{\widehat{y}}_h;\bm \Pi_V \bm \Phi, -\Pi_W \Psi,-I_h \Psi)\\ &=(\varepsilon_h^{\bm q},\bm \Pi_V \bm \Phi)_{\mathcal T_h}-( \varepsilon_h^y,\nabla\cdot\bm \Pi_V \bm \Phi)_{\mathcal T_h}+\langle \varepsilon_h^{\widehat{y}},\bm \Pi_V \bm \Phi\cdot\bm n\rangle_{\partial\mathcal T_h\backslash \varepsilon_h^\partial} \\ &\quad +(\varepsilon_h^{\bm q}, \nabla \Pi_W \Psi)_{\mathcal T_h}-\langle \varepsilon_h^{\bm q}\cdot\bm n +h^{-1} \varepsilon_h^y,\Pi_W \Psi\rangle_{\partial\mathcal T_h}+\langle h^{-1} \varepsilon_h^{\widehat{y}},\Pi_W \Psi \rangle_{\partial \mathcal{T}_h\backslash \varepsilon_h^\partial}\\ &\quad+\langle \varepsilon_h^{\bm q}\cdot\bm n+h^{-1}(\varepsilon_h^y-\varepsilon_h^{\widehat{y}}),I_h \Psi\rangle_{\partial\mathcal T_h\backslash\varepsilon^{\partial}_h}. \end{align*} Next, integration by parts and the projection properties \eqref{def_L2} give \begin{align*} -(\varepsilon_h^y,\nabla\cdot \bm \Pi_V \bm \Phi)_{\mathcal{T}_h}&=(\nabla \varepsilon_h^y,\bm \Phi)_{\mathcal{T}_h}-\langle \varepsilon_h^y,\bm \Pi_V\bm \Phi \cdot \bm n\rangle_{\partial \mathcal{T}_h}\\ &=-(\varepsilon_h^y,\nabla\cdot \bm \Phi)_{\mathcal{T}_h}+\langle \varepsilon_h^y,\delta^{\bm \Phi} \cdot \bm n\rangle_{\partial \mathcal{T}_h},\\ ( \varepsilon_h^{\bm q},\nabla \Pi_W \Psi)_{\mathcal{T}_h}&=-(\nabla\cdot \varepsilon_h^{\bm q}, \Psi)_{\mathcal{T}_h}+\langle \varepsilon_h^{\bm q}\cdot\bm n,\Pi_W \Psi \rangle_{\partial \mathcal{T}_h}\\ &=(\varepsilon_h^{\bm q},\nabla \Psi)_{\mathcal{T}_h}-\langle \varepsilon_h^{\bm q}\cdot\bm n,\delta^\Psi \rangle_{\partial \mathcal{T}_h}.
\end{align*} Since $ \bm \Phi $ and $ \Psi $ satisfy the dual problem \eqref{Dual_PDE} with $\Theta=-\varepsilon_h^y$, we obtain \begin{align*} \hspace{1em}&\hspace{-1em} \mathscr B(\varepsilon^{\bm q}_h,\varepsilon^y_h,\varepsilon^{\widehat{y}}_h;\bm \Pi_V \bm \Phi, -\Pi_W \Psi,-I_h \Psi)\\ % % % &=(\varepsilon_h^{\bm q}, \bm \Phi)_{\mathcal T_h}-( \varepsilon_h^y,\nabla\cdot \bm \Phi)_{\mathcal T_h}+\langle \varepsilon_h^y-\varepsilon_h^{\widehat{y}},\delta^{\bm \Phi}\cdot\bm n\rangle_{\partial\mathcal T_h} \\ % % % &\quad +(\varepsilon_h^{\bm q}, \nabla \Psi)_{\mathcal T_h}-\langle \varepsilon_h^{\bm q}\cdot\bm n,\Psi \rangle_{\partial \mathcal{T}_h}-\langle h^{-1} \varepsilon_h^y,\Pi_W \Psi\rangle_{\partial\mathcal T_h}\\ % % % &\quad+\langle h^{-1} \varepsilon_h^{\widehat{y}},\Pi_W \Psi \rangle_{\partial \mathcal{T}_h}+\langle \varepsilon_h^{\bm q}\cdot\bm n+h^{-1} (\varepsilon_h^y-\varepsilon_h^{\widehat{y}}),I_h \Psi\rangle_{\partial\mathcal T_h}\\ % % % &=(\varepsilon_h^y,\varepsilon_h^y)_{\mathcal{T}_h}+\langle \varepsilon_h^y-\varepsilon_h^{\widehat{y}},\delta^{\bm \Phi}\cdot \bm n \rangle_{\partial \mathcal{T}_h}-\langle \varepsilon_h^{\bm q}\cdot\bm n,\delta^{\widehat{\Psi}} \rangle_{\partial \mathcal{T}_h}\\ % % % &\quad +\langle h^{-1} (\varepsilon_h^y-\varepsilon_h^{\widehat{y}}),\delta^\Psi-\delta^{\widehat{\Psi}} \rangle_{\partial \mathcal{T}_h}, \end{align*} where we used $\langle \varepsilon_h^{\widehat{y}},\bm \Phi\cdot \bm n \rangle_{\partial \mathcal{T}_h\backslash\varepsilon_h^\partial}=0$ and $\Psi= \delta^{\widehat{y}}=0$ on $\varepsilon_h^\partial$. On the other hand, from equation \eqref{error_y} and $\langle \delta^{\widehat{y}},{\bm \Phi}\cdot\bm n \rangle_{\partial \mathcal{T}_h} =0$ we have \begin{align*} \hspace{4em}&\hspace{-4em} \mathscr B(\varepsilon^{\bm q}_h,\varepsilon^y_h,\varepsilon^{\widehat{y}}_h;\bm \Pi_V \bm \Phi, -\Pi_W \Psi,-I_h \Psi) \\ &=-\langle \delta^{\widehat{y}},\bm \Pi_V \bm \Phi\cdot\bm n \rangle_{\partial \mathcal{T}_h}+\langle\widehat{\bm \delta}_1,\delta^\Psi-\delta^{\widehat{\Psi}}\rangle_{\partial \mathcal{T}_h}\\ &=\langle \delta^{\widehat{y}},\delta^{\bm \Phi}\cdot\bm n \rangle_{\partial \mathcal{T}_h}+\langle\widehat{\bm \delta}_1,\delta^\Psi-\delta^{\widehat{\Psi}}\rangle_{\partial \mathcal{T}_h}. \end{align*} Comparing the two equations above, we have \begin{align*} \|\varepsilon_h^y\|_{\mathcal{T}_h}^2&=\langle \delta^{\widehat{y}},\delta^{\bm \Phi}\cdot\bm n \rangle_{\partial \mathcal{T}_h}+\langle\widehat{\bm \delta}_1,\delta^\Psi-\delta^{\widehat{\Psi}}\rangle_{\partial \mathcal{T}_h}-\langle \varepsilon_h^y-\varepsilon_h^{\widehat{y}},\delta^{\bm \Phi}\cdot \bm n \rangle_{\partial \mathcal{T}_h}\\ &\quad +\langle \varepsilon_h^{\bm q}\cdot\bm n,\delta^{\widehat{\Psi}} \rangle_{\partial \mathcal{T}_h}-\langle h^{-1} (\varepsilon_h^y-\varepsilon_h^{\widehat{y}}),\delta^\Psi-\delta^{\widehat{\Psi}} \rangle_{\partial \mathcal{T}_h}\\ &=:T_1+T_2+T_3+T_4+T_5.
\end{align*} By the Cauchy-Schwarz inequality, \Cref{energy_norm_q}, and \eqref{classical_ine}, we have \begin{align*} T_1 &\le \|\delta^{\widehat{y}}\|_{\partial \mathcal{T}_h} \|\delta^{\bm \Phi}\|_{\partial \mathcal{T}_h}\lesssim h^\frac{1}{2} \|\delta^{\widehat{y}}\|_{\partial \mathcal{T}_h}\|\bm \Phi\|_{1,\Omega}\lesssim h^\frac{1}{2} \|\delta^{\widehat{y}}\|_{\partial \mathcal{T}_h}\|\varepsilon_h^y\|_{\mathcal{T}_h}\\ % % % &\lesssim h^{k+1}(|\bm q|_{k+1}+|y|_{k+1})\| \varepsilon_h^y \|_{\mathcal{T}_h},\\ % % % T_2 &\lesssim h^{\frac{3}{2}} (\|\delta^{\bm q}\|_{\partial \mathcal{T}_h}+h^{-1} \|\delta^y-\delta^{\widehat{y}}\|_{\partial \mathcal{T}_h})\|\Psi\|_{2,\Omega}\\ % % % &\lesssim ( h\|\delta^{\bm q}\|_{\mathcal{T}_h} +h^{\frac{1}{2}}(\|\delta^y\|_{\partial \mathcal{T}_h}+\| \delta^{\widehat{y}} \|_{\partial \mathcal{T}_h}))\| \varepsilon_h^y \|_{\mathcal{T}_h}\\ % % % &\lesssim h^{k+1}(|\bm q|_{k+1}+|y|_{k+1})\| \varepsilon_h^y \|_{\mathcal{T}_h},\\ % % % T_3&\le \|\varepsilon_h^y-\varepsilon_h^{\widehat{y}}\|_{\partial \mathcal{T}_h}\| \delta^{\bm \Phi} \|_{\partial \mathcal{T}_h}\lesssim h^\frac{1}{2} \|\varepsilon_h^y-\varepsilon_h^{\widehat{y}}\|_{\partial \mathcal{T}_h} \|\bm \Phi\|_{1,\Omega}\\ % % % &\lesssim h^{1/2} \|\varepsilon_h^y-\varepsilon_h^{\widehat{y}}\|_{\partial \mathcal{T}_h} \|\varepsilon_h^y\|_{\mathcal{T}_h}\lesssim h^{k+1}(|\bm q|_{k+1}+|y|_{k+1})\| \varepsilon_h^y \|_{\mathcal{T}_h},\\ % % % T_4 &\lesssim h\|\varepsilon_h^{\bm q}\|_{\mathcal{T}_h}\|\varepsilon_h^y\|_{\mathcal{T}_h}\lesssim h^{k+1}(|\bm q|_{k+1}+|y|_{k+1})\| \varepsilon_h^y \|_{\mathcal{T}_h},\\ % % % T_5 &\lesssim h^\frac{1}{2} \| \varepsilon_h^y-\varepsilon_h^{\widehat{y}} \|_{\partial \mathcal{T}_h} \| \varepsilon_h^y \|_{\mathcal{T}_h}\lesssim h^{k+1}(|\bm q|_{k+1}+|y|_{k+1})\| \varepsilon_h^y \|_{\mathcal{T}_h}. \end{align*} Summing $T_1$ to $T_5$ gives \begin{align*} \| \varepsilon_h^y \|_{\mathcal{T}_h} \lesssim h^{k+1} (|\bm q|_{k+1}+|y|_{k+1}). \end{align*} \end{proof} The triangle inequality gives convergence rates for $\|\bm q -\bm q_h(u)\|_{\mathcal T_h}$ and $\|y -y_h(u)\|_{\mathcal T_h}$: \begin{lemma}\label{le} We have \begin{subequations} \begin{align} \|\bm q -\bm q_h(u)\|_{\mathcal T_h} &\le \|\delta^{\bm q}\|_{\mathcal T_h} + \|\varepsilon_h^{\bm q}\|_{\mathcal T_h} \lesssim h^{k}(|\bm q|_{k+1}+|y|_{k+1}),\\ \|y -y_h(u)\|_{\mathcal T_h} &\le \|\delta^{y}\|_{\mathcal T_h} + \|\varepsilon_h^{y}\|_{\mathcal T_h} \lesssim h^{k+1}(|\bm q|_{k+1}+|y|_{k+1}). \end{align} \end{subequations} \end{lemma} \subsubsection{Step 4: The error equation for part 2 of the auxiliary problem \eqref{EDG_u_b}.} \label{subsec:proof_step_4} Next, we consider the dual equation \eqref{mixed_c}-\eqref{mixed_d} in the optimality system and compare it with the second part of the auxiliary EDG equation \eqref{EDG_u_b}.
We split the errors as before; define \begin{align}\label{notation_1} \begin{aligned} \delta^{\bm p} &=\bm p-{\bm\Pi}_V\bm p,\\ \delta^z&=z- \Pi_W z,\\ \delta^{\widehat z} &= z-I_h z,\\ \widehat {\bm\delta}_2 &= \delta^{\bm p}\cdot\bm n+ h^{-1} (\delta^z- \delta^{\widehat{z}}), \end{aligned} && \begin{aligned} \varepsilon^{\bm p}_h &= {\bm\Pi}_V \bm p-\bm p_h(u),\\ \varepsilon^{z}_h &= \Pi_W z-z_h(u),\\ \varepsilon^{\widehat z}_h &= I_h z-\widehat{z}_h(u),\\ \widehat {\bm \varepsilon }_2 &= \varepsilon_h^{\bm p}\cdot\bm n+h^{-1} (\varepsilon^z_h-\varepsilon_h^{\widehat z}), \end{aligned} \end{align} where $\widehat z_h(u) = \widehat z_h^o(u)$ on $\varepsilon_h^o$ and $\widehat z_h(u) = 0$ on $\varepsilon_h^{\partial}$. This gives $\varepsilon_h^{\widehat z} = 0$ on $\varepsilon_h^{\partial}$. \begin{lemma}\label{lemma:step2_first_lemma} We have \begin{align}\label{error_z} \hspace{3em}&\hspace{-3em} \mathscr B(\varepsilon^{\bm p}_h,\varepsilon^z_h,\varepsilon^{\widehat{z}}_h;\bm r_2, w_2,\mu_2) \ \nonumber\\ &=\langle\delta^{\widehat{z}},\bm r_2\cdot \bm n\rangle_{\partial \mathcal T_h}+(y_h(u)- y, w_2)_{\mathcal T_h}+ \langle \widehat{\bm \delta}_2,w_2 - \mu_2\rangle_{\partial\mathcal T_h}. \end{align} \end{lemma} The proof is similar to the proof of \Cref{lemma:step_1} and is omitted. \subsubsection{Step 5: Estimates for $\varepsilon_h^{\bm p}$ and $\varepsilon_h^z$ by an energy and duality argument.} \label{subsec:proof_step_5} \begin{lemma}\label{e_sec} Let $ \kappa$ be any positive constant. Then there exists a constant $ C $ that does not depend on $ \kappa $ such that \begin{align} \|\varepsilon_h^{\bm p}\|_{\mathcal T_h}&+h^{-\frac{1}{2}}\|\varepsilon_h^z-\varepsilon_h^{\widehat z}\|_{\partial\mathcal T_h} \le \mathbb E + \kappa \| \varepsilon^z_h \|_{\mathcal T_h},\label{error_es_p} \end{align} where \begin{align*} \mathbb E = Ch^{-\frac{1}{2}}\| \delta^{\widehat{z}} \|_{\partial\mathcal T_h} + Ch^{-\frac{1}{2}}\| \delta^{z} \|_{\partial\mathcal T_h} + \frac C {\kappa} \| y_h(u) - y \|_{\mathcal T_h} +C\| \delta^{\bm p} \|_{ \mathcal{T}_h}. \end{align*} \end{lemma} \begin{proof} Taking $(\bm r_2,w_2,\mu_2) = (\varepsilon^{\bm p}_h,\varepsilon^z_h,\varepsilon^{\widehat z}_h)$ in \eqref{error_z} in \Cref{lemma:step2_first_lemma} gives \begin{align*} \hspace{1em}&\hspace{-1em} \mathscr B ( \varepsilon^{ \bm p}_h, \varepsilon^z_h, \varepsilon^{\widehat z}_h;\varepsilon^{\bm p}_h, \varepsilon^z_h, \varepsilon^{\widehat z}_h )\\ & =\langle\delta^{\widehat{z}},\varepsilon_h^{\bm p}\cdot \bm n\rangle_{\partial \mathcal T_h}+(y_h(u)- y, \varepsilon_h^z)_{\mathcal T_h}+ \langle \widehat{\bm \delta}_2,\varepsilon_h^z - \varepsilon_h^{\widehat{z}}\rangle_{\partial\mathcal T_h}\\ &\le Ch^{-\frac{1}{2}}\|\delta^{\widehat{z}}\|_{\partial \mathcal{T}_h}\| \varepsilon_h^{\bm p} \|_{\mathcal{T}_h}+\frac{1}{\kappa}\| y_h(u)-y \|_{\mathcal{T}_h}^2+\kappa \| \varepsilon_h^z \|_{\mathcal{T}_h}^2 \\ &\quad + C(h^{\frac{1}{2}}\| \delta^{\bm p} \|_{\mathcal{T}_h} +h^{-\frac{1}{2}}\| \delta^z-\delta^{\widehat{z}} \|_{\partial \mathcal{T}_h}) h^{-\frac{1}{2}}\| \varepsilon_h^z-\varepsilon_h^{\widehat{z}} \|_{\partial \mathcal{T}_h}.
\end{align*} \Cref{property_B} gives \begin{align*} \|\varepsilon_h^{\bm p}\|_{\mathcal T_h}&+h^{-\frac{1}{2}}\|\varepsilon_h^z-\varepsilon_h^{\widehat z}\|_{\partial\mathcal T_h}\nonumber\\ &\le Ch^{-\frac{1}{2}}\| \delta^{\widehat{z}} \|_{\partial\mathcal T_h} + Ch^{-\frac{1}{2}}\| \delta^{z} \|_{\partial\mathcal T_h} + \frac C {\kappa} \| y_h(u) - y \|_{\mathcal T_h} \\ &\quad+C\| \delta^{\bm p} \|_{ \mathcal{T}_h}+ \kappa \| \varepsilon^z_h \|_{\mathcal T_h}, \end{align*} where $\kappa$ is any positive constant. \end{proof} \begin{lemma} We have \begin{subequations} \begin{align} \norm{\varepsilon_h^{\bm p}}_{\mathcal T_h} &\lesssim h^{k}(|\bm q|_{k+1}+|y|_{k+1}+|\bm p|_{k+1}+|z|_{k+1}),\label{var_p}\\ \|\varepsilon^z_h\|_{\mathcal T_h} &\lesssim h^{k+1}(|\bm q|_{k+1}+|y|_{k+1}+|\bm p|_{k+1}+|z|_{k+1}).\label{var_z} \end{align} \end{subequations} \end{lemma} \begin{proof} First, take $(\bm r_2,w_2,\mu_2) = ( {\bm\Pi}_V\bm{\Phi},-{\Pi}_W\Psi,-I_h\Psi)$ in \eqref{error_z} in \Cref{lemma:step2_first_lemma} to obtain % \begin{align*} \mathscr B &(\varepsilon^{\bm p}_h,\varepsilon^z_h,\varepsilon^{\widehat z}_h;{\bm\Pi}_V\bm{\Phi},-{\Pi}_W\Psi,-I_h\Psi)\\ & = \langle\delta^{\widehat{z}},\bm \Pi_V \bm \Phi\cdot \bm n\rangle_{\partial \mathcal T_h}-(y_h(u)- y, \Pi_W \Psi)_{\mathcal T_h}- \langle \widehat{\bm \delta}_2,\delta^\Psi - \delta^{\widehat{\Psi}}\rangle_{\partial\mathcal T_h}. \end{align*} % Next, consider the dual problem \eqref{Dual_PDE} and let $\Theta = -\varepsilon_h^z$. Using the definition of $\mathscr B$ and the proof technique of \Cref{dual_y} gives \begin{align*} % % % \hspace{1em}&\hspace{-1em} \mathscr B (\varepsilon^{\bm p}_h,\varepsilon^z_h,\varepsilon^{\widehat z}_h;{\bm\Pi}_V\bm{\Phi},-{\Pi}_W\Psi,-I_h\Psi)\\ % % % &=(\varepsilon_h^z,\varepsilon_h^z)_{\mathcal{T}_h}+\langle \varepsilon_h^z-\varepsilon_h^{\widehat{z}},\delta^{\bm \Phi}\cdot \bm n \rangle_{\partial \mathcal{T}_h}-\langle \varepsilon_h^{\bm p}\cdot\bm n,\delta^{\widehat{\Psi}} \rangle_{\partial \mathcal{T}_h}\\ % % % &\quad +\langle h^{-1} (\varepsilon_h^z-\varepsilon_h^{\widehat{z}}),\delta^\Psi-\delta^{\widehat{\Psi}} \rangle_{\partial \mathcal{T}_h}. \end{align*} Here, we used $\langle\varepsilon^{\widehat z}_h,\bm \Phi\cdot\bm n\rangle_{\partial\mathcal T_h}=0$, which holds since $\varepsilon^{\widehat z}_h$ is a single-valued function on interior edges and $\varepsilon^{\widehat z}_h=0$ on $\varepsilon^{\partial}_h$. Comparing the above two equalities gives \begin{align*} % % % \| \varepsilon_h^z\|_{\mathcal T_h}^2 & = \langle\delta^{\widehat{z}},\bm \Pi_V \bm \Phi\cdot \bm n\rangle_{\partial \mathcal T_h}-(y_h(u)- y, \Pi_W \Psi)_{\mathcal T_h}- \langle \widehat{\bm \delta}_2,\delta^\Psi - \delta^{\widehat{\Psi}}\rangle_{\partial\mathcal T_h}\\ % % % &\quad-\langle \varepsilon_h^z-\varepsilon_h^{\widehat{z}},\delta^{\bm \Phi}\cdot \bm n \rangle_{\partial \mathcal{T}_h}+\langle \varepsilon_h^{\bm p}\cdot\bm n,\delta^{\widehat{\Psi}} \rangle_{\partial \mathcal{T}_h}\\ % % % &\quad -\langle h^{-1} (\varepsilon_h^z-\varepsilon_h^{\widehat{z}}),\delta^\Psi-\delta^{\widehat{\Psi}} \rangle_{\partial \mathcal{T}_h}\\ &=: \sum_{i=1}^6 R_i. \end{align*} Let $ C_0 = \max\{C, 1\} $, where $ C $ is the constant defined in \eqref{classical_ine}.
For the terms $R_1$, $R_2$, and $R_3$, we have \begin{align*} R_1&=-\langle \delta^{\widehat{z}}, \delta^{\bm \Phi}\cdot \bm n\rangle_{\partial \mathcal{T}_h} \le C_0 h^{\frac{1}{2}}\| \delta^{\widehat{z}} \|_{\partial \mathcal{T}_h} \| \bm \Phi \|_{1,\Omega}\le C_0 C_{\text{reg}} h^\frac{1}{2} \| \delta^{\widehat{z}} \|_{\partial \mathcal{T}_h} \| \varepsilon_h^z \|_{\mathcal{T}_h},\\ % % % R_2&\le \| y_h(u)-y \|_{\mathcal{T}_h} (\| \delta^{\Psi} \|_{\mathcal{T}_h}+\| \Psi \|_{\Omega})\le C_0C_{\text{reg}} \| y-y_h(u) \|_{\mathcal{T}_h}\| \varepsilon_h^z \|_{\mathcal{T}_h},\\ % % % R_3&\le C_0 h^{\frac{3}{2}} (\|\delta^{\bm p}\|_{\partial \mathcal{T}_h}+h^{-1}\|\delta^z-\delta^{\widehat{z}}\|_{\partial \mathcal{T}_h})\| \Psi \|_{2,\Omega}\\ % % % &\le C_0 C_{\text{reg}}(h\| \delta^{\bm p} \|_{\mathcal{T}_h}+h^\frac{1}{2} \|\delta^z-\delta^{\widehat{z}}\|_{\partial \mathcal{T}_h}) \| \varepsilon_h^z \|_{\mathcal{T}_h}. \end{align*} % For the terms $R_4$, $R_5$, and $R_6$, \Cref{e_sec} gives \begin{align*} R_4&\le C_0 h^{\frac{1}{2}}\| \varepsilon_h^z-\varepsilon_h^{\widehat{z}} \|_{\partial \mathcal{T}_h} \| \bm \Phi \|_{1,\Omega}\le C_0C_{\text{reg}}h(\mathbb{E}+\kappa \|\varepsilon_h^z\|_{\mathcal{T}_h})\| \varepsilon_h^z \|_{\mathcal{T}_h}, \\ % % % R_5 &\le C_0 h^{\frac{3}{2}}\| \varepsilon_h^{\bm p} \|_{\partial \mathcal{T}_h}\|\Psi\|_{2,\Omega}\le C_0C_{\text{reg}} h\|\varepsilon_h^{\bm p}\|_{\mathcal{T}_h}\|\varepsilon_h^z\|_{\mathcal{T}_h}\\ &\le C_0C_{\text{reg}} h(\mathbb{E}+\kappa\| \varepsilon_h^z \|_{\mathcal{T}_h})\|\varepsilon_h^z\|_{\mathcal{T}_h}, \\ % % % R_6 &\le C_0 h^{\frac{1}{2}}\| \varepsilon_h^z-\varepsilon_h^{\widehat{z}} \|_{\partial \mathcal{T}_h}\| \Psi \|_{2,\Omega} \le C_0C_{\text{reg}} h^{\frac{1}{2}}(\mathbb{E}+\kappa \| \varepsilon_h^z \|_{\mathcal{T}_h})\|\varepsilon_h^z\|_{\mathcal{T}_h}. \end{align*} Summing $R_1$ to $R_6$ gives \begin{align*} \|\varepsilon^z_h\|_{\mathcal T_h} &\le C ( h\|\delta^{\bm p}\|_{\mathcal{T}_h} +\norm{y - y_h(u)}_{\mathcal T_h} + h^{1/2}\|{\delta^{\widehat z}}\|_{\partial\mathcal T_h}+h^{1/2}\|{\delta^{z}}\|_{\partial\mathcal T_h})\\ &\quad+\mathbb C(\mathbb E +\kappa\|\varepsilon^z_h\|_{\mathcal T_h} ), \end{align*} where $\mathbb C =3 C_0C_{\text{reg}}$. Choosing $\kappa=\frac{1}{2\mathbb C}$ gives \begin{align*} \|\varepsilon^z_h\|_{\mathcal T_h}\lesssim h^{k+1}(|\bm q|_{k+1}+|y|_{k+1}+|\bm p|_{k+1}+|z|_{k+1}). \end{align*} Finally, \eqref{error_es_p} and \eqref{var_z} imply \eqref{var_p}. \end{proof} The triangle inequality again gives convergence rates for $\|\bm p -\bm p_h(u)\|_{\mathcal T_h}$ and $\|z -z_h(u)\|_{\mathcal T_h}$: \begin{lemma}\label{lemma:step3_conv_rates} We have \begin{subequations} \begin{align} \|\bm p -\bm p_h(u)\|_{\mathcal T_h} &\le \|\delta^{\bm p}\|_{\mathcal T_h} + \|\varepsilon_h^{\bm p}\|_{\mathcal T_h} \ \nonumber \\ &\lesssim h^{k}(|\bm q|_{k+1}+|y|_{k+1}+|\bm p|_{k+1}+|z|_{k+1}),\\ \|z -z_h(u)\|_{\mathcal T_h} &\le \|\delta^{z}\|_{\mathcal T_h} + \|\varepsilon_h^{z}\|_{\mathcal T_h} \ \nonumber \\ & \lesssim h^{k+1}(|\bm q|_{k+1}+|y|_{k+1}+|\bm p|_{k+1}+|z|_{k+1}). \end{align} \end{subequations} \end{lemma} \subsubsection{Step 6: Estimates for $\|u-u_h\|_{\mathcal T_h}$, $\norm {y-y_h}_{\mathcal T_h}$, and $\norm {z-z_h}_{\mathcal T_h}$.} Next, we compare the auxiliary problem to the EDG discretization of the optimality system \eqref{EDG_full_discrete}. The resulting error bounds along with the earlier error bounds in \Cref{le} and \Cref{lemma:step3_conv_rates} give the main convergence result.
The proofs in the final steps are similar to the HDG work \cite{HuShenSinglerZhangZheng_HDG_Dirichlet_control2}; we include the proofs here for completeness. For the remainder of the proof, let \begin{equation*} \begin{split} \zeta_{\bm q} &=\bm q_h(u)-\bm q_h,\quad\zeta_{y} = y_h(u)-y_h,\quad\zeta_{\widehat y} = \widehat y_h(u)-\widehat y_h,\\ \zeta_{\bm p} &=\bm p_h(u)-\bm p_h,\quad\zeta_{z} = z_h(u)-z_h,\quad\zeta_{\widehat z} = \widehat z_h(u)-\widehat z_h. \end{split} \end{equation*} Subtracting the EDG discretization of the optimality system from the auxiliary problem yields the error equations \begin{subequations}\label{eq_yh} \begin{align} \mathscr B(\zeta_{\bm q},\zeta_y,\zeta_{\widehat y};\bm r_1, w_1,\mu_1)&=(u-u_h,w_1)_{\mathcal T_h}\label{eq_yh_yhu},\\ \mathscr B(\zeta_{\bm p},\zeta_z,\zeta_{\widehat z};\bm r_2, w_2,\mu_2)&=-(\zeta_y, w_2)_{\mathcal T_h}\label{eq_zh_zhu}. \end{align} \end{subequations} \begin{lemma} We have \begin{align}\label{eq_uuh_yhuyh} \hspace{3em}&\hspace{-3em} \gamma\|u-u_h\|^2_{\mathcal T_h}+\|y_h(u)-y_h\|^2_{\mathcal T_h}\nonumber\\ &=( z_h-\gamma u_h,u-u_h)_{\mathcal T_h}-(z_h(u)-\gamma u,u-u_h)_{\mathcal T_h}. \end{align} \end{lemma} \begin{proof} First, \begin{align*} \hspace{3em}&\hspace{-3em} ( z_h-\gamma u_h,u-u_h)_{\mathcal T_h}-( z_h(u)-\gamma u,u-u_h)_{\mathcal T_h}\\ &=-(\zeta_{ z},u-u_h)_{\mathcal T_h}+\gamma\|u-u_h\|^2_{\mathcal T_h}. \end{align*} Next, \Cref{identical_equa} and \eqref{eq_yh} give \begin{align*} 0 &= \mathscr B (\zeta_{\bm q},\zeta_y,\zeta_{\widehat{y}};\zeta_{\bm p},-\zeta_{z},-\zeta_{\widehat z}) + \mathscr B(\zeta_{\bm p},\zeta_z,\zeta_{\widehat z};-\zeta_{\bm q},\zeta_y,\zeta_{\widehat{y}})\\ &= - ( u- u_h,\zeta_{ z})_{\mathcal{T}_h}-\|\zeta_{ y}\|^2_{\mathcal{T}_h}. \end{align*} This gives $ -(u-u_h,\zeta_{ z})_{\mathcal{T}_h}=\|\zeta_{ y}\|^2_{\mathcal{T}_h} $, which completes the proof. \end{proof} \begin{theorem}\label{thm:estimates_u_y_z} We have \begin{subequations} \begin{align}\label{err_yhu_yh} \|u-u_h\|_{\mathcal T_h}&\lesssim h^{k+1}(|\bm q|_{k+1}+|y|_{k+1}+|\bm p|_{k+1}+|z|_{k+1}),\\ \|y-y_h\|_{\mathcal T_h}&\lesssim h^{k+1}(|\bm q|_{k+1}+|y|_{k+1}+|\bm p|_{k+1}+|z|_{k+1}),\\ \|z-z_h\|_{\mathcal T_h}&\lesssim h^{k+1}(|\bm q|_{k+1}+|y|_{k+1}+|\bm p|_{k+1}+|z|_{k+1}).\label{estimate_z} \end{align} \end{subequations} \end{theorem} \begin{proof} As mentioned earlier, the exact and approximate optimal controls satisfy $ \gamma u = z $ and $ \gamma u_h = z_h $; see \eqref{eq_adeq_e} and \eqref{EDG_full_discrete_e}. Using these equations with the lemma above gives \begin{align*} \hspace{3em}&\hspace{-3em} \gamma\|u-u_h\|^2_{\mathcal T_h}+\|\zeta_{ y}\|^2_{\mathcal T_h}\\ &=( z_h-\gamma u_h,u-u_h)_{\mathcal T_h}-( z_h(u)-\gamma u,u-u_h)_{\mathcal T_h}\\ &=-( z_h(u)- z,u-u_h)_{\mathcal T_h}\\ &\le \| z_h(u)- z\|_{\mathcal T_h} \|u-u_h\|_{\mathcal T_h}\\ &\le\frac{1}{2\gamma}\| z_h(u)- z\|^2_{\mathcal T_h}+\frac{\gamma}{2}\|u-u_h\|^2_{\mathcal T_h}. \end{align*} \Cref{lemma:step3_conv_rates} gives \begin{align}\label{eqn:estimate_u_zeta_y} \|u-u_h\|_{\mathcal T_h}+\|\zeta_{ y}\|_{\mathcal T_h}&\lesssim h^{k+1}(|\bm q|_{k+1}+|y|_{k+1}+|\bm p|_{k+1}+|z|_{k+1}). \end{align} Use the triangle inequality and \Cref{le} to obtain \begin{align*} \|y-y_h\|_{\mathcal T_h}&\lesssim h^{k+1}(|\bm q|_{k+1}+|y|_{k+1}+|\bm p|_{k+1}+|z|_{k+1}). \end{align*} Finally, the above estimate \eqref{eqn:estimate_u_zeta_y} for $ u $ along with $z = \gamma u $ and $z_h = \gamma u_h$ gives the estimate \eqref{estimate_z} for $ z $.
\end{proof} \subsubsection{Step 7: Estimates for $\|\bm q-\bm q_h\|_{\mathcal T_h}$ and $\|\bm p-\bm p_h\|_{\mathcal T_h}$.} \begin{lemma} We have \begin{subequations} \begin{align} \|\zeta_{\bm q}\|_{\mathcal T_h} &\lesssim h^{k+1}(|\bm q|_{k+1}+|y|_{k+1}+|\bm p|_{k+1}+|z|_{k+1}),\label{err_Lhu_qh}\\ \|\zeta_{\bm p}\|_{\mathcal T_h} &\lesssim h^{k+1}(|\bm q|_{k+1}+|y|_{k+1}+|\bm p|_{k+1}+|z|_{k+1}).\label{err_Lhu_ph} \end{align} \end{subequations} \end{lemma} \begin{proof} \Cref{property_B}, the error equation \eqref{eq_yh_yhu}, and the estimate \eqref{eqn:estimate_u_zeta_y} give \begin{align*} \|\zeta_{\bm q}\|^2_{\mathcal T_h} &\lesssim \mathscr B(\zeta_{\bm q},\zeta_y,\zeta_{\widehat y};\zeta_{\bm q},\zeta_y,\zeta_{\widehat y})\\ &=( u- u_h,\zeta_{ y})_{\mathcal T_h}\\ &\le\| u- u_h\|_{\mathcal T_h}\|\zeta_{ y}\|_{\mathcal T_h}\\ &\lesssim h^{2k+2}(|\bm q|_{k+1}+|y|_{k+1}+|\bm p|_{k+1}+|z|_{k+1})^2. \end{align*} Similarly, \Cref{property_B}, the error equation \eqref{eq_zh_zhu}, the estimate \eqref{eqn:estimate_u_zeta_y}, \Cref{lemma:step3_conv_rates}, and \Cref{thm:estimates_u_y_z} give \begin{align*} \|\zeta_{\bm p}\|^2_{\mathcal T_h} &\lesssim \mathscr B(\zeta_{\bm p},\zeta_z,\zeta_{\widehat z};\zeta_{\bm p},\zeta_z,\zeta_{\widehat z})\\ &=-(\zeta_{ y},\zeta_{ z})_{\mathcal T_h}\\ &\le\|\zeta_{y}\|_{\mathcal T_h}\|\zeta_{ z}\|_{\mathcal T_h}\\ &\le\|\zeta_{y}\|_{\mathcal T_h} ( \| z_h(u) - z \|_{\mathcal T_h} + \| z - z_h \|_{\mathcal T_h} )\\ &\lesssim h^{2k+2}(|\bm q|_{k+1}+|y|_{k+1}+|\bm p|_{k+1}+|z|_{k+1})^2. \end{align*} \end{proof} The above lemma, the triangle inequality, \Cref{le}, and \Cref{lemma:step3_conv_rates} complete the proof of the main result: \begin{theorem} We have \begin{subequations} \begin{align} \|\bm q-\bm q_h\|_{\mathcal T_h}&\lesssim h^{k}(|\bm q|_{k+1}+|y|_{k+1}+|\bm p|_{k+1}+|z|_{k+1}),\label{err_q}\\ \|\bm p-\bm p_h\|_{\mathcal T_h}&\lesssim h^{k}(|\bm q|_{k+1}+|y|_{k+1}+|\bm p|_{k+1}+|z|_{k+1})\label{err_p}. \end{align} \end{subequations} \end{theorem} \section{Numerical Experiments} \label{sec:numerics} Next, we present a numerical example to illustrate our theoretical results. We consider the distributed control problem for the Poisson equation on the square domain $\Omega = [0,1]\times [0,1] \subset \mathbb{R}^2$ and take $\gamma = 1$. We set the exact state and dual state to be $ y(x_1,x_2) = \sin(\pi x_1) $ and $ z(x_1,x_2) = \sin(\pi x_1)\sin(\pi x_2)$, and generate the data $f$, $ g $, and $y_d$ from the optimality system \eqref{eq_adeq}. Numerical results for $ k = 1 $ and $ k = 2 $ for this problem are shown in Tables \ref{table_1} and \ref{table_2}. The numerical convergence rates match the theory.
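The observed orders reported in the tables are computed as $\log_2$ ratios of errors on consecutive meshes, since each refinement halves the mesh size. As an independent sanity check (a small script, not part of our implementation), the orders can be reproduced from the error columns; the values below are the $\norm{\bm q - \bm q_h}$ errors for $k=1$ from Table \ref{table_1}:
\begin{verbatim}
import math

# ||q - q_h|| for k = 1 on meshes h/sqrt(2) = 1/8, ..., 1/128 (Table 1)
errors = [3.6714e-01, 1.8490e-01, 9.2615e-02, 4.6328e-02, 2.3167e-02]

# each refinement halves h, so the observed order is log2(e_coarse / e_fine)
orders = [math.log2(c / f) for c, f in zip(errors, errors[1:])]
print([round(o, 2) for o in orders])  # -> [0.99, 1.0, 1.0, 1.0]
\end{verbatim}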
\begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline $h/\sqrt 2$ &$1/8$& $1/16$&$1/32$ &$1/64$ & $1/128$ \\ \hline $\norm{\bm{q}-\bm{q}_h}_{0,\Omega}$&3.6714e-01 &1.8490e-01 &9.2615e-02 &4.6328e-02 &2.3167e-02 \\ \hline order&-& 0.99& 1.00 &1.00& 1.00\\ \hline $\norm{\bm{p}-\bm{p}_h}_{0,\Omega}$& 3.8422e-01 &1.9228e-01 &9.6161e-02 &4.8083e-02 &2.4042e-02 \\ \hline order&-& 1.00&1.00 &1.00 & 1.00 \\ \hline $\norm{{y}-{y}_h}_{0,\Omega}$&2.4802e-02 &6.3399e-03 &1.5989e-03 &4.0125e-04 &1.0049e-04\\ \hline order&-& 1.97&1.99&2.00 & 2.00 \\ \hline $\norm{{z}-{z}_h}_{0,\Omega}$& 2.8282e-02 &7.0802e-03 &1.7694e-03 &4.4218e-04 &1.1052e-04 \\ \hline order&-& 2.00&2.00&2.00& 2.00 \\ \hline \end{tabular} \end{center} \caption{ Errors for the state $y$, adjoint state $z$, and the fluxes $\bm q$ and $\bm p$ when $k=1$.}\label{table_1} \end{table} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline $h/\sqrt 2$ &$1/8$& $1/16$&$1/32$ &$1/64$ & $1/128$ \\ \hline $\norm{\bm{q}-\bm{q}_h}_{0,\Omega}$&2.6598e-02 &6.7755e-03 &1.7029e-03 &4.2631e-04 &1.0662e-04 \\ \hline order&-& 1.97& 1.99 &2.00& 2.00\\ \hline $\norm{\bm{p}-\bm{p}_h}_{0,\Omega}$& 2.6694e-02 &7.0861e-03 &1.7999e-03 &4.5178e-04 &1.1306e-04 \\ \hline order&-& 1.91&1.98 &1.99 & 2.00 \\ \hline $\norm{{y}-{y}_h}_{0,\Omega}$&8.3274e-04 &1.0672e-04 &1.3592e-05 &1.7164e-06 &2.1566e-07\\ \hline order&-& 2.96&2.97&2.99 & 3.00 \\ \hline $\norm{{z}-{z}_h}_{0,\Omega}$& 1.4515e-03 &1.9483e-04 & 2.5202e-05 &3.2009e-06 &4.0316e-07\\ \hline order&-& 2.90&2.95&2.98& 2.99\\ \hline \end{tabular} \end{center} \caption{Errors for the state $y$, adjoint state $z$, and the fluxes $\bm q$ and $\bm p$ when $k=2$.}\label{table_2} \end{table} \section{Conclusions} We proposed an EDG method to approximate the solution of an optimal distributed control problem for the Poisson equation. We obtained optimal a priori error estimates for the control, state, and dual state, but suboptimal estimates for their fluxes. As mentioned earlier, EDG has potential for optimal control problems involving convection-dominated partial differential equations and fluid flows. These problems would be interesting to explore in the future. Also, we used a different EDG error analysis strategy to prove the error estimates in this work. We are currently investigating another EDG method, and we have used the different analysis approach to prove optimal convergence rates for all variables. The details will be reported in a future paper. \bibliographystyle{siamplain}
\section{Introduction} Let the base field $K$ be an algebraically closed field of characteristic $p=2$ and let $q=2^e \ge 4$. We consider smooth plane curves given by \begin{equation} \label{d-Galois} Z\prod_{\alpha \in \mathbb F_{q}}(X+\alpha Y+\alpha^2 Z)+\lambda Y^{q+1}=0, \tag{*} \end{equation} and \begin{equation} \label{3-Galois} (X^2+XZ)^2+(X^2+XZ)(Y^2+YZ)+(Y^2+YZ)^2+\lambda Z^4=0 , \tag{**} \end{equation} where $\lambda \in K \setminus \{0, 1\}$. These curves appear in the classification list of smooth plane curves with at least two Galois points (\cite[Theorem 3]{fukasawa2}; see \cite{miura-yoshihara, yoshihara} for the definition of a Galois point). The automorphism groups of the other curves in the list (the Fermat curve, the Klein quartic, and the curve $x^3+y^4+1=0$) were studied by many authors (see, for example, \cite{hkt, hurt, ks, ritzenthaler}). In this paper, we determine the automorphism groups of these curves, as follows. \begin{theorem} \label{inner} Let $C$ be the plane curve given by $(\ref{d-Galois})$ of degree $q+1$ and genus $g_C=q(q-1)/2$. Then, ${\rm Aut}(C) \cong {\rm PGL}(2, \Bbb F_q)$. In particular, $|{\rm Aut}(C)|=q^3-q$, which exceeds $84(g_C-1)$ if $q \ge 64$. \end{theorem} \begin{theorem} \label{outer} Let $C$ be the plane curve given by $(\ref{3-Galois})$ of degree four. Then, ${\rm Aut}(C)$ is isomorphic to the symmetric group $S_4$ of degree four. In particular, $|{\rm Aut}(C)|=24$. \end{theorem} By a classical theorem of Hurwitz, in characteristic zero the order of the automorphism group of any curve of genus $g_C>1$ is bounded by $84(g_C-1)$. Our curve given by $(\ref{d-Galois})$ is an ordinary curve whose automorphism group exceeds the Hurwitz bound (see Remark \ref{ordinary}); it differs from the examples of Subrao \cite{subrao} and Nakajima \cite{nakajima} in its genus. Our theorems are proved by considering the Galois groups at Galois points. Therefore, our study is related to the results of Kanazawa, Takahashi and Yoshihara \cite{kty} and of Miura and Ohbuchi \cite{mo}. \section{Proof of Theorem \ref{inner}} According to \cite[Appendix A, 17 and 18]{acgh} or \cite{chang}, any automorphism of a smooth plane curve of degree at least four is the restriction of a linear transformation. Therefore, we have an injection $$ {\rm Aut}(C) \hookrightarrow {\rm PGL}(3, K). $$ Let $L_Y$ be the line given by $Y=0$, and let $P_1=(1:0:0)$ and $P_2=(0:0:1)$. A point $P \in \Bbb P^2$ is said to be Galois if the field extension induced by the projection $\pi_P$ from $P$ is Galois. If $P$ is a Galois point, then we denote by $G_P$ the Galois group. For $\gamma \in {\rm Aut}(C)$, we denote the set $\{Q \in \Bbb P^2 \ | \ \gamma(Q)=Q\}$ by $L_{\gamma}$. We have the following properties for the curves given by $(\ref{d-Galois})$ (see also \cite{fukasawa2}). \begin{proposition} \label{fundamental} Let $C$ be the plane curve given by $(\ref{d-Galois})$. Then, we have the following. \begin{itemize} \item[(a)] $C \cap L_Y=L_Y(\Bbb F_q)$, where $L_Y(\Bbb F_q)$ is the set of $\Bbb F_q$-rational points of $L_Y$. We write $L_Y(\Bbb F_q)=\{P_1, \ldots, P_{q+1}\}$. \item[(b)] The set of Galois points on $C$ coincides with $L_Y(\Bbb F_q)$. \item[(c)] For the projection $\pi_{P_1}$ from $P_1$, the ramification index at $P_1$ is $q$, and there are exactly $(q-1)$ lines $\ell$ such that the ramification index at each point of $C \cap \ell$ is equal to two. Furthermore, $\sigma(P_1)=P_1$ for any $\sigma \in G_{P_1}$. \item[(d)] If $i$, $j$, $k$ are pairwise distinct, then there exists $\sigma \in G_{P_i}$ such that $\sigma(P_j)=P_k$.
\end{itemize} \end{proposition} \begin{proof} Since the set $C \cap L_Y$ is given by $Y=Z\prod_{\alpha \in \Bbb F_q}(X+\alpha^2Z)=0$, we have (a). See \cite[Section 3]{fukasawa1} and \cite[Section 4]{fukasawa2} for (b). An automorphism $\sigma \in G_{P_1}$ is given by $(x,y) \mapsto (x+\alpha y+\alpha^2, y)$ for some $\alpha \in \Bbb F_q$ (see \cite[Section 4]{fukasawa2}). Then, the set $L_{\sigma}$ coincides with the line defined by $\alpha Y+\alpha^2 Z=0$. It follows from \cite[III.8.2]{stichtenoth} that we have (c). Since $G_{P_i}$ acts transitively on $C \cap \ell \setminus \{P_i\}$ for any line $\ell$ passing through $P_i$, by a standard property of Galois extensions (\cite[III.7.1]{stichtenoth}), we have (d). \end{proof} We determine ${\rm Aut}(C)$. \begin{lemma} \label{injective} The restriction map $\gamma \mapsto \gamma|_{L_Y}$ gives an injection $$ r:{\rm Aut}(C) \hookrightarrow {\rm PGL}(L_Y(\Bbb F_q)) \cong {\rm PGL}(2, \Bbb F_q). $$ \end{lemma} \begin{proof} Let $\gamma \in {\rm Aut}(C)$. Since the set of Galois points is invariant under linear transformations, $\gamma (L_Y(\Bbb F_q))=L_Y(\Bbb F_q)$ by Proposition \ref{fundamental}(a)(b). Therefore, $r$ is well-defined. Assume that $\gamma|_{L_Y}$ is the identity. Then, $\gamma(T_{P_i}C)=T_{\gamma(P_i)}C=T_{P_i}C$ and the point given by $T_{P_1}C \cap T_{P_i}C$ is fixed by $\gamma$ for any $i$. If $P_i=(\beta:0:1) \in L_Y(\Bbb F_q)$, then $T_{P_i}C$ is given by $X+\sqrt{\beta} Y+\beta Z=0$. Since $\gamma|_{T_{P_1}C}$ is an automorphism of $T_{P_1}C \cong \Bbb P^1$ and there are $q \ (\ge 4)$ points on $T_{P_1}C$ fixed by $\gamma$, $\gamma|_{T_{P_1}C}$ is the identity. Since $\gamma|_{L_Y}=1$ and $\gamma|_{T_{P_1}C}=1$, $\gamma$ is the identity on $\Bbb P^2$. \end{proof} \begin{lemma} Let $H(C):=\{\gamma \in {\rm Aut}(C)|\gamma(P_1)=P_1, \gamma(P_2)=P_2\}$ and let $H_0:=\{\tau \in {\rm PGL}(L_Y(\Bbb F_q))|\tau(P_1)=P_1, \tau(P_2)=P_2\}$. Then, $r(H(C))=H_0$. In particular, $H_0 \subset r({\rm Aut}(C))$. \end{lemma} \begin{proof} We have $r(H(C)) \subset H_0$. According to \cite[Lemma 4 and Page 100]{fukasawa2}, $H(C)$ is a cyclic group of order $q-1$. We can also prove that $H_0$ is a cyclic group of order at most $q-1$ (see, for example, \cite[Lemma 2(2)]{fukasawa2}). Therefore, we have $r(H(C))=H_0$. \end{proof} \begin{lemma} \label{surjective} The restriction map $r$ is surjective. \end{lemma} \begin{proof} Let $\tau \in {\rm PGL}(L_Y(\Bbb F_q))$ and let $\tau(P_1)=P_i$, $\tau(P_2)=P_j$. We take $k \ne 1, i$. By Proposition \ref{fundamental}(d), there exists $\gamma_1 \in r(G_{P_k})$ such that $\gamma_1\tau(P_1)=P_1$. Further, by Proposition \ref{fundamental}(c)(d), there exists $\gamma_2 \in r(G_{P_1})$ such that $\gamma_2\gamma_1\tau(P_1)=P_1$ and $\gamma_2\gamma_1\tau(P_2)=P_2$. Then, $\gamma_2\gamma_1\tau \in H_0$. By the lemma above, $\gamma_2\gamma_1\tau \in r({\rm Aut}(C))$. This implies $\tau \in r({\rm Aut}(C))$. \end{proof} We have ${\rm Aut(C)} \cong {\rm PGL}(2, \Bbb F_q)$ by Lemmas \ref{injective} and \ref{surjective}. \begin{remark} \label{ordinary} According to the Deuring-$\breve{\mbox{S}}$afarevi$\breve{\mbox{c}}$ formula (\cite{subrao}), the $p$-rank $\gamma_C$ of the curve $C$ can be computed from the ramification indices of the Galois covering $\pi_{P_1}$. Using Proposition \ref{fundamental}(c), we have $$ \frac{\gamma_C-1}q=(-1)+\left(1-\frac{1}{q}\right)+(q-1)\left(1-\frac{1}{2}\right).$$ This implies $\gamma_C=q(q-1)/2=g_C$, i.e., $C$ is ordinary. \end{remark} \begin{remark} We also have the following for ${\rm Aut}(C)$.
\begin{itemize} \item[(a)] $|{\rm Aut}(C)|=g_C\times(3+\sqrt{8g_C+1})$. \item[(b)] ${\rm Aut}(C)=\langle G_{P_1}, \ldots, G_{P_{q+1}}\rangle=\langle G_{P_1}, G_{P_2} \rangle$. \end{itemize} \end{remark} \section{Proof of Theorem \ref{outer}} Similarly to the previous section, we have an injection $$ {\rm Aut}(C) \hookrightarrow {\rm PGL}(3, K). $$ Let $L_Z$ be the line given by $Z=0$, and let $P_1=(1:0:0)$, $P_2=(1:1:0)$ and $P_3=(0:1:0)$. If $P$ is a Galois point, then we denote by $G_P$ the Galois group. For $\gamma \in {\rm Aut}(C)$, we denote the set $\{Q \in \Bbb P^2 \ | \ \gamma(Q)=Q\}$ by $L_{\gamma}$. We have the following properties for the curves given by $(\ref{3-Galois})$ (see \cite[Sections 3 and 4]{fukasawa3}). \begin{proposition} \label{fundamental2} Let $C$ be the plane curve given by $(\ref{3-Galois})$. Then, we have the following. \begin{itemize} \item[(a)] The set of Galois points in $\Bbb P^2 \setminus C$ coincides with $L_Z(\Bbb F_2)=\{P_1, P_2, P_3\}$. \item[(b)] For each $i$, there exists a unique $\sigma_i \in G_{P_i} \setminus \{1\}$ such that $L_{\sigma_i}=L_Z$. \item[(c)] There exist exactly two lines $\ell$ such that $\ell \ni P_1$, $\ell \ne L_Z$, and $\ell$ is the tangent line at two points in $C \cap \ell$. Conversely, if $\ell$ is such a line, then there exists $\tau \in G_{P_1} \setminus \langle \sigma_1 \rangle$ such that $L_{\tau}=\ell$. \item[(d)] There exist exactly four points $Q_1, Q_2, Q_3, Q_4 \in \Bbb P^2 \setminus L_{Z}$ such that the line $\overline{Q_iQ_j}$ passing through $Q_i$ and $Q_j$ is a tangent line of $C$ for each $i, j$ with $i \ne j$, and $\overline{Q_iQ_j} \ni P_k$ for some $k$. These points are $(0:0:1)$, $(1:0:1)$, $(0:1:1)$ and $(1:1:1)$. \end{itemize} \end{proposition} \begin{proof} For (a) and (d), see \cite[Section 4]{fukasawa3}. For the reader's convenience, we explain (b) and (c) for $i=1$. Let $\sigma, \tau$ be the linear transformations given by $$\sigma(X:Y:Z)=(X+Z:Y:Z), \ \tau(X:Y:Z)=(X+Y:Y:Z). $$ Then, $G_{P_1}=\{1, \sigma, \tau, \sigma\tau\}$. Since $\sigma|_{L_Z}=1$ and $\tau|_{L_Z} \ne 1$, we have (b). Note that the line $L_{\tau}$ is given by $Y=0$ and the line $L_{\sigma\tau}$ is given by $Y+Z=0$. Referring to \cite[III.8.2]{stichtenoth}, we have (c). \end{proof} First we prove the following. \begin{lemma} Let $X=\{Q_1, Q_2, Q_3, Q_4\}$ and let $S(X)$ be the group of all permutations on $X$. Then, there exists an injection ${\rm Aut}(C) \hookrightarrow S(X) \cong S_4$. \end{lemma} \begin{proof} By Proposition \ref{fundamental2}(d), we have a well-defined homomorphism ${\rm Aut}(C) \rightarrow S(X)$ given by $\gamma \mapsto \gamma|_X$. If $\gamma \in {\rm Aut}(C)$ fixes $Q_1, Q_2, Q_3, Q_4$, then $\gamma$ also fixes $P_1, P_2, P_3$. Note that $X \cup \{P_1, P_2, P_3\}=\Bbb P^2(\Bbb F_2)$. Then, $\gamma$ is the identity on the projective plane. \end{proof} We prove that $|{\rm Aut}(C)| \ge 24$. Let $H:=\langle \sigma_1, \sigma_2 \rangle$. \begin{lemma} The restriction map $$r : {\rm Aut}(C) \rightarrow {\rm PGL}(L_Z(\Bbb F_2)) \cong S_3; \ \gamma \mapsto \gamma|_{L_Z} $$ is surjective and its kernel coincides with $H$. In particular, $|{\rm Aut}(C)| \ge 24$. \end{lemma} \begin{proof} Let $\gamma \in {\rm Aut}(C)$. Since the set of Galois points is invariant under linear transformations, $\gamma(\{P_1, P_2, P_3\})=\{P_1, P_2, P_3\}$ by Proposition \ref{fundamental2}(a). Therefore, $r$ is well-defined. We consider the kernel. Assume that $\gamma|_{L_{Z}}$ is the identity.
Let $\sigma_i \in G_{P_i}$ be an automorphism as in Proposition \ref{fundamental2}(b) for $i=1,2$ and let $\tau, \eta \in G_{P_1} \setminus \langle \sigma_1 \rangle$ with $\tau \ne \eta$. Then, $\gamma(L_{\tau})=L_{\tau}$ or $L_{\eta}$ by Proposition \ref{fundamental2}(c). Therefore, $\sigma_2^k\gamma(L_{\tau})=L_{\tau}$, where $k=0$ or $1$. Since $\sigma_1$ acts on $C \cap L_{\tau}$, $\sigma_1^l\sigma_2^k\gamma$ is the identity on $L_{\tau}$ and $L_Z$, where $l=0$ or $1$. This implies that $\sigma_1^l\sigma_2^k\gamma$ is the identity on $\Bbb P^2$. We have $\gamma \in H$. We prove that $r$ is surjective. We have an injection ${\rm Aut}(C)/H \hookrightarrow S_3$. Let $\tau_i \in G_{P_i} \setminus \langle \sigma_i \rangle$ for each $i$. Since $\tau_1\tau_2(P_1)=P_2$, $\tau_1\tau_2(P_2)=P_3$ and $\tau_1\tau_2(P_3)=P_1$, the order of $\tau_1\tau_2H \in {\rm Aut}(C)/H$ is three. Since the group ${\rm Aut}(C)/H$ has elements of order two and three, we have ${\rm Aut}(C)/H \cong S_3$. \end{proof} The conclusion follows from these two lemmas. \begin{remark} We also have ${\rm Aut}(C)=\langle G_{P_1}, G_{P_2}, G_{P_3}\rangle=\langle G_{P_1}, G_{P_2} \rangle$. \end{remark} \ \begin{center} {\bf Acknowledgements} \end{center} The author was partially supported by JSPS KAKENHI Grant Number 25800002.
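\begin{remark} The numerical claims in Theorem \ref{inner} and Remark (a) above can be double-checked mechanically. The following short sketch (Python; it only restates the closed formulas proved in this paper) confirms that $g_C(3+\sqrt{8g_C+1})=q^3-q$ and that the Hurwitz bound is first exceeded at $q=64$:
\begin{verbatim}
# |Aut(C)| = |PGL(2, F_q)| = q^3 - q versus the Hurwitz bound 84(g - 1),
# with g = q(q - 1)/2; note 8g + 1 = (2q - 1)^2, so
# g * (3 + sqrt(8g + 1)) = g * (2q + 2) = q^3 - q.
for e in range(2, 8):
    q = 2 ** e
    g = q * (q - 1) // 2
    aut = q ** 3 - q
    assert g * (3 + (2 * q - 1)) == aut
    print(q, aut, 84 * (g - 1), aut > 84 * (g - 1))
# q = 4, 8, 16, 32 stay below the Hurwitz bound; q >= 64 exceeds it.
\end{verbatim}
\end{remark}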
\section{Introduction} Answer set programming (ASP) is a declarative programming paradigm, which has its roots in logic programming and nonmonotonic reasoning. It is widely used for knowledge representation and problem solving~\cite{brewka2011answer,eiter2009answer,gebser2012answer}. In ASP, a problem is encoded as a set of rules (a logic program) and is evaluated under stable model semantics~\cite{GelfondLifschitz88,GelfondLifschitz91}, using solvers such as \verb"clingo"~\cite{gebser2011advances,DBLP:journals/corr/GebserKKS14}, \verb"WASP"~\cite{AlvianoDodaroLeone15}, or \verb"DLV"~\cite{AlvianoEtAl17}. Then, answer sets represent solutions to the modeled problem. Oftentimes when modeling with ASP, the number of solutions of the resulting program can be quite high. This is not necessarily a problem when searching for a few solutions,~e.g., optimal solutions~\cite{GebserKaminskiSchaub11,AlvianoDodaro16a} or when incorporating preferences~\cite{Brewka04,BrewkaEtAl15,BrewkaEtAl15b,AlvianoRomeroSchaub18}. However, there are many situations where reasoning goes beyond simple search for one answer set, for example, planning when certain routes are gradually forbidden~\cite{SonEtAl16}, finding diverse solutions~\cite{everardo2017towards,EverardoEtAl19}, reasoning in probabilistic applications~\cite{LeeTalsaniaWang17}, or debugging answer sets~\cite{OetschPT18, DodaroGRRS19, VosKOPT12, Shchekotykhin15, GebserEtAl08}. Now, if the user is interested in more than a few solutions to gradually identify specific answer sets, tremendous solution spaces can easily become infeasible to comprehend. In fact, it might not even be possible to compute all solutions in reasonable time. Examples where we easily see large solution spaces are configuration problems~\cite{soininen1999developing,soininen2001representing,tiihonen2003practical}, such as PC configuration, and planning problems~\cite{dimopoulos1997encoding,lifschitz1999action,nogueira2001prolog}. Let us consider a simple example to illustrate the use of navigation in ASP. \newcommand{\simnot}{\mathord{\sim}} \begin{example} Consider an online shopping situation where we have a knowledge base on clothes and some rules specifying which combinations suit well and which do not. \begin{align*} % % % \big\{ \{\predname{outfit}(X,Y): &\predname{clothes}(X,Y) \}; \\ &\leftarrow \predname{outfit}(X,Y1), \\ & \quad\;\;\predname{outfit}(X,Y2), Y1 \neq Y2; & \\ \predname{occasion}(\text{vancouver}) &\leftarrow \predname{outfit}(\text{jacket},\ldots); &\\ \predname{occasion}(\text{conference}) &\leftarrow \predname{outfit}(\text{suit},Y), Y \neq \text{yellow}; &\\ \predname{occasion}(\text{whistler}) &\leftarrow \predname{outfit}(\text{boots},\ldots); \ \ldots \big\} % \end{align*} Together with input facts from a clothes database, like $\predname{clothes}(\text{jacket},\text{blue})$; $\predname{clothes}(\text{shirt},\text{red}); \dots$, one easily obtains more than a million answer sets. Since Canada opened immigration for vaccinated persons, we might actually be able to travel to Vancouver. Say we zoom in on outfits including shorts, which leads to a rather small, but still incomprehensible, sub-space of solutions. Imagine that most of the remaining outfits include chucks and a jacket.
Say we want to inspect the most diverse outfits still remaining; then we aim to choose potential parts of our outfit that provide us with the most diverse solutions. Now, we are almost good to go, seeking to find some final additions to our outfit quickly. \end{example} \noindent Our example illustrates that the variety of solutions of an ASP program can easily become hard to comprehend. Problem-specific, handcrafted encoding techniques to navigate the solution space can be quite tedious. Instead, we propose a formal and general framework for interactive navigation towards desired subsets of answer sets, analogous to faceted browsing in the field of information retrieval~\cite{Tunkelang09}. Our approach enables solution space exploration by consciously zooming in or out of sub-spaces of solutions at a certain configurable pace. To this end, we introduce absolute and relative weights to quantify the size of the search space when reasoning under assumptions (facets). We formalize several kinds of search space navigation as goal-oriented and explore modes, and systematically compare the introduced weights regarding their usability for operations under the natural properties of splitting, reliability, preserving maximal sub-spaces (min-inline), and preserving minimal sub-spaces (max-inline). In addition, we determine the computational complexity of computing the weights. Finally, we provide an implementation on top of the solver \verb"clingo", demonstrating the feasibility of our framework for incomprehensible solution spaces. \paragraph{Related Work.} \citex{10.1007/978-3-319-99906-7_14} proposed a framework in which solutions are systematically pruned with respect to facets (partial solutions). While this allows one to move within the answer set space, the user has absolutely no information on how big the effect of activating a facet is in advance, similar to assumptions in propositional satisfiability~\cite{EenSorensson04a}. We go far beyond and characterize the \emph{weight} of a facet. This is useful to comprehend the effect of navigation steps on the size of the solution space. Additionally, this allows for zooming into or out of the solution space at a configurable pace. Debugging of answer set programs has been widely investigated~\cite{OetschPT18, DodaroGRRS19, VosKOPT12, Shchekotykhin15, GebserEtAl08}. However, we do not aim to correct ASP encodings. All answer sets which are reachable within the navigation are ``original'' answer sets; thus the adaptations we make to the program during navigation do not change the set of answer sets of the initial program. Justifications, which describe the support for the truth value of each atom, have been studied as a tool for reasoning and debugging~\cite{El-KhatibPontelliSon05}. Probabilistic reasoning frameworks for logic programs were developed, such as $\text{LP}^{\text{MLN}}$~\cite{LeeTalsaniaWang17}, which define notions of probabilities in terms of relative occurrences of stable models and their weights. Computing these probabilities (unless restricted to decision versions in terms of being different from zero) relates to counting probabilities under assumptions. Considering relative occurrences of stable models of weight one relates to search space exploration. However, probabilistic frameworks primarily address modeling conflicting information and reasoning about it. We assume large solution spaces and aim at navigating dynamically within the solution space.
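To make the interaction sketched in our introductory example concrete before turning to the formal machinery: activating a facet amounts to adding one integrity constraint and re-solving, and counting the remaining answer sets quantifies the effect of zooming in. The following minimal sketch uses the \verb"clingo" Python API (clingo~5 or later); the four-item wardrobe is an illustrative toy stand-in for the encoding above, not our actual implementation:
\begin{verbatim}
import clingo

ENCODING = """
clothes(jacket,blue). clothes(shirt,red).
clothes(shorts,green). clothes(chucks,black).
{ outfit(X,Y) : clothes(X,Y) }.
:- outfit(X,Y1), outfit(X,Y2), Y1 != Y2.
"""

def count_answer_sets(extra=""):
    ctl = clingo.Control(["0"])            # "0" enumerates all models
    ctl.add("base", [], ENCODING + extra)
    ctl.ground([("base", [])])
    n = 0
    with ctl.solve(yield_=True) as handle:
        for _ in handle:
            n += 1
    return n

print(count_answer_sets())                 # 16 outfits in total
print(count_answer_sets(":- not outfit(shorts,green)."))  # zoom in: 8 left
\end{verbatim}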
\section{Background} First, we recall basic notions of ASP; for further details on ASP we refer to standard texts~\cite{CalimeriFGIKKLM20,gebser2012answer}. Then, we introduce fundamental notions of faceted navigation and computational complexity, respectively. \paragraph{Answer Set Programming.} By $\A$ we denote the set of (non-ground) \emph{atoms} of a program $\lp$. A literal is an atom $\alpha \in \A$ or its \emph{default negation} $\mathord{\sim}\alpha$, which refers to the absence of information. An atom $\alpha$ is a predicate $p(t_1, \dots,t_n)$ of arity $n \geq 0$, where each $t_i$ for $1 \leq i \leq n$ is a \emph{term}, i.e., either a variable or a constant. We say an atom $\alpha \in \A$ is \emph{ground} if and only if $\alpha$ is variable-free. By $\mathit{Grd}(\A)$ we denote the set of ground atoms. A (disjunctive) logic program $\lp$ is a finite set of rules $r$ of the form $$\alpha_0\,|\, \ldots \,|\, \alpha_k \leftarrow \alpha_{k+1}, \dots, \alpha_m, \mathord{\sim} \alpha_{m+1}, \dots, \mathord{\sim} \alpha_n$$ where $0 \leq k \leq m \leq n$ and each $\alpha_i \in \A$ for $0 \leq i \leq n$. For a rule $r$, we denote the head by $H(r) \coloneqq \{\alpha_0, \dots, \alpha_k \}$; the body $B(r)$ consists of the positive body $B^+(r) \coloneqq \{\alpha_{k+1}, \dots, \alpha_m\}$ and the negative body $B^-(r) \coloneqq \{ \alpha_{m+1}, \dots, \alpha_n\}$. If $B(r) = \emptyset$, we omit $\leftarrow$. A rule $r$ with $H(r) = \emptyset$ is called an \emph{integrity constraint}; it excludes interpretations that satisfy $B(r)$. By $\mathit{grd}(r)$ we denote the set of ground instances of some rule~$r$, obtained by replacing all variables in $r$ by ground terms. Accordingly, $\mathit{Grd}(\lp) \coloneqq \bigcup_{r \in \lp}\mathit{grd}(r)$ denotes the ground instantiation of $\lp$. Unless explicitly stated otherwise, throughout this paper we use the term (logic) program to refer to ground disjunctive programs, where $\A = \mathit{Grd}(\A)$. An interpretation $X \subseteq \A$ satisfies a rule $r \in \lp$ if and only if $H(r) \cap X \neq \emptyset$ whenever $B^+(r) \subseteq X$ and $B^-(r) \cap X = \emptyset$. $X$ satisfies $\lp$ if $X$ satisfies each rule $r \in \lp$. An interpretation $X$ is a \emph{stable model} (also called \emph{answer set}) of $\lp$ if and only if $X$ is a subset-minimal model of the Gelfond-Lifschitz reduct of $\lp$ with respect to $X$, defined as $\lp_{X} \coloneqq \{H(r) \leftarrow B^+(r) \mid X \cap B^-(r) = \emptyset, r \in \lp\}$. By $\AS$ we denote the answer sets of~$\lp$. For computing facets, we rely on two notions of consequences of a program, namely, \emph{brave} consequences $\BC \coloneqq \bigcup \AS$ and \emph{cautious} consequences $\CC \coloneqq \bigcap \AS$. \paragraph{Faceted Navigation.} \emph{Faceted answer set navigation} is characterized as a sequence of navigation steps restricting the solution space with respect to partial solutions. Those partial solutions, called \emph{facets}, correspond to ground atoms of a program $\lp$ that are contained in some, but not in every, solution. We denote the \emph{facets} of~$\Pi$ by $\F \coloneqq \FI \cup \FE$ where $\FI \coloneqq \BC \setminus \CC$ denotes \emph{inclusive facets} and $\FE \coloneqq \{\overline{\alpha} \mid \alpha \in \FI\}$ denotes \emph{exclusive facets} of $\lp$. We say an interpretation $X \subseteq \A$ satisfies an inclusive facet $f \in \FI$ if $f \in X$, which we denote by $X \models f$, and it satisfies an exclusive facet $\overline{\alpha} \in \FE$ if $\alpha \not \in X$.
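Facets are thus directly computable from one brave and one cautious enumeration pass of a solver: in \verb"clingo", the last model computed under brave (cautious) enumeration is the union (intersection) of all answer sets. A minimal sketch using the clingo Python API (the program is $\lp_1$ from Example \ref{ex:Pi1} below; this is an illustration under these assumptions, not our implementation):
\begin{verbatim}
import clingo

PROGRAM = "a | b. c | d :- b. e."    # program P1 from the running example

def consequences(mode):              # mode is "brave" or "cautious"
    ctl = clingo.Control(["0", "--enum-mode=" + mode])
    ctl.add("base", [], PROGRAM)
    ctl.ground([("base", [])])
    last = []
    with ctl.solve(yield_=True) as handle:
        for model in handle:         # last model = brave/cautious consequences
            last = model.symbols(atoms=True)
    return {str(s) for s in last}

inclusive = consequences("brave") - consequences("cautious")
print(sorted(inclusive))             # ['a', 'b', 'c', 'd']; e is cautious
\end{verbatim}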
A navigation step is a transition from one program to another, obtained by adding some integrity constraint that enforces the atom referred to by some inclusive or exclusive facet to be present or absent, respectively, throughout answer sets. By $ic(f)$ we denote the function that translates a facet $f \in \{\alpha, \overline{\alpha}\} \subseteq \F$ into a singleton program that contains its corresponding integrity constraint: \[ \mathit{ic}(f) \coloneqq \begin{cases} \{\leftarrow \mathord{\sim} \alpha\}, & \text{if } f = \alpha;\\ \{\leftarrow \alpha\}, & \text{otherwise.} \end{cases} \] Accordingly, a navigation step from $\lp$ to $\lp'$ is obtained by modifying $\lp$ such that $\lp' = \lp \cup \mathit{ic}(f)$. Faceted navigation w.r.t. some program $\lp$ is possible as long as $\F \neq \emptyset$. \citex{10.1007/978-3-319-99906-7_14} established that if $f \in \F$, then $\lp' \coloneqq \lp \cup ic(f)$ is satisfiable and $\AS[\lp'] = \{X \in \AS \mid X \models f\}$. When referring to $\AS$ as a solution space, we refer to the topological space induced by $2^{\AS}$ on $\AS$. Thus, answer set navigation means choosing among subsets of answer sets. \paragraph{Computational Complexity.} We assume that the reader is familiar with the main concepts of computational complexity theory~\cite{Papadimitriou94,AroraBarak09} and follows standard terminology in the area of counting complexity \cite{DurandHermannKolaitis05,HemaspaandraVollmer95a}. Recall that \complexityClassFont{P}\xspace and \text{\complexityClassFont{NP}}\xspace are the complexity classes of all deterministically and non-deterministically polynomial-time solvable decision problems~\cite{Cook71}, respectively. For a complexity class~$\text{C}$, \text{co-C} denotes the class of all decision problems whose complement is in $\text{C}$. We are also interested in the polynomial hierarchy~\cite{StockmeyerMeyer73,Stockmeyer76,Wrathall76} defined as follows: $\Delta^p_0 \coloneqq \Pi^p_0 \coloneqq \Sigma^p_0 \coloneqq \complexityClassFont{P}\xspace$ and $\Delta^p_i \coloneqq \complexityClassFont{P}\xspace^{\Sigma^p_{i-1}}$, $\Sigma^p_i \coloneqq \text{\complexityClassFont{NP}}\xspace^{\Sigma^p_{i-1}}$, $\Pi^p_i \coloneqq \complexityClassFont{co}\text{\complexityClassFont{NP}}\xspace^{\Sigma^p_{i-1}}$ for $i>0$, where $C^{D}$ is the class~$C$ of decision problems augmented by an oracle for some complete problem in class $D$. Further, $\complexityClassFont{PH}\xspace \coloneqq \bigcup_{k \in \mathbb{N}} \Delta^p_k$. Note that $\text{\complexityClassFont{NP}}\xspace = \Sigma^p_1$, $\complexityClassFont{co}\text{\complexityClassFont{NP}}\xspace = \Pi^p_1$, $\Sigma^p_2 = \text{\complexityClassFont{NP}}\xspace^\text{\complexityClassFont{NP}}\xspace$, and $\Pi^p_2 = \complexityClassFont{co}\text{\complexityClassFont{NP}}\xspace^\text{\complexityClassFont{NP}}\xspace$. If $\mathcal C$ is a decision complexity class then $\#\cdot\mathcal C$ is the class of all counting problems whose witness function $w$ satisfies (i) $\exists$ a polynomial $p$ such that for all $y\in w(x)$, we have that $|y|\leqslant p(|x|)$, and (ii) the decision problem ``given $x$ and $y$, is $y\in w(x)$?'' is in $\mathcal C$. A \emph{witness} function is a function $w\colon\Sigma^*\to\mathcal P^{<\omega}(\Gamma^*)$, where $\Sigma$ and $\Gamma$ are alphabets, mapping each input to a finite subset of $\Gamma^*$. Each such function is associated with the counting problem ``given $x\in\Sigma^*$, find $|w(x)|$''. \section{Routes and Navigation Modes} We introduce \emph{routes} as a notion for characterizing sequences of navigation steps.
\begin{definition}\label{def:routes} A \emph{route} $\delta$ is a finite sequence $\route[f_1, \dots, f_n]$ of facets $f_i \in \F$ with $1 \leq i \leq n$ for some $n \in \mathbb{N}$, denoting $n$ arbitrary navigation steps over $\lp$. We say $\delta$ is a \emph{subroute} of $\delta'$, denoted by $\delta \sqsubseteq \delta'$, whenever $f_i \in \delta$ implies $f_i \in \delta'$. We define $\lp^{\delta} \coloneqq \lp \cup \mathit{ic}(f_1) \cup \dots \cup \mathit{ic}(f_n)$. By $\De{}$ we denote all possible routes over $\AS$, including the empty route $\epsilon$. \end{definition} \noindent It is easy to see that any permutation of navigation steps of a fixed set of facets always leads to the same solutions. In general, different routes may lead to the same subset of answer sets. We say two routes $\delta, \delta' \in \De{}$ are equivalent if and only if $\AS[\lp^{\delta}] = \AS[\lp^{\delta'}]$. \noindent To ensure satisfiable programs, we aim to select so-called \emph{safe} routes. By $\De{s} \coloneqq \{\delta \in \De{} \mid \AS[\lp^{\delta}] \neq \emptyset\}$ we define \emph{safe routes} over $\AS$. Once an unsafe route is taken, some sort of \emph{redirection}, which relates to the notion of \emph{correction sets} \cite{10.1007/978-3-319-99906-7_14}, i.e., a route obtained by retracting conflicting facets, is required to continue navigation. For a program $\lp$, $\delta \in \De{}$, and $f \in \F$, we denote all \emph{redirections} of $\delta$ with respect to $f$ by $\RE{\delta} \coloneqq \{\delta' \sqsubseteq \delta \mid f \in \delta', \AS[\lp^{\delta'}] \neq \emptyset\} \cup \{\epsilon\}$. The following example illustrates faceted navigation. \begin{example}\label{ex:Pi1} Consider program~$\lp_1 = \{a\,|\,b; c\,|\,d \leftarrow b; e\}$. It is easy to observe that the answer sets are $\AS[\lp_1] = \{\{a, e\}$, $\{b, c, e\}$, $\{b, d,e\}\}$. Thus, we can choose from facets $\F[\lp_1] = \{a, b, c, d, \overline{a}, \overline{b}, \overline{c}, \overline{d}\}$. As illustrated in Figure~\ref{fig:gofree}, if we activate facet $a$ we land at $\AS[\lp_1^{\route[a]}] = \{\{a, e\}\}$. Activating $b$ on $\route[a]$ gives $\AS[\lp_1^{\route[a, b]}] = \emptyset$. To redirect $\route[a, b]$ we can choose from $\RE[b]{\route[a, b]} = \{\route[b]\}$. \end{example} \begin{figure} \centering \begin{tikzpicture}[ level/.style={sibling distance=34mm/#1}, >=latex, ] \node (a) {\small $\{\{a, e\}, \{b, c, e\}, \{b, d, e\}\}$} child { node (b) {\underline{\small $\{\{a, e\}\}$}} edge from parent [->] node [above left] {\small $\langle a \rangle$} } child { node (c) {\small $\{\{b, c, e\}, \{b, d, e\}\}$} child { node (d) {\underline{\small $\{\{b, c, e\}\}$}} edge from parent [->] node [above left] {\small $\langle \overline{a}, c \rangle$} (d) edge [->, dashed] node [below left] {\small $\langle \textcolor{red}{\overline{a}}, \textcolor{red}{c}, a \rangle$} (b) (c) edge [<-, dashed] node [above] {\small $\langle \textcolor{red}{a}, b\rangle$} (b) } child { node (e) {\underline{\small $\{\{b, d, e\}\}$}} edge from parent [->] node [above right] {\small $\langle \overline{a}, \overline{c} \rangle$} } edge from parent [->] node [above right] {\small $\langle \overline{a} \rangle$} }; \end{tikzpicture} \caption{Goal-oriented and free navigation on program $\lp_1$.} \label{fig:gofree} \end{figure} \noindent We consider two more notions for identifying routes that point to a unique solution. A set of facets is a delimitation, if any safe route constructible thereof leads to a unique answer set.
This means that any further step would lead to an unsafe route. \begin{definition}\label{def:delimitation} Let $\lp$ be a program and $F, F' \subseteq \F$ such that $F \coloneqq \{f_1, \dots, f_n\}$. We define $\tau(F)$ as all permutations of $\delta \coloneqq \route[f_1, \dots, f_n]$ and say $F$ is \emph{delimiting} with respect to $\lp$, if $\tau(F) \subseteq \De{s}$ and $\forall F' \supset F: \tau(F') \not \subseteq \De{s}$. By $\DF \subset 2^{\F}$ we denote the set of \emph{delimitations} over $\F$. \end{definition} \noindent We call a route consisting of delimiting facets \emph{maximal safe}. \begin{definition}\label{def:maxsafe} Let $\lp$ be a program, $F \subseteq \F$ and $\delta \in \tau(F) \subseteq \De{}$. We call $\delta$ \emph{maximal safe}, if and only if $F \in \DF$. By $\De{\mathit{ms}}$ we denote the set of maximal safe routes in $\AS$. \end{definition} \noindent In fact, each delimitation corresponds to a unique solution. \shortversion{\begin{lemma}[$\star$\footnote{Statements marked by ``$\star$'' are proven in the extended version: \url{some_url}}]} \longversion{\begin{lemma}} \label{lem:maxsafecar} Let $\lp$ be a program, $F \subseteq \F$ and $\delta \in \tau(F) \subseteq \De{}$. If $\delta \in \De{\mathit{ms}}$, then $|\AS[\lp^{\delta}]| = 1$. \end{lemma} \begin{proof}% Let $\lp$ be a program, $F, F' \subseteq \F$ and $\delta \in \tau(F) \subseteq \De{}$. Suppose $\delta \in \De{\mathit{ms}}$. Then $F \in \DF$ so that $\tau(F) \subseteq \De{s}$ and $\forall F' \supset F: \tau(F') \not \subseteq \De{s}$. Since $\tau(F) \subseteq \De{s}$, we have that $|\AS[\lp^\delta]| > 0$. Note that $\F[\lp^{\delta}] \subseteq \F$. By assumption we have $\forall F' \supset F: \tau(F') \not \subseteq \De{s}$, hence there is no facet $f \in \F \setminus F$ that can be activated without rendering $\lp^{\delta}$ unsatisfiable, so that $\F[\lp^{\delta}] = \emptyset$. Now suppose $|\AS[\lp^{\delta}]| > 1$. Then $|\F[\lp^{\delta}]| \geq |\mathcal{BC}(\lp^{\delta}) \setminus \mathcal{CC}(\lp^{\delta})| > 0$, which contradicts $\F[\lp^{\delta}] = \emptyset$ and concludes the proof. \end{proof} \begin{theorem} \label{thm:delimitations} $|\AS| = |\DF|$. \end{theorem} \begin{proof}% \leavevmode Let $\lp$ be a program and $F, F' \subseteq \F$. We need to show that $g: \DF \rightarrow \AS$ defined by $g(F) \coloneqq \bigcup \AS[\lp^{\delta}]$ such that $\delta \in \tau(F)$ is bijective. Note that $g$ is a total function, since by Definition~\ref{def:maxsafe} we have $\delta \in \De{\mathit{ms}}$ and due to Lemma~\ref{lem:maxsafecar}, if $\delta \in \De{\mathit{ms}}$, then $|\AS[\lp^{\delta}]| = 1$, so that $g(F) = \bigcup \AS[\lp^{\delta}] \in \AS$. \paragraph{Injectivity:} Let $F, F' \in \DF$, $\delta \in \tau(F)$, $\delta' \in \tau(F')$ and $X, X' \subseteq \BC$. Suppose $F \neq F'$. It is easy to see that answer sets delimited by $F, F'$ respectively are of the form $\bigcup \AS[\lp^{\delta}] = X \cup \CC$ and $\bigcup \AS[\lp^{\delta'}] = X' \cup \CC$ such that $\forall f \in F: X \models f$ and $\forall f' \in F': X' \models f'$. However, since by assumption $F, F' \in \DF$ and $F \neq F'$, there exists a facet $f'' \in F \cup F'$ that is satisfied by exactly one of $X$ and $X'$, hence $X \neq X'$, so that $\bigcup \AS[\lp^{\delta}] \neq \bigcup \AS[\lp^{\delta'}]$. Therefore by contraposition, if $g(F) = g(F')$, then $F = F'$. \paragraph{Surjectivity:} We need to show that $\forall X \in \AS \exists F \in \DF: g(F) = X$.
Let $X \in \AS$ and $F' \subseteq \FI \subseteq \BC$ be an arbitrary set of inclusive facets of $\lp$. Note that, since $F' \subseteq \BC$, we can characterize any answer set $X \in \AS$ by $X = F' \cup \CC$ for a suitable choice of $F'$. We can make the following distinction of cases: \begin{enumerate} \item Suppose $F' \neq \emptyset$. Then, since $F' \subseteq \FI$, there exists at least one route $\delta' \in \tau(F') \subseteq \De{}$ such that $\bigcup \AS[\lp^{\delta'}] = X = F' \cup \CC$. It is easy to see that we can extend $F'$ to $F''$ by adding all facets $\overline{\alpha} \in \FE$ such that $\alpha \not \in F'$, thus $X \models \overline{\alpha}$, in order to obtain a maximal safe route $\delta'' \in \tau(F'') \subset \De{ms}$, which points to $X$. Therefore $g(F'') = X$. \item Suppose $F' = \emptyset$. Then $X = \CC$. Note that $\forall \alpha \in \F: \emptyset \models \overline{\alpha} \text{ and } \emptyset \not \models \alpha$. Therefore, routes reaching $X$ must contain at least all exclusive facets $f \in \FE$ and no inclusive facets $f' \in \FI$ of $\lp$; hence we can conclude that if $\delta \in \tau(\FE)$, then $\bigcup \AS[\lp^{\delta}] = X$. It is easy to see that if a supersequence $\delta'$ of $\delta$ contains no inclusive facet, then $\delta'$ is equivalent to $\delta$, and otherwise $\delta'$ is not safe. Therefore $\delta$ has to be maximal safe and $\FE$ has to be delimiting, hence $g(\FE) = X$. \end{enumerate} Since $g$ is a bijection, we conclude $|\AS| = |\DF|$. \end{proof} As mentioned, using routes and facets, there are several ways to explore solutions. A \emph{navigation mode} is a function that prunes the solution space according to a search strategy that involves routes and facets. \begin{definition} Let $X_i \in 2^{\De{}} \cup 2^{\F}$ where $0 \leq i \leq n \in \mathbb{N}$. A \emph{navigation mode} is a function $$\nu: X_0 \times \dots \times X_n \rightarrow 2^{\AS}$$ that maps an $n$-ary Cartesian product over subsets of routes over $\lp$ and facets of $\lp$ to answer sets of $\lp$. \end{definition} \noindent The idea of \emph{free} and \emph{goal-oriented} navigation was mentioned by~\citex{10.1007/978-3-319-99906-7_14}. While free navigation follows no particular strategy, during goal-oriented navigation we narrow down the solution space. Next, we formalize the goal-oriented navigation mode. \begin{definition}\label{def:go} We define the \emph{goal-oriented} navigation mode $\vgo: \De{s} \times \F \rightarrow 2^{\AS}$ by: \[ \vgo[(\delta, f)] \coloneqq \begin{cases} \AS[\lp^{\route[\delta, f]}], & \text{ if } f \in \F[\lp^{\delta}];\\ \AS[\lp^{\delta}], & \text{otherwise.} \end{cases} \] \end{definition} \noindent As illustrated in Figure \ref{fig:gofree}, while during goal-oriented navigation (indicated by solid lines) the space is narrowed down until some unique solution (indicated by underlining) is found, in free mode (indicated by both dashed and solid lines) unsafe routes are redirected, as illustrated on route $\route[a, b]$ where $a$ is retracted. We call the effect of narrowing down the space \emph{zooming in}, the inverse effect \emph{zooming out}, and any effect where the number of solutions remains the same a \emph{slide} effect, e.g., activating $a$ on route $\route[\overline{a}, c]$. \section{Weighted Faceted Navigation} During faceted navigation, we can zoom in, zoom out, or slide. However, we do not know in advance how large the effect of activating a facet will be. Recall that different routes can lead to the same unique solution.
The activation of one facet may lead to a unique solution more quickly or less quickly than the activation of another, which means that during navigation one has no information on the length of a route. Our framework provides an approach for consciously zooming in on solutions. Introducing \emph{weighted} navigation, we characterize a navigation step with respect to the extent to which it affects the size of the solution space; thereby we can navigate toward solutions at a configurable ``pace'' of navigation, which we consider to be the extent to which the current route zooms into the solution space. The parameter that allows for this configuration is called the \emph{weight} of a facet. Weights of facets enable users to inspect effects of facets at any stage of navigation, which allows for navigating more interactively in a systematic way. Any weight or pace is associated with a \emph{weighting function} that can be defined in various ways, specifying the number of program-related objects, e.g., answer sets. \begin{definition} Let $\lp$ be a program, $\delta \in \De{}$, $f \in \F$ and $\delta' \in \RE{\delta}$. We call $\#: \{\lp^{\delta} \mid \delta \in \De{}\} \rightarrow \mathbb{N}$ a \emph{weighting function} whenever $\#(\lp^{\delta}) > 0$ holds if $|\AS| \geq 2$. The weight $\omega_{\#}$ of $f$ with respect to $\#$, $\lp^{\delta}$ and $\delta'$ is defined as: \[ \omega_{\#}(f, \lp^{\delta}, \delta') \coloneqq \begin{cases} \#(\lp^{\delta}) - \#(\lp^{\delta'}), &\text{ if } \route[\delta, f] \not \in \De{s} \text{ and } \delta' \neq \epsilon;\\ \#(\lp^{\delta}) - \#(\lp^{\route[\delta, f]}), &\text{otherwise.} \end{cases} \] \end{definition} \noindent The pace indicates the zoom-in effect of a route with respect to a weighting function. \begin{definition}\label{def:pace} Let $\lp$ be a program such that $|\AS| \geq 2$ and $\delta \in \De{s}$. We define the pace $\pace{\#}{\delta}$ of $\delta$ with respect to $\#$ as $\pace{\#}{\delta} \coloneqq \nicefrac{\#(\lp) - \#(\lp^{\delta})}{\#(\lp)}$. \end{definition} Before we instantiate weights with actual weighting functions, we identify desirable properties of weights. Most importantly, weights should indicate zoom-in effects of facets on safe routes,~i.e., a weight should identify which facets lead to a proper sub-space of answer sets. \begin{definition} We call a weight $\omega_{\#}$ \emph{safe-zooming}, whenever if $f \in \F[\lp^{\delta}]$, then $\omega_{\#}(f, \lp^{\delta}, \epsilon) > 0$ for $\delta \in \De{s}$. \end{definition} \noindent Essentially, whenever a weight is \emph{safe-zooming}, it is useful for inspecting zoom-in effects during goal-oriented navigation. \begin{definition} We call a weight $\omega_{\#}$ \emph{splitting}, if $\#(\lp^{\delta}) = \omega_{\#}( \alpha, \lp^{\delta}, \delta') + \omega_{\#}( \overline{\alpha}, \lp^{\delta}, \delta')$ for $\delta, \delta' \in \De{s}$ and $\alpha, \overline{\alpha} \in \F[\lp^{\delta}]$. \end{definition} \noindent \emph{Splitting} weights are useful during goal-oriented navigation: any permissible route $\delta$ in $\vgo$ is safe, and if $\#(\lp^{\delta})$ and the weight of a facet $f \in \F[\lp^{\delta}]$ for $\delta \in \De{s}$ are known, we can compute the weight of the respective inverse facet $f' \in \F[\lp^{\delta}]$ arithmetically and thus avoid computing $\#(\lp^{\route[\delta, f']})$.
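\noindent For instance, anticipating the absolute weight of Section~\ref{sec:aw}, where $\#$ counts answer sets, consider $\lp_1$ from Example~\ref{ex:Pi1}: $\#(\lp_1) = 3$; activating $a$ leaves one answer set and activating $\overline{a}$ leaves two, so $\omega_{\#}(a, \lp_1, \epsilon) = 3 - 1 = 2$ and $\omega_{\#}(\overline{a}, \lp_1, \epsilon) = 3 - 2 = 1$, and indeed $2 + 1 = 3 = \#(\lp_1)$, so computing one of the two weights suffices.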
\begin{definition} We call a weight $\omega_{\#}$ \emph{reliable}, whenever $\omega_{\#}{(f, \lp^{\delta}, \epsilon)} = \#(\lp^{\delta})$ if and only if $\route[\delta, f] \not \in \De{s}$ for $\delta \in \De{}$ and $f \in \F$. \end{definition} \noindent The benefit of \emph{reliable} weights, on the other hand, is that they indicate unsafe routes. Hence, reliability can be ignored during goal-oriented navigation, but appears to be useful during free navigation. As we are focused on narrowing down the solution space, we want to know whether the associated weighting function $\#$ of a weight detects maximal or minimal, respectively, zoom-in effects on safe routes. \begin{definition} Let $\lp$ be a program, $\delta \in \De{}$ and $f \in \F$. Then: \begin{itemize} \item $f$ is \emph{maximal weighted}, denoted by $f \in max_{\omega_{\#}}(\lp^{\delta})$, if $\forall f' \in \F[\lp^{\delta}]: \omega_{\#}(f, \lp^\delta, \epsilon) \geq \omega_{\#}(f', \lp^\delta, \epsilon)$; \item $f$ is \emph{minimal weighted}, denoted by $f \in min_{\omega_{\#}}(\lp^{\delta})$, if $\forall f' \in \F[\lp^{\delta}]: \omega_{\#}(f, \lp^\delta, \epsilon) \leq \omega_{\#}(f', \lp^\delta, \epsilon)$. \end{itemize} \end{definition} \noindent A weight is min-inline, if every minimal weighted facet leads to a maximal sub-space of solutions. Analogously, a weight is max-inline, if every maximal weighted facet leads to a minimal sub-space. \begin{definition} Let $\lp$ be a program, $\delta \in \De{s}$ and $f \in \F[\lp^{\delta}]$. We call a weight $\omega_{\#}$ \begin{itemize} \item \emph{min-inline}, whenever $f \in min_{\omega_{\#}}(\lp^{\delta})$ if and only if $$\forall f' \in \F[\lp^{\delta}] \setminus min_{\omega_{\#}}(\lp^{\delta}): |\AS[\lp^{\route[\delta, f]}]| > |\AS[\lp^{\route[\delta,f']}]|\text{;}$$ \item \emph{max-inline}, whenever $f \in max_{\omega_{\#}}(\lp^{\delta})$ if and only if $$\forall f' \in \F[\lp^{\delta}] \setminus max_{\omega_{\#}}(\lp^{\delta}): |\AS[\lp^{\route[\delta, f]}]| < |\AS[\lp^{\route[\delta,f']}]|.$$ \end{itemize} \end{definition} Below, we introduce the \emph{absolute} weight of a facet, which counts answer sets, and two so-called \emph{relative} weights, which aim to approximate the number of solutions in order to compare sub-spaces with respect to their actual size, while avoiding counting. \subsection{Absolute Weight}\label{sec:aw} The most natural weighting function to identify the effect of a navigation step is to observe the number of answer sets on a route. The absolute weight of a facet $f$ is defined as the number of solutions by which the solution space grows or shrinks due to the activation of $f$. \begin{definition} \label{def:aw} The \emph{absolute weight} $\omega_{\abs}$ is defined by $\#\mathcal{AS}: \lp^{\delta} \mapsto |\AS[\lp^{\delta}]|$. \end{definition} \begin{example} Let us inspect Figure~\ref{fig:gofree} and the program $\Pi_1$ from Example~\ref{ex:Pi1}. As stated by $\omega_{\abs}(a, \lp_1^{\route[\overline{a}, c]}, \route[a]) = 0$, activating $a$ on $\route[\overline{a}, c]$ induces a slide. Further, $\omega_{\abs}(b, \lp_1^{\route[a]}, \route[b]) = -1$ tells us that navigating towards $b$ on $\route[a]$ zooms out by one solution. In contrast, $\omega_{\abs}(b, \lp_1^{\route[\overline{c}]}, \route[\overline{a}]) = 1$ means that we zoom in by one solution. \end{example} \noindent By definition, the absolute weight directly reflects the effect of a navigation step and satisfies all introduced properties.
\shortversion{\begin{theorem}[$\star$]} \longversion{\begin{theorem}} \label{thm:awsplrel} The absolute weight~$\omega_{\abs}$ is safe-zooming, splitting, reliable, min-inline, and max-inline. \end{theorem} \begin{proof}% Let $\lp$ be a program. \paragraph{safe-zooming:} Follows per definition of facets and the fact that if $f \in \F$, then $\AS[\lp^{\route[f]}] = \{X \in \AS \mid X \models f\}$. \paragraph{reliable:} Let $\delta \in \De{}$ and $f \in \F[\lp^{\delta}]$. By Definition~\ref{def:aw}: \begin{align} \omega_{\abs}(f, \lp^{\delta}, \epsilon) = |\AS[\lp^{\delta}]| - |\AS[\lp^{\route[\delta, f]}]| \label{al:splrel0} \end{align} \paragraph{($\Rightarrow$)} Suppose $\omega_{\abs}(f, \lp^{\delta}, \epsilon) = |\AS[\lp^{\delta}]|$. Using~(\ref{al:splrel0}) it follows that $|\AS[\lp^{\route[\delta, f]}]| = 0$, therefore $\route[\delta, f] \not \in \De{s}$. \paragraph{($\Leftarrow$)} Suppose $\route[\delta, f] \not \in \De{s}$. By assumption $\AS[\lp^{\route[\delta, f]}] = \emptyset$, so that $|\AS[\lp^{\route[\delta, f]}]| = 0$. Therefore due to~(\ref{al:splrel0}), we conclude that $\omega_{\abs}(f, \lp^{\delta}, \epsilon) = |\AS[\lp^{\delta}]|$. \paragraph{splitting:} Let $\delta, \delta' \in \De{s}$ and $\alpha, \overline{\alpha} \in \F[\lp^{\delta}]$. Then, since if $f \in \{\alpha, \overline{\alpha}\} \subseteq \F[\lp^{\delta}]$, then $\AS[\lp^{\route[f]}] \neq \emptyset$, it follows that $\lp^{\route[\delta, \alpha]}$ and $\lp^{\route[\delta, \overline{\alpha}]}$ are satisfiable, which means that $\route[\delta, \alpha], \route[\delta, \overline{\alpha}]\in \De[\lp^{\delta}]{s}$. Thus Definition~\ref{def:aw} gives~(\ref{al:splrel0}) for $f \in \{\alpha, \overline{\alpha}\}$, respectively, so that $\delta'$ can be ignored. Define $\SA[\lp^{\delta}]{\alpha} = \{X \in \AS[\lp^{\delta}] \mid X \models \alpha\}$ and $\SA[\lp^{\delta}]{\overline{\alpha}} = \{X \in \AS[\lp^{\delta}] \mid X \models \overline{\alpha}\}$. We know that $\SA[\lp^{\delta}]{\alpha} = \AS[\lp^{\route[\delta, \alpha]}]$ and $\SA[\lp^{\delta}]{\overline{\alpha}} = \AS[\lp^{\route[\delta, \overline{\alpha}]}]$. It is easy to see that \begin{align} \SA[\lp^{\delta}]{\alpha} \text{ and } \SA[\lp^{\delta}]{\overline{\alpha}} \text{ form a partition of } \AS[\lp^{\delta}] \label{al:splrel1} \end{align} hence: \begin{align*} |\AS[\lp^{\delta}]| & = |\SA[\lp^{\delta}]{\alpha}| + |\SA[\lp^{\delta}]{\overline{\alpha}}| \\ & = |\AS[\lp^{\route[\delta, \alpha]}]| + |\AS[\lp^{\route[\delta, \overline{\alpha}]}]| \\ & = (|\AS[\lp^{\delta}]|- |\AS[\lp^{\route[\delta, \overline{\alpha}]}]|) + (|\AS[\lp^{\delta}]|- |\AS[\lp^{\route[\delta, \alpha]}]|) & & \text{(\ref{al:splrel1})} \\ & = \omega_{\abs}(\overline{\alpha}, \lp^{\delta}, \delta') + \omega_{\abs}(\alpha, \lp^{\delta}, \delta') \\ & = \omega_{\abs}(\alpha, \lp^{\delta}, \delta') + \omega_{\abs}(\overline{\alpha}, \lp^{\delta}, \delta') \end{align*} \paragraph{min-inline:} Follows directly from Definition~\ref{def:aw}. \paragraph{max-inline:} Follows directly from Definition~\ref{def:aw}. \end{proof} \noindent Unfortunately, computing absolute weights is expensive. \shortversion{\begin{lemma}[$\star$]\label{lem:compl:absw}} \longversion{\begin{lemma}\label{lem:compl:absw}} Outputting the absolute weight~$\omega_{\abs}$ for a given program~$\Pi$ and route~$\delta$ is $\#\cdot\complexityClassFont{co}\text{\complexityClassFont{NP}}\xspace$-complete. 
\end{lemma} \begin{proof}% Membership and hardness can be easily established by the complexity of counting the number of answer sets of a disjunctive program $\lp$, which is known to be \#$\cdot$coNP-complete~\cite{fichte2017answer}. \end{proof} \subsection{Relative Weights} Since computing absolute weights is computationally expensive (Lemma~\ref{lem:compl:absw}), we aim for less expensive methods that still retain the ability to compare sub-spaces with respect to their size. Therefore, we investigate two \emph{relative weights}. \paragraph{Facet Counting.} One approach to manipulating the number of solutions and to keeping track of how the number changes over the course of navigation is to count facets. \begin{definition}\label{def:rw} The \emph{facet-counting weight} $\omega_{\fc}$ is defined by $\#\mathcal{F}: \lp^{\delta} \mapsto |\F[\lp^{\delta}]|$. \end{definition} Next, we establish a positive result in terms of complexity. \longversion{Therefore, recall that}\shortversion{Recall} $\Delta^p_3 \subseteq \complexityClassFont{PH}\xspace \subseteq \complexityClassFont{P}\xspace^\complexityClassFont{\#\Ptime}\xspace$~\cite{Stockmeyer76,Toda91}. \newcommand{\at}[1]{\ensuremath{\mathcal{A}({#1})}^+} \shortversion{\begin{lemma}[$\star$]\label{lem:compl:fcw}} \longversion{\begin{lemma}\label{lem:compl:fcw}} Outputting the facet-counting weight~$\omega_{\fc}$ for a given program~$\lp$ and route~$\delta$ is in $\Delta^p_3$. \end{lemma} \begin{proof}% In fact, we obtain the membership result by the following construction. We have $|\F[\lp^{\delta}]| = |\mathcal{BC}(\lp^\delta) \setminus \mathcal{CC}(\lp^\delta)| = |\mathcal{BC}(\lp^\delta)| - |\mathcal{CC}(\lp^\delta)|$. The value of $|\mathcal{BC}(\lp^\delta)|$ is at most $|\at{\lp^\delta}|$ and we can compute $\mathcal{BC}(\lp^\delta)$ by checking for every atom~$\alpha \in \at{\lp^\delta}$ whether $\alpha$ is a brave consequence of $\lp^\delta$, which is $\Sigma^P_2$-complete~\cite{EiterGottlob95}. Similarly, we can compute $|\mathcal{CC}(\lp^\delta)|$ by checking for every atom~$\alpha \in \at{\lp^\delta}$ whether $\alpha$ is a cautious consequence of~$\lp^\delta$, which is $\Pi^P_2$-complete~\cite{EiterGottlob95}. Computing the difference of the two integers takes time $\Theta(\log n)$. \end{proof} \noindent Hence, under standard complexity-theoretic assumptions, counting facets is easier than counting solutions. However, below we show that counting facets has deficiencies when it comes to comprehending the solution space regarding its size. \shortversion{\begin{lemma}[$\star$]\label{lem:ifff0}} \longversion{\begin{lemma}\label{lem:ifff0}} $|\AS| \leq 1$ if and only if $|\F| = 0$. \end{lemma} \begin{proof}% Let $\lp$ be a program. \paragraph{($\Rightarrow$)} Suppose $|\F| > 0$. Then $\BC \neq \emptyset$, so that $|\AS| > 0$. Now, suppose $|\AS| = 1$. Then $\BC = \CC$, which means that $|\F| = 0$ and contradicts $|\F| > 0$. Therefore $|\AS| > 1$, which by contraposition concludes the proposition. \paragraph{($\Leftarrow$)} Suppose $|\F| = 0$. Then $\BC = \CC$. Due to the minimality of answer sets we conclude that therefore either $\AS = \emptyset$, so that $|\AS| = 0$, or $|\AS| = 1$. Therefore $|\AS| \leq 1$. \end{proof} \noindent From Lemma~\ref{lem:ifff0} and the fact that for program $\lp_1$ from Example~\ref{ex:Pi1} we have $\omega_{\fc}(c, \lp_{1}^{\route[\overline{a}]}, \epsilon) = |\F[\lp_{1}^{\route[\overline{a}]}]|$, but $\route[\overline{a}, c] \in \De[\lp_1]{s}$, we conclude that $\omega_{\fc}$ is not reliable.
Furthermore, it then follows that $\omega_{\fc}(c, \lp_{1}^{\route[\overline{a}]}, \epsilon) + \omega_{\fc}(\overline{c}, \lp_{1}^{\route[\overline{a}]}, \epsilon) \neq |\F[\lp_{1}^{\route[\overline{a}]}]|$, hence $\omega_{\fc}$ is not splitting either. \begin{corollary} The facet-counting weight~$\omega_{\fc}$ is not reliable and not splitting. \end{corollary} The reason for $\omega_{\fc}$ not distinguishing between one and no solution is that it is better interpreted as an indicator of how the diversity or similarity, respectively, of solutions changes by activating a facet. Accordingly, whenever a step leads to one or no solution, the reached sub-space contains least-diverse or most-similar solutions, respectively. \begin{example}\label{exm:rw1} Again consider $\Pi_1$ from Example~\ref{ex:Pi1}. While on the absolute level $\omega_{\abs}(\overline{a}, \lp_1, \epsilon) = 1 = \omega_{\abs}(\overline{c}, \lp_1, \epsilon)$, counting facets gives $\omega_{\fc}(\overline{a}, \lp_1, \epsilon) = 4$ and $\omega_{\fc}(\overline{c}, \lp_1, \epsilon) = 2$, so the relative weights of $\overline{c}$ and $\overline{a}$ differ. The reason is that even though $|\AS[\lp_{1}^{ \route[\overline{a}]}]| = |\AS[\lp_{1}^{ \route[\overline{c}]}]|$, by activating $\overline{c}$ we can still navigate towards $\F[\lp_{1}^{\route[\overline{c}]}] = \{a, \overline{a}, b, \overline{b}, d, \overline{d}\}$, but activating $\overline{a}$, we can only navigate toward $\F[\lp_{1}^{\route[\overline{a}]}] = \{c, \overline{c}, d, \overline{d}\}$, i.e., answer sets that contain $b$. \end{example} \noindent In other words, while $\#\mathcal{F}$ indicates how ``far apart'' solutions are, $\omega_{\fc}$ indicates to what extent the solutions converge due to navigation steps. \shortversion{\begin{theorem}[$\star$]\label{thm:fcwzoomin}} \longversion{\begin{theorem}\label{thm:fcwzoomin}} \longversion{The facet-counting weight~}\shortversion{Weight~}$\omega_{\fc}$ is safe-zooming. \end{theorem} \begin{proof}% Let $\lp$ be a program and $\delta \in \De{s}$. By Definition~\ref{def:rw}: \begin{align} \omega_{\fc}(f, \lp^{\delta}, \epsilon) & = |\F[\lp^{\delta}]| - |\F[\lp^{\route[\delta, f]}]| \label{al:1} \end{align} Suppose $f \in \{\alpha, \overline{\alpha}\} \subseteq \F[\lp^{\delta}]$. Then we know that $\AS[\lp^{\route[\delta, f]}] = \{X \in \AS[\lp^{\delta}] \mid X \models f\}$, so that either $\forall X \in \AS[\lp^{\route[\delta, f]}]: \alpha \in X$, or $\forall X \in \AS[\lp^{\route[\delta, f]}]: \alpha \not \in X$. Therefore either $\alpha \in \bigcap \AS[\lp^{\route[\delta, f]}] = \CC[\lp^{\route[\delta, f]}]$, or $\alpha \not \in \bigcup \AS[\lp^{\route[\delta, f]}] = \BC[\lp^{\route[\delta, f]}]$. Per definition of facets in both cases $\F[\lp^{\route[\delta, f]}] \subseteq \F[\lp^{\delta}] \setminus \{f\}$. Therefore $|\F[\lp^{\route[\delta, f]}]| < |\F[\lp^{\delta}]|$. Using (\ref{al:1}) gives $\omega_{\fc}(f, \lp^{\delta}, \epsilon) > 0$, which concludes the proof. \end{proof} \noindent Due to Theorem~\ref{thm:fcwzoomin}, we know that $\#\mathcal{F}$ can be used to determine the pace of safe navigation. In fact, the facet-counting pace $\mathcal{P}_{\#\mathcal{F}}$ emphasizes that $\omega_{\fc}$ is not directly related to the size of the solution space. \begin{example}\label{exm:rpace} Consider $\Pi_1$ from Example~\ref{ex:Pi1}.
While $|\AS[\lp_1^{\route[\overline{c}]}]| = 2$ and $|\AS[\lp_1]| = 3$, meaning that by activating $\overline{c}$ on $\lp_1$ we lose 1 of 3 solutions so that $\pace{\#\mathcal{AS}}{\route[\overline{c}]} = \nicefrac{1}{3}$, we have $\pace{\#\mathcal{F}}{\route[\overline{c}]} = \nicefrac{1}{4}$. \end{example} \noindent From Lemma~\ref{lem:ifff0}, we immediately conclude: \begin{corollary}\label{cor:fcpace} $\pace{\#\mathcal{F}}{\delta} = 1$ if and only if $\delta \in \De{\mathit{ms}}$. In contrast, for all~$\delta \in \De{s}$ we have \shortversion{$\pace{\#\mathcal{AS}}{\delta} \leq \frac{|\AS| - 1}{|\AS|}$.} \longversion{\[\pace{\#\mathcal{AS}}{\delta} \leq \frac{|\AS| - 1}{|\AS|}.\]} \end{corollary} \noindent Corollary~\ref{cor:fcpace} states that, in contrast to $\mathcal{P}_{\#\mathcal{AS}}$, the facet-counting pace $\mathcal{P}_{\#\mathcal{F}}$ detects whether users sit on a unique solution. More importantly, it is the better candidate for a practical implementation of the pace of navigation in our framework. While in that sense using the relative weight~$\omega_{\fc}$ is beneficial, unfortunately it is not \emph{min-inline}. \begin{example}\label{exm:fcwnotmin} We consider $\lp_2 = \{a\,|\,b\,|\,c;\; d\,|\,e \leftarrow b;\; f \leftarrow c\}$ where $\AS[\lp_2] = \{\{a\}, \{b, d\}, \{b, e\}, \{c, f\}\}$. While $\overline{a} \in min_{\omega_{\fc}}(\lp_2)$ and $\overline{c} \not \in min_{\omega_{\fc}}(\lp_2)$, we have $|\AS[\lp_2^{\route[\overline{a}]}]| = |\AS[\lp_2^{\route[\overline{c}]}]|$. Hence, the relative weight~$\omega_{\fc}$ is not min-inline. \end{example} \noindent We suspect that the property max-inline is not satisfied by the weight $\omega_{\fc}$, as we observed in our experiments that the activation of some facets, which had no maximal $\omega_{\fc}$ weight, led to smaller answer set spaces than the activation of facets which had maximal $\omega_{\fc}$ weight. An actual counterexample remains open. \paragraph{Supported Model Counting.} Another approach to comparing sub-spaces with respect to their size, while avoiding answer set counting, is to count supported models. An interpretation $X$ is called \emph{supported model}~\cite{apt1988towards,AlvianoD16} of $\lp$ if $X$ satisfies $\lp$ and for all $\alpha \in X$ there is a rule $r \in \lp$ such that $H(r) \cap X = \{\alpha\}$, $B^+(r) \subseteq X$ and $B^-(r) \cap X = \emptyset$. By $\mathcal{S}(\lp)$ we denote the supported models of $\lp$. It holds that $\AS \subseteq \mathcal{S}(\lp)$~\cite{marek1992relationship}, but the converse does not hold in general. We define \emph{supp weights}, short for supported-model-counting weights, as follows. \begin{definition}\label{def:sw} The supp weight $\omega_{\smc}$ is defined by $\#\mathcal{S}: \lp^{\delta} \mapsto |\mathcal{S}(\lp^{\delta})|$. \end{definition} \noindent The \emph{positive dependency graph} of program $\lp$ is $G(\lp) \coloneqq (\A, \{(\alpha_1, \alpha_0) \mid \alpha_1 \in B^+(r), \alpha_0 \in H(r), r \in \lp\})$. $\lp$ is called \emph{tight}, if $G(\lp)$ is acyclic. Supported models coincide with the models of Clark's completion; if $\lp$ is tight, then models of the completion and answer sets coincide~\cite{cois1994consistency}. \longversion{Since we have $\AS = \mathcal{S}(\lp)$ for tight programs $\lp$, we can immediately obtain the following corollary.} \begin{corollary} If $\lp$ is tight, then for all $f \in \F[\lp^{\delta}]$ we have that $\omega_{\abs}(f, \lp^{\delta}, \delta') = \omega_{\smc}(f, \lp^{\delta}, \delta')$.
\end{corollary} \noindent Due to the fact that unsatisfiable programs may have supported models~\cite{marek1992relationship}, $\omega_{\smc}$ is not reliable. Moreover, the following example shows that $\omega_{\smc}$ is neither min-inline nor max-inline. \begin{example}\label{ex:Pi3} We consider $\Pi_3=\{a;\; b\leftarrow a, \mathord{\sim} c;\; c\leftarrow\mathord{\sim} b, \mathord{\sim} d;\; d\leftarrow d\}$ with $\mathcal{S}(\Pi_3)=\{\{a,b\}, \{a,c\}, \{a,b,d\}\}$ and $\AS[\Pi_3]=\{\{a,b\},\{a,c\}\}$. The facets of $\Pi_3$ are given by $\mathcal{F}(\Pi_3)=\{b,\overline{b}, c, \overline{c}\}$. Then, the facets $b$ and $\overline{c}$ both have supp weight 1 and thus are minimal weighted, and the facets $c$ and $\overline{b}$ have supp weight 2 and thus are maximal weighted. As $|\AS[\Pi_3^{\route[b]}]|=|\AS[\Pi_3^{\route[c]}]|=1$, we see that both the minimal and the maximal weighted facets with respect to supp weights lead to sub-spaces with the same number of answer sets. Hence, $\omega_{\smc}$ is neither min-inline nor max-inline. \end{example} \noindent Although $\omega_{\smc}$ does not satisfy min-inline and max-inline, it shares some properties with $\omega_{\abs}$ and $\omega_{\fc}$. \shortversion{\begin{lemma}[$\star$]\label{lem:nonewsms}} \longversion{\begin{lemma}\label{lem:nonewsms}} \longversion{Let $\lp$ be a program and $\delta \in \De{s}$. If}\shortversion{For program~$\lp$ and $\delta \in \De{s}$, if} $f \in \F[\lp^{\delta}]$, then \shortversion{$\mathcal{S}(\lp^{\route[\delta, f]}) = \{X \in \mathcal{S}(\lp^{\delta}) \mid X \models f\}\subset \mathcal{S}(\lp^\delta)$.} \longversion{\[\mathcal{S}(\lp^{\route[\delta, f]}) = \{X \in \mathcal{S}(\lp^{\delta}) \mid X \models f\}\subset \mathcal{S}(\lp^\delta).\]} \end{lemma} \begin{proof}% Let $\lp$ be a program and $\delta \in \De{s}$. Suppose $f \in \{\alpha, \overline{\alpha}\} \subseteq \F[\lp^{\delta}]$. Then, we know that $\AS[\lp^{\route[\delta, f]}] \neq \emptyset$, so that, using the fact that $\AS \subseteq \mathcal{S}(\lp)$, we conclude that $\mathcal{S}(\lp^{\route[\delta, f]}) \neq \emptyset$. It is well known that an integrity constraint $\leftarrow \alpha$ can be encoded as a self-blocking rule $\alpha' \leftarrow \alpha, \mathord{\sim} \alpha'$ where $\alpha'$ is a newly introduced atom, so that $ic(\overline{\alpha})$ can be encoded as $\alpha' \leftarrow \alpha, \mathord{\sim} \alpha'$ and $ic(\alpha)$ as $\alpha' \leftarrow \mathord{\sim} \alpha, \mathord{\sim} \alpha'$. Hence, by definition of $\mathcal{S}(\lp)$, it is easy to see that activating $f = \alpha$ rejects any interpretation $X \in \mathcal{S}(\lp^{\delta})$ that does not contain $\alpha$. Analogously, if $f = \overline{\alpha}$, any interpretation that contains $\alpha$ is rejected. Moreover, since $f \in \F[\lp^{\delta}]$, its inverse facet is satisfied by some answer set and hence by some supported model of $\lp^{\delta}$, so the inclusion is strict. Therefore we conclude that $\mathcal{S}(\lp^{\route[\delta, f]}) = \{X \in \mathcal{S}(\lp^{\delta}) \mid X \models f\}\subset \mathcal{S}(\lp^\delta)$. \end{proof} \shortversion{\begin{theorem}[$\star$]\label{thm:smcwmininline}} \longversion{\begin{theorem}\label{thm:smcwmininline}} \longversion{The supp weight~}\shortversion{Weight~}$\omega_{\smc}$ is safe-zooming and splitting. \end{theorem} \begin{proof}% Let $\lp$ be a program, $\delta \in \De{s}$ and $f \in \F[\lp^{\delta}]$. \paragraph{safe-zooming:} Follows directly from Lemma~\ref{lem:nonewsms}. \paragraph{splitting:} Suppose $f \in \{\alpha, \overline{\alpha}\}$.
Due to Lemma~\ref{lem:nonewsms} it is easy to see that $\mathcal{S}(\lp^{\route[\delta, \alpha]})$ and $\mathcal{S}(\lp^{\route[\delta, \overline{\alpha}]})$ form a partition of $\mathcal{S}(\lp^{\delta})$, from which, analogously to the proof of the splitting property of $\omega_{\abs}$, it follows that $\omega_{\smc}$ is splitting. \end{proof} \noindent Computing supp weights is computationally easier than computing absolute weights. \shortversion{\begin{lemma}[$\star$]\label{lem:compl:smcw}} \longversion{\begin{lemma}\label{lem:compl:smcw}} Outputting the supp weight~$\omega_{\smc}$ for a given program~$\lp$ and route~$\delta$ is $\complexityClassFont{\#\Ptime}\xspace$-complete. \end{lemma} \begin{proof}% Since we can easily compute $\omega_{\smc}$ using Clark's completion~\cite{clark1978negation} and propositional model counting~\cite{Valiant79b}, and vice versa encode a SAT instance into a logic program while preserving the models~\cite{Niemela99}, we obtain membership and hardness. \end{proof} \noindent However, recalling Lemma~\ref{lem:compl:fcw}, note that counting facets is still the least expensive method. \begin{table}[h] \centering \begin{tabular}{c|c|c|c|c|c} & \it ~\texttt{saf}~ & \it ~\texttt{rel}~ & \it ~\texttt{spl}~ & ~\texttt{min}~ & ~\texttt{max} \\ \toprule $\omega_{\abs}$ & \ding{51} & \ding{51} & \ding{51} & \ding{51} & \ding{51} \\ $\omega_{\fc}$ & \ding{51} & \ding{55} & \ding{55} & \ding{55} & ?\\ $\omega_{\smc}$ & \ding{51} & \ding{55} & \ding{51} & \ding{55} & \ding{55} \\ \end{tabular} \caption{Comparing weights regarding \texttt{saf}: is safe-zooming, \texttt{spl}: is splitting, \texttt{rel}: is reliable, \texttt{min}: is min-inline and \texttt{max}: is max-inline.} \label{tab:wc} \end{table} In summary, we can characterize and compare the introduced weights as given in Table~\ref{tab:wc}. Each weight has advantages of its own, be it performance or the ability to characterize the solution space and its sub-spaces. While counting solutions is the most desirable choice, computing $\omega_{\abs}$ is hard. Our results show that, when narrowing down the space by strictly pruning the maximum/minimum number of solutions, at least for tight programs, $\omega_{\smc}$ is the best choice, as it coincides with $\omega_{\abs}$ while remaining less expensive. In general, in contrast to $\omega_{\abs}$, relative weights come with different use cases regarding their interpretation. Even though $\omega_{\fc}$ has deficiencies, it satisfies the most essential property, namely being safe-zooming, and provides information on the similarity/diversity of solutions w.r.t.\ a route. To conclude, while facet counting is the most promising method for distinguishing zoom-in effects of facets regarding computational feasibility, counting supported models of tight programs is precise about zoom-in effects. \subsection{Weighted Navigation Modes} \label{sec:wnm} In the following, we introduce two new navigation modes, called \emph{strictly goal-oriented} and \emph{explore}. They can be understood as special cases of goal-oriented navigation. \begin{definition} Let $\lp$ be a program, $\delta \in \De{s}$ and $f \in \F$.
The \emph{strictly goal-oriented} mode $\vsgo{\#}$ and the \emph{explore} mode $\vexpl{\#}$ are defined by: \[ \vsgo{\#}{(\delta, f)} \coloneqq \begin{cases} \AS[\lp^{\route[\delta, f]}], & \text{ if } f \in max_{\omega_{\#}}(\lp^{\delta}); \\ \AS[\lp^{\delta}], & \text{otherwise.} \end{cases}\] \[ \vexpl{\#}{(\delta, f)} \coloneqq \begin{cases} \AS[\lp^{\route[\delta, f]}], & \text{ if } f \in min_{\omega_{\#}}(\lp^{\delta}); \\ \AS[\lp^{\delta}], & \text{otherwise.} \end{cases} \] \end{definition} \begin{corollary} $\vsgo{\#}$ and $\vexpl{\#}$ avoid unsafe routes, hence we can use the restriction $\omega_{\#}\restrict{X}$ of $\omega_{\#}$ where $X \coloneqq \{(f, \delta, \epsilon) \mid f \in \F, \delta \in \De{s}\}$. \end{corollary} \noindent While in strictly goal-oriented mode the objective is to ``rush'' through the solution space, navigating at the highest possible pace in order to reach a unique solution as quickly as possible, explore mode keeps the user away from a unique solution as long as possible, aiming to provide her with as many solutions as possible to explore while ``strolling'' between sub-spaces. As a consequence, regardless of whether absolute or relative weights are used, during weighted navigation some (partial) solutions may be unreachable. \begin{example}\label{exm:unrsol} Consider $\lp_2$ from Example~\ref{exm:fcwnotmin} where we can choose from facets $\F[\lp_2] = \{a, b, c, d, e, f, \overline{a}, \overline{b}, \overline{c}, \overline{d}, \overline{e}, \overline{f}\}$ and $max_{\omega_{\abs}}(\lp_2) = \{a, c, d, e, f\} = max_{\omega_{\fc}}(\lp_2)$. Thus, any solution $X \in \AS[\lp_2] = \{\{a\}, \{b, d\}, \{b, e\}, \{c, f\}\}$ such that $b \in X$ is unreachable in $\vsgo{\#\mathcal{AS}}$ and $\vsgo{\#\mathcal{F}}$. \longversion{Accordingly, since}\shortversion{Since }$\omega_{\abs}$ is splitting, it follows that $min_{\omega_{\abs}}(\lp_2) = \{\overline{a}, \overline{c}, \overline{d}, \overline{e}, \overline{f}\}$. Hence, navigating in $\vexpl{\#\mathcal{AS}}$, one has to sacrifice partial solution $a$, $d$, or $e$, or both $c$ and $f$, right in the beginning. Furthermore, since $min_{\omega_{\fc}}(\lp_2) = \{\overline{a}, \overline{d}, \overline{e}\}$, right in the beginning of navigating in $\vexpl{\#\mathcal{F}}$, one has to sacrifice partial solution $a$, $d$, or $e$. \end{example} \section{Implementation and Evaluation} \begin{figure*}[t] \centering \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=\linewidth]{pc-config-fig.pdf} \caption{PC configuration.} \label{subfig:pcc} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=\linewidth]{af-st-fig.pdf} \caption{Stable extensions.} \label{subfig:afst} \end{subfigure} \begin{subfigure}{0.32\linewidth} \centering \includegraphics[width=\linewidth]{af-pr-fig.pdf} \caption{Preferred extensions.} \label{subfig:afpr} \end{subfigure} \caption{Comparing random steps in several navigation modes. The x-axis refers to the respective navigation step, the y-axis refers to the execution time in seconds. Colors in Figure~\ref{subfig:pcc} and~\ref{subfig:afst} follow the legend as given in Figure~\ref{subfig:afpr}.} \end{figure*} To study the feasibility of our framework, we implemented the \emph{faceted answer set browser} (\verb"fasb") on top of the \verb"clingo" solver.
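\noindent To make the weighted navigation modes concrete, the following minimal sketch implements the strictly goal-oriented loop with the facet-counting weight on top of \verb"clingo". It is an illustration only, not \verb"fasb"'s actual implementation, and it reuses the (assumed) \verb"consequences" helper from the sketch in the Background section.
\begin{verbatim}
# Minimal sketch of sgo-fc: repeatedly activate a facet of maximal
# facet-counting weight until a unique answer set remains.
# Not fasb's implementation; reuses consequences() from above.

def inclusive_facets(program: str) -> set:
    return (consequences(program, "brave")
            - consequences(program, "cautious"))

def ic(facet: str) -> str:
    """ic(f) as an integrity constraint in clingo syntax."""
    if facet.startswith("~"):            # exclusive facet
        return ":- " + facet[1:] + "."
    return ":- not " + facet + "."       # inclusive facet

def sgo_fc_step(program: str) -> str:
    inc = inclusive_facets(program)
    n_facets = 2 * len(inc)              # |F| = inclusive + exclusive
    best, best_weight = None, -1
    for f in inc | {"~" + a for a in inc}:
        extended = program + "\n" + ic(f)
        # omega_fc(f, P, eps) = |F(P)| - |F(P + ic(f))|
        weight = n_facets - 2 * len(inclusive_facets(extended))
        if weight > best_weight:
            best, best_weight = f, weight
    return program + "\n" + ic(best)

prog = "a ; b. c ; d :- b. e."
while inclusive_facets(prog):            # |F| = 0 iff unique solution
    prog = sgo_fc_step(prog)
\end{verbatim}
\noindent Since $\omega_{\fc}$ is safe-zooming, every iteration strictly decreases $|\F|$ while keeping the program satisfiable, so the loop terminates in a unique answer set.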
In particular, we conducted experiments on three instance sets that range from large solution spaces to complex encodings in order to verify the following two hypotheses: (\textbf{H1}) weighted faceted navigation can be performed in reasonable time in an incomprehensible solution space associated with product configuration; and (\textbf{H2}) the feasibility of our framework depends on the complexity of the given problem, i.e., program. The implementation and experiments are publicly available \cite{2021_5780050, 2021_5767981}. \paragraph{Environment.} \verb"fasb" is designed for desktop systems, enabling users to practicably explore the solution space in an interactive way. Hence, runtime was limited to 600~seconds and the experiments were run on an eight core Intel i7-10510U CPU 1.8~GHz with 16~GB of RAM, running Manjaro Linux 21.1.1 (kernel 5.10.59-1-MANJARO). Runtime was measured in elapsed time by timers in \verb"fasb" itself. \paragraph{Design of Experiment.} Currently, we lack data on real user behavior. Thus, we run three iterations of random navigation steps in each of the implemented modes to simulate a user and avoid bias regarding the choice of steps. For \emph{go}, \emph{sgo-fc}, and \emph{sgo-abs}, we use the \verb"--random-safe-walk" call, which in the provided mode performs random steps until the current route is maximal safe,~e.g., in \emph{sgo-fc} and \emph{sgo-abs} it computes maximal weighted facets and then chooses one of them to activate randomly. Since, in practice, using \emph{expl-fc} and \emph{expl-abs}, we do not necessarily aim to arrive at a unique solution, we use \verb"--random-safe-steps" for \emph{expl-fc} and \emph{expl-abs} and provide the maximum number~$n$ of steps among iterations in \emph{go}, which performs $n$~random steps in the provided mode. We measure the elapsed time for a mode to filter current facets according to its strategy; then, using the mentioned calls, we randomly select a facet thereof to activate, until we reach a unique solution or have taken $n$~steps. For any mode except \emph{go}, we ignore the elapsed time of \verb"--activate"; for \emph{go} we solely measure the elapsed time of the \verb"--activate" call, which in the case of \emph{go} includes the runtime of computing facets. \verb"fasb" computes the initial facets at startup, which are used throughout further computations, in particular when performing a first step. Thus, we add the elapsed time due to startup to the first result in each mode. \paragraph{Instances.} To study (\textbf{H1}), we inspect product configuration~\cite{2020_5777217} where users may configure PC components over a large solution space until a full configuration is obtained. To verify (\textbf{H2}), we select instances from \emph{abstract argumentation} using the ASPARTIX fixed ASP encodings~\citep{DvorakGRWW20} \verb"stable.lp" and \verb"preferred-cond-disj.dl". There, brave and cautious reasoning in abstract argumentation is of higher complexity for the preferred semantics than for the stable semantics~\cite{BaroniCG2011}. For the \emph{stable} argumentation semantics, the problems can be encoded as normal programs, whereas the preferred semantics requires disjunctive programs. As input instance, we used the abstract argumentation framework \verb"A/3/ferry2.pfile-L3-C1-06.pddl.1.cnf.apx" from the benchmark set of ICCMA'17~\cite{GagglLMW20}. For this instance, the solutions of both semantics coincide, with exactly 7696 answer sets.
\paragraph{Observations and Results.} In the beginning of PC configuration, we choose from 340 facets, resulting on average in 15 steps in \emph{go} and 13 steps in \emph{sgo-fc} to reach a unique solution. Taking 16 steps in \emph{expl-fc}, throughout all iterations the facet-counting pace of the obtained route is 9\%. The number of solutions for the respective generated benchmark \verb"pc_config" remains unknown; running \verb"clingo" for over 9 hours resulted in more than~$1.3 \cdot 10^9$~answer sets. As expected, for more than a billion solutions, \emph{sgo-abs} and \emph{expl-abs} timed out in the first step. Inspecting Figure~\ref{subfig:pcc}, we see that the \emph{sgo-fc} execution time drops significantly from Step 1 to 5, which originates in the fact that Steps 1 to 5, throughout all iterations, on average decreased the number of remaining facets by 35\%. This reduces the number of facets for which weights must be computed and leads to shorter execution times. In \emph{expl-fc}, on the other hand, throughout all iterations each step decreases the facet count by 2. Except for one outlier, this leads to slowly decreasing but generally similar execution times. Figures~\ref{subfig:afst} and~\ref{subfig:afpr} illustrate the execution times for navigation steps in the argumentation instances. As expected, we see no timeouts when navigating through 7696 stable extensions, whereas exploring the 7696 preferred extensions works only in mode \emph{go}. For preferred extensions, computing cautious consequences at startup was the most expensive operation, which supports (\textbf{H2}). From Figure~\ref{subfig:afst}, we see that \emph{go}, \emph{sgo-fc}, and \emph{expl-fc} show a similar trend to Figure~\ref{subfig:pcc}. While \emph{go} and \emph{expl-fc} remain rather steady in execution time, \emph{sgo-fc} drops in the first steps. Moreover, we observe that the execution time of \emph{expl-abs}, in contrast to \emph{expl-fc}, decreases noticeably with every step, indicating that counting fewer answer sets in each step becomes easier, whereas counting facets does not. Throughout all iterations, while \emph{sgo-fc} needs 6 steps, \emph{sgo-abs} only needs 5 steps to reach a unique solution. The significant drop between Step 1 and 2 in \emph{sgo-abs} originates in zooming in by 93\%, pruning 7152 out of 7696 solutions. \paragraph{Summary.} In general, the feasibility of weighted navigation depends on the complexity of the given problem (\textbf{H2}). Regarding product configuration, which is associated with a large and incomprehensible solution space (\textbf{H1}), weighted navigation can be performed in reasonable time using \verb"fasb". \section{Conclusion and Future Work} We provide a formal, dynamic, and flexible framework for navigating through subsets of answer sets in a systematic way. We introduce absolute and relative weights to quantify the size of the search space when reasoning under assumptions (facets), as well as natural navigation operations. In a systematic comparison, we prove which weights can be employed under the search-space navigation operations. In addition, we establish the computational complexity of computing the weights. Our framework is intended as an additional layer on top of a solver, adding functionality for systematically manipulating the size of the solution space during (faceted) answer set navigation.
Our implementation, on top of the solver \verb"clingo", demonstrates the feasibility of our framework for an incomprehensible solution space. For future work, we believe an interesting direction is to investigate relative weights that preserve the properties min-inline and max-inline. Furthermore, we aim to investigate whether supported model counting is in fact practically feasible using recent developments in propositional model counting~\cite{BM20,FichteHecherHamiti20,FichteEtAl21b,FichteHecherRoland21,KorhonenJarvisalo2021} and ASP~\cite{FichteHecher19a}. \cleardoublepage \section{Acknowledgements} The authors are listed in alphabetical order. This research was partially funded by the DFG through the Collaborative Research Center TRR 248 (see \url{https://perspicuous-computing.science}), project ID 389792660, the Bundesministerium für Bildung und Forschung (BMBF), Grant 01IS20056\_NAVAS, a Google Fellowship at the Simons Institute, and the Austrian Science Fund (FWF), Grant Y698. Work has partially been carried out while Johannes Fichte was visiting the Simons Institute for the Theory of Computing. \longversion{ \bibliographystyle{named}
\section{Keywords}: normal modes, collective behavior, evolution \maketitle \section{Introduction} Normal modes occur in every part of the universe and at all scales from nuclear physics to cosmology. They have been used to model the behavior of a variety of physical systems including atmospheres\cite{NM1}, seismic activity of the earth\cite{NM2}, global ocean behavior\cite{NM3}, vibrations of crystals\cite{NM4}, molecules\cite{NM5}, and nuclei\cite{NM6}, functional motions of proteins, viruses and enzymes\cite{NM7}, the oscillation of rotating stars\cite{NM8}, gravitational wave response\cite{NM9}, black hole oscillations\cite{NM10}, liquids\cite{NM11}, ultracold trapped gases\cite{NM12}, and cold trapped ions for quantum information processing\cite{NM13}. Reflecting the ubiquitous appearance of small vibrations in nature, normal modes resolve the complex motion of individual interacting particles into simple collective motions in which the particles move in sync with the same frequency and phase. Systems in equilibrium experiencing small perturbations tend to return to equilibrium if restorative forces are present. These restorative forces can often be approximated by effective harmonic terms that couple the $N$ particles in these systems, resulting in dynamics that can be transformed to those of $N$ uncoupled oscillators whose collective coordinates define the normal modes. The power of normal modes lies in their ability to describe the complex motion of $N$ interacting particles in terms of collective coordinates whose character and frequencies reflect the inter-particle correlations of the system, thus incorporating many-body effects into simple dynamic motions. Higher order effects can be expanded in this physically intuitive basis of normal modes. If higher order (e.g. anharmonic) effects are small, these collective motions are eigenfunctions of an approximate Hamiltonian, acquiring some measure of stability as a function of time; thus, a system in a single normal mode will have a tendency to remain in that mode until perturbed. Normal modes manifest the symmetry of this underlying approximate Hamiltonian, with the possibility of offering analytic solutions to a many-body problem and a clear physical picture of the dynamics. Confined quantum systems in the laboratory with $N$ identical interacting particles have been shown to exhibit collective behaviors thought to arise from general and powerful principles of organization\cite{anderson,anderson2,guidry,laughlin,zaanen}. In a recent paper, collective behavior in the form of $N$-body normal modes successfully described the thermodynamic behavior associated with the superfluidity of an ultracold gas of fermions in the unitary regime\cite{emergence}. Two normal modes, selected by the Pauli principle, were found to play a role in creating and stabilizing the superfluid behavior at low temperatures: a phonon mode at ultralow temperatures and a single-particle radial excitation mode, i.e. a particle-hole excitation, as the temperature increases. This radial excitation has a much higher frequency and creates a gap that stabilizes the superfluid behavior. The two normal modes were found to describe the thermodynamic behavior of this gas quite well compared to experimental data. These normal modes are the perturbation solutions at first order in inverse dimensionality of a first-principles many-body formalism called symmetry invariant perturbation theory (SPT).
This formalism uses a group theoretic approach for the solution of a fully interacting, many-body, three-dimensional Hamiltonian with an arbitrary interaction potential as well as a confining potential\cite{paperI,JMPpaper}. Using the symmetry of the symmetric group, which can be accessed at large dimension\cite{FGpaper}, this approach has successfully rearranged the many-body work needed at each order in the perturbation series so that an exact solution can, in principle, be obtained order-by-order using group theory and graphical techniques, i.e. non-numerically\cite{rearrangeprl}. Specifically, the numerical work has been rearranged into analytic building blocks that allow a formulation that does not scale with $N$\cite{JMPpaper, test, toth, rearrangeprl, complexity}. Group theory is used to partition the $N$ scaling problem away from the interaction dynamics, allowing the $N$ scaling to be treated as a separate mathematical problem (cf. the Wigner-Eckart theorem). The exponential scaling is shifted from a dependence on the number of particles, $N$, to a dependence on the order of the perturbation expansion\cite{complexity}. This allows one to obtain exact first-order results that contain beyond-mean-field effects for all values of $N$ from a single calculation, but going to higher order is now exponentially difficult. The analytic building blocks have been calculated and stored to minimize the work needed for new calculations\cite{epaps}. Since the perturbation does not involve the strength of the interaction, strongly interacting systems can be studied. Initially applying this formalism to systems of cold bosons, my group previously derived beyond-mean-field energies\cite{FGpaper,energy}, frequencies\cite{energy}, normal mode coordinates\cite{paperI}, wave functions\cite{paperI} and density profiles\cite{laingdensity} for general isotropic, interacting confined quantum systems of identical bosons. Recently I have extended this formalism to systems of fermions\cite{prl,harmoniumpra,partition,emergence}. I avoid the numerically demanding construction of explicitly antisymmetrized wave functions by applying the Pauli principle at first order ``on paper'' through the assignment of appropriate normal mode quanta\cite{prl,harmoniumpra}. I have determined ground\cite{prl} and excited state\cite{emergence} beyond-mean-field energies and their degeneracies, allowing the construction of a partition function\cite{partition}. Analytic expressions for these normal modes have been obtained in a previous paper, Ref.~\cite{paperI}. In Sections 5 and 6 of Ref.~\cite{paperI}, we discussed the symmetry of the $N$-body quantum-confinement problem at large dimension, which greatly simplifies the problem, making possible, in principle, an exact solution of this $N$-body problem with $N(N-1)/2$ interparticle interactions order-by-order. In Section 7 we exploited this symmetry, deriving symmetry coordinates used in the determination of the normal modes of the system. We introduced a particular approach to derive a suitable basis of symmetry coordinates that builds up the complexity slowly and systematically. This is illustrated in detail for each of the five types of symmetry coordinates that transform under the five different irreducible representations for a system of $N$ identical particles. In Section 8 we applied this general theory to derive in detail analytic expressions for the normal-mode coordinates.
These functions serve as a natural basis for the determination of higher order terms in the perturbation series, and they offer the possibility of a clear physical picture of the dynamics if higher order terms are small. One major advantage of this approach is that $N$ appears as a parameter in the analytical expressions for the normal modes, as well as the energy spectrum, so the behavior of these modes can be easily studied as a function of $N$. The quantum wave function yields important information about the dynamics of a system beyond that obtainable from the energy spectrum. Although explicit antisymmetrized wave functions are not currently obtained in this SPT formalism, the normal mode solutions at first order constitute a complete basis. Thus the character of the wave function may be revealed if a single normal mode is dominant, since the normal modes have clear, macroscopic motions. In this paper, I expand on our earlier discussion of these analytic normal modes, examining in detail the motion of individual particles as they contribute to the five types of normal modes for an $N$-body system of identical particles under quantum confinement. In particular, I study the evolution of collective behavior as a function of $N$ from few-body systems to many-body systems, first making general observations and then choosing as a specific case the Hamiltonian for a confined system of fermions in the unitary regime, which is known to support collective behavior. Finally, in the Appendices, I present additional details of the derivation of these normal modes. \section{Review of the N-body Normal Mode Derivation} In this Section, I briefly review the derivation of the normal modes that was presented in more detail in Ref.~\cite{paperI} as solutions to the SPT first order perturbation equation. \subsection{${\mathbf{D}}$-dimensional ${\mathbf{N}}$-body Schr\"odinger Equation}\label{sec:SE} The $D$-dimensional Schr\"odinger equation in Cartesian coordinates for a system of $N$ particles interacting via a two-body interaction potential $g_{ij}$, and confined by a spherically symmetric potential $V_{conf}$ is \begin{equation} \label{generalH} H \Psi = \left[ \sum\limits_{i=1}^{N} h_{i} + \sum_{i=1}^{N-1}\sum\limits_{j=i+1}^{N} g_{ij} \right] \Psi = E \Psi \,, \end{equation} \begin{equation} \label{generalH1} \begin{array}{rcl} h_{i} & = & -\frac{\hbar^2}{2 m_{i}}\sum\limits_{\nu=1}^{D}\frac{\partial^2}{\partial x_{i\nu}^2} + V_{\mathtt{conf}}\left(\sqrt{\sum\nolimits_{\nu=1}^{D}x_{i\nu}^2}\right) \,, \\ g_{ij} & = & V_{\mathtt{int}}\left(\sqrt{\sum\nolimits_{\nu=1}^{D}\left(x_{i\nu}-x_{j\nu} \right)^2}\right), \end{array} \end{equation} \noindent where $h_{i}$ is the single-particle Hamiltonian and $x_{i\nu}$ is the $\nu^{th}$ Cartesian component of the $i^{th}$ particle. Transforming the Schr\"odinger equation from Cartesian to internal coordinates is accomplished using: \begin{equation}\label{eq:int_coords} \renewcommand{\arraystretch}{1.5} \begin{array}{rcl} r_i & = &\sqrt{\sum_{\nu=1}^{D} x_{i\nu}^2}\,, \;\;\; (1 \le i \le N)\,, \;\;\; \\ \gamma_{ij} & = & \cos(\theta_{ij})=\left(\sum_{\nu=1}^{D} x_{i\nu}x_{j\nu}\right) / r_i r_j\,, \end{array} \renewcommand{\arraystretch}{1} \end{equation} \noindent $(1 \le i < j \le N)$\,, which are the $D$-dimensional scalar radii $r_i$ of the $N$ particles from the center of the confining potential and the cosines $\gamma_{ij}$ of the $N(N-1)/2$ angles between the radial vectors.
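As a minimal numerical illustration of the coordinate change in Eq.~(\ref{eq:int_coords}), the following sketch maps Cartesian coordinates to the internal coordinates; the random configuration is purely illustrative.
\begin{verbatim}
# Minimal sketch: radii r_i and angle cosines gamma_ij from an
# (N, D) array x of Cartesian coordinates.
import numpy as np

def internal_coordinates(x):
    r = np.linalg.norm(x, axis=1)        # r_i = sqrt(sum_nu x_inu^2)
    gram = x @ x.T                       # sum_nu x_inu * x_jnu
    gamma = gram / np.outer(r, r)        # gamma_ij = cos(theta_ij)
    iu = np.triu_indices(len(r), k=1)    # the N(N-1)/2 pairs i < j
    return r, gamma[iu]

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))              # N = 4 particles in D = 3
r, gamma = internal_coordinates(x)
\end{verbatim}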
A similarity transformation removes the first-order derivatives, while the second derivative terms drop out in the $D\to\infty$ limit, yielding a static zeroth-order problem. First-order corrections result in simple harmonic normal-mode oscillations about the infinite-dimensional structure. The Schr\"odinger equation becomes\cite{avery} $(T+V)\, \Phi = E \,\Phi$, where: \begin{widetext} \begin{equation} T = {\displaystyle \hbar^2 \sum\limits_{i=1}^{N}\Biggl[-\frac{1}{2 m_i}\frac{\partial^2}{{\partial r_i}^2}- \frac{1}{2 m_i r_i^2} \sum\limits_{j\not=i}\sum\limits_{k\not=i} \frac{\partial}{\partial\gamma_{ij}}(\gamma_{jk}-\gamma_{ij} \gamma_{ik}) \frac{\partial}{\partial\gamma_{ik}} {\displaystyle +\frac{N(N-2)+(D-N-1)^2 \left( \frac{\Gamma^{(i)}}{\Gamma} \right) }{8 m_i r_i^2} \Biggr] } \label{eq:SE_T} \end{equation} \end{widetext} \begin{equation} V=\sum\limits_{i=1}^{N}V_{\mathtt{conf}}(r_i)+ \sum\limits_{i=1}^{N-1}\sum\limits_{j=i+1}^{N} V_{\mathtt{int}}(r_{ij}) \,, \end{equation} $\Gamma$ is the Gramian determinant of the matrix with elements $\gamma_{ij}$ (see Appendix D in Ref.~\cite{FGpaper}), and $\Gamma^{(i)}$ is the determinant of the $i^{th}$ principal minor, in which the row and column of the $i^{th}$ particle have been deleted. The quantity $r_{ij}=\sqrt{r_{i}^2+r_{j}^2-2r_{i}r_{j}\gamma_{ij}}$ is the interparticle separation. From Eq.~(\ref{eq:SE_T}), it is clear that all first-order derivatives have been eliminated from the Hamiltonian. \subsection{Infinite-${\mathbf{D}}$ analysis: Zeroth-order energy}\label{sec:infD} Dimensionally scaled variables are defined: $\bar{r}_i = r_i/\kappa(D), \,\, \bar{E} = \kappa(D) E$ and $\bar{H} = \kappa(D)H$, where $\kappa(D)$ is a scale factor that regularizes the large-dimension limit of the Schr\"odinger equation. The exact form of $\kappa(D)$ is not fixed, but can be chosen to yield scaling results that are as simple as possible while satisfying $\kappa(D) \sim D^2$. Examples of $\kappa(D)$ for different systems are given after Eq.~(10) of Ref.~\cite{laingdensity}. The factor of $\kappa(D)$ plays the role of an effective mass that increases with $D$, suppressing the derivative terms but leaving a centrifugal-like term in an effective potential, \begin{eqnarray} \label{veff} \bar{V}_{\mathtt{eff}}(\bar{r},\gamma;\delta=0)&=&\sum\limits_{i=1}^{N}\left(\frac{\hbar^2}{8 m_i \bar{r}_i^2}\frac{\Gamma^{(i)}}{\Gamma}+\bar{V}_{\mathtt{conf}}(\bar{r},\gamma;\delta=0)\right)\nonumber \\ && +\sum\limits_{i=1}^{N-1}\sum\limits_{j=i+1}^{N} \bar{V}_{\mathtt{int}}(\bar{r},\gamma;\delta=0)\,, \end{eqnarray} where $\delta=1/D$, and the particles become frozen at large $D$. We assume all radii and angle cosines of the particles are equal when $D\to\infty$, i.e. $\bar{r}_{i}=\bar{r}_{\infty} \;\; (1 \le i \le N)$ and $\gamma_{ij}=\overline{\gamma}_\infty \;\; (1 \le i < j \le N)$ where $\bar{r}_{\infty}$ and $\overline{\gamma}_\infty$ satisfy: \begin{equation} \label{minimum1} \left[ \frac{\partial \bar{V}_{\mathtt{eff}}(\bar{r},\gamma;\delta)}{\partial \bar{r}_{i}} \right]_{\delta=0}=0,\,\,\,\,\,\,\, \left[ \frac{\partial \bar{V}_{\mathtt{eff}}(\bar{r},\gamma;\delta)}{\partial \gamma_{ij}}\right]_{\delta=0}=0, \end{equation} resulting in a maximally symmetric structure.
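The ratio $\Gamma^{(i)}/\Gamma$ in the centrifugal-like term of $\bar{V}_{\mathtt{eff}}$ is easy to evaluate for an explicit configuration. A minimal sketch, assuming the maximally symmetric configuration $\gamma_{ij}=\overline{\gamma}$ with an arbitrary illustrative value of $\overline{\gamma}$, is:

\begin{verbatim}
import numpy as np

def gramian_ratio(gamma, i):
    # Gamma^(i)/Gamma: principal minor with row/column i deleted,
    # divided by the full Gramian determinant of [gamma_ij].
    Gamma = np.linalg.det(gamma)
    minor = np.delete(np.delete(gamma, i, axis=0), i, axis=1)
    return np.linalg.det(minor) / Gamma

N, g = 5, 0.2   # illustrative values
gamma = np.full((N, N), g) + (1.0 - g) * np.eye(N)
print([gramian_ratio(gamma, i) for i in range(N)])
# identical for every i, as required by the S_N symmetry
\end{verbatim}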
In scaled units the zeroth-order ($D\to\infty$) approximation for the energy is $\bar{E}_{\infty}=\bar{V}_{\mathtt{eff}}(\bar{r}_{\infty})$, while the centrifugal-like term in $\bar{V}_{\mathtt{eff}}$, which is nonzero even for the ground state, is a zero-point energy contribution from the minimum uncertainty principle\cite{chat}. \subsection{The ${\mathbf{1/D}}$ first-order energy correction}\label{sec:firstorder} At zeroth order, the particles can be viewed as frozen in a maximally symmetric, high-$D$ configuration. Solving Eqs.~(\ref{minimum1}) for $\bar{r}_{\infty}$ and $\overline{\gamma}_\infty$ yields the infinite-$D$ structure and zeroth-order energy, providing the starting point for the $1/D$ expansion. In order to determine the $1/D$ quantum correction to the energy for large but finite values of $D$, we expand about the minimum of the $D\to\infty$ effective potential. A position vector of the $N(N+1)/2$ internal coordinates is defined as: \begin{equation}\label{eq:ytranspose} \bar{\bm{y}} = \left( \begin{array}{c} \bar{\bm{r}} \\ \bm{\gamma} \end{array} \right) \,, \;\;\; \mbox{where} \;\;\; \bar{\bm{r}} = \left( \begin{array}{c} \bar{r}_1 \\ \bar{r}_2 \\ \vdots \\ \bar{r}_N \end{array} \right) \;\;\; \mbox{and} \;\;\; \bm{\gamma} = \left( \begin{array}{c} \gamma_{12} \\ \cline{1-1} \gamma_{13} \\ \gamma_{23} \\ \cline{1-1} \gamma_{14} \\ \gamma_{24} \\ \gamma_{34} \\ \cline{1-1} \gamma_{15} \\ \gamma_{25} \\ \vdots \\ \gamma_{N-2,N} \\ \gamma_{N-1,N} \end{array} \right) \,. \end{equation} \noindent The following substitutions are made for all radii and angle cosines: $\bar{r}_{i} = \bar{r}_{\infty}+\delta^{1/2}\bar{r}'_{i}$ and $\gamma_{ij} = \overline{\gamma}_{\infty}+\delta^{1/2}\overline{\gamma}'_{ij}$, and the effective potential is expanded in a power series in $\delta^{1/2}$ about the $D\to\infty$ symmetric minimum, where the gradient vanishes: $\left[ \frac{\partial \bar{V}_{\mathtt{eff}}}{\partial \bar{y}_{\mu}} \right]_{\delta^{1/2}=0} = 0$. Defining a displacement vector consisting of the internal displacement coordinates: \begin{equation}\label{eq:ytransposeP} \bar{\bm{y}}' = \left( \begin{array}{c} \bar{\bm{r}}' \\ \overline{\bm{\gamma}}' \end{array} \right) \,, \;\;\; \mbox{where} \;\;\; \bar{\bm{r}}' = \left( \begin{array}{c} \bar{r}'_1 \\ \bar{r}'_2 \\ \vdots \\ \bar{r}'_N \end{array} \right) \;\;\; \mbox{and} \;\;\; \overline{\bm{\gamma}}' = \left( \begin{array}{c} \overline{\gamma}'_{12} \\ \cline{1-1} \overline{\gamma}'_{13} \\ \overline{\gamma}'_{23} \\ \cline{1-1} \overline{\gamma}'_{14} \\ \overline{\gamma}'_{24} \\ \overline{\gamma}'_{34} \\ \cline{1-1} \overline{\gamma}'_{15} \\ \overline{\gamma}'_{25} \\ \vdots \\ \overline{\gamma}'_{N-2,N} \\ \overline{\gamma}'_{N-1,N} \end{array} \right) \,, \end{equation} \noindent the expression for $\bar{V}_{\mathtt{eff}}$ becomes: \begin{eqnarray} \lefteqn{\bar{V}_{\mathtt{eff}}({\bar{\bm{y}}'}; \hspace{1ex} \delta) = \left[ \bar{V}_{\mathtt{eff}} \right]_{\delta^{1/2}=0} } \nonumber \\ && + \frac{1}{2} \, \delta \left\{ \sum\limits_{\mu=1}^{P} \sum\limits_{\nu=1}^{P} \bar{y}'_{\mu} \left[\frac{\partial^2 \bar{V}_{\mathtt{eff}}}{\partial \bar{y}_{\mu} \partial \bar{y}_{\nu}}\right]_{\delta^{1/2}=0} \hspace{-1.5em} \bar{y}'_{\nu} + v_o \right\} + O\left(\delta^{3/2}\right) \,, \nonumber \\ \label{Taylor} \end{eqnarray} \noindent where $P =N(N+1)/2$ is the number of internal coordinates and $v_o = \left[ \frac{\partial \bar{V}_{\mathtt{eff}}}{\partial \delta}\right]_{\delta^{1/2}=0}$.
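For a concrete Hamiltonian, Eqs.~(\ref{minimum1}) can be solved numerically by minimizing the symmetric effective potential of Eq.~(\ref{veff}) over $(\bar{r},\overline{\gamma})$. The sketch below is a hedged illustration only: the harmonic trap, the screened-Coulomb repulsion and the scaled units $\hbar=m=1$ are assumptions chosen for definiteness, not the specific potentials of this paper.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def v_eff(params, N):
    rbar, gamma = params
    G = np.full((N, N), gamma) + (1 - gamma) * np.eye(N)
    minor = np.delete(np.delete(G, 0, axis=0), 0, axis=1)
    centrif = np.linalg.det(minor) / np.linalg.det(G) / (8 * rbar**2)
    r_ij = rbar * np.sqrt(max(2 - 2 * gamma, 1e-12))
    v_conf = 0.5 * rbar**2            # assumed harmonic confinement
    v_int = np.exp(-r_ij) / r_ij      # assumed repulsive interaction
    return N * (centrif + v_conf) + 0.5 * N * (N - 1) * v_int

N = 10
res = minimize(v_eff, x0=[1.0, 0.1], args=(N,),
               bounds=[(1e-3, None), (-1/(N-1) + 1e-6, 1 - 1e-6)])
r_inf, gamma_inf = res.x
print(r_inf, gamma_inf, res.fun)   # rbar_inf, gamma_inf, Ebar_inf
\end{verbatim}

The bound on $\gamma$ keeps the Gramian matrix positive definite; the minimum value returned is the zeroth-order energy $\bar{E}_{\infty}$.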
The derivative terms in the kinetic energy have a similar series expansion: \begin{equation}\label{eq:T} {\mathcal T}=-\frac{1}{2} \delta \sum\limits_{\mu=1}^{P} \sum\limits_{\nu=1}^{P} {G}_{\mu\nu} \partial_{\bar{y}'_{\mu}} \partial_{\bar{y}'_{\nu}} + O\left(\delta^{3/2}\right), \end{equation} where ${\mathcal T}$ is the derivative portion of the kinetic energy $T$ (see Eq.~(\ref{eq:SE_T})). Thus, determining the energy at first order is reduced to a harmonic problem, which is solved by obtaining the normal modes of the system. From Eqs.~(\ref{Taylor}) and (\ref{eq:T}), $\bm{G}$ and ${\bf F}$, both constant matrices, are defined in the first-order $\delta=1/D$ Hamiltonian below: \begin{equation} \label{eq:Gham} \widehat{H}_1=-\frac{1}{2} {\partial_{\bar{y}'}}^{T} {\bm G} {\partial_{\bar{y}'}} + \frac{1}{2} \bar{\bm{y}}^{\prime T} {\bm F} {{\bar{\bm{y}}'}} + v_o \,. \end{equation} \subsection{FG Matrix Method for the Normal Mode Frequencies and Coordinates} The FG matrix method\cite{dcw} is used to obtain the normal-mode vibrations and the harmonic-order energy correction. A review of the FG matrix method is presented in Appendix A of Ref.~\cite{paperI}, but a brief summary is given below. The $b^{\rm th}$ normal mode coordinate may be written as (Eq.~(A9) Ref.~\cite{paperI}) \begin{equation} \label{eq:qyt} [{\bm q'}]_b = {\bm{b}}^T {\bar{\bm{y}}'} \,, \end{equation} where the coefficient vector ${\bm{b}}$ satisfies the eigenvalue equation (Eq.~(A10) Ref.~\cite{paperI}) \begin{equation} \label{eq:FGit} {\bf F} \, \bm{G} \, {\bm{b}} = \lambda_b \, {\bm{b}} \end{equation} with the resultant secular equation (Eq.~(A11) Ref.~\cite{paperI}) $\det({\bf F}\bm{G}-\lambda{\bf I})=0$\,. The coefficient vector also satisfies the normalization condition (Eq.~(A12) Ref.~\cite{paperI}) ${\bm{b}}^T \bm{G} \, {\bm{b}} = 1$\,, and the frequencies follow from $\lambda_b=\bar{\omega}_b^2$ (Eq.~(A3) Ref.~\cite{paperI}). In an earlier paper\cite{FGpaper}, we solved these equations for the frequencies. The number of roots $\lambda$ is equal to $P \equiv N(N+1)/2$. However, due to the $S_N$ symmetry (see Ref.~\cite{hamermesh} and Appendix~\ref{app:Char}), there is a simplification to only five distinct roots, $\lambda_{\mu}$, where $\mu$ runs over ${\bf 0}^-$, ${\bf 0}^+$, ${\bf 1}^-$, ${\bf 1}^+$, and ${\bf 2}$ (see Refs.~\cite{FGpaper, loeser}). Thus the energy through first order in $\delta=1/D$ (see Eq.~(\ref{eq:E1})) can be written in terms of the five distinct normal-mode vibrational frequencies\cite{FGpaper}: \begin{equation} \overline{E} = \overline{E}_{\infty} + \delta \Biggl[ \sum_{\mu = \{\bm{0}^\pm,\, \bm{1}^\pm,\, \bm{2}\}} (n_{\mu}+\frac{1}{2} d_{\mu}) \bar{\omega}_{\mu} \, + \, v_o \Biggr] \,, \label{eq:E1} \end{equation} \noindent where $\overline{E}_{\infty}$ is the energy minimum as $\delta \rightarrow 0$, $n_{\mu}$ is the total number of quanta in the normal mode with frequency $\bar{\omega}_{\mu}$; $\mu$ is a label which runs over the five types of normal modes, ${\bf 0}^-$\,, ${\bf 0}^+$\,, ${\bf 1}^-$\,, ${\bf 1}^+$\,, and ${\bf 2}$\, (irrespective of the particle number; see Ref.~\cite{FGpaper} and Ref.~[15] in \cite{paperI}), and $v_o$ is a constant (defined above and in Eq.~(125) of Ref.~\cite{FGpaper}).
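In practice, the eigenvalue problem of Eq.~(\ref{eq:FGit}) with the normalization ${\bm{b}}^T \bm{G} \, {\bm{b}} = 1$ is a few lines of linear algebra. The sketch below uses random symmetric positive-definite stand-ins for $\bm{F}$ and $\bm{G}$; a real calculation would instead evaluate the matrix elements of Eq.~(\ref{eq:Gham}) at the large-$D$ symmetric minimum.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
P = 6                             # P = N(N+1)/2 internal coordinates
A = rng.normal(size=(P, P)); F = A @ A.T + P * np.eye(P)
B = rng.normal(size=(P, P)); G = B @ B.T + P * np.eye(P)

lam, b = np.linalg.eig(F @ G)     # F G b = lambda b
lam, b = lam.real, b.real         # FG is similar to a symmetric
                                  # matrix, so the spectrum is real
for k in range(P):
    b[:, k] /= np.sqrt(b[:, k] @ G @ b[:, k])   # b^T G b = 1
print(np.sort(np.sqrt(lam)))      # frequencies omega_bar_b
\end{verbatim}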
The multiplicities of the five roots are: $d_{{\bf 0}^+} = 1, \hspace{1ex} d_{{\bf 0}^-} = 1,\; d_{{\bf 1}^+} = N-1,\; d_{{\bf 1}^-} = N-1,\; d_{{\bf 2}} = N(N-3)/2$. \subsection{Symmetry of the $\bm{F}$, $\bm{G}$ and $\bm{FG}$ Matrices} \label{sec:symm} The large degeneracy of the frequencies indicates a very high degree of symmetry, which is manifested in the $\bm{F}$\,, $\bm{G}$\,, and $\bm{FG}$ matrices, which are $P \times P$ matrices. The $S_N$ symmetry of these matrices, whose elements are evaluated for the maximally symmetric structure at large dimension, allows them to be written in terms of six simple submatrices that are invariant under $S_N$ (see Ref.~\cite{FGpaper}). The number of $r_i$ coordinates is $N$ and the number of $\gamma_{ij}$ coordinates is $N(N-1)/2$\,. These matrices are invariant under interchange of the particles, effected by the symmetric group $S_N$\cite{FGpaper}. We can thus write the $\bm{F}$, $\bm{G}$ and $\bm{FG}$ matrices with the following structure: \begin{eqnarray} {\bf F}&=&\left(\begin{array}{cc} {\bf F}_{\bar{\bm{r}}' \bar{\bm{r}}'} & {\bf F}_{\bar{\bm{r}}' \overline{\bm{\gamma}}'} \\ {\bf F}_{\overline{\bm{\gamma}}' \bar{\bm{r}}'} & {\bf F}_{\overline{\bm{\gamma}}' \overline{\bm{\gamma}}'} \end{array} \right) \,\,\,\,\,\, {\bf G}=\left(\begin{array}{cc} {\bf G}_{\bar{\bm{r}}' \bar{\bm{r}}'} & {\bf G}_{\bar{\bm{r}}' \overline{\bm{\gamma}}'} \\ {\bf G}_{\overline{\bm{\gamma}}' \bar{\bm{r}}'} & {\bf G}_{\overline{\bm{\gamma}}' \overline{\bm{\gamma}}'} \\ \end{array} \right) \label{eq:G} \\ {\bf FG}&=&\left(\begin{array}{cc} {\bf FG}_{\bar{\bm{r}}' \bar{\bm{r}}'} & {\bf FG}_{\bar{\bm{r}}' \overline{\bm{\gamma}}'} \\ {\bf FG}_{\overline{\bm{\gamma}}' \bar{\bm{r}}'} & {\bf FG}_{\overline{\bm{\gamma}}' \overline{\bm{\gamma}}'} \end{array} \right) \label{eq:FG} \end{eqnarray} The structure of these matrices results in highly degenerate eigenvalues and causes a reduction from a possible $P=N(N+1)/2$ distinct frequencies to just five distinct frequencies for $L = 0$ systems. \subsection{Symmetry Coordinates} \label{subsec:symnorm} The $\bm{FG}$ matrix is invariant under $S_N$\,, so it does not connect subspaces belonging to different irreducible representations of $S_N$\cite{WDC}. Thus from Eqs.~(\ref{eq:qyt}) and (\ref{eq:FGit}) the normal coordinates must transform under irreducible representations of $S_N$\,. The normal coordinates will be linear combinations of the elements of the internal coordinate displacement vectors $\bar{\bm{r}}'$ and $\overline{\bm{\gamma}}'$, which transform under reducible matrix representations of $S_N$\,, each spanning the corresponding carrier space (see Appendix~\ref{app:Char}). The radial displacement coordinate $\bar{\bm{r}}'$ transforms under a reducible representation that reduces to one $1$-dimensional irreducible representation labelled by the partition $[N]$ (the partition denotes a corresponding Young diagram of an irreducible representation; see Appendix~\ref{app:Char}) and one $(N-1)$-dimensional irreducible representation labelled by the partition $[N-1, \hspace{1ex} 1]$. The angular displacement coordinate $\overline{\bm{\gamma}}'$ transforms under a reducible representation that reduces to one $1$-dimensional irreducible representation labelled by the partition $[N]$, one $(N-1)$-dimensional irreducible representation labelled by the partition $[N-1, \hspace{1ex} 1]$ and one $N(N-3)/2$-dimensional irreducible representation labelled by the partition $[N-2, \hspace{1ex} 2]$.
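As a quick consistency check (easily done symbolically), the five multiplicities exhaust the $P=N(N+1)/2$ roots, and the stated decompositions account for all $N$ radii and all $N(N-1)/2$ angle cosines:

\begin{verbatim}
from sympy import symbols, expand

N = symbols('N')
d = [1, 1, N - 1, N - 1, N*(N - 3)/2]     # the five multiplicities
print(expand(sum(d) - N*(N + 1)/2))       # 0: all P roots accounted for
print(expand(1 + (N - 1) - N))            # 0: radial, [N] + [N-1,1]
print(expand(1 + (N - 1) + N*(N - 3)/2
             - N*(N - 1)/2))              # 0: angular decomposition
\end{verbatim}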
We define the symmetry coordinate vector, $\bm{S}$, as: \begin{equation}\label{eq:trial} \bm{S} = \left( \begin{array}{l} {\bm{S}}_{\bar{\bm{r}}'}^{[N]} \\ {\bm{S}}_{\overline{\bm{\gamma}}'}^{[N]} \\ {\bm{S}}_{\bar{\bm{r}}'}^{[N-1, \hspace{1ex} 1]} \\ {\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-1, \hspace{1ex} 1]} \\ {\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2, \hspace{1ex} 2]} \end{array} \right) = \left( \begin{array}{l}W_{\bar{\bm{r}}'}^{[N]} \, \bar{\bm{r}}' \\ W_{\overline{\bm{\gamma}}'}^{[N]} \, \overline{\bm{\gamma}}' \\ W_{\bar{\bm{r}}'}^{[N-1, \hspace{1ex} 1]} \bar{\bm{r}}'\\ W_{\overline{\bm{\gamma}}'}^{[N-1, \hspace{1ex} 1]} \, \overline{\bm{\gamma}}' \\ W_{\overline{\bm{\gamma}}'}^{[N-2, \hspace{1ex} 2]} \, \overline{\bm{\gamma}}' \end{array} \right) \,, \end{equation} where the $W_{\bar{\bm{r}}'}^{[\alpha]}$ and the $W_{\overline{\bm{\gamma}}'}^{[\alpha]}$ are the transformation matrices. Ref.~\cite{paperI} shows, using the theory of group characters, how to decompose $\bar{\bm{r}}'$ and $\overline{\bm{\gamma}}'$ into basis functions that transform under these five irreducible representations of $S_N$. The process used in Ref.~\cite{paperI} to determine the symmetry coordinates, and hence the $W_{\bar{\bm{r}}'}^{[\alpha]}$ and $W_{\overline{\bm{\gamma}}'}^{[\alpha]}$ matrices, was chosen to ensure that the $W$ matrices satisfy the orthogonality restrictions between different irreducible representations. This process also ensured that the sets of coordinates transforming irreducibly under $S_N$ have the simplest functional forms possible. One of the symmetry coordinates was chosen to describe the simplest motion possible under the requirement that it transforms irreducibly under $S_N$. The succeeding symmetry coordinate was then chosen to have the next simplest possible functional form that transforms irreducibly under $S_N$, and so on. In this way the complexity of the motions described by the symmetry coordinates was minimized, building up slowly as more symmetry coordinates of a given species were added as $N$ increased, with no disruption of the lower-$N$ symmetry coordinates. This method of determining the symmetry coordinate basis is not unique, but was chosen to minimize the complexity. \subsection{Symmetry Coordinates and Transformation Matrices} The five transformation matrices and the symmetry coordinates for the five irreducible representations are: \begin{equation} \label{eq:WNeqsqrtN1} [W^{[N]}_{\bar{\bm{r}}'}]_i = \frac{1}{\sqrt{N}} \, [{\bm{1}}_{\bar{\bm{r}}'}]_i \;,\;\; \;\;\; {\bm{S}}_{\bar{\bm{r}}'}^{[N]} = \frac{1}{\sqrt{N}} \, \sum_{i'=1}^N \overline{r}'_{i'} \,. \end{equation} \begin{eqnarray} [W^{[N]}_{\overline{\bm{\gamma}}'}]_{ij}&=& \sqrt{\frac{2}{N(N-1)}} \,\, [{\bm{1}}_{\overline{\bm{\gamma}}'}]_{ij} \label{eq:WNgeqsqrt2ontnm11} \\ \mbox{and}\,\,\,\,\,\, {\bm{S}}_{\overline{\bm{\gamma}}'}^{[N]}&=& \sqrt{\frac{2}{N(N-1)}} \,\,\, \sum_{j'=2}^N \sum_{i' < j'} \overline{\gamma}'_{i'j'} \,, \label{eq:SNm0} \end{eqnarray} where $[{\bm{1}}_{\bar{\bm{r}}'}]_i= 1 \;\; \forall \;\; 1 \leq i \leq N \;\;\; \mbox{and} \;\;\; [{\bm{1}}_{\overline{\bm{\gamma}}'}]_{ij} = 1 \;\; \forall \;\; 1 \leq i,j \leq N \,$.
\begin{eqnarray} \label{eq:WNm1r} [W^{[N-1, \hspace{1ex} 1]}_{\bar{\bm{r}}'}]_{\xi i}&=& \frac{1}{\sqrt{\xi(\xi+1)}} \left( \sum_{m=1}^\xi \delta_{mi} - \xi \delta_{\xi+1,\, i} \right) \end{eqnarray} \begin{equation} \label{eq:SNm1} [{\bm{S}}_{\bar{\bm{r}}'}^{[N-1, \hspace{1ex} 1]}]_\xi = \frac{1}{\sqrt{\xi(\xi+1)}} \left( \sum_{k'=1}^\xi \bar{r}'_{k'} - \xi \bar{r}'_{\xi+1} \right)\,, \end{equation} \noindent where $1 \leq \xi \leq N-1$ and $1 \leq i \leq N$\,. \begin{widetext} \begin{equation} \label{eq:SNm2} \begin{array}{rcl} {[W^{[N-1, \hspace{1ex} 1]}_{\overline{\bm{\gamma}}'}]_{\xi,\,ij}} &=& {\frac{1}{\sqrt{\xi(\xi+1)(N-2)}}} \, \bigg( \big( \Theta_{\xi-i+1} \, [{\bm{1}}_{\bar{\bm{r}}'}]_j + \Theta_{\xi-j+1} \, [{\bm{1}}_{\bar{\bm{r}}'}]_i \big) - \xi \big( \delta_{\xi+1,\, i} \, [{\bm{1}}_{\bar{\bm{r}}'}]_j + \delta_{\xi+1,\, j} \, [{\bm{1}}_{\bar{\bm{r}}'}]_i \big) \bigg) \,,\\ {[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-1, \hspace{1ex} 1]}]_\xi} &=& {\displaystyle \frac{1}{\sqrt{\xi(\xi+1)(N-2)}} \, \left( \left[ \sum_{l' = 2}^\xi \, \sum_{k'=1}^{l'-1} \overline{\gamma}'_{k'l'} + \sum_{k' = 1}^\xi \, \sum_{l'=k'+1}^{N} \hspace{-1ex} \overline{\gamma}'_{k'l'} \right] - \xi \left[ \sum_{k'=1}^\xi \overline{\gamma}'_{k',\,\xi+1} + \sum_{l'=\xi+2}^N \hspace{-0.5ex} \overline{\gamma}'_{\xi+1,\,l'} \right] \right) \,. } \end{array} \renewcommand{\arraystretch}{1} \end{equation} \end{widetext} \noindent where $1 \leq \xi \leq N-1$ and $1 \leq i < j \leq N$\,. \begin{widetext} \begin{equation} \label{eq:SNm3} \begin{split} {[W^{[N-2, \hspace{1ex}2]}_{\overline{\bm{\gamma}}'}]_{ij,\,mn}} & = \frac{1}{\sqrt{i(i+1)(j-3)(j-2)}} \, \Bigl( (\Theta_{i-m+1} - i \delta_{i+1,\,m}) (\Theta_{j-n} -(j-3)\delta_{jn}) + (\Theta_{i-n+1} - i \delta_{i+1,\,n})\\ & \times (\Theta_{j-m}-(j-3)\delta_{jm}) \Bigr) \, \\ {[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2, \hspace{1ex} 2]}]_{ij}} & = \frac{1}{\sqrt{i(i+1)(j-3)(j-2)}} \, \Bigl( \vphantom{\sum_{k=1}^{[j'-1, i]_{min}} \hspace{-2ex} \overline{\gamma}'_{kj'}} \sum_{j'=2}^{j-1} \sum_{k=1}^{[j'-1, i]_{min}} \hspace{-2ex} \overline{\gamma}'_{kj'} + \sum_{k=1}^{i-1} \sum_{j'=k+1}^i \overline{\gamma}'_{kj'} - (j-3) \sum_{k=1}^i \overline{\gamma}'_{kj} \\ & - i \sum_{k=1}^{i} {\overline{\gamma}}'_{k,(i+1)} - i \sum_{j'=i+2}^{j-1} {\overline{\gamma}}'_{(i+1),j'} + i (j-3) {\overline{\gamma}}'_{(i+1),j} \Bigr) \,, \end{split} \end{equation} \end{widetext} \noindent where $1 \leq i \leq j-2$ and $4 \leq j \leq N$ and $1 \leq m < n \leq N$\,. We define the Heaviside step function as: \begin{equation} \label{eq:Heaviside} \begin{array}{r@{\hspace{1ex}}l} {\displaystyle \Theta_{i-j+1} = \sum_{m=1}^i \delta_{mj} } & = 1 \mbox{ when } j-i<1 \\ & = 0 \mbox{ when } j-i \geq 1 \,. \end{array} \end{equation} \subsection{Transformation to Normal Mode Coordinates} The invariance of the first-order Hamiltonian, Eq.~(\ref{eq:Gham}), under $S_N$ means that the $\bm{F}$, $\bm{G}$ and $\bm{FG}$ matrices used to solve for the first-order energies and normal modes are block diagonal in a basis of coordinates that transform under irreducible representations of $S_N$. When the $\bm{FG}$ matrix is transformed from the $\bar{\bm{r}}'$ and $\overline{\bm{\gamma}}'$ basis to symmetry coordinates, the full $N(N+1)/2 \times N(N+1)/2$ matrix is reduced to block diagonal form, yielding one $2 \times 2$ block for the $[N]$ sector, $N-1$ identical $2 \times 2$ blocks for the $[N-1,1]$ sector, and $N(N-3)/2$ identical $1 \times 1$ blocks for the $[N-2,2]$ sector.
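The transformation matrices above are simple to assemble for explicit $N$, and their claimed properties (orthonormal rows, with the radial blocks filling an $N \times N$ orthogonal matrix and the angular blocks an $N(N-1)/2 \times N(N-1)/2$ one) can be verified directly. A sketch, with the Heaviside symbol of Eq.~(\ref{eq:Heaviside}) written as an explicit function:

\begin{verbatim}
import numpy as np

theta = lambda a, b: 1.0 if b <= a else 0.0   # Theta_{a-b+1}
delta = lambda a, b: 1.0 if a == b else 0.0

def w_matrices(N):
    pairs = [(i, j) for j in range(2, N + 1) for i in range(1, j)]
    Wr_N  = np.full((1, N), 1 / np.sqrt(N))
    Wg_N  = np.full((1, len(pairs)), np.sqrt(2 / (N * (N - 1))))
    Wr_N1 = np.array([[(theta(x, i) - x * delta(x + 1, i))
                       / np.sqrt(x * (x + 1))
                       for i in range(1, N + 1)]
                      for x in range(1, N)])
    Wg_N1 = np.array([[(theta(x, i) + theta(x, j)
                        - x * (delta(x + 1, i) + delta(x + 1, j)))
                       / np.sqrt(x * (x + 1) * (N - 2))
                       for (i, j) in pairs]
                      for x in range(1, N)])
    rows = []
    for j in range(4, N + 1):
        for i in range(1, j - 1):
            nrm = np.sqrt(i * (i + 1) * (j - 3) * (j - 2))
            A = lambda m: theta(i, m) - i * delta(i + 1, m)
            B = lambda m: theta(j - 1, m) - (j - 3) * delta(j, m)
            rows.append([(A(m) * B(n) + A(n) * B(m)) / nrm
                         for (m, n) in pairs])
    return Wr_N, Wr_N1, Wg_N, Wg_N1, np.array(rows)

Wr_N, Wr_N1, Wg_N, Wg_N1, Wg_N2 = w_matrices(6)
Wr = np.vstack([Wr_N, Wr_N1])            # N x N
Wg = np.vstack([Wg_N, Wg_N1, Wg_N2])     # N(N-1)/2 square
print(np.allclose(Wr @ Wr.T, np.eye(len(Wr))),
      np.allclose(Wg @ Wg.T, np.eye(len(Wg))))   # True True
\end{verbatim}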
In the $[N]$ and $[N-1,1]$ sectors, the $2 \times 2$ blocks allow the $\bar{\bm{r}}'$ and $\overline{\bm{\gamma}}'$ symmetry coordinates to mix in the normal coordinates. The $1 \times 1$ structure in the $[N-2,2]$ sector reflects the fact that the $[N-2,2]$ normal modes are entirely angular, i.e. there are no $\bar{\bm{r}}'$ symmetry coordinates in this sector. We applied the $\bm{FG}$ method using these symmetry coordinates to determine the eigenvalues $\lambda_\alpha = {\bar{\omega}_\alpha}^2$, frequencies $\bar{\omega}_\alpha$, and normal modes $\bm{q}^\prime$ of the system: \begin{equation} \label{eq:lambda12pm} \lambda^\pm_\alpha = \frac{a_\alpha \pm \sqrt{b_\alpha^2 + 4\,c_\alpha }}{2} \end{equation} for the $\alpha=[N]$ and $[N-1, \hspace{1ex} 1]$ sectors, where \begin{eqnarray} \label{eq:abcalpha} a_\alpha & = & \protect[\bm{\sigma_\alpha^{FG}}\protect]_{\bar{\bm{r}}',\,\bar{\bm{r}}'} + \protect[\bm{\sigma_\alpha^{FG}}\protect]_{\overline{\bm{\gamma}}',\,\overline{\bm{\gamma}}'} \nonumber \\ b_\alpha & = & \protect[\bm{\sigma_\alpha^{FG}}\protect]_{\bar{\bm{r}}',\,\bar{\bm{r}}'} - \protect[\bm{\sigma_\alpha^{FG}}\protect]_{\overline{\bm{\gamma}}',\,\overline{\bm{\gamma}}'} \\ c_\alpha & = & \protect[\bm{\sigma_\alpha^{FG}}\protect]_{\bar{\bm{r}}',\, \overline{\bm{\gamma}}'} \, \times \protect[\bm{\sigma_\alpha^{FG}}\protect]_{\overline{\bm{\gamma}}',\,\bar{\bm{r}}'} \nonumber \,, \end{eqnarray} while $\lambda_{[N-2, \hspace{1ex} 2]} = \bm{\sigma_{[N-2, \hspace{1ex} 2]}^{FG}}$\,. The $\bm{\sigma_\alpha^{FG}}$ are the elements of the $\bm{FG}$ matrix of Eq.~(\ref{eq:FG}) expressed in the basis of symmetry coordinates. The $\bm{\sigma_\alpha^{FG}}$ for the $\alpha=[N]$ and $[N-1, \hspace{1ex} 1]$ sectors are $2 \times 2$ matrices (see Appendix B), while $\bm{\sigma_{[N-2, \hspace{1ex} 2]}^{FG}}$ is a one-component quantity. These quantities are defined generally in Ref.~\cite{paperI}, Eqs.~(28, 29, 126, 162, 163), and also specifically in Ref.~\cite{FGpaper} for three different confining and interparticle potentials. The normal coordinates are: \begin{equation} \label{eq:qnpfullexp} {\bm{q}'}_{\pm}^{[N]} = c_{\pm}^{[N]} \left( \cos{\theta^{[N]}_{\pm}} \, [{\bm{S}}_{\bar{\bm{r}}'}^{[N]}] \, + \, \sin{\theta^{[N]}_{\pm}} \, [{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N]}] \right) \end{equation} \begin{equation} \label{eq:qnpfullexp1} \begin{split} {\bm{q}'}_{\xi\pm}^{[N-1,1]} & = c_{\pm}^{[N-1,1]} \Bigl( \cos{\theta^{[N-1,1]}_{\pm}} [{\bm{S}}_{\bar{\bm{r}}'}^{[N-1,1]}]_{\xi} \\ & \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, + \sin{\theta^{[N-1,1]}_{\pm}}[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-1,1]}]_{\xi} \Bigr) \, \end{split} \end{equation} \noindent for the $\alpha=[N]$ and $[N-1, \hspace{1ex} 1]$ sectors, $1 \leq \xi \leq N-1$, and \begin{equation} \label{eq:qnm2fullexp} {\bm{q}'}^{[N-2, \hspace{1ex} 2]} = c^{[N-2, \hspace{1ex} 2]} {\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2, \hspace{1ex} 2]} \, \end{equation} for the $[N-2, \hspace{1ex} 2]$ sector.
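For either $2 \times 2$ sector this reduces to elementary operations on the $\bm{\sigma_\alpha^{FG}}$ and $\bm{\sigma_\alpha^{G}}$ blocks, using the mixing-angle and normalization expressions given below. In the sketch that follows, the block elements are illustrative numbers standing in for the Hamiltonian-derived quantities of Refs.~\cite{paperI,FGpaper}:

\begin{verbatim}
import numpy as np

sigma_FG = np.array([[4.0, 1.2],     # [r'r'  r'g']
                     [0.9, 2.5]])    # [g'r'  g'g']  (illustrative)
sigma_G  = np.array([[1.0, 0.3],
                     [0.3, 0.8]])    # (illustrative)

a = sigma_FG[0, 0] + sigma_FG[1, 1]
b = sigma_FG[0, 0] - sigma_FG[1, 1]
c = sigma_FG[0, 1] * sigma_FG[1, 0]
for s in (+1, -1):
    lam = (a + s * np.sqrt(b**2 + 4 * c)) / 2       # lambda_pm
    th = np.arctan2(lam - sigma_FG[0, 0],
                    sigma_FG[0, 1])                 # mixing angle
    v = np.array([np.cos(th), np.sin(th)])
    c_pm = 1 / np.sqrt(v @ sigma_G @ v)             # normalization
    print(s, lam, th, c_pm)
\end{verbatim}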
The $\bar{\bm{r}}'$-$\overline{\bm{\gamma}}'$ mixing angle, $\theta^\alpha_\pm$\,, is given by \begin{equation} \label{eq:tanthetaalphapm} \tan{\theta^\alpha_\pm} = \frac{(\lambda^\pm_\alpha - \protect[\bm{\sigma_\alpha^{FG}}\protect]_{\bar{\bm{r}}',\,\bar{\bm{r}}'})} {\protect[\bm{\sigma_\alpha^{FG}}\protect]_{\bar{\bm{r}}',\, \overline{\bm{\gamma}}'}} = \frac{\protect[\bm{\sigma_\alpha^{FG}}\protect]_{\overline{\bm{\gamma}}',\,\bar{\bm{r}}'}}{(\lambda^\pm_\alpha - \protect[\bm{\sigma_\alpha^{FG}}\protect]_{\overline{\bm{\gamma}}',\,\overline{\bm{\gamma}}'})}\,, \end{equation} while the normalization constants $c^{[\alpha]}$ are given by \begin{equation} \label{eq:calphapm} \begin{array}{l} c_\pm^{[\alpha]} = {\displaystyle \frac{1}{\sqrt{\left( \begin{array}{c} \cos{\theta^{[\alpha]}_\pm} \\ \sin{\theta^{[\alpha]}_\pm} \end{array} \right)^T \bm{\sigma_{\alpha}^{G}} \left( \begin{array}{c} \cos{\theta^{[\alpha]}_\pm} \\ \sin{\theta^{[\alpha]}_\pm} \end{array} \right)}}} \end{array} \end{equation} \begin{equation} \label{eq:calpha2} {\displaystyle c^{[N-2, \hspace{1ex} 2]} = \frac{1}{\sqrt{\bm{\sigma_{[N-2, \hspace{1ex} 2]}^G}}} } \,\,. \end{equation} The $\bm{\sigma_{\alpha}^{G}}$ are related to the elements of the $\bm{G}$ matrix of Eq.~(\ref{eq:Gham}). One determines the $\bar{\bm{r}}'$-$\overline{\bm{\gamma}}'$ mixing angles $\theta^{[\alpha]}_\pm$ for the $[N]$ and $[N-1,1]$ species from Eq.~(\ref{eq:tanthetaalphapm}). The normalization constants $c^{[\alpha]}$ of Eqs.~(\ref{eq:qnpfullexp}) and (\ref{eq:qnm2fullexp}) are determined from Eqs.~(\ref{eq:tanthetaalphapm}), (\ref{eq:calphapm}) and (\ref{eq:calpha2}). The normal mode vector, ${\bm{q}'}$\,, is then determined through Eqs.~(\ref{eq:qnpfullexp}) and (\ref{eq:qnm2fullexp}). The analytic normal coordinates for $N$ identical particles are: \begin{widetext} \begin{equation} \begin{split} q_\pm^{\prime \, [N] } & = c^{[N]}_\pm \cos{\theta^{[N]}_\pm} \frac{1}{\sqrt{N}} \, \sum_{i'=1}^N \overline{r}'_{i'} + c^{[N]}_\pm \sin{\theta^{[N]}_\pm} \sqrt{\frac{2}{N(N-1)}} \,\,\, \sum_{j'=2}^N \sum_{i' < j'} \overline{\gamma}'_{i'j'}, \\ [{\bm{q}'}_\pm^{[N-1, \hspace{1ex} 1]}]_\xi & = c^{[N-1, \hspace{1ex} 1]}_\pm \cos{\theta^{[N-1, \hspace{1ex} 1]}_\pm} \frac{1}{\sqrt{\xi(\xi+1)}} \left( \sum_{k'=1}^\xi \bar{r}'_{k'} - \xi\bar{r}'_{\xi+1} \right) + c^{[N-1, \hspace{1ex} 1]}_\pm \sin{\theta^{[N-1, \hspace{1ex} 1]}_\pm} \frac{1}{\sqrt{\xi(\xi+1)(N-2)}} \\ & \left( \left[ \sum_{l' = 2}^\xi \, \sum_{k'=1}^{l'-1} \hspace{-1ex} \overline{\gamma}'_{k'l'} + \sum_{k' = 1}^\xi \, \sum_{l'=k'+1}^{N} \hspace{-1ex} \overline{\gamma}'_{k'l'} \right] -\xi \left[ \sum_{k' = 1}^\xi \, \overline{\gamma}'_{k', \xi+1} + \sum_{l'=\xi+2}^N \,\hspace{-1ex} \overline{\gamma}'_{\xi+1,l'} \right] \right) \\ \mbox{where}\,\,\, 1 \leq \xi \leq N-1, \\ [{\bm{q}'}^{[N-2, \hspace{1ex} 2]}]_{ij} & = c^{[N-2, \hspace{1ex} 2]} \frac{1}{\sqrt{i(i+1)(j-3)(j-2)}} \, \Bigl( \vphantom{\sum_{k=1}^{[j'-1, i]_{min}} \hspace{-2ex} \overline{\gamma}'_{kj'}} \sum_{j'=2}^{j-1} \sum_{k=1}^{[j'-1, i]_{min}} \hspace{-2ex} \overline{\gamma}'_{kj'} + \sum_{k=1}^{i-1} \sum_{j'=k+1}^i \overline{\gamma}'_{kj'} - (j-3) \sum_{k=1}^i \overline{\gamma}'_{kj} \\ & - i \sum_{k=1}^{i} {\overline{\gamma}}'_{k,(i+1)} - i \sum_{j'=i+2}^{j-1} {\overline{\gamma}}'_{(i+1),j'} + i (j-3) {\overline{\gamma}}'_{(i+1),j} \Bigr) \,, \end{split} \end{equation} \end{widetext} \noindent where $1 \leq i \leq j-2$ and $4 \leq j \leq N$\,. \section{Motions Associated with the Symmetry Coordinates}
\label{sec:symmetrymotions} In this section, I analyze the motions of the five types of symmetry coordinates as expressed in Eqs.~(\ref{eq:WNeqsqrtN1}), (\ref{eq:SNm0}), and (\ref{eq:SNm1})--(\ref{eq:SNm3}). For symmetry coordinates, there is no mixing of radial and angular motion, so the motion is either totally radial or totally angular. The symmetry coordinates transform irreducibly under $S_N$ and result in a block diagonal form for the $\bm{FG}$ matrix. When these blocks are diagonalized we obtain the normal coordinates, which are the solutions to the first-order equation. For the $[N]$ and $[N-1,1]$ sectors, which are found in both the radial and angular decompositions, there is mixing of these radial and angular symmetry coordinates in the normal modes. For the $[N-2,2]$ sector, there is no radial part, only an angular part, so no mixing; the symmetry coordinates are the normal coordinates apart from a normalization constant. The symmetry coordinates describe motion that is collective, with the particles participating in synchronized motion, i.e. moving with the same frequency and phase. Since the symmetry coordinates, except for the $[N-2,2]$ modes, are not solutions to the Hamiltonian at first order, they do not necessarily exhibit the motion of particles governed by the Hamiltonian. Their motions could be mixed significantly with another symmetry coordinate of the same species to form a normal coordinate, a solution to the Hamiltonian at first order. I will analyze the motion of the symmetry coordinates first and then use the knowledge of these motions to understand the normal coordinate behavior. I am interested in the motion of individual particles as they participate in the collective synchronized motion of these symmetry coordinates. To determine the motion of an individual particle, I need to back transform from the known functional form of a particular symmetry coordinate to the scaled internal displacement coordinates, ${\bar{r}'_i}$ and ${\overline{\gamma}'_{ij}}$, and then transform from the scaled to the unscaled displacement coordinates to be able to visualize these displacements. Using Eq.~(\ref{eq:trial}), I can obtain the dimensionally scaled $\bar{\bm{r}}'$ and $\overline{\bm{\gamma}}'$ vectors by back transforming with the transpose of the $W$ matrices. These dimensionally scaled variables can be transformed to the unscaled internal displacement coordinates using $\bar{r}_{i} = \bar{r}_{\infty}+\delta^{1/2}\bar{r}'_{i}$, $\gamma_{ij} = \overline{\gamma}_{\infty}+\delta^{1/2}\overline{\gamma}'_{ij}$ and $r_i= \kappa(D) \bar{r}_i$. The unscaled internal coordinates, $r_i$ and $\gamma_{ij}$, allow one to determine the radial distance from the confinement center and the interparticle angle of each pair of particles using $\gamma_{ij}=\cos \theta_{ij}$ and $\overline{\gamma}_{\infty}=\cos \theta_{\infty}$, so $\theta_{ij}=\arccos{\gamma_{ij}}$ and $\theta_{\infty} = \arccos{\overline{\gamma}_{\infty}}$. Then $r_i - r_{\infty}$ and $\theta_{ij}-\theta_{\infty}$ give displacements from the maximally symmetric zeroth-order configuration ($r_\infty$, $\gamma_\infty$) that are easy to visualize, connecting to our physical intuition and thus contributing to our understanding of how the motion of $N$ particles becomes collective. \paragraph{Motions Associated with Symmetry Coordinate ${\bm{S}}_{\bar{\bm{r}}'}^{[N]}$.} The simplest collective motion for a system of identical particles occurs when every particle executes the same motion with the same phase.
This type of collective motion occurs for the symmetry coordinates of the $[N]$ modes, both the radial symmetry coordinate ${\bm{S}}_{\bar{\bm{r}}'}^{[N]}$ and the angular symmetry coordinate ${\bm{S}}_{\overline{\bm{\gamma}}'}^{[N]}$. There is just one symmetry coordinate in each $[N]$ sector. I am interested in the unscaled displacement quantity ${r'}_i = \kappa(D) {\bar{r}'}_i = D^2 {\bar{a}}_{ho} {\bar{r}'}_i$, using the scale factor $\kappa(D) = D^2 \bar{a}_{ho}$ appropriate to harmonic confinement. For the $[N]$ symmetry coordinate ${\bm{S}}_{\bar{\bm{r}}'}^{[N]}$, ${\bar{r}'}_i$ is obtained by back transforming with $(W_{\bar{\bm{r}}'}^{[N]})^T$. Using Eqs.~(\ref{eq:trial}) and (\ref{eq:WNeqsqrtN1}), the motions associated with symmetry coordinate ${\bm{S}}_{\bar{\bm{r}}'}^{[N]}$ in the unscaled internal displacement coordinates ${\bm{r}'}$ about the unscaled zeroth-order configuration $r_\infty$ are given by: \begin{eqnarray} {\bm{r}'}^{[N]}&=& \overline{a}_{ho} \, D^{2} \, \bar{\bm{r}}^{\prime [N]} = \overline{a}_{ho} \, D^{2} \, [(W_{\bar{\bm{r}}'}^{[N]})]^T\, {\bm{S}}_{\bar{\bm{r}}'}^{[N]} \, \\ &=& \overline{a}_{ho} \, \frac{D^2}{\sqrt{N}} {\bm{S}}_{\bar{\bm{r}}'}^{[N]} \, {\bm{1}}_{\bar{\bm{r}}'} \,.\label{eq:rN} \end{eqnarray} The vector ${\bm{r}'}^{[N]}$ is an $N \times 1$ vector of the unscaled radial displacement coordinates for all the particles participating in this collective motion. The motions of all the particles are thus identical in this symmetry coordinate ${\bm{S}}_{\bar{\bm{r}}'}^{[N]}$, each particle moving radially out and then back in from its position in the zeroth-order configuration. This results in a symmetric stretch collective motion, where all the radii expand and contract together, with amplitudes that decrease as $N$ increases. For $N=3$, a good molecular comparison is the stretching $A_1$ mode of ammonia. \smallskip \paragraph{Motions Associated with Symmetry Coordinate ${\bm{S}}_{\overline{\bm{\gamma}}'}^{[N]}$\,.} Using Eqs.~(\ref{eq:trial}) and (\ref{eq:WNgeqsqrt2ontnm11}), the motions associated with symmetry coordinate ${\bm{S}}_{\overline{\bm{\gamma}}'}^{[N]}$ in the unscaled internal displacement coordinates ${\bm{\gamma}'}$ about the zeroth-order configuration ${\bm{\gamma}}_\infty$ are given by \begin{eqnarray} {\bm{\gamma}'}^{[N]}&=& [(W_{\overline{\bm{\gamma}}'}^{[N]})]^T \, {\bm{S}}_{\overline{\bm{\gamma}}'}^{[N]} \,\\ &=& \sqrt{\frac{2}{N(N-1)\, }} \, {\bm{S}}_{\overline{\bm{\gamma}}'}^{[N]} \, {\bm{1}}_{\overline{\bm{\gamma}}'} \,. \label{eq:SgammaN} \end{eqnarray} This vector ${\bm{\gamma}'}^{[N]}$ is an $N(N-1)/2 \times 1$ vector of the displacement contributions to the angle cosines for all the particles participating in this collective motion. The motions of all the particles are thus identical in this symmetry coordinate ${\bm{S}}_{\overline{\bm{\gamma}}'}^{[N]}$, each pair of particles undergoing an identical angular motion away from the interparticle angle of the zeroth-order configuration. This results in a symmetric bend collective motion, where all of the interparticle angles expand and contract together with the radii unchanged. For $N=3$, a good molecular comparison is the bending $A_1$ mode of ammonia. As $N$ increases this symmetric bending motion evolves into a center of mass motion with small angular displacements for every interparticle angle while the radii remain fixed. (See Section~\ref{subsec:general} for more detail.)
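The back transformation described above is equally simple in code. A sketch for the two $[N]$ symmetry coordinates, with illustrative values assumed for $D$, $\bar{a}_{ho}$, $\overline{\gamma}_\infty$ and the coordinate amplitudes:

\begin{verbatim}
import numpy as np

N, D, a_ho, gamma_inf = 10, 50, 1.0, 0.0   # illustrative values
S_r, S_g = 0.05, 0.05                      # [N] coordinate amplitudes

W_r = np.full((1, N), 1 / np.sqrt(N))
r_prime = a_ho * D**2 * (W_r.T @ np.array([S_r]))
print(r_prime)      # identical radial displacement for every
                    # particle: the symmetric-stretch motion

P2 = N * (N - 1) // 2
W_g = np.full((1, P2), np.sqrt(2 / (N * (N - 1))))
gamma_prime = W_g.T @ np.array([S_g])
theta_ij = np.arccos(gamma_inf + np.sqrt(1 / D) * gamma_prime)
print(np.degrees(theta_ij - np.arccos(gamma_inf))[:3])
# identical shift of every interparticle angle: the symmetric bend
\end{verbatim}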
\paragraph{Motions Associated with Symmetry Coordinates $[{\bm{S}}_{\bar{\bm{r}}'}^{[N-1, \hspace{1ex} 1]}]_\xi$\,.} Using Eqs.~(\ref{eq:trial}), (\ref{eq:WNm1r}), and (\ref{eq:Heaviside}), the motions associated with symmetry coordinates $[{\bm{S}}_{\bar{\bm{r}}'}^{[N-1,\hspace{1ex} 1]}]_{\xi}$ in the unscaled internal displacement coordinates ${\bm{r}'}$ about the unscaled zeroth-order configuration $\bm{r}_\infty$ are given by \begin{equation} \label{eq:rNm1inr} \begin{array}{r@{\hspace{0.5em}}c@{\hspace{0.5em}}l} (r^{\prime [N-1, \hspace{1ex} 1]}_\xi)_i & = & \overline{a}_{ho} \, D^{2} \, ({\bar{r}}^{\prime [N-1, \hspace{1ex} 1]}_{\xi})_i \\ &=& \overline{a}_{ho} \, D^{2} \, [{\bm{S}}_{\bar{\bm{r}}'}^{[N-1, \hspace{1ex} 1]}]_\xi \, [(W_{\bar{\bm{r}}'}^{[N-1, \hspace{1ex} 1]})_{\xi}]_i \\ [1.5em] &=& {\displaystyle \overline{a}_{ho} \, \frac{D^2}{\sqrt{\xi(\xi+1)}} [{\bm{S}}_{\bar{\bm{r}}'}^{[N-1, \hspace{1ex} 1]}]_{\xi}} \, \\ &&\times \left( \Theta_{\xi-i+1} - \xi \delta_{\xi+1,\, i} \right) \,. \end{array} \end{equation} The above equation gives the radial motion of the $i^{th}$ particle participating in the collective motion of the ${\xi}^{th}$ symmetry coordinate in the radial $[N-1,1]$ sector. In this sector there are $N-1$ radial symmetry coordinates, i.e. $1 \leq \xi \leq N-1$, and the ${\xi}^{th}$ symmetry coordinate involves the motion of the first $\xi +1$ particles. (If $i>\xi+1$, the Heaviside and Kronecker delta functions in Eq.~(\ref{eq:rNm1inr}) are zero.) Thus the motion associated with symmetry coordinate $[{\bm{S}}_{\bar{\bm{r}}'}^{[N-1, \hspace{1ex} 1]}]_1$ is an antisymmetric stretch about the zeroth-order configuration involving particles $1$ and $2$\,. As $\xi$ gets larger, the motion involves more particles, $\xi+1$ particles, with the first $\xi$ particles moving one way while the $(\xi+1)^{\rm th}$ particle moves the other way. As $\xi$ increases, the character of the motion evolves from an antisymmetric stretch motion (a good molecular equivalent is an $E$ mode of ammonia) to behavior that becomes more single-particle-like, i.e. a particle-hole excitation associated with radial motion, since the $(\xi+1)^{\rm th}$ radius vector in Eq.~(\ref{eq:rNm1inr}) is weighted by the quantity $\xi$\,. (I examine this dependence on the number of particles as $N$ increases in more detail in Section~\ref{subsec:general}.) \paragraph{Motions Associated with Symmetry Coordinates $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-1, \hspace{1ex} 1]}]_\xi$\,.} Using Eqs.~(\ref{eq:trial}) and (\ref{eq:SNm2}), the motions associated with symmetry coordinates $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-1, \hspace{1ex} 1]}]_\xi$ in the unscaled internal displacement coordinates $\bm{\gamma}'$ about the unscaled zeroth-order configuration $\bm{\gamma}_\infty = \gamma_\infty{\bm{1}}_{\overline{\bm{\gamma}}'}$ are given by: \begin{widetext} \begin{equation} \label{eq:gNm1ing} \begin{split} (\gamma^{\prime [N-1, \hspace{1ex} 1]}_\xi)_{ij} & = [{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-1,\hspace{1ex} 1]}]_{\xi} \, [(W_{\overline{\bm{\gamma}}'}^{[N-1, \hspace{1ex} 1]})_\xi]_{ij} = \frac{1}{\sqrt{\xi(\xi+1)(N-2)}} \, [{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-1,\hspace{1ex} 1]}]_\xi \,\\ & \times \bigg( \big( \Theta_{\xi-i+1} \, [{\bm{1}}_{\overline{\bm{\gamma}}'}]_{ij} + \Theta_{\xi-j+1} \, [{\bm{1}}_{\overline{\bm{\gamma}}'}]_{ij} \big) - \xi \big( \delta_{\xi+1,\, i} \, [{\bm{1}}_{\overline{\bm{\gamma}}'}]_{ij} + \delta_{\xi+1,\, j} \, [{\bm{1}}_{\overline{\bm{\gamma}}'}]_{ij} \big) \bigg) \,.
\end{split} \end{equation} \end{widetext} The above equation gives the angular displacement of the ${i}^{th}$ and ${j}^{th}$ particles participating in the collective motion of the ${\xi}^{th}$ symmetry coordinate in the angular $[N-1,1]$ sector. The ${\xi}^{th}$ symmetry coordinate in this angular $[N-1,1]$ sector involves the angular motion of the first $\xi+1$ particles, thus affecting any angular displacement ${\gamma}_{ij}$ where $1 \leq i \leq \xi+1$ or $1 \leq j \leq \xi+1$. All other angular displacements are zero. (When $i,j > \xi+1$, the Heaviside and Kronecker delta functions are zero; for $\xi = 1$, the $\gamma_{12}$ displacement is zero by cancellation.) Thus the motion associated with symmetry coordinate $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-1, \hspace{1ex} 1]}]_1$ is an antisymmetric bending about the zeroth-order configuration where the angle cosines $\gamma'_{13}$, $\gamma'_{14}$, $\gamma'_{15}$, $\ldots$ increase, $\gamma'_{23}$, $\gamma'_{24}$, $\gamma'_{25}$, $\ldots$ decrease, while $\gamma'_{12}$, $\gamma'_{34}$, $\gamma'_{35}$, $\gamma'_{45}$, $\ldots$ remain unchanged. Thus, analogously to the $\bar{\bm{r}}'$ sector of the $[N-1, \hspace{1ex} 1]$ species, $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-1, \hspace{1ex} 1]}]_1$ involves the motions of particles $1$ and $2$ moving with opposite phase to each other. As $\xi$ gets larger, the angular motion involves more particles, $\xi+1$ particles, with the $(\xi+1)^{\rm th}$ particle moving with opposite phase to the first $\xi$ particles. Analogously to the radial sector of the $[N-1, \hspace{1ex} 1]$ species, as $\xi$ increases the motion evolves from an antisymmetric bending motion (cf. an $E$ mode of ammonia) to behavior that becomes more single-particle-like, i.e. a particle-hole excitation due to angular displacement, since the angle cosines involving the $(\xi+1)^{\rm th}$ particle in Eq.~(\ref{eq:gNm1ing}) are weighted by the quantity $\xi$\,. \smallskip \paragraph{Motions Associated with Symmetry Coordinate $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2, \hspace{1ex} 2]}]_{ij}$\,.} Using Eqs.~(\ref{eq:trial}) and (\ref{eq:SNm3}), the motions of the unscaled internal displacement coordinates $\bm{\gamma}'$ about the unscaled zeroth-order configuration $\bm{\gamma}_\infty = \gamma_\infty{\bm{1}}_{\overline{\bm{\gamma}}'}$ associated with the symmetry coordinates $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2, \hspace{1ex} 2]}]_{ij}$ are given by: \begin{widetext} \begin{equation}\label{eq:nm2g} \begin{split} (\gamma^{\prime [N-2, \hspace{1ex} 2]}_{ij})_{mn} = & [{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2, \hspace{1ex} 2]}]_{ij} \, [W_{\overline{\bm{\gamma}}'}^{[N-2, \hspace{1ex} 2]}]_{ij,\,mn} = \frac{1}{\sqrt{i(i+1)(j-3)(j-2)}} \, \, \, [{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2, \hspace{1ex} 2]}]_{ij} \\ & \times \Bigl( (\Theta_{i-m+1} - i \delta_{i+1,\,m})(\Theta_{j-n} -(j-3)\delta_{jn}) + (\Theta_{i-n+1} - i \delta_{i+1,\,n})(\Theta_{j-m} -(j-3)\delta_{jm}) \Bigr) \,, \end{split} \end{equation} \end{widetext} where $1 \leq m < n \leq N$, and $1 \leq i \leq j-2$ and $4 \leq j \leq N$\,. The above equation gives the angular displacement (contribution to the angle cosine) of the ${m}^{th}$ and ${n}^{th}$ particles participating in the collective motion of the ${ij}^{th}$ symmetry coordinate in the $[N-2,2]$ sector.
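The evolution from antisymmetric stretch (or bend) to particle-hole character as $\xi$ grows, discussed further in Section~\ref{subsec:general}, is easy to see by tabulating the unnormalized radial pattern $\Theta_{\xi-i+1} - \xi\,\delta_{\xi+1,\,i}$ of Eq.~(\ref{eq:rNm1inr}). A short illustrative sketch:

\begin{verbatim}
import numpy as np

N = 10
for xi in (1, 4, N - 1):
    pattern = np.array([1.0 if i <= xi else 0.0
                        for i in range(1, N + 1)])
    pattern[xi] = -xi          # the (xi+1)-st particle
    pattern /= np.sqrt(xi * (xi + 1))
    print(xi, pattern)
# xi = 1: particles 1 and 2 move with opposite phase;
# xi = 9: particle 10 carries a displacement 9 times larger than
# the rest -- the particle-hole character discussed in the text.
\end{verbatim}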
There are $N(N-3)/2$ symmetry coordinates $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2, \hspace{1ex} 2]}]_{ij}$ in this sector, labeled by $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2, \hspace{1ex} 2]}]_{14}$, $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2, \hspace{1ex} 2]}]_{24}$, $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2, \hspace{1ex} 2]}]_{15}$, $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2, \hspace{1ex} 2]}]_{25}$, $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2, \hspace{1ex} 2]}]_{35}$, $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2,\hspace{1ex} 2]}]_{16}$, $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2,\hspace{1ex} 2]}]_{26}$, $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2,\hspace{1ex} 2]}]_{36}$, $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2,\hspace{1ex} 2]}]_{46}$, $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2,\hspace{1ex} 2]}]_{17}$, $\ldots$ $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2,\hspace{1ex} 2]}]_{N-2,N}$. From Eq.~(\ref{eq:nm2g}), the symmetry coordinate $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2,\hspace{1ex} 2]}]_{ij}$ only involves motions of the first $j$ particles (note that $i \leq j-2$, and that for particle labels $m,n > j$ the Heaviside and Kronecker delta functions are zero). Thus, the complexity of the functional form of $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2, \hspace{1ex} 2]}]_{ij}$ and the motions it describes builds up slowly and systematically as more particles are added to the system. The symmetry coordinate $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2, \hspace{1ex} 2]}]_{14}$ involves the motion of only the first four particles, with the simultaneous opening of two interparticle angles, $\theta_{13}$ and $\theta_{24}$, and the closing of two different interparticle angles, $\theta_{14}$ and $\theta_{23}$ (cf. the $E$ mode of methane; note that there are no $[N-2, \hspace{1ex} 2]$ modes when $N$ drops below four). For this lowest value of $N$, the positive and negative angular displacements have equal values; however, as $N$ increases, this collective motion quickly evolves to create a compressional motion with a single dominant angle opening and closing while the other $N(N-1)/2-1$ interparticle angles make small adjustments. (See Section~\ref{subsec:general} for a more detailed discussion.) \section{Motions Associated with the Normal Modes} \label{normalmodemotions} From Eqs.~(\ref{eq:qnpfullexp}) and (\ref{eq:qnpfullexp1}), shown again below, the symmetry coordinates in the $[N]$ and $[N-1,1]$ sectors are mixed to form a normal coordinate: \begin{equation} \label{eq:mixing} \begin{split} {\bm{q}'}_{\pm}^{[N]} & = c_\pm^{[N]} \left( \cos{\theta^{[N]}_\pm} \, [{\bm{S}}_{\bar{\bm{r}}'}^{[N]}] \, + \, \sin{\theta^{[N]}_\pm} \, [{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N]}] \right) \\ {\bm{q}'}_{\xi\pm}^{[N-1,1]} & = c_{\pm}^{[N-1,1]} \Bigl( \cos{\theta^{[N-1,1]}_{\pm}}[{\bm{S}}_{\bar{\bm{r}}'}^{[N-1,1]}]_{\xi} \\ & \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, + \sin{\theta^{[N-1,1]}_{\pm}} [{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-1,1]}]_{\xi} \Bigr) \, \end{split} \end{equation} \noindent where $1 \leq \xi \leq N-1$. Thus, depending on the value of the mixing angles, the normal modes, which are the solutions at first order of the Schr\"odinger equation, will be a mixture of radial and angular behavior for the $[N]$ and $[N-1,1]$ sectors. The $[N-2,2]$ sector has only angular behavior, as noted above, and so does not mix with other symmetry coordinates. The value of the mixing angles, of course, depends on the Hamiltonian terms at this first perturbation order.
Choosing various confining and interparticle potentials will result in different values for the mixing coefficients and different collective motion as dictated by the Hamiltonian. \section{Collective behavior as a function of $N$} \label{sec:collective} In this section, I consider the evolution of the behavior of the symmetry coordinates and then the normal coordinates as a function of the number of particles, $N$. For low values of $N$ it is possible to find molecular equivalents to characterize the behavior of the symmetry coordinates. (These comparisons are of course only approximate since the molecular systems have Hamiltonians with the hetero atom providing Coulombic confinement.) For the $[N]$ radial and angular modes, the $A_1$ modes of ammonia, $NH_3$, with their symmetric breathing and bending motions, are good equivalents. The radial and angular $[N-1,1]$ modes are similar to the two $E$ ammonia modes for small $N$. These motions show one of the hydrogen atoms moving out of sync with the other two hydrogens, and they are thus described as asymmetric stretching and asymmetric bending modes. The $[N-2,2]$ modes require at least four atoms in addition to the central atom of the molecule. Methane, $CH_4$, has a purely angular $E$ mode in which the bonds to the hydrogen atoms undergo bending motions that alternately open and close interparticle angles. These motions can be viewed at the following links: www.chemtube3d.com/vibrationsnh3/ and www.chemtube3d.com/vibrationsch4/. As $N$ increases, these motions evolve in several ways. There are in fact three ways that an increase in $N$ can affect the character of the normal modes. \smallskip A) As noted above in Section~\ref{sec:symmetrymotions}, the analytic forms of the particle motion have explicit $N$ or $\xi$ ($1 \leq \xi \leq N-1$) or $i,j$ dependence ($1 \leq i \leq j-2$, $4 \leq j \leq N$) (see Eqs.~(\ref{eq:rN}), (\ref{eq:SgammaN})--(\ref{eq:nm2g})) that can affect the character of the motion of particles contributing to a particular symmetry coordinate and thus to the normal coordinates. As more particles are added to the system, the behavior of these larger systems can become qualitatively quite different from few-particle systems with low values of $N$, e.g. $3 \leq N \leq 6$. \smallskip B) The amount of mixing of the radial and angular symmetry coordinates in the $[N]$ and $[N-1,1]$ sectors (i.e. the values of $\cos{\theta^{[N]}_\pm},\, \sin{\theta^{[N]}_\pm}, \, \cos{\theta^{[N-1,1]}_{\pm}}$ and $\sin{\theta^{[N-1,1]}_{\pm}}$ in Eq.~(\ref{eq:mixing})) can evolve as $N$ increases. \smallskip C) Finally, the frequency of vibration of the normal modes can change as a function of $N$. \smallskip Although general observations can be made, all three of these effects ultimately depend on the particular Hamiltonian of the system. As a specific example, I will look at these effects using a recently studied Hamiltonian for a system of identical fermions in the unitary regime. \subsection{Explicit $N$ dependence} \label{subsec:general} In the analytic expressions for the particle motions contributing to the symmetry coordinates, Eqs.~(\ref{eq:rN}), (\ref{eq:SgammaN})--(\ref{eq:nm2g}), there is some explicit $N$ dependence that affects the behavior as $N$ increases. \medskip \subparagraph{The [N] sector.} For the symmetric stretch motion in the $[N]$ sector, the character of this breathing motion remains the same as $N$ increases, with the radial displacements decreasing in amplitude.
The particles move along their individual radii out and then in toward the center of the trap, keeping their interparticle angles constant as they oscillate about the zeroth-order configuration. For the angular motion in the $[N]$ sector, the symmetric bend motion of the interparticle angles for small $N$ (cf. the $A_1$ mode of ammonia) evolves into a center of mass motion as $N$ increases. The particles undergo identical angular displacements that decrease in size as $N$ increases, resulting in motion where the whole ensemble ``jiggles'' in response to an excitation of the center of mass mode. These jiggles are caused by the particles moving past the trap center and then back, keeping their radii constant while changing their interparticle angles compared to the zeroth-order configuration. As expected, the frequency of this mode, as shown in Section~\ref{sec:Nfreq}, cleanly separates out for all values of $N$ at exactly twice the trap frequency (the atoms move past the trap center twice in one cycle), reflecting the fact that the center of mass motion is independent of particle interactions. \medskip For the $[N-1,1]$ and $[N-2,2]$ sectors, the analytic forms of the $(r^{\prime [N-1, \hspace{1ex} 1]}_\xi)_i$, $(\gamma^{\prime [N-1, \hspace{1ex}1]}_{\xi})_{ij}$ and $(\gamma^{\prime [N-2, \hspace{1ex} 2]}_{ij})_{mn}$ contain parameters associated with $N$ that change the character of the motion as $N$ changes. \medskip \subparagraph{The [N-1,1] sector.} As discussed above, in the $[N-1,1]$ sector, the appearance of $\xi$ in the last terms of Eqs.~(\ref{eq:rNm1inr}) and (\ref{eq:gNm1ing}) weights the response of the $(\xi+1)^{\rm st}$ particle. For small values of $N$ and thus small $\xi$ ($1 \leq \xi \leq N-1$), the motion of this last $(\xi+1)^{\rm st}$ particle, which moves in the opposite direction to the first $\xi$ particles, appears as a simple asymmetric stretch or bend. As $N$ becomes large, and thus $\xi$ can also approach large values, this motion acquires such a large displacement compared to the remaining particles that it is more appropriately characterized as a particle-hole excitation due to single particle radial or angular motion away from the other particles ($\xi$ in number) participating in this collective motion. This change in character from asymmetric stretch (or bend) to single particle radial (or angular) excitation happens quite quickly as $N$ and $\xi$ increase; e.g. for $N = 10$ and $\xi=N-1=9$, the motion shows obvious single particle behavior, with the $(\xi+1)^{\rm st}=10^{\rm th}$ particle moving with a radial displacement nine times larger than that of the other $\xi$ particles and in the opposite direction. Note that the displacements of all $\xi+1$ particles sum to zero (one particle with a displacement of $\xi$ balances $\xi$ particles each with a displacement of $1$), reflecting the fact that this motion is a rearrangement of the particles within the ensemble creating a hole, not the loss of a particle due to radial or angular motion. \medskip \subparagraph{The [N-2,2] sector.} In the $[N-2,2]$ sector, which has totally angular behavior, a similar evolution of character is observed as $N$ increases. Consider the symmetry coordinate $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2,2]}]_{ij}$, where $1 \leq i \leq j-2$ and $4 \leq j \leq N$, and let $i$ and $j$ assume their highest values, $i=N-2$ and $j=N$, so all $N$ particles will be involved in the motion of this symmetry coordinate $[S_{\overline{\gamma}'}^{[N-2,2]}]_{N-2,N}$.
Examining each of the $N(N-1)/2$ angular displacements, $(\gamma_{ij}^{\prime [N-2,2]})_{mn}$, for particles $m,n$ ($1 \leq m < n \leq N$) that contribute to the corresponding total angle cosines, $(\gamma_{ij}^{[N-2,2]})_{mn}=\gamma_{\infty}+\delta^{\frac{1}{2}} (\gamma_{ij}^{\prime [N-2,2]})_{mn}$, it is clear from Eq.~(\ref{eq:nm2g}) that there are three different magnitudes of $\gamma'$ appearing for three types of interparticle angles. The displacement $(\gamma_{ij}^{\prime [N-2,2]})_{mn}$ is the first-order correction in $\delta = 1/D$ to the total angle cosine, allowing a determination of the value of the interparticle angle, $(\gamma_{ij}^{[N-2,2]})_{mn} = \cos \theta_{mn}$. Then $\theta_{mn} - \theta_{\infty}$ gives the displacement. To determine $\theta_{\infty} = \arccos \gamma_{\infty}$, and thus the displacement, a specific Hamiltonian must be chosen. \smallskip \paragraph{The dominant interparticle angle.} When the $m^{th}$ and $n^{th}$ particles assume the values $m=i+1=N-1$ and $n=j=N$, the weighting factors $i\delta_{i+1,m}$ and $(j-3) \delta_{jn}$ in Eq.~(\ref{eq:nm2g}) multiply together, producing a large positive factor of $i(j-3)=(N-2)(N-3)$ in the displacement contribution to the angle cosines, $(\gamma_{ij}^{\prime [N-2,2]})_{mn} =(\gamma_{N-2,N}^{\prime [N-2,2]})_{N-1,N}$. \smallskip \paragraph{Nearest neighbor interparticle angles.} For $(\gamma_{ij}^{\prime [N-2,2]})_{mn}$ which have either $m=i+1=N-1$ {\bf or} $n=j=N$ (but not both), one or the other weighting factor in Eq.~(\ref{eq:nm2g}) contributes, resulting in a negative factor of $-(N-3)$ in the expression for the displacement. These $(\gamma_{ij}^{\prime [N-2,2]})_{mn}$ are associated with angles $\theta_{mn}$ that are nearest neighbor interparticle angles to the dominant angle $\theta_{N-1,N}$ discussed in paragraph {\it a.} above. Thus these interparticle angles are responding to the motion of this dominant angle. If the dominant angle is opening, these neighboring angles are compressing and vice versa. These angles are: $\gamma'_{1,N}, \gamma'_{2,N}, \gamma'_{3,N}, \gamma'_{4,N}, \ldots, \gamma'_{N-2,N}$ and $\gamma'_{1,N-1}, \gamma'_{2,N-1}, \gamma'_{3,N-1}, \gamma'_{4,N-1}, \ldots, \gamma'_{N-2,N-1}$, numbering $2(N-2)$. (I have dropped the indices $i,j$ referencing the particular symmetry coordinate.) \smallskip \paragraph{Third type of interparticle angle.} This leaves $N(N-1)/2-2(N-2)-1$ interparticle angles, which have a third type of displacement for $(\gamma_{ij}^{\prime [N-2,2]})_{mn}$ that includes only a small factor of $+2$, since neither weighting factor in Eq.~(\ref{eq:nm2g}) contributes. All the displacements for the particles $m,n$ contributing to the motion of the $ij^{th}$ symmetry coordinate include an identical factor of $[S_{\overline{\gamma}'}^{[N-2,2]}]_{i,j}$ as well as a normalization factor of $\frac{1}{\sqrt{i(i+1)(j-3)(j-2)}}$. Note also that all the displacements sum to zero: $1 \times (N-2)(N-3)-2(N-2)\times (N-3) + (N(N-1)/2 -2(N-2)-1)\times 2 \equiv 0$, reflecting the fact that the particles are simply rearranging their positions within the confined angular space they occupy. For very low values of $N$, these expressions yield behavior that is qualitatively analogous to the normal mode behavior seen in few-body molecular systems like ammonia or methane. For example, for $N = 4$, there is one dominant interparticle angle, four nearest neighbor interparticle angles, $2(N-2) = 4$, and a single interparticle angle of the third type, $N(N-1)/2-2(N-2)-1=1$.
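The bookkeeping above is easily verified symbolically: the three angle types exhaust the $N(N-1)/2$ interparticle angles, and their weighted displacements sum to zero:

\begin{verbatim}
from sympy import symbols, expand

N = symbols('N')
dominant, neighbors = 1, 2*(N - 2)
third = N*(N - 1)/2 - 2*(N - 2) - 1
print(expand(dominant + neighbors + third - N*(N - 1)/2))   # 0
print(expand((N - 2)*(N - 3)*dominant
             - (N - 3)*neighbors + 2*third))                # 0
for n in (4, 10, 100):
    print(n, [v.subs(N, n) for v in (neighbors, third)])
# 4: [4, 1]   10: [16, 28]   100: [196, 4753]
\end{verbatim}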
The displacements of the dominant angle and the single angle of the third type are equal for this lowest value of $N$ (the factor $(N-2)(N-3)=2$ for $N=4$), while the four nearest neighbor angles have a smaller displacement (with a factor of $N-3=1$), similar to the behavior of an $E$ mode of methane. As $N$ increases, the relative numbers of the three different interparticle angles change quickly, with the third type of interparticle angle, which is not a nearest neighbor of the dominant angle and thus has a small response, quickly becoming by far the most numerous. The first type always has a single dominant angle with the largest correction to the maximally symmetric zeroth-order configuration. The second type, which has a noticeable response to the opening or closing of the dominant angle, has $2(N-2)$ angles, a number that increases linearly with $N$, while the third type of angle, which has a negligible response for $N \gg 1$, has $N(N-1)/2-2(N-2)-1$ interparticle angles, a number that increases as $N^2/2$, quickly accounting for most of the interparticle angles in an ensemble of $N$ particles undergoing this collective motion. For example, for $N=10$ with $N(N-1)/2=45$ interparticle angles, there is a single dominant angle, 16 angles that are nearest neighbors and 28 angles with a very small response in the third group. For $N=100$, there are 4950 interparticle angles: one dominant angle, 196 nearest neighbor angles that show a noticeable response and 4753 angles that have a very small response. Thus a picture emerges as $N$ increases of compressional or phonon behavior for the motion in this $[N-2,2]$ sector. These $[N-2,2]$ modes involve oscillations in the angles that push the atoms together and pull them apart with no change in the particles' radial positions. This type of motion is best characterized as a compressional stationary wave, i.e. a phonon oscillation. This is consistent with the very low frequency of this mode compared to the frequencies of the other four types of normal coordinates, as will be shown in Section~\ref{sec:Nfreq}, and with the large zero-point energy seen in Eq.~(\ref{eq:E1}). \medskip \subparagraph{Examples.} The three types of interparticle angles have {\it displacement} values that also depend on the value of $N$, evolving from displacements that are roughly comparable for all three types of angles for low values of $N$, e.g. $4 \leq N \leq 6$, to displacements that are quite different in magnitude, differing by factors $\sim N$ and $\sim N^2$ as $N$ increases. Using the ratios of the different factors discussed above in the expressions for the angular displacements, the dominant angle has an angular displacement value of $(N-2)(N-3)$ that is a factor of $N-2$ times larger than the angular displacements of $-(N-3)$ of the nearest neighbor angles and is a factor of $(N-2)(N-3)/2$ times larger than the angular displacements of 2 for the third type of interparticle angle. As explicit examples, I will look at the ratios of the three types of displacements for two values of $N$, $N=4$ and $N=10$, and will again consider the motion of individual particles participating in the collective motion of the symmetry coordinate $[S_{\overline{\gamma}'}^{[N-2,2]}]_{ij}$ which has the highest values of $i$ and $j$ ($i=N-2$, $j=N$) and thus involves the motion of all $N$ particles.
For $N=4$, the symmetry coordinate is expected to have behavior similar to an E mode of methane, while for $N=10$, it will be seen that the behavior of this symmetry coordinate in the $[N-2,2]$ sector has already evolved into compressional behavior. For the lowest value of $N=4$, the displacement of the dominant angle is twice (the factor $N-2 = 2$) the displacement of the nearest neighbor angles and equal to the displacement (with factor $(N-2)(N-3)/2 = 1$) of the single angle in the third type. Thus, the dominant angle and the single angle of the third type have a contribution to the angle cosine that is twice as large as that of the four nearest neighbor angles. The interparticle angle $\theta_{12}$ associated with particles 1 and 2 opens and closes by the same amount and in sync with the dominant angle $\theta_{34}$ between particles 3 and 4. The nearest neighbor angles, $\theta_{13}, \theta_{14}, \theta_{23}, \theta_{24}$, open (and then close) by an amount that is roughly half as large in response to the closing (opening) of the dominant angle. (Since the value of $\gamma_{\infty}$ is typically close to zero, which is the value for mean-field interactions, the angular displacements of the angle cosines roughly give the actual angles of these motions.) Thus, the four particles involved in the collective motion of the symmetry coordinate ${[\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2, \hspace{1ex} 2]}]_{24}$ perform a simultaneous opening and closing of two different interparticle angles, $\theta_{34}$ and $\theta_{12}$, similar to an $E$ mode of methane, with the neighboring angles making smaller adjustments. A similar analysis for the case of $N=10$ shows clearly that the motion of the particles participating in this symmetry coordinate has evolved from the methane picture of two interparticle angles opening and closing in sync, to behavior that looks much more like a compressional wave. For $N=10$ and letting $i$ and $j$ assume their largest values, $i=8, j=10$, I analyze the behavior of the particles participating in the symmetry coordinate ${[\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2, \hspace{1ex} 2]}]_{8,10}$. The dominant interparticle angle $\theta_{9,10}$ closes (and then opens) with an angular displacement that is eight times (the factor $N-2 =8$) the angular displacement of the 16 nearest neighbor angles and 28 times (the factor $(N-2)(N-3)/2 = 28$) the angular displacement of the 28 interparticle angles of the third type. Thus both the nearest neighbor angles and especially the third type of interparticle angle, which has become the largest group, have already begun to have a very small response compared to the change in the dominant angle. This trend is expected to continue as $N$ increases and the number of interparticle angles in the third type becomes very large. (For $N=100$, the dominant angle closes by an angular displacement that is 98 times (the factor $N-2 =98$) larger than that of the 196 nearest neighbor angles, which open by a very small amount, while the 4753 interparticle angles of the third type adjust by an even smaller amount, 4753 times (the factor $(N-2)(N-3)/2 = 98\times 97/2 = 4753$) smaller than the dominant angle, more than three orders of magnitude smaller, clearly a negligible response.) \medskip \subsection{Mixing coefficients as a function of $N$} For the mixing coefficients that determine the radial/angular mixing in the normal modes for the $[N]$ and $[N-1,1]$ sectors, very few general comments can be made without specifying a particular Hamiltonian.
The mixing coefficients as defined in Eq.~(\ref{eq:tanthetaalphapm}) have a complicated $N$ dependence that originates in the Hamiltonian terms at first order. All the terms in the Hamiltonian have explicit $N$ dependence which affects the mixing of the radial and angular modes of the $[N]$ and $[N-1,1]$ sectors. In particular, the type of confining potential and the particular interparticle interaction chosen will affect the $N$ dependence of the normal modes' behavior. (Of course, the type of confining potential and interparticle interaction potential have effects on the character of the normal modes through the mixing coefficients (as well as the frequencies) apart from the $N$ dependence that I am studying in this paper.) In the case of a system of identical fermions in the unitary regime, which has been intensely studied in the laboratory, the mixing coefficients for the $[N]$ sector have the following form: \begin{widetext} \begin{eqnarray} \mbox{cos}\theta^{[N]}_+ &=& \frac{\sqrt{2} \sqrt{N-1}(c+(N/2-1)d)} {\sqrt{2(N-1)(c+(N/2-1)d)^2 +(-a-(N-1)b+{\lambda}_{[N]}^+)^2}}\label{eq:cos0p} \\ \mbox{sin}\theta^{[N]}_+ &=& \frac{-a-(N-1)b+{\lambda}_{[N]}^+} {\sqrt{2(N-1)(c+(N/2-1)d)^2+(-a-(N-1)b +{\lambda}_{[N]}^+)^2}} \label{eq:sin0p}\\ \mbox{cos}\theta^{[N]}_- &=& \frac{\sqrt{2} \sqrt{N-1}(c+(N/2-1)d)} {\sqrt{2(N-1)(c+(N/2-1)d)^2 +(-a-(N-1)b+{\lambda}_{[N]}^-)^2}}\label{eq:cos0m} \\ \mbox{sin}\theta^{[N]}_- &=& \frac{-a-(N-1)b+{\lambda}_{[N]}^-} {\sqrt{2(N-1)(c+(N/2-1)d)^2+(-a-(N-1)b +{\lambda}_{[N]}^-)^2}} \label{eq:sin0m} \end{eqnarray} \end{widetext} \noindent where ${\lambda}_{[N]}^\pm$ is given by Eq.~\ref{eq:lambda12pm}. The quantities $a,b,c,d, \mbox{ and } {\lambda}_{[N]}^{\pm}$ are defined in Eq.~(42) of Ref.~\cite{FGpaper} in terms of the $F$ and $G$ elements and have explicit $N$ dependence as well as $N$ dependence from the $F$ and $G$ elements of Eq.~\ref{eq:Gham} from the specific Hamiltonian. These $F$ and $G$ elements are defined in Ref.~\cite{FGpaper} for three different Hamiltonians in Eqs.~(75, 76, 100, 101, 119, 120) and exhibit explicit $N$ dependence that originates in the Hamiltonian terms at first order. Thus there are three layers of analytic expressions that can bring in $N$ dependence: the expressions for $\cos\theta^{[N]}_{\pm}$ and $\sin\theta^{[N]}_{\pm}$ in Eqs.~(\ref{eq:cos0p}-\ref{eq:sin0m}) above, the expressions for $a,b,c,d, \mbox{ and } \lambda_{[N]}^{\pm}$, and the expressions for the $F$ and $G$ elements for a specific Hamiltonian. I show the resulting behavior of the mixing coefficients as a function of $N$ for a system of identical fermions in the unitary regime in Figs.~(\ref{fig:one}-\ref{fig:four}). In Fig.~\ref{fig:one}, I have plotted the square of the mixing coefficients, ${|\cos\theta^{[N]}_+|^2}$ and ${|\sin\theta^{[N]}_+|^2}$ for $q^{\prime [N]}_+$: \begin{equation} {\bm{q}'}_+^{[N]} = c_+^{[N]} \left( \cos{\theta^{[N]}_+} \, [{\bm{S}}_{\bar{\bm{r}}'}^{[N]}] \, + \, \sin{\theta^{[N]}_+} \, [{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N]}] \right) \nonumber \end{equation} \noindent as a function of $N$. The square of these coefficients gives the probability associated with each symmetry coordinate, $[{\bm{S}}_{\bar{\bm{r}}'}^{[N]}]$ or $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N]}] $, in the expression for the normal mode, $q^{\prime [N]}_{+}$.
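The $N$ dependence shown in the figures below can be generated directly from Eqs.~(\ref{eq:cos0p})-(\ref{eq:sin0m}) once the Hamiltonian-dependent inputs are in hand. The following minimal sketch evaluates these expressions; the numerical values assigned to $a$, $b$, $c$, $d$, and $\lambda_{[N]}^{+}$ are purely illustrative placeholders, since the actual values must be computed from the $F$ and $G$ elements of the chosen Hamiltonian (Eq.~(42) of Ref.~\cite{FGpaper}).
\begin{verbatim}
import numpy as np

def mixing_coeffs(N, a, b, c, d, lam):
    """cos(theta^[N]) and sin(theta^[N]) of Eqs. (cos0p)-(sin0m)
    for a given root lam of the FG secular equation."""
    u = np.sqrt(2.0) * np.sqrt(N - 1.0) * (c + (N / 2.0 - 1.0) * d)
    w = -a - (N - 1.0) * b + lam
    norm = np.hypot(u, w)        # sqrt(u**2 + w**2)
    return u / norm, w / norm

# purely illustrative placeholder inputs, NOT from a specific Hamiltonian
N = 50
cosp, sinp = mixing_coeffs(N, a=1.0, b=0.02, c=0.3, d=0.01, lam=2.5)
print(cosp**2 + sinp**2)    # = 1: the squares are the probabilities of
print(cosp**2, sinp**2)     # the radial/angular symmetry coordinates
\end{verbatim}
Scanning over $N$ with inputs built from the actual $F$ and $G$ elements would trace out crossover curves of the kind plotted in Figs.~\ref{fig:one}-\ref{fig:four}.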
The plot shows that the character of the normal mode $q^{\prime [N]}_{+}$ is almost purely angular for $N \lesssim 30$, after which there is a gradual crossing of character between $N=30$ and $N=200$, during which the normal mode has mixed radial and angular character. For $N \geq 200$ the character is $> 90\%$ radial, and for $N \gg 1$ the character becomes almost purely radial. The other normal mode in the $[N]$ sector, $q^{\prime [N]}_-$, has complementary behavior, starting out totally radial and switching to totally angular as shown in Fig.~\ref{fig:two}. In this case, the crossover is quite sharp, starting again around $N \sim 30$ but finishing by $N=50$. So this normal mode, $q^{\prime [N]}_-$, is purely radial for low $N$ and purely angular for very large values of $N$. It has mixed radial/angular character for a very small range of $N$. A bit of inspection reveals that this behavior is being dictated to a large extent by the explicit $N$ dependence in the expressions for $\cos\theta^{[N]}_{\pm}$ and $\sin\theta^{[N]}_{\pm}$ (Eqs.~(\ref{eq:cos0p}-\ref{eq:sin0m}) above), which have alternating limits of 0 or 1 as $N\rightarrow 1$ or $N\rightarrow \infty$. The position and shape of the crossover is influenced by the other sources of $N$ dependence that originate in the specific Hamiltonian. \begin{figure} \includegraphics[scale=0.9]{cos0psin0p.eps} \renewcommand{\baselinestretch}{0.8} \caption{The square of the mixing coefficients $|\cos\theta^{[N]}_+|^2$ and $|\sin\theta^{[N]}_+|^2$ for the normal mode $q^{\prime [N]}_+$ as a function of $N$.} \label{fig:one} \end{figure} \begin{figure} \includegraphics[scale=0.9]{cos0msin0m.eps} \renewcommand{\baselinestretch}{0.8} \caption{The square of the mixing coefficients $|\cos\theta^{[N]}_-|^2$ and $|\sin\theta^{[N]}_-|^2$ for the normal mode $q^{\prime [N]}_-$ as a function of $N$.} \label{fig:two} \end{figure} The mixing coefficients for the $[N-1,1]$ sector have the following form: \begin{widetext} \begin{eqnarray} \mbox{cos}\theta^{[N-1,1]}_+ &=& \frac{\sqrt{N-2}(c-d)} {\sqrt{(N-2)(c-d)^2 + (-a+b+{\lambda}_{[N-1,1]}^+)^2}}\label{eq:cos1p} \\ \mbox{sin}\theta^{[N-1,1]}_+ &=& \frac{-a+b+{\lambda}_{[N-1,1]}^+} {\sqrt{(N-2)(c-d)^2 + (-a+b+{\lambda}_{[N-1,1]}^+)^2}}\label{eq:sin1p} \\ \mbox{cos}\theta^{[N-1,1]}_- &=& \frac{\sqrt{N-2}(c-d)} {\sqrt{(N-2)(c-d)^2 + (-a+b+{\lambda}_{[N-1,1]}^-)^2}}\label{eq:cos1m} \\ \mbox{sin}\theta^{[N-1,1]}_- &=& \frac{-a+b+{\lambda}_{[N-1,1]}^-} {\sqrt{(N-2)(c-d)^2 + (-a+b+{\lambda}_{[N-1,1]}^-)^2}}\label{eq:sin1m} \end{eqnarray} \end{widetext} \noindent where the quantities $a,b,c,d, \mbox{ and } {\lambda}_{[N-1,1]}^{\pm}$, as well as the $F$ and $G$ elements, are defined as before. In Fig.~\ref{fig:three}, I have plotted the square of the mixing coefficients for $q^{\prime [N-1,1]}_{\xi +}$, again for a Hamiltonian describing a system of identical fermions in the unitary regime.
\begin{figure} \includegraphics[scale=0.9]{cos1psin1p.eps} \renewcommand{\baselinestretch}{0.8} \caption{The square of the mixing coefficients $|\cos\theta^{[N-1,1]}_+|^2$ and $|\sin\theta^{[N-1,1]}_+|^2$ for the normal mode $q^{\prime [N-1,1]}_+$ as a function of $N$.} \label{fig:three} \end{figure} \begin{figure} \includegraphics[scale=0.9]{cos1msin1m.eps} \renewcommand{\baselinestretch}{0.8} \caption{The square of the mixing coefficients $|\cos\theta^{[N-1,1]}_-|^2$ and $|\sin\theta^{[N-1,1]}_-|^2$ for the normal mode $q^{\prime [N-1,1]}_-$ as a function of $N$.} \label{fig:four} \end{figure} The plot shows that the character of the normal mode $q^{\prime [N-1,1]}_{\xi +}$ is almost purely angular for $N \lesssim 10$, after which there is a rather sudden crossing of character and then a gradual trend toward purely radial character. For $N \gg 1$, the character is almost purely radial. The other normal mode in the $[N-1,1]$ sector, $q^{\prime [N-1,1]}_{\xi -}$, has complementary behavior, starting out totally radial and switching to totally angular as shown in Fig.~\ref{fig:four}. In this case the crossing is quite sharp. Analogous to the $[N]$ sector, this behavior is being dictated to a large extent by the explicit $N$ dependence in the expressions for $\cos\theta^{[N-1,1]}_{\pm}$ and $\sin\theta^{[N-1,1]}_{\pm}$ in Eqs.~(\ref{eq:cos1p}-\ref{eq:sin1m}) above, which have alternating limits of 0 or 1 as $N\rightarrow 2$ or $N\rightarrow \infty$. The position and shape of the crossover is influenced by the other sources of $N$ dependence that originate in the specific Hamiltonian. \subsection{Normal mode frequencies as a function of $N$.} \label{sec:Nfreq} Analytic expressions for the normal mode frequencies were derived in Ref.~\cite{FGpaper} using a method outlined in Appendices B and C of that paper, which derives analytic formulas for the roots, $\lambda_{\mu}$, of the $FG$ secular equation. The normal-mode vibrational frequencies, $\bar{\omega}_{\mu}$, are related to the roots $\lambda_{\mu}$ of ${\bf FG}$ by: \begin{equation}\label{eq:omega_p} \lambda_{\mu}=\bar{\omega}_{\mu}^2. \end{equation} The two frequencies associated with the $\lambda_{0}$ roots of multiplicity one are of the form \begin{equation} \bar{\omega}_{{0}^{\pm}}=\sqrt{\eta_0 \pm \sqrt{{\eta_0}^2-\Delta_0}}, \end{equation} \noindent where: \begin{equation}\label{eq:lam0defs} \begin{split} \eta_0 &= \frac{1}{2}\Biggl[a-(N-1)b+g+2(N-2)h \\ & \quad +\frac{(N-2)(N-3)}{2}\iota \Biggr] \\ \Delta_0 &= (a-(N-1)b)\left[g+2(N-2)h-\frac{(N-2)(N-3)}{2}\iota\right] \\ &\quad -\frac{N-2}{2}(2c+(N-2)d)(2e+(N-2)f). \end{split} \end{equation} For the two roots of multiplicity $N-1$, the frequencies are of the form \begin{equation} \bar{\omega}_{{1}^{\pm}}=\sqrt{\eta_1 \pm \sqrt{{\eta_1}^2-\Delta_1}}, \end{equation} \noindent where $\eta_1$ and $\Delta_1$ are given by: \begin{eqnarray}\label{eq:lam1defs} \eta_1 &=& \frac{1}{2}\left[a-b+g+(N-4)h-(N-3)\iota\right] \nonumber \\ \Delta_1 &=&(N-2)(c-d)(e-f)+(a-b) \nonumber \\ &&\times\left[g+(N-4)h-(N-3)\iota\right]. \end{eqnarray} The frequency ${\bar{\omega}}_2$, associated with the root $\lambda_2$ of multiplicity $N(N-3)/2$, is given by: \begin{equation} \bar{\omega}_2=\sqrt{g-2h+\iota}. \end{equation} The quantities $a,b,c,d,e,f,g,h,\iota$ are defined as before in Eq.
(42) in Ref.~\cite{FGpaper} in terms of the $F$ and $G$ elements and have explicit $N$ dependence as well as $N$ dependence from the $F$ and $G$ elements from a particular Hamiltonian. Thus the analytic expressions for the frequencies (like the mixing coefficients) have three layers of $N$ dependence: explicit $N$ dependence in the formulas for $\eta_0$, $\Delta_0$, $\eta_1$ and $\Delta_1$ in Eqs.~(\ref{eq:lam0defs}) and (\ref{eq:lam1defs}); explicit $N$ dependence in the formulas for the quantities $a,b,c,d,e,f,g,h,\iota$; and finally the $N$ dependence in the $F$ and $G$ elements from a specific Hamiltonian in these formulas. In Figs.~(\ref{fig:five})-(\ref{fig:seven}), I show the $N$ dependence of the frequencies for the five types of normal modes for a Hamiltonian of an ensemble of identical fermions in the unitary regime. In Fig.~(\ref{fig:five}), the frequencies $\bar{\omega}_{{0}^+}$ and $\bar{\omega}_{{0}^-}$ in the $[N]$ sector show a clear avoided crossing as the characters of the normal modes $q^{\prime [N]}_+$ and $q^{\prime [N]}_-$ change from angular to radial for $q^{\prime [N]}_+$ and radial to angular for $q^{\prime [N]}_-$. The radial behavior is associated with a frequency that starts below the center of mass frequency for low $N$ and then rises above the center of mass frequency as $N$ increases. Note that the angular behavior is associated with a frequency that is exactly twice the trap frequency for all values of $N$, revealing the separation of a center of mass coordinate. As analyzed in Section~\ref{sec:symmetrymotions}, this angular motion in the $[N]$ sector looks like a symmetric bending motion for small values of $N$, as seen in the $A_1$ mode of ammonia, but evolves into a rigid center of mass movement of the whole ensemble, with the radial interparticle distances remaining rigidly constant as the entire ensemble moves relative to the center of the trap, creating small displacements for the interparticle angles. In Fig.~(\ref{fig:six}), the frequencies $\bar{\omega}_{{1}^+}$ and $\bar{\omega}_{{1}^-}$ in the $[N-1,1]$ sector show behavior that starts out with both frequencies $\sim 1.3$ times the trap frequency for low values of $N$. As $N$ increases these two frequencies rapidly separate: one, $\bar{\omega}_{{1}^-}$, associated with angular behavior, goes to the trap frequency, while the other, $\bar{\omega}_{{1}^+}$, describing radial behavior, increases slowly. In both these sectors, $[N]$ and $[N-1,1]$, the radial frequency is higher than the corresponding angular frequency, which is expected and is also seen for small molecules like ammonia and methane that offer good molecular equivalents. In Fig.~(\ref{fig:seven}), the frequency $\bar{\omega}_{2}$ of the $[N-2,2]$ sector starts out near the trap frequency for low values of $N$. As $N$ increases this frequency rapidly decreases to extremely small values, several orders of magnitude smaller than the trap frequency, consistent with the slow oscillations of a phonon mode. Inspection of the sources of $N$ dependence reveals that the frequencies shown in Figs.~(\ref{fig:five})-(\ref{fig:seven}) for a Hamiltonian of identical fermions in the unitary regime show a complicated dependence on all three layers of $N$ dependence.
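As a concrete illustration of how these layers combine, the sketch below reproduces only the algebra of Eqs.~(\ref{eq:lam0defs}) and (\ref{eq:lam1defs}); the inputs $a$ through $\iota$ are hypothetical placeholders standing in for the Hamiltonian-dependent quantities of Eq.~(42) of Ref.~\cite{FGpaper}, through which all three layers of $N$ dependence enter the frequencies.
\begin{verbatim}
import numpy as np

def frequencies(N, a, b, c, d, e, f, g, h, iota):
    """Normal-mode frequencies from the roots of the FG secular
    equation, Eqs. (lam0defs)-(lam1defs); inputs are placeholders."""
    eta0 = 0.5 * (a - (N - 1) * b + g + 2 * (N - 2) * h
                  + 0.5 * (N - 2) * (N - 3) * iota)
    Delta0 = ((a - (N - 1) * b)
              * (g + 2 * (N - 2) * h - 0.5 * (N - 2) * (N - 3) * iota)
              - 0.5 * (N - 2) * (2 * c + (N - 2) * d)
                    * (2 * e + (N - 2) * f))
    eta1 = 0.5 * (a - b + g + (N - 4) * h - (N - 3) * iota)
    Delta1 = ((N - 2) * (c - d) * (e - f)
              + (a - b) * (g + (N - 4) * h - (N - 3) * iota))
    w0 = tuple(np.sqrt(eta0 + s * np.sqrt(eta0**2 - Delta0)) for s in (1, -1))
    w1 = tuple(np.sqrt(eta1 + s * np.sqrt(eta1**2 - Delta1)) for s in (1, -1))
    w2 = np.sqrt(g - 2 * h + iota)   # the [N-2,2] (phonon) frequency
    return w0, w1, w2

# illustrative placeholder inputs only, NOT from a specific Hamiltonian
print(frequencies(20, a=2.0, b=0.01, c=0.1, d=0.01,
                  e=0.1, f=0.01, g=1.0, h=0.01, iota=0.001))
\end{verbatim}
With inputs built from a specific Hamiltonian, scanning $N$ in this way yields the complicated dependence on all three layers seen in Figs.~(\ref{fig:five})-(\ref{fig:seven}).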
This result differs from the behavior of the mixing coefficients for the same Hamiltonian, which showed behavior as a function of $N$ that was dominated by the explicit $N$ dependence in Eqs.~(\ref{eq:cos0p}-\ref{eq:sin0m}) and Eqs.~(\ref{eq:cos1p}-\ref{eq:sin1m}) and was not as sensitive to the other sources of $N$ dependence from the specific Hamiltonian. \begin{figure} \includegraphics[scale=0.9]{omega0.eps} \renewcommand{\baselinestretch}{0.8} \caption{The frequencies $\bar{\omega}_{{0}^{\pm}}$ in units of the trap frequency for the normal modes $q^{\prime [N]}_{\pm}$ as a function of $N$.} \label{fig:five} \end{figure} \begin{figure} \includegraphics[scale=0.9]{omega1.eps} \renewcommand{\baselinestretch}{0.8} \caption{The frequencies $\bar{\omega}_{{1}^{\pm}}$ in units of the trap frequency for the normal modes $q^{\prime [N-1,1]}_{\pm}$ as a function of $N$.} \label{fig:six} \end{figure} \begin{figure} \includegraphics[scale=0.7]{omega2.eps} \renewcommand{\baselinestretch}{0.8} \caption{The frequency $\bar{\omega}_{2}$ in units of the trap frequency for the normal mode $q^{\prime [N-2,2]}$ as a function of $N$.} \label{fig:seven} \end{figure} \section{Summary and Final Thoughts} \label{sec:SumConc} In this study, I have looked in detail at both the macroscopic collective behavior and the microscopic contributions of individual particles to this behavior for the normal mode solutions of the first-order symmetry-invariant perturbation equation in inverse dimensionality for a system of confined, interacting identical particles. These normal mode solutions were previously obtained analytically as a function of $N$ and used to obtain accurate results for energies, frequencies, wave functions, and density profiles for systems of identical bosons \cite{energy,test,toth,laingdensity} and energies, frequencies \cite{prl} and thermodynamic quantities \cite{emergence} for ultracold fermions in the unitary regime. These solutions have been tested against an exactly solvable model problem of harmonically interacting particles under harmonic confinement \cite{test}. Comparing this wave function to the exact analytic wave function obtained in an independent solution, exact agreement was found (to ten or more digits of accuracy), confirming this general theory for a fully interacting $N$-body system in three dimensions \cite{test} and verifying the analytic expressions for this normal mode basis. We also tested this general, fully interacting wave function for bosons of Ref.~\cite{JMPpaper}, exact through first order, by deriving a property, the density profile of the ground state, for the same model problem. Our density profile is indistinguishable from the $D=3$ first-order result from the independent solution of this fully interacting, $N$-body problem \cite{toth}. These earlier studies verifying the general formalism did not focus on the physical character of this symmetry basis used to obtain the normal mode solutions. As mentioned earlier, our construction of the symmetry coordinates was done systematically as described in Ref.~\cite{JMPpaper}, so the symmetry coordinates, which transform irreducibly under $S_N$, have the simplest functional forms possible. The first symmetry coordinate involves only two of the particles, and each succeeding symmetry coordinate was chosen to have the next simplest functional form possible under the requirement that it transform irreducibly under $S_N$, and so on.
With this choice the complexity of the motions described by the symmetry coordinates was kept to a minimum, building up incrementally as additional particles were involved in the motion, ensuring that there was no disruption of lower $N$ symmetry coordinates. This process was chosen primarily to simplify the mathematical complexity of this basis. Other choices would have resulted in different mathematical functions that still comprised a basis for the normal mode solutions in each sector. Note that the symmetry coordinates depend only on the symmetry structure of the Hamiltonian, not on specific details of the interparticle potential, unlike the normal mode coordinates, which depend on the specific details of the potentials involved. Our initial studies using these symmetry coordinates were focused on ground states of systems of ultracold bosons \cite{energy,laingdensity} and later fermions \cite{prl}. The first use of excited states in this many-body formalism was in a recent paper studying thermodynamic quantities for identical fermions in the unitary regime \cite{emergence}. Constructing the partition function in this study required the use of a large number of excited states from the normal mode spectrum (which has an infinite number of equally spaced states). These states are chosen specifically to comply with the enforcement of the Pauli principle, thus connecting the Pauli principle to many-body interaction dynamics through the normal modes. The success of this study in obtaining thermodynamic quantities for the energy, entropy, and heat capacity that agree quite well with experimental data has increased the interest in investigating the physical character of these states, since they offer the possibility of acquiring physical intuition into the dynamics of the collective motion supported by this unitary regime. In particular, the phonon character of the normal modes with the lowest frequency and the radial (or angular) excitation of a single particle out of this mode, i.e.\ a particle-hole excitation, present a picture of the dynamics that leads to a gapped spectrum and collective behavior in the form of superfluidity. With this motivation, I have investigated closely both the macroscopic behavior of each of the five types of normal modes and the microscopic contributions of each particle to this collective behavior, studying the evolution of collective motion as the number of particles increases. \subparagraph{Summary.} In summary, my analysis has shown a consistent picture of behavior evolving smoothly and rapidly from the low $N$ systems that have good molecular equivalents, as seen in the behavior of ammonia and methane, to very different character for the collective motion of larger $N$ systems. A number of observations have been made from this analysis that may prove useful in understanding the contribution of particle behavior to the emerging collective behavior of an ensemble. I list them below: \smallskip 1) The analytic expressions for the normal modes produce behavior for small $N$ that is analogous to the known behavior of small molecular systems such as ammonia and methane whose atoms move under Coulombic confinement. \smallskip 2) As $N$ increases, the behavior of these same analytic functions rapidly changes character, with the exception of the symmetric stretch/breathing motion (part i below).
\smallskip \indent\indent i) In the $[N]$ sector, the breathing motion of the radial $[N]$ mode retains this character as $N$ increases, with the symmetric radial displacements simply decreasing in amplitude. \smallskip \indent\indent ii) The angular mode in the $[N]$ sector evolves from a symmetric bending character for small $N$ to a center of mass motion for large $N$. This change in character occurs for fairly small $N$. For example, for $N = 10$, the motion would be viewed more appropriately as a rigid center of mass motion, with the $N(N-1)/2 = 45$ interparticle angles making identical small adjustments, rather than as a symmetric bending motion. \smallskip \indent\indent iii) and iv) The asymmetric stretch and asymmetric bending character of the normal modes in the $[N-1,1]$ sector for low $N$ evolves smoothly into radial and angular single particle excitations, i.e.\ particle-hole excitations. Again, this happens quickly as $N$ increases. By $N=10$, the $10^{th}$ particle participating in the motion of the symmetry coordinate $[{\bm{S}}_{\bar{\bm{r}}'}^{[N-1,\hspace{1ex} 1]}]_{\xi=9}$ or $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-1, \hspace{1ex} 1]}]_{\xi=9}$ has a displacement that is nine times the displacements of the other particles, resulting in behavior that is viewed more appropriately as a radial or angular excitation. \smallskip \indent\indent v) Similarly, the bending modes of the small $N$ systems in the $[N-2,2]$ sector quickly and smoothly evolve into compressional or phonon behavior as $N$ increases. \smallskip Interestingly, this change in character from small $N$ to large $N$ is dictated by fairly simple analytic forms that create the evolution in character for the symmetry coordinates. Inspecting the forms of the microscopic motions of the individual particles for the different sectors $[N]$, $[N-1,1]$, and $[N-2,2]$ (see Eqs.~(\ref{eq:rN}), (\ref{eq:SgammaN}), and (\ref{eq:rNm1inr})-(\ref{eq:nm2g})) shows that the relevant $N$ dependence in the $[N]$ sector is just a normalization factor, creating a decrease in amplitude as $N$ increases for these symmetric motions. However, in the $[N-1,1]$ and $[N-2,2]$ sectors, the functional $N$ dependence is more involved. Ignoring the leading common factors, including a normalization factor, the $N$ dependence is determined by an intricate balancing of Kronecker delta functions and Heaviside functions that give zero or unity depending on the value of their indices, which involve integers referring to specific particles. This intricate accounting of $1$'s and $0$'s determines the motion of the $N$ particles of this normal mode for both small $N$ and large $N$. Not surprisingly, the character depends on the interplay of all the individual particles one by one, whose contributions are tracked perfectly by the Kronecker delta and Heaviside functions. 3) The behavior of the normal modes which have a mixture of the radial and angular symmetry coordinates in the $[N]$ and $[N-1,1]$ sectors was investigated for a particular Hamiltonian of current interest, that of an ensemble of identical confined fermions in the unitary regime. This behavior is seen to transition from totally radial, i.e.\ a pure radial symmetry coordinate, to totally angular behavior, i.e.\ a pure angular symmetry coordinate (or vice versa), as $N$ changes.
Thus for very small $N$ ($N \le 20$ for the $[N]$ sector and $N \le 10$ for the $[N-1,1]$ sector) or for very large $N$, $N \gg 1$, the normal modes adopt the character of a pure symmetry coordinate displaying either totally radial or totally angular behavior. For some intermediate values of $N$ there is a region where the normal coordinates show significant mixing of the radial and angular symmetry coordinates, making the behavior more difficult to characterize. In some cases, when the crossing is very sharp, this region is quite small, while other cases show a broader evolution of character from radial to angular or angular to radial. For large $N$, the normal modes evolve into purely radial or purely angular behavior in the case of identical, confined fermions in the unitary regime. This means that the analytically derived symmetry coordinates are eigenfunctions of this first-order perturbation equation. When the Hamiltonian is transformed into block-diagonal form by the symmetry coordinates, the off-diagonal elements are negligible in these regimes. This result has implications for the stability of collective behavior in this regime, since the symmetry coordinates are eigenfunctions of an approximate underlying Hamiltonian and thus have some degree of stability unless the system is perturbed, e.g.\ by an increase in temperature. Although the construction of the symmetry coordinates was chosen to minimize the mathematical complexity and does not yield a unique basis of coordinates, the symmetry coordinates clearly contain information about the dynamics of this many-body problem of identical particles. Regardless of the strategy of their construction, they are, by definition and by construction, coordinates that transform under the irreducible representations of the symmetric group of $N$ identical objects, so the Hamiltonian of this first-order equation, which is invariant under the $N!$ symmetry operations of the symmetric group, is transformed to block-diagonal form when expressed in this basis. In this case, the blocks are small: $2 \times 2$ in the $[N]$ and $[N-1,1]$ sectors, which have both radial and angular representations, and $1 \times 1$, i.e.\ diagonal, in the $[N-2,2]$ sector. This Hamiltonian term in the first order perturbation equation contains beyond-mean-field effects. Thus the normal coordinates, whose frequencies and mixing coefficients depend on the interparticle interactions, are, in fact, beyond-mean-field {\it analytic} solutions to a many-body Hamiltonian through first order. \smallskip 4) Except for the center of mass frequency which separates out for all values of $N$ at twice the trap frequency, the frequencies of oscillation of the normal modes also evolve as $N$ increases. For the case studied of fermions in the unitary regime, the radial frequencies increase in both the $[N]$ and $[N-1,1]$ sectors, while the angular frequencies trend toward the trap frequency in the case of the $[N-1,1]$ sector and toward very small values in the case of the phonon mode in the $[N-2,2]$ sector (see Figs.~\ref{fig:five}-\ref{fig:seven}). The frequency of the $[N-2,2]$ modes, which starts out for low $N$ as a bending mode with a frequency near the trap frequency, quickly decreases as the motion evolves into phonon compressional behavior, going to extremely small values for this low energy mode, several orders of magnitude smaller than the trap frequency (see Fig.~\ref{fig:seven}).
\smallskip 5) The normal coordinates provide a basis not just for the ground state, but for the spectrum of excited states for $L=0$ and for higher order corrections in the perturbation expansion. I analyzed just one of the $N-1$ degenerate symmetry coordinates in each of the $[N-1,1]$ sectors that have a frequency of $\bar{\omega}_1$ and just one of the $N(N-3)/2$ degenerate symmetry coordinates in the $[N-2,2]$ sector that have a frequency of $\bar{\omega}_2$. In these cases, the symmetry coordinate was chosen to have the highest index or indices possible: $\xi = N-1$ for $[{\bm{S}}_{\bar{\bm{r}}'}^{[N-1,\hspace{1ex} 1]}]_{N-1}$ and $[{\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-1, \hspace{1ex} 1]}]_{N-1}$, and indices $i,j$ equal to the highest values $(i=N-2, j=N)$ for ${[\bm{S}}_{\overline{\bm{\gamma}}'}^{[N-2, \hspace{1ex} 2]}]_{i,j}$, in order to involve all $N$ particles in the motion. Symmetry coordinates with lower values for $\xi$ or $i$ and $j$ would result in behavior, involving fewer particles, modified from that analyzed in Section~\ref{sec:collective}. This raises the question of what behavior is actually dominant in an ensemble. A definitive answer requires obtaining the actual wave function for a particular state of $N$ bosons or fermions with the correct permutation symmetry enforced and looking at the dominant contributions from the degenerate normal modes of the $[N-1,1]$ or $[N-2,2]$ sectors. Some light can be shed on this question by noting that as $N$ increases a higher and higher percentage of the degenerate symmetry coordinates in the $[N-1,1]$ and $[N-2,2]$ sectors will have evolved into the ``large $N$'' collective behavior as described above. For example, for $N = 10$, over 80\% of the degenerate symmetry coordinates in the $[N-1,1]$ sectors have evolved into motion that is more appropriately described as single particle radial excitation or single particle angular excitation behavior rather than an antisymmetric stretching or bending motion. For $N=100$, over 90\% of these degenerate symmetry coordinates have evolved into ``large $N$'' collective behavior. Similarly in the $[N-2,2]$ sector, for $N=10$, over 80\% of the $N(N-3)/2$ degenerate symmetry coordinates have evolved into ``large $N$'' collective behavior with a single dominant interparticle angle and significantly smaller responses from the remaining interparticle angles. For $N =100$, the percentage is over 90\%, and by $N=1000$, over 99\% of the degenerate symmetry coordinates in this sector have ``large $N$'' collective behavior. \subparagraph{Conclusions.} This investigation into the evolution of collective behavior as a function of $N$ suggests that this type of collective behavior defined by the normal modes of the system smoothly and quickly evolves from the well-known vibrational motions of small $N$ systems, which have been characterized as symmetric breathing and bending, asymmetric stretching and bending, and the simultaneous opening and closing of interparticle angles, to the ``large $N$'' collective behavior that is more appropriately described as breathing, center of mass motion, radial and angular particle-hole excitations, and phonon behavior. Thus, the analysis of behavior for these five analytic expressions for the normal mode solutions for a confined system of $N$ identical particles yields consistent, physically intuitive behaviors that have been observed in the laboratory. The transition to ``large $N$'' collective behavior happens at very low values of $N$, e.g.
$N=10$, which is consistent with the good agreement obtained in previous few-body studies for thermodynamic quantities \cite{adhikari,hu6,hu7,hu8,blume1,blume2,levinsen,grining}. What are the dynamics that can drive one of these collective behaviors to become the dominant behavior of a system, with competing behaviors, collective or not, suppressed? My analysis of the $N$ dependence of the symmetry coordinates for a Hamiltonian that is known to support collective behavior in the form of superfluidity at ultracold temperatures in the unitary regime has revealed two phenomena that have the potential to support the creation and stabilization of collective behavior. First, the mixing of radial and angular behavior in the normal modes is seen to tend toward pure radial or pure angular behavior for very large $N$, resulting in symmetry coordinates that are eigenfunctions of an approximate Hamiltonian governing the physics of the unitary regime, thus acquiring some amount of stability if unperturbed. Second, from Figs.~(\ref{fig:five})-(\ref{fig:seven}), one can see that for low values of $N$ the five different frequencies start out closer in value, but as $N$ increases these five frequencies spread out, creating large gaps between their values. These gaps could provide the stability for collective behavior if mechanisms to prevent the transfer of energy to other modes exist (such as low temperatures) or can be constructed. \section{Acknowledgments} I would like to thank the National Science Foundation for financial support under Grant No. PHY-1607544.
\section{Introduction} In recent decades a wealth of remarkable results has been achieved in observational cosmology, including precise measurements of the Cosmic Microwave Background (CMB) radiation \cite{CMB}, systematic observations of nearby and distant Type Ia supernovae (SNe Ia) \cite{supernova}, study of baryon acoustic oscillations \cite{BAO}, mapping the large-scale structure of the Universe, microlensing observations, and many others (see, for example, the review \cite{Observations}). These achievements have set new serious challenges before theoretical physics and prompted many speculations, mostly based on phenomenological ideas, which involve new dynamical sources of gravity that act as dark energy, and/or various modifications to general relativity. The spectrum of models postulated and explored in recent years is extremely wide and includes, in particular, Quintessence \cite{quintessence}, $K$-essence \cite{Kessense}, Ghost Condensates \cite{Ghost}, Dvali-Gabadadze-Porrati gravity \cite{DGP}, Galileon gravity \cite{Ggravity}, and $f(R)$ gravity \cite{fRgravity} (see Refs. \cite{SahSta, PeeRat, Nob, CopSamTsu, CalKam, SilTro, Cli_etal, AmeTsu} for detailed reviews of these and other models). Most of the phenomenological models represent various modifications of scalar-tensor theories. Of particular interest are models allowing for nonminimal couplings between derivatives of scalar fields and the curvature. As was shown by Amendola \cite{Amendola}, a theory with derivative couplings cannot be recast into the Einsteinian form by a conformal rescaling $\tilde g_{\mu\nu} = e^{2\omega} g_{\mu\nu}$. He also supposed that an effective cosmological constant, and hence an inflationary phase, can be recovered without considering any effective potential if a nonminimal derivative coupling is introduced. Amendola himself \cite{Amendola} investigated a cosmological model with the Lagrangian containing only the derivative coupling term $\kappa_2 R_{\mu\nu}\phi^{,\mu}\phi^{,\nu}$ and presented some analytical inflationary solutions. A general model containing $\kappa_1 R\phi_{,\mu}\phi^{,\mu}$ and $\kappa_2 R_{\mu\nu}\phi^{,\mu}\phi^{,\nu}$ has been discussed by Capozziello {\em et al.} \cite{Capozziello}. They showed that the de Sitter spacetime is an attractor solution in the model. Further investigations of cosmological and astrophysical models with nonminimal derivative couplings have been continued in \cite{kincoupl,Bruneton,Tsujikawa,Sus:2009,SarSus:2010, Sus:2012, SusRom:2012}. Note that generally the order of field equations in models with nonminimal derivative couplings is higher than two. However, it reduces to second order in the particular case when the kinetic term is only coupled to the Einstein tensor, i.e. $\kappa G_{\mu\nu}\phi^{,\mu}\phi^{,\nu}$ (see, for example, Ref. \cite{Sus:2009}).\footnote{It is worth noting that a general single scalar field Lagrangian giving rise to second-order field equations had been derived by Horndeski \cite{Horndeski} in 1974. The model with $\kappa G_{\mu\nu}\phi^{,\mu}\phi^{,\nu}$ represents a particular form of the Horndeski Lagrangian.
Recent interest in second-order gravitational theories is also connected with the Dvali-Gabadadze-Porrati braneworld \cite{DGP} and Galileon gravity \cite{Ggravity}.} In our recent works \cite{Sus:2009,SarSus:2010,Sus:2012} we have investigated cosmological scenarios with the nonminimal derivative coupling $\kappa G_{\mu\nu}\phi^{,\mu}\phi^{,\nu}$, focusing on models with zero and constant potentials. Depending on the parameter choices, we have obtained a variety of behaviors including a Big Bang, an expanding universe with no beginning, a cosmological turnaround, an eternally contracting universe, a Big Crunch, and a cosmological bounce \cite{SarSus:2010}. However, the most interesting and important feature we have found is that the non-minimal derivative coupling provides an essentially new inflationary mechanism and naturally describes transitions between various cosmological phases without any fine-tuned potential. The inflation is driven by terms in the field equations responsible for the non-minimal derivative coupling. At early times these terms dominate, and the cosmological evolution has the quasi-de Sitter character $a(t)\propto e^{H_\kappa t}$ with $H_\kappa=1/\sqrt{9\kappa}$, where $\kappa$ is a coupling parameter with dimension of ({\em length})$^{2}$. Note that the estimations give $\kappa\simeq 10^{-74}$ sec$^2$ \cite{Sus:2012}. Later, in the course of the cosmological evolution the domination of the $\kappa$-terms ends, the usual matter comes into play, and the Universe enters into the matter-dominated epoch. The scalar potential plays a very important and, frequently, crucial role in scalar-tensor theories of gravity. Could the potential drastically modify cosmological scenarios with the non-minimal derivative coupling found in models with zero and/or constant potentials? In the present paper we study this problem for a power-law potential $V(\phi)=V_0\phi^N$. \section{Action and field equations} Let us consider the theory of gravity with the action \begin{equation}\label{action} S=\int d^4x\sqrt{-g}\left\{ \frac{R}{8\pi} -\big[g^{\mu\nu} + \kappa G^{\mu\nu} \big] \phi_{,\mu}\phi_{,\nu} -2V(\phi)\right\}, \end{equation} where $V(\phi)$ is a scalar field potential, $g_{\mu\nu}$ is a metric, $R$ is the scalar curvature, $G_{\mu\nu}$ is the Einstein tensor, and $\kappa$ is the coupling parameter with dimension of ({\em length})$^2$. In the spatially-flat Friedmann-Robertson-Walker cosmological model the action \Ref{action} yields the following field equations \cite{Sus:2012} \begin{subequations}\label{genfieldeq} \begin{eqnarray} \label{00cmpt} &&3H^2=4\pi\dot{\phi}^2\left(1-9\kappa H^2\right) +8\pi V(\phi),\\ &&\displaystyle 2\dot{H}+3H^2=-4\pi\dot{\phi}^2 \left[1+\kappa\left(2\dot{H}+3H^2 +4H\ddot{\phi}\dot{\phi}^{-1}\right)\right] \nonumber\\ \label{11cmpt} && \ \ \ \ \ \ +8\pi V(\phi),\\ \label{eqmocosm} &&(\ddot\phi+3H\dot\phi)-3\kappa(H^2\ddot\phi +2H\dot{H}\dot\phi+3H^3\dot\phi)=-V_\phi, \end{eqnarray} \end{subequations} where a dot denotes derivatives with respect to time, $H(t)=\dot a(t)/a(t)$ is the Hubble parameter, $a(t)$ is the scale factor, $\phi(t)$ is a homogeneous scalar field, and $V_\phi=dV/d\phi$. It is worth noticing that Eq. \Ref{eqmocosm} can be rewritten as follows \begin{equation}\label{eqmoint} \big[a^3(1-3\kappa H^2)\dot\phi\big]\!\dot{\phantom{\phi}}=-a^3V_\phi. \end{equation} In the case $V(\phi)\equiv const$, when $V_\phi=0$, Eq.
\Ref{eqmoint} can be easily integrated: \begin{equation}\label{intphi} \dot\phi=\frac{C}{a^3(1-3\kappa H^2)}, \end{equation} where $C$ is a constant of integration. Note that equations \Ref{11cmpt} and \Ref{eqmocosm} are of second order, while \Ref{00cmpt} is a first-order differential constraint for $a(t)$ and $\phi(t)$. The constraint (\ref{00cmpt}) can be rewritten as: \begin{equation}\label{constrphigen} \dot\phi^2=\frac{3H^2-8\pi V(\phi)}{4\pi(1-9\kappa H^2)}, \end{equation} or equivalently as \begin{equation}\label{constralphagen} H^2=\frac{4\pi\dot\phi^2+8\pi V(\phi)}{3(1+12\pi\kappa\dot\phi^2)}. \end{equation} Therefore, as long as the parameter $\kappa$ and the potential $V(\phi)$ are given, the above relations provide restrictions on the possible values of $H$ and $\dot\phi$, since they have to give rise to non-negative $\dot\phi^2$ and $H^2$, respectively. Assuming the non-negativity of the potential, i.e. $V(\phi)\ge0$, we can conclude from Eqs. \Ref{constrphigen} and \Ref{constralphagen} that in the theory with positive $\kappa$ the possible values of $\dot\phi$ are unbounded, while $H$ takes restricted values. Vice versa, negative $\kappa$ leads to bounded $\dot\phi$ and unbounded $H$. Hereafter we will suppose that $\kappa>0$. \section{Dynamical system} In order to find asymptotic regimes of the system \Ref{genfieldeq} we introduce the following set of dimensionless variables \begin{eqnarray} && x=\frac{8\pi\dot\phi^2}{6H^2(1+8\pi\kappa\dot\phi^2)},\quad y=-\frac{8\pi\kappa\dot\phi^2}{2(1+8\pi\kappa\dot\phi^2)}, \nonumber\\ && z=\frac{8\pi V}{3H^2(1+8\pi\kappa\dot\phi^2)},\quad v=\frac{\dot \phi}{\phi H}. \label{def:xyzv} \end{eqnarray} Generally, $x$ characterizes the kinetic energy and $z$ the potential energy of the scalar field, while $y$ is connected with the nonminimal kinetic coupling. Correspondingly, $z= 0$ if $V=0$, and $y=0$ if $\kappa=0$. Using the new variables, we can rewrite Eq. \Ref{00cmpt} as follows \begin{equation} x+y+z=1. \end{equation} The latter is a constraint for the values of $x$, $y$, and $z$. Using this constraint, we can exclude $y$ from subsequent relations. Differentiating Eqs. \Ref{00cmpt}, \Ref{eqmocosm}, and $v=\frac{\dot \phi}{\phi H}$, we obtain \begin{eqnarray}\label{xzv} x'& = &2x\left[X(3-2x-2z)-Y\right],\\ z'& = &z\left[\beta v-2Y+4X(1-x-z)\right],\\ v'& = &v\left[X-Y-v\right], \end{eqnarray} where the prime means a derivative with respect to $\ln a$,\footnote{One has the following relation: $\frac{d}{dt}=H\frac{d}{d(\ln a)}$.} and the following notations are used: \begin{equation}\label{def:betaXY} \beta=\frac{\phi V_\phi}{V}, \quad X=\frac{\ddot\phi}{\dot\phi H},\quad Y=\frac{\dot H}{H^2}. \end{equation} The dimensionless parameter $\beta$ depends on the specific form of $V(\phi)$. Hereafter we will discuss the power-law potential \begin{equation} V(\phi)=V_0\phi^N. \end{equation} In this case we have $\beta=N={\rm const}$. To express $X$ and $Y$ via $x$, $z$, $v$, and $N$, we differentiate Eq. \Ref{00cmpt} and divide the obtained relation by $\frac{3}{4\pi}H^3(1+8\pi\kappa\dot\phi^2)$. After some algebra we obtain \begin{equation} 2X(3-2x-3z)-2Y(x+z)+N v z=0. \end{equation} Then, dividing Eq. \Ref{eqmocosm} by $\frac{3}{4\pi\dot\phi^2}H^3(1+8\pi\kappa\dot\phi^2)$, we can find \begin{equation} X(1-z)+2Y(1-x-z)+3(1-z)+\frac12N v z=0.
\end{equation} Resolving this system with respect to $X$ and $Y$ yields \begin{eqnarray} X & = & \frac{1}{\Delta}\left[\frac12 N v z(x+z-2)-3(1-z)(x+z)\right],\\ Y & = & \frac{1}{\Delta}\bigg[N v z(x+z-1)+3(1-z)(2x+3z-3)\bigg], \label{Y} \end{eqnarray} where $\Delta=-9x(1-z)-11z+5z^2+4x^2+6$. Substituting these relations into \Ref{xzv} we finally obtain the following dynamical system: \begin{widetext} \begin{subequations}\label{dynsys} \begin{eqnarray} x'& = &\frac{2x}{\Delta}\left[(\textstyle\frac12 N v z(x+z-2)-3(1-z)(x+z))(3-2x-2z)- N v z(x+z-1)-3(1-z)(2x+3z-3)\right],\\ z'& = &\frac{z}{\Delta}\left[N v \Delta-2N v z(x+z-1)-6(1-z)(2x+3z-3)+2(N v z(x+z-2)-6(1-z)(x+z))(1-x-z)\right],\\ v'& = &\frac{v}{\Delta}\left[\frac12 N v z(x+z-2)-3(1-z)(x+z)-N v z(x+z-1)-3(1-z)(2x+3z-3)-v\Delta\right]. \end{eqnarray} \end{subequations} \end{widetext} It is worth noting that the equations of the system \Ref{dynsys} are not independent, because there exists the following dependence between the variables $x$, $z$, and $v$: \begin{equation}\label{constr_xzv} z v^N(1-x-z)=-6^N(8\pi)^{\frac{2-N}{2}} V_0\kappa x^{\frac{N+2}{2}}(2x+2z-3)^{\frac{2-N}{2}}. \end{equation} Since the above relation is rather complicated, in practice we solve the system \Ref{dynsys} straightforwardly and then exclude surplus solutions. \subsection{Stationary points, stability analysis, and asymptotics} In this section we study the stationary points of the dynamical system \Ref{dynsys} and perform a stability analysis of these points. To find a stationary point $(x_0,z_0,v_0)$, we set $x'=z'=v'=0$ in Eqs. \Ref{dynsys} and solve the resulting algebraic equations. Then, we investigate its stability with respect to small perturbations $\delta x$, $\delta z$, and $\delta v$ around $(x_0,z_0,v_0)$. Explicitly, we substitute \begin{equation} x=x_0+\delta x,\quad z=z_0+\delta z,\quad v=v_0+\delta v \end{equation} into Eqs. \Ref{dynsys} and keep terms up to the first order in $\delta x$, $\delta z$, $\delta v$. This leads to a system of first-order differential equations \begin{equation} \frac{d}{d(\ln a)}\left( \begin{array}{c} \delta x\\ \delta z\\ \delta v \end{array} \right) ={\cal M} \left( \begin{array}{c} \delta x\\ \delta z\\ \delta v \end{array} \right), \end{equation} where $\cal M$ is a $3\times 3$ matrix which depends on $(x_0,z_0,v_0)$. The stability of the stationary point $(x_0,z_0,v_0)$ is determined by the corresponding eigenvalues $(\lambda_1,\lambda_2,\lambda_3)$ of $\cal M$. In particular, if the real parts of all eigenvalues are negative, the point is stable (a local sink); if all real parts are positive, the point is unstable, being stable when integrating in the opposite time direction (a local source); if there are eigenvalues with real parts of different signs, the point is a saddle. In Table \ref{tab01} we enumerate all stationary points of the dynamical system \Ref{dynsys}, briefly characterize their stability, and give asymptotics for $a(t)$ and $\phi(t)$. It is necessary to stress that we only consider those points which satisfy the additional constraint \Ref{constr_xzv}. Below, let us discuss the stationary points in more detail. \begin{table*} \caption{\label{tab01} Stationary points of the dynamical system \Ref{dynsys}.} \begin{tabular}{cclclcl} \hline\hline \textbf{No} &\quad & \textbf{Stationary point} &\quad & \textbf{Stability} &\quad & \textbf{Conditions of existence}\\ \hline 1. &\quad & $x=0$, $y=1$, $z=0$, $v=0$ &\quad & Unstable node &\quad & $\forall N$, $\kappa<0$, $t\rightarrow t_0$\\ 2.
&\quad & $x=\frac12$, $y=-\frac12$, $z=1$, $v=0$ &\quad & Complex type &\quad & $0<N<2$, $\kappa>0$, $t\rightarrow\infty$\\ 3. &\quad & $x=1$, $y=0$, $z=0$, $v=0$ &\quad & Saddle point &\quad & $V(\phi)\equiv 0$, $\forall\kappa$, $t\rightarrow \infty$\\ 4. &\quad & $x=0$, $y=-\frac12$, $z=\frac32$, $\textstyle v=\frac{12}{3N+2}$ &\quad & Stable node &\quad & $N>2$, $\forall\kappa$, $t\rightarrow t_0$\\ 5. &\quad & $x=\frac32$, $y=-\frac12$, $z=0$, $v=-3$ &\quad & Unstable node &\quad & $0<N<2$, $\kappa>0$, $t\rightarrow-\infty$\\ \hline\hline \end{tabular} \end{table*} \subsubsection{The stationary point $x=0$, $y=1$, $z=0$, $v=0$.} In this case the eigenvalues read $$\textstyle \lambda_1=3,\ \lambda_2=3,\ \lambda_3=\frac32. $$ Since all eigenvalues are positive, this point represents an unstable node for any $\kappa$, $V_0$, and $N$. Substituting $x=0$, $y=1$, $z=0$, and $v=0$ into Eq. \Ref{Y}, we find that $Y=-\frac32$ at the stationary point. Then, using the definition $Y=\dot H/H^2$, we can obtain an asymptotical form of $a(t)$: \begin{equation}\label{as-a-1} a(t)=a_0(t-t_0)^{2/3}. \end{equation} An asymptotic for $\phi(t)$ can be found from the relation $y=-\frac{8\pi\kappa\dot\phi^2}{2(1+8\pi\kappa\dot\phi^2)}$ (see Eq. \Ref{def:xyzv}); putting $y=1$ into the latter yields \begin{equation}\label{as-p-1} \phi(t)=\phi_0+\phi_1(t-t_0), \end{equation} where $\phi_1^2=-\frac{1}{12\pi\kappa}$ and $\kappa<0$. Additionally, one can substitute the asymptotics \Ref{as-a-1} and \Ref{as-p-1} into Eqs. \Ref{def:xyzv} and check that $x\to0$, $y\to1$, $z\to0$, and $v\to0$ as $t\to t_0$, where $t_0$ is an initial moment of time. Note that the same asymptotic was also obtained in the model with $V(\phi)\equiv 0$ \cite{Sus:2009}. \subsubsection{The stationary point $x=\frac12$, $y=-\frac12$, $z=1$, $v=0$.} In this case the eigenvalues are $$\textstyle \lambda_1=0,\ \lambda_2=0,\ \lambda_3=-3. $$ Since two of these eigenvalues are equal to zero, one needs an additional study to characterize the stability of the stationary point. In the next section we will discuss this problem using a numerical analysis. To find an asymptotic for $a(t)$, we take into account that $\frac{y}{x}=-1$ at the stationary point. By using the definitions \Ref{def:xyzv} for $x$ and $y$, we can obtain $H^2=\frac{1}{3\kappa}$, which is possible only if $\kappa>0$. Now, the asymptotic for $a(t)$ reads \begin{equation}\label{as-a-2} a(t)=a_0 e^{\frac{t}{\sqrt{3\kappa}}}. \end{equation} Analogously, to find an asymptotic for $\phi(t)$, we use the relation $\frac{z}{x}=2$. Substituting Eqs. \Ref{def:xyzv} into this relation and integrating, we can obtain \begin{equation}\label{as-p-2} \phi(t)=\phi_0 t^{\frac{2}{2-N}}, \end{equation} where $\phi_0=\left[\frac12 (2-N)\sqrt{V_0}\right]^{\frac{2}{2-N}}$. Additionally, substituting the asymptotics \Ref{as-a-2} and \Ref{as-p-2} into \Ref{def:xyzv}, one can check that $x\to \frac12$, $y\to -\frac12$, $z\to1$, and $v\to0$ in the limit $t\to\infty$ only if $0<N<2$. \subsubsection{The stationary point $x=1$, $y=0$, $z=0$, $v=0$.} In this case the eigenvalues are $$ {\lambda}_1=-6,\ {\lambda}_2=6, \ {\lambda}_3=0. $$ Since two of three eigenvalues have opposite signs, this stationary point is a saddle point for any $\kappa$, $V_0$, and $N$. From Eq. \Ref{Y} we find $Y=-3$. Then, from the definition \Ref{def:betaXY} we obtain an asymptotic for $a(t)$ as follows \begin{equation}\label{as-a-4} a(t)=a_0 t^{1/3}.
\end{equation} From the definitions \Ref{def:xyzv} we conclude that $\dot\phi^2\to0$ if $y\to 0$, and $8\pi\dot\phi^2/6H^2\to 1$ if $x\to 1$. Integrating the relation $8\pi\dot\phi^2/6H^2=1$ and using Eq. \Ref{as-a-4}, we can obtain an asymptotic for $\phi(t)$: \begin{equation}\label{as-p-4} \textstyle \phi(t)=\phi_0+\phi_1\ln t, \end{equation} \noindent with $\phi_1^2=\frac{1}{12\pi}$. Additionally, one should check that the relations \Ref{def:xyzv} provide the necessary limiting values. Substituting the asymptotics \Ref{as-a-4} and \Ref{as-p-4} into \Ref{def:xyzv}, we can see that $x\to 1$, $y\to 0$, and $v\to 0$ at $t\to\infty$. However, it is worth noting that the necessary limit $z=0$ is only fulfilled if $V_0=0$, i.e. $V(\phi)\equiv 0$. In this case we obtain the well-known solution for a minimally coupled (i.e. $y=0$ or, equivalently, $\kappa=0$) massless (i.e. $V=0$) scalar field \cite{Sus:2009}. \subsubsection{The stationary point $x=0$, $y=-\frac12$, $z=\frac32$, $v=\frac{12}{3N+2}$.} In this case the eigenvalues are $$ \lambda_1=-\frac{6(N-2)}{3N+2},\ \lambda_2=-\frac{6(N+2)}{3N+2}, \ \lambda_3=-6. $$ Note that $\lambda_2$ and $\lambda_3$ are negative, while the sign of $\lambda_1$ depends on $N$. Namely, (i) $\lambda_1<0$ if $N>2$, and hence the stationary point is an attractive node; (ii) $\lambda_1>0$ if $N<2$, and the stationary point is a saddle point; (iii) $\lambda_1=0$ if $N=2$, and one needs an additional study to characterize the stability of the stationary point. Assume that $N\not=2$. Now, using Eq. \Ref{Y}, we find $Y=\frac{3(N-2)}{3N+2}$. The corresponding asymptotic for $a(t)$ is as follows: \begin{equation}\label{as-a-6} a(t)=a_0(t-t_0)^{-\frac{3 N+2}{3(N-2)}}. \end{equation} In order to obtain the asymptotical behavior of $\phi(t)$, we use the definition $v=\frac{\dot\phi}{\phi H}$. In our case we find $\frac{\dot\phi}{\phi H}=\frac{12}{3N+2}$. Then, integrating gives \begin{equation} \phi=C a^{\frac{12}{3N+2}}, \end{equation} where $C$ is a constant of integration. Substituting Eq. \Ref{as-a-6} into the latter relation yields \begin{equation}\label{as-p-6} \phi(t)=\phi_0(t-t_0)^{-\frac{4}{N-2}}. \end{equation} Additionally, substituting the asymptotics \Ref{as-a-6} and \Ref{as-p-6} into \Ref{def:xyzv}, one can check that $x\to 0$, $y\to -\frac12$, $z\to\frac32$, and $v\to\frac{12}{3N+2}$ in the limit $t\to t_0$ only if $N>2$. It is clear that for $N>2$ this point represents a Big Rip asymptotic. \subsubsection{The stationary point $x=\frac{3}{2}$, $y=-\frac{1}{2}$, $z=0$, $v=-3$.} In this case the eigenvalues are $$ \lambda_1=3(2-N), \ \lambda_2=6, \ \lambda_3=3. $$ Since two of three eigenvalues are positive, this point is unstable for any $\kappa$ and $V_0$. Namely, it is a saddle if $N>2$, or an unstable node if $N<2$. Using Eq. \Ref{Y}, we calculate $Y=0$, and hence the relation $Y=\frac{\dot H}{H^2}$ yields $H={\rm const}$. Now, using the other relation $\frac{y}{x}=-3\kappa H^2$, we obtain $H^2=\frac{1}{9\kappa}$, which is possible only if $\kappa>0$. The resulting asymptotic for $a(t)$ is \begin{equation}\label{as5-a} a(t)=a_0 e^{\frac{t}{\sqrt{9\kappa}}}. \end{equation} Substituting $v=-3$ and $H=\frac{1}{\sqrt{9\kappa}}$ into the relation $v=\frac{\dot{\phi}}{\phi H}$, we can obtain the asymptotic for $\phi(t)$: \begin{equation}\label{as5-phi} \phi(t)=\phi_0 e^{-\frac{t}{\sqrt{\kappa}}}. \end{equation} Taking into account the definition \Ref{def:xyzv}, we can see that $\dot\phi^2\to\infty$ if $y\to-\frac12$.
Hence the asymptotics \Ref{as5-a} and \Ref{as5-phi} are realized at $t\to-\infty$. Additionally, let us consider the asymptotical behavior of $z$. For the power-law potential the definition \Ref{def:xyzv} gives $ z=\frac{8\pi V_0\phi^N}{3H^2(1+8\pi\kappa\dot\phi^2)}. $ Substituting the asymptotics \Ref{as5-a} and \Ref{as5-phi} into the latter relation, we can see that $z\rightarrow 0$ at $t\to-\infty$ only if $0<N<2$. Note that the same asymptotic has also been obtained in the model with $V(\phi)\equiv 0$ \cite{Sus:2009}. \section{Examples of cosmological scenarios} In this section we examine some specific cosmological scenarios corresponding to particular choices of the potential. Since we are mostly interested in inflation driven by the nonminimal kinetic coupling, hereafter we will assume $\kappa>0$. First, let us separate the equations for $\phi$ and $H$. To this end, we resolve Eqs. \Ref{11cmpt} and \Ref{eqmocosm} with respect to $\dot H$ and $\ddot\phi$; then, using the constraints \Ref{constrphigen} and \Ref{constralphagen}, we can eliminate $H$ and $\dot\phi$ from the respective equations and find \begin{widetext} \begin{equation}\label{phi2gen} \ddot\phi=\frac{-2\sqrt{3\pi}\dot\phi [1+8\pi\kappa\dot\phi^2-8\pi\kappa V(\phi)] \sqrt{[\dot{\phi}^2+2V(\phi)](12\pi\kappa\dot\phi^2+1)} -(12\pi\kappa\dot\phi^2+1)(4\pi\kappa\dot\phi^2+1)V_\phi} {1+12\pi\kappa\dot\phi^2+96\pi^2\kappa^2\dot\phi^4 +8\pi\kappa V(\phi)(12\pi\kappa\dot\phi^2-1)}, \end{equation} \begin{equation}\label{a2gen} \dot H=\frac{-(1-3\kappa H^2)(1-9\kappa H^2)[ 3H^2-8\pi V(\phi)]+ 4\sqrt{\pi}\kappa H\sqrt{(1-9\kappa H^2)[3H^2-8\pi V(\phi)]}\,V_\phi} {1-9\kappa H^2+54\kappa^2H^4-8\pi\kappa V(\phi)(1+9\kappa H^2)}. \end{equation} \end{widetext} We mention, however, that although the $\phi$-equation does not contain $H$-terms, the $H$-equation in general contains $\phi$-terms arising from the potential $V(\phi)$. For this reason, in practice we will construct a solution $H(t)$ by substituting $\phi$, found as a numerical solution of Eq. \Ref{phi2gen}, into \Ref{constralphagen}. \subsection{Oscillatory asymptotic} Asymptotical properties of Eqs. \Ref{phi2gen}, \Ref{a2gen} depend on the asymptotical values of the derivative $\dot\phi$ and the scalar potential $V(\phi)$. First, let us suppose that the corresponding asymptotical values are sufficiently small, so that $$ 8\pi\kappa\dot\phi^2\ll 1, \quad 8\pi\kappa V(\phi)\ll 1. $$ Neglecting the corresponding terms, we find that Eq. \Ref{phi2gen} takes the following approximate form: \begin{equation}\label{aseqVgen} \ddot\phi=-2\sqrt{3\pi}\dot\phi \sqrt{\dot{\phi}^2+2V(\phi)}-V_\phi. \end{equation} It is worth noting that this equation does not contain $\kappa$ and has the same form as in the theory of the usual minimally coupled scalar field. It has well-known asymptotics, which are represented by damped oscillations. In the particular case of the quadratic potential $V(\phi)=V_0\phi^2$ one has \cite{Star} \begin{equation}\label{damposcilphi} \phi_{t\to\infty}\approx \frac{\sin m t}{\sqrt{3\pi}\,m t}, \end{equation} and \begin{equation}\label{damposcilH} H_{t\to\infty}\approx H_{MD}(t)\,\left[1-\frac{\sin 2mt}{2mt}\right], \end{equation} where $m=\sqrt{2V_0}$ is the scalar mass and $H_{MD}(t)=2/(3t)$ is the Hubble parameter in the matter-dominated Universe filled with nonrelativistic matter with $p\ll\rho$.
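The damped-oscillation regime is easy to illustrate numerically. The following minimal sketch, with illustrative values of $V_0$ and of the initial data (not taken from any fit in this paper), integrates the approximate equation \Ref{aseqVgen} for the quadratic potential and can be compared with the envelope of Eq. \Ref{damposcilphi}:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

V0 = 0.1                       # illustrative potential amplitude

def rhs(t, y):
    # approximate late-time equation (aseqVgen) for V = V0*phi^2
    phi, dphi = y
    V = V0 * phi**2
    Vphi = 2.0 * V0 * phi
    ddphi = (-2.0 * np.sqrt(3.0 * np.pi) * dphi
             * np.sqrt(dphi**2 + 2.0 * V) - Vphi)
    return [dphi, ddphi]

# illustrative initial data: phi(0) = 1, dphi(0) = 0
sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.0],
                dense_output=True, rtol=1e-8, atol=1e-10)

m = np.sqrt(2.0 * V0)          # scalar mass
t = np.linspace(50.0, 200.0, 300)
phi = sol.sol(t)[0]
# if the late-time attractor (damposcilphi) has been reached, phi
# oscillates at frequency ~m inside a 1/t envelope, so this ratio
# should be of order unity (up to a phase offset):
print(np.max(np.abs(phi * np.sqrt(3.0 * np.pi) * m * t)))
\end{verbatim}
At late times the numerical solution tracks the $1/t$ envelope of Eq. \Ref{damposcilphi}, with the phase fixed by the initial data.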
\subsection{Exponential asymptotic} Now, let us assume that the scalar field has an exponential asymptotic: \begin{equation}\label{asformphi} \phi(t) \approx \phi_0 e^{\lambda t}, \end{equation} where $\lambda>0$ if $t\to\infty$, and $\lambda<0$ if $t\to-\infty$. In this case, asymptotically, $\phi\sim\dot\phi\sim\ddot\phi$. Since the asymptotic properties of $V(\phi)=V_0\phi^N$ depend on $N$, we will consider different cases separately. \vskip6pt $\mathbf{N<2}$.~In this case, asymptotically, $ V(\phi)=V_0\phi^N\ll\dot\phi^2,\ V_\phi=NV_0\phi^{N-1}\ll\dot\phi. $ Substituting the asymptotic \Ref{asformphi} into \Ref{phi2gen} and using these asymptotic relations, we find \begin{equation} \lambda=-\frac{1}{\sqrt{\kappa}}. \end{equation} Since $\lambda=-1/\sqrt{\kappa}<0$, the corresponding asymptotic \Ref{asformphi} is realized in the distant past, i.e. at $t\to-\infty$. Moreover, the requirement that $\lambda$ be real yields $\kappa>0$. Now, from Eq. \Ref{asformphi} we find \begin{equation}\label{asphi1} \phi_{t\to-\infty}\sim e^{-{t}/{\sqrt{\kappa}}}. \end{equation} Then, using the constraint \Ref{constralphagen}, we can obtain the asymptotic for $H$: \begin{equation}\label{asH1} H_{t\to-\infty}\sim {1/\sqrt{9\kappa}}. \end{equation} This inflationary asymptotic corresponds to the stationary point $5$, which is a local source for phase trajectories. It should be emphasized that the standard inflationary regime for a minimally coupled scalar field does not share this property. Moreover, the exponential regime is highly improbable during the contraction phase of the Universe and requires special initial conditions \cite{Star}. On the contrary, in the theory with nonminimal kinetic coupling all trajectories in our numerical experiments (for full numerical results see below) have reached this regime in the far past. \vskip6pt $\mathbf{N=2}$.~The point $5$ does not exist for the quadratic potential, so we provide a separate analysis for this physically important case. We now have $V(\phi)=V_0\phi^2\sim\dot\phi^2,\ V_\phi=2V_0\phi\sim\dot\phi$. Using the asymptotic \Ref{asformphi}, we can find from Eqs. \Ref{phi2gen}, \Ref{constralphagen} the following asymptotic solutions: \begin{equation}\label{asphi2} \phi_{t\to\pm\infty}\sim \exp\left[-\frac{t}{\sqrt{\kappa}}\,\frac{1-\mu}{\sqrt{1+2\mu}}\right], \end{equation} \begin{equation}\label{asH2} H_{t\to\pm\infty}\sim \sqrt{\frac{1+2\mu}{9\kappa}}, \end{equation} where $\kappa>0$ and $\mu$ is an auxiliary parameter, which can be found as a solution of the following equation: \begin{equation}\label{cubicmu} \kappa V_0=\frac{\mu(1-\mu)^2}{(1+2\mu)}. \end{equation} The latter is a cubic equation with respect to $\mu$. Generally, it has three roots $-0.5<\mu_1<0.25$, $0.25<\mu_2<1$, and $\mu_3>1$, provided $\kappa V_0<3/32$. For $\kappa V_0=3/32$ two roots coincide, so that $\mu_1=\mu_2=0.25$. For $\kappa V_0>3/32$ only the root $\mu_3>1$ remains. Supposing that $\kappa V_0\ll 1$, we can easily obtain the approximate solution of Eq. \Ref{cubicmu}: \begin{equation} \mu_1\approx \kappa V_0,\quad \mu_2\approx 1-\sqrt{3\kappa V_0},\quad \mu_3\approx 1+\sqrt{3\kappa V_0}. \end{equation}
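Rearranging Eq. \Ref{cubicmu} into the polynomial form $\mu^3-2\mu^2+(1-2\kappa V_0)\mu-\kappa V_0=0$, the exact roots are easily compared with these approximations (a sketch with an illustrative value of $\kappa V_0$):
\begin{verbatim}
import numpy as np

kV0 = 0.01                       # illustrative kappa*V_0, below 3/32
# Eq. (cubicmu) rearranged: mu^3 - 2 mu^2 + (1 - 2 kV0) mu - kV0 = 0
roots = np.roots([1.0, -2.0, 1.0 - 2.0 * kV0, -kV0])
mu = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
print(mu)                                              # mu_1 < mu_2 < mu_3
print(kV0, 1 - np.sqrt(3 * kV0), 1 + np.sqrt(3 * kV0)) # approximations
\end{verbatim}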
Correspondingly, Eqs. \Ref{asphi2} and \Ref{asH2} lead to the following three asymptotics: \begin{eqnarray} {\rm A1.} && \phi_{t\to-\infty}\sim \exp\left[-\frac{t}{\sqrt{\kappa}}(1-2\kappa V_0)\right], \nonumber\\ && H_{t\to-\infty}\sim \frac{1}{\sqrt{9\kappa}}(1+\kappa V_0),\\ {\rm A2.} && \phi_{t\to\infty}\sim \exp\left[t\sqrt{3 V_0}\right], \nonumber\\ && H_{t\to\infty}\sim \frac{1}{\sqrt{3\kappa}}\left(1+\sqrt{\frac{\kappa V_0}{3}}\right),\\ {\rm A3.} && \phi_{t\to-\infty}\sim \exp\left[-t\sqrt{3 V_0}\right], \nonumber\\ && H_{t\to-\infty}\sim \frac{1}{\sqrt{3\kappa}}\left(1-\sqrt{\frac{\kappa V_0}{3}}\right). \end{eqnarray} Note that the asymptotics A1 and A3 are realized in the distant past $t\to-\infty$, while the asymptotic A2 is realized in the future $t\to\infty$. \vskip6pt $\mathbf{N>2}$.~In this case $V(\phi)=V_0\phi^N\gg\dot\phi^2,\ V_\phi=NV_0\phi^{N-1}\gg\dot\phi$, and one can check straightforwardly that $\phi(t)\sim e^{\lambda t}$ cannot be an asymptotic of Eq. \Ref{phi2gen} at $t\to\pm\infty$. \subsection{Cosmological model with $V(\phi)=V_0|\phi|^{3/2}$} Let us consider a specific choice of the scalar potential. We start with the $N<2$ case; namely, we take $N=\frac32$, so that \begin{equation} V(\phi)=V_0|\phi|^{3/2}. \end{equation} In order to present the cosmological scenario in this case more transparently, we perform a numerical study of the model given by Eqs. \Ref{phi2gen} and \Ref{constralphagen}. The numerical results are presented in Figs. \ref{figP-N32} and \ref{figH-N32}. \begin{figure}[ht] \begin{center} \parbox{7.5cm}{\includegraphics[width=7.5cm]{fig1a.jpg}\\~~~~~~~~(a)}\\% \parbox{7.5cm}{\includegraphics[width=7.5cm]{fig1b.jpg}\\~~~~~~~~(b)}% \end{center}% \caption{\label{figP-N32} Phase diagrams for the scalar field $\phi(t)$ are presented for the coupling parameter $\kappa=0.1$ and the potential $V(\phi)=V_0|\phi|^{3/2}$ with $V_0=0.1$. The solutions are constructed for initial conditions $\phi(0)=\dot\phi(0)=\{0.5,1,1.5,2.3,3.5,5\}$ [plot (a)], and $\phi(0)=-\dot\phi(0)=\{0.1, 1, 2.5, 5, 7.5\}$ [plot (b)]. Phase trajectories in the vicinity of zero are shown separately in small inset plots.} \end{figure} \begin{figure}[ht] \begin{center} \parbox{7.5cm}{\includegraphics[width=7.5cm]{fig2a.jpg}\\~~~~~~~~(a)}\\% \parbox{7.5cm}{\includegraphics[width=7.5cm]{fig2b.jpg}\\~~~~~~~~(b)}% \end{center}% \caption{\label{figH-N32} Graphs of $H(t)$ are presented for the coupling parameter $\kappa=0.1$ and the potential $V(\phi)=V_0|\phi|^{3/2}$ with $V_0=0.1$. The solutions are constructed for initial conditions $\phi(0)=\dot\phi(0)=\{0.04, 0.5, 1, 1.5, 3.5\}$ [plot (a)], and $\phi(0)=-\dot\phi(0)=\{0.05, 1, 2.5, 10, 100, 500, 5000\}$ [plot (b)]. The lower and upper dotted lines show the asymptotics $1/\sqrt{9\kappa}$ and $1/\sqrt{3\kappa}$, respectively.} \end{figure} As was shown in the previous section, a typical trajectory starts with the inflationary regime \Ref{asphi1}-\Ref{asH1}. The numerical analysis shows that there are two possibilities for the final fate of a trajectory: either it reaches the near-Einstein regime with small, damped oscillations of the scalar field, or the evolution ends in the second inflation described by Eqs. \Ref{as-a-2}-\Ref{as-p-2}. The latter leads to eternal inflation, so it suffers from the graceful exit problem. Our numerical data show that for both $\kappa$ and $V_0$ below unity the oscillatory regime dominates the future evolution, avoiding any difficulties with the graceful exit. In Fig.
\ref{figP-N32} a family of trajectories is plotted for $\kappa=0.1$, $V_0=0.1$. Most trajectories end with scalar field oscillations. Interestingly, trajectories with large enough initial values of $\dot\phi$, plotted in Fig. \ref{figH-N32}b, do not fall into oscillations directly, but pass through a transient second inflationary phase. This property once more indicates the existence of rather complicated dynamics in the vicinity of the point $2$. This point acts first as an attractor, and then as a repeller, giving the desired exit from inflation. Such dynamical behavior is similar to that of standard inflation \cite{Grishchuk} (for a recent development see, for example, \cite{Arefeva}), without, of course, any preceding inflationary phase. A qualitatively different cosmological behavior is represented by the other set of initial conditions, plotted in Fig. \ref{figH-N32}a. Here we can see a trajectory which transits from the initial inflationary regime into the secondary one, which never ends. We come to the conclusion that for some initial data (trajectory 6 corresponds to large enough initial values of the scalar field and its time derivative) the point $2$ can be stable. A complete description of the point $2$ requires future work; here we only indicate that for reasonably small values of $\kappa$, all trajectories with moderate initial values of $\dot\phi$ avoid an eternal secondary inflation. We should also stress an important difference between standard inflation and the inflation described by the point $5$ of the present model. In standard inflation, initial conditions are almost completely erased. In the inflation under consideration, the behavior of the scalar field differs from the usual slow-roll regime, and the value of $\dot\phi$ at the end of inflation depends on the initial conditions. This leads to different fates of trajectories after inflation, as can be clearly seen in Fig. \ref{figH-N32}: there are trajectories falling into the oscillatory regime directly after the first inflation, trajectories reaching oscillations after a transient second inflation, and trajectories never leaving the second inflation. \subsection{Cosmological model with $V(\phi)=V_0\phi^{2}$} We remind the reader that in this case there are, in general, one or three exponential asymptotics, depending on the value of the product $\kappa V_0$. In the latter case, in all our numerical simulations the asymptotic (A1) is a local source. After this inflation ends, the trajectory can go either to the asymptotic (A2) or to the oscillatory regime. Our numerical results show that the asymptotic (A2) appears to be stable, and the only possible way to reach a ``graceful'' exit is to avoid it. On the other hand, the third asymptotic (A3) can provide a transient inflationary phase for appropriate initial conditions (see Figs. \ref{figP-N2} and \ref{figH-N2}). Note that the case of a single root is not favorable for the inflationary scenario, because the single asymptotic appears to be stable and does not allow an exit from inflation. \begin{figure}[ht] \begin{center} \parbox{7.5cm}{\includegraphics[width=7.5cm]{fig3a.jpg}\\~~~~~~~(a)}\\% \parbox{7.5cm}{\includegraphics[width=7.5cm]{fig3b.jpg}\\~~~~~~~(b)} \end{center}% \caption{\label{figP-N2} Phase diagrams for the scalar field $\phi(t)$ are presented for the coupling parameter $\kappa=0.1$ and the potential $V(\phi)=V_0\phi^{2}$ with $V_0=0.1$.
The solutions are constructed for initial conditions $\phi(0)=\dot\phi(0)=\{0.5,1,1.5,2.3,3.5,5\}$ [plot (a)], and $\phi(0)=-\dot\phi(0)=\{0.1, 1, 2.5, 5, 7.5\}$ [plot (b)]. Phase trajectories in the vicinity of zero are shown separately in small inset plots.} \end{figure} \begin{figure}[ht] \begin{center} \parbox{7.5cm}{\includegraphics[width=7.5cm]{fig4a.jpg}\\~~~~~~~(a)}\\% \parbox{7.5cm}{\includegraphics[width=7.5cm]{fig4b.jpg}\\~~~~~~~(b)}% \end{center}% \caption{\label{figH-N2} Graphs of $H(t)$ are presented for the coupling parameter $\kappa=0.1$ and the potential $V(\phi)=V_0\phi^{2}$ with $V_0=0.1$. The solutions are constructed for initial conditions $\phi(0)=\dot\phi(0)=\{0.04, 0.5, 1, 1.5, 3.5\}$ [plot (a)], and $\phi(0)=-\dot\phi(0)=\{0.05, 1, 2.5, 10, 300, 30000, 3000000\}$ [plot (b)]. The dotted lines show the asymptotics $H_{t\to-\infty}\approx 1/\sqrt{9\kappa}(1+\kappa V_0)$ [lower line], $H_{t\to\infty}\approx 1/\sqrt{3\kappa}\left(1-\sqrt{{\kappa V_0}/{3}}\right)$ [middle line], and $H_{t\to\infty}\approx 1/\sqrt{3\kappa}\left(1+\sqrt{{\kappa V_0}/{3}}\right)$ [upper line].} \end{figure} Since for $N > 2$ the early-time inflationary regime is absent, we do not consider this case further in the present paper. For such potentials the point $4$ is stable, so we can expect the dynamics to be dominated by phantom-like behavior. We leave the study of non-inflationary regimes in the model under consideration to future work. \section{Conclusions} We have considered the cosmological dynamics of the FRW Universe filled with a scalar field with kinetic coupling in the action \Ref{action}. One of the most intriguing features of this model, found earlier \cite{Sus:2009, SarSus:2010, Sus:2012}, is the existence of inflationary behavior at early times in the case of a zero or constant potential of the scalar field, i.e. solely due to the coupling. This regime exists only for a positive coupling constant $\kappa$. In the present paper we have studied the influence of a nonzero scalar field potential (for negative $\kappa$ the inflationary regime is absent for zero potential, and a nonzero potential leads to inflation qualitatively the same as in the case of a minimally coupled scalar field \cite{Tsujikawa}). We have found that for the quadratic potential, the most interesting case from the physical point of view, the inflationary regime exists for appropriate values of the scalar field mass and the coupling constant. As for other power-law potentials, using dynamical systems methods we have found two other stable asymptotic regimes. One regime leads to a Big Rip singularity and exists for potentials steeper than the quadratic one. In this case the inflationary regime does not exist, so steep potentials destroy the scenario of Ref. \cite{Sus:2012}. On the other hand, for potentials shallower than the quadratic one, the inflationary regime appears to be exactly the same as for a zero/constant potential. However, a new stable asymptotic regime appears, representing exponential expansion with a power-law increase of the scalar field. From the viewpoint of expansion dynamics it is an eternal inflation, so if the initial inflation ends by reaching this regime, an actual exit from inflation is absent. This is a danger for this model. Our numerical study shows, however, that for a wide range of parameters of the theory a trajectory which exits from the initial inflation typically does not reach the eternal secondary inflation regime, and the scalar field finally falls into oscillations.
In summary, the scenario of initial inflation driven by the nonminimal kinetic coupling survives for a wide range of parameters, provided the scalar potential is not steeper than the quadratic one. \section*{Acknowledgments} We are grateful to A.A.~Starobinsky for useful discussions. The work was supported in part by the Russian Foundation for Basic Research grants Nos. 11-02-01162 and 11-02-00643.
\section{\label{Introduction}Introduction} Liquid crystals represent an interesting opportunity to study a unique interplay between topology, anisotropy, and elasticity in materials. The entropy-driven local ordering of rod-like molecules accounts for anisotropic optical and transport properties even in homogeneous nematics. Furthermore, external fields or topological defects can distort the local ordering of the molecules, giving rise to several elastic modes \cite{deGennes75,selinger19}. The ability to quantitatively model these complex features of liquid crystals is imperative to address recent applications, including electrokinetics of colloidal particles or biological materials \cite{lazo14,peng15,peng18}, surface and texture generation and actuation in nematic surfaces \cite{most15,baba18}, systems of living nematics \cite{genkin17}, and stabilization of liquid shells \cite{hokmabad19}. Liquid crystals generally belong to one of two main classes: Thermotropics are short molecules that undergo ordering through changes in temperature, while lyotropics are more complex molecules or assemblies of molecules in solvent that order through changes in concentration. Thermotropics have been extensively studied, both theoretically and experimentally, due to their applications in displays \cite{deGennes75,yeh09}. However, because of their small characteristic length scale, the fine structure of defects and two-phase domains (commonly referred to as tactoids) is generally beyond the resolution of standard optical techniques. On the other hand, experimental studies of defect core structures and tactoids have recently been undertaken in so-called lyotropic chromonic liquid crystals. These materials are composed of disc-like molecules that stack to form rod-like structures \cite{collings10,collings15}. The characteristic length scales that determine the size of defects and the tactoid interfacial thickness in chromonics are thousands of times larger than those in thermotropics, and hence are readily observable with conventional optical techniques. Such experiments have revealed anisotropic geometries of the order parameter near the core of defects, and \lq\lq cusp-like'' features on the interface of tactoids \cite{kim13,zhou17}. To mathematically model a liquid crystal in its nematic phase, a unit vector \(\mathbf{n}\), the director, is typically defined to characterize the local orientation of the molecules. Because the molecules are apolar, any model involving \(\mathbf{n}\) must be symmetric with respect to \(\mathbf{n} \to -\mathbf{n}\). Distorted nematic configurations are described by three independent elastic modes: splay, twist, and bend. The energy cost of each mode is associated with one of the three elastic constants \(K_1\), \(K_2\), and \(K_3\) in the Oseen-Frank free energy \cite{selinger19,frank58}. Models and computations often assume that these constants are equal, though it has been shown for chromonics that the values of all three constants are widely different over the relevant range of temperatures and molecular concentrations \cite{zhou14}. Additionally, topological defects and tactoids lead to large distortions of the underlying order. To model defected configurations using the Oseen-Frank free energy, either a short-distance cutoff is introduced and the defect core is treated separately, or a new variable representing the degree of order of the molecules is added to the free energy \cite{leslie66,ericksen91}.
This new variable also has the effect of regularizing singularities at the core of defects. The method has recently allowed the study of tactoids within the coexistence region \cite{zhang18}. Resolving the degree of orientational order and the orientation poses several computational challenges, however. The director is undefined both at the core of defects and in the isotropic phase, and half-integer disclinations (the stable line defects in liquid crystals) cannot be adequately described computationally with a polar vector. Therefore, the model that is widely used to describe either disclinations or tactoids is the phenomenological Landau-de Gennes (LdG) free energy \cite{meiboom82,golovaty19,popanita97}. In the LdG framework, the order parameter is defined to be a traceless and symmetric tensor, \(\mathbf{Q}\), typically proportional to a macroscopic quantity, e.g. the magnetic susceptibility \cite{gramsbergen86,lubensky70}. The free energy is then assumed to be an analytic function in powers of \(\mathbf{Q}\). To model spatial inhomogeneity, an expansion in gradients of \(\mathbf{Q}\) is typically added to the free energy. Such an expansion in gradients can be mapped to the elastic modes in the director \(\mathbf{n}\) in the Oseen-Frank elastic energy \cite{selinger19}. The validity of the LdG free energy in regions of large variation of the order is not well understood, and it has been shown that the simplest LdG elastic expansions that capture differences in the Oseen-Frank constants result in unbounded free energies \cite{longa87,ball10}. Therefore, when working in the LdG framework, one must introduce more computationally complex assumptions to bound the free energy. In this work, we present an alternative field-theoretic model of a nematic liquid crystal that is based on a microscopic description and that allows for anisotropic elastic energy functionals that can capture the elasticity observed in chromonics. The model presented here is a computational implementation of the model introduced by Ball and Majumdar \cite{ball10}, which itself is a continuum extension of the well-known Maier-Saupe model for the nematic-isotropic phase transition \cite{maier59}. The Maier-Saupe model is a mean field molecular theory in which the orientation of the molecules of the liquid crystal is described by a probability distribution function, so that each molecule interacts only with the average of its neighbors. Below, we define \(\mathbf{Q}\) microscopically, based on a probability distribution that is allowed to vary spatially (as in the hypothesis of local equilibrium in nonequilibrium thermodynamics). Our ultimate goal is to develop a computationally viable implementation of the model for fully anisotropic systems. We present below the results of several proof-of-concept computations on various prototypical liquid crystal configurations, albeit in the one elastic constant approximation. All our results are compared with those from the LdG free energy for analogous configurations. In Section \ref{Model} we briefly summarize the model as put forth in Ref. \cite{ball10}, with minor adjustments to notation and interpretation. In Section \ref{compMeth} we present the computational implementation of the model and derive the equations that are solved numerically. We also briefly discuss the conventions used to compare to the LdG free energy. In Section \ref{results} we compare the free energy of the model presented here with that given by LdG and show that they are both non-convex.
We then present computational results from the model for a one-dimensional nematic-isotropic interface, a two-dimensional tactoid, and a two-dimensional disclination. All of these are compared to results given by LdG. Finally, in Section \ref{Conclusion} we summarize and discuss the computational model and results, and discuss future potential for the model. \section{\label{Model}Model} Following Ref. \cite{ball10}, we consider a tensor order parameter defined over a small volume at \(\mathbf{r}\) \begin{equation}\label{MicroQ} \mathbf{Q}(\mathbf{r}) = \int_{S^2} \big(\bm{\xi} \otimes \bm{\xi} - \frac{1}{3}\mathbf{I}\big) p(\bm{\xi};\mathbf{r}) \, d \bm{\xi} \end{equation} where \(\bm{\xi}\) is a unit vector in \(S^2\), \(\mathbf{I}\) is the identity tensor, and \(p(\bm{\xi};\mathbf{r})\) is the canonical probability distribution of molecular orientation in local equilibrium at some temperature $T$ at \(\mathbf{r}\). Due to the symmetry of the molecules, \(p(\bm{\xi};\mathbf{r})\) must have a vanishing first moment; hence, \(\mathbf{Q}\) is defined as the second moment of the orientational probability distribution. With this definition, the order parameter is symmetric, traceless, and, most importantly, has eigenvalues that are constrained to lie in the range \(-1/3 \leq q \leq 2 / 3\). The situations where \(q = -1/3\) or \(2/3\) represent perfect ordering of the molecules (i.e. the variance of the distribution goes to zero), and are therefore interpreted as unphysical. We note that Eq. \eqref{MicroQ} can be generalized to biaxial molecules, that is, molecules that are microscopically plate-like, by appropriately changing the domain of the probability distribution to three Euler angles, and considering the second moment of the extended probability distribution. Such a description may be useful in studying similar defects and domains for biaxial molecules, as in Ref. \cite{chiccoli19}. A mean field free energy functional of \(\mathbf{Q}(\mathbf{r})\) is defined by \begin{equation} \label{FreeE} F[\mathbf{Q}(\mathbf{r})] = H[\mathbf{Q}(\mathbf{r})] - T \Delta S \end{equation} where \(H\) is the energy of a configuration, and \(\Delta S\) its entropy relative to the uniform distribution. The energy is chosen to be \begin{equation} \label{H} H[\mathbf{Q}(\mathbf{r})] = \int_\Omega \Big(-\alpha\Tr[\mathbf{Q}^2] + f_e(\mathbf{Q},\nabla \mathbf{Q})\Big) \, d\mathbf{r} \end{equation} where \(\alpha\) is an interaction parameter, and \(f_e\) is an elastic energy. The term \(-\alpha \Tr[\mathbf{Q}^2]\) originates from the Maier-Saupe model, and incorporates an effective contact interaction that promotes alignment \cite{maier59,selinger16}. In the spatially homogeneous case \(f_e = 0\). The entropy is the usual Gibbs entropy \begin{equation} \label{deltaS} \Delta S = - n k_B \int_\Omega \bigg(\int_{S^2} p(\bm{\xi};\mathbf{r}) \ln\Big(4 \pi p(\bm{\xi};\mathbf{r})\Big)\, d\bm{\xi}\bigg) \,d\mathbf{r} \end{equation} where \(n\) is the number density of molecules. It should be noted that the outer integral is over the physical domain of the system, and the inner integral is over the unit sphere, the domain of the probability distribution. This model, with these definitions, is equivalent to the Maier-Saupe model in the spatially homogeneous case \cite{maier59}. We extend the Maier-Saupe treatment to spatially nonuniform configurations by minimization of Eq. \eqref{FreeE} subject to boundary conditions that lead to topological defects in the domain, or two-phase configurations at coexistence.
We then find configurations \(\mathbf{Q}(\mathbf{r})\) that are not uniform, and that minimize Eq. \eqref{FreeE} subject to the constraint \eqref{MicroQ}. \begin{figure} \includegraphics[width = \columnwidth]{probdist.eps} \caption{Examples of the probability distribution, \(p(\bm{\xi})\) of Eq. \eqref{prob}, on the sphere spanned by \(\bm{\xi}\) for (a) a uniaxial configuration and (b) a biaxial configuration. Note that the probability distribution involves a uniaxial molecule, but a biaxial order parameter can occur for a probability distribution with a biaxial second moment. Only northern hemispheres are displayed since the probability distribution is symmetric about the equator due to the symmetry of the molecules. For these plots, (a) \(\bm{\Lambda} = 4 \diag(-1,\,-1,\,0.5)\) and (b) \(\bm{\Lambda} = 10\diag(-0.25,\,-1,\,0.25)\).} \label{fig:probdist} \end{figure} The entropy, Eq. \eqref{deltaS}, can be maximized, subject to the constraint \eqref{MicroQ}, by introducing a tensor of Lagrange multipliers, \(\bm{\Lambda}(\mathbf{r})\), for each component of the constraint \cite{ball10,katriel86}. The probability distribution that maximizes the entropy is given by \begin{align} p(\bm{\xi};\mathbf{r}) &= \frac{\exp[\bm{\xi}^T \bm{\Lambda}(\mathbf{r})\bm{\xi}]}{Z[\bm{\Lambda}(\mathbf{r})]} \label{prob} \\ Z[\bm{\Lambda}(\mathbf{r})] &= \int_{S^2} \exp[\bm{\xi}^T \bm{\Lambda}(\mathbf{r})\bm{\xi}] \, d\bm{\xi} \label{Z} \end{align} where \(Z\) can be interpreted as a single particle partition function. Fig. \ref{fig:probdist} shows graphical examples of the probability distribution on the unit sphere. We mention that the single particle partition function can only be computed numerically, and hence the minimization procedure described next has to be carried out numerically in its entirety. The minimization of \(F\) in Eq. \eqref{FreeE} with \(p(\bm{\xi};\mathbf{r})\) given by Eqs. \eqref{prob} and \eqref{Z} is therefore reformulated in terms of two tensor fields on the domain, \(\mathbf{Q}(\mathbf{r})\) and \(\bm{\Lambda}(\mathbf{r})\) (from here on the dependence on \(\mathbf{r}\) will be dropped for brevity). \(\bm{\Lambda}\) acts as an effective interaction field which mediates interactions among molecules. Substituting Eq. \eqref{prob} into the constraint, Eq. \eqref{MicroQ}, leads to a relation between \(\mathbf{Q}\) and \(\bm{\Lambda}\): \begin{equation} \label{consist} \mathbf{Q} + \frac{1}{3} \mathbf{I} = \frac{\partial \ln Z[\bm{\Lambda}]}{\partial \bm{\Lambda}}. \end{equation} It has been shown that if the eigenvalues of \(\mathbf{Q}\) approach the endpoints of their physically admissible range, both \(\bm{\Lambda}\) and the free energy diverge. This feature is not present in the LdG theory, which can lead to nonphysical configurations for certain choices of the elastic energy, \(f_e\), in Eq. \eqref{H} \cite{ball10,bauman16}. The fields \(\mathbf{Q}\) and \(\bm{\Lambda}\) that minimize Eq. \eqref{FreeE} and satisfy Eq. \eqref{consist} are the equilibrium configuration for a given set of boundary conditions. In the next section we describe a computational implementation of the model presented here.
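As a quick numerical illustration of the map \eqref{consist} (a self-contained sketch, not part of the implementation described in the next section; the midpoint quadrature grid and the sample \(\bm{\Lambda}\) are illustrative choices), one can evaluate \(Z[\bm{\Lambda}]\) on the unit sphere and verify that the resulting eigenvalues of \(\mathbf{Q}\) indeed lie in the physical range \((-1/3,\,2/3)\):
\begin{verbatim}
import numpy as np

def Q_from_Lambda(Lam, n=400):
    """Evaluate Eq. (consist), Q + I/3 = d ln Z / d Lambda,
    by midpoint quadrature on S^2."""
    theta = (np.arange(n) + 0.5) * np.pi / n
    phi = (np.arange(2 * n) + 0.5) * np.pi / n
    th, ph = np.meshgrid(theta, phi, indexing='ij')
    xi = np.stack([np.sin(th) * np.cos(ph),
                   np.sin(th) * np.sin(ph),
                   np.cos(th)])                  # unit vectors on the sphere
    w = np.exp(np.einsum('iab,ij,jab->ab', xi, Lam, xi)) * np.sin(th)
    M = np.einsum('iab,jab,ab->ij', xi, xi, w) / w.sum()  # <xi xi>
    return M - np.eye(3) / 3

Q = Q_from_Lambda(np.diag([-1.0, -1.0, 2.0]))   # a sample uniaxial Lambda
print(np.linalg.eigvalsh(Q))                    # eigenvalues in (-1/3, 2/3)
\end{verbatim}
The quadrature weights \(\sin\theta\,\Delta\theta\,\Delta\phi\) drop out of the ratio, so the unnormalized Boltzmann factor suffices.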
\section{\label{compMeth}Computational Method} \subsection{Molecular Theory} To find the configuration \(\mathbf{Q}\) that minimizes the free energy of the molecular field theory we numerically solve the differential equations \(\delta F / \delta \mathbf{Q} = 0\). This, in principle, is a system of nine equations. However, since \(\mathbf{Q}\) is traceless and symmetric, there are only five degrees of freedom. The eigenvalues of \(\mathbf{Q}\) describe two degrees of freedom since \(\mathbf{Q}\) is traceless. The eigenvectors of \(\mathbf{Q}\) form an orthonormal frame (since \(\mathbf{Q}\) is symmetric) which accounts for the other three degrees of freedom: the first vector has two degrees of freedom since it is a unit vector, the second has one degree of freedom since it is a unit vector and must be orthogonal to the first, and the third is determined from the other two since it must be orthogonal to both. The eigenvalues are related to the amount of order in the system, while the eigenvector corresponding to the largest eigenvalue is the director, \(\mathbf{n}\). This is illustrated in Fig. \ref{fig:probdist}, which shows the probability distribution for molecules with a director along the z-axis. Fig. \ref{fig:probdist}a shows a uniaxial configuration in which two of the eigenvalues are degenerate, leading to arbitrary eigenvectors in the xy-plane. It is possible for the probability distribution to be of the form in Fig. \ref{fig:probdist}b, in which the director is still along the z-axis, but all three eigenvalues are distinct. In this case, we call the probability distribution biaxial since it leads to a second moment, \(\mathbf{Q}\), that is biaxial. It is known that biaxiality of the order parameter is important near defects and at interfaces in systems of uniaxial molecules as modeled by the LdG free energy \cite{pismen99,popanita97,mottram14}. Despite the uniaxial character of the molecules, Eq. \eqref{MicroQ}, the molecular theory detailed here can accommodate biaxial order. Local biaxial order will be parametrized as \begin{equation} \label{Qdef} \mathbf{Q} = S(\mathbf{n} \otimes \mathbf{n} - \frac{1}{3} \mathbf{I}) + P(\mathbf{m} \otimes \mathbf{m} - \bm{\ell} \otimes \bm{\ell}) \end{equation} where \(\{\mathbf{n},\mathbf{m},\bm{\ell}\}\) are an orthonormal triad of vectors. This representation explicitly includes the five degrees of freedom of \(\mathbf{Q}\), namely, three for the orthonormal set of vectors and two for the amplitudes \(S\) and \(P\). In addition to \(\mathbf{n}\) being the director, \(S\) represents the amount of uniaxial order, and \(P\) the amount of biaxial order. That is, \(S = (3/2)\, q_1\) and \(|P| = (1/2)\, (q_2 - q_3)\) where \(q_i\) are the eigenvalues of \(\mathbf{Q}\), and \(q_3 \leq q_2 \leq q_1\). Because we are primarily concerned with experiments in thin nematic films, we further reduce the degrees of freedom of \(\mathbf{Q}\) by only considering spatial variation in at most two dimensions. If we write \(\mathbf{n} = (\cos \phi, \,\sin \phi, \,0)\), \(\mathbf{m} = (-\sin \phi, \,\cos\phi,\,0)\), and \(\bm{\ell} = (0,\,0,\,1)\), where \(\phi\) is the angle the director makes with the x-axis, we need only one degree of freedom to describe the eigenframe of \(\mathbf{Q}\). We can then further simplify the computations by transforming to the auxiliary variables \cite{sen86} \begin{align} \eta &= S - \frac{3}{2}(S - P) \sin^2 \phi \nonumber \\ \mu &= P + \frac{1}{2}(S - P) \sin^2 \phi \label{aux} \\ \nu &= \frac{1}{2}(S - P) \sin 2\phi. \nonumber \end{align} This transformation is equivalent to expressing \(\mathbf{Q}\) in terms of a new basis for traceless, symmetric matrices.
While we do this for ease of computation, we can transform back to the original parametrization after calculating the eigenvalues and eigenvectors of \(\mathbf{Q}\). Although all of our calculations are conducted with the set \(\{\eta,\mu,\nu\}\), we will present our results in terms of the more physically intuitive \(S\), \(P\), and \(\phi\). The tensor order parameter in this representation is \begin{equation} \label{auxQ} \mathbf{Q} = \begin{bmatrix} \frac{2}{3}\eta & \nu & 0\\ \nu & -\frac{1}{3} \eta + \mu & 0 \\ 0 & 0 & -\frac{1}{3} \eta - \mu \end{bmatrix}. \end{equation} We can now substitute Eq. \eqref{auxQ} into Eq. \eqref{MicroQ} to write the constraint in terms of \(\eta\), \(\mu\), and \(\nu\). Following the procedure of Section \ref{Model}, we introduce three Lagrange multipliers \(\Lambda_1\), \(\Lambda_2\), and \(\Lambda_3\) corresponding to \(\eta\), \(\mu\), and \(\nu\), respectively, and a partition function \begin{equation} \label{auxZ} Z[\Lambda_1,\Lambda_2,\Lambda_3] = \int_{S^2} \exp\bigg[\frac{3}{2} \Lambda_1 \xi_1^2 + \Lambda_2(\frac{1}{2} \xi_1^2 + \xi_2^2) + \Lambda_3\xi_1 \xi_2\bigg]\, d\bm{\xi} \end{equation} while the relation from Eq. \eqref{consist} manifests itself as the three equations \begin{align} \frac{\partial \ln Z}{\partial \Lambda_1} &= \eta + \frac{1}{2} \nonumber \\ \frac{\partial \ln Z}{\partial \Lambda_2} &= \mu + \frac{1}{2} \label{auxConstraint} \\ \frac{\partial \ln Z}{\partial \Lambda_3} &= \nu \nonumber \end{align} that implicitly relate the variables \(\eta\), \(\mu\), and \(\nu\) to the Lagrange multipliers. Note that since \(Z[\Lambda_1,\Lambda_2,\Lambda_3]\) cannot be obtained analytically, relation \eqref{auxConstraint} can only be solved numerically. The free energy, Eq. \eqref{FreeE}, is rewritten as \begin{widetext} \begin{equation} F = \int_{\Omega} \Big( f_b(\eta,\mu,\nu,\Lambda_1,\Lambda_2,\Lambda_3) + f_e(\eta,\mu,\nu,\nabla \eta,\nabla \mu,\nabla\nu)\Big) \, d\mathbf{r} \end{equation} where \(f_b\) is a bulk free energy density that does not depend on gradients of the fields. Written explicitly, \begin{equation} \label{FreeEAux} f_b = -2\alpha \big(\frac{1}{3}\eta^2 + \mu^2 + \nu^2\big) + n k_B T \Big(\Lambda_1 \big(\eta + \frac{1}{2}\big) + \Lambda_2 \big(\mu + \frac{1}{2}\big) + \Lambda_3 \nu + \ln(4\pi) - \ln Z[\Lambda_1,\Lambda_2,\Lambda_3]\Big). \end{equation} \end{widetext} We will focus in this paper on an isotropic elastic energy \(f_e = L \partial_k Q_{ij} \partial_k Q_{ij}\), where repeated indices are summed and \(L\) is the elastic constant. This is the \lq\lq one-constant approximation'', for which mapping this elastic energy to the Oseen-Frank elastic energy yields the same value for all three elastic constants \cite{longa87}. Written in terms of the auxiliary variables we have \begin{equation} \label{elasticE} f_e = 2 L \big(\frac{1}{3} |\nabla \eta|^2 + |\nabla \mu|^2 + |\nabla \nu|^2\big). \end{equation} Before deriving the differential equations to be solved we redefine quantities in a dimensionless way: \begin{equation} \label{units} \tilde{f_b} = \frac{f_b}{n k_B T}, \quad \tilde{f_e} = \frac{f_e}{n k_B T}, \quad \tilde{x} = \frac{x}{\xi_{MS}}, \quad \tilde{L} = \frac{L}{\xi_{MS}^2 n k_B T} \end{equation} where \(\xi_{MS}\) is a length scale which we set implicitly by specifying the value of the dimensionless parameter \(\tilde{L}\). For the rest of the paper the tildes are omitted for brevity. To derive the equilibrium equations, we note that Eq.
\eqref{auxConstraint} gives \(\eta\), \(\mu\), and \(\nu\) as functions of \( \{ \Lambda_i \} \) through the unknown single particle partition function. It has been shown that these relations are invertible when \(\eta\), \(\mu\), and \(\nu\) give physical eigenvalues of \(\mathbf{Q}\) \cite{katriel86}. We can then regard \(\Lambda_1\), \(\Lambda_2\), and \(\Lambda_3\) as functions of \(\eta\), \(\mu\), and \(\nu\) via the inverse of Eq. \eqref{auxConstraint}. Although an analytic inverse does not exist, we can numerically invert this equation using a Newton-Raphson method. We create a MATLAB scattered interpolant from values given by the Newton-Raphson method. We select interpolant points from the values \(0 \leq S \leq 0.7\), \(0 \leq P \leq 0.1\), and \(-\pi/2 \leq \phi \leq \pi/2\) with \(\Delta S = \Delta P = 0.05\) and \(\Delta \phi = 0.0245\). These values are then transformed to \(\eta\), \(\mu\), and \(\nu\) through Eqs. \eqref{aux}, and the Newton-Raphson method is run using these values to find \(\Lambda_i\) at the chosen interpolant points. The MATLAB scattered interpolant is then created and used in the numerical minimization procedure. The Euler-Lagrange equations are derived by taking the variations of Eqs. \eqref{FreeEAux} and \eqref{elasticE} with respect to \(\eta\), \(\mu\), and \(\nu\) while using Eqs. \eqref{auxConstraint} to simplify. The dimensionless equations are \begin{align} \frac{4}{3} L \nabla^2 \eta = \Lambda_1 - \frac{4}{3}\frac{\alpha}{n k_B T} \eta \nonumber \\ 4 L \nabla^2 \mu = \Lambda_2 - 4 \frac{\alpha}{n k_B T} \mu \label{ELEqs} \\ 4 L \nabla^2 \nu = \Lambda_3 - 4 \frac{\alpha}{n k_B T} \nu \nonumber \end{align} where, again, \(\Lambda_i\) are numerically calculated as functions of \(\eta\), \(\mu\), and \(\nu\). Eqs. \eqref{ELEqs} are the central equations of this study and are solved numerically in the following section for various cases of interest. To numerically solve Eqs. \eqref{ELEqs} we use a finite differencing scheme. For one-dimensional configurations, an implicit backward Euler method is used with 129 discrete points and time step \(\Delta t = 0.1 \Delta x^2\). For two-dimensional configurations a Gauss-Seidel relaxation method with \(257^2\) discrete points is used \cite{press02}. We iterate until the calculated energy of a configuration changes by less than \(10^{-7}\). We check that the calculated energy of the initial condition is larger than the energy of the final configuration. In all cases we use Dirichlet boundary conditions that depend on the case being studied, as described in the relevant section. The MATLAB code used for the numerical solutions can be found in Ref. \cite{schimming20}.
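A minimal sketch of this inversion step (in Python rather than MATLAB; the quadrature resolution, the finite-difference Jacobian, and the sample values are illustrative choices, not the interpolant actually used above) evaluates \(\partial \ln Z/\partial \Lambda_i\) from Eq. \eqref{auxZ} by midpoint quadrature and applies Newton-Raphson to Eq. \eqref{auxConstraint}:
\begin{verbatim}
import numpy as np

def dlnZ(Lam, n=200):
    """Gradient of ln Z from Eq. (auxZ) by midpoint quadrature on S^2."""
    th = (np.arange(n) + 0.5) * np.pi / n
    ph = (np.arange(2 * n) + 0.5) * np.pi / n
    T, P = np.meshgrid(th, ph, indexing='ij')
    x1, x2 = np.sin(T) * np.cos(P), np.sin(T) * np.sin(P)
    f = np.stack([1.5 * x1**2, 0.5 * x1**2 + x2**2, x1 * x2])
    w = np.exp(np.tensordot(Lam, f, axes=1)) * np.sin(T)
    return (f * w).sum(axis=(1, 2)) / w.sum()

def invert(target, Lam=np.zeros(3), tol=1e-10):
    """Newton-Raphson for Eq. (auxConstraint):
    dlnZ(Lam) = (eta + 1/2, mu + 1/2, nu)."""
    for _ in range(50):
        g = dlnZ(Lam) - target
        if np.linalg.norm(g) < tol:
            break
        J = np.empty((3, 3))
        for k in range(3):                    # finite-difference Jacobian
            h = np.zeros(3); h[k] = 1e-6
            J[:, k] = (dlnZ(Lam + h) - dlnZ(Lam - h)) / 2e-6
        Lam = Lam - np.linalg.solve(J, g)
    return Lam

eta, mu, nu = 0.3, 0.0, 0.05                  # sample physical values
print(invert(np.array([eta + 0.5, mu + 0.5, nu])))
\end{verbatim}
In the actual computations the inverse map is tabulated once on the \((S,P,\phi)\) grid described above and interpolated, which is far cheaper than running the iteration at every grid point and time step.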
\subsection{Landau-de Gennes Theory} Here, we summarize the conventions and notation used in the calculations to compare the LdG free energy with the molecular field theory presented in the previous section. The bulk energy density is of the form \begin{equation} \label{LdGE} f_{LdG} = \frac{1}{2} a(T - T^*) \Tr [\mathbf{Q}^2] - \frac{1}{3} B \Tr [\mathbf{Q}^3] + \frac{1}{4} C \big( \Tr [\mathbf{Q}^2] \big)^2 \end{equation} where \(a\), \(B\), and \(C\) are material parameters, and \(T^*\) is the temperature at which the isotropic phase loses its stability. We use the same elastic free energy defined above when comparing to the molecular field theory as well. For the sake of computation, we define the following dimensionless quantities: \begin{equation} \label{LdGunits} \tilde{f}_{LdG} = \frac{f_{LdG}}{C}, \quad \tilde{f_e} = \frac{f_e}{C}, \quad \tilde{x} = \frac{x}{\xi_{LdG}}, \quad \tilde{L} = \frac{L}{\xi_{LdG}^2 C} \end{equation} which leaves \(a (T - T^*)/C\), \(B/C\), and \(\tilde{L}\) as dimensionless parameters for the model. Here \(\xi_{LdG}\) is a length scale for the model defined by the value of \(\tilde{L}\), similar to \(\xi_{MS}\) in Eq. \eqref{units}. As before, the tilde is subsequently dropped for brevity. Computations are done using the same auxiliary variables defined in Eq. \eqref{aux} with the same finite difference scheme outlined above to solve the Euler-Lagrange equations resulting from \(f_{LdG}\). \section{\label{results}Results} \begin{figure} \includegraphics[width = \columnwidth]{phaseD.eps} \caption{Equilibrium value of the uniaxial order, \(S\), versus the parameter \(\alpha / (n k_B T)\). At high \(T\), the system is in an isotropic phase, while at low \(T\) the system is in a uniaxial nematic phase. A first order phase transition occurs at \(\alpha / (n k_B T) \approx 3.4049\).} \label{fig:phaseD} \end{figure} \subsection{Uniform Configuration and Bulk Free Energy} We first validate our numerical method against known results for the Maier-Saupe free energy. As mentioned above, this model should be equivalent to the Maier-Saupe model in the case of a uniform system, \(f_e = 0\). In this case, it has been shown that minimizers of the bulk free energy, Eq. \eqref{FreeEAux}, will be uniaxial states \cite{ball10}. Thus, because we are considering a uniform system, the choice of director is arbitrary. We choose \(\phi = 0\) for this analysis, so the auxiliary variables defined by Eq. \eqref{aux} give \(\eta = S\), \(\mu = P\), and \(\nu = 0\). Further, since we know the system will be uniaxial we can take \(\mu = P = 0\). One can show from Eq. \eqref{auxConstraint} that this implies \(\Lambda_2 = \Lambda_3 = 0\). Because the system is uniform, \(S\) is constant, and hence \(\nabla^2 S = 0\). Defining \(S_N\) as the value of \(S\) in uniform equilibrium, we find, from Eq. \eqref{ELEqs}: \begin{equation} \label{equilLam} \Lambda_1 = \frac{4}{3} \frac{\alpha}{n k_B T} S_N \end{equation} which is a well-known result for the Maier-Saupe model when \(\Lambda_1\) is regarded as an effective interaction strength \cite{deGennes75,maier59,selinger16}. We then substitute Eq. \eqref{equilLam} into Eq. \eqref{FreeEAux} and numerically minimize it to find the value of \(S\) in equilibrium for a uniform system. Fig. \ref{fig:phaseD} shows \(S_N\) as a function of \(\alpha / (n k_B T)\). At high temperatures, the equilibrium phase is isotropic with \(S=0\). At low temperatures a uniaxial nematic phase is stable with \(S = S_N\). A first order phase transition occurs at \(\alpha / (n k_B T) \approx 3.4049 \) with \(S_N = 0.4281\). The diagram of Fig. \ref{fig:phaseD} agrees with previous studies of the Maier-Saupe model, which has been used successfully to describe phase transitions in experiments \cite{selinger16}.
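For the uniform uniaxial state, Eqs. \eqref{auxConstraint} and \eqref{equilLam} combine into the classic Maier-Saupe self-consistency condition, which can be iterated directly (a sketch; the fixed-point iteration and the starting value are our own illustrative choices): with \(u = \xi_1\), the distribution reduces to \(p \propto \exp[\frac{3}{2}\Lambda_1 u^2]\) and \(S = \frac{3}{2}\langle u^2 \rangle - \frac{1}{2}\).
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

a = 4.0                                   # alpha/(n kB T), above ~3.4049

def S_of_Lambda(L1):
    """S = (3/2)<u^2> - 1/2 for p ~ exp[(3/2) L1 u^2], u in [0, 1]."""
    w = lambda u: np.exp(1.5 * L1 * u**2)
    Z = quad(w, 0, 1)[0]
    u2 = quad(lambda u: u**2 * w(u), 0, 1)[0] / Z
    return 1.5 * u2 - 0.5

S = 0.5                                   # start near the nematic branch
for _ in range(200):                      # iterate Lambda_1 = (4/3) a S
    S = S_of_Lambda(4 * a * S / 3)
print(S)                                  # nematic S_N for this a
\end{verbatim}
Setting \(a = 3.4049\), the iteration should settle near \(S_N = 0.4281\), consistent with Fig. \ref{fig:phaseD}.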
We can further elucidate the nature of the molecular field theory by examining the bulk free energy density, Eq. \eqref{FreeEAux}, restricted to a uniaxial configuration. For a uniform, uniaxial system, the free energy density is \begin{equation} \label{bulkDens} f_b(S) = -\frac{2}{3}\frac{\alpha}{n k_B T} S^2 + \Lambda_1 \Big( S + \frac{1}{2} \Big) - \ln Z[\Lambda_1] + \ln(4 \pi) \end{equation} where \(\Lambda_1\) is calculated as a function of \(S\) through Eq. \eqref{auxConstraint}. This function is plotted in Fig. \ref{fig:bulkE} for three different values of \(\alpha / (n k_B T)\). As \(\alpha / (n k_B T)\) increases we find that \(f_b\) becomes non-convex, leading to a coexistence region in the phase diagram and a first order phase transition. It is well known that these features are also present in the LdG free energy of Eq. \eqref{LdGE} \cite{gramsbergen86}. The primary difference between LdG and the Maier-Saupe theory is that in the latter \(f_b\) diverges when \(S = -1/2\) or \(S = 1\), that is, when the eigenvalues leave the physical range. The non-convexity obtained agrees with similar plots for the Maier-Saupe free energy in Ref. \cite{selinger16}. \begin{figure} \includegraphics[width = \columnwidth]{bulkfreeE.eps} \caption{Bulk free energy density as a function of the uniaxial order, \(S\), for three values of the parameter \(\alpha / (n k_B T)\). As \(\alpha / (n k_B T)\) increases, the free energy becomes non-convex, leading to coexistence between the isotropic and nematic phases.} \label{fig:bulkE} \end{figure} The non-convexity and the similarity of the bulk free energy to LdG suggest that there should exist stable interfacial configurations at coexistence as well as stable solutions for topological defects in the nematic phase. In the following three subsections we demonstrate just this and compare to results given by LdG. \subsection{Planar Isotropic-Nematic Interface} We consider a one-dimensional configuration with a planar interface in which the order parameter \(\mathbf{Q}(\mathbf{r}) = \mathbf{Q}(x)\). We solve Eqs. \eqref{ELEqs} on a domain of size \(\mathcal{L} = 100 \xi_{MS}\) with Dirichlet boundary conditions where \(S = S_N\) at \(x = -50\xi_{MS}\) and \(S = 0\) at \(x = 50\xi_{MS}\). We set \(\alpha / (n k_B T) = 3.4049\) and \(S_N = 0.4281\) so that the isotropic and nematic bulk phases coexist. An important note is that since we are using the \lq\lq one-constant approximation'' for the elastic free energy, there are no anisotropic effects, such as anchoring, in our analysis. It is known that anisotropy changes the width of an interface for different director orientations; however, because we are only considering isotropic terms here, the structure of the interfacial profile should not change if the angle of the director in the nematic phase, \(\phi\), is changed \cite{popanita97}. Fig. \ref{fig:interface} shows the equilibrium uniaxial order parameter \(S\) for \(\phi=0\). We find a smooth, diffuse interface with \(P=0\), that is, no biaxiality. We also find that changing the angle of the director does not change the solution, as expected. We can calculate the width of the interface by finding the points where \(S = 0.1 S_N\) and \(S = 0.9 S_N\), defined as \(x_1\) and \(x_2\), respectively; the width is then \(x_1 - x_2\). \begin{figure} \includegraphics[width = \columnwidth]{oneDinterface.eps} \caption{\(S\) as a function of position for a one-dimensional interface. Dirichlet boundary conditions maintain \(S = S_N\) at the left boundary while \(S = 0\) at the right boundary. \(L = 1\) for this configuration.} \label{fig:interface} \end{figure}
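The width measurement just described amounts to a one-line interpolation. A small sketch, tested on a synthetic \(\tanh\) profile of the form that appears in Eq. \eqref{SLdG} below (for which the width is exactly \(w \ln 9 \approx 2.2\, w\)):
\begin{verbatim}
import numpy as np

def interface_width(x, S, S_N):
    """x_1 - x_2, where S(x_1) = 0.1 S_N and S(x_2) = 0.9 S_N."""
    # S decreases with x, so reverse both arrays for np.interp
    x1 = np.interp(0.1 * S_N, S[::-1], x[::-1])
    x2 = np.interp(0.9 * S_N, S[::-1], x[::-1])
    return x1 - x2

x = np.linspace(-50, 50, 129)        # grid matching the 1D computations
S_N, w = 0.4281, 5.0                 # w is an illustrative profile width
S = 0.5 * S_N * (1 - np.tanh(x / w))
print(interface_width(x, S, S_N), w * np.log(9))   # both ~ 10.99
\end{verbatim}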
In order to compare with the LdG free energy, Eq. \eqref{LdGE}, we recall that the interfacial profile for this configuration is known exactly, \begin{equation} \label{SLdG} S_{LdG}(x) = \frac{S_N}{2} \bigg(1 - \tanh \Big(\frac{x}{w_{LdG}}\Big)\bigg) \end{equation} with \begin{equation} \label{w} w_{LdG} = \frac{6\sqrt{6}}{B / C}\sqrt{L} \end{equation} which sets the width of the interface. This implies that \((x_1 - x_2) \propto \sqrt{L}\). One can similarly show that the bulk energy contribution, i.e. the bulk contribution to the surface tension, satisfies \(\sigma \propto \sqrt{L}\). With this in mind, we compare the scaling of the molecular field theory solutions that we obtain with \(\sqrt{L}\). To this end, we find the interface widths and bulk surface tensions for solutions to Eqs. \eqref{ELEqs} for a variety of values of \(L\). The bulk surface tension is found by numerically integrating the bulk free energy density, Eq. \eqref{FreeEAux}. Interface widths and bulk surface tensions are plotted in Fig. \ref{fig:widths} for both the molecular field theory and LdG. We find both \((x_1 - x_2) \propto \sqrt{L}\) and \(\sigma \propto \sqrt{L}\) for the molecular field theory. Note that the LdG solution allows additional tuning via the parameter \(B/C\), which we have set to 9 in Fig. \ref{fig:widths}. In Fig. \ref{fig:widths}b the discrepancy between the LdG solution and the molecular field theory computations highlights that even if the widths of LdG interfaces are tuned to be similar to those of the molecular field theory, the surface tensions cannot be, and vice versa. \begin{figure} \includegraphics[width = \columnwidth]{width2.eps} \caption{(a) Interface width and (b) bulk surface tension versus \(\sqrt{L}\). Dots represent the molecular field theory (MFT) computations while the solid lines are derived from the analytical solution for LdG, Eq. \eqref{SLdG}, with \(B/C = 9\). Both the interface width and the excess free energy (i.e. surface tension) scale linearly with the parameter \(\sqrt{L}\), the same scaling relationship as that of Landau-de Gennes.} \label{fig:widths} \end{figure} We note that the similarity in the bulk free energy landscape likely leads to the similarity in solutions for LdG and the molecular field theory. Anisotropic effects, for which it is known that LdG exhibits nonzero biaxiality at interfaces \cite{popanita97}, have yet to be analyzed for our model. This will be the subject of a future study. \begin{figure*} \includegraphics[width = \textwidth]{tactoids.eps} \caption{Plots of \(S(x,y)\) for (a) a tactoid with \(m=1\) director configuration at the outer boundary and (b) a tactoid with \(m=-1 / 2\) director configuration at the outer boundary. The radius in (a) is \(R/\xi_{MS} = 19.92 \pm 0.2\) and the radius in (b) is \(R/\xi_{MS} = 4.59 \pm 0.2\). The smaller size of the \(m = -1/2\) tactoid is due to the director distortion energy's \(m^2\) dependence. For both computations \(L=1\).} \label{fig:tactoids} \end{figure*} \subsection{Tactoids} We consider a two-dimensional square domain of size \(\mathcal{L} = 100 \xi_{MS}\). We set \(S=S_N\), \(P=0\), and \(\phi = m \theta\) at the outer boundary, where \(\theta\) is the polar angle and \(m\) is the winding number of \(\phi\). We set \(\alpha / (n k_B T) = 3.4049\) and \(L = 1\). As initial conditions we set \(S = 0\) within a disc of radius \(R = 15 \xi_{MS}\) centered at the origin. By \lq\lq tactoid'' we refer to a two-phase domain separated by an interface. In the isotropic region \(S = P = 0\).
We consider distorted boundary conditions to ensure an interface forms in the simulation. Because the director can vary as a function of position in two dimensions, the boundary conditions imposed will change the size and shape of the object under consideration. Since we are only considering isotropic gradients in the elastic free energy, there is no anchoring term at the interface, i.e. there is no difference in energy based on the orientation of the molecules relative to the interface. Thus, we expect the tactoids to be cylindrical. The topology of the boundary conditions does impact the size of the tactoids, however. This is due to a balance between two energies: the surface tension, which in two dimensions is proportional to \(R\), the radius of the tactoid, and the Oseen-Frank elastic energy in the nematic region, which is proportional to \(m^2 \ln(\mathcal{L}/R)\). Due to the symmetry of the molecules, half-integer \(m\) is allowed and costs four times less director distortion energy than integer \(m\). Hence, we expect that tactoids with integer boundary conditions should be approximately four times larger than those with half-integer boundary conditions. In Fig. \ref{fig:tactoids}, we show equilibrium configurations for boundary conditions with \(m=1\) and \(m=-1/2\). In both cases an isotropic region with \(S = P = 0\) is present at the center of the computational domain. As expected, both configurations are cylindrical in shape, and we find that \(R/ \xi_{MS} =19.92 \pm 0.2\) for the \(m=1\) configuration and \(R / \xi_{MS} = 4.59 \pm 0.2\) for the \(m=-1/2\) configuration. To find the radii we take a cut from the center of the tactoid to the outer boundary and find the point where \(S = 0.5 S_N\). It should be noted that LdG, in the one-constant approximation for the elastic energy, gives similar results for the size and shape of tactoids. It is known that for the LdG bulk free energy with anisotropic elastic free energies the shape of the tactoids also changes due to anchoring at the interface \cite{golovaty19}. Anisotropic effects on the shape of tactoids in the molecular field theory will be the subject of a future study.
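The four-to-one size ratio follows directly from this energy balance. As a sketch (the prefactors, the effective elastic constant \(K\), and the surface tension \(\sigma\) below are illustrative stand-ins for the computed quantities): minimizing \(E(R) = 2\pi\sigma R + \pi K m^2 \ln(\mathcal{L}/R)\) over \(R\) gives \(R^* = K m^2 / (2\sigma)\), so \(R^*(m=1)/R^*(m=1/2) = 4\).
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

sigma, K, Lbox = 0.1, 1.0, 100.0    # illustrative surface tension,
                                    # elastic constant, and system size

def E(R, m):
    """Interface cost + Oseen-Frank distortion of winding m."""
    return 2 * np.pi * sigma * R + np.pi * K * m**2 * np.log(Lbox / R)

for m in (1.0, 0.5):
    res = minimize_scalar(lambda R: E(R, m), bounds=(0.1, Lbox),
                          method='bounded')
    print(m, res.x, K * m**2 / (2 * sigma))  # numerics vs R* = K m^2/(2 sigma)
\end{verbatim}
The measured radii above (\(19.92\) versus \(4.59\)) are consistent with the approximate factor of four quoted above.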
\subsection{Nematic Disclinations} We consider next the case of disclination lines in thin films. We consider a two-dimensional square domain of size \(\mathcal{L} = 10 \xi_{MS}\). For all calculations \(L=1\) and \(\alpha / (n k_B T) > 3.4049\), so that nematic ordering is energetically advantageous. At the outer boundary we fix the system to be uniaxial (\(P=0\)) and fix the director orientation, \(\phi = (-1/2) \theta\). The initial configuration is \(S(r) = S_N\big(1 - \exp(-r / 2)\big)\) with \(P=0\) everywhere. In Fig. \ref{fig:radDefect} we show the director profile and the radial profile of the equilibrium \(S\) and \(P\) from the center of a disclination to the boundary of the domain for the parameter \(\alpha / (n k_B T) = 4\). For the director, \(\phi = -(1/2) \theta\) outside the core. Much like solutions for the LdG free energy, we see a disclination core that is biaxial \cite{meiboom82,schopohl87}. The biaxiality of the core was explained topologically by Lyuksyutov, assuming a LdG bulk free energy \cite{lyuksyutov78}. Using this free energy for analysis, one can define a \lq\lq biaxial length'' scale for the disclinations, \(R_b \approx \sqrt{K/(B S^3)}\), where \(K\) is on the order of the Frank constants and \(B\) is the parameter associated with the cubic term in the LdG bulk energy, Eq. \eqref{LdGE}. For distances from the core smaller than \(R_b\), the elastic energy becomes comparable to the cubic term in the LdG free energy, and the system can remove the elastic singularity by becoming biaxial. We note that at the core, \(S = P\) in both models. Using the parametrization from Eq. \eqref{Qdef}, one can show that this is interpreted as a uniaxial order parameter, but for a disc if \(S > 0\) or a rod aligned with the z-axis if \(S < 0\). For both models, \(S > 0\) at the core. Thus, we interpret the biaxial solution as a macroscopic \lq\lq transformation'' of rods far away from the core to discs at the core. Microscopically, the probability distribution describing individual molecules becomes more and more spread out in the x-y plane in an attempt to alleviate the elastic energy singularity. \begin{figure*} \includegraphics[width = \textwidth]{radialSPdefect2.eps} \caption{(a) Director profile and (b) radial plots of the uniaxial order \(S\) and the biaxial order \(P\) for a nematic disclination. The spatial extent of biaxiality is on the order of the radius of the disclination core. Here, \(\alpha / (n k_B T) = 4\) and \(L= 1\).} \label{fig:radDefect} \end{figure*} We emphasize that it is not obvious that the molecular field theory should give biaxial core solutions for the disclinations since, by construction, the model is markedly different from LdG. While LdG is an expansion of a macroscopic order parameter, the model here is based on a microscopic description. Because of this, it is difficult to quantitatively compare the solutions for the disclinations given by the two models. While we note that the spatial extent of the biaxiality for the disclinations is on the order of the radius of the defects, there is no cubic term in the free energy to define a length such as \(R_b\); instead, this behavior is induced by the single particle partition function which appears in Eq. \eqref{FreeEAux}, since the Maier-Saupe energy is purely quadratic in \(\mathbf{Q}\). Another aspect of the disclinations that we can compare, at least qualitatively, to the LdG model is the scaling of the radius of disclinations with temperature. To find the radius, we take a cut from the center of the disclination to the boundary and find the point where \(S - P = S_N ( 1 - e^{-1})\). The results are plotted in Fig. \ref{fig:defectSize}. We show both the scaling for the molecular field theory and for results given by LdG. It can be seen that the scaling is similar for both models in a wide range of temperatures up to the coexistence temperature, where the isotropic phase becomes energetically favorable. We are currently investigating the effects of anisotropic elastic free energies on disclinations. It is known that the director structure becomes less symmetric away from the disclination core if the Frank constants for bend and splay are not equal, and recent experiments have found anisotropic core structures \cite{zhou17}. \begin{figure} \includegraphics[width = \columnwidth]{defectRadii.eps} \caption{Radius of disclinations plotted as a function of temperature for (a) the molecular field theory of Section \ref{Model} and (b) the Landau-de Gennes model. \(T^*\) is the temperature where the isotropic phase loses its metastability, while the dotted lines on the plots indicate where coexistence between the phases occurs for the respective model.
For the molecular field theory we use \(L = 1\), and for Landau-de Gennes \(L = 1\) and \(B / C = 4\) for all simulations.} \label{fig:defectSize} \end{figure} \section{\label{Conclusion}Conclusion} In this work, we have presented a computational implementation of the model of Ref. \cite{ball10}. We show that the model can be interpreted as replacing direct interactions between molecules with an effective interaction field \(\bm{\Lambda}\) in the mean field approximation. Further, we investigate the similarity between the free energy of this molecular field theory and the LdG free energy, and compare solutions given by both for the cases of interfaces, tactoids, and topological defects. We find qualitatively similar results in all cases, which is interesting given that the construction of the two models is very different. This model allows for a more fundamental understanding of the underlying microscopic and mesoscopic physics at play, and can serve as an alternative to the LdG free energy when describing systems with inhomogeneous ordering. The extension of the Maier-Saupe model to a field theory allows us to understand not just the phase transition but also inhomogeneous configurations, and can possibly be used to describe experiments like those of Refs. \cite{zhou17,kim13}. Moving forward, we are currently investigating the results of adding anisotropy to the elastic free energy, which has been done to some extent for the LdG model \cite{golovaty19}. Importantly, in this framework one can consider the values of the elastic constants for chromonics that have been determined experimentally \cite{zhou14}, while avoiding the boundedness issues of LdG theory when the bend and splay constants are different. Further, because of the microscopic nature of the model, one can, in principle, use a more physically realistic Hamiltonian to describe the molecular system, as opposed to the effective Maier-Saupe Hamiltonian that is used here. One can also generalize the computations to more complex molecules, such as plate-like molecules, by modifying Eq. \eqref{MicroQ}. \begin{acknowledgments} We are indebted to Shawn Walker and Sergij Shiyanovskii for useful discussions. This research is supported by the National Science Foundation under contract DMR-1838977, and by the Minnesota Supercomputing Institute. \end{acknowledgments}
\section{Introduction}\label{sec:intro} \IEEEPARstart{S}{teganography} is the art and science of concealing information within a carrier object. This terminology encompasses a wide range of techniques and applications, including but not limited to covert communications~\cite{2005_1511007}, ownership identification~\cite{1997_650120}, copyright protection~\cite{BARNI1998357}, broadcast monitoring~\cite{1999_VIVA}, and traitor tracing~\cite{2006_1634364}. An important application of steganography is data authentication, which plays a vital role in cybersecurity. The advent of data-centric artificial intelligence is accompanied by cybersecurity concerns. It has been reported that machine-learning models are vulnerable to adversarial attacks such as invisible perturbations crafted to cause wrong decisions~\cite{2015_Perturb_Goodfellow}, poisonous data collected for re-training during deployment~\cite{Poisoning17}, and malware code hidden in neural network parameters~\cite{10.1145/3427228.3427268}. A proper authentication mechanism ensures that the integrity of data has not been undermined and that the identity of users has not been forged, thereby serving as a precaution against these insidious threats. Digital signatures are an authentication mechanism based upon modern cryptography~\cite{10.1145/359340.359342}. This mechanism can be incorporated into a trustworthy forensic camera in such a way that photographs are generated and stored along with digital signatures~\cite{267415}. However, storing such auxiliary metadata might entail the risk of accidental loss and mismanagement during the data lifecycle. Steganography can serve as a potential remedy by embedding the auxiliary information about the data into the data itself in an invisible manner. Yet, steganographic distortion, albeit generally imperceptible to human sensory systems, might not be admissible in some fidelity-sensitive situations such as legal proceedings, medical diagnosis, and military reconnaissance. This is where the notion of reversible computing comes into play~\cite{2001_Fridrich_Invertible, 2003_1196739, 2004_1315703, 2007_4291553, 2013_6329433}. A fundamental element of reversible steganography, in common with lossless compression, is predictive analytics~\cite{1948_6773024, 1056936, 623176}. Prediction error modulation is a cutting-edge reversible steganographic technique composed of an analytics module and a coding module~\cite{2005_1381493, 2007_4099409, 2008Fallahpour, 2009_4811982, 2011_5762603, 2014_6746082, Hwang:2016aa}. The recent development of deep learning has advanced the frontier of reversible steganography. It has been reported that deep neural networks can be applied as powerful predictive models~\cite{2020_9245471, Hu:2021aa, Chang:2021aa, chang2022deep}. Despite inspiring progress in the analytics module, the design of the coding module is still based largely on heuristics. While there are studies on \emph{end-to-end} deep learning that attempt to use neural networks for automatic reversible computing, perfect reversibility cannot be promised~\cite{Jung:2019aa, Duan:2019aa, Lu_2021_CVPR}. From a certain point of view, it is hard for neural networks, as monolithic black boxes, to learn the intricate logic of reversible computing. Explainability of intelligent machinery is an ongoing open research topic~\cite{Castelvecchi:2016aa, 8631448, Barredo-Arrieta:2020aa, 9369420}. Therefore, it seems advisable to follow the \emph{modular} framework at the time of writing.
This study is devoted to developing an optimal coding scheme for reversible steganography. We model reversible steganographic coding as a mathematical optimisation problem and propose an optimisation algorithm for addressing the nonlinear nature of this problem. The remainder of this paper is organised as follows. Section~\ref{sec:back} outlines the background regarding reversible steganography. Section~\ref{sec:optim} formulates the nonlinear discrete optimisation problem and discusses the complexity of brute-force search. Section~\ref{sec:linear} presents linearisation techniques for tackling the nonlinear discrete optimisation problem. Section~\ref{sec:sim} analyses the optimality of solutions through simulation experiments. Section~\ref{sec:conclusion} provides concluding remarks. \section{Background}\label{sec:back} Prediction error modulation is a reversible steganographic technique that consists of an analytics module and a coding module. The analytics module begins by splitting a cover image into the \emph{context} and \emph{query} sets, denoted by $\boldsymbol{c}$ and $\boldsymbol{q}$, respectively. A conventional way is to divide pixels into two halves according to a chequered pattern. Then, a predictive model is applied to predict the query pixel intensities $\tilde{\boldsymbol{q}}$ from the context pixel intensities. A contemporary practice is to employ an artificial neural network developed for computer vision. The coding module embeds a message $\boldsymbol{\omega}$ into the cover image by modulating the prediction errors $\boldsymbol{\varepsilon} = \boldsymbol{q} - \tilde{\boldsymbol{q}}$. The modulated errors $\boldsymbol{\varepsilon}^{\prime}$ are then added to the predicted intensities, causing distortion to the query pixels. The stego image is created by merging the context set $\boldsymbol{c}$ and the modulated query set $\boldsymbol{q}^{\prime}$. The decoding procedure is similar to the encoding procedure. It begins by predicting the query pixel intensities. Since the context set is kept unchanged, the prediction in the decoding phase is guaranteed to be identical to that in the encoding phase given the same predictive model. The message is extracted and the query set is recovered by demodulating the prediction errors. The image is restored to its original state by merging the context set and the recovered query set. The procedures for encoding and decoding are depicted schematically in Figure~\ref{fig:sys} and also provided in Algorithms~\ref{alg:enc} and~\ref{alg:dec}. We would like to note that the message may contain some auxiliary information for handling pixel intensity overflow. This paper does not go into details of every aspect of the stego-system; instead, our study focuses on mathematical optimisation of reversible steganographic coding.
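To make the workflow concrete, the following is a minimal Python sketch mirroring Algorithms~\ref{alg:enc} and~\ref{alg:dec}. It assumes NumPy; the mean-of-neighbours predictor and the toy zero-bin code (a simple instance of the peak-frequency heuristic discussed in Section~\ref{sec:optim}) are illustrative stand-ins for the neural predictor and the optimised code studied in this paper, and overflow handling is omitted.
\begin{verbatim}
import numpy as np

def split_masks(img):
    # Chequered split: border pixels and one parity class form the context.
    r, c = np.indices(img.shape)
    query = (r + c) % 2 == 1
    query[0, :] = query[-1, :] = query[:, 0] = query[:, -1] = False
    return ~query, query

def predict(img):
    # Placeholder predictor: mean of the four neighbours, which are all
    # context pixels for any query pixel under the chequered split.
    p = img.astype(np.int64)
    return (np.roll(p, 1, 0) + np.roll(p, -1, 0)
            + np.roll(p, 1, 1) + np.roll(p, -1, 1)) // 4

def encode(cover, bits):
    _, query = split_masks(cover)
    pred, stego = predict(cover), cover.astype(np.int64).copy()
    bits = list(bits)
    for r, c in np.argwhere(query):
        e = stego[r, c] - pred[r, c]
        if e > 0:
            e += 1                  # shift positive errors to free the bin
        elif e == 0 and bits:
            e = bits.pop(0)         # zero-valued errors carry one bit each
        stego[r, c] = pred[r, c] + e
    return stego

def decode(stego):
    _, query = split_masks(stego)
    pred, cover = predict(stego), stego.copy()  # context unchanged
    bits = []
    for r, c in np.argwhere(query):
        e = stego[r, c] - pred[r, c]
        if e > 1:
            e -= 1
        elif e in (0, 1):
            bits.append(int(e))     # payload length assumed known
            e = 0
        cover[r, c] = pred[r, c] + e
    return cover, bits
\end{verbatim}
A round trip such as \texttt{decode(encode(img, [1, 0, 1]))} returns the original image exactly, together with the embedded bits (followed by padding zeros from unused carriers).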
\begin{figure}[t] \centerline{\includegraphics[width=0.85\columnwidth]{Figures/OptimRevStego.pdf}} \caption{Workflow of reversible steganography with prediction error modulation.} \label{fig:sys} \end{figure} \begin{figure}[t] \begin{algorithm}[H] \centering \caption{Encoding}\label{alg:enc} \begin{algorithmic} \Input $\text{cover}$, $\boldsymbol{\omega}$ \Output $\text{stego}$ \\ \LineComment{analytics module} \State $[\boldsymbol{c}, \boldsymbol{q}] \gets \operatorname{split}(\text{cover})$ \State $[\tilde{\boldsymbol{c}}, \tilde{\boldsymbol{q}}] \gets \operatorname{predict}([\boldsymbol{c},\boldsymbol{0}])$ \\ \LineComment{coding module} \State $\boldsymbol{\varepsilon} \gets \boldsymbol{q} - \tilde{\boldsymbol{q}}$ \State $\boldsymbol{\varepsilon}^{\prime} \gets \operatorname{modulate}(\boldsymbol{\varepsilon}, \boldsymbol{\omega})$ \State $\boldsymbol{q}^{\prime} \gets \tilde{\boldsymbol{q}} + \boldsymbol{\varepsilon}^{\prime}$ \State $\text{stego} \gets \operatorname{merge}(\boldsymbol{c}, \boldsymbol{q}^{\prime})$ \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \centering \caption{Decoding}\label{alg:dec} \begin{algorithmic} \Input $\text{stego}$ \Output $\text{cover}$, $\boldsymbol{\omega}$ \\ \LineComment{analytics module} \State $[\boldsymbol{c}, \boldsymbol{q}^{\prime}] = \operatorname{split}(\text{stego})$ \State $[\tilde{\boldsymbol{c}}, \tilde{\boldsymbol{q}}] = \operatorname{predict}([\boldsymbol{c}, \boldsymbol{0}])$ \\ \LineComment{coding module} \State $\boldsymbol{\varepsilon}^{\prime} = \boldsymbol{q}^{\prime} - \tilde{\boldsymbol{q}}$ \State $[\boldsymbol{\varepsilon}, \boldsymbol{\omega}] = \operatorname{demodulate}(\boldsymbol{\varepsilon}^{\prime})$ \State $\boldsymbol{q} = \tilde{\boldsymbol{q}} + \boldsymbol{\varepsilon}$ \State $\text{cover} = \operatorname{merge}(\boldsymbol{c}, \boldsymbol{q})$ \end{algorithmic} \end{algorithm} \end{figure} \section{Nonlinear Discrete Optimisation}\label{sec:optim} The essence of reversible steganographic coding is to designate one or more error values as the carrier and to determine how the values change to represent different message digits. A conventional heuristic for reversible steganographic coding is to choose the prediction errors of the peak frequency as the carrier. While the peak frequency implies the highest capacity, this capacity-greedy strategy is not necessarily optimal in terms of minimising distortion. \subsection{Problem Modelling} According to the typical law of error, the frequency of an error can be expressed as an exponential function of its numerical magnitude, disregarding sign~\cite{Wilson:1923aa}. In other words, the frequency distribution of prediction errors is expected to centre around zero. In general, a smaller absolute error tends to have a higher occurrence. A special exception is that the occurrence of zero might be lower than the occurrence of a certain absolute error, considering that the latter is the sum of both positive and negative error occurrences. Consider an absolute error histogram as shown in Figure~\ref{fig:error_distrib}. The problem of reversible steganographic coding is to establish a mapping between the values in $[0, n]$ and the values in $[0, n+\vartheta]$, where $\vartheta$ denotes the extra quota and is typically defined to be less than or equal to the number of successive empty bins in the absolute error histogram. Encoding is a \emph{one-to-many} mapping that links a cover value to one or more stego values.
A message digit can only be represented if the number of connections is greater than one. Different cover values can never yield the same stego value, in order to avoid an overlap between values and ambiguity in decoding. Therefore, a cover value of non-zero occurrence may also be changed to a different stego value even if it is not assigned to represent any digit. We require that each cover value can only be mapped to the nearest available stego values, since such a \emph{non-cross} mapping reduces the problem dimension drastically. We construct the mapping in order from value $0$ to value $n$ because it is advisable to allocate a smaller cumulative distortion to a value of higher occurrence. An example of a cover/stego mapping is illustrated in Figure~\ref{fig:mapping}. \begin{figure}[t] \centerline{\includegraphics[width=0.97\columnwidth]{Figures/ErrorDistrib.pdf}} \caption{Example of absolute error distribution with highlighted zero occurrences.} \label{fig:error_distrib} \end{figure} Let us denote by $a_i$ the frequency of the value $i$ and by $x_i$ the number of extra cover-to-stego links for the value $i$. The total number of links for the value $i$ equals $x_i + 1$. The number of bits that can be represented by the value $i$ is $\log_2(x_i + 1)$ and thus the capacity is computed by \begin{equation} \mathfrak{C} = \sum_{i=0}^n a_i \log_2(x_i + 1) . \end{equation} Given $x_i + 1$ cover-to-stego links, the probability of changing a cover value to each stego value is $1/(x_i + 1)$. The deviations of the first to the last stego value are $y_i + 0$ to $y_i + x_i$ respectively, where $y_i$ denotes the sum of all the previous extra links (i.e. the cumulative deviation). Hence, the expected distortion in terms of the squared deviations is computed by \begin{equation} \mathfrak{D} = \sum_{i=0}^n a_i \left(\frac{(0+ y_i)^2 + \dots + (x_i + y_i)^2}{x_i + 1}\right) , \end{equation} where \begin{equation} y_i = \sum_{j=0}^{i-1} x_j . \end{equation} We can simplify the algebraic expression by \begin{equation} \begin{split} &\frac{(0+ y_i)^2 + \dots + (x_i + y_i)^2}{x_i + 1} \\ = &\frac{ (0^2 + 2y_i\cdot 0 + y_i^2) + \dots +(x_i^2 + 2y_i\cdot x_i + y_i^2) }{x_i + 1} \\ = &\frac{ (0^2 + \dots + x_i^2) + 2y_i(0 + \dots + x_i) + y_i^2(x_i+1) }{x_i + 1}\\ = &\frac{x_i(x_i+1)(2x_i+1)}{6(x_i+1)} + \frac{2y_ix_i(x_i+1)}{2(x_i+1)} + \frac{y_i^2(x_i+1)}{x_i+1}\\ = &\frac{1}{3}x_i^2 + \frac{1}{6}x_i + x_iy_i + y_i^2 .\\ \end{split} \end{equation} The reason for computing squared deviations rather than absolute deviations is that image quality is often measured by the peak signal-to-noise ratio (PSNR), which is defined via the mean squared error (MSE). Our goal is to solve for the decision variables $x_i \in \{0,\dots,\vartheta\}$ that minimise the distortion objective subject to the capacity constraint. The sum of all the extra cover-to-stego links cannot exceed the quota $\vartheta$. To summarise, the mathematical optimisation problem for reversible steganographic coding is \begin{equation*} \begin{alignedat}{3} & \text{min} & \enspace & \mathfrak{D} = \sum_{i=0}^n a_i \left( \frac{1}{3}x_i^2 + \frac{1}{6}x_i + x_iy_i + y_i^2 \right) ,\\ & \text{s.t.} & & \mathfrak{C} = \sum_{i=0}^n a_i \log_2(x_i + 1) \geq \text{payload} ,\\ &&& \sum_{i=0}^n x_i \leq \vartheta ,\\ & \text{var.} && x_i \in \{0,\cdots,\vartheta\}, \quad & \hspace{-0.5cm} \forall i=0,\dots,n . \end{alignedat} \end{equation*}
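Both objective quantities are easy to evaluate for any candidate assignment. The following is a small helper, assuming NumPy, with \texttt{a} holding $(a_0,\dots,a_n)$ and \texttt{x} a candidate $(x_0,\dots,x_n)$; it merely transcribes the capacity and distortion formulas above.
\begin{verbatim}
import numpy as np

def capacity(a, x):
    # C = sum_i a_i * log2(x_i + 1)
    return float(np.sum(np.asarray(a) * np.log2(np.asarray(x) + 1)))

def distortion(a, x):
    # D = sum_i a_i * (x_i^2/3 + x_i/6 + x_i*y_i + y_i^2),
    # where y_i = sum_{j<i} x_j is the cumulative deviation.
    x = np.asarray(x, dtype=float)
    y = np.concatenate(([0.0], np.cumsum(x)[:-1]))
    return float(np.sum(np.asarray(a) * (x**2 / 3 + x / 6 + x * y + y**2)))
\end{verbatim}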
\begin{figure}[t] \centerline{\includegraphics[width=0.97\columnwidth]{Figures/Mapping_Hist.pdf}} \caption{Example of reversible steganographic coding.} \label{fig:mapping} \end{figure} \subsection{Brute-Force Search} Brute-force search is a baseline method for benchmarking optimisation algorithms. Exhausting all possible combinations of the decision variables gives a solution space of size $(\vartheta+1)^{n+1} \in \mathcal{O}(c^n)$. By taking the quota constraint into account, we can reduce the solution space from the number of possible combinations to the number of feasible combinations. In number theory and combinatorics, the partition function $\operatorname{part}(t)$ computes the number of ways of writing $t$ as a sum of positive integers in $[1,t]$. Let $\boldsymbol{\Lambda}_{t}$ denote a matrix of $\operatorname{part}(t)$ rows and $t$ columns that enumerates all possible partitions: \begin{equation} \boldsymbol{\Lambda}_{t} = \begin{bNiceArray}{*{1}{c}} \boldsymbol{\lambda}_{1}\\ \vdots\\ \boldsymbol{\lambda}_{\operatorname{part}(t)}\\ \end{bNiceArray} = \begin{bNiceArray}{*{3}{c}}[] \lambda_{1,1} & \cdots & \lambda_{1,t}\\ \vdots & \ddots & \vdots \\ \lambda_{\operatorname{part}(t),1} & \cdots & \lambda_{\operatorname{part}(t),t} \\ \end{bNiceArray}. \end{equation} Each vector $\boldsymbol{\lambda}_{\ell}$ represents a possible partition in which each element is the multiplicity of a candidate summand. For example, $\boldsymbol{\Lambda}_{2}$, $\boldsymbol{\Lambda}_{3}$ and $\boldsymbol{\Lambda}_{4}$ are \begin{equation*} \begin{bNiceArray}{*{2}{c}}[first-row,last-col,code-for-first-row=\scriptscriptstyle,code-for-last-col=\scriptscriptstyle] 1 & 2 \\ 2 & 0 & \boldsymbol{\lambda}_{1} \\ 0 & 1 & \boldsymbol{\lambda}_{2} \\ \end{bNiceArray},\quad \begin{bNiceArray}{*{3}{c}}[first-row,last-col,code-for-first-row=\scriptscriptstyle,code-for-last-col=\scriptscriptstyle] 1 & 2 & 3 & \\ 3 & 0 & 0 & \boldsymbol{\lambda}_{1} \\ 1 & 1 & 0 & \boldsymbol{\lambda_{2}} \\ 0 & 0 & 1 & \boldsymbol{\lambda_{3}} \\ \end{bNiceArray},\quad \begin{bNiceArray}{*{4}{c}}[first-row,last-col,code-for-first-row=\scriptscriptstyle,code-for-last-col=\scriptscriptstyle] 1 & 2 & 3 & 4 & \\ 4 & 0 & 0 & 0 & \boldsymbol{\lambda_{1}} \\ 2 & 1 & 0 & 0 & \boldsymbol{\lambda_{2}} \\ 0 & 2 & 0 & 0 & \boldsymbol{\lambda_{3}} \\ 1 & 0 & 1 & 0 & \boldsymbol{\lambda_{4}} \\ 0 & 0 & 0 & 1 & \boldsymbol{\lambda_{5}} \\ \end{bNiceArray}. \end{equation*} The total number of feasible solutions can be calculated by adding up the number of feasible solutions given by each individual partition matrix from $\boldsymbol{\Lambda}_{1}$ to $\boldsymbol{\Lambda}_{\vartheta}$ (due to the quota constraint); that is, \begin{equation} \sum_{t=1}^{\vartheta} \operatorname{feasible}(\boldsymbol{\Lambda}_{t}, n^*) , \end{equation} where $n^* = n + 1$ denotes the number of integers in $[0, n]$. For each matrix $\boldsymbol{\Lambda}_{t}$, the number of feasible solutions is computed by summing up the number of possible combinations given by each partition vector $\boldsymbol{\lambda}_{\ell}$, denoted by \begin{equation} \operatorname{feasible}(\boldsymbol{\Lambda}_{t}, n^*) = \sum_{\ell=1}^{\operatorname{part}(t)} \operatorname{comb}(\boldsymbol{\lambda}_{\ell}, n^*) .
\end{equation} A combination is a selection of values from a set of $n^*$ values based on a given partition vector, as expressed by \begin{equation} \operatorname{comb}(\boldsymbol{\lambda}_{\ell}, n^*) = \prod_{i=1}^{t} \binom{n^* - \sum_{j=1}^{i-1} \lambda_{j}^* }{\lambda_{i}^*} , \end{equation} where $\lambda_{i}^* = \lambda_{\ell,i}$ is a simplified notation that suppresses the index of the partition vector. It is a product of $t$ binomial coefficients, in which each term chooses (and removes) an unordered subset of $\lambda_{i}^*$ values from the remaining values in the set of $n^*$ values. Let us take $\boldsymbol{\Lambda}_{3}$ for example. The numbers of combinations for partition vectors $\boldsymbol{\lambda}_{1}$, $\boldsymbol{\lambda}_{2}$ and $\boldsymbol{\lambda}_{3}$ are computed as follows: \begin{equation*} \operatorname{comb}(\boldsymbol{\lambda}_{1}, n^*) = \binom{n^*}{3}\binom{n^* - 3}{0}\binom{n^* - 3 - 0}{0} , \end{equation*} \begin{equation*} \operatorname{comb}(\boldsymbol{\lambda}_{2}, n^*) = \binom{n^*}{1}\binom{n^* - 1}{1}\binom{n^* - 1 - 1}{0} , \end{equation*} \begin{equation*} \operatorname{comb}(\boldsymbol{\lambda}_{3}, n^*) = \binom{n^*}{0}\binom{n^* - 0}{0}\binom{n^* - 0 - 0}{1} . \end{equation*} The number of combinations can be approximated by \begin{equation} \begin{split} & \prod_{i=1}^{t} \binom{n^* - \sum_{j=1}^{i-1} \lambda_{j}^* }{\lambda_{i}^*} \\ = & \binom{n^*}{\lambda_1^*} \binom{n^* - \lambda_1^*}{\lambda_2^*} \cdots \binom{n^* - \lambda_1^* - \lambda_2^* - \dots - \lambda_{t-1}^* }{\lambda_{t}^*}\\ = & \frac{n^*!}{\lambda_1^*!(n^*-\lambda_1^*)!} \times \frac{(n^*-\lambda_1^*)!}{\lambda_2^*!(n^*- \lambda_1^*-\lambda_2^*)!} \times \dots \\ = & \frac{n^*!}{\lambda_1^*!\lambda_2^*!\dots \lambda_{t}^*!(n^*-\sum_{j=1}^{t} \lambda_{j}^*)!}\\ = & \frac{n^*(n^*-1)(n^*-2)\dots(n^*-(\sum_{j=1}^{t} \lambda_{j}^* - 1))}{\lambda_1^*!\lambda_2^*!\dots \lambda_{t}^*!}\\ \leq & \frac{n^*(n^*-1)(n^*-2)\dots(n^*- (t - 1))}{\lambda_1^*!\lambda_2^*!\dots \lambda_{t}^*!} \approx n^{t} . \end{split} \end{equation} Hence, the complexity of this sped-up brute-force algorithm is approximately equal to \begin{equation} \sum_{t=1}^{\vartheta} \sum_{\ell=1}^{\operatorname{part}(t)} \operatorname{comb}(\boldsymbol{\lambda}_{\ell}, n^*) \approx \sum_{t=1}^{\vartheta} \operatorname{part}(t)\cdot n^{t} \in \mathcal{O}(n^c) . \end{equation}
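For the small instances considered here, this enumeration is readily implemented. Below is a sketch, assuming the \texttt{capacity} and \texttt{distortion} helpers from the earlier sketch; distributing $t \leq \vartheta$ extra links over the $n^*$ values via multisets visits exactly the feasible combinations counted above.
\begin{verbatim}
from collections import Counter
from itertools import combinations_with_replacement

def brute_force(a, theta, payload):
    # Enumerate every feasible assignment (sum_i x_i <= theta) and keep the
    # minimum-distortion one whose capacity reaches the payload.
    n_star = len(a)
    best_d, best_x = float("inf"), None
    for t in range(theta + 1):  # total number of extra links spent
        for links in combinations_with_replacement(range(n_star), t):
            x = [0] * n_star
            for i, k in Counter(links).items():
                x[i] = k
            if capacity(a, x) >= payload:
                d = distortion(a, x)
                if d < best_d:
                    best_d, best_x = d, x
    return best_d, best_x
\end{verbatim}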
\begin{figure}[t] \centerline{\includegraphics[width=0.9\columnwidth]{Figures/onehot_explained.pdf}} \caption{Example of binary integer decision variable.} \label{fig:onehot} \end{figure} \section{Linearisation}\label{sec:linear} The difficulty of our optimisation problem lies in the nonlinear nature of the capacity constraint and the distortion objective. To apply off-the-shelf optimisation tools, we have to tackle these nonlinearities. \subsection{Logarithmic Capacity Constraint} The capacity constraint involves the logarithm of the variables, $\log_2(x_i+1)$. The logarithmic function is nonlinear. A useful linearisation trick is to remodel the problem with binary integer variables. We binarise each decision variable $x_i$ with the domain $[0, \vartheta]$ into a 0/1 (one-hot) vector of length $\vartheta + 1$, as illustrated in Figure~\ref{fig:onehot}. The vector consists of 0s with the exception of a single 1 whose position indicates the value of $x_i$; that is, \begin{equation} \mathbf{x}_i = [\mathrm{x}_i^0, \cdots, \mathrm{x}_i^{\vartheta}] \in \{0,1\}^{\vartheta + 1} , \end{equation} such that \begin{equation} \mathds{1} \cdot \mathbf{x}_i^{\intercal} = 1, \quad \forall i = 0, \dots ,n . \end{equation} We can retrieve $x_i$ by the dot product of vectors \begin{equation} x_i = [0,\cdots,\vartheta] \cdot \mathbf{x}_i^{\intercal} = \mathbf{v}\mathbf{x}_i^{\intercal} . \end{equation} Accordingly, the quota constraint becomes \begin{equation} \sum_{i=0}^n \mathbf{v}\mathbf{x}_i^{\intercal} \leq \vartheta . \end{equation} In a similar manner, the logarithm can be derived by the dot product of vectors \begin{equation} \begin{split} \log_2(x_i+1) &= \left[ \log_2(0+1), \cdots, \log_2(\vartheta+1) \right] \cdot \mathbf{x}_i^{\intercal}\\ &= \mathbf{v}_{\operatorname{log}} \mathbf{x}_i^{\intercal} . \end{split} \end{equation} Hence, we rewrite the capacity constraint as \begin{equation} \mathfrak{C} = \sum_{i=0}^n a_i \mathbf{v}_{\operatorname{log}} \mathbf{x}_i^{\intercal} . \end{equation}
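As a quick sanity check of these dot-product identities (with $\vartheta = 4$ chosen arbitrarily):
\begin{verbatim}
import numpy as np

theta = 4
v     = np.arange(theta + 1)               # [0, 1, ..., theta]
v_log = np.log2(np.arange(1, theta + 2))   # [log2(1), ..., log2(theta + 1)]

x_i = 3
onehot = np.eye(theta + 1, dtype=int)[x_i]  # [0, 0, 0, 1, 0]

assert v @ onehot == x_i                              # retrieves x_i
assert np.isclose(v_log @ onehot, np.log2(x_i + 1))   # linearised logarithm
\end{verbatim}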
\subsection{Quadratic Distortion Objective} The distortion objective involves three nonlinear terms $x_i^2$, $y_i^2$ and $x_i y_i$. These terms are quadratic functions of the variables. The first term can be approached by the dot product as before; that is, \begin{equation} x_i^2 = [0^2, \dots, \vartheta^2]\cdot \mathbf{x}_i^{\intercal} = \mathbf{v}_{\operatorname{sq}} \mathbf{x}_i^{\intercal} . \end{equation} The remaining two terms contain the partial sum of variables $y_i$, which is computed by \begin{equation} y_i = \sum_{j=0}^{i-1} \mathbf{v}\mathbf{x}_j^{\intercal} . \end{equation} To linearise the univariate quadratic term $y_i^2$ and the bivariate quadratic term $x_i y_i$, we introduce two non-negative continuous slack variables $z_{y_i^2} \geq 0$ and $z_{x_iy_i} \geq 0$. Replacing the quadratic terms with the dot product and the slack variables results in a linear distortion objective \begin{equation} \mathfrak{D} = \sum_{i=0}^n a_i \left( \frac{1}{3}\mathbf{v}_{\operatorname{sq}} \mathbf{x}_i^{\intercal} + \frac{1}{6}x_i + z_{x_iy_i} + z_{y_i^2} \right) . \end{equation} We begin by solving this mixed-integer linear programming problem, which does not yet reflect the quadratic terms regarding the cumulative distortion, and obtain an initial solution for $\tilde{x}_i$. The slack variables would be zero because the objective is to minimise distortion. To make the slack variables reflect the quadratic terms properly, we add the following constraints \begin{equation} \begin{alignedat}{2} z_{y_i^2} &\geq y_i^2,\\ z_{x_iy_i} &\geq x_iy_i. \end{alignedat} \end{equation} In this way, we reformulate the problem with a nonlinear objective into a problem with a linear objective and nonlinear constraints. We make use of the solution obtained previously to linearise these nonlinear constraints and solve the mixed-integer linear programming problem iteratively. To begin with, we express the variables in terms of the previous solution: \begin{equation} \begin{split} x_i &= \tilde{x}_{i} + \delta_{x_i} ,\\ y_i &= \tilde{y}_{i} + \delta_{y_i} , \end{split} \end{equation} where $\tilde{x}_{i}$ and $\tilde{y}_{i}$ are treated as constants. Then, we apply the Taylor series to approximate the univariate quadratic term as \begin{equation} \begin{split} f(y_i) &= f(\tilde{y}_{i} + \delta_{y_i}) \\ &= f(\tilde{y}_{i}) + f^{\prime}(\tilde{y}_{i}) \delta_{y_i} + \cdots \\ &= \tilde{y}_{i}^2 + 2\tilde{y}_{i}\delta_{y_i} + \cdots \\ &\approx \tilde{y}_{i}^2 + 2\tilde{y}_{i}(y_i - \tilde{y}_{i})\\ &= 2\tilde{y}_{i}y_i - \tilde{y}_{i}^2 , \end{split} \end{equation} and similarly the bivariate quadratic term as \begin{equation} \begin{split} f(x_i, y_i) &= f(\tilde{x}_{i} +\delta_{x_i}, \tilde{y}_{i} + \delta_{y_i})\\ &= f(\tilde{x}_{i},\tilde{y}_{i}) + \frac{\partial f}{\partial x_i} \delta_{x_i} + \frac{\partial f}{\partial y_i} \delta_{y_i} + \cdots \\ &= \tilde{x}_{i}\tilde{y}_{i} + \tilde{y}_{i}\delta_{x_i} + \tilde{x}_{i}\delta_{y_i} + \cdots \\ &\approx \tilde{x}_{i}\tilde{y}_{i} + \tilde{x}_{i}(y_i - \tilde{y}_{i}) + \tilde{y}_{i}(x_i - \tilde{x}_{i})\\ &= \tilde{x}_{i}y_i + \tilde{y}_{i}x_i - \tilde{x}_{i}\tilde{y}_{i} . \end{split} \end{equation} As a result, the nonlinear constraints are transformed into the linear constraints \begin{equation} \begin{alignedat}{2} 2\tilde{y}_{i}y_i - z_{y_i^2} &\leq \tilde{y}_{i}^2 ,\\ \tilde{x}_{i}y_i + \tilde{y}_{i}x_i - z_{x_iy_i} &\leq \tilde{x}_{i}\tilde{y}_{i} . \end{alignedat} \end{equation} To recapitulate, the nonlinear discrete optimisation problem is approached by means of an iterative method that solves a mixed-integer linear programming problem with binary integer variables and non-negative continuous slack variables: \begin{equation*} \begin{alignedat}{3} & \text{min} & \enspace & \mathfrak{D} = \sum_{i=0}^n a_i \left( \frac{1}{3}\mathbf{v}_{\operatorname{sq}} \mathbf{x}_i^{\intercal} + \frac{1}{6}\mathbf{v} \mathbf{x}_i^{\intercal} + z_{x_iy_i} + z_{y_i^2} \right) ,\\ & \text{s.t.} & \enspace & \mathfrak{C} = \sum_{i=0}^n a_i \mathbf{v}_{\operatorname{log}} \mathbf{x}_i^{\intercal} \geq \text{payload} ,\\ &&& \sum_{i=0}^n \mathbf{v}\mathbf{x}_i^{\intercal} \leq \vartheta ,\\ &&& \mathds{1} \cdot \mathbf{x}_i^{\intercal} = 1, \quad & \hspace{-2.0cm} \forall i = 0, \dots ,n ,\\ &&& 2\tilde{y}_{i}y_i - z_{y_i^2} \leq \tilde{y}_{i}^2, \quad & \hspace{-2.0cm} \forall i=0, \dots, n ,\\ &&& \tilde{x}_{i}y_i + \tilde{y}_{i}x_i - z_{x_iy_i} \leq \tilde{x}_{i}\tilde{y}_{i}, \quad &\hspace{-2.0cm} \forall i=0, \dots, n ,\\ & \text{var.} & \enspace & \mathbf{x}_i \in \{0,1\}^{\vartheta + 1}, \quad & \hspace{-2.0cm} \forall i = 0, \dots ,n ,\\ &&& z_{y_i^2} \geq 0, \quad & \hspace{-2.0cm} \forall i = 0, \dots ,n ,\\ &&& z_{x_iy_i}\geq 0, \quad & \hspace{-2.0cm} \forall i = 0, \dots ,n ,\\ & \ast & & x_i = \mathbf{v} \mathbf{x}_i^{\intercal} \quad \& \quad y_i = \sum_{j=0}^{i-1} \mathbf{v}\mathbf{x}_j^{\intercal} . \end{alignedat} \end{equation*}
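One round of this iterative scheme can be expressed with an off-the-shelf modeller. The following is a minimal sketch, assuming the open-source PuLP package and its default CBC solver (a convenience choice, not a requirement of the method); \texttt{a}, \texttt{theta} and \texttt{payload} are as in the earlier sketches, and \texttt{x\_prev} holds the previous solution $\tilde{x}_i$ (all zeros initially, which reproduces the first-stage problem in which the slack variables vanish).
\begin{verbatim}
import numpy as np
import pulp

def milp_round(a, theta, payload, x_prev):
    n1 = len(a)
    v, v_log = np.arange(theta + 1), np.log2(np.arange(1, theta + 2))
    v_sq = np.arange(theta + 1) ** 2
    y_prev = np.concatenate(([0], np.cumsum(x_prev)[:-1]))

    prob = pulp.LpProblem("revstego", pulp.LpMinimize)
    X = [[pulp.LpVariable(f"x_{i}_{k}", cat="Binary")
          for k in range(theta + 1)] for i in range(n1)]
    zy = [pulp.LpVariable(f"zy_{i}", lowBound=0) for i in range(n1)]   # ~ y_i^2
    zxy = [pulp.LpVariable(f"zxy_{i}", lowBound=0) for i in range(n1)] # ~ x_i*y_i

    x = [pulp.lpSum(int(v[k]) * X[i][k] for k in range(theta + 1))
         for i in range(n1)]
    y = [pulp.lpSum(x[j] for j in range(i)) for i in range(n1)]

    # linearised objective: (1/3) v_sq.x + (1/6) v.x + slack terms
    prob += pulp.lpSum(a[i] * (pulp.lpSum(float(v_sq[k]) * X[i][k]
                                          for k in range(theta + 1)) / 3
                               + x[i] / 6 + zxy[i] + zy[i])
                       for i in range(n1))
    prob += pulp.lpSum(a[i] * pulp.lpSum(float(v_log[k]) * X[i][k]
                                         for k in range(theta + 1))
                       for i in range(n1)) >= payload      # capacity
    prob += pulp.lpSum(x) <= theta                          # quota
    for i in range(n1):
        prob += pulp.lpSum(X[i]) == 1                       # one-hot
        # Taylor-linearised quadratic constraints around the previous solution
        prob += 2 * float(y_prev[i]) * y[i] - zy[i] <= float(y_prev[i]) ** 2
        prob += (float(x_prev[i]) * y[i] + float(y_prev[i]) * x[i]
                 - zxy[i] <= float(x_prev[i]) * float(y_prev[i]))

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [int(round(pulp.value(xi))) for xi in x]
\end{verbatim}
Starting from \texttt{x = [0] * len(a)} and repeating \texttt{x = milp\_round(a, theta, payload, x)} until \texttt{x} stabilises would carry out the iteration described above.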
\begin{figure*}[t!] \centering \subfloat[Aeroplane]{\includegraphics[width=0.5\columnwidth]{Figures/exp/img_hist10001.pdf}} \hfil \subfloat[Lena]{\includegraphics[width=0.5\columnwidth]{Figures/exp/img_hist10002.pdf}} \hfil \subfloat[Mandrill]{\includegraphics[width=0.5\columnwidth]{Figures/exp/img_hist10003.pdf}} \hfil \subfloat[Peppers]{\includegraphics[width=0.5\columnwidth]{Figures/exp/img_hist10004.pdf}} \caption{Absolute error histograms with highlighted empty bins.} \label{fig:img_hist} \end{figure*} \begin{figure*}[t!] \vspace{0.5cm} \centering \subfloat[Aeroplane ($\vartheta=1$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10001_n55_theta1.pdf}} \hfil \subfloat[Lena ($\vartheta=1$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10002_n55_theta1.pdf}} \hfil \subfloat[Mandrill ($\vartheta=1$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10003_n55_theta1.pdf}} \hfil \subfloat[Peppers ($\vartheta=1$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10004_n55_theta1.pdf}} \\%theta 2 \subfloat[Aeroplane ($\vartheta=2$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10001_n55_theta2.pdf}} \hfil \subfloat[Lena ($\vartheta=2$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10002_n55_theta2.pdf}} \hfil \subfloat[Mandrill ($\vartheta=2$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10003_n55_theta2.pdf}} \hfil \subfloat[Peppers ($\vartheta=2$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10004_n55_theta2.pdf}} \\%theta 3 \subfloat[Aeroplane ($\vartheta=3$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10001_n55_theta3.pdf}} \hfil \subfloat[Lena ($\vartheta=3$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10002_n55_theta3.pdf}} \hfil \subfloat[Mandrill ($\vartheta=3$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10003_n55_theta3.pdf}} \hfil \subfloat[Peppers ($\vartheta=3$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10004_n55_theta3.pdf}} \\%theta 4 \subfloat[Aeroplane ($\vartheta=4$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10001_n55_theta4.pdf}} \hfil \subfloat[Lena ($\vartheta=4$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10002_n55_theta4.pdf}} \hfil \subfloat[Mandrill ($\vartheta=4$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10003_n55_theta4.pdf}} \hfil \subfloat[Peppers ($\vartheta=4$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10004_n55_theta4.pdf}} \caption{Payload\textendash distortion curves for optimality analysis against brute-force search.} \label{fig:optim_analysis} \end{figure*} \section{Simulation}\label{sec:sim} We carry out an experimental analysis of the optimality of the proposed method benchmarked against the brute-force method. The experimental setup is described as follows. We apply the residual dense network (RDN) as the predictive model~\cite{2018_8578360}. This neural network model is characterised by a tangled labyrinth of residual and dense connections, and has its origin in low-level computer vision (e.g. super-resolution, denoising and deblurring). The model is trained on the BOSSbase dataset~\cite{2011_BOSSbase}, which originated from an academic competition for digital steganography. This dataset comprises a large collection of greyscale photographs covering a wide variety of subjects and scenes. The algorithms are tested on selected images from the USC-SIPI dataset~\cite{2006_USC_SIPI}. All the images are resized to a resolution of $256 \times 256$ pixels via Lanczos resampling~\cite{1979_Lanczos}. The border pixels, along with half of the remaining pixels, are designated as the context. Accordingly, the number of query pixels equals $(254 \times 254)/2$. We report both distortion and capacity normalised by the number of query pixels.
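For reference, the histogram $a_i$ that feeds the optimisation can be assembled from the query pixels and their predictions as follows (a hypothetical helper, assuming NumPy; \texttt{q} and \texttt{q\_pred} are the true and predicted query intensities).
\begin{verbatim}
import numpy as np

def abs_error_histogram(q, q_pred, n=55):
    # a_i = occurrence of absolute prediction error i, for i = 0..n.
    # Values above n (rare when n is chosen conservatively) are left
    # outside the optimised mapping.
    eps = np.abs(q.astype(np.int64) - q_pred.astype(np.int64)).ravel()
    return np.bincount(eps, minlength=n + 1)[: n + 1]
\end{verbatim}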
Figure~\ref{fig:img_hist} shows the absolute error distribution for each test image. It is observed that the largest error values of non-zero occurrence lie between about $30$ and $50$. We conservatively set $n=55$ in the sense that nearly every value of non-zero occurrence is included. We implement the algorithms with respect to different quota settings ($\vartheta = 1, 2, 3, 4$). Figure~\ref{fig:optim_analysis} evaluates the performance of the proposed optimisation algorithm. Each point of the curve indicates the minimum distortion of a solution under a specific capacity constraint. In the vast majority of cases, the solutions found by the proposed method are identical to those given by the brute-force method. When the method fails to find the optimal solutions, the objective values reached are within a small distance of the optimal ones. Hence, even though optimal solutions cannot always be guaranteed, the results suggest that the proposed method can achieve near-optimal performance. \section{Conclusion}\label{sec:conclusion} This paper studies a mathematical optimisation problem in reversible steganography. We formulate prediction error coding as a nonlinear discrete optimisation problem. The objective is to minimise distortion under a constraint on capacity. We discuss the complexity of a brute-force method and present linearisation techniques for the logarithmic capacity constraint and the quadratic distortion objective. The problem is thereby transformed into a mixed-integer linear programming problem, with binary integer variables and slack variables, that is solved iteratively. Our simulation results validate the near-optimality of the proposed algorithm. \bibliographystyle{Transactions-Bibliography/IEEEtran}
Yet, steganography distortion, albeit generally imperceptible to human sensory systems, might not be admissible in some fidelity-sensitive situations such as legal proceedings, medical diagnosis, and military reconnaissance. This is where the notion of reversible computing comes into play~\cite{2001_Fridrich_Invertible, 2003_1196739, 2004_1315703, 2007_4291553, 2013_6329433}. A fundamental element of reversible steganography, in common with lossless compression, is predictive analytics~\cite{1948_6773024, 1056936, 623176}. Prediction error modulation is a cutting-edge reversible steganographic technique composed of an analytics module and a coding module~\cite{2005_1381493, 2007_4099409, 2008Fallahpour, 2009_4811982, 2011_5762603, 2014_6746082, Hwang:2016aa}. The recent development of deep learning has advanced the frontier of reversible steganography. It has been reported that deep neural networks can be applied as powerful predictive models~\cite{2020_9245471, Hu:2021aa, Chang:2021aa, chang2022deep}. Despite an inspiring progress in the analytics module, the design of the coding module based largely on heuristics. While there are studies on \emph{end-to-end} deep learning that attempts to use neural networks for automatic reversible computing, perfect reversibility cannot be promised~\cite{Jung:2019aa, Duan:2019aa, Lu_2021_CVPR}. From a certain point of view, it is hard for neural networks, as a monolithic black box, to learn the intricate logics of reversible computing. Explainability of intelligent machinery is an ongoing open research topic~\cite{Castelvecchi:2016aa, 8631448, Barredo-Arrieta:2020aa, 9369420}. Therefore, it seems advisable to follow the \emph{modular} framework at the time of writing. This study is in pursuit of developing an optimal coding for reversible steganography. We model reversible steganographic coding as a mathematical optimisation problem and propose an optimisation algorithm for addressing the nonlinear nature of this problem. The remainder of this paper is organised as follows. Section~\ref{sec:back} outlines the background regarding reversible steganography. Section~\ref{sec:optim} formulates the nonlinear discrete optimisation problem and discusses the complexity of brute-force search. Section~\ref{sec:linear} presents linearisation techniques for tackling the nonlinear discrete optimisation problem. Section~\ref{sec:sim} analyses the optimality of solutions through simulation experiments. Section~\ref{sec:conclusion} provides concluding remarks. \section{Background}\label{sec:back} Prediction error modulation is a reversible steganographic technique that consists of an analytics module and a coding module. The analytics module begins by splitting a cover image into the \emph{context} and \emph{query} sets, denoted by $\boldsymbol{c}$ and $\boldsymbol{q}$, respectively. A conventional way is to divide pixels into two halves according to a chequered pattern. Then, a predictive model is applied to predict the query pixel intensities from the context pixel intensities. A contemporary practice is to employ an artificial neural network in computer vision. The coding module embeds a message $\boldsymbol{\omega}$ into the cover image by modulating the prediction errors $\boldsymbol{\varepsilon} = \boldsymbol{q} - \tilde{\boldsymbol{q}}$. The modulated errors $\boldsymbol{\varepsilon}^{\prime}$ is then added to the predicted intensities, causing distortion to the query pixels. 
The stego image is created by merging the context set $\boldsymbol{c}$ and the modulated query set $\boldsymbol{q}^{\prime}$. The decoding procedure is similar to the encoding procedure. It begins by predicting the query pixel intensities. Since the context set is kept unchanged, the prediction in the decoding phase is guaranteed to be identical to that in the encoding phase given the same predictive model. The message is extracted and the query set is recovered by demodulating the prediction errors. The image is reversed to its original state by merging the context set and the recovered query set. The procedures for encoding and decoding are depicted schematically in Figure~\ref{fig:sys} and also provided in Algorithms~\ref{alg:enc} and~\ref{alg:dec}. We would like to note that the message may contain some auxiliary information for handling pixel intensity overflow. This paper does not go into details of every aspect of the stego-system; instead, our study focuses on mathematical optimisation of reversible steganographic coding. \begin{figure}[t] \centerline{\includegraphics[width=0.85\columnwidth]{Figures/OptimRevStego.pdf}} \caption{Workflow of reversible steganography with prediction error modulation.} \label{fig:sys} \end{figure} \begin{figure}[t] \begin{algorithm}[H] \centering \caption{Encoding}\label{alg:enc} \begin{algorithmic} \Input $\text{cover}$, $\boldsymbol{\omega}$ \Output $\text{stego}$ \\ \LineComment{analytics module} \State $[\boldsymbol{c}, \tilde{\boldsymbol{q}}] \gets \operatorname{split}(\text{cover})$ \State $[\tilde{\boldsymbol{c}}, \tilde{\boldsymbol{q}}] \gets \operatorname{predict}([\boldsymbol{c},\boldsymbol{0}])$ \\ \LineComment{coding module} \State $\boldsymbol{\varepsilon} \gets \boldsymbol{q} - \tilde{\boldsymbol{q}}$ \State $\boldsymbol{\varepsilon}^{\prime} \gets \operatorname{modulate}(\boldsymbol{\varepsilon}, \boldsymbol{\omega})$ \State $\boldsymbol{q}^{\prime} \gets \tilde{\boldsymbol{q}} + \boldsymbol{\varepsilon}^{\prime}$ \State $\text{stego} \gets \operatorname{merge}(\boldsymbol{c}, \boldsymbol{q}^{\prime})$ \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \centering \caption{Decoding}\label{alg:dec} \begin{algorithmic} \Input $\text{stego}$ \Output $\text{cover}$, $\boldsymbol{\omega}$ \\ \LineComment{analytics module} \State $[\boldsymbol{c}, \boldsymbol{q}^{\prime}] = \operatorname{split}(\text{stego})$ \State $[\tilde{\boldsymbol{c}}, \tilde{\boldsymbol{q}}] = \operatorname{predict}([\boldsymbol{c}, \boldsymbol{0}])$ \\ \LineComment{coding module} \State $\boldsymbol{\varepsilon}^{\prime} = \boldsymbol{q}^{\prime} - \tilde{\boldsymbol{q}}$ \State $[\boldsymbol{\varepsilon}, \boldsymbol{\omega}] = \operatorname{demodulate}(\boldsymbol{\varepsilon}^{\prime})$ \State $\boldsymbol{q} = \tilde{\boldsymbol{q}} + \boldsymbol{\varepsilon}$ \State $\text{cover} = \operatorname{merge}(\boldsymbol{c}, \boldsymbol{q})$ \end{algorithmic} \end{algorithm} \end{figure} \section{Nonlinear Discrete Optimisation}\label{sec:optim} The essence of reversible steganographic coding is to designate one or multiple error values as the carrier and to determine how the values change to represent different message digits. A conventional heuristic for reversible steganographic coding is to choose the prediction errors of the peak frequency as the carrier. While the peak frequency implies the highest capacity, this capacity-greedy strategy is not necessarily optimal in terms of minimising distortion. 
\subsection{Problem Modelling} According to the typical law of error, the frequency of an error can be expressed as an exponential function of its numerical magnitude, disregarding sign~\cite{Wilson:1923aa}. In other words, the frequency distribution of prediction errors is expected to centre around zero. In general, a smaller absolute error tends to have a higher occurrence. A special exception is that the occurrence of zero might be lower than the occurrence of a certain absolute error considering that the latter is the sum of both positive and negative error occurrences. Consider an absolute error histogram as shown in Figure~\ref{fig:error_distrib}. The problem of reversible steganographic coding is to establish a mapping between the values in $[0, n]$ and the values in $[0, n+\vartheta]$, where $\vartheta$ denotes the extra quota and is typical defined to be less than or equal to the number of successive empty bins in the absolute error histogram. Encoding is a \emph{one-to-many} mapping that links a cover value to one or more stego values. A message digit can only be represented if the connections are greater than one. Different cover value can never yield the same stego value in order to avoid an overlap between values and ambiguity in decoding. Therefore, a cover value of non-zero occurrence may also be changed to a different stego value even if it is not assigned to represent any digit. We confine that each cover value can only be mapped to the nearest available stego values since a \emph{non-cross} mapping reduces the problem dimension drastically. We choose to start from the mapping of value $0$ to the mapping of value $n$ because it is advisable to allocate a slighter cumulative distortion to a value of higher occurrence. An example of a cover/stego mapping is illustrated in Figure~\ref{fig:mapping}. \begin{figure}[t] \centerline{\includegraphics[width=0.97\columnwidth]{Figures/ErrorDistrib.pdf}} \caption{Example of absolute error distribution with highlighted zero occurrences.} \label{fig:error_distrib} \end{figure} Let us denote by $a_i$ the frequency of the value $i$ and $x_i$ the number of extra cover-to-stego links for the value $i$. The total number of links for the value $i$ equals $x_i + 1$. The number of bits can be represented by the value $i$ is $\log_2(x_i + 1)$ and thus the capacity is computed by \begin{equation} \mathfrak{C} = \sum_{i=0}^n a_i \log_2(x_i + 1) . \end{equation} Given the total number of cover-to-stego links $x_i$, the probability of changing a cover value to each stego value is $1/(x_i + 1)$. The deviations of the first to the last stego value are $y_i+ 0$ to $y_i + x_i$ respectively, where $y_i$ denotes the sum of all the previous extra links (i.e. the cumulative deviation). Hence, the expected distortion in terms of the squared deviations is computed by \begin{equation} \mathfrak{D} = \sum_{i=0}^n a_i \left(\frac{(0+ y_i)^2 + \dots + (x_i + y_i)^2}{x_i + 1}\right) , \end{equation} where \begin{equation} y_i = \sum_{j=0}^{i-1} x_j . 
\end{equation} We can simplify the algebraic expression by \begin{equation} \begin{split} &\frac{(0+ y_i)^2 + \dots + (x_i + y_i)^2}{x_i + 1} \\ = &\frac{ (0^2 + 2y_i\cdot 0 + y_i^2) + \dots +(x_i^2 + 2y_i\cdot x_i + y_i^2) }{x_i + 1} \\ = &\frac{ (0^2 + \dots + x_i^2) + 2y_i(0 + \dots + x_i) + y_i^2(x_i+1) }{x_i + 1}\\ = &\frac{x_i(x_i+1)(2x_i+1)}{6(x_i+1)} + \frac{2y_ix_i(x_i+1)}{2(x_i+1)} + \frac{y_i^2(x_i+1)}{x_i+1}\\ = &\frac{1}{3}x_i^2 + \frac{1}{6}x_i + x_iy_i + y_i^2 .\\ \end{split} \end{equation} The reason for computing squared deviations rather than absolute deviations is that image quality is often measured by the peak signal-to-noise ratio (PSNR), which is defined via the mean squared error (MSE). Our goal is to solve for the decision variables $x_i \in \{0,\dots \vartheta\}$ that minimise the distortion objective subject to the capacity constraint. The sum of all the extra cover-to-stego links cannot exceed the quota $\vartheta$. To summarise, the mathematical optimisation problem for reversible steganographic coding is \begin{equation*} \begin{alignedat}{3} & \text{min} & \enspace & \mathfrak{D} = \sum_{i=0}^n a_i \left( \frac{1}{3}x_i^2 + \frac{1}{6}x_i + x_iy_i + y_i^2 \right) ,\\ & \text{s.t.} & & \mathfrak{C} = \sum_{i=0}^n a_i \log_2(x_i + 1) \geq \text{payload} ,\\ &&& \sum_{i=0}^n x_i \leq \vartheta ,\\ & \text{var.} && x_i \in \{0,\cdots,\vartheta\}, \quad & \hspace{-0.5cm} \forall i=0,\dots,n . \end{alignedat} \end{equation*} \begin{figure}[t] \centerline{\includegraphics[width=0.97\columnwidth]{Figures/Mapping_Hist.pdf}} \caption{Example of reversible steganographic coding.} \label{fig:mapping} \end{figure} \subsection{Brute-Force Search} Brute-force search is a baseline method for benchmarking optimisation algorithms. The solution space that exhausts all possible combinations of the decision variables is equal to $(\vartheta+1)^{n+1} \in \mathcal{O}(c^n)$. By taking into account of the quota constraint, we can reduce the solution space from the number of possible combinations to the number of feasible combinations. In number theory and combinatorics, the partition function $\operatorname{part}(t)$ computes the number of ways of writing $t$ as a sum of positive integers in $[1,t]$. Let $\boldsymbol{\Lambda}_{t}$ denote a matrix of $\operatorname{part}(t)$ rows and $t$ columns that enumerates all possible partitions: \begin{equation} \boldsymbol{\Lambda}_{t} = \begin{bNiceArray}{*{1}{c}} \boldsymbol{\lambda}_{1}\\ \vdots\\ \boldsymbol{\lambda}_{\operatorname{part}(t)}\\ \end{bNiceArray} = \begin{bNiceArray}{*{3}{c}}[] \lambda_{1,1} & \cdots & \lambda_{1,t}\\ \vdots & \ddots & \vdots \\ \lambda_{\operatorname{part}(t),1} & \cdots & \lambda_{\operatorname{part}(t),t} \\ \end{bNiceArray}. \end{equation} Each vector $\boldsymbol{\lambda}_{\ell}$ represents a possible partition in which each element is the quantity of a candidate integer (i.e. the summand). 
For example, $\boldsymbol{\Lambda}_{2}$, $\boldsymbol{\Lambda}_{3}$ and $\boldsymbol{\Lambda}_{4}$ are \begin{equation*} \begin{bNiceArray}{*{2}{c}}[first-row,last-col,code-for-first-row=\scriptscriptstyle,code-for-last-col=\scriptscriptstyle] 1 & 2 \\ 2 & 0 & \boldsymbol{\lambda}_{1} \\ 0 & 1 & \boldsymbol{\lambda}_{2} \\ \end{bNiceArray},\quad \begin{bNiceArray}{*{3}{c}}[first-row,last-col,code-for-first-row=\scriptscriptstyle,code-for-last-col=\scriptscriptstyle] 1 & 2 & 3 & \\ 3 & 0 & 0 & \boldsymbol{\lambda}_{1} \\ 1 & 1 & 0 & \boldsymbol{\lambda_{2}} \\ 0 & 0 & 1 & \boldsymbol{\lambda_{3}} \\ \end{bNiceArray},\quad \begin{bNiceArray}{*{4}{c}}[first-row,last-col,code-for-first-row=\scriptscriptstyle,code-for-last-col=\scriptscriptstyle] 1 & 2 & 3 & 4 & \\ 4 & 0 & 0 & 0 & \boldsymbol{\lambda_{1}} \\ 2 & 1 & 0 & 0 & \boldsymbol{\lambda_{2}} \\ 0 & 2 & 0 & 0 & \boldsymbol{\lambda_{3}} \\ 1 & 0 & 1 & 0 & \boldsymbol{\lambda_{4}} \\ 0 & 0 & 0 & 1 & \boldsymbol{\lambda_{5}} \\ \end{bNiceArray}. \end{equation*} The total number of feasible solutions can be calculated by adding up the number of feasible solutions given by each individual partition matrix from $\boldsymbol{\Lambda}_{1}$ to $\boldsymbol{\Lambda}_{\vartheta}$ (due to the quota constraint); that is, \begin{equation} \sum_{t=1}^{\vartheta} \operatorname{feasible}(\boldsymbol{\Lambda}_{t}, n^*) , \end{equation} where $n^* = n + 1$ denotes the number of integers in $[0, n]$. For each matrix $\boldsymbol{\Lambda}_{t}$, the number of feasible solutions is computed by summing up the number of possible combinations given by each partition vector $\boldsymbol{\lambda}_{\ell}$, denoted by \begin{equation} \operatorname{feasible}(\boldsymbol{\Lambda}_{t}, n^*) = \sum_{\ell=1}^{\operatorname{part}(t)} \operatorname{comb}(\boldsymbol{\lambda}_{\ell}, n^*) . \end{equation} A combination is a selection of values from a set of $n^*$ values based on a given partition vector, as expressed by \begin{equation} \operatorname{comb}(\boldsymbol{\lambda}_{\ell}, n^*) = \prod_{i=1}^{t} \binom{n^* - \sum_{j=1}^{i-1} \lambda_{j}^* }{\lambda_{i}^*} , \end{equation} where $\lambda_{i}^* = \lambda_{\ell,i}$ is the simplified notation regardless of the index of the partition vector. It is a product of $t$ binomial coefficients and each term is to choose (and remove) an unordered subset of $\lambda_{i}^*$ values from the remaining values in the set of $n^*$ values. Let us take $\boldsymbol{\Lambda}_{3}$ for example. The number of combinations for partition vectors $\boldsymbol{\lambda}_{1}$, $\boldsymbol{\lambda}_{2}$ and $\boldsymbol{\lambda}_{3}$ are computed as follows: \begin{equation*} \operatorname{comb}(\boldsymbol{\lambda}_{1}, n^*) = \binom{n^*}{3}\binom{n^* - 3}{0}\binom{n^* - 3 - 0}{0} , \end{equation*} \begin{equation*} \operatorname{comb}(\boldsymbol{\lambda}_{2}, n^*) = \binom{n^*}{1}\binom{n^* - 1}{1}\binom{n^* - 1 - 1}{0} , \end{equation*} \begin{equation*} \operatorname{comb}(\boldsymbol{\lambda}_{3}, n^*) = \binom{n^*}{0}\binom{n^* - 0}{0}\binom{n^* - 0 - 0}{1} . 
\end{equation*} The number of combinations can be approximated by \begin{equation} \begin{split} & \prod_{i=1}^{t} \binom{n^* - \sum_{j=1}^{i-1} \lambda_{j}^* }{\lambda_{i}^*} \\ = & \binom{n^*}{\lambda_1^*} \binom{n^* - \lambda_1^*}{\lambda_2^*} \cdots \binom{n^* - \lambda_1^* - \lambda_2^* - \dots - \lambda_{t-1}^* }{\lambda_{t}^*}\\ = & \frac{n^*!}{\lambda_1^*!(n^*-\lambda_1^*)!} \times \frac{(n^*-\lambda_1^*)!}{\lambda_2^*!(n^*- \lambda_1^*-\lambda_2^*)!} \times \dots \\ = & \frac{n^*!}{\lambda_1^*!\lambda_2^*!\dots \lambda_{t}^*!(n^*-\sum_{j=1}^{t} \lambda_{j}^*)!}\\ = & \frac{n^*(n^*-1)(n^*-2)\dots(n^*-(\sum_{j=1}^{t} \lambda_{j}^* - 1))}{\lambda_1^*!\lambda_2^*!\dots \lambda_{t}^*!}\\ \leq & \frac{n^*(n^*-1)(n^*-2)\dots(n^*- (t - 1))}{\lambda_1^*!\lambda_2^*!\dots \lambda_{t}^*!} \approx n^{t} . \end{split} \end{equation} Hence, the complexity of this speed-up brute-force algorithm is approximately equal to \begin{equation} \sum_{t=1}^{\vartheta} \sum_{\ell=1}^{\operatorname{part}(t)} \operatorname{comb}(\boldsymbol{\lambda}_{\ell}, n^*) \approx \sum_{t=1}^{\vartheta} \operatorname{part}(t)\cdot n^{t} \in \mathcal{O}(n^c) . \end{equation} \begin{figure}[t] \centerline{\includegraphics[width=0.9\columnwidth]{Figures/onehot_explained.pdf}} \caption{Example of binary integer decision variable.} \label{fig:onehot} \end{figure} \section{Linearisation}\label{sec:linear} The difficulty of our optimisation problem lies in the nonlinear nature of the capacity constraint and the distortion objective. To apply off-the-shelf optimisation tools, we have to tackle these nonlinearities. \subsection{Logarithmic Capacity Constraint} The capacity constraint involves the calculation of logarithm of variables $\log_2(x_i+1)$. The logarithmic function is nonlinear. A useful linearisation trick is to remodel the problem with binary integer variables. We binarise each decision variable $x_i$ with the domain $[0, \vartheta]$ into a 0/1 vector or a one-hot vector of length $\vartheta + 1$, as illustrated in Figure~\ref{fig:onehot}. The vector consists of 0s with the exception of a single 1 of which the position indicates the value of $x_i$; that is, \begin{equation} \mathbf{x}_i = [\mathrm{x}_i^0, \cdots, \mathrm{x}_i^{\vartheta}] \in \{0,1\}^{\vartheta + 1} , \end{equation} such that \begin{equation} \mathds{1} \cdot \mathbf{x}_i^{\intercal} = 1, \quad \forall i = 0, \dots ,n . \end{equation} We can retrieve $x_i$ by the dot product of vectors \begin{equation} x_i = [0,\cdots,\vartheta] \cdot \mathbf{x}_i^{\intercal} = \mathbf{v}\mathbf{x}_i^{\intercal} . \end{equation} Accordingly, the quota constraint becomes \begin{equation} \sum_{i=0}^n \mathbf{v}\mathbf{x}_i^{\intercal} \leq \vartheta . \end{equation} In a similar manner, the logarithm can be derived by the dot product of vectors \begin{equation} \begin{split} \log_2(x_i+1) &= \left[ \log_2(0+1), \cdots, \log_2(\vartheta+1) \right] \cdot \mathbf{x}_i^{\intercal}\\ &= \mathbf{v}_{\operatorname{log}} \mathbf{x}_i^{\intercal} . \end{split} \end{equation} Hence, we rewrite the capacity constraint as \begin{equation} \mathfrak{C} = \sum_{i=0}^n a_i \mathbf{v}_{\operatorname{log}} \mathbf{x}_i^{\intercal} . \end{equation} \subsection{Quadratic Distortion Objective} The distortion objective involves three nonlinear terms $x_i^2$, $y_i^2$ and $x_i y_i$. These terms are quadratic functions of variables. 
The first term can be approached by the dot product as before; that is \begin{equation} x_i^2 = [0^2, \dots, \theta^2]\cdot \mathbf{x}_i^{\intercal} = \mathbf{v}_{\operatorname{sq}} \mathbf{x}_i^{\intercal} . \end{equation} The remaining two terms contain the partial sum of variables $y_i$, which is computed by \begin{equation} y_i = \sum_{j=0}^{i-1} \mathbf{v}\mathbf{x}_j^{\intercal} . \end{equation} To linearise the univariate quadratic term $y_i^2$ and the bivariate quadratic term $x_i y_i$, we introduce two non-negative continuous slack variables $z_{y_i^2} \geq 0$ and $z_{x_iy_i} \geq 0$. Replacing the quadratic terms with the dot product and the slack variables results in a linear distortion objective \begin{equation} \mathfrak{D} = \sum_{i=0}^n a_i \left( \frac{1}{3}\mathbf{v}_{\operatorname{sq}} \mathbf{x}_i^{\intercal} + \frac{1}{6}x_i + z_{x_iy_i} + z_{y_i^2} \right) . \end{equation} We begin by solving this mixed-integer linear programming problem, which does not yet reflect the quadratic terms regarding the cumulative distortion, and obtain an initial solution for $\tilde{x}_i$. The slack variables would be zeros because the objective is to minimise distortion. To make the slack variables reflect the quadratic terms properly, we add the following constraints \begin{equation} \begin{alignedat}{2} z_{y_i^2} &\geq y_i^2,\\ z_{x_iy_i} &\geq x_iy_i. \end{alignedat} \end{equation} In this way, we reformulate the problem with a nonlinear objective into the problem with a linear objective and nonlinear constraints. We make use of the solution obtained previously to linearise these nonlinear constraints and solve the mixed-integer linear programming problem iteratively. To begin with, we express the variables in terms of the previous solution: \begin{equation} \begin{split} x_i &= \tilde{x}_{i} + \delta_{x_i} ,\\ y_i &= \tilde{y}_{i} + \delta_{y_i} , \end{split} \end{equation} where $\tilde{x}_{i}$ and $\tilde{y}_{i}$ are treated as constants. Then, we apply the Taylor series to approximate the univariate quadratic term as \begin{equation} \begin{split} f(y_i) &= f(\tilde{y}_{i} + \delta_{y_i}) \\ &= f(\tilde{y}_{i}) + f^{\prime}(\tilde{y}_{i}) \delta_{y_i} + \cdots \\ &= \tilde{y}_{i}^2 + 2\tilde{y}_{i}\delta_{i} + \cdots \\ &\approx \tilde{y}_{i}^2 + 2\tilde{y}_{i}(y_i - \tilde{y}_{i})\\ &= 2\tilde{y}_{i}y_i - \tilde{y}_{i}^2 , \end{split} \end{equation} and similarly the bivariate quadratic term as \begin{equation} \begin{split} f(x_i, y_i) &= f(\tilde{x}_{i} +\delta_{x_i}, \tilde{y}_{i} + \delta_{y_i})\\ &= f(\tilde{x}_{i},\tilde{y}_{i}) + \frac{\partial f}{\partial x_i} \delta_{x_i} + \frac{\partial f}{\partial y_i} \delta_{y_i} + \cdots \\ &= \tilde{x}_{i}\tilde{y}_{i} + \tilde{y}_{i}\delta_{x_i} + \tilde{x}_{i}\delta_{y_i} + \cdots \\ &\approx \tilde{x}_{i}\tilde{y}_{i} + \tilde{x}_{i}(y_i - \tilde{y}_{i}) + \tilde{y}_{i}(x_i - \tilde{x}_{i})\\ &= \tilde{x}_{i}y_i + \tilde{y}_{i}x_i - \tilde{x}_{i}\tilde{y}_{i} . \end{split} \end{equation} As a result, the nonlinear constraints are transformed into the linear constraints \begin{equation} \begin{alignedat}{2} 2\tilde{y}_{i}y_i - z_{y_i^2} &\leq \tilde{y}_{i}^2 ,\\ \tilde{x}_{i}y_i + \tilde{y}_{i}x_i - z_{x_iy_i} &\leq \tilde{x}_{i}\tilde{y}_{i} . 
\end{alignedat} \end{equation} To recapitulate, the nonlinear discrete optimisation problem is approached by means of an iterative method that solves a mixed-integer linear programming problem with binary integer variables and non-negative continuous slack variables: \begin{equation*} \begin{alignedat}{3} & \text{min} & \enspace & \mathfrak{D} = \sum_{i=0}^n a_i \left( \frac{1}{3}\mathbf{v}_{\operatorname{sq}} \mathbf{x}_i^{\intercal} + \frac{1}{6}\mathbf{v} \mathbf{x}_i^{\intercal} + z_{x_iy_i} + z_{y_i^2} \right) ,\\ & \text{s.t.} & \enspace & \mathfrak{C} = \sum_{i=0}^n a_i \mathbf{v}_{\operatorname{log}} \mathbf{x}_i^{\intercal} \geq \text{payload} ,\\ &&& \sum_{i=0}^n \mathbf{v}\mathbf{x}_i^{\intercal} \leq \vartheta ,\\ &&& \mathds{1} \cdot \mathbf{x}_i^{\intercal} = 1, \quad & \hspace{-2.0cm} \forall i = 0, \dots ,n ,\\ &&& 2\tilde{y}_{i}y_i - z_{y_i^2} \leq \tilde{y}_{i}^2, \quad & \hspace{-2.0cm} \forall i=0, \dots, n ,\\ &&& \tilde{x}_{i}y_i + \tilde{y}_{i}x_i - z_{x_iy_i} \leq \tilde{x}_{i}\tilde{y}_{i}, \quad &\hspace{-2.0cm} \forall i=0, \dots, n ,\\ & \text{var.} & \enspace & \mathbf{x}_i \in \{0,1\}^{\vartheta + 1}, \quad & \hspace{-2.0cm} \forall i = 0, \dots ,n ,\\ &&& z_{y_i^2} \geq 0, \quad & \hspace{-2.0cm} \forall i = 0, \dots ,n ,\\ &&& z_{x_iy_i}\geq 0, \quad & \hspace{-2.0cm} \forall i = 0, \dots ,n ,\\ & \ast & & x_i = \mathbf{v} \mathbf{x}_i^{\intercal} \quad \& \quad y_i = \sum_{j=0}^{i-1} \mathbf{v}\mathbf{x}_j^{\intercal} . \end{alignedat} \end{equation*} \begin{figure*}[t!] \centering \subfloat[Aeroplane]{\includegraphics[width=0.5\columnwidth]{Figures/exp/img_hist10001.pdf}} \hfil \subfloat[Lena]{\includegraphics[width=0.5\columnwidth]{Figures/exp/img_hist10002.pdf}} \hfil \subfloat[Mandrill]{\includegraphics[width=0.5\columnwidth]{Figures/exp/img_hist10003.pdf}} \hfil \subfloat[Peppers]{\includegraphics[width=0.5\columnwidth]{Figures/exp/img_hist10004.pdf}} \caption{Absolute error histograms with highlighted empty bins.} \label{fig:img_hist} \end{figure*} \begin{figure*}[t!] 
\vspace{0.5cm} \centering \subfloat[Aeroplane ($\vartheta=1$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10001_n55_theta1.pdf}} \hfil \subfloat[Lena ($\vartheta=1$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10002_n55_theta1.pdf}} \hfil \subfloat[Mandrill ($\vartheta=1$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10003_n55_theta1.pdf}} \hfil \subfloat[Peppers ($\vartheta=1$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10004_n55_theta1.pdf}} \\%theta 2
\subfloat[Aeroplane ($\vartheta=2$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10001_n55_theta2.pdf}} \hfil \subfloat[Lena ($\vartheta=2$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10002_n55_theta2.pdf}} \hfil \subfloat[Mandrill ($\vartheta=2$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10003_n55_theta2.pdf}} \hfil \subfloat[Peppers ($\vartheta=2$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10004_n55_theta2.pdf}} \\%theta 3
\subfloat[Aeroplane ($\vartheta=3$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10001_n55_theta3.pdf}} \hfil \subfloat[Lena ($\vartheta=3$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10002_n55_theta3.pdf}} \hfil \subfloat[Mandrill ($\vartheta=3$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10003_n55_theta3.pdf}} \hfil \subfloat[Peppers ($\vartheta=3$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10004_n55_theta3.pdf}} \\%theta 4
\subfloat[Aeroplane ($\vartheta=4$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10001_n55_theta4.pdf}} \hfil \subfloat[Lena ($\vartheta=4$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10002_n55_theta4.pdf}} \hfil \subfloat[Mandrill ($\vartheta=4$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10003_n55_theta4.pdf}} \hfil \subfloat[Peppers ($\vartheta=4$ \& $n=55$)]{\includegraphics[width=0.5\columnwidth]{Figures/exp/optim_10004_n55_theta4.pdf}}
\caption{Payload\textendash distortion curves for optimality analysis against brute-force search.}
\label{fig:optim_analysis}
\end{figure*}
\section{Simulation}\label{sec:sim}
We carry out an experimental analysis of the optimality of the proposed method, benchmarked against the brute-force method. The experimental setup is described as follows. We apply the residual dense network (RDN) as the predictive model~\cite{2018_8578360}. This neural network model is characterised by a tangled labyrinth of residual and dense connections, and has its origin in low-level computer vision (e.g. super-resolution, denoising and deblurring). The model is trained on the BOSSbase dataset~\cite{2011_BOSSbase}, which originated from an academic competition for digital steganography. This dataset comprises a large collection of greyscale photographs covering a wide variety of subjects and scenes. The algorithms are tested on selected images from the USC-SIPI dataset~\cite{2006_USC_SIPI}. All the images are resized to a resolution of $256 \times 256$ pixels via Lanczos resampling~\cite{1979_Lanczos}. The border pixels, along with half of the remaining pixels, are designated as the context. Accordingly, the number of query pixels equals $(254 \times 254)/2$. We report both distortion and capacity divided by the number of query pixels.
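For concreteness, the iterative linearisation formulated at the end of the previous section can be prototyped with an off-the-shelf MILP solver. The following is a minimal sketch of the successive linearisation loop, assuming the open-source PuLP/CBC solver; the occurrence counts, the capacity weights $\mathbf{v}_{\operatorname{log}}$ and the payload are toy stand-ins, not the values used in our experiments.
\begin{verbatim}
# A sketch of the successive linearisation loop, assuming the
# open-source PuLP/CBC solver; a, v_log and payload are toy stand-ins.
import math
import pulp

n, theta = 7, 2                          # toy histogram size and quota
a = [10, 8, 6, 5, 3, 2, 1, 1]            # occurrences a_i, i = 0..n
v = list(range(theta + 1))               # v    = [0, 1, ..., theta]
v_sq = [w * w for w in v]                # v_sq = [0^2, ..., theta^2]
v_log = [math.log2(w + 1) for w in v]    # stand-in capacity weights
payload = 12

x_t = [0.0] * (n + 1)                    # previous solution x~_i
y_t = [0.0] * (n + 1)                    # previous partial sums y~_i

for _ in range(10):                      # iterate until a fixed point
    prob = pulp.LpProblem("error_coding", pulp.LpMinimize)
    x = [[pulp.LpVariable(f"x_{i}_{w}", cat="Binary") for w in v]
         for i in range(n + 1)]
    zy = [pulp.LpVariable(f"zy_{i}", lowBound=0) for i in range(n + 1)]
    zx = [pulp.LpVariable(f"zx_{i}", lowBound=0) for i in range(n + 1)]
    xi = [pulp.lpSum(v[w] * x[i][w] for w in v) for i in range(n + 1)]
    yi = [pulp.lpSum(v[w] * x[j][w] for j in range(i) for w in v)
          for i in range(n + 1)]
    # linearised distortion objective with slack variables
    prob += pulp.lpSum(
        a[i] * ((1.0 / 3.0) * pulp.lpSum(v_sq[w] * x[i][w] for w in v)
                + (1.0 / 6.0) * xi[i] + zx[i] + zy[i])
        for i in range(n + 1))
    # capacity, quota and one-hot assignment constraints
    prob += pulp.lpSum(a[i] * v_log[w] * x[i][w]
                       for i in range(n + 1) for w in v) >= payload
    prob += pulp.lpSum(xi) <= theta
    for i in range(n + 1):
        prob += pulp.lpSum(x[i]) == 1
        # first-order Taylor under-estimators of y_i^2 and x_i * y_i
        prob += 2 * y_t[i] * yi[i] - zy[i] <= y_t[i] ** 2
        prob += x_t[i] * yi[i] + y_t[i] * xi[i] - zx[i] <= x_t[i] * y_t[i]
    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    new_x = [pulp.value(e) for e in xi]
    if new_x == x_t:
        break
    x_t = new_x
    y_t = [pulp.value(e) for e in yi]
\end{verbatim}
Each pass rebuilds the problem with tangent under-estimators taken at the previous solution, mirroring the linearised constraints derived above; the loop stops when the solution no longer changes.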
Figure~\ref{fig:img_hist} shows the absolute error distribution for each test image. It is observed that nearly all error values of non-zero occurrence fall below a threshold between about $30$ and $50$, depending on the image. We therefore conservatively set $n=55$ so that nearly every value of non-zero occurrence is included. We implement the algorithms with respect to different quota settings ($\vartheta = 1, 2, 3, 4$). Figure~\ref{fig:optim_analysis} evaluates the performance of the proposed optimisation algorithm. Each point on a curve indicates the minimum distortion of a solution under a specific capacity constraint. In the vast majority of cases, the solutions found by the proposed method are identical to those given by the brute-force method. When it fails to find the optimal solution, the attained objective value is within a small distance of the optimum. Hence, even though optimality cannot always be guaranteed, the results suggest that the proposed method achieves near-optimal performance.
\section{Conclusion}\label{sec:conclusion}
This paper studies a mathematical optimisation problem in reversible steganography. We formulate prediction error coding as a nonlinear discrete optimisation problem whose objective is to minimise distortion under a constraint on capacity. We discuss the complexity of a brute-force method and linearisation techniques for the logarithmic capacity constraint and the quadratic distortion objective. The problem is thereby transformed into a mixed-integer linear programming problem, with binary integer variables and slack variables, that is solved iteratively. Our simulation results validate the near-optimality of the proposed algorithm. \bibliographystyle{Transactions-Bibliography/IEEEtran}
\section{Introduction}
Singular value decomposition (SVD) is routinely performed to process data organized in the form of matrices, thanks to its optimality for low-rank approximation and its relationship with principal component analysis; and perturbation analysis of SVD plays a central role in studying the performance of these procedures. More and more often, however, multidimensional data in the form of higher-order tensors arise in applications. While higher-order tensors provide us with a more versatile tool to encode complex relationships among variables, how to perform decompositions similar to SVD and how these decompositions behave under perturbation are often the most fundamental issues in these applications. In general, decomposition of higher-order tensors is rather delicate and poses both conceptual and computational challenges. See \cite{kolda2009tensor, cichocki2015tensor} for recent surveys of some of the difficulties as well as existing techniques and algorithms to tackle them. In particular, we shall focus here on a class of tensors that allows for a direct generalization of SVD. The so-called orthogonally decomposable (odeco) tensors have been previously studied by \cite{kolda2001ortho, chen2009tensor, robeva2016orthogonal, belkin2018eigenvectors} among others, and are commonly used in high dimensional data analysis \citep{anand2014tensor, anand2014sample, anandkumar2014guaranteed, liu2017characterizing}. The main goal of this work is to study the effect of perturbation on the singular values and vectors of an odeco tensor, or of odeco approximations of a nearly odeco tensor, and to demonstrate how it could provide a powerful and unifying treatment of many different problems in high dimensional data analysis. More specifically, an orthogonally decomposable tensor $\mathscr{T}\in {\mathbb R}^{d\times\cdots\times d}$ can be written as
\begin{equation}
\label{eq:odec1}
\mathscr{T}=\sum_{k=1}^d\lambda_k\mathbf{u}_k^{(1)}\otimes\dots\otimes\mathbf{u}_k^{(p)}
\end{equation}
where $\lambda_1\ge \lambda_2\ge\cdots\ge\lambda_d\ge 0$, and the matrices $\mathbf{U}^{(q)}=[\mathbf{u}_1^{(q)}\,\dots\,\mathbf{u}_d^{(q)}]\in \mathbb{R}^{d\times d}$ for $1\le q\le p$ are orthonormal. It is well known that such a decomposition is essentially unique. Here we are interested in its stability: how perturbation to $\mathscr{T}$ may affect our ability to reconstruct the spectral parameters $\lambda_k$s and $\mathbf{u}^{(q)}_k$s, which we shall refer to as the essential singular values and vectors, or simply singular values and vectors when no confusion occurs, of $\mathscr{T}$. See Section 2 for a discussion of singular values and vectors for tensors. Perturbation theory of this nature is well developed in the case of matrices ($p=2$) and can be traced back to the classical works of \cite{weyl1912asymptotische}, \cite{davis1970rotation} and \cite{wedin1972perturbation}. See, e.g., \cite{stewart1990matrix} for a comprehensive survey. These results provide the essential tools for numerous applications in various scientific and engineering domains. As multilinear arrays appear more and more often in these applications, many attempts have been made to develop similar tools for higher order tensors in recent years.
Because of the unique challenges associated with higher order tensors, most if not all existing studies along this direction customize their analysis, and hence the resulting bounds, for a specific algorithm or method. See, e.g., \cite{anand2014tensor, mu2015successive, mu2017greedy, belkin2018eigenvectors}. The aim of this article is to fill in the important step of providing universal perturbation bounds that are in the same spirit as matrix perturbation analysis and independent of a specific algorithm. Doing so not only provides universal perturbation bounds that can be useful for all these applications together, but also allows us to recognize the fundamental similarities and differences between matrices and higher order tensors from yet another perspective. In particular, consider, in addition to $\mathscr{T}$, a second odeco tensor $\tilde{\mathscr{T}}$:
\begin{equation}
\label{eq:odec2}
\tilde{\mathscr{T}}=\sum_{k=1}^d\tilde{\lambda}_k\tilde{\mathbf{u}}_k^{(1)}\otimes\dots\otimes\tilde{\mathbf{u}}_k^{(p)}.
\end{equation}
We are interested in how the differences between the two sets of values $\lambda_k$s and $\tilde{\lambda}_k$s, as well as the vectors $\mathbf{u}^{(q)}_k$s and $\tilde{\mathbf{u}}^{(q)}_k$s, are characterized by the spectral norm of the difference $\mathscr{T}-\tilde{\mathscr{T}}$, in a spirit similar to classical results for matrices. We show that there exist a \emph{numerical} constant $C_0\ge 1$ and a permutation $\pi: [d]\to[d]$ such that for all $k=1,\ldots,d$,
\begin{equation}
\label{eq:weyl}
|\lambda_k-\tilde{\lambda}_{\pi(k)}|\le C_0\|\mathscr{T}-\tilde{\mathscr{T}}\|,
\end{equation}
and
\begin{equation}
\label{eq:wedin}
\max_{1\le q\le p}\sin\angle (\mathbf{u}^{(q)}_k,\tilde{\mathbf{u}}^{(q)}_{\pi(k)})\le C_0\cdot{\|\mathscr{T}-\tilde{\mathscr{T}}\|\over \lambda_k},
\end{equation}
under the convention that $1/0=+\infty$. Here and in what follows $\angle (\mathbf{u},\tilde{\mathbf{u}})$ is the angle between two vectors $\mathbf{u}$ and $\tilde{\mathbf{u}}$ taking value in $[0,\pi/2]$, and the spectral norm of a tensor $\mathscr{A}\in \mathbb{R}^{d\times\cdots\times d}$ is defined by
$$
\|\mathscr{A}\|=\max_{\mathbf{u}^{(q)}\in \mathcal{S}^{d-1}} \langle \mathscr{A}, \mathbf{u}^{(1)}\otimes\cdots\otimes \mathbf{u}^{(p)}\rangle,
$$
where $\mathcal{S}^{d-1}$ denotes the unit sphere in $\mathbb{R}^{d}$. We want to emphasize that the constant $C_0$ in \eqref{eq:weyl} and \eqref{eq:wedin} is absolute and independent of $\mathscr{T}$, $\tilde{\mathscr{T}}$, and their dimensionality $d$ or order $p$. This is especially relevant and important when dealing with high dimensional problems either statistically or numerically, as we shall demonstrate in Section \ref{sec:app}. In particular, we can take the constant $C_0$ above to be 17.
We did not attempt to optimize this constant to its fullest extent, as a much better value can be provided if there is more information on how $\mathscr{T}$ and $\tilde{\mathscr{T}}$ are related: if a singular value $\lambda_k$ is sufficiently large relative to the size of the perturbation $\|\mathscr{T}-\tilde{\mathscr{T}}\|$, then we can take the constant $C_0=1$ in \eqref{eq:weyl} and arbitrarily close to $1$ in \eqref{eq:wedin}. In particular, under infinitesimal perturbations such that $\|\mathscr{T}-\tilde{\mathscr{T}}\|=o(\lambda_k)$, we have
\begin{equation}
\label{eq:weylpert}
|\lambda_k-\tilde{\lambda}_{\pi(k)}|\le \|\tilde{\mathscr{T}}-\mathscr{T}\|,
\end{equation}
and
\begin{equation}
\label{eq:wedinpert}
\max_{1\le q\le p}\sin\angle (\mathbf{u}^{(q)}_k,\tilde{\mathbf{u}}^{(q)}_{\pi(k)})\le {\|\tilde{\mathscr{T}}-\mathscr{T}\|\over \lambda_k}+o\left({\|\tilde{\mathscr{T}}-\mathscr{T}\|\over \lambda_k}\right).
\end{equation}
Both bounds are sharp in that the leading terms cannot be further improved. This is clear by considering two rank-one tensors differing only in the nonzero singular value or in one of its corresponding singular vectors. Note that every matrix is odeco; \eqref{eq:weyl}--\eqref{eq:wedinpert} therefore directly extend classical results for matrices ($p=2$) by \cite{weyl1912asymptotische}, \cite{davis1970rotation}, \cite{wedin1972perturbation} among others. However, in spite of the similarity in appearance, there are also crucial distinctions between matrices and higher order tensors ($p\ge 3$). In particular, the $\sin\Theta$ theorems of Davis-Kahan-Wedin bound the perturbation effect on the $k$th singular vector by $C\|\tilde{\mathscr{T}}-\mathscr{T}\|/\min_{j\neq k}|\lambda_j-\lambda_k|$. The dependence on the gap $\min_{j\neq k}|\lambda_j-\lambda_k|$ between $\lambda_k$ and the other singular values is unavoidable for matrices. This is not the case for higher-order odeco tensors, where perturbation affects the singular vectors separately. Indeed the crux of our technical argument is devoted to proving this by careful control of the spillover effect of not knowing the other singular tuples on $(\lambda_k,\mathbf{u}_k^{(1)},\ldots,\mathbf{u}_k^{(p)})$ and showing that the approximation errors do not accumulate.

In general, a perturbed odeco tensor $\mathscr{X}=\mathscr{T}+\mathscr{E}$ may no longer be odeco and hence it may not be possible to match its singular value/vector tuples with the essential singular value/vector tuples of the unperturbed odeco tensor. To overcome this obstacle, we shall consider instead an odeco approximation $\tilde{\mathscr{T}}$ to $\mathscr{X}$ such that $\|\tilde{\mathscr{T}}-\mathscr{X}\|\le C_1\|\mathscr{E}\|$ for some constant $C_1>0$. By the triangle inequality,
\begin{equation}
\label{eq:aprxodec}
\|\mathscr{T}-\tilde{\mathscr{T}}\|\le\|\mathscr{T}-\mathscr{X}\|+ \|\tilde{\mathscr{T}}-\mathscr{X}\|\le(C_1+1)\|\mathscr{E}\|.
\end{equation}
Then \eqref{eq:weyl} and \eqref{eq:wedin} imply that
$$
|\lambda_k-\tilde{\lambda}_{\pi(k)}|\le C_0(C_1+1)\|\mathscr{E}\|
$$
and
$$
\max_{1\le q\le p}\sin\angle (\mathbf{u}^{(q)}_k,\tilde{\mathbf{u}}^{(q)}_{\pi(k)})\le C_0(C_1+1)\cdot{\|\mathscr{E}\|\over \lambda_k},
$$
where $(\tilde{\lambda}_k,\tilde{\mathbf{u}}^{(q)}_k: 1\le q\le p)$s are the (essential) singular value/vector tuples of $\tilde{\mathscr{T}}$. These bounds complement the well known identifiability of the odeco decomposition, which states that if $\mathscr{E}=0$ then all $\mathbf{u}^{(q)}_k$s are uniquely defined. When $\mathscr{E}\neq 0$, $\mathscr{X}$ is not necessarily odeco, but our results indicate that when the perturbation is small, any ``reasonable'' odeco approximation of $\mathscr{X}$ has ``similar'' essential singular values and vectors. This is more general than identifiability and in fact characterizes the \emph{stability} of the odeco decomposition, or the local geometry of the space of odeco tensors.

It is natural to consider deriving perturbation bounds for higher order tensors by first flattening them into matrices and then applying the existing bounds for matrices. As we shall show, such a na\"ive approach is suboptimal in that it inevitably leads to perturbation bounds in terms of the matricized spectral norm. Although it is possible to further bound matricized spectral norms using tensor spectral norms, this leads to an extra multiplicative factor depending on the dimension ($d$) polynomially, and makes the resulting bounds unsuitable for applications in high dimensional problems. Our results demonstrate that there could be tremendous gain by treating higher order tensors as tensors instead of matrices. Moreover, the matricization approach fails to yield meaningful perturbation bounds for an essential singular vector when the corresponding singular value is not simple, e.g., when $\lambda_k=\lambda_{k+1}$. We summarize classical perturbation bounds for matrices and those we establish for odeco tensors in Table \ref{tab:foo}.
\begin{table}[htbp]
\begin{center}
\caption{Comparison of perturbation bounds in terms of $\|\mathscr{E}\|$, up to constant factors.}
\label{tab:foo}
\end{center}
\begin{center}
\begin{tabular}{c|cc}
\hline\hline
& Singular values & Singular vectors\\
& ($|\lambda_k-\tilde{\lambda}_{\pi(k)}|$) & ($\sin\angle(\mathbf{u}^{(q)}_k,\tilde{\mathbf{u}}^{(q)}_{\pi(k)})$)\\
\hline
& & \\
Matrix & $\|\mathscr{E}\|$ & $\|\mathscr{E}\|\over \min\{\lambda_{k-1}-\lambda_k, \lambda_k-\lambda_{k+1}\}$\\
\hline
& \multicolumn{2}{c}{with matricization}\\
Odeco Tensor & ${\rm poly}(d)\cdot\|\mathscr{E}\|$ & ${{\rm poly}(d)\cdot\|\mathscr{E}\|\over \min\{\lambda_{k-1}-\lambda_k, \lambda_k-\lambda_{k+1}\}}$\\
\cline{2-3}
& \multicolumn{2}{c}{without matricization}\\
& $\|\mathscr{E}\|$ & $\|\mathscr{E}\|\over \lambda_k$\\
\hline
\end{tabular}
\end{center}
\end{table}
Given the importance of perturbation analysis in fields such as machine learning, numerical analysis, and statistics, it is conceivable that our analysis and algorithms can prove useful in many situations. For illustration, we shall consider a specific example, namely high dimensional tensor SVD. Our general perturbation bound immediately leads to new insights into the problem.
In particular, we establish minimax optimal rates for estimating the singular vectors of an odeco tensor when contaminated with Gaussian noise. Our result indicates that any of its singular vectors can be estimated as well as if all other singular values were zero, or in other words, as in the rank one case. Our development is related to the fast-growing literature on using tensor methods in statistics and machine learning. In particular, there is a fruitful line of research in developing algorithm-dependent bounds for odeco tensors. In these applications, we always encounter a noisy version of the signal tensor, and dimension-independent perturbation bounds on the singular values and vectors are the most critical tool in the analysis. See \cite{janzamin2019spectral} for a recent survey. A significant conceptual difference between these bounds and the classical perturbation bounds for matrices is that they are specific to the algorithms used in computing $\tilde{\mathscr{T}}$ or equivalently its SVD. The perturbation bounds we provide complement these earlier developments in a number of ways. First of all, our bounds could be readily used for perturbation analysis of any algorithm that produces an odeco approximation, allowing us to derive bounds on the singular values and vectors from those on the approximation error of the tensor itself. As such we do not rely on the specific form of the error tensor (as in \cite{anand2014tensor}, \cite{anand2014sample} or \cite{belkin2018eigenvectors}) and also have the weakest possible assumption on the signal to noise ratio. On the other hand, our bounds can also serve as a benchmark on how well any procedure, computationally feasible or not, could perform. Indeed, as we can see from the high dimensional data analysis example, our perturbation bounds often yield tight information theoretical limits for statistical inference. In fact a similar rate optimality continues to hold for a number of other tensor data problems. The rest of the paper is organized as follows. In the next section, we derive perturbation bounds for a pair of odeco tensors. Section 3 extends these bounds to nearly odeco tensors. Proofs of the main results are presented in Section 4.
\section{Perturbation Bounds between Odeco Tensors}
In this section, we shall consider perturbation analysis for a pair of odeco tensors. We first review some basic properties of odeco tensors and then consider two ways to derive perturbation bounds between a pair of odeco tensors: one through matricization and the other by treating tensors as tensors. While we focus primarily on the so-called essential singular values and vectors, we shall also briefly discuss how our techniques may be used to derive perturbation bounds for general singular values and vectors.
\subsection{Odeco Tensors}
We say a $p$th order tensor $\mathscr{T}\in \mathbb{R}^{d_1\times\cdots\times d_p}$ is odeco if it can be expressed as
\begin{equation}
\label{eq:svd}
\mathscr{T}=\sum_{k=1}^{d_{\min}}\lambda_k \mathbf{u}_k^{(1)}\otimes\cdots\otimes\mathbf{u}_k^{(p)}
\end{equation}
for some scalars $\lambda_1\ge \ldots \ge \lambda_{d_{\min}}\ge 0$ and unit vectors $\mathbf{u}_k^{(q)}$s such that $\langle \mathbf{u}_{k_1}^{(q)}, \mathbf{u}_{k_2}^{(q)}\rangle =\delta_{k_1k_2}$, where $d_{\min}=\min\{d_1,\ldots,d_p\}$ and $\delta$ is the Kronecker delta.
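To make the definition concrete, the following minimal numerical sketch (an illustration only, assuming NumPy, with $p=3$, $d_1=d_2=d_3=5$ and arbitrary singular values) constructs an odeco tensor from orthonormal factors and verifies that pairing it with its rank-one components recovers the $\lambda_k$s.
\begin{verbatim}
# A minimal sketch, assuming NumPy, of the decomposition in eq:svd
# with p = 3; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d = 5
lam = np.sort(rng.random(d))[::-1]   # lambda_1 >= ... >= lambda_d >= 0

# three independent orthonormal factor matrices via QR
U = [np.linalg.qr(rng.standard_normal((d, d)))[0] for _ in range(3)]

# T = sum_k lambda_k u_k^(1) (x) u_k^(2) (x) u_k^(3)
T = np.einsum("k,ik,jk,lk->ijl", lam, U[0], U[1], U[2])

# orthonormality gives <T, u_k^(1) (x) u_k^(2) (x) u_k^(3)> = lambda_k
rec = np.einsum("ijl,ik,jk,lk->k", T, U[0], U[1], U[2])
assert np.allclose(rec, lam)
\end{verbatim}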
Note that there is no loss of generality in assuming that the $\lambda_k$s are nonnegative, as we can flip the sign of the $\mathbf{u}_k^{(q)}$s accordingly. See, e.g., \cite{kolda2001ortho, robeva2016orthogonal, robeva2017singular} for further discussion of orthogonally decomposable tensors. For brevity, we shall write
$$
\mathscr{T}=[\{\lambda_k: 1\le k\le d_{\min}\}; \mathbf{U}^{(1)},\ldots,\mathbf{U}^{(p)}]
$$
if \eqref{eq:svd} holds. Here $\mathbf{U}^{(q)}\in \mathbb{R}^{d_q\times d_{\min}}$ with $\mathbf{u}_k^{(q)}$ as its $k$th column. Recall that, in general, singular values and vectors for a tensor $\mathscr{T}$ are defined as tuples $(\lambda,\mathbf{v}^{(1)},\dots,\mathbf{v}^{(p)})\in \mathbb{R}\times \mathbb{R}^{d_1}\times\dots\times \mathbb{R}^{d_p}$ such that $\|\mathbf{v}^{(q)}\|=1$ and
$$
\mathscr{T}\times_{j\neq q}\mathbf{v}^{(j)}=\lambda \mathbf{v}^{(q)}\quad{\rm for}\quad q=1,\dots,p.
$$
See, e.g., \cite{hackbusch2012tensor, qi2017tensor} for further details. For odeco tensors, all possible singular values and vectors of $\mathscr{T}$ can be characterized by the $\lambda_k$s and $\mathbf{u}^{(q)}_k$s: if $\lambda_r>0=\lambda_{r+1}$, then the real singular values and singular vectors of $\mathscr{T}$ are either tuples $(\lambda,\mathbf{v}^{(1)},\dots,\mathbf{v}^{(p)})$ of the form
$$
\lambda = \left(\sum_{k\in S}\dfrac{1}{\lambda_k^{\tfrac{2}{p-2}}}\right)^{-\tfrac{p-2}{2}},\qquad \langle \mathbf{v}^{(q)},\,\mathbf{u}^{(q)}_k\rangle =\begin{cases} \chi_k^{(q)}\left({\lambda\over\lambda_k}\right)^{1/(p-2)}\,&\text{if }k\in S,\\ 0\quad&\text{otherwise,} \end{cases}
$$
where $S\subset[r]$, $S\neq \emptyset$, and the $\chi_k^{(q)}\in\{+1,-1\}$ satisfy $\displaystyle\prod_{q=2}^p\chi_k^{(q)}=1$ for all $1\le k\le r$; or $\lambda=0$ and $(\mathbf{v}^{(1)},\dots,\mathbf{v}^{(p)})$ are such that for every $1\le k\le d_{\min}$, there exist at least two values of $q\in\{1,\dots,p\}$ with $\langle \mathbf{v}^{(q)},\mathbf{u}^{(q)}_k\rangle=0$. See \cite{robeva2017singular} for details. In this article, we shall focus primarily on the perturbation of the singular value/vector tuples $(\lambda_k,\mathbf{u}_k^{(q)}: 1\le q\le p)$s and refer to them as the \emph{essential singular values and vectors}, or with some abuse of notation singular values and vectors for short, of $\mathscr{T}$, with the exception of Section 2.5 where we shall explicitly discuss how perturbation bounds for other singular values and vectors can be obtained. In the case when all $\lambda_k$s are distinct, the essential singular values and vectors can be identified by the so-called higher-order SVD (HOSVD), which applies SVD after flattening a higher-order tensor to a matrix, for example, by collapsing all indices except the first one. See, e.g., \cite{de2000multilinear, de2000best}. However, this is not the case when the singular values have multiplicity more than one (i.e., when $\lambda_k=\lambda_{k+1}$ for some $k$ on the right hand side of \eqref{eq:svd}) since HOSVD can only identify the singular space associated with a singular value.
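The HOSVD identification step is easy to check numerically. The following sketch (again an illustration assuming NumPy, reusing the construction from the previous snippet) recovers the singular values and the mode-$1$ singular vectors of an odeco tensor with distinct singular values from a matrix SVD of its mode-$1$ flattening.
\begin{verbatim}
# A sketch of the HOSVD identification step, assuming NumPy; T, U and
# lam are constructed exactly as in the previous snippet.
import numpy as np

rng = np.random.default_rng(0)
d = 5
lam = np.sort(rng.random(d))[::-1]
U = [np.linalg.qr(rng.standard_normal((d, d)))[0] for _ in range(3)]
T = np.einsum("k,ik,jk,lk->ijl", lam, U[0], U[1], U[2])

# Mat_1(T) = U^(1) diag(lam) (V^(1))^T with orthonormal V^(1), so a
# matrix SVD of the mode-1 flattening recovers the singular values and,
# since the lam_k here are distinct, the mode-1 vectors up to sign.
u1, s, _ = np.linalg.svd(T.reshape(d, -1), full_matrices=False)
assert np.allclose(s, lam)
assert np.allclose(np.abs(u1.T @ U[0]), np.eye(d), atol=1e-6)
\end{verbatim}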
This subtle difference also has important practical implications. In general, the essential singular vectors of odeco tensors cannot be computed via HOSVD unless all singular values are distinct. Nonetheless, computing the essential singular values/vectors of an odeco tensor is tractable. For example, they can be computed via Jennrich's algorithm when $p=3$. See, e.g., \cite{harshman1970foundations, leurgans1993decomposition}. More generally, efficient algorithms also exist to take full advantage of the orthogonal structure. In particular, if an odeco tensor is symmetric so that $d_1=\cdots=d_p=:d$ and $\mathbf{u}_k^{(1)}=\cdots=\mathbf{u}_k^{(p)}=:\mathbf{u}_k$ for all $k=1,\ldots,d$, \cite{belkin2018eigenvectors} showed that the $\pm\mathbf{u}_k$s are the only local maxima of
$$F(\mathbf{a}):=|\langle \mathscr{T}, \mathbf{a}\otimes\cdots\otimes \mathbf{a}\rangle|$$
over $\mathcal{S}^{d-1}$. In addition, there is a full measure set $\mathcal{U}\subset \mathcal{S}^{d-1}$ such that a gradient iteration algorithm with initial value arbitrarily chosen from $\mathcal{U}$ converges to one of the $\mathbf{u}_k$s. In light of these properties, one can enumerate all the essential singular values and essential singular vectors by repeatedly applying the gradient iteration algorithm with an initial value randomly chosen from the orthogonal complement of the linear space spanned by the already identified local maxima. For this property, these vectors are also called robust singular vectors in the literature \citep[see][]{anand2014tensor}. Interested readers are referred to \cite{belkin2018eigenvectors} for further details. From a slightly different perspective, \cite{hashemi2018spectral} study trivariate analytic functions that are two way odeco, and describe how CP decomposition enables one to derive low rank approximations of such functions. The argument presented in \cite{belkin2018eigenvectors} relies heavily on the hidden convexity of $F$, which no longer holds when $\mathscr{T}$ is not symmetric. However, their main observations remain valid for general odeco tensors. More specifically, write
\begin{equation}
\label{eq:defF}
F(\mathbf{a}^{(1)},\ldots, \mathbf{a}^{(p)}):=|\langle \mathscr{T}, \mathbf{a}^{(1)}\otimes\cdots\otimes \mathbf{a}^{(p)}\rangle|
\end{equation}
with slight abuse of notation. Denote by
\begin{equation}
\label{eq:defG}
G(\mathbf{a}^{(1)},\ldots, \mathbf{a}^{(p)}):=\left({\mathscr{T}\times_2\mathbf{a}^{(2)}\cdots\times_p\mathbf{a}^{(p)}\over\|\mathscr{T}\times_2\mathbf{a}^{(2)}\cdots\times_p\mathbf{a}^{(p)}\|},\ldots,{\mathscr{T}\times_1\mathbf{a}^{(1)}\cdots\times_{p-1}\mathbf{a}^{(p-1)}\over\|\mathscr{T}\times_1\mathbf{a}^{(1)}\cdots\times_{p-1}\mathbf{a}^{(p-1)}\|}\right)
\end{equation}
the gradient iteration function for $F$, so that
$$G_n=\underbrace{G\circ G\circ\cdots\circ G}_{n{\rm\ times}}$$
maps a set of initial values to the output from running the gradient iteration $n$ times. Similar to the symmetric case, we have the following result for general odeco tensors:
\begin{theorem}\label{th:comp}
Let $\mathscr{T}$ be an odeco tensor, and let $F$ and $G$ be defined by \eqref{eq:defF} and \eqref{eq:defG} respectively. Then the set $\{(\pm\mathbf{u}_k^{(1)},\ldots,\pm\mathbf{u}_k^{(p)}): \lambda_k>0\}$ is a complete enumeration of all local maxima of $F$. Moreover, there exists a full measure set $\mathcal{U}\subset \mathcal{S}^{d_1-1}\times\cdots\times\mathcal{S}^{d_p-1}$ such that for any $(\mathbf{a}^{(1)},\ldots,\mathbf{a}^{(p)})\in \mathcal{U}$, $G_n(\mathbf{a}^{(1)},\ldots,\mathbf{a}^{(p)})\to (\sigma_1\mathbf{u}_k^{(1)},\ldots,\sigma_p\mathbf{u}_k^{(p)})$ as $n\to \infty$, for some $1\le k\le d_{\min}$ and $\sigma_1,\ldots,\sigma_p\in \{\pm 1\}$.
\end{theorem}
The overall architecture of the proof of Theorem \ref{th:comp} is similar to that for the symmetric case. See, e.g., \cite{belkin2018eigenvectors}. For completeness, a detailed proof is included in the Appendix. In light of Theorem \ref{th:comp}, we can compute all the essential singular value/vector tuples of an odeco tensor sequentially by applying gradient iterations with random initializations, in the same manner as in the symmetric case.
\subsection{Perturbation Bounds via Matricization}
Let $\mathscr{T}$ and $\tilde{\mathscr{T}}$ be two odeco tensors:
$$
\mathscr{T}=[\{\lambda_k: 1\le k\le d_{\min}\}; \mathbf{U}^{(1)},\ldots,\mathbf{U}^{(p)}],
$$
and
$$
\tilde{\mathscr{T}}=[\{\tilde{\lambda}_k: 1\le k\le d_{\min}\}; \tilde{\mathbf{U}}^{(1)},\ldots,\tilde{\mathbf{U}}^{(p)}].
$$
We are interested in characterizing the difference between the two sets of singular values and vectors in terms of the ``perturbation'' $\tilde{\mathscr{T}}-\mathscr{T}$. It is instructive to first briefly review classical results in the matrix case, i.e., $p=2$. Note that every matrix is odeco. Perturbation analysis of the singular vectors and spaces for matrices is well studied. See, e.g., \cite{bhatia1987perturbation, stewart1990matrix}, and references therein. In particular, Weyl's perturbation theorem indicates that
\begin{equation}
\label{eq:matweyl0}
\max_{1\le k\le d_{\min}} |\lambda_k-\tilde{\lambda}_k|\le \|\mathscr{T}-\tilde{\mathscr{T}}\|.
\end{equation}
When a singular value $\lambda_k$ has multiplicity more than one, its singular space has dimension more than one and singular vectors $\mathbf{u}_k$ and $\mathbf{v}_k$ are no longer uniquely identifiable. But if it is simple, i.e., $\lambda_{k-1}>\lambda_k>\lambda_{k+1}$, then the Davis-Kahan-Wedin $\sin\Theta$ theorem states that
\begin{equation}
\label{eq:davis0}
\sin\angle(\mathbf{u}_k^{(q)},\tilde{\mathbf{u}}_k^{(q)})\le {\|\mathscr{T}-\tilde{\mathscr{T}}\|\over \min\{\tilde{\lambda}_{k-1}-\lambda_k,\lambda_k-\tilde{\lambda}_{k+1}\}},
\end{equation}
provided that the denominator on the righthand side is positive.
It is oftentimes more convenient to consider a modified version of the above bound for the singular vectors in terms of the gap between singular values of $\mathscr{T}$:
\begin{equation}
\label{eq:davis}
\sin\angle(\mathbf{u}_k^{(q)},\tilde{\mathbf{u}}_k^{(q)})\le {2\|\mathscr{T}-\tilde{\mathscr{T}}\|\over \min\{\lambda_{k-1}-\lambda_k,\lambda_k-\lambda_{k+1}\}},
\end{equation}
which follows immediately from \eqref{eq:matweyl0} and \eqref{eq:davis0}. To see this, note that \eqref{eq:davis} holds trivially if $\|\mathscr{T}-\tilde{\mathscr{T}}\|\ge \min\{\lambda_{k-1}-\lambda_k,\lambda_k-\lambda_{k+1}\}/2$. On the other hand, if $\|\mathscr{T}-\tilde{\mathscr{T}}\|< \min\{\lambda_{k-1}-\lambda_k,\lambda_k-\lambda_{k+1}\}/2$, it follows from \eqref{eq:matweyl0} that
\begin{eqnarray*}
\min\{\tilde{\lambda}_{k-1}-\lambda_k,\lambda_k-\tilde{\lambda}_{k+1}\}&\ge& \min\{\lambda_{k-1}-\lambda_k,\lambda_k-\lambda_{k+1}\}-\|\mathscr{T}-\tilde{\mathscr{T}}\|\\
&\ge& {1\over 2}\min\{\lambda_{k-1}-\lambda_k,\lambda_k-\lambda_{k+1}\},
\end{eqnarray*}
and therefore \eqref{eq:davis} follows from \eqref{eq:davis0}. It is worth noting that the dependence of any general perturbation bound for singular vectors on the gap between singular values is unavoidable for matrices, as can be illustrated by the following simple example from \cite{bhatia2013matrix}:
\begin{equation}
\label{eq:bhatia}
\mathscr{T}=\left(\begin{array}{cc}1+\delta& 0\\ 0& 1-\delta\end{array}\right),\qquad {\rm and}\qquad \tilde{\mathscr{T}}=\left(\begin{array}{cc}1 & \delta\\ \delta& 1\end{array}\right).
\end{equation}
It is not hard to see that $\|\mathscr{T}-\tilde{\mathscr{T}}\|=\sqrt{2}\delta$ and can be made arbitrarily small at the choice of $\delta>0$. Yet the singular vectors of $\mathscr{T}$ and $\tilde{\mathscr{T}}$ are $\{(0,1)^\top,(1,0)^\top\}$ and $\{(1/\sqrt{2}, 1/\sqrt{2})^\top,(1/\sqrt{2}, -1/\sqrt{2})^\top\}$ respectively, so that
$$
\sin\angle(\mathbf{u}_k^{(q)},\tilde{\mathbf{u}}_k^{(q)})={\|\mathscr{T}-\tilde{\mathscr{T}}\|\over \lambda_1-\lambda_2},
$$
for $k=1,2$ and $q=1,2$. These classical perturbation bounds can be applied to higher-order tensors using matricization or flattening, as for HOSVD. More precisely, write ${\sf Mat}_q: \mathbb{R}^{d_1\times\cdots\times d_p}\to \mathbb{R}^{d_q\times d_{-q}}$ for the map obtained by collapsing all indices other than the $q$th one, therefore converting a $p$th order tensor into a $d_q\times d_{-q}$ matrix, where $d_{-q}=d_1\cdots d_{q-1}d_{q+1}\cdots d_p$. For an odeco tensor $\mathscr{T}$, its SVD determines that of ${\sf Mat}_q(\mathscr{T})$. More specifically,
$$
{\sf Mat}_q(\mathscr{T})=\mathbf{U}^{(q)}({\rm diag}(\lambda_1,\ldots,\lambda_{d_{\min}}))(\mathbf{V}^{(q)})^\top,
$$
where
$$
\mathbf{V}^{(q)}=\mathbf{U}^{(1)}\odot\cdots\odot\mathbf{U}^{(q-1)}\odot\mathbf{U}^{(q+1)}\odot\cdots\odot\mathbf{U}^{(p)}.
$$
Here $\odot$ stands for the Khatri-Rao product. This, in light of \eqref{eq:matweyl0} and \eqref{eq:davis}, immediately implies that
\begin{proposition}\label{pr:matricize}
Let $\mathscr{T}$ and $\tilde{\mathscr{T}}$ be two $d_1\times\cdots\times d_p$ odeco tensors with SVD:
$$
\mathscr{T}=[\{\lambda_k: 1\le k\le d_{\min}\}; \mathbf{U}^{(1)},\ldots,\mathbf{U}^{(p)}],
$$
and
$$
\tilde{\mathscr{T}}=[\{\tilde{\lambda}_k: 1\le k\le d_{\min}\}; \tilde{\mathbf{U}}^{(1)},\ldots,\tilde{\mathbf{U}}^{(p)}],
$$
respectively, where $d_{\min}=\min\{d_1,\ldots, d_p\}$.
If $\lambda_k$ is simple, then
$$
|\lambda_k-\tilde{\lambda}_k|\le \min_{1\le q\le p}\|{\sf Mat}_q(\mathscr{T})-{\sf Mat}_q(\tilde{\mathscr{T}})\|,
$$
and
$$
\sin\angle(\mathbf{u}_k^{(q)},\tilde{\mathbf{u}}_k^{(q)})\le {2\|{\sf Mat}_q(\mathscr{T})-{\sf Mat}_q(\tilde{\mathscr{T}})\|\over \min\{\lambda_{k-1}-\lambda_k,\lambda_k-\lambda_{k+1}\}}.
$$
\end{proposition}
These bounds, however, are suboptimal and can be significantly improved in a couple of directions that highlight fundamental differences between matrices and higher-order tensors. First of all, we can derive perturbation bounds in terms of the tensor operator norm $\|\mathscr{T}-\tilde{\mathscr{T}}\|$. Although it is true that $\|\mathscr{A}\|=\|{\sf Mat}_q(\mathscr{A})\|$ for $q=1,\ldots, p$ for an odeco tensor $\mathscr{A}$, the difference between two odeco tensors is not necessarily odeco, and as a result $\|\mathscr{T}-\tilde{\mathscr{T}}\|$ and $\|{\sf Mat}_q(\mathscr{T})-{\sf Mat}_q(\tilde{\mathscr{T}})\|$ can be quite different. As a simple example, consider the case when $\mathscr{T}=\mathbf{u}\otimes\mathbf{u}\otimes \mathbf{u}$ and $\tilde{\mathscr{T}}=\mathbf{u}\otimes\mathbf{v}\otimes\mathbf{v}$ where $\mathbf{u}=(0,1)^\top$ and $\mathbf{v}=(1,0)^\top$. It is easy to see that $\|\mathscr{T}-\tilde{\mathscr{T}}\|=1$ and yet $\|{\sf Mat}_1(\mathscr{T})-{\sf Mat}_1(\tilde{\mathscr{T}})\|=\sqrt{2}$. Note that we can always bound
$$
\|{\sf Mat}_q(\mathscr{A})\|\le C_d\|\mathscr{A}\|
$$
for a multiplicative factor $C_d$ that depends on the dimension $d$, so that we can translate the aforementioned bounds on $\tilde{\lambda}_k$ and $\tilde{\mathbf{u}}_k$ into bounds in terms of the tensor spectral norm $\|\mathscr{T}-\tilde{\mathscr{T}}\|$. This, however, is rather unsatisfactory when it comes to high dimensional problems ($d$ is large), as $C_d\ge \sqrt{d-1}$, as the following example shows. Let
\begin{equation}\label{eq:ortho_cntrex}
\mathscr{T}=\lambda \displaystyle\sum_{i=1}^{d-1}\mathbf{e}_i\otimes \mathbf{e}_i\otimes \mathbf{e}_i \quad {\rm and } \quad \tilde{\mathscr{T}}=\lambda \displaystyle\sum_{i=1}^{d-1}({\mathbf{e}}_i+\mathbf{v})\otimes \mathbf{e}_i\otimes \mathbf{e}_i
\end{equation}
where
$$\mathbf{v}=\dfrac{1}{\sqrt{d-1}}\mathbf{e}_d-\dfrac{1}{d-1}\left(\mathbf{e}_1+\dots+\mathbf{e}_{d-1}\right).$$
It is easy to see that both are odeco. Note that $\tilde{\mathscr{T}}-\mathscr{T}=\lambda\displaystyle\sum_{i=1}^{d-1}\mathbf{v}\otimes \mathbf{e}_i\otimes \mathbf{e}_i$ and hence
$$\|\mathscr{T}-\tilde{\mathscr{T}}\|=\lambda\underset{\mathbf{a},\mathbf{b},\mathbf{c}\in\mathcal{S}^{d-1}}{\sup}\langle \mathbf{v},\mathbf{a}\rangle \sum_{i=1}^{d-1}b_ic_i=\lambda \|\mathbf{v}\|.$$
On the other hand,
$${\sf Mat}_1(\tilde{\mathscr{T}}-\mathscr{T})=\lambda\displaystyle \sum_{i=1}^{d-1}\mathbf{v}\left(\mathbf{e}_i\odot \mathbf{e}_i\right)^\top.$$
With the two unit vectors
$$\mathbf{a}=\mathbf{v}/\|\mathbf{v}\|\qquad {\rm and} \qquad \mathbf{b}=\dfrac{1}{\sqrt{d-1}}\displaystyle\sum_{i=1}^{d-1}\left(\mathbf{e}_i\odot \mathbf{e}_i\right),$$
we have
$$\|{\sf Mat}_1(\tilde{\mathscr{T}}-\mathscr{T})\|\ge \mathbf{a}^\top{\sf Mat}_1(\tilde{\mathscr{T}}-\mathscr{T})\mathbf{b} =\lambda \|\mathbf{v}\|\sqrt{d-1}= \sqrt{d-1}\|\mathscr{T}-\tilde{\mathscr{T}}\|.$$
This immediately suggests that the constant $C_d$ in the bound derived from matricization necessarily diverges as $d$ increases when $p>2$, and this renders the perturbation bounds derived from matricization ineffective in many applications where the focus is on pinpointing the effect of increasing dimensionality. Fortunately, as we shall show in the next subsection, much sharper perturbation bounds in terms of $\|\mathscr{T}-\tilde{\mathscr{T}}\|$ are available. Perhaps more importantly, another undesirable aspect of the aforementioned perturbation bounds for higher-order odeco tensors is the dependence on the gap between singular values. For matrices it is only meaningful to talk about singular spaces when a singular value is not simple, and the aforementioned bounds for $\sin\angle(\mathbf{u}_k^{(q)},\tilde{\mathbf{u}}_k^{(q)})$ do not tell us anything about the perturbation of the singular vectors when a singular value is not simple, even though all essential singular vectors are identifiable for higher-order odeco tensors regardless of the multiplicity of the singular values. Indeed, as we shall show, the gap $\min\{\lambda_{k-1}-\lambda_k,\lambda_k-\lambda_{k+1}\}$ is irrelevant for the perturbation analysis of a higher order odeco tensor, and the perturbation of each singular vector is independent of the other singular values.
\subsection{Perturbation Bounds for Odeco Tensors}
To appreciate the difference in perturbation effect between matrices and higher-order tensors, we first take a look at Weyl's bound for singular values, which states that, in the matrix case, i.e., $p=2$,
\begin{equation}
\label{eq:matweyl}
\max_{1\le k\le d}|\lambda_k-\tilde{\lambda}_k|\le \|\mathscr{T}-\tilde{\mathscr{T}}\|.
\end{equation}
More generally, when $p$ is even, asymptotic bounds for simple singular values under infinitesimal perturbation have been studied recently by \cite{che2016perturbation}. Their result implies that, in our notation, if $p$ is even and a simple singular value $\lambda_j$ is sufficiently far away from $\lambda_{j-1}$ and $\lambda_{j+1}$, then
$$
|\tilde{\lambda}_j-\lambda_j|\le \|\tilde{\mathscr{T}}-\mathscr{T}\|+O(\|\tilde{\mathscr{T}}-\mathscr{T}\|^2),
$$
as $\|\tilde{\mathscr{T}}-\mathscr{T}\|\to 0$. This appears to suggest that it is plausible that \eqref{eq:matweyl} could continue to hold for higher-order odeco tensors. Unfortunately, this is not the case, and \eqref{eq:matweyl} does not hold in general for higher-order odeco tensors. To see this, let
$$
\mathscr{T}=2\mathbf{e}_1\otimes \mathbf{e}_1\otimes \mathbf{e}_1
$$
and
$$
\tilde{\mathscr{T}}=(\mathbf{e}_1+\mathbf{e}_2)\otimes (\mathbf{e}_1+\mathbf{e}_2)\otimes (\mathbf{e}_1+\mathbf{e}_2)+(\mathbf{e}_1-\mathbf{e}_2)\otimes (\mathbf{e}_1-\mathbf{e}_2)\otimes (\mathbf{e}_1-\mathbf{e}_2).
$$
Obviously $(\lambda_1,\lambda_2)=(2,0)$ and $(\tilde{\lambda}_1,\tilde{\lambda}_2)=(2\sqrt{2},2\sqrt{2})$, so that
$$
\max\{|\lambda_1-\tilde{\lambda}_1|,|\lambda_2-\tilde{\lambda}_2|\}=2\sqrt{2}.
$$
On the other hand, as shown by \cite{yuan2016tensor},
$$
\|\tilde{\mathscr{T}}-\mathscr{T}\|=2\|\mathbf{e}_2\otimes \mathbf{e}_2\otimes \mathbf{e}_1+\mathbf{e}_2\otimes\mathbf{e}_1\otimes\mathbf{e}_2+\mathbf{e}_1\otimes\mathbf{e}_2\otimes\mathbf{e}_2\|=4/\sqrt{3}<2\sqrt{2},
$$
invalidating \eqref{eq:matweyl}. At a more fundamental level, for matrices, Weyl's bound can be viewed as a consequence of the Courant-Fischer-Weyl min-max principle, which states that
\begin{equation}
\label{eq:minmax1}
\lambda_k=\min_{S: {\rm dim}(S)=d_1-k+1}\max_{\substack{\mathbf{x}^{(1)}\in \mathcal{S}^{d_1-1}\cap S\\ \mathbf{x}^{(2)}\in \mathcal{S}^{d_2-1}}}\langle \mathscr{T}, \mathbf{x}^{(1)}\otimes\mathbf{x}^{(2)}\rangle,
\end{equation}
and
\begin{equation}
\label{eq:minmax2}
\lambda_k=\max_{S: {\rm dim}(S)=k}\min_{\mathbf{x}^{(1)}\in \mathcal{S}^{d_1-1}\cap S}\max_{\mathbf{x}^{(2)}\in \mathcal{S}^{d_2-1}}\langle \mathscr{T}, \mathbf{x}^{(1)}\otimes\mathbf{x}^{(2)}\rangle.
\end{equation}
Similar characterizations, however, do not hold for higher-order tensors. As an example, consider a $p$th order odeco tensor of dimension $d\times\cdots\times d$ with equal singular values. The following proposition shows that neither \eqref{eq:minmax1} nor \eqref{eq:minmax2} holds, in particular for the smallest essential singular value $\lambda_d$, where the righthand side of both equations can be expressed as
$$
\min_{\mathbf{x}^{(1)}\in \mathcal{S}^{d-1}}\max_{\mathbf{x}^{(2)}, \ldots, \mathbf{x}^{(p)}\in \mathcal{S}^{d-1}}\langle \mathscr{T}, \mathbf{x}^{(1)}\otimes\cdots\otimes\mathbf{x}^{(p)}\rangle.
$$
\begin{proposition}\label{pr:example}
Let $\mathscr{T}$ be a $p$th ($p\ge 3$) order odeco tensor of dimension $d\times\cdots\times d$.
If all its essential singular values are $\lambda$, then
$$
\min_{\mathbf{x}^{(1)}\in \mathcal{S}^{d-1}}\max_{\mathbf{x}^{(2)}, \ldots, \mathbf{x}^{(p)}\in \mathcal{S}^{d-1}}\langle \mathscr{T}, \mathbf{x}^{(1)}\otimes\cdots\otimes\mathbf{x}^{(p)}\rangle={\lambda\over \sqrt{d}}.
$$
\end{proposition}
Although straightforward generalizations of Weyl's bound to higher-order tensors do not hold, perturbation bounds in a similar spirit can still be established. More specifically, we have
\begin{theorem}\label{th:odeco-weyl}
Let $\mathscr{T}$ and $\tilde{\mathscr{T}}$ be two $d_1\times\cdots\times d_p$ ($p>2$) odeco tensors with SVD:
$$
\mathscr{T}=[\{\lambda_k: 1\le k\le d_{\min}\}; \mathbf{U}^{(1)},\ldots,\mathbf{U}^{(p)}],
$$
and
$$
\tilde{\mathscr{T}}=[\{\tilde{\lambda}_k: 1\le k\le d_{\min}\}; \tilde{\mathbf{U}}^{(1)},\ldots,\tilde{\mathbf{U}}^{(p)}],
$$
respectively, where $d_{\min}=\min\{d_1,\ldots, d_p\}$. There exist a numerical constant $1\le C\le 17$ and a permutation $\pi: [d_{\min}]\to [d_{\min}]$ such that for all $k=1,\ldots, d_{\min}$,
\begin{equation}
\label{eq:odecoweyl0}
|\lambda_k-\tilde{\lambda}_{\pi(k)}|\le C\|\tilde{\mathscr{T}}-\mathscr{T}\|,
\end{equation}
and
\begin{equation}
\label{eq:odecodavis0}
\max_{1\le q\le p}\sin\angle(\mathbf{u}_k^{(q)}, \tilde{\mathbf{u}}_{\pi(k)}^{(q)})\le {C\|\tilde{\mathscr{T}}-\mathscr{T}\|\over \lambda_k},
\end{equation}
with the convention that $1/0=+\infty$.
\end{theorem}
As discussed before, despite the similarity in appearance to the bounds for matrices, Theorem \ref{th:odeco-weyl} requires different proof techniques. Moreover, there are several intriguing differences between the bounds given in Theorem \ref{th:odeco-weyl} and the classical ones for matrices. First of all, we do not necessarily match the $k$th singular value/vector tuple $(\lambda_k, \mathbf{u}_k^{(1)},\ldots, \mathbf{u}_k^{(p)})$ of $\mathscr{T}$ with the $k$th tuple of $\tilde{\mathscr{T}}$. This is because we do not require the singular values $\lambda_k$s to be distinct and sufficiently far apart from each other, and hence the singular vectors corresponding to $\tilde{\lambda}_k$ are not necessarily close to those corresponding to $\lambda_k$. As a simple example, consider the following $2\times 2\times 2$ tensors:
$$
\mathscr{T}=(1+\delta)\mathbf{e}_1\otimes\mathbf{e}_1\otimes \mathbf{e}_1+(1-\delta)\mathbf{e}_2\otimes\mathbf{e}_2\otimes \mathbf{e}_2,
$$
and
$$
\tilde{\mathscr{T}}=(1-\delta)\mathbf{e}_1\otimes\mathbf{e}_1\otimes \mathbf{e}_1+(1+\delta)\mathbf{e}_2\otimes\mathbf{e}_2\otimes \mathbf{e}_2,
$$
where $\delta>0$ represents a small perturbation. Obviously, $\lambda_1=\tilde{\lambda}_1=1+\delta$ and $\lambda_2=\tilde{\lambda}_2=1-\delta$. But the correct way to study the effect of perturbation is to compare $(1+\delta)\mathbf{e}_1\otimes\mathbf{e}_1\otimes \mathbf{e}_1$ with $(1-\delta)\mathbf{e}_1\otimes\mathbf{e}_1\otimes \mathbf{e}_1$, and $(1-\delta)\mathbf{e}_2\otimes\mathbf{e}_2\otimes \mathbf{e}_2$ with $(1+\delta)\mathbf{e}_2\otimes\mathbf{e}_2\otimes \mathbf{e}_2$, and not the other way around. In other words, we want to pair $\lambda_1$ with $\tilde{\lambda}_2$, and $\lambda_2$ with $\tilde{\lambda}_1$.
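This pairing can be viewed as a minimum-cost matching between the rank-one components of the two tensors. The toy sketch below (an illustration assuming NumPy and SciPy; the cost used is simply the spectral-norm distance between rank-one terms, a convenient surrogate rather than the matching used in our proofs) recovers the permutation that pairs $\lambda_1$ with $\tilde{\lambda}_2$ and $\lambda_2$ with $\tilde{\lambda}_1$ in this example.
\begin{verbatim}
# A toy sketch of the matching, assuming NumPy/SciPy, for the 2x2x2
# example above; the pairing cost is the spectral-norm distance between
# rank-one components: |l - lt| on a common axis, max(l, lt) otherwise.
import numpy as np
from scipy.optimize import linear_sum_assignment

delta = 0.1
T_comp = [(1 + delta, 0), (1 - delta, 1)]   # lambda_1 on e1, lambda_2 on e2
Tt_comp = [(1 + delta, 1), (1 - delta, 0)]  # lambda~_1 on e2, lambda~_2 on e1

cost = np.array([[abs(l - lt) if ax == axt else max(l, lt)
                  for (lt, axt) in Tt_comp] for (l, ax) in T_comp])
_, pi = linear_sum_assignment(cost)
print(pi + 1)   # [2 1]: pair lambda_1 with lambda~_2, lambda_2 with lambda~_1
\end{verbatim}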
Another notable difference is between the perturbation bound \eqref{eq:odecodavis0} for singular vectors and those from the Davis-Kahan-Wedin $\sin\Theta$ theorems. The gap between singular values is absent in the bound \eqref{eq:odecodavis0}. This means that for higher-order odeco tensors, the perturbation affects the singular vectors separately. The perturbation bound \eqref{eq:odecodavis0} depends only on the amount of perturbation relative to the corresponding singular value. For either \eqref{eq:odecoweyl0} or \eqref{eq:odecodavis0} to hold, we can take the constant $C=17$. It is plausible that this constant can be further improved. In general, for any such bound to hold, it is necessary that the constant $C\ge 1$, again by considering two rank-one tensors differing only in the nonzero singular value or in one of its corresponding singular vectors. Our next result shows that when the perturbation is sufficiently small, or for large enough singular values, we can indeed take $C=1$ or arbitrarily close to $1$.
\begin{theorem}\label{th:ortho-perturb}
Let $\mathscr{T}$ and $\tilde{\mathscr{T}}$ be two $d_1\times\cdots\times d_p$ ($p>2$) odeco tensors with SVD:
$$
\mathscr{T}=[\{\lambda_k: 1\le k\le d_{\min}\}; \mathbf{U}^{(1)},\ldots,\mathbf{U}^{(p)}],
$$
and
$$
\tilde{\mathscr{T}}=[\{\tilde{\lambda}_k: 1\le k\le d_{\min}\}; \tilde{\mathbf{U}}^{(1)},\ldots,\tilde{\mathbf{U}}^{(p)}],
$$
respectively, where $d_{\min}=\min\{d_1,\ldots, d_p\}$. There exists a permutation $\pi: [d_{\min}]\to [d_{\min}]$ such that for any $\varepsilon>0$,
\begin{equation}
\label{eq:odecoweyl}
|\lambda_k-\tilde{\lambda}_{\pi(k)}|\le \|\tilde{\mathscr{T}}-\mathscr{T}\|,
\end{equation}
and
\begin{equation}
\label{eq:odecodavis}
\max_{1\le q\le p}\sin\angle (\mathbf{u}_k^{(q)},\tilde{\mathbf{u}}_{\pi(k)}^{(q)})\le {(1+\varepsilon)\|\tilde{\mathscr{T}}-\mathscr{T}\|\over \lambda_k},
\end{equation}
provided that $\|\tilde{\mathscr{T}}-\mathscr{T}\|\le c_{\varepsilon}\lambda_k$ for some constant $c_{\varepsilon}>0$ depending on $\varepsilon$ only.
\end{theorem}
The dependence of $c_\varepsilon$ on $\varepsilon$ can also be made explicit. In particular, we can take
$$
c_{\varepsilon}=\min\{[1+2(1+\varepsilon)]^{-1},\,h^{-1}(\varepsilon/(1+\varepsilon))\}
$$
where
\begin{align}\label{eq:cpdisp}
h(x)&=(1+x)\left[1-\left({1-x\over 1+x}\right)^{2}\right]^{1\over 2}+(1+\varepsilon)x(1+x).
\end{align}
When considering infinitesimal perturbations such that $\|\tilde{\mathscr{T}}-\mathscr{T}\|=o(\lambda_k)$, we can express the bound \eqref{eq:odecodavis} for singular vectors as
$$
\max_{1\le q\le p}\sin\angle (\mathbf{u}_k^{(q)},\tilde{\mathbf{u}}_{\pi(k)}^{(q)})\le {\|\tilde{\mathscr{T}}-\mathscr{T}\|\over \lambda_k}+o\left({\|\tilde{\mathscr{T}}-\mathscr{T}\|\over \lambda_k}\right),
$$
which is more convenient for asymptotic analysis. In general, if $\lambda_k=0$ for some $k<d_{\min}$, the $\mathbf{u}_k^{(q)}$s are not identifiable and therefore one cannot bound the effect of perturbation on the singular vectors in a meaningful way, as Theorems \ref{th:odeco-weyl} and \ref{th:ortho-perturb} also indicate. An exception is the case when $0$ is a singular value and simple, i.e., $\lambda_k>0$ for $k=1,\dots,d_{\min}-1$ and $\lambda_{d_{\min}}=0$.
In this case, we can also derive nontrivial bounds for $(\mathbf{u}_{d_{\min}}^{(1)},\ldots, \mathbf{u}_{d_{\min}}^{(p)})$ since the $\mathbf{u}_{d_{\min}}^{(q)}$s are determined by $\mathbf{u}_1^{(q)}, \ldots, \mathbf{u}_{d_{\min}-1}^{(q)}$, for which the perturbation effect can be bounded appropriately. In particular, Theorems \ref{th:odeco-weyl} and \ref{th:ortho-perturb} provide perturbation bounds
$$
\max_{1\le q\le p}\sin\angle(\mathbf{u}_k^{(q)}, \tilde{\mathbf{u}}_{\pi(k)}^{(q)})\le {C\|\tilde{\mathscr{T}}-\mathscr{T}\|\over \lambda_k}
$$
for $1\le k\le d_{\min}-1$. By orthogonality of $\mathbf{U}^{(q)}$ and $\tilde{\mathbf{U}}^{(q)}$, this also yields a perturbation bound for the last singular value-vector pair:
$$
\max_{1\le q\le p} \sin\angle(\mathbf{u}_{d_{\min}}^{(q)}, \tilde{\mathbf{u}}_{\pi(d_{\min})}^{(q)}) \le \dfrac{C\|\tilde{\mathscr{T}}-\mathscr{T}\|}{\lambda_{d_{\min}-1}}.
$$
\subsection{Numerical Illustration}
To further illustrate these bounds, we carried out a couple of numerical experiments following the earlier work of \cite{mu2015successive}. In the first setting, we simulated two sets of i.i.d. random orthogonal matrices $\mathbf{U}^{(q)}$ and $\bar{\mathbf{U}}^{(q)}$ of dimension $20\times 10$, for $q=1,2,3$. We next generated $\hat{\mathbf{U}}^{(q)}$ as the matrix with columns $\hat{\mathbf{u}}^{(q)}_i=\sqrt{1-\rho^2}\mathbf{u}^{(q)}_i+\rho \bar{\mathbf{u}}^{(q)}_i$, for $\rho=15/\lambda$ and $i=1,\dots,10$. Then we computed the orthogonal matrices $\tilde{\mathbf{U}}^{(q)}$ through the polar decomposition $\hat{\mathbf{U}}^{(q)}=\tilde{\mathbf{U}}^{(q)}\mathbf{P}^{(q)}$. Finally we took the two odeco tensors to be $\mathscr{T}=\lambda\displaystyle\sum_{i=1}^{10}\mathbf{u}_i^{(1)}\otimes\mathbf{u}_i^{(2)}\otimes\mathbf{u}_i^{(3)}$ and $\tilde{\mathscr{T}}=\lambda\displaystyle\sum_{i=1}^{10}\tilde{\mathbf{u}}_i^{(1)}\otimes\tilde{\mathbf{u}}_i^{(2)}\otimes\tilde{\mathbf{u}}_i^{(3)}$. We considered $\lambda=\omega\cdot d^{3/4}$ and varied $\omega$ over the 200 values in $\{1000,1000/2,\dots,1000/199,5\}$. Each point on the plot of Figure \ref{fig:rand_ortho} corresponds to one value of $\lambda$ and one random instance of $\mathscr{T}$ and $\tilde{\mathscr{T}}$. To fix ideas, on the Y axis we plot (in the notation of Theorem \ref{th:ortho-perturb}) the values $\max\sin\angle(\mathbf{u}^{(q)}_k,\,\tilde{\mathbf{u}}^{(q)}_{\pi(k)})$, where the maximum is over $1\le q\le 3$ and $1\le k\le 10$. On the X axis, we have $\|\tilde{\mathscr{T}}-\mathscr{T}\|/\lambda$, where the tensor spectral norm was evaluated based on 1000 random starts followed by power iteration.
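A condensed version of this first experiment can be scripted as follows. This is a sketch assuming NumPy, with a fixed perturbation level and with the number of random restarts and power iterations scaled down for brevity; it conveys the construction rather than reproducing the figures exactly.
\begin{verbatim}
# A condensed sketch of the first experiment, assuming NumPy; restarts,
# iterations and the perturbation level are scaled down for brevity.
import numpy as np

rng = np.random.default_rng(1)
d, r, lam, rho = 20, 10, 1.0, 0.03

def rand_orth(d, r):
    return np.linalg.qr(rng.standard_normal((d, r)))[0]

def polar_orth(A):                   # orthogonal factor of A = Q P
    u, _, vt = np.linalg.svd(A, full_matrices=False)
    return u @ vt

def odeco(lams, Us):
    return np.einsum("k,ik,jk,lk->ijl", lams, Us[0], Us[1], Us[2])

U = [rand_orth(d, r) for _ in range(3)]
Ut = [polar_orth(np.sqrt(1 - rho ** 2) * U[q] + rho * rand_orth(d, r))
      for q in range(3)]
D = odeco(lam * np.ones(r), Ut) - odeco(lam * np.ones(r), U)

def spectral_norm(A, restarts=20, iters=100):
    best = 0.0
    for _ in range(restarts):        # random starts + power iteration
        b, c = rng.standard_normal(d), rng.standard_normal(d)
        b, c = b / np.linalg.norm(b), c / np.linalg.norm(c)
        for _ in range(iters):
            a = np.einsum("ijl,j,l->i", A, b, c); a /= np.linalg.norm(a)
            b = np.einsum("ijl,i,l->j", A, a, c); b /= np.linalg.norm(b)
            c = np.einsum("ijl,i,j->l", A, a, b); c /= np.linalg.norm(c)
        best = max(best, abs(np.einsum("ijl,i,j,l->", A, a, b, c)))
    return best

def max_sin(U0, U1):                 # max_k sin angle(u_k, u~_k)
    cos = np.clip(np.abs(np.sum(U0 * U1, axis=0)), 0.0, 1.0)
    return np.sqrt(1.0 - cos.min() ** 2)

obs = max(max_sin(U[q], Ut[q]) for q in range(3))
print(obs, spectral_norm(D) / lam)   # observed angle vs. the bound
\end{verbatim}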
\begin{figure}[htbp]
\centering
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{rand_ortho.pdf}
\caption{Random Orthogonal Tensors}\label{fig:rand_ortho}
\end{minipage}\hfill
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{lroat_gauss.pdf}
\caption{Gaussian Errors}\label{fig:simul_gauss}
\end{minipage}
\end{figure}
For reference, we add the $y=x$ line to the plot. It is evident that the maximum $\sin\angle$ distance between $\mathbf{u}^{(q)}_k$ and $\tilde{\mathbf{u}}^{(q)}_{\pi(k)}$ is less than $\|\tilde{\mathscr{T}}-\mathscr{T}\|/\lambda$ on all instances, thus verifying Theorem \ref{th:ortho-perturb}. In Figure \ref{fig:rand_ortho}, the predicted bounds match almost exactly with the observed ones, showing the optimality of our bounds in this regime. In the second set of experiments, we took a $20\times 20\times 20$ tensor $\mathscr{X}=\lambda\displaystyle\sum_{i=1}^{10}\mathbf{e}_i^{\otimes 3}+\mathscr{E}$, where the error tensor $\mathscr{E}$ consists of i.i.d. random errors $\varepsilon_{ijk}\stackrel{iid}{\sim} N(0,1)$. We set $\lambda=\omega\cdot d^{3/4}$ and varied $\omega$ over the 200 values in $\{1000,1000/2,\dots,1000/199,5\}$. We computed an initial odeco approximation to $\mathscr{X}$ by random initialization followed by power iteration and successive deflation (as described in \cite{anand2014tensor} and \cite{mu2015successive}). Finally the LROAT algorithm of \cite{chen2009tensor} was used to obtain the odeco approximation $\tilde{\mathscr{T}}$. As before, each point on the plot corresponds to one value of $\lambda$ and one random instance of $\mathscr{E}$. This numerical study can be directly compared to the simulation studies from \cite{mu2015successive}, where an algorithm-dependent perturbation bound was derived in a similar setting. Our upper bound is significantly tighter, and appears to be optimal when the perturbation is small.
\subsection{Perturbation of Nonessential Singular Vectors}
Thus far, we have focused on the essential singular values and vectors of odeco tensors. In deriving their perturbation bounds, we actually established a more precise characterization of the perturbation effect on the $\mathbf{u}^{(q)}_k$s. See \eqref{eq:secordpert}. It turns out that we can leverage such a characterization to develop perturbation bounds for general real singular vectors of an odeco tensor. Denote by $r$ the rank of $\mathscr{T}$, or equivalently the number of nonzero $\lambda_k$s. Write
$$
\mathbf{M}^{(q)}=\left[(\tilde{\mathscr{T}}-\mathscr{T})\times_{s\neq q}\mathbf{u}_1^{(s)}\,\dots\, (\tilde{\mathscr{T}}-\mathscr{T})\times_{s\neq q}\mathbf{u}_{r}^{(s)} \right].
$$
\begin{theorem}\label{th:allsingpert}
Assume that
$$
\max_{1\le q\le p}\left\|\mathbf{M}^{(q)}\right\|\le C_1\|\tilde{\mathscr{T}}-\mathscr{T}\|\qquad \text{and}\qquad \|\tilde{\mathscr{T}}-\mathscr{T}\|\le C_2\lambda_r r^{-1/2(p-2)}
$$
for some constants $C_1,C_2>0$.
If $(\lambda;\bfm v^{(1)},\dots,\bfm v^{(p)})$ is a singular value-vector tuple of $\mathscr{T}$, then there exists a singular value-vector tuple $(\tilde{\lambda};\tilde{\bfm v}^{(1)},\dots,\tilde{\bfm v}^{(p)})$ of $\tilde{\mathscr{T}}$ such that
$$
\max_{1\le q\le p}\|\bfm v^{(q)}-\tilde{\bfm v}^{(q)}\|\le \dfrac{C\|\tilde{\mathscr{T}}-\mathscr{T}\|}{\lambda_{\min}^*}
$$
where $\lambda_{\min}^*=\min\{\lambda_k:|\langle\bfm v^{(1)},\bfm u^{(1)}_k\rangle|>0\}$ and $C$ is a constant depending on $C_1,C_2$ and $p$ only. Here we use the convention that $\lambda_{\min}^*=0$ if $\{\lambda_k:|\langle\bfm v^{(1)},\bfm u^{(1)}_k\rangle|>0\}=\emptyset$ and $1/0=+\infty$. Furthermore, if $\lambda>0$, then
$$
\abs*{\tilde{\lambda}-\lambda}\le \dfrac{C\lambda\|\tilde{\mathscr{T}}-\mathscr{T}\|}{\lambda_{\min}^*}.
$$
\end{theorem}

\section{Nearly Orthogonal Tensors}
The perturbation bounds for singular values and vectors derived in the previous section have a direct generalization to a larger class of tensors that are close to being odeco. In particular, let $\mathscr{T}_1$ and $\mathscr{T}_2$ be any two odeco approximations of a tensor $\mathscr{A}$. By the triangle inequality we have
$$
\|\mathscr{T}_1-\mathscr{T}_2\|\le \|\mathscr{T}_1-\mathscr{A}\|+\|\mathscr{T}_2-\mathscr{A}\|.
$$
Now applying Theorems \ref{th:odeco-weyl} and \ref{th:ortho-perturb}, one obtains perturbation bounds for the singular values and singular vectors of $\mathscr{T}_1$ and $\mathscr{T}_2$. Note that these bounds depend on the quality of approximation $\|\mathscr{T}_i-\mathscr{A}\|$: it is natural that a tensor $\mathscr{A}$ close to being odeco can be approximated better in this fashion. Finally, we do not require any optimality property of the odeco approximations. The perturbation bounds hold for any such approximations $\mathscr{T}_1$ and $\mathscr{T}_2$, although the bounds are sharper for better approximations. We now illustrate useful applications of these bounds in several more concrete settings.

\subsection{Perturbation Bounds for Incoherent Tensors}
Consider
\begin{equation}
\label{eq:incoherent}
\mathscr{X}=\sum_{k=1}^{r} \eta_k \bfm a_k^{(1)}\otimes\cdots\otimes\bfm a_k^{(p)}\quad\text{and} \quad \tilde{\mathscr{X}}=\sum_{k=1}^{\tilde{r}} \tilde{\eta}_k \tilde{\bfm a}_k^{(1)}\otimes\cdots\otimes\tilde{\bfm a}_k^{(p)}
\end{equation}
where $\eta_1\ge \ldots\ge\eta_{r}> 0$ and $\tilde{\eta}_1\ge\ldots\ge \tilde{\eta}_{\tilde{r}}>0$. Unlike for odeco tensors, the unit vectors $\bfm a_k^{(q)}$s and $\tilde{\bfm a}_k^{(q)}$s in \eqref{eq:incoherent} are not required to be orthonormal, only close to orthonormal.
More specifically, we shall assume that the $\bA^{(q)}$s and $\tilde{\bA}^{(q)}$s satisfy the isometry condition
\begin{equation}
\label{eq:isometry}
1-\delta\le \min\{\lambda_{\min}(\bA^{(q)}),\,\lambda_{\min}(\tilde{\bA}^{(q)})\}\le \max\{\lambda_{\max}(\bA^{(q)}), \lambda_{\max}(\tilde{\bA}^{(q)})\}\le 1+\delta
\end{equation}
for all $q=1,\dots,p$ and some $0\le \delta<1$, where $\bA^{(q)}=[\bfm a_1^{(q)},\ldots,\bfm a_{r}^{(q)}]$, $\tilde{\bA}^{(q)}=[\tilde{\bfm a}_1^{(q)},\ldots,\tilde{\bfm a}_{\tilde{r}}^{(q)}]$, and $\lambda_{\min}(\cdot)$ and $\lambda_{\max}(\cdot)$ denote the smallest and largest singular values, respectively, of a matrix. Clearly $\delta=0$ if $\bA^{(q)}$ and $\tilde{\bA}^{(q)}$ are orthonormal, so that $\delta$ measures the incoherence of their column vectors. A canonical example of incoherent tensors arises in a probabilistic setting: let $\bfm a_1^{(q)},\ldots, \bfm a_r^{(q)}$ be independently and uniformly sampled from the unit sphere; then it is not hard to see that $\delta=O_p(\sqrt{r/d_q})$. In light of Kruskal's Theorem \citep{kruskal1977three}, the decompositions in \eqref{eq:incoherent} are essentially unique and therefore $\mathscr{X}$ and $\tilde{\mathscr{X}}$ cannot be odeco unless $\delta=0$. However, $\mathscr{X}$ and $\tilde{\mathscr{X}}$ are close to being odeco when $\delta$ is small. More specifically, let $\bA^{(q)}=\bU^{(q)}\bP^{(q)}$ and $\tilde{\bA}^{(q)}=\tilde{\bU}^{(q)}\tilde{\bP}^{(q)}$ be their polar decompositions, and
\begin{equation}
\label{eq:defodecX}
\mathscr{T}=\sum_{k=1}^{r} \eta_k \bfm u_k^{(1)}\otimes\cdots\otimes\bfm u_k^{(p)}\quad\text{and}\quad \tilde{\mathscr{T}}=\sum_{k=1}^{\tilde{r}} \tilde{\eta}_k \tilde{\bfm u}_k^{(1)}\otimes\cdots\otimes\tilde{\bfm u}_k^{(p)}.
\end{equation}
It is clear that $\mathscr{T}$ and $\tilde{\mathscr{T}}$ are odeco; moreover, we can show the following.
\begin{theorem}
\label{th:incoherent}
Let $\mathscr{X}$, $\tilde{\mathscr{X}}$ be defined by \eqref{eq:incoherent} with the unit vectors $\bfm a_k^{(q)}$s and $\tilde{\bfm a}_k^{(q)}$s obeying \eqref{eq:isometry}, and $\mathscr{T}$, $\tilde{\mathscr{T}}$ by \eqref{eq:defodecX}. Then
$$
\|\mathscr{T}-\mathscr{X}\|\le (p+1)\delta\eta_1,\quad\text{and}\quad \|\tilde{\mathscr{T}}-\tilde{\mathscr{X}}\|\le (p+1)\delta\tilde{\eta}_1
$$
and
$$
\max\left\{\max_{1\le q\le p}\sin\angle (\bfm a_k^{(q)},\bfm u_k^{(q)}),\,\max_{1\le q\le p}\sin\angle (\tilde{\bfm a}_k^{(q)},\tilde{\bfm u}_k^{(q)})\right\}\le \delta/\sqrt{2}.
$$
\end{theorem}
Theorem \ref{th:incoherent} reveals that there exist odeco approximations $\mathscr{T}$ and $\tilde{\mathscr{T}}$ such that $\|\mathscr{X}-\mathscr{T}\|\le (p+1)\delta\eta_1$ and $\|\tilde{\mathscr{X}}-\tilde{\mathscr{T}}\|\le (p+1)\delta\tilde{\eta}_1$, respectively. This in conjunction with Theorem \ref{th:odeco-weyl} implies a perturbation bound for the components of $\mathscr{X}$ and $\tilde{\mathscr{X}}$.
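To get a feel for the isometry condition \eqref{eq:isometry} and the $\delta=O_p(\sqrt{r/d_q})$ behavior noted above, the following minimal Python sketch (the sampling scheme and variable names are ours) estimates $\delta$ for factors drawn uniformly from the unit sphere:
\begin{verbatim}
import numpy as np

d, r, trials = 200, 10, 50
deltas = []
for _ in range(trials):
    # columns drawn i.i.d. uniformly from the unit sphere S^{d-1}
    A = np.random.randn(d, r)
    A /= np.linalg.norm(A, axis=0)
    s = np.linalg.svd(A, compute_uv=False)
    # delta is the worst deviation of a singular value of A from 1
    deltas.append(max(1 - s.min(), s.max() - 1))

print(np.mean(deltas), np.sqrt(r / d))  # comparable orders of magnitude
\end{verbatim}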
\begin{corollary}
\label{co:incoherent}
Let $\mathscr{X}$ and $\tilde{\mathscr{X}}$ be defined by \eqref{eq:incoherent} with the unit vectors $\bfm a_k^{(q)}$s and $\tilde{\bfm a}_k^{(q)}$s obeying \eqref{eq:isometry}. Then there exist a numerical constant $C>0$ and a permutation $\pi: [d_{\min}]\to [d_{\min}]$ such that for any $1\le k\le r$,
$$
|\eta_k-\tilde{\eta}_{\pi(k)}|\le C[(p+1)\delta(\eta_1+\tilde{\eta}_1)+\|\mathscr{X}-\tilde{\mathscr{X}}\|]
$$
and
$$
\max_{1\le q\le p}\sin\angle (\bfm a_k^{(q)},\tilde{\bfm a}_{\pi(k)}^{(q)})\le C\{(p+1)\delta(\eta_1+\tilde{\eta}_1)+\|\mathscr{X}-\tilde{\mathscr{X}}\|+\delta\}/\eta_k.
$$
\end{corollary}
We want to point out that Corollary \ref{co:incoherent} can also be viewed as a ``robust'' version of Theorem \ref{th:odeco-weyl}, since the latter is recovered as the special case $\delta=0$.

\subsection{Additive Perturbation of Odeco Tensor}
The perturbation bounds we derived are fairly general and can be applied to various problems in statistics and machine learning. For example, in a typical spectral learning scenario, we observe $\mathscr{X}$, which is $\mathscr{T}$ ``contaminated'' by an additive perturbation $\mathscr{E}$, and want to infer from $\mathscr{X}$ the $\bfm u_k^{(q)}$s. See, e.g., \cite{janzamin2019spectral}. In general, $\mathscr{X}=\mathscr{T}+\mathscr{E}$ is no longer odeco, and we may not be able to define its SVD in the same fashion as \eqref{eq:svd}. However, when $\|\mathscr{E}\|\le \varepsilon$, any $\varepsilon$-odeco approximation of $\mathscr{X}$ is necessarily close to $\mathscr{T}$ as well, and its singular values and vectors are close to those of $\mathscr{T}$. More precisely, we have the following result as an immediate consequence of Theorem \ref{th:odeco-weyl}.
\begin{corollary}
\label{co:nearly}
Let
$$
\mathscr{T}=[\{\lambda_k: 1\le k\le d_{\min}\}; \bU^{(1)},\ldots,\bU^{(p)}],
$$
be an odeco tensor ($p\ge 3$), and
$$
\tilde{\mathscr{T}}=[\{\tilde{\lambda}_k: 1\le k\le d_{\min}\}; \tilde{\bU}^{(1)},\ldots,\tilde{\bU}^{(p)}]
$$
be any odeco approximation to $\mathscr{X}:=\mathscr{T}+\mathscr{E}$. Then there are a numerical constant $C\ge 1$ and a permutation $\pi: [d_{\min}]\to [d_{\min}]$ such that
\begin{equation}
\label{eq:coweyl}
|\lambda_k-\tilde{\lambda}_{\pi(k)}|\le C(\|\tilde{\mathscr{T}}-\mathscr{X}\|+\|\mathscr{E}\|),
\end{equation}
and
\begin{equation}
\label{eq:codavis}
\max_{1\le q\le p}\sin\angle (\bfm u_k^{(q)},\tilde{\bfm u}_{\pi(k)}^{(q)})\le {C(\|\tilde{\mathscr{T}}-\mathscr{X}\|+\|\mathscr{E}\|)\over \lambda_k}
\end{equation}
for all $k=1,\ldots, d_{\min}$.
\end{corollary}
While Corollary \ref{co:nearly} holds for any odeco approximation to $\mathscr{X}$, it is oftentimes of interest to ``estimate'' the singular values and vectors of $\mathscr{T}$ using a ``good'' odeco approximation. In particular, one may consider the best odeco approximation:
$$
\tilde{\mathscr{T}}^{\rm best}:=\argmin_{\mathscr{A}{\rm \ is\ odeco}}\|\mathscr{X}-\mathscr{A}\|.
$$
It is clear that
$$
\|\tilde{\mathscr{T}}^{\rm best}-\mathscr{X}\|\le \|\mathscr{T}-\mathscr{X}\|=\|\mathscr{E}\|,
$$
where we write $\tilde{\mathscr{T}}^{\rm best}=[\{\tilde{\lambda}_k: 1\le k\le d_{\min}\}; \tilde{\bU}^{(1)},\ldots,\tilde{\bU}^{(p)}]$.
The bounds \eqref{eq:coweyl} and \eqref{eq:codavis} now become:
\begin{equation}
\label{eq:coweyl1}
|\lambda_k-\tilde{\lambda}_{\pi(k)}|\le C\|\mathscr{E}\|,
\end{equation}
and
\begin{equation}
\label{eq:codavis1}
\max_{1\le q\le p}\sin\angle (\bfm u_k^{(q)},\tilde{\bfm u}_{\pi(k)}^{(q)})\le {C\|\mathscr{E}\|\over \lambda_k}.
\end{equation}
Indeed \eqref{eq:coweyl1} and \eqref{eq:codavis1} continue to hold for any odeco approximation $\tilde{\mathscr{T}}$ obeying
\begin{equation}
\label{eq:odecoappr}
\|\tilde{\mathscr{T}}-\mathscr{X}\|\lesssim \|\mathscr{E}\|.
\end{equation}
It is worth noting that computing an odeco approximation that satisfies \eqref{eq:odecoappr} is not always straightforward. In fact, the development of efficient algorithms that can produce a ``good'' odeco approximation to a nearly odeco tensor is an active research area. A flurry of recent works suggests that finding an odeco approximation $\tilde{\mathscr{T}}$ satisfying \eqref{eq:odecoappr} is feasible at least when $\|\mathscr{E}\|$ is sufficiently small. Interested readers are referred to \cite{anand2014tensor, mu2015successive,mu2017greedy, belkin2018eigenvectors} and references therein for more detailed discussions regarding this aspect.

\subsection{High Dimensional Tensor SVD}
\label{sec:app}
A particularly common type of perturbation $\mathscr{E}$ is a noisy tensor whose entries are independent standard normal random variables, especially relevant when the $d_j$s are large. This so-called tensor SVD problem has been studied earlier by \cite{richard2014statistical,liu2017characterizing,zhang2018tensor} among others. It is among the most commonly used methods to reduce the dimensionality of the data, and oftentimes serves as a useful first step to capture the essential features in the data for downstream analysis. As noted before, a natural estimate of $\mathscr{T}$ is the best odeco approximation to $\mathscr{X}$:
\begin{equation}
\label{eq:defhatTsvd}
\hat{\mathscr{T}}=[\{\hat{\lambda}_k: 1\le k\le d_{\min}\}; \hat{\bU}^{(1)},\ldots,\hat{\bU}^{(p)}]:=\argmin_{\mathscr{A} {\rm \ is\ odeco}}\|\mathscr{X}-\mathscr{A}\|.
\end{equation}
A standard argument yields $\|\mathscr{E}\|=O_p(\sqrt{d_1+\cdots+d_p})$. See, e.g., \cite{raskutti2019convex}. Together with Corollary \ref{co:nearly}, this implies that there exists a permutation $\pi: [d_{\min}]\to [d_{\min}]$ so that
\begin{equation}
\label{eq:svdbd1}
\EE \max_{1\le k\le d_{\min}}|\hat{\lambda}_{\pi(k)}-\lambda_k|\le C\cdot\sqrt{d_1+\cdots+d_p},
\end{equation}
and
\begin{equation}
\label{eq:svdbd2}
\EE\max_{1\le q\le p} \sin\angle(\bfm u_k^{(q)},\hat{\bfm u}_{\pi(k)}^{(q)})\le C\cdot \min\left\{{\sqrt{d_1+\ldots+d_p}\over \lambda_k},1\right\},
\end{equation}
for any $k=1,\ldots,d_{\min}$. Bounds similar to those given by \eqref{eq:svdbd2} are known when $\mathscr{T}$ is of rank one, that is, $\lambda_2=\cdots=\lambda_{d_{\min}}=0$. See, e.g., \cite{richard2014statistical}. Inequality \eqref{eq:svdbd2} indicates that the same bounds hold uniformly over all singular values and vectors of an odeco tensor. In other words, we can estimate any singular value and vectors of $\mathscr{T}$ at the same rate as if all other singular values were zero, or equivalently, as in the rank-one case. This also draws a contrast with the setting considered by \cite{zhang2018tensor}.
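As a concrete illustration of how such an odeco approximation can be computed in practice, the following Python sketch implements random-initialization power iteration with successive deflation, in the spirit of \cite{anand2014tensor} and \cite{mu2015successive}. It is a heuristic sketch only, specialized to symmetric third-order tensors (as in our second experiment); the helper names are ours, and a final re-orthonormalization step (e.g., the polar step of LROAT) could be appended:
\begin{verbatim}
import numpy as np

def contract(T, u, v):
    # multilinear contraction T x_2 u x_3 v; returns a vector
    return np.einsum('ijk,j,k->i', T, u, v)

def power_iteration(T, n_init=20, n_iter=50):
    # best rank-one term found over several random starts
    d = T.shape[0]
    best_lam, best_x = -np.inf, None
    for _ in range(n_init):
        x = np.random.randn(d)
        x /= np.linalg.norm(x)
        for _ in range(n_iter):
            x = contract(T, x, x)
            x /= np.linalg.norm(x)
        lam = np.einsum('ijk,i,j,k->', T, x, x, x)
        if lam > best_lam:
            best_lam, best_x = lam, x
    return best_lam, best_x

def odeco_approx(X, r):
    # successive deflation: peel off one rank-one term at a time
    T, lams, us = X.copy(), [], []
    for _ in range(r):
        lam, u = power_iteration(T)
        lams.append(lam)
        us.append(u)
        T = T - lam * np.einsum('i,j,k->ijk', u, u, u)
    return np.array(lams), np.column_stack(us)
\end{verbatim}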
Generalizing the rank-one model of \cite{richard2014statistical}, \cite{zhang2018tensor} studies efficient estimation strategies for $\mathscr{T}$ and its decomposition when it is of low multilinear ranks. Their analysis requires that $\mathscr{T}$ be nearly cubic, e.g., $d_1\asymp d_2\asymp d_3$, and that the ranks be of an order up to $d_1^{1/2}$, among other conditions. Odeco tensors have more innate structure and consequently, as \eqref{eq:svdbd1} and \eqref{eq:svdbd2} show, if $\mathscr{T}$ is odeco, its shape and rank are irrelevant for estimating its singular values and vectors. Note that if instead we use the perturbation bounds derived from matricization, the factor $\sqrt{d_1+\cdots+d_p}$ on the righthand side of \eqref{eq:svdbd1} and \eqref{eq:svdbd2} becomes $\sqrt{d_1\times\cdots\times d_{p-1}}$, which can be significantly larger. Indeed, both bounds \eqref{eq:svdbd1} and \eqref{eq:svdbd2} can be shown to be minimax optimal, in that no other estimates of the singular vectors or values based upon $\mathscr{X}$ could attain a faster rate of convergence; they therefore characterize the exact effect of dimensionality on our ability to infer the $\lambda_k$s or $\bfm u_k$s.
\begin{theorem}
\label{th:tpcalower}
Consider the tensor SVD model $\mathscr{X}=\mathscr{T}+\mathscr{E}$ where
$$
\mathscr{T}=[\{\lambda_k: 1\le k\le d_{\min}\}; \bU^{(1)},\ldots,\bU^{(p)}]
$$
is odeco and $\mathscr{E}$ has independent standard normal entries. Then there exists a constant $c>0$ such that
\begin{equation}
\label{eq:svdbd3}
\inf_{\tilde{\lambda}_k}\sup_{\bfm u_k^{(q)}\in {\cal S}^{d_q-1}: 1\le q\le p}\EE |\tilde{\lambda}_k-\lambda_k|\ge c\cdot\sqrt{d_1+\cdots+d_p},
\end{equation}
and
\begin{equation}
\label{eq:svdbd4}
\inf_{\tilde{\bfm u}_k^{(1)},\ldots,\tilde{\bfm u}_k^{(p)}}\sup_{\bfm u_k^{(q)}\in {\cal S}^{d_q-1}: 1\le q\le p}\EE\max_{1\le q\le p} \sin\angle(\bfm u_k^{(q)},\tilde{\bfm u}_k^{(q)})\ge c\cdot\min\left\{{\sqrt{d_1+\ldots+d_p}\over \lambda_k},1\right\},
\end{equation}
where the infimum in \eqref{eq:svdbd3} and \eqref{eq:svdbd4} is taken over all estimates of the form $\hat{\mathscr{T}}=\sum_{k=1}^{d_{\min}}\tilde{\lambda}_k\tilde{\bfm u}_k^{(1)}\otimes\dots\otimes\tilde{\bfm u}_k^{(p)}$ (not necessarily odeco) based on observing $\mathscr{X}$.
\end{theorem}
Theorem \ref{th:tpcalower} again confirms that our perturbation bounds are optimal, at least up to a numerical constant.

\section{Proofs}
\begin{proof}[Proof of Proposition \ref{pr:example}]
For brevity, we shall assume that $\lambda=1$. For any $\bfm x^{(1)}\in {\cal S}^{d-1}$, there exists $\bfm c=(c_1,\ldots, c_d)^\top \in {\cal S}^{d-1}$ such that
$$
\bfm x^{(1)}=c_1\bfm u_1^{(1)}+\ldots+c_d\bfm u_d^{(1)}.
$$
Therefore
\begin{eqnarray*}
&&\min_{\bfm x^{(1)}\in {\cal S}^{d-1}}\max_{\bfm x^{(2)},\ldots, \bfm x^{(p)}\in {\cal S}^{d-1}}\langle \mathscr{T}, \bfm x^{(1)}\otimes\cdots\otimes\bfm x^{(p)}\rangle\\
&=&\min_{\bfm c\in {\cal S}^{d-1}}\max_{\bfm x^{(2)},\ldots, \bfm x^{(p)}\in {\cal S}^{d-1}}\langle \mathscr{T}, (c_1\bfm u_1^{(1)}+\ldots+c_d\bfm u_d^{(1)})\otimes\bfm x^{(2)}\otimes\cdots\otimes\bfm x^{(p)}\rangle\\
&=&\min_{\bfm c\in {\cal S}^{d-1}}\max_{\bfm x^{(2)},\ldots, \bfm x^{(p)}\in {\cal S}^{d-1}}\langle \sum_{k=1}^d c_k\bfm u_k^{(2)}\otimes\cdots\otimes\bfm u_k^{(p)}, \bfm x^{(2)}\otimes\cdots\otimes\bfm x^{(p)}\rangle\\
&=&\min_{\bfm c\in {\cal S}^{d-1}}\left\|\sum_{k=1}^d c_k\bfm u_k^{(2)}\otimes\cdots\otimes\bfm u_k^{(p)}\right\|.
\end{eqnarray*}
Note that since
$$
\sum_{k=1}^d c_k\bfm u_k^{(2)}\otimes\cdots\otimes\bfm u_k^{(p)}
$$
is a $(p-1)$th order odeco tensor, we get
$$
\left\|\sum_{k=1}^d c_k\bfm u_k^{(2)}\otimes\cdots\otimes\bfm u_k^{(p)}\right\|=\max_{1\le k\le d}|c_k|,
$$
so that
$$
\min_{\bfm x^{(1)}\in {\cal S}^{d-1}}\max_{\bfm x^{(2)},\ldots, \bfm x^{(p)}\in {\cal S}^{d-1}}\langle \mathscr{T}, \bfm x^{(1)}\otimes\cdots\otimes\bfm x^{(p)}\rangle=\min_{\bfm c\in {\cal S}^{d-1}}\max_{1\le k\le d}|c_k|={1\over \sqrt{d}}.
$$
This completes the proof.
\end{proof}

\vskip 25pt

\begin{proof}[Proof of Theorem \ref{th:odeco-weyl}]
Theorem \ref{th:odeco-weyl} in fact follows from Theorem \ref{th:ortho-perturb}, as we now describe. Indeed, note that Theorem \ref{th:ortho-perturb} already shows that \eqref{eq:odecoweyl0} and \eqref{eq:odecodavis0} hold for any $\|\mathscr{T}-\tilde{\mathscr{T}}\|\le c_\varepsilon\lambda_k$ with an appropriate choice of constant $C=1+\varepsilon$.
When $\|\mathscr{T}-\tilde{\mathscr{T}}\|>c_{\varepsilon}\lambda_k$, for any $k'\in[d]$,
$$
\max_{1\le q\le p}\sin\angle(\bfm u_k^{(q)},\tilde{\bfm u}_{k'}^{(q)})\le 1\le {\|\mathscr{T}-\tilde{\mathscr{T}}\|\over c_\varepsilon\lambda_k},
$$
so that \eqref{eq:odecodavis0} holds with $C=\max\{(1+\varepsilon),1/c_{\varepsilon}\}$. A careful inspection of the proof of Theorem \ref{th:ortho-perturb} also shows that \eqref{eq:odecoweyl0} holds with $C=1$ not only for any $\lambda_k\ge \|\mathscr{T}-\tilde{\mathscr{T}}\|/c_\varepsilon$ but also for any $\tilde{\lambda}_k\ge \|\mathscr{T}-\tilde{\mathscr{T}}\|/c_\varepsilon$. On the other hand, for any $\lambda_k< \|\mathscr{T}-\tilde{\mathscr{T}}\|/c_\varepsilon$ and $\tilde{\lambda}_{\pi(k)}< \|\mathscr{T}-\tilde{\mathscr{T}}\|/c_\varepsilon$, we must have
$$
|\lambda_k-\tilde{\lambda}_{\pi(k)}|<\|\mathscr{T}-\tilde{\mathscr{T}}\|/c_\varepsilon,
$$
which again shows that \eqref{eq:odecoweyl0} holds with $C=1/c_\varepsilon$. Optimizing over all possible $\varepsilon$, we find that $\max\{1+\varepsilon,1/c_{\varepsilon}\}$ is minimized at $\varepsilon=2.94$ with an objective value of $16.48$, so that we can take $C=17$.
\end{proof}

\vskip 25pt

\begin{proof}[Proof of Theorem \ref{th:ortho-perturb}]
We shall derive the stronger statement:
\begin{align*}\label{eq:secordpert}
\max_{1\le q\le p}&\sin\angle\left(\tilde{\bfm u}_{\pi(k)}^{(q)},\,\bfm u_{k}^{(q)} +\dfrac{1}{\lambda_{k}}(\tilde{\mathscr{T}}-\mathscr{T})\times_{s\neq q} \bfm u_{k}^{(s)}\right)\\
\le & \left(2+{\|\tilde{\mathscr{T}}-\mathscr{T}\|\over\lambda_k}\right) \left(\dfrac{(1+\varepsilon)\|\tilde{\mathscr{T}}-\mathscr{T}\|}{\lambda_{k}}\right)^{p-1}\numberthis
\end{align*}
for $k\in[d_{\min}]$. The proof is fairly involved, and we begin with a short summary of the main challenges and ideas.

The proof proceeds by induction over $k$. For the basic case $k=1$, we can derive the bounds for $\tilde{\lambda}_1$ and $\tilde{\bfm u}_1$ using the variational characterization of $\lambda_1$ and $\tilde{\lambda}_1$. Special attention is needed for the case when the best rank one approximation of $\mathscr{T}$ is not unique, or equivalently when $\lambda_1$ is not simple. In this case it is crucial to identify the right singular vectors of $\tilde{\mathscr{T}}$ to be matched with the $\bfm u_1^{(q)}$s. A heuristic argument for why this is possible when $p>2$ can be illustrated in the case $\mathscr{T}=\sum_{i=1}^d \mathbf{e}_i^{\otimes p}$. When $p=2$, any $\bfm x\in {\cal S}^{d-1}$ satisfies $\langle \mathscr{T}, \bfm x\otimes \bfm x\rangle=1$, so we cannot recover the singular vectors, even without perturbation. When $p>2$, however, for any $\bfm x\in{\cal S}^{d-1}$,
$$
\langle \mathscr{T}, \bfm x\otimes\cdots\otimes \bfm x\rangle=\sum_{i=1}^d x_i^p\le \max_{1\le i\le d} |x_i|^{p-2}\sum_{i=1}^d x_i^2=\max_{1\le i\le d} |x_i|^{p-2}\le 1,
$$
so that the maximum over the unit sphere is attained only when $\bfm x=\mathbf{e}_i$ for some $i$.
The case $k>1$ is more delicate: there we need to make use of the fact that we have matched all leading $k-1$ singular values and vectors of $\mathscr{T}$ and $\tilde{\mathscr{T}}$. Due to the lack of a Courant-Fischer-Weyl min-max principle for higher order tensors, we can only resort to, again, the variational characterization of $\lambda_k$ and $\tilde{\lambda}_k$. Note that we could proceed in an identical fashion to the basic case if $\tilde{\bfm u}_l^{(q)}=\bfm u_l^{(q)}$ for $l<k$. Of course this is not the case. Nonetheless we do have bounds for the perturbation of $\tilde{\lambda}_1,\ldots, \tilde{\lambda}_{k-1}$ and $\tilde{\bfm u}_1^{(q)},\ldots, \tilde{\bfm u}_{k-1}^{(q)}$ in the form of \eqref{eq:secordpert}. By carefully leveraging these perturbation bounds, along with the fact that $\bfm u_k^{(q)}$ and $\tilde{\bfm u}_k^{(q)}$ must be orthogonal to $\{\bfm u_1^{(q)},\ldots, \bfm u_{k-1}^{(q)}\}$ and $\{\tilde{\bfm u}_1^{(q)},\ldots, \tilde{\bfm u}_{k-1}^{(q)}\}$ respectively, we can derive the desired perturbation bounds for $\tilde{\lambda}_k$ and $\tilde{\bfm u}^{(q)}_k$ and thereby complete the induction.

We are now in a position to present the detailed proof. For notational convenience, we shall assume that $d_1=\cdots=d_p=:d$. The general proof follows by identical steps, with $d$ replaced everywhere by $d_{\min}$ and some equality signs replaced by ``less than or equal to'' signs. In the body of the proof, we indicate which steps have such a change. As noted before, the proof proceeds by induction. To this end, we first consider the basic case $k=1$.

\paragraph{\textbf{Basic case}} Recall that
$$
\lambda_1=\max_{\bfm a^{(q)}\in {\cal S}^{d-1}: q=1,\ldots, p} \langle\mathscr{T}, \bfm a^{(1)}\otimes\cdots\otimes\bfm a^{(p)}\rangle
$$
and
$$
\tilde{\lambda}_1=\max_{\bfm a^{(q)}\in {\cal S}^{d-1}: q=1,\ldots, p} \langle\tilde{\mathscr{T}}, \bfm a^{(1)}\otimes\cdots\otimes\bfm a^{(p)}\rangle.
$$
We consider separately the cases when $\tilde{\lambda}_1\le \lambda_1$ and $\tilde{\lambda}_1> \lambda_1$.
\bigskip

\paragraph{\textbf{Basic case (a): $\tilde{\lambda}_1\le \lambda_1$}} Observe that
\begin{eqnarray*}
\lambda_1&=&\langle \mathscr{T},\bfm u_1^{(1)}\otimes\cdots\otimes \bfm u_1^{(p)}\rangle\\
&\le&\langle \tilde{\mathscr{T}},\bfm u_1^{(1)}\otimes\cdots\otimes\bfm u_1^{(p)}\rangle+ |\langle \tilde{\mathscr{T}}-\mathscr{T},\bfm u_1^{(1)}\otimes\cdots\otimes\bfm u_1^{(p)}\rangle| \\
&=&\sum_{k=1}^d \tilde{\lambda}_k\prod_{q=1}^p \langle \bfm u_1^{(q)},\tilde{\bfm u}_k^{(q)}\rangle+ |\langle \tilde{\mathscr{T}}-\mathscr{T},\bfm u_1^{(1)}\otimes\cdots\otimes\bfm u_1^{(p)}\rangle|.
\end{eqnarray*}
The first term can be further bounded by
\begin{eqnarray*}
&&\sum_{k=1}^d \tilde{\lambda}_k\prod_{q=1}^p \langle \bfm u_1^{(q)},\tilde{\bfm u}_k^{(q)}\rangle\\
&\le&\max_{1\le k\le d}\left\{\tilde{\lambda}_k\prod_{q=1}^p |\langle \bfm u_1^{(q)},\tilde{\bfm u}_k^{(q)}\rangle|^{(p-2)/p}\right\}\times\left(\sum_{k=1}^d\prod_{q=1}^p |\langle \bfm u_1^{(q)},\tilde{\bfm u}_k^{(q)}\rangle|^{2/p}\right)\\
&\le&\max_{1\le k\le d}\left\{\tilde{\lambda}_k\prod_{q=1}^p |\langle \bfm u_1^{(q)},\tilde{\bfm u}_k^{(q)}\rangle|^{(p-2)/p}\right\}\times \left(\prod_{q=1}^p\left(\sum_{k=1}^d|\langle \bfm u_1^{(q)},\tilde{\bfm u}_k^{(q)}\rangle|^2\right)\right)^{1/p}\\
\footnotemark &\le &\max_{1\le k\le d}\left\{\tilde{\lambda}_k\prod_{q=1}^p |\langle \bfm u_1^{(q)},\tilde{\bfm u}_k^{(q)}\rangle|^{(p-2)/p}\right\},
\end{eqnarray*}
\footnotetext{\label{note1}This line holds with equality when $d_1=\dots=d_p$ but is a ``$\le$" in general.}
\noindent where the second inequality follows from H\"older's inequality. Denote by $\pi(1)$ an index that maximizes the rightmost side; when there is more than one maximizer, we take $\pi(1)$ to be an arbitrary maximizing index. Then
$$
\lambda_1\le \tilde{\lambda}_{\pi(1)}+ |\langle \tilde{\mathscr{T}}-\mathscr{T},\bfm u_1^{(1)}\otimes\cdots\otimes\bfm u_1^{(p)}\rangle|,
$$
which, together with the fact that $\tilde{\lambda}_{\pi(1)}\le \tilde{\lambda}_1\le\lambda_1$, implies (by analogous calculations) that
$$
|\lambda_1- \tilde{\lambda}_{\pi(1)}|\le \max\{|\langle \tilde{\mathscr{T}}-\mathscr{T},\bfm u_1^{(1)}\otimes\cdots\otimes\bfm u_1^{(p)}\rangle|, \, |\langle \tilde{\mathscr{T}}-\mathscr{T},\tilde{\bfm u}_{\pi(1)}^{(1)}\otimes\cdots \otimes\tilde{\bfm u}_{\pi(1)}^{(p)}\rangle| \}.
$$
In addition,
$$\begin{aligned}
\lambda_1&\le \tilde{\lambda}_{\pi(1)}\prod_{q=1}^p |\langle \bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)}\rangle|^{(p-2)/p}+ |\langle \tilde{\mathscr{T}}-\mathscr{T},\bfm u_1^{(1)}\otimes\cdots\otimes\bfm u_1^{(p)}\rangle| \\&\le \lambda_1\prod_{q=1}^p |\langle \bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)}\rangle|^{(p-2)/p}+ |\langle \tilde{\mathscr{T}}-\mathscr{T},\bfm u_1^{(1)}\otimes\cdots\otimes\bfm u_1^{(p)}\rangle|.
\end{aligned}
$$
Thus,
$$
\left(\prod_{q=1}^p|\langle \bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)}\rangle|\right)^{1/p}\ge (1-\lambda_1^{-1} |\langle \tilde{\mathscr{T}}-\mathscr{T},\bfm u_1^{(1)}\otimes\cdots\otimes\bfm u_1^{(p)}\rangle|)^{1/(p-2)}.
$$
Now recall that
$$
\sin^2\angle (\bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)})=\langle \bfm u_1^{(q)}-\langle \bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)}\rangle\tilde{\bfm u}_{\pi(1)}^{(q)},\bfm u_1^{(q)}\rangle.
$$
We get
\begin{equation}\label{eq:prebd}
\begin{split}
\prod_{q=1}^p\sin^2\angle (\bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)}) =&\prod_{q=1}^p(1-\langle \bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)}\rangle^2) \\
\le & \left(1-\frac{1}{p}\sum_{q=1}^p\langle \bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)}\rangle^2\right)^p\\
\le & \left(1-\left(\prod_{q=1}^p|\langle \bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)}\rangle|\right)^{2/p}\right)^p\\
\le & \left(1- (1-\lambda_1^{-1}\|\tilde{\mathscr{T}}-\mathscr{T}\|)^{2/(p-2)}\right)^p.
\end{split}
\end{equation}
In the above we use the AM-GM inequality to get the first and second inequalities. We shall now use this to derive a sharper bound for the left-hand side. Note that
$$
\mathscr{T}(\bI,\bfm u_{1}^{(2)},\ldots,\bfm u_{1}^{(p)}) =\lambda_1\bfm u_{1}^{(1)}.
$$
Thus, for any unit vector $\bfm v\perp\tilde{\bfm u}^{(1)}_{\pi(1)}$ we get
\begin{align*}
\lambda_{1}\langle \bfm v,\,\bfm u_{1}^{(1)} \rangle =& \mathscr{T}(\bfm v,\bfm u_{1}^{(2)},\ldots,\bfm u_{1}^{(p)})\\
=& (\mathscr{T}-\tilde{\mathscr{T}})(\bfm v,\bfm u_{1}^{(2)},\ldots,\bfm u_{1}^{(p)}) +\tilde{\mathscr{T}}(\bfm v,\bfm u_{1}^{(2)},\ldots,\bfm u_{1}^{(p)}).
\end{align*}
The second term on the rightmost side can be further bounded by
\begin{eqnarray*}
&& |\tilde{\mathscr{T}}(\bfm v,\bfm u_{1}^{(2)},\ldots,\bfm u_{1}^{(p)})|\\
&\footnotemark=& \abs*{\sum_{k\neq \pi(1)}\tilde{\lambda}_k \langle \tilde{\bfm u}_k^{(1)},\bfm v \rangle \prod_{q=2}^p\langle \bfm u_1^{(q)}, \tilde{\bfm u}_k^{(q)}\rangle}\\
&\le&\tilde{\lambda}_{\pi(1)}\sum_{k\neq \pi(1)} \prod_{q=2}^p|\langle \bfm u_1^{(q)}, \tilde{\bfm u}_k^{(q)}\rangle|\\
&\le&\tilde{\lambda}_{\pi(1)} \cdot \prod_{q=2}^p\left(\sum_{k\neq \pi(1)} |\langle \bfm u_1^{(q)}, \tilde{\bfm u}_k^{(q)}\rangle|^2\right)^{1/2}\\
&\le&\lambda_1 \prod_{q=2}^p\sin\angle (\bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)}),
\end{eqnarray*}
\footnotetext{\label{note2} This equality is true even when the $d_i$s are not necessarily equal.}
where the second inequality follows from the Cauchy-Schwarz inequality.
The last inequality uses $\tilde{\lambda}_{\pi(1)}\le \lambda_1$. This gives
\begin{align*}\label{eq:secopert}
&\sin\angle\left(\tilde{\bfm u}_{\pi(1)}^{(1)},\,\bfm u_{1}^{(1)}+\dfrac{1}{\lambda_1}(\tilde{\mathscr{T}}-\mathscr{T})(\bI,\bfm u_{1}^{(2)},\ldots,\bfm u_{1}^{(p)})\right)\\
=&\sup_{\bfm v\in{\cal S}^{d-1},\,\bfm v\perp \tilde{\bfm u}_{\pi(1)}^{(1)}} \abs*{\langle \bfm v,\,\bfm u_{1}^{(1)} +\dfrac{1}{\lambda_1}(\tilde{\mathscr{T}}-\mathscr{T})(\bI,\bfm u_{1}^{(2)},\ldots,\bfm u_{1}^{(p)})\rangle}\\
\le & \prod_{q=2}^p\sin\angle (\bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)}).\numberthis
\end{align*}

\noindent Moreover,
\begin{align*}
\sin\angle(\tilde{\bfm u}_{\pi(1)}^{(1)},\,\bfm u_1^{(1)}) =&\sup_{\bfm v\in{\cal S}^{d-1},\bfm v\perp\tilde{\bfm u}_{\pi(1)}^{(1)}}|\langle \bfm v,\bfm u_{1}^{(1)}\rangle|\\
\le & \dfrac{1}{\lambda_1} \|(\mathscr{T}-\tilde{\mathscr{T}})(\bI,\bfm u_{1}^{(2)},\ldots,\bfm u_{1}^{(p)})\| +\prod_{q=2}^p\sin\angle (\bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)}).
\end{align*}
Arguing similarly for each $q=1,\dots,p$ and multiplying both sides by $\lambda_1\sin\angle(\bfm u_1^{(q)},\,\tilde{\bfm u}_{\pi(1)}^{(q)})$, we have
\begin{align*}
&\lambda_1\sin^2\angle(\bfm u_1^{(q)},\,\tilde{\bfm u}_{\pi(1)}^{(q)})\\
\le& \,\,\lambda_1\prod_{q=1}^p\sin\angle (\bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)}) +\|\tilde{\mathscr{T}}-\mathscr{T}\|\cdot\sin\angle (\bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)})\numberthis.
\end{align*}
%
In light of \eqref{eq:prebd}, the first term on the rightmost side can be bounded by
$$
\begin{aligned}
&\lambda_1\prod_{q=1}^p\sin\angle (\bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)})\\
\le &\lambda_1\left(\prod_{q=1}^p\sin\angle (\bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)})\right)^{{p-2}\over p}\cdot \max_{1\le q\le p}\sin^2\angle (\bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)})\\
\le& \lambda_1[1-(1-\lambda_1^{-1}\|\tilde{\mathscr{T}}-\mathscr{T}\|)^{2/(p-2)}]^{p-2\over 2}\max_{1\le q\le p}\sin^2\angle (\bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)}).
\end{aligned}
$$
Thus, rearranging terms in the above expression gives
$$
\max_{1\le q\le p}\sin\angle (\bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)})\le \left(1-[1-(1-\lambda_1^{-1}\|\tilde{\mathscr{T}}-\mathscr{T}\|)^{2/(p-2)}]^{p-2\over 2}\right)^{-1}\cdot{\|\tilde{\mathscr{T}}-\mathscr{T}\|\over \lambda_1}.
$$
Note that the function
$$
h_1(x)=\left(1-[1-(1-x)^{2/(p-2)}]^{p-2\over 2}\right)^{-1}
$$
is monotonically increasing and continuously differentiable at $0$ with $h_1(0)=1$. We get
$$
\max_{1\le q\le p}\sin\angle (\bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)})\le {(1+\varepsilon)\|\tilde{\mathscr{T}}-\mathscr{T}\|\over \lambda_1},
$$
provided that $\|\tilde{\mathscr{T}}-\mathscr{T}\|\le c_{\varepsilon,p}\lambda_1$ for any positive numerical constant $c_{\varepsilon,p} < h_1^{-1}(1+\varepsilon)$. Plugging this back into \eqref{eq:secopert}, analogously for all $q=1,\dots,p$, we have
$$
\max_{1\le q\le p} \sin\angle\left(\tilde{\bfm u}_{\pi(1)}^{(q)},\,\bfm u_{1}^{(q)} +\dfrac{1}{\lambda_1}(\tilde{\mathscr{T}}-\mathscr{T})\times_{k\neq q}\bfm u_1^{(k)} \right) \le \left(\dfrac{(1+\varepsilon)\|\tilde{\mathscr{T}}-\mathscr{T}\|}{\lambda_1}\right)^{p-1}.
$$

\bigskip

\paragraph{\textbf{Basic case (b): $\tilde{\lambda}_1>\lambda_1$}} Next consider the case when $\lambda_1<\tilde{\lambda}_1$. As in the previous case, we can derive that
\begin{eqnarray*}
\lambda_1&\le&\max_{1\le k\le d}\left\{\tilde{\lambda}_k\prod_{q=1}^p |\langle \bfm u_1^{(q)},\tilde{\bfm u}_k^{(q)}\rangle|^{(p-2)/p}\right\} +|\langle\tilde{\mathscr{T}}-\mathscr{T},\,\tilde{\bfm u}_{\pi(1)}^{(1)}\otimes\dots\otimes \tilde{\bfm u}_{\pi(1)}^{(p)}\rangle|\\
&=&\tilde{\lambda}_{\pi(1)}\prod_{q=1}^p |\langle \bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)}\rangle|^{(p-2)/p} +|\langle\tilde{\mathscr{T}}-\mathscr{T},\,\tilde{\bfm u}_{\pi(1)}^{(1)}\otimes\dots\otimes \tilde{\bfm u}_{\pi(1)}^{(p)}\rangle|,
\end{eqnarray*}
where $\pi(1)$ again denotes a maximizing index. On the other hand,
$$
\tilde{\lambda}_{\pi(1)}\le \tilde{\lambda}_1\le \lambda_1+\|\mathscr{T}-\tilde{\mathscr{T}}\|,
$$
where the second inequality follows from the triangle inequality. Therefore
$$
\tilde{\lambda}_{\pi(1)}\left(1-\prod_{q=1}^p |\langle \bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)}\rangle|^{(p-2)/p}\right)\le 2\|\tilde{\mathscr{T}}-\mathscr{T}\|,
$$
leading to
\begin{align*}
\prod_{q=1}^p\sin^2\angle (\bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)}) \le& \left(1-\left({1-2\tilde{\lambda}_{\pi(1)}^{-1}\|\tilde{\mathscr{T}}-\mathscr{T}\|}\right)^{2/(p-2)}\right)^p\\
\le& \left(1-\left({1-\lambda_1^{-1}\|\tilde{\mathscr{T}}-\mathscr{T}\|}\over {1+\lambda_1^{-1}\|\tilde{\mathscr{T}}-\mathscr{T}\|} \right)^{2/(p-2)}\right)^p.
\end{align*}
Now following an argument identical to the previous case, we can get
\begin{eqnarray*}
&&\lambda_1\max_{1\le q\le p}\sin^2\angle (\bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)})\\
&\le&(\lambda_1+\|\tilde{\mathscr{T}}-\mathscr{T}\|)\left[1-\left({1-\lambda_1^{-1}\|\tilde{\mathscr{T}}-\mathscr{T}\|}\over {1+\lambda_1^{-1}\|\tilde{\mathscr{T}}-\mathscr{T}\|} \right)^{2/(p-2)}\right]^{p-2\over 2}\times\\
&&\hskip 20pt\times\max_{1\le q\le p}\sin^2\angle (\bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)})+\|\tilde{\mathscr{T}}-\mathscr{T}\|\cdot\max_{1\le q\le p}\sin\angle (\bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)}),
\end{eqnarray*}
leading to
\begin{eqnarray*}
&&\max_{1\le q\le p}\sin\angle (\bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)})\\
&\le& \left(1-\left(1+\dfrac{\|\tilde{\mathscr{T}}-\mathscr{T}\|}{\lambda_1}\right)\left[1-\left({1-\lambda_1^{-1}\|\tilde{\mathscr{T}}-\mathscr{T}\|}\over {1+\lambda_1^{-1}\|\tilde{\mathscr{T}}-\mathscr{T}\|} \right)^{2/(p-2)}\right]^{p-2\over 2}\right)^{-1}{\|\tilde{\mathscr{T}}-\mathscr{T}\|\over \lambda_1}.
\end{eqnarray*}
Note that the function
$$
h_2(x)=\left[1-(1+x)\left[1-\left({1-x}\over {1+x} \right)^{2/(p-2)}\right]^{p-2\over 2}\right]^{-1}
$$
is continuously differentiable at $0$ with $h_2(0)=1$ and $h_2'(0)>0$. We get
$$
\max_{1\le q\le p}\sin\angle (\bfm u_1^{(q)},\tilde{\bfm u}_{\pi(1)}^{(q)})\le {(1+\varepsilon)\|\tilde{\mathscr{T}}-\mathscr{T}\|\over \lambda_1},
$$
provided that $\|\tilde{\mathscr{T}}-\mathscr{T}\|\le c_{\varepsilon,p}\lambda_1$ for any positive numerical constant $c_{\varepsilon,p}\le h_2^{-1}(1+\varepsilon)$. Finally, following the same steps as in \eqref{eq:secopert}, we have
\begin{align*}
&\sin\angle\left(\tilde{\bfm u}_{\pi(1)}^{(q)}, \bfm u_1^{(q)} +\dfrac{1}{\lambda_1}(\tilde{\mathscr{T}}-\mathscr{T})\times_{k\neq q}\bfm u_1^{(k)}\right)\\
\le & \dfrac{\tilde{\lambda}_{\pi(1)}}{\lambda_1} \prod_{k\neq q}\sin\angle\left(\bfm u_1^{(k)},\tilde{\bfm u}_{\pi(1)}^{(k)}\right) \le \left(1+\dfrac{\|\tilde{\mathscr{T}}-\mathscr{T}\|}{\lambda_1}\right) \left({(1+\varepsilon)\|\tilde{\mathscr{T}}-\mathscr{T}\|\over \lambda_1}\right)^{p-1}.
\end{align*}

\bigskip

\paragraph{\textbf{Induction}} Next we treat the more general case by induction. To this end, assume that there exists an injective map $\pi:[l]\to [d]$ such that for all $k\le l\,(<r)$,
\begin{equation}
\label{eq:inductionlam}
|\lambda_k-\tilde{\lambda}_{\pi(k)}|\le \|\tilde{\mathscr{T}}-\mathscr{T}\|
\end{equation}
and
\begin{equation}
\label{eq:inductionu}
\max_{1\le q\le p}\sin\angle(\bfm u_k^{(q)},\tilde{\bfm u}_{\pi(k)}^{(q)})\le {(1+\varepsilon)\|\tilde{\mathscr{T}}-\mathscr{T}\|\over \lambda_k}.
\end{equation}
We shall now argue that these bounds continue to hold for $k=l+1$.
\paragraph{\textbf{Induction (a): $\lambda_{l+1}\ge \max_{k\notin\pi([l])}\tilde{\lambda}_{k}$}} Similar to before,
\begin{eqnarray*}
\lambda_{l+1}&=&\langle \mathscr{T},\bfm u_{l+1}^{(1)}\otimes\cdots\otimes \bfm u_{l+1}^{(p)}\rangle\\
&\le&\langle \tilde{\mathscr{T}},\bfm u_{l+1}^{(1)}\otimes\cdots\otimes\bfm u_{l+1}^{(p)}\rangle+\|\tilde{\mathscr{T}}-\mathscr{T}\|\\
&\le&\max_{1\le k\le d}\left\{\tilde{\lambda}_k\prod_{q=1}^p |\langle \bfm u_{l+1}^{(q)},\tilde{\bfm u}_k^{(q)}\rangle|^{(p-2)/p}\right\}+\|\tilde{\mathscr{T}}-\mathscr{T}\|.
\end{eqnarray*}
We first argue that the index maximizing the rightmost side is not from $\pi([l])$. To this end, note that by the induction hypothesis, for any $k\in [l]$,
\begin{eqnarray*}
\tilde{\lambda}_{\pi(k)}\prod_{q=1}^p |\langle \bfm u_{l+1}^{(q)},\tilde{\bfm u}_{\pi(k)}^{(q)}\rangle|^{(p-2)/p}&\le&(\lambda_k+\|\tilde{\mathscr{T}}-\mathscr{T}\|)\left(\max_{1\le q\le p}\sin\angle(\bfm u_k^{(q)},\tilde{\bfm u}_{\pi(k)}^{(q)})\right)^{p-2}\\
&\le&(\lambda_k+\|\tilde{\mathscr{T}}-\mathscr{T}\|)\left((1+\varepsilon)\|\tilde{\mathscr{T}}-\mathscr{T}\|/\lambda_k\right)\\
&\le&(1+\varepsilon)(1+c_{\varepsilon,p})\|\tilde{\mathscr{T}}-\mathscr{T}\|.
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
&&\max_{1\le k\le l}\left\{\tilde{\lambda}_{\pi(k)}\prod_{q=1}^p |\langle \bfm u_{l+1}^{(q)},\tilde{\bfm u}_{\pi(k)}^{(q)}\rangle|^{(p-2)/p}\right\}+\|\tilde{\mathscr{T}}-\mathscr{T}\|\\
&\le& [1+(1+\varepsilon)(1+c_{\varepsilon,p})]\|\tilde{\mathscr{T}}-\mathscr{T}\|\\
&\le& c_{\varepsilon,p}[1+(1+\varepsilon)(1+c_{\varepsilon,p})]\lambda_{l+1}<\lambda_{l+1},
\end{eqnarray*}
by taking $0<c_{\varepsilon, p}<h_3^{-1}(1)$ for the function $h_3(x)=x[1+(1+\varepsilon)(1+x)]$. Thus the index, hereafter denoted by $\pi(l+1)$, that maximizes
$$
\tilde{\lambda}_k\prod_{q=1}^p |\langle \bfm u_{l+1}^{(q)},\tilde{\bfm u}_k^{(q)}\rangle|^{(p-2)/p}
$$
must be different from $\{\pi(1),\ldots,\pi(l)\}$. In addition, because
$$
\tilde{\lambda}_{\pi(l+1)}\le \lambda_{l+1}\le \tilde{\lambda}_{\pi(l+1)}\prod_{q=1}^p |\langle \bfm u_{l+1}^{(q)},\tilde{\bfm u}_{\pi(l+1)}^{(q)}\rangle|^{(p-2)/p}+\|\tilde{\mathscr{T}}-\mathscr{T}\|\le \tilde{\lambda}_{\pi(l+1)}+\|\tilde{\mathscr{T}}-\mathscr{T}\|,
$$
we immediately deduce that
$$
|\tilde{\lambda}_{\pi(l+1)}-\lambda_{l+1}|\le \|\tilde{\mathscr{T}}-\mathscr{T}\|,
$$
and
\begin{equation}
\label{eq:prelimbdu}
\left(\prod_{q=1}^p|\langle \bfm u_{l+1}^{(q)},\tilde{\bfm u}_{\pi(l+1)}^{(q)}\rangle|\right)^{1/p} \ge \left(1-\lambda_{l+1}^{-1}\|\tilde{\mathscr{T}}-\mathscr{T}\|\right)^{1/(p-2)}.
\end{equation}

\noindent Similar to before, we can derive
\begin{eqnarray*}
\lambda_{l+1}\sin^2\angle(\bfm u_{l+1}^{(1)},\tilde{\bfm u}_{\pi(l+1)}^{(1)}) &\,\,\footnoteref{note2}=& \mathscr{T}(\bfm u_{l+1}^{(1)}-\langle\bfm u_{l+1}^{(1)},\tilde{\bfm u}_{\pi(l+1)}^{(1)}\rangle\tilde{\bfm u}_{\pi(l+1)}^{(1)},\bfm u_{l+1}^{(2)},\ldots,\bfm u_{l+1}^{(p)})\\
&\le&\tilde{\mathscr{T}}(\bfm u_{l+1}^{(1)}-\langle\bfm u_{l+1}^{(1)},\tilde{\bfm u}_{\pi(l+1)}^{(1)}\rangle\tilde{\bfm u}_{\pi(l+1)}^{(1)},\bfm u_{l+1}^{(2)},\ldots,\bfm u_{l+1}^{(p)})\\
&&\hskip 50pt+\|\tilde{\mathscr{T}}-\mathscr{T}\|\sin\angle(\bfm u_{l+1}^{(1)},\tilde{\bfm u}_{\pi(l+1)}^{(1)}).
\end{eqnarray*}
Moreover, because $\lambda_{l+1}\ge \max_{k\notin\pi([l])}\tilde{\lambda}_{k}$, we get
\begin{eqnarray*}
&&\tilde{\mathscr{T}}(\bfm u_{l+1}^{(1)}-\langle\bfm u_{l+1}^{(1)},\tilde{\bfm u}_{\pi(l+1)}^{(1)}\rangle\tilde{\bfm u}_{\pi(l+1)}^{(1)},\bfm u_{l+1}^{(2)},\ldots,\bfm u_{l+1}^{(p)})\\
&\footnoteref{note2}=&\sum_{k\neq \pi(l+1)} \tilde{\lambda}_k\prod_{q=1}^p\langle \bfm u_{l+1}^{(q)}, \tilde{\bfm u}_k^{(q)}\rangle\\
&\le&\sum_{k=1}^l \tilde{\lambda}_{\pi(k)}\prod_{q=1}^p\langle \bfm u_{l+1}^{(q)}, \tilde{\bfm u}_{\pi(k)}^{(q)}\rangle+\lambda_{l+1}\sum_{k\notin \pi([l+1])} \prod_{q=1}^p|\langle \bfm u_{l+1}^{(q)}, \tilde{\bfm u}_k^{(q)}\rangle|\\
&\le&\sum_{k=1}^l \tilde{\lambda}_{\pi(k)}\prod_{q=1}^p\langle \bfm u_{l+1}^{(q)}, \tilde{\bfm u}_{\pi(k)}^{(q)}\rangle +\lambda_{l+1}\sin\angle(\bfm u^{(1)}_{l+1},\tilde{\bfm u}^{(1)}_{\pi(l+1)}) \left(\sum_{k\notin \pi([l+1])}\prod_{q=2}^p|\langle \bfm u_{l+1}^{(q)}, \tilde{\bfm u}_{k}^{(q)} \rangle |^2\right)^{1/2} \\
&\le&\sum_{k=1}^l \tilde{\lambda}_{\pi(k)}\prod_{q=1}^p\langle \bfm u_{l+1}^{(q)}, \tilde{\bfm u}_{\pi(k)}^{(q)}\rangle +\lambda_{l+1}\prod_{q=1}^p\sin\angle(\bfm u^{(q)}_{l+1},\tilde{\bfm u}^{(q)}_{\pi(l+1)}).
\end{eqnarray*}
The first term on the rightmost side can be bounded by
\begin{align*}\label{eq:prevterms}
&\sum_{k=1}^l \tilde{\lambda}_{\pi(k)}\prod_{q=1}^p\langle \bfm u_{l+1}^{(q)}, \tilde{\bfm u}_{\pi(k)}^{(q)}\rangle\\
\le&\max_{1\le k\le l}\{\tilde{\lambda}_{\pi(k)}\sin\angle (\bfm u_k^{(1)},\tilde{\bfm u}_{\pi(k)}^{(1)})\}\left(\max_{1\le q\le p}\sin\angle(\bfm u_{l+1}^{(q)}, \tilde{\bfm u}_{\pi(l+1)}^{(q)})\right)^{p-1}\\
\le&\max_{1\le k\le l}\left\{(\lambda_k+\|\tilde{\mathscr{T}}-\mathscr{T}\|)\sin\angle (\bfm u_k^{(1)},\tilde{\bfm u}_{\pi(k)}^{(1)})\right\}\left(\max_{1\le q\le p}\sin\angle(\bfm u_{l+1}^{(q)}, \tilde{\bfm u}_{\pi(l+1)}^{(q)})\right)^{p-1}\\
\le&(1+\|\tilde{\mathscr{T}}-\mathscr{T}\|/\lambda_l)(1+\varepsilon)\|\tilde{\mathscr{T}}-\mathscr{T}\|\left(\max_{1\le q\le p}\sin\angle(\bfm u_{l+1}^{(q)}, \tilde{\bfm u}_{\pi(l+1)}^{(q)})\right)^{p-1}\numberthis\\
\le&(1+\|\tilde{\mathscr{T}}-\mathscr{T}\|/\lambda_l)(1+\varepsilon)\|\tilde{\mathscr{T}}-\mathscr{T}\|\left(\max_{1\le q\le p}\sin\angle(\bfm u_{l+1}^{(q)}, \tilde{\bfm u}_{\pi(l+1)}^{(q)})\right)^2.
\end{align*}
On the other hand, as before, we can derive from \eqref{eq:prelimbdu} that
$$
\prod_{q=1}^p\sin^2\angle (\bfm u_{l+1}^{(q)},\tilde{\bfm u}_{\pi(l+1)}^{(q)})\le \left(1-\left(1-{\|\tilde{\mathscr{T}}-\mathscr{T}\|\over \lambda_{l+1}}\right)^{2/(p-2)}\right)^p,
$$
so that the second term can be bounded by
\begin{eqnarray*}
&&\lambda_{l+1}\prod_{q=1}^p\sin\angle(\bfm u^{(q)}_{l+1},\tilde{\bfm u}^{(q)}_{\pi(l+1)})\\
&\le&\lambda_{l+1}\left[1-\left(1-{\|\tilde{\mathscr{T}}-\mathscr{T}\|\over \lambda_{l+1}}\right)^{2/(p-2)}\right]^{p-2\over 2}\max_{1\le q\le p}\sin^2\angle(\bfm u_{l+1}^{(q)}, \tilde{\bfm u}_{\pi(l+1)}^{(q)}).
\end{eqnarray*}
Denote
$$
h_4(x;\varepsilon,p)=(1+x)\left[1-\left({1-x}\over{1+x}\right)^{2/(p-2)}\right]^{p-2\over 2}+(1+\varepsilon)x(1+x).
$$
Then, since $\lambda_{l+1}\le \lambda_l$,
\begin{eqnarray*}
\lambda_{l+1}\sin^2\angle(\bfm u_{l+1}^{(1)},\tilde{\bfm u}_{\pi(l+1)}^{(1)})&\le &\lambda_{l+1}h_4\left({\|\tilde{\mathscr{T}}-\mathscr{T}\|\over \lambda_{l+1}}; \varepsilon, p\right)\max_{1\le q\le p}\sin^2\angle (\bfm u_{l+1}^{(q)},\tilde{\bfm u}_{\pi(l+1)}^{(q)})\\
&&+\|\tilde{\mathscr{T}}-\mathscr{T}\|\cdot\max_{1\le q\le p}\sin\angle (\bfm u_{l+1}^{(q)},\tilde{\bfm u}_{\pi(l+1)}^{(q)}),
\end{eqnarray*}
implying
$$
\sin\angle(\bfm u_{l+1}^{(1)},\tilde{\bfm u}_{\pi(l+1)}^{(1)})\le{1\over 1- h_4\left(\|\tilde{\mathscr{T}}-\mathscr{T}\|/\lambda_{l+1}; \varepsilon, p\right)}\cdot{\|\tilde{\mathscr{T}}-\mathscr{T}\|\over\lambda_{l+1}}.
$$
Observe that $h_4$ is a continuous and increasing function of $x$ with $h_4(0)=0$. Provided that $\|\tilde{\mathscr{T}}-\mathscr{T}\|\le c_{\varepsilon,p}\lambda_{l+1}$ for some positive numerical constant $c_{\varepsilon,p}\le h_4^{-1}(\varepsilon/(1+\varepsilon))$, we get
\begin{equation}\label{eq:sineind}
\max_{1\le q\le p}\sin\angle(\bfm u_{l+1}^{(q)}, \tilde{\bfm u}_{\pi(l+1)}^{(q)})\le \dfrac{(1+\varepsilon)\|\tilde{\mathscr{T}}-\mathscr{T}\|}{\lambda_{l+1}}.
\end{equation}
Finally, for any unit vector $\bfm v\perp \tilde{\bfm u}_{\pi(l+1)}^{(q)}$ we can derive
\begin{align*}\label{eq:indsecord}
\lambda_{l+1}\langle\bfm v,\bfm u_{l+1}^{(q)}\rangle=&\mathscr{T}\times_q\bfm v\times_{k\neq q}\bfm u_{l+1}^{(k)}\\
=&(\mathscr{T}-\tilde{\mathscr{T}})\times_q\bfm v\times_{k\neq q}\bfm u_{l+1}^{(k)} +\tilde{\mathscr{T}}\times_q\bfm v\times_{k\neq q}\bfm u_{l+1}^{(k)}.\numberthis
\end{align*}
Then by a similar calculation as above, we can bound
\begin{align*}
&|\tilde{\mathscr{T}}\times_q\bfm v\times_{k\neq q}\bfm u_{l+1}^{(k)}|\\
\le & \sum_{s=1}^l\tilde{\lambda}_{\pi(s)}\prod_{k\neq q}|\langle \bfm u_{l+1}^{(k)},\,\tilde{\bfm u}_{\pi(s)}^{(k)}\rangle|+\lambda_{l+1}\prod_{k\neq q}\sin\angle \left(\bfm u_{l+1}^{(k)},\,\tilde{\bfm u}_{\pi(l+1)}^{(k)}\right).
\end{align*}
Following \eqref{eq:prevterms}, we have
\begin{align*}
&\sum_{s=1}^l\tilde{\lambda}_{\pi(s)}\prod_{k\neq q}|\langle \bfm u_{l+1}^{(k)},\,\tilde{\bfm u}_{\pi(s)}^{(k)}\rangle|\\
\le& (1+\|\tilde{\mathscr{T}}-\mathscr{T}\|/\lambda_l)(1+\varepsilon)\|\tilde{\mathscr{T}}-\mathscr{T}\| \left(\max_{1\le q\le p}\sin\angle(\bfm u_{l+1}^{(q)},\,\tilde{\bfm u}_{\pi(l+1)}^{(q)})\right)^{p-2}\\
\le& \lambda_{l+1}\left(1+{\|\tilde{\mathscr{T}}-\mathscr{T}\|\over\lambda_l}\right) \left(\dfrac{(1+\varepsilon)\|\tilde{\mathscr{T}}-\mathscr{T}\|}{\lambda_{l+1}}\right)^{p-1},
\end{align*}
where we use \eqref{eq:sineind} in the last line. Now taking the supremum over $\bfm v$ on both sides of \eqref{eq:indsecord}, we have
\begin{align*}
&\sin\angle\left(\tilde{\bfm u}_{\pi(l+1)}^{(q)},\,\bfm u_{l+1}^{(q)} +\dfrac{1}{\lambda_{l+1}}(\tilde{\mathscr{T}}-\mathscr{T})\times_{k\neq q} \bfm u_{l+1}^{(k)} \right)\\
=&\sup_{\bfm v\in{\cal S}^{d-1}, \bfm v\perp\tilde{\bfm u}_{\pi(l+1)}^{(q)}} \abs*{\langle \bfm v,\,\bfm u_{l+1}^{(q)} +\dfrac{1}{\lambda_{l+1}}(\tilde{\mathscr{T}}-\mathscr{T})\times_{k\neq q} \bfm u_{l+1}^{(k)} \rangle}\\
\le & \dfrac{1}{\lambda_{l+1}}\sup_{\bfm v\in{\cal S}^{d-1}, \bfm v\perp\tilde{\bfm u}_{\pi(l+1)}^{(q)}} |{\tilde{\mathscr{T}}}\times_q\bfm v\times_{k\neq q}\bfm u_{l+1}^{(k)}|\\
\le & \left(2+{\|\tilde{\mathscr{T}}-\mathscr{T}\|\over\lambda_l}\right) \left(\dfrac{(1+\varepsilon)\|\tilde{\mathscr{T}}-\mathscr{T}\|}{\lambda_{l+1}}\right)^{p-1}.
\end{align*}

\bigskip

\paragraph{\textbf{Induction (b): $\lambda_{l+1}< \max_{k\notin\pi([l])}\tilde{\lambda}_{k}$}} Write $\tilde{\bU}_l^{(1)}=(\tilde{\bfm u}^{(1)}_{\pi(1)},\ldots,\tilde{\bfm u}^{(1)}_{\pi(l)})$.
Then
\begin{eqnarray*}
\max_{k\notin\pi([l])}\tilde{\lambda}_{k}&=&\max_{\substack{\mathbf{a}^{(q)}\in\mathcal{S}^{d-1},\, 1\le q\le p\\ (\tilde{\mathbf{U}}_l^{(1)})^\top\mathbf{a}^{(1)}=0}}\langle \tilde{\mathscr{T}}, \mathbf{a}^{(1)}\otimes\cdots\otimes\mathbf{a}^{(p)}\rangle\\
&\le&\max_{\substack{\mathbf{a}^{(q)}\in\mathcal{S}^{d-1},\, 1\le q\le p\\ (\tilde{\mathbf{U}}_l^{(1)})^\top\mathbf{a}^{(1)}=0}}\langle \mathscr{T}, \mathbf{a}^{(1)}\otimes\cdots\otimes\mathbf{a}^{(p)}\rangle+\|\tilde{\mathscr{T}}-\mathscr{T}\|.
\end{eqnarray*}
Observe that
\begin{eqnarray*}
&&\max_{\substack{\mathbf{a}^{(q)}\in\mathcal{S}^{d-1},\, 1\le q\le p\\ (\tilde{\mathbf{U}}_l^{(1)})^\top\mathbf{a}^{(1)}=0}}\langle {\mathscr{T}}, \mathbf{a}^{(1)}\otimes\cdots\otimes\mathbf{a}^{(p)}\rangle\\
&=&\max_{\mathbf{a}^{(q)}\in\mathcal{S}^{d-1},\, 1\le q\le p}\langle {\mathscr{T}}(I-\tilde{\mathbf{U}}_l^{(1)}(\tilde{\mathbf{U}}_l^{(1)})^\top,I,\ldots,I), \mathbf{a}^{(1)}\otimes\cdots\otimes\mathbf{a}^{(p)}\rangle\\
&=&\max_{\mathbf{a}^{(q)}\in\mathcal{S}^{d-1},\, 1\le q\le p}\left\langle \sum_{k=1}^d{\lambda}_k(I-\tilde{\mathbf{U}}_l^{(1)}(\tilde{\mathbf{U}}_l^{(1)})^\top){\mathbf{u}}_k^{(1)}\otimes\cdots\otimes{\mathbf{u}}_k^{(p)}, \mathbf{a}^{(1)}\otimes\cdots\otimes\mathbf{a}^{(p)}\right\rangle\\
&=&\max_{1\le k\le d}\{{\lambda}_k\|(I-\tilde{\mathbf{U}}_l^{(1)}(\tilde{\mathbf{U}}_l^{(1)})^\top){\mathbf{u}}_k^{(1)}\|\}.
\end{eqnarray*}
By the induction hypothesis, for any $k\le l$,
\begin{eqnarray*}
\lambda_k\|(I-\tilde{\mathbf{U}}_l^{(1)}(\tilde{\mathbf{U}}_l^{(1)})^\top)\mathbf{u}_k^{(1)}\|&\le& \lambda_k\|(I-\tilde{\mathbf{u}}_{\pi(k)}^{(1)}(\tilde{\mathbf{u}}_{\pi(k)}^{(1)})^\top)\mathbf{u}_k^{(1)}\|\\
&\le& (1+\varepsilon)\|\tilde{\mathscr{T}}-\mathscr{T}\|\\
&<&\lambda_{l+1}-\|\tilde{\mathscr{T}}-\mathscr{T}\|,
\end{eqnarray*}
where the last inequality holds by taking $c_{\varepsilon,p}>0$ small enough. Hence
$$
\max_{k\notin\pi([l])}\tilde{\lambda}_{k}\le \max_{k>l}\{{\lambda}_k\|(I-\tilde{\mathbf{U}}_l^{(1)}(\tilde{\mathbf{U}}_l^{(1)})^\top){\mathbf{u}}_k^{(1)}\|\}+\|\tilde{\mathscr{T}}-\mathscr{T}\|\le \lambda_{l+1}+\|\tilde{\mathscr{T}}-\mathscr{T}\|.
$$
This shows that the index, denoted by $\pi(l+1)$, that maximizes
$$
\tilde{\lambda}_k\prod_{q=1}^p |\langle \mathbf{u}_{l+1}^{(q)},\tilde{\mathbf{u}}_k^{(q)}\rangle|^{(p-2)/p}
$$
is distinct from the indices in $\pi([l])$.
Moreover, following the same argument as in the previous case, we can derive that
$$
\tilde{\lambda}_{\pi(l+1)}-\|\tilde{\mathscr{T}}-\mathscr{T}\|\le \lambda_{l+1}\le \tilde{\lambda}_{\pi(l+1)}\prod_{q=1}^p |\langle \mathbf{u}_{l+1}^{(q)},\tilde{\mathbf{u}}_{\pi(l+1)}^{(q)}\rangle|^{(p-2)/p}+\|\tilde{\mathscr{T}}-\mathscr{T}\|,
$$
so that
$$
|\tilde{\lambda}_{\pi(l+1)}-\lambda_{l+1}|\le \|\tilde{\mathscr{T}}-\mathscr{T}\|
$$
and
$$
\left(\prod_{q=1}^p|\langle \mathbf{u}_{l+1}^{(q)},\tilde{\mathbf{u}}_{\pi(l+1)}^{(q)}\rangle|\right)^{1/p}\ge \left(1-2\lambda_{l+1}^{-1}\|\tilde{\mathscr{T}}-\mathscr{T}\|\right)^{1/(p-2)}.
$$
The rest of the proof is identical to that of the previous case, works with the same $c_{\varepsilon,p}$, and is therefore omitted.

\noindent Gathering the conditions used throughout the proof, we need the constant
$$
c_{\varepsilon,p}\le \min\{(1+\varepsilon)^{-1},\,h_1^{-1}(1+\varepsilon),\,h_2^{-1}(1+\varepsilon),\,h_3^{-1}(1),\,h_4^{-1}(\varepsilon/(1+\varepsilon))\},
$$
where
\begin{align}\label{eq:cpfns}
h_1(x)&=\left(1-[1-(1-x)^{2/(p-2)}]^{p-2\over 2}\right)^{-1} \nonumber \\
h_2(x)&=\left[1-(1+x)\left[1-\left(\frac{1-x}{1+x}\right)^{2/(p-2)}\right]^{p-2\over 2}\right]^{-1} \\
h_3(x;\varepsilon)&= x[1+(1+\varepsilon)(1+x)] \nonumber \\
h_4(x;\varepsilon)&=(1+x)\left[1-\left(\frac{1-x}{1+x}\right)^{2/(p-2)}\right]^{p-2\over 2}+(1+\varepsilon)x(1+x). \nonumber
\end{align}
Although, for definiteness, we take the constant $c_{\varepsilon, p}>0$ in the proof to depend on the order of the tensor, it can be chosen to be increasing in $p$, so the argument holds with $c_{\varepsilon, 3}$ in place of $c_{\varepsilon, p}$ for all $p\ge 3$.
\end{proof}

\vskip 20pt

\begin{proof}[Proof of Theorem \ref{th:allsingpert}]
The proof uses the following lemma, which is proved in the appendix.
\begin{lemma}\label{le:specnorm}
Under the assumptions of Theorem \ref{th:allsingpert}, there exist a numerical constant $C>0$, a permutation $\pi:[d_{\min}]\to [d_{\min}]$, and sign vectors $\boldsymbol{\gamma}^{(q)}\in \{+1,-1\}^{d_{\min}}$ such that the $d_q\times k$ matrices with orthonormal columns
$$\mathbf{V}^{(q)}_k=[\mathbf{u}_1^{(q)}\,\dots\,\mathbf{u}_{k}^{(q)}]\quad\text{and}\quad \tilde{\mathbf{V}}^{(q)}_k=[\gamma_1^{(q)}\tilde{\mathbf{u}}_{\pi(1)}^{(q)}\,\dots\,\gamma_{k}^{(q)}\tilde{\mathbf{u}}_{\pi(k)}^{(q)}]$$
satisfy
\begin{equation}\label{eq:specnorm}
\|\mathbf{V}^{(q)}_k-\tilde{\mathbf{V}}^{(q)}_k\|\le \frac{C\|\tilde{\mathscr{T}}-\mathscr{T}\|}{\lambda_k}
\end{equation}
for $1\le k\le r$.
\end{lemma}
\medskip
We first consider the case where $\lambda>0$.
Following Lemma \ref{le:specnorm}, real singular vectors of $\mathscr{T}$ and $\tilde{\mathscr{T}}$ with nonzero singular values $\lambda$ and $\tilde{\lambda}$ can be written as
$$
\mathbf{v}^{(q)}=\mathbf{V}^{(q)}_{r}\mathbf{a}\quad\text{and}\quad\tilde{\mathbf{v}}^{(q)}=\tilde{\mathbf{V}}^{(q)}_{r}\tilde{\mathbf{a}},
$$
where $\mathbf{a}$, $\tilde{\mathbf{a}}\in \mathbb{R}^{r}$ are two unit vectors with $|\mathbf{a}_k|=\left(\tfrac{\lambda}{\lambda_k}\right)^{1/(p-2)}\mathbbm{1}(|\mathbf{a}_k|\neq 0)$ and $|\tilde{\mathbf{a}}_k|=\left(\tfrac{\tilde{\lambda}}{\tilde{\lambda}_{\pi(k)}}\right)^{1/(p-2)}\mathbbm{1}(|\tilde{\mathbf{a}}_k|\neq 0)$ for $1\le k\le r$. Consider a singular vector tuple $\{\mathbf{v}^{(q)}\}$ with active set $S=\{k:|\langle \mathbf{u}_k^{(q)},\,\mathbf{v}^{(q)}\rangle|>0\}$ (note that the vectors $\mathbf{v}^{(1)},\dots,\mathbf{v}^{(p)}$ have the same active set). Let $\{\tilde{\mathbf{v}}^{(q)}\}$ be a corresponding singular vector tuple with signs
$$
{\rm sign}\left(\langle\tilde{\mathbf{v}}^{(q)},\,\tilde{\mathbf{u}}^{(q)}_{\pi(k)}\rangle\right) =\gamma_k^{(q)}\,{\rm sign}\left(\langle\mathbf{v}^{(q)},\,\mathbf{u}^{(q)}_k\rangle\right),
$$
where $\gamma_k^{(q)}$ and $\pi$ are respectively the signs and the permutation from Lemma \ref{le:specnorm}. We will now show that $|\lambda-\tilde{\lambda}|$ and $\|\mathbf{v}^{(q)}-\tilde{\mathbf{v}}^{(q)}\|$ are small. For notational convenience we treat only the case where
$$
{\rm sign}\left(\langle\tilde{\mathbf{v}}^{(q)},\,\tilde{\mathbf{u}}^{(q)}_{\pi(k)}\rangle\right) ={\rm sign}\left(\langle\mathbf{v}^{(q)},\,\mathbf{u}^{(q)}_k\rangle\right)=1
$$
for all $1\le q\le p$ and $1\le k \le d_{\min}$; the result for the other possible sign patterns follows similarly. To begin with, we have
\begin{align*}
\|\mathbf{v}^{(q)}-\tilde{\mathbf{v}}^{(q)}\|\le{}& \|\mathbf{V}^{(q)}_K-\tilde{\mathbf{V}}^{(q)}_K\|+\|\tilde{\mathbf{V}}^{(q)}_K\|\,\|\tilde{\mathbf{a}}-\mathbf{a}\|,\numberthis\label{eq:singtri}
\end{align*}
where $K=\max\{k:|\langle \mathbf{u}_k^{(q)},\,\mathbf{v}^{(q)}\rangle|>0\}$. Notice that $\lambda_K=\min\{\lambda_k:|\langle \mathbf{u}_k^{(q)},\,\mathbf{v}^{(q)}\rangle|>0\}$. By Lemma \ref{le:specnorm},
\begin{equation}\label{eq:matpart}
\|\mathbf{V}^{(q)}_K-\tilde{\mathbf{V}}^{(q)}_K\|\le \dfrac{C\|\tilde{\mathscr{T}}-\mathscr{T}\|}{\lambda_K}=\dfrac{C\|\tilde{\mathscr{T}}-\mathscr{T}\|}{\lambda_{\min}^*}.
\end{equation}
In light of \eqref{eq:singtri}, it is thus enough to bound $\|\tilde{\mathbf{a}}-\mathbf{a}\|$. Writing $x=\lambda_k^{1/(p-2)}$ and $y=\tilde{\lambda}_{\pi(k)}^{1/(p-2)}$, we have
\begin{align*}
\left|\dfrac{1}{\lambda_k^{1/(p-2)}}-\dfrac{1}{\tilde{\lambda}_{\pi(k)}^{1/(p-2)}}\right| ={}&\dfrac{|x-y|}{xy}\\
={}&\dfrac{|x^{p-2}-y^{p-2}|}{xy(x^{p-3}+x^{p-4}y+\dots+xy^{p-4}+y^{p-3})}\\
\le{}& \dfrac{|x^{p-2}-y^{p-2}|}{x^{p-2}y+y^{p-2}x}\\
\le{}& \min\left\{\dfrac{|\lambda_k-\tilde{\lambda}_{\pi(k)}|}{\lambda_k(\tilde{\lambda}_{\pi(k)})^{1/(p-2)}},\dfrac{|\lambda_k-\tilde{\lambda}_{\pi(k)}|} {\tilde{\lambda}_{\pi(k)}(\lambda_{k})^{1/(p-2)}}\right\}\\
\le{}&\dfrac{C\|\tilde{\mathscr{T}}-\mathscr{T}\|}{\lambda_k}\cdot \dfrac{1}{(\tilde{\lambda}_{\pi(k)})^{1/(p-2)}}
\end{align*}
by Theorem \ref{th:ortho-perturb}, and thus, for $S=\{k:\mathbf{a}_k\neq 0\}$,
\begin{align*}
\sum_{k\in S}\left|\dfrac{1}{\lambda_k^{1/(p-2)}}-\dfrac{1}{\tilde{\lambda}_{\pi(k)}^{1/(p-2)}}\right|^2 \le{}& \dfrac{C\|\tilde{\mathscr{T}}-\mathscr{T}\|^2}{\left(\lambda_{\min}^*\right)^2} \cdot\sum_{k\in S}\dfrac{1}{(\tilde{\lambda}_{\pi(k)})^{2/(p-2)}}\\
={}& \dfrac{C\|\tilde{\mathscr{T}}-\mathscr{T}\|^2}{\left(\lambda_{\min}^*\right)^2} \cdot\dfrac{1}{\tilde{\lambda}^{2/(p-2)}}.
\end{align*}
We have thus shown that
\begin{equation}\label{eq:scalediff}
\left\|\dfrac{1}{\lambda^{1/(p-2)}}\mathbf{a}-\dfrac{1}{\tilde{\lambda}^{1/(p-2)}}\tilde{\mathbf{a}}\right\| \le \dfrac{C\|\tilde{\mathscr{T}}-\mathscr{T}\|}{\lambda_{\min}^*}\cdot \dfrac{1}{\tilde{\lambda}^{1/(p-2)}}.
\end{equation}
Let us assume without loss of generality that $\tilde{\lambda}<\lambda$. By an analogous calculation, we can also show that
\begin{equation}\label{eq:lamb_diff}
\left|\dfrac{1}{{\lambda}^{1/(p-2)}}-\dfrac{1}{\tilde{\lambda}^{1/(p-2)}}\right|\le \dfrac{C\|\tilde{\mathscr{T}}-\mathscr{T}\|}{\lambda_{\min}^*}\cdot \dfrac{1}{\tilde{\lambda}^{1/(p-2)}},
\end{equation}
which, when combined with \eqref{eq:scalediff} and the fact that $\|\mathbf{a}\|=\|\tilde{\mathbf{a}}\|=1$, implies
\begin{align*}
\|\mathbf{a}-\tilde{\mathbf{a}}\| \le{}&\, \tilde{\lambda}^{1/(p-2)}\left|\dfrac{1}{{\lambda}^{1/(p-2)}}-\dfrac{1}{\tilde{\lambda}^{1/(p-2)}}\right|\|\mathbf{a}\|+\tilde{\lambda}^{1/(p-2)} \left\|\dfrac{1}{\lambda^{1/(p-2)}}\mathbf{a}-\dfrac{1}{\tilde{\lambda}^{1/(p-2)}}\tilde{\mathbf{a}}\right\|\\
\le{}& \, \dfrac{C\|\tilde{\mathscr{T}}-\mathscr{T}\|}{\lambda_{\min}^*}.
\end{align*}
Plugging this bound back into \eqref{eq:singtri}, along with \eqref{eq:matpart}, finishes the proof for the singular vectors. For the singular values $\lambda$ and $\tilde{\lambda}$, note that
\begin{align*}
|\lambda-\tilde{\lambda}| ={}&\left|\lambda^{\tfrac{1}{p-2}}-\tilde{\lambda}^{\tfrac{1}{p-2}}\right| \left(\lambda^{\tfrac{p-3}{p-2}}+\lambda^{\tfrac{p-4}{p-2}} \tilde{\lambda}^{\tfrac{1}{p-2}} +\dots+ \tilde{\lambda}^{\tfrac{p-3}{p-2}} \right)\\
\le{} & p\lambda^{\tfrac{p-3}{p-2}}\cdot\dfrac{C\|\tilde{\mathscr{T}}-\mathscr{T}\|}{\lambda^*_{\min}}\cdot\lambda^{\tfrac{1}{p-2}}\\
={} & p\lambda \cdot\dfrac{C\|\tilde{\mathscr{T}}-\mathscr{T}\|}{\lambda^*_{\min}},
\end{align*}
where we use $\lambda>\tilde{\lambda}$ and \eqref{eq:lamb_diff} in the inequality. Note that
\begin{align*}
\dfrac{1}{\lambda^{2/(p-2)}}=\sum_{k\in S}\dfrac{1}{\lambda_k^{2/(p-2)}}\ge\dfrac{1}{\left(\lambda_{\min}^*\right)^{2/(p-2)}},
\end{align*}
implying that $\lambda\le\lambda_{\min}^*$. Thus $|\lambda-\tilde{\lambda}|\le C\|\tilde{\mathscr{T}}-\mathscr{T}\|$.

Now consider the case where $\lambda=0$. Any set of unit vectors $\mathbf{w}^{(1)},\dots,\mathbf{w}^{(p)}$ is a singular vector tuple of
$$
\sum_{k=1}^{d_{\min}}\lambda_k\mathbf{e}_k^{(1)}\otimes \dots\otimes \mathbf{e}_k^{(p)}
$$
corresponding to the singular value $\lambda=0$ if and only if $\langle\mathbf{w}^{(q)},\mathbf{e}_k^{(q)}\rangle =0$ for at least two values of $q\in\{1,\dots ,p\}$. Such singular vectors of $\mathscr{T}$ and $\tilde{\mathscr{T}}$ can be written as $\mathbf{v}^{(q)}=\mathbf{V}_{d_{\min}}^{(q)}\mathbf{w}^{(q)}$ and $\tilde{\mathbf{v}}^{(q)}=\tilde{\mathbf{V}}_{d_{\min}}^{(q)}\mathbf{w}^{(q)}$. The conclusion then follows directly from Lemma \ref{le:specnorm}. If $\langle \mathbf{w}^{(q)},\mathbf{e}_k^{(q)} \rangle\neq 0$ for some $k$ such that $\lambda_k=0$, we use the vacuous bound
$$
\|\mathbf{v}^{(q)}-\tilde{\mathbf{v}}^{(q)}\|\le 2\le \dfrac{C\|\tilde{\mathscr{T}}-\mathscr{T}\|}{\lambda_k},
$$
with the convention $1/0=+\infty$.
\end{proof}

\bigskip

\begin{proof}[Proof of Theorem \ref{th:incoherent}]
We show the results using $\mathscr{X}$ and its odeco approximation $\mathscr{T}$; the analogous results for $\tilde{\mathscr{X}}$ and $\tilde{\mathscr{T}}$ follow similarly. Recall that the polar factor of $\mathbf{A}^{(q)}$ is the orthogonal matrix
$$\mathbf{U}^{(q)}=\mathbf{A}^{(q)}[(\mathbf{A}^{(q)})^\top \mathbf{A}^{(q)}]^{-1/2}.$$
It is not hard to see that
\begin{equation} \label{eq:polarnorm}
\|\mathbf{A}^{(q)}-\mathbf{U}^{(q)}\|= \|[(\mathbf{A}^{(q)})^\top \mathbf{A}^{(q)}]^{1/2}-I\|\le \max_{1\le i \le d}|\lambda_i((\mathbf{A}^{(q)})^\top \mathbf{A}^{(q)})^{1/2}-1|\le\delta.
\end{equation}
We can then consider approximating $\mathscr{X}$ by
$$
\mathscr{T}=\sum_{i=1}^d \eta_i \mathbf{u}_i^{(1)}\otimes\cdots\otimes\mathbf{u}_i^{(p)}.
$$
Recall that
$$
\|\mathscr{X}-\mathscr{T}\|=\sup_{\mathbf{x}^{(q)}\in \mathcal{S}^{d-1}:\, 1\le q\le p}\langle \mathscr{X}-\mathscr{T},\mathbf{x}^{(1)}\otimes\cdots\otimes\mathbf{x}^{(p)}\rangle.
$$
For any fixed $\mathbf{x}^{(q)}$s,
\begin{eqnarray*}
&&\langle \mathscr{X}-\mathscr{T},\mathbf{x}^{(1)}\otimes\cdots\otimes\mathbf{x}^{(p)}\rangle\\
&=&\sum_{i=1}^d \eta_i\left(\prod_{q=1}^p\langle \mathbf{x}^{(q)},\mathbf{a}_i^{(q)}\rangle-\prod_{q=1}^p\langle \mathbf{x}^{(q)},\mathbf{u}_i^{(q)}\rangle\right)\\
&=&\sum_{i=1}^d \eta_i\left(\langle \mathbf{x}^{(1)},\mathbf{a}_i^{(1)}\rangle-\langle \mathbf{x}^{(1)},\mathbf{u}_i^{(1)}\rangle\right)\prod_{q=2}^p\langle \mathbf{x}^{(q)},\mathbf{a}_i^{(q)}\rangle\\
&&\hskip 20pt +\sum_{i=1}^d \eta_i\langle \mathbf{x}^{(1)},\mathbf{u}_i^{(1)}\rangle\left(\langle \mathbf{x}^{(2)},\mathbf{a}_i^{(2)}\rangle-\langle \mathbf{x}^{(2)},\mathbf{u}_i^{(2)}\rangle\right)\prod_{q=3}^p\langle \mathbf{x}^{(q)},\mathbf{a}_i^{(q)}\rangle\\
&&\hskip 20pt +\cdots+\\
&&\hskip 20pt +\sum_{i=1}^d \eta_i\prod_{q=1}^{p-1}\langle \mathbf{x}^{(q)},\mathbf{u}_i^{(q)}\rangle\left(\langle \mathbf{x}^{(p)},\mathbf{a}_i^{(p)}\rangle-\langle \mathbf{x}^{(p)},\mathbf{u}_i^{(p)}\rangle\right).
\end{eqnarray*}
Each term on the right-hand side can be bounded via the Cauchy--Schwarz inequality:
\begin{eqnarray*}
&&\sum_{i=1}^d \eta_i\prod_{q=1}^{k-1}\langle \mathbf{x}^{(q)},\mathbf{u}_i^{(q)}\rangle\left(\langle \mathbf{x}^{(k)},\mathbf{a}_i^{(k)}\rangle-\langle \mathbf{x}^{(k)},\mathbf{u}_i^{(k)}\rangle\right)\prod_{q=k+1}^p\langle \mathbf{x}^{(q)},\mathbf{a}_i^{(q)}\rangle\\
&\le&\|\mathbf{A}^{(k)}-\mathbf{U}^{(k)}\|\left[\sum_{i=1}^d \left(\eta_i^2\prod_{q=1}^{k-1}\langle \mathbf{x}^{(q)},\mathbf{u}_i^{(q)}\rangle^2\prod_{q=k+1}^p\langle \mathbf{x}^{(q)},\mathbf{a}_i^{(q)}\rangle^2\right)\right]^{1/2}\\
&\le&\eta_1\|\mathbf{A}^{(k)}-\mathbf{U}^{(k)}\|\left[\sum_{i=1}^d \left(\prod_{q=1}^{k-1}\langle \mathbf{x}^{(q)},\mathbf{u}_i^{(q)}\rangle^2\prod_{q=k+1}^p\langle \mathbf{x}^{(q)},\mathbf{a}_i^{(q)}\rangle^2\right)\right]^{1/2}\\
&\le&\delta\eta_1 \left[\sum_{i=1}^d \left(\prod_{q=1}^{k-1}\langle \mathbf{x}^{(q)},\mathbf{u}_i^{(q)}\rangle^2\prod_{q=k+1}^p\langle \mathbf{x}^{(q)},\mathbf{a}_i^{(q)}\rangle^2\right)\right]^{1/2}.
\end{eqnarray*}
Note that
$$
|\langle \mathbf{x}^{(q)},\mathbf{u}_i^{(q)}\rangle|, \qquad |\langle \mathbf{x}^{(q)},\mathbf{a}_i^{(q)}\rangle|\le 1,
$$
and
$$
\sum_{i=1}^d \langle \mathbf{x}^{(q)},\mathbf{u}_i^{(q)}\rangle^2=1,\qquad \sum_{i=1}^d \langle \mathbf{x}^{(q)},\mathbf{a}_i^{(q)}\rangle^2\le 1+\delta.
$$
We immediately get
$$
\sum_{i=1}^d \eta_i\left(\langle \mathbf{x}^{(1)},\mathbf{a}_i^{(1)}\rangle-\langle \mathbf{x}^{(1)},\mathbf{u}_i^{(1)}\rangle\right)\prod_{q=2}^p\langle \mathbf{x}^{(q)},\mathbf{a}_i^{(q)}\rangle\le \delta(1+\delta)\eta_1,
$$
and, for $k\ge 2$,
$$
\sum_{i=1}^d \eta_i\prod_{q=1}^{k-1}\langle \mathbf{x}^{(q)},\mathbf{u}_i^{(q)}\rangle\left(\langle \mathbf{x}^{(k)},\mathbf{a}_i^{(k)}\rangle-\langle \mathbf{x}^{(k)},\mathbf{u}_i^{(k)}\rangle\right)\prod_{q=k+1}^p\langle \mathbf{x}^{(q)},\,\mathbf{a}_i^{(q)}\rangle \le\delta\eta_1.
$$
Hence
$$
\|\mathscr{X}-\mathscr{T}\|\le (p+1)\delta\eta_1.
$$
Note also that for any $1\le q \le p$, using \eqref{eq:polarnorm},
$$\begin{aligned}
\max_{1\le j \le d}\sin\angle (\mathbf{a}_j^{(q)},\mathbf{u}_{j}^{(q)})&\le \sqrt{1-\min_{j}\langle \mathbf{a}_j^{(q)},\,\mathbf{u}_j^{(q)}\rangle^2 } \\
&\le\sqrt{1-(1-\|\mathbf{A}^{(q)}-\mathbf{U}^{(q)}\|^2/2)}\le \delta/\sqrt{2}.
\end{aligned}$$
The desired result then follows from Theorem \ref{th:ortho-perturb}. As mentioned before, the proof for $\tilde{\mathscr{X}}$ and $\tilde{\mathscr{T}}$ follows by identical steps.
\end{proof}

\vskip 20pt

\begin{proof}[Proof of Corollary \ref{co:incoherent}]
It is clear from Theorem \ref{th:incoherent} that there exist odeco approximations $\mathscr{T}$ and $\tilde{\mathscr{T}}$ of $\mathscr{X}$ and $\tilde{\mathscr{X}}$, respectively, such that
$$\begin{aligned}
\|\mathscr{T}-\tilde{\mathscr{T}}\|\le& \|\mathscr{T}-\mathscr{X}\|+\|\mathscr{X}-\tilde{\mathscr{X}}\|+\|\tilde{\mathscr{X}}-\tilde{\mathscr{T}}\| \\
\le & (p+1)\delta(\eta_1+\tilde{\eta}_1)+\|\mathscr{X}-\tilde{\mathscr{X}}\|.
\end{aligned}$$
By Theorem \ref{th:odeco-weyl}, there is a permutation $\pi:[d_{\min}]\to[d_{\min}]$ and a constant $C>0$ such that
$$|\eta_k-\tilde{\eta}_{\pi(k)}|\le C((p+1)\delta(\eta_1+\tilde{\eta}_1)+\|\mathscr{X}-\tilde{\mathscr{X}}\|)$$
and
$$
\max_{1\le q\le p}\sin\angle\left(\mathbf{u}_k^{(q)},\tilde{\mathbf{u}}_{\pi(k)}^{(q)}\right) \le C((p+1)\delta(\eta_1+\tilde{\eta}_1)+\|\mathscr{X}-\tilde{\mathscr{X}}\|+\delta)/\eta_k.
$$
Finally, we use the second part of Theorem \ref{th:incoherent} and the triangle inequality to derive that, with the same permutation $\pi$, we also have
$$
\max_{1\le q\le p}\sin\angle\left(\mathbf{a}_k^{(q)},\tilde{\mathbf{a}}_{\pi(k)}^{(q)}\right) \le C((p+1)\delta(\eta_1+\tilde{\eta}_1)+\|\mathscr{X}-\tilde{\mathscr{X}}\|+\delta)/\eta_k.
$$
\end{proof}

\vskip 30pt

\begin{proof}[Proof of Theorem \ref{th:tpcalower}]
First note that a lower bound for a special case is also a lower bound for the more general case. Therefore,
\begin{eqnarray*}
&&\inf_{\tilde{\mathbf{u}}_k^{(1)},\ldots,\tilde{\mathbf{u}}_k^{(p)}}\sup_{\mathbf{u}_k^{(q)}\in \mathcal{S}^{d_q-1}:\, 1\le q\le p}\mathbb{E}\max_{1\le q\le p} \sin\angle(\mathbf{u}_k^{(q)},\tilde{\mathbf{u}}_k^{(q)})\\
&\ge& \inf_{\tilde{\mathbf{u}}_k^{(1)},\ldots,\tilde{\mathbf{u}}_k^{(p)}}\sup_{\substack{\mathbf{u}_k^{(q)}\in \mathcal{S}^{d_q-1}:\, 1\le q\le p\\ \lambda_{k'}=0,\ \forall k'\neq k}}\mathbb{E}\max_{1\le q\le p} \sin\angle(\mathbf{u}_k^{(q)},\tilde{\mathbf{u}}_k^{(q)}).
\end{eqnarray*}
The special case here is simply the rank-one case, where $\mathscr{T}$ has only one nonzero singular value $\lambda_k$.
It was shown by \cite{zhang2018tensor} that for this case,
$$
\inf_{\tilde{\mathbf{u}}_k^{(1)},\ldots,\tilde{\mathbf{u}}_k^{(p)}}\sup_{\substack{\mathbf{u}_k^{(q)}\in \mathcal{S}^{d_q-1}:\, 1\le q\le p\\ \lambda_{k'}=0,\ \forall k'\neq k}}\mathbb{E}\max_{1\le q\le p} \sin\angle(\mathbf{u}_k^{(q)},\tilde{\mathbf{u}}_k^{(q)})\ge c\cdot\frac{\sqrt{d_1+\cdots+d_p}}{\lambda_k},
$$
and thus \eqref{eq:svdbd4} follows. The lower bound \eqref{eq:svdbd3} for estimating the singular value follows by the same argument.
\end{proof}

\bibliographystyle{plainnat}
\section{Introduction} We say that $\pi=\pi_{1}\pi_{2}\cdots\pi_{n}$ is a \textit{permutation} of \textit{length} $n$ (or an $n$-\textit{permutation}) if it is a sequence of $n$ distinct letters---not necessarily from 1 to $n$---in $\mathbb{P}$, the set of positive integers. For example, $\pi=47381$ is a permutation of length 5. Let $\left|\pi\right|$ denote the length of a permutation $\pi$ and let $\mathfrak{P}_{n}$ denote the set of all permutations of length $n$.\footnote{In Section \ref{s-section2}, we will in a few instances consider permutations with a letter 0. We note that, in these cases, every property of permutations that is used still holds when 0 is allowed to be a letter. } A \textit{permutation statistic} (or \textit{statistic}) $\st$ is a function defined on permutations such that $\st(\pi)=\st(\sigma)$ whenever $\pi$ and $\sigma$ are permutations with the same relative order.\footnote{Define the \textit{standardization} of an $n$-permutation $\pi$ to be the permutation of $[n]$ obtained by replacing the $i$th smallest letter of $\pi$ with $i$ for $i$ from 1 to $n$. Then two permutations are said to \textit{have the same relative order} if they have the same standardization.} Three classical examples of permutation statistics are the descent set $\Des$, the descent number $\des$, and the major index $\maj$. We say that $i\in[n-1]$ is a \textit{descent} of $\pi\in\mathfrak{P}_{n}$ if $\pi_{i}>\pi_{i+1}$. Then the \textit{descent set} \[ \Des(\pi)\coloneqq\{\, i\in[n-1]\mid\pi_{i}>\pi_{i+1}\,\} \] of $\pi$ is the set of its descents, the \textit{descent number} \[ \des(\pi)\coloneqq\left|\Des(\pi)\right| \] its number of descents, and the \textit{major index} \[ \maj(\pi)\coloneqq\sum_{k\in\Des(\pi)}k \] the sum of its descents. Let $\pi\in\mathfrak{P}_{m}$ and $\sigma\in\mathfrak{P}_{n}$ be \textit{disjoint} permutations, that is, permutations with no letters in common. We say that $\tau\in\mathfrak{P}_{m+n}$ is a \textit{shuffle} of $\pi$ and $\sigma$ if both $\pi$ and $\sigma$ are subsequences of $\tau$. The set of shuffles of $\pi$ and $\sigma$ is denoted $S(\pi,\sigma)$. For example, $S(53,16)=\{5316,5136,5163,1653,1536,1563\}$. It is easy to see that the number of permutations in $S(\pi,\sigma)$ is ${m+n \choose m}$. Richard Stanley's theory of $P$-partitions \cite{Stanley1972} implies that the descent set statistic has a remarkable property related to shuffles: for any disjoint permutations $\pi$ and $\sigma$, the multiset $\{\,\Des(\tau)\mid\tau\in S(\pi,\sigma)\,\}$---which encodes the distribution of the descent set over shuffles of $\pi$ and $\sigma$---depends only on $\Des(\pi)$, $\Des(\sigma)$, and the lengths of $\pi$ and $\sigma$ \cite[Exercise 3.161]{Stanley2011}. That is, if $\pi$ and $\pi^{\prime}$ are permutations of the same length with the same descent set, and similarly with $\sigma$ and $\sigma^{\prime}$, then the number of permutations in $S(\pi,\sigma)$ with any given descent set is the same as the number of permutations in $S(\pi^{\prime},\sigma^{\prime})$ with that descent set. Stanley also proved a similar but more refined result for the joint statistic $(\des,\maj)$, which is a special case of \cite[Proposition 12.6 (ii)]{Stanley1972}. Bijective proofs were later found by Goulden \cite{Goulden1985} and by Stadler \cite{Stadler1999}; they referred to this result as ``Stanley's shuffling theorem''. 
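These definitions are easy to implement directly. The following minimal Python sketch (a brute-force illustration of ours, practical only for short permutations; the function names are not standard notation) computes $\Des$, $\des$, and $\maj$, enumerates $S(\pi,\sigma)$, and checks on the example above that the distribution of the descent set over shuffles depends only on the descent sets and lengths of the two permutations.
\begin{verbatim}
from itertools import combinations
from collections import Counter

def des_set(pi):
    # Des(pi): positions i (1-indexed) with pi_i > pi_{i+1}
    return frozenset(i for i in range(1, len(pi)) if pi[i-1] > pi[i])

def des(pi):
    return len(des_set(pi))

def maj(pi):
    return sum(des_set(pi))

def shuffles(pi, sigma):
    # choose which positions of the merged word hold the letters of pi
    m, n = len(pi), len(sigma)
    for pos in combinations(range(m + n), m):
        tau, rest = [None] * (m + n), iter(sigma)
        for p, letter in zip(pos, pi):
            tau[p] = letter
        yield tuple(t if t is not None else next(rest) for t in tau)

print(sorted(shuffles((5, 3), (1, 6))))  # the six shuffles of 53 and 16

# 53 and 21 have the same descent set, as do 16 and 35, so the two
# descent-set distributions agree, as shuffle-compatibility predicts.
d1 = Counter(des_set(t) for t in shuffles((5, 3), (1, 6)))
d2 = Counter(des_set(t) for t in shuffles((2, 1), (3, 5)))
assert d1 == d2
\end{verbatim}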
Recall that the $q$-binomial coefficient ${n \choose k}_{q}$ is defined by \[ \qbinom{n}{k}\coloneqq\frac{[n]_{q}!}{[k]_{q}!\,[n-k]_{q}!} \] where $[n]_{q}!\coloneqq(1+q)(1+q+q^{2})\cdots(1+q+\cdots+q^{n-1})$. \begin{thm}[Stanley's shuffling theorem] Let $\pi\in\mathfrak{P}_{m}$ and $\sigma\in\mathfrak{P}_{n}$ be disjoint permutations, and let $S_{k}(\pi,\sigma)$ be the set of shuffles of $\pi$ and $\sigma$ with exactly $k$ descents. Then \begin{multline} \qquad\sum_{\tau\in S_{k}(\pi,\sigma)}q^{\maj(\tau)}=q^{\maj(\pi)+\maj(\sigma)+(k-\des(\pi))(k-\des(\sigma))}\\ \times\qbinom{m-\des(\pi)+\des(\sigma)}{k-\des(\pi)}\qbinom{n-\des(\sigma)+\des(\pi)}{ k-\des(\sigma)}. \qquad \label{e-qshuffle} \end{multline} \end{thm} A variant of the theorem gives the formula \begin{equation} \sum_{\tau\in S(\pi,\sigma)}q^{\maj(\tau)}=q^{\maj(\pi)+\maj(\sigma)}\qbinom{m+n}{m};\label{e-maj} \end{equation} see \cite[p.~43]{Stanley1972}. These formulas show that the statistics $(\des,\maj)$ and $\maj$ have the same property as $\Des$, and setting $q=1$ in \eqref{e-qshuffle} shows that $\des$ has this property as well. We call this property ``shuffle-compatibility''. More precisely, we say that a permutation statistic $\st$ is \textit{shuffle-compatible} if for any disjoint permutations $\pi$ and $\sigma$, the multiset $\{\,\st(\tau)\mid\tau\in S(\pi,\sigma)\,\}$ depends only on $\st(\pi)$, $\st(\sigma)$, $\left|\pi\right|$, and $\left|\sigma\right|$. Hence $\Des$, $\des$, $\maj$, and $(\des,\maj)$ are examples of shuffle-compatible permutation statistics. This paper serves as the first in-depth investigation of shuffle-compatibility, and we focus on the shuffle-compatibility of descent statistics, which are statistics that depend only on the descent set and length of a permutation. All of the statistics mentioned so far are descent statistics. In Section \ref{s-section2}, we introduce some aspects of the general theory of descents in permutations and define some other descent statistics that we will be studying in this paper, including the peak set $\Pk$, the peak number $\pk$, the left peak set $\Lpk$, the left peak number $\lpk$, and the number of up-down runs $\udr$. There, we also give a bijective proof of the shuffle-compatibility of the descent set. In Section \ref{s-section3}, we define the ``shuffle algebra'' of a shuffle-compatible permutation statistic $\st$, which has a natural basis whose structure constants encode the distribution of $\st$ over shuffles of permutations (or more precisely, equivalence classes of permutations induced by the statistic $\st$). Our first result is a characterization of the major index shuffle algebra using the variant (\ref{e-maj}) of Stanley's shuffling theorem. We then prove several basic results that relate the shuffle algebras of permutation statistics that are related in various ways. Notably, if two statistics are related by a basic symmetry---reversion, complementation, or reverse complementation---and one of them is known to be shuffle-compatible, then both statistics are shuffle-compatible and have isomorphic shuffle algebras. In Section \ref{s-section4}, we introduce the algebra of quasisymmetric functions $\QSym$ (originally studied in \cite{Gessel1984}) and observe that it is isomorphic to the descent set shuffle algebra. We establish a necessary and sufficient condition for the shuffle-compatibility of a descent statistic, which shows that the shuffle algebra of any shuffle-compatible descent statistic is isomorphic to a quotient algebra of $\QSym$. 
Using this condition, we give explicit descriptions for the shuffle algebras of $\des$ and $(\des,\maj)$. We then observe that the peak set shuffle algebra is isomorphic to Stembridge's ``algebra of peaks'' arising from his study of enriched $P$-partitions \cite{Stembridge1997}---thus showing that the peak set $\Pk$ is shuffle-compatible---and use Stembridge's peak quasisymmetric functions to characterize the peak number shuffle algebra, thus showing that the peak number $\pk$ is shuffle-compatible. In the same vein, Petersen's work \cite{Petersen2006,Petersen2007} on left enriched $P$-partitions implies that the left peak set $\Lpk$ and left peak number $\lpk$ are shuffle-compatible. In Section \ref{s-section5}, we introduce the bialgebra of noncommutative symmetric functions $\mathbf{Sym}$ (originally studied in \cite{ncsf1}), whose coalgebra structure is dual to the algebra structure of $\QSym$. By exploiting this duality, we obtain a dual version of our shuffle-compatibility condition, which allows us to prove shuffle-compatibility of a descent statistic by constructing a suitable subcoalgebra of $\mathbf{Sym}$. We use this approach to describe the shuffle algebras of $(\pk,\des)$, $(\lpk,\des)$, $\udr$, and $(\udr,\des)$, thus showing that these statistics are all shuffle-compatible. Finally, in Section \ref{s-section6}, we provide proofs for an alternate characterization of the $\pk$ and $(\pk,\des)$ shuffle algebras, list some non-shuffle-compatible permutation statistics, and discuss some open questions and conjectures on the topic of shuffle-compatibility. The appendix of this paper contains two tables. Table 1 lists all permutation statistics that we know to be shuffle-compatible, and Table 2 lists various equivalences (as defined in Section \ref{s-section3}) among the statistics that are studied in this paper. We note that some permutation statistics, such as the number of inversions, satisfy a weak form of shuffle-compatibility: for disjoint permutations $\pi$ and $\sigma$, if every letter of $\pi$ is less than every letter of $\sigma$, then the multiset $\{\,\st(\tau)\mid\tau\in S(\pi,\sigma)\,\}$ depends only on $\st(\pi)$, $\st(\sigma)$, $\left|\pi\right|$, and $\left|\sigma\right|$. Permutation statistics with this property are associated with quotients of the Malvenuto--Reutenauer algebra (also called the algebra of free quasisymmetric functions). Some of these statistics have been studied by Vong \cite{Vong2013}, but we do not consider them here. Also, there is another class of algebras that are related to permutations and their descent sets, based on ordinary multiplication of permutations rather than shuffles. If $\st$ is a function defined on the $n$th symmetric group $\mathfrak{S}_n$, we may consider the elements \begin{equation*} K_\alpha \coloneqq \sum_{\substack{\pi\in \mathfrak{S}_n\\ \st(\pi) = \alpha}}\pi \end{equation*} in the group algebra of $\mathfrak{S}_n$, where $\alpha$ ranges over the image of $\st$. Louis Solomon \cite{Solomon1976} proved that if $\st$ is the descent set, then the $K_\alpha$ span a subalgebra of the group algebra of $\mathfrak{S}_n$, called the \emph{descent algebra} of $\mathfrak{S}_n$. Several other descent statistics give subalgebras of the descent algebra, including the descent number \cite{Loday1989}; the peak set \cite{Nyman2003, Schocker2005}; the left peak set, peak number, and left peak number \cite{Aguiar2004, Petersen2006, Petersen2007}; and the number of biruns and up-down runs \cite{Doyle2008, Josuat-Verges2016}. 
These descent statistics have the property that given values $\alpha$ and $\beta$ of $\st$, and $\tau\in \mathfrak{S}_n$, the number of pairs $(\pi,\sigma)$ of permutations in $\mathfrak{S}_n$ with $\st(\pi)=\alpha$, $\st(\sigma)=\beta$, and $\pi\sigma=\tau$ depends only on $\st(\tau)$. In other words, these statistics are ``compatible'' under the ordinary product of permutations, and our work is an analogue of Solomon's descent theory for statistics compatible under the shuffle product. Although there is a significant overlap between shuffle-compatible permutation statistics and statistics corresponding to subalgebras of the descent algebra, neither class is contained in the other, as the number of biruns is not shuffle-compatible and the pair $(\pk,\des)$ does not give a subalgebra of the descent algebra. The descent algebra and its subalgebras may also be studied through noncommutative symmetric functions (using the internal product of $\mathbf{Sym}$ \cite[Section 5]{ncsf1}) or quasisymmetric functions (using the internal coproduct of $\QSym$ \cite{Gessel1984}). \section{Permutations and descents} \label{s-section2} \subsection{Increasing runs and descent compositions} We begin with a brief exposition on some basic material in permutation enumeration relating to descents. Every permutation can be uniquely decomposed into a sequence of maximal increasing consecutive subsequences, which we call \textit{increasing runs} (or simply \textit{runs}). For example, the increasing runs of $21479536$ are $2$, $1479$, $5$, and $36$. Equivalently, an increasing run of $\pi$ is a maximal consecutive subsequence containing no descents. Let us call an increasing run \textit{short} if it has length 1, and \textit{long} if it has length at least 2. The \textit{initial run} of a permutation refers to its first increasing run, whereas the \textit{final run} refers to its last increasing run. For example, the initial run of $21479536$ is $2$ and its final run is $36$. (If a permutation has only one increasing run, then it is considered to be both an initial run and a final run.) The number of increasing runs of a nonempty permutation is one more than its number of descents; in fact, the lengths of the increasing runs determine the descents, and vice versa. Given a subset $A\subseteq[n-1]$ with elements $a_{1}<a_{2}<\cdots<a_{j}$, let $\Comp(A)$ be the composition $(a_{1},a_{2}-a_{1},\dots,a_{j}-a_{j-1},n-a_{j})$ of $n$, and given a composition $L=(L_{1},L_{2},\dots,L_{k})$ of $n$, let $\Des(L)\coloneqq\{L_{1},L_{1}+L_{2},\dots,L_{1}+\cdots+L_{k-1}\}$ be the corresponding subset of $[n-1]$. Then, $\Comp$ and $\Des$ are inverse bijections. If $\pi$ is an $n$-permutation with descent set $A\subseteq[n-1]$, then we call $\Comp(A)$ the \textit{descent composition} of $\pi$, which we also denote by $\Comp(\pi)$. By convention, let us say that the empty permutation (i.e., permutation of length 0) has descent composition $\varnothing$. Note that the descent composition of $\pi$ gives the lengths of the increasing runs of $\pi$. Conversely, if $\pi$ has descent composition $L$, then its descent set $\Des(\pi)$ is $\Des(L)$. A permutation statistic $\st$ is called a \textit{descent statistic} if it depends only on the descent composition, that is, if $\Comp(\pi)=\Comp(\sigma)$ implies $\st(\pi)=\st(\sigma)$ for any two permutations $\pi$ and $\sigma$. Equivalently, $\st$ is a descent statistic if it depends only on the descent set and length of a permutation. 
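To make the maps $\Comp$ and $\Des$ concrete, here is a short companion sketch, reusing \texttt{des\_set} from the sketch in the introduction; the function names below are our illustrative choices, not notation from the literature.
\begin{verbatim}
def comp_of_set(A, n):
    # Comp(A) for A = {a_1 < ... < a_j} contained in [n-1]
    a = sorted(A)
    return tuple(b - c for b, c in zip(a + [n], [0] + a))

def des_of_comp(L):
    # Des(L): partial sums of all parts of L except the last
    partial, out = 0, set()
    for part in L[:-1]:
        partial += part
        out.add(partial)
    return out

def comp_of_perm(pi):
    # descent composition of pi, via des_set from the earlier sketch
    return comp_of_set(des_set(pi), len(pi))

# 21479536 has increasing runs 2 / 1479 / 5 / 36, hence descent
# composition (1, 4, 1, 2) and descent set {1, 5, 6}.
assert comp_of_perm((2, 1, 4, 7, 9, 5, 3, 6)) == (1, 4, 1, 2)
assert des_of_comp((1, 4, 1, 2)) == {1, 5, 6}
\end{verbatim}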
Since two permutations with the same descent composition must have the same value of $\st$ if $\st$ is a descent statistic, we shall use the notation $\st(L)$ to indicate the value of a descent statistic $\st$ on any permutation with descent composition $L$. We define several statistics based on increasing runs: the long run $\lr$, long initial run $\lir$, long final run $\lfr$, short initial run $\sir$, and short final run $\sfr$ statistics. Let $\lr(\pi)$ be the number of long runs of $\pi$, let $\lir(\pi)$ be 1 if the initial run of $\pi$ is long and 0 otherwise, and let $\lfr(\pi)$ be 1 if the final run of $\pi$ is long and 0 otherwise. Also, for nonempty $\pi$, let $\sir(\pi)\coloneqq1-\lir(\pi)$ and $\sfr(\pi)\coloneqq1-\lfr(\pi)$. By convention, if $\pi$ is empty, then all of these statistics are equal to zero. We will use these run statistics to give an alternative way of characterizing some of the descent statistics introduced in the next section. \subsection{Descent statistics} In the introduction to this paper, we saw four examples of descent statistics: the descent set $\Des$, descent number $\des$, major index $\maj$, and the joint statistic $(\des,\maj)$. The following are additional descent statistics that we will consider in our investigation of shuffle-compatibility: \begin{itemize} \item The comajor index $\comaj$. The \textit{comajor index} $\comaj(\pi)$ of $\pi\in\mathfrak{P}_{n}$, a variant of the major index, is defined to be \[ \comaj(\pi)\coloneqq\sum_{k\in\Des(\pi)}(n-k). \] \item The peak set $\Pk$ and peak number $\pk$. We say that $i$ (where $2\leq i\leq n-1$) is a \textit{peak} of $\pi\in\mathfrak{P}_{n}$ if $\pi_{i-1}<\pi_{i}>\pi_{i+1}$. The \textit{peak set} $\Pk(\pi)$ of $\pi$ is defined to be \[ \Pk(\pi)\coloneqq\{\,2\leq i\leq n-1\mid\pi_{i-1}<\pi_{i}>\pi_{i+1}\,\} \] and the \textit{peak number} $\pk(\pi)$ of $\pi$ to be \[ \pk(\pi)\coloneqq\left|\Pk(\pi)\right|. \] \item The valley set $\Val$ and valley number $\val$. We say that $i$ (where $2\leq i\leq n-1$) is a \textit{valley} of $\pi\in\mathfrak{P}_{n}$ if $\pi_{i-1}>\pi_{i}<\pi_{i+1}$. Then $\Val(\pi)$ and $\val(\pi)$ are defined in the analogous way. \item The left peak set $\Lpk$ and left peak number $\lpk$. We say that $i\in[n-1]$ is a \textit{left peak} of $\pi\in\mathfrak{P}_{n}$ if $i$ is a peak of $\pi$ or if $i=1$ and is a descent of $\pi$. Thus, left peaks of $\pi$ are peaks of $0\pi$ shifted by 1. The \textit{left peak set} $\Lpk(\pi)$ is the set of left peaks of $\pi$ and the \textit{left peak number} $\lpk(\pi)$ is the number of left peaks of $\pi$. \item The right peak set $\Rpk$ and right peak number $\rpk$. These are defined in the same way as the corresponding left peak statistics, except that right peaks of $\pi$ are peaks of $\pi 0$. \item The exterior peak set $\Epk$ and exterior peak number $\epk$. The \textit{exterior peak set} $\Epk(\pi)$ of $\pi$ is defined by \[ \Epk(\pi)\coloneqq\begin{cases} \Lpk(\pi)\cup\Rpk(\pi), & \mathrm{if}\text{ }\left|\pi\right|\neq1\\ \{1\}, & \mathrm{if}\text{ }\left|\pi\right|=1 \end{cases} \] and the \textit{exterior peak number} $\epk(\pi)$ of $\pi$ is defined by \[ \epk(\pi)\coloneqq\left|\Epk(\pi)\right|. \] \item The number of biruns $\br$ and the number of up-down runs $\udr$. A \textit{birun} of a permutation is a maximal monotone consecutive subsequence, and the number of biruns of $\pi$ is denoted $\br(\pi)$. 
An \textit{up-down run} of a permutation $\pi$ is either a birun or $\pi_{1}$ when $\pi_{1}>\pi_{2}$, and the number of up-down runs of $\pi$ is denoted $\udr(\pi)$. Thus the up-down runs of $\pi$ are essentially the biruns of $0\pi$. For example, the biruns of $\pi=871542$ are $871$, $15$, and $542$, and the up-down runs of $\pi$ are these biruns along with 8, so $\br(\pi)=3$ and $\udr(\pi)=4$. \item Ordered tuples of descent statistics, such as $(\pk,\des)$, $(\lpk,\des)$, and so on. \end{itemize} Before continuing, we give two lemmas that will help us understand some of the above statistics. The first lemma characterizes several statistics in terms of the run statistics introduced at the end of the previous section, and the second lemma reveals a close connection between the $\udr$ statistic and the $\lpk$ and $\val$ statistics. \begin{lem} \label{l-pkvalruns} Let $\pi\in\mathfrak{P}_n$ with $n\geq1$. Then \begin{enumerate} \item [\normalfont{(a)}] $\pk(\pi)=\lr(\pi)-\lfr(\pi)$ \item [\normalfont{(b)}] $\val(\pi)=\lr(\pi)-\lir(\pi)$ \item [\normalfont{(c)}] $\lpk(\pi) = \begin{cases} \lr(\pi)+\sir(\pi)-\lfr(\pi), & \mbox{if }n\geq2,\\ 0, & \mbox{otherwise}. \end{cases}$ \item [\normalfont{(d)}] $\rpk(\pi)=\lr(\pi)$ \item [\normalfont{(e)}] $\epk(\pi)=\val(\pi)+1$ \end{enumerate} \end{lem} \begin{proof} Part (a) follows from the fact that every non-final long run ends in a peak, and every peak is at the end of a non-final long run. The same is true for valleys and non-initial long runs, and for right peaks and long runs, thus implying (b) and (d). Next, \begin{equation*} \lpk(\pi) = \begin{cases} \pk(\pi)+\sir(\pi), & \mbox{if }n\geq2,\\ 0, & \mbox{otherwise}, \end{cases} \end{equation*} which together with (a) proves (c). Finally, \begin{align*} \epk(\pi) & =\rpk(\pi) + \sir(\pi)\\ & =\lr(\pi)+1-\lir(\pi)\\ & =\val(\pi)+1 \end{align*} proves (e). \end{proof} \begin{lem} \label{l-udr} Let $\pi\in\mathfrak{P}_{n}$ with $n\geq1$. Then \begin{enumerate} \item [\normalfont{(a)}]$\udr(\pi)=\lpk(\pi)+\val(\pi)+1$ \item [\normalfont{(b)}]$\lpk(\pi)=\left\lfloor \udr(\pi)/2\right\rfloor $ \item [\normalfont{(c)}]$\val(\pi)=\left\lfloor (\udr(\pi)-1)/2\right\rfloor $ \item [\normalfont{(d)}]If $n\geq2$ and the final run of $\pi$ is short, then $\lpk(\pi)=\val(\pi)+1$. Otherwise, $\lpk(\pi)=\val(\pi)$. \end{enumerate} \end{lem} This is Lemma 2.1 of \cite{Zhuang2016a}; a proof can be found there. According to this result, not only do $\lpk$ and $\val$ determine $\udr$, but $\udr$ determines both $\lpk$ and $\val$. In other words, $\udr$ and $(\lpk,\val)$ are equivalent permutation statistics in the sense that will be formally defined in Section 3.1. We note that the definitions and properties of descents, increasing runs, descent compositions, and descent statistics extend naturally to words on any totally ordered alphabet such as $[n]$ or $\mathbb{P}$ if we replace the strict inequality $<$ with the weak inequality $\leq$, which reflects the fact that increasing runs are allowed to be weakly increasing in this setting. For example, $i$ is a peak of the word $w=w_{1}w_{2}\cdots w_{n}$ if $w_{i-1}\leq w_{i}>w_{i+1}$. \subsection{Possible values of some descent statistics} In our study of shuffle-compatibility, it will be useful to determine all possible values that a descent statistic can achieve. It is clear that for $\pi\in\mathfrak{P}_{n}$ and $n\geq1$, we have $0\leq\des(\pi)\leq n-1$ and $\des(\pi)$ can attain any value in this range for some $\pi\in\mathfrak{P}_{n}$. 
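Statements of this kind about descent statistics are easy to confirm exhaustively for small $n$. For instance, the following brute-force sketch implements the run statistics and $\pk$, $\val$, $\lpk$, $\udr$ straight from their definitions and verifies parts of Lemmas \ref{l-pkvalruns} and \ref{l-udr} for all permutations of $[n]$ with $n\le 7$; the same loop can be adapted to tabulate the attainable values studied in this section.
\begin{verbatim}
from itertools import permutations

def run_lengths(pi):
    # lengths of the maximal increasing runs of pi
    L, cur = [], 1
    for i in range(1, len(pi)):
        if pi[i-1] < pi[i]:
            cur += 1
        else:
            L.append(cur)
            cur = 1
    L.append(cur)
    return L

def pk(pi):
    return sum(1 for i in range(1, len(pi) - 1)
               if pi[i-1] < pi[i] > pi[i+1])

def val(pi):
    return sum(1 for i in range(1, len(pi) - 1)
               if pi[i-1] > pi[i] < pi[i+1])

def lpk(pi):
    return pk((0,) + pi)  # left peaks of pi = peaks of 0pi

def udr(pi):
    # up-down runs of pi = biruns of 0pi = 1 + number of direction changes
    w = (0,) + pi
    return 1 + sum((w[i-1] < w[i]) != (w[i] < w[i+1])
                   for i in range(1, len(w) - 1))

for n in range(1, 8):
    for pi in permutations(range(1, n + 1)):
        L = run_lengths(pi)
        lr = sum(1 for x in L if x >= 2)
        lir, lfr = int(L[0] >= 2), int(L[-1] >= 2)
        assert pk(pi) == lr - lfr                # Lemma l-pkvalruns (a)
        assert val(pi) == lr - lir               # Lemma l-pkvalruns (b)
        assert udr(pi) == lpk(pi) + val(pi) + 1  # Lemma l-udr (a)
        assert lpk(pi) == udr(pi) // 2           # Lemma l-udr (b)
\end{verbatim}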
It is also easy to check that the possible values of $\maj(\pi)$ and $\comaj(\pi)$ for $\pi\in\mathfrak{P}_{n}$ range from 0 to ${n \choose 2}$, and that all of these values are attainable. Finding such bounds for other descent statistics requires more work. Here, we determine all possible values for the $(\des,\maj)$, $(\des,\comaj)$, $(\pk,\des)$, $(\lpk,\des)$, and $(\udr,\des)$ statistics. \goodbreak \begin{prop}[Possible values of $(\des,\maj)$] \leavevmode \begin{enumerate} \item [\normalfont{(a)}] For any permutation $\pi\in\mathfrak{P}_{n}$ with $n\geq1$ and $\des(\pi)=j$, we have ${j+1 \choose 2}\leq\maj(\pi)\leq nj-{j+1 \choose 2}$. \item [\normalfont{(b)}] If $n\geq1$, $0\leq j\leq n-1$, and ${j+1 \choose 2}\leq k\leq nj-{j+1 \choose 2}$, then there exists $\pi\in\mathfrak{P}_{n}$ with $\des(\pi)=j$ and $\maj(\pi)=k$. \end{enumerate} \end{prop} \begin{proof} Among all $n$-permutations with $j$ descents, it is clear that the smallest possible value of $\maj$ is attained when the descent set is $\{1,2,\dots,j\}$, in which case the major index is equal to ${j+1 \choose 2}$. Similarly, the largest possible value of $\maj$ is attained when the descent set is $\{n-j,n-j+1,\dots,n-1\}$, in which case the major index is equal to $nj-{j+1 \choose 2}$. This proves (a). Next we prove (b). The case $j=0$ is easy, so we assume that $j\ge1$. A permutation in $\mathfrak{P}_{n}$ with descent set $\{1,2,\dots,j\}$ has major index ${j+1 \choose 2}$. Now let $\pi$ be a permutation in $\mathfrak{P}_n$ with $j$ descents, and suppose that for some $i\in\Des(\pi)$ we have $i\ne n-1$ and $i+1\notin\Des(\pi)$. Take $\sigma\in\mathfrak{P}_{n}$ to have descent set $(\Des(\pi)\setminus\{i\})\cup\{i+1\}$. (This is possible because for any subset $A$ of $[n-1]$, there exists an $n$-permutation with descent set $A$.) Then $\maj(\sigma)=\maj(\pi)+1$. We can repeat this process to increase the major index by 1 with every iteration until we reach a permutation with descent set $\{n-j,n-j+1,\dots,n-1\}$, and thus major index $nj-{j+1 \choose 2}$. This proves (b). \end{proof} \begin{prop}[Possible values of $(\des,\comaj)$] \label{p-comajvalues} \leavevmode \begin{enumerate} \item [\normalfont{(a)}] For any permutation $\pi\in\mathfrak{P}_{n}$ with $n\geq1$ and $\des(\pi)=j$, we have ${j+1 \choose 2}\leq\comaj(\pi)\leq nj-{j+1 \choose 2}$. \item [\normalfont{(b)}] If $n\geq1$, $0\leq j\leq n-1$, and ${j+1 \choose 2}\leq k\leq nj-{j+1 \choose 2}$, there exists $\pi\in\mathfrak{P}_{n}$ with $\des(\pi)=j$ and $\comaj(\pi)=k$. \end{enumerate} \end{prop} \begin{proof} This follows from the previous proposition and the formula $\comaj(\pi)=n\des(\pi)-\maj(\pi)$.\end{proof} \begin{prop}[Possible values of $(\pk,\des)$] \label{p-pkdesvalues} \leavevmode \begin{enumerate} \item [\normalfont{(a)}] For any permutation $\pi\in\mathfrak{P}_{n}$ with $n\geq1$, we have $0\leq\pk(\pi)\leq\left\lfloor (n-1)/2\right\rfloor $. In addition, $\pk(\pi) \leq \des(\pi) \leq n-\pk(\pi) -1$. \item [\normalfont{(b)}] If $n\geq1$, $0\leq j\leq\left\lfloor (n-1)/2\right\rfloor $, and $j\leq k\leq n-j-1$, then there exists $\pi\in\mathfrak{P}_{n}$ with $\pk(\pi)=j$ and $\des(\pi)=k$. \end{enumerate} \end{prop} \begin{proof} Fix $n\geq1$. Recall from Lemma \ref{l-pkvalruns} (a) that $\pk(\pi)$ is equal to the number of non-final long runs of $\pi$. It is clear that the number of non-final long runs of an $n$-permutation is between $0$ and $\left\lfloor (n-1)/2\right\rfloor $. Every peak is a descent, so $\pk(\pi)\leq\des(\pi)$. 
For each peak $i$, note that $i-1\in[n-1]$ is not a descent, so that $\pk(\pi)\leq n-1-\des(\pi)$ and therefore $\des(\pi)\leq n-\pk(\pi)-1$. This proves (a). To prove (b), it suffices to show that if $n\geq1$, $0\leq j\leq\left\lfloor (n-1)/2\right\rfloor $, and $j\leq k\leq n-j-1$, then there exists a composition of $n$ with $j$ non-final long parts (i.e., parts of size at least 2) and $k+1$ total parts. Such a composition is $(2^{j},1^{k-j},n-k-j)$. Hence, (b) is proved.\end{proof} \begin{prop}[Possible values of $(\lpk,\des)$] \label{p-lpkdesvalues} \leavevmode \begin{enumerate} \item [\normalfont{(a)}] For any permutation $\pi\in\mathfrak{P}_{n}$ with $n\geq1$, we have $0\leq\lpk(\pi)\leq\left\lfloor n/2\right\rfloor $. In addition, if $\lpk(\pi)=0$, then $\des(\pi)=0$; otherwise, $\lpk(\pi)\leq\des(\pi)\leq n-\lpk(\pi)$. \item [\normalfont{(b)}] If $n\geq1$, $1\leq j\leq\left\lfloor n/2\right\rfloor $, and $j\leq k\leq n-j$, then there exists $\pi\in\mathfrak{P}_{n}$ with $\lpk(\pi)=j$ and $\des(\pi)=k$. In addition, for any $n\geq1$, there exists $\pi\in\mathfrak{P}_{n}$ with $\lpk(\pi)=\des(\pi)=0$. \end{enumerate} \end{prop} \begin{proof} If $\lpk(\pi)=0$, then $\pi$ is an increasing permutation, so we also have $\des(\pi)=0$. The other inequalities of part (a) follow from applying Proposition \ref{p-pkdesvalues} (a) to the permutation $0\pi$. Now, fix $n\geq2$. (The case $n=1$ is obvious.) A permutation with descent composition $(n)$ has no left peaks and no descents. Suppose that $1\leq j\leq\left\lfloor n/2\right\rfloor $ and $j\leq k\leq n-j$. To complete the proof of (b), we show that there exists a composition $L$ of $n$ with exactly $k+1$ parts such that $\lpk(L)=\lr(L)+\sir(L)-\lfr(L)=j$. Such a composition is $(1^{k-j+1},2^{j-1},n-k-j+1)$. This completes the proof of (b).\end{proof} We say that $i\in[n-1]$ is an \textit{ascent} of an $n$-permutation $\pi$ if $\pi_i<\pi_{i+1}$. Let $\asc(\pi)$ denote the number of ascents of $\pi$. It is clear that $\des(\pi)=n-1-\asc(\pi)$. \begin{prop}[Possible values of $(\udr,\des)$] \label{p-udrdesvalues} \leavevmode \begin{enumerate} \item [\normalfont{(a)}] For any permutation $\pi\in\mathfrak{P}_{n}$ with $n\geq1$, we have $1\leq\udr(\pi)\leq n$. In addition, if $\udr(\pi)=1$, then $\des(\pi)=0$; otherwise, $\left\lfloor \udr(\pi)/2\right\rfloor \le \des(\pi) \le n - \left\lceil \udr(\pi)/2\right\rceil$. \item [\normalfont{(b)}] If $n\geq1$, $2\leq j\leq n$, and $\floor{j/2}\leq k\leq n-\ceil{j/2}$, then there exists $\pi\in\mathfrak{P}_{n}$ with $\udr(\pi)=j$ and $\des(\pi)=k$. In addition, for any $n\geq1$, there exists $\pi\in\mathfrak{P}_{n}$ with $\udr(\pi)=1$ and $\des(\pi)=0$. \end{enumerate} \end{prop} \begin{proof} It is clear that every nonempty permutation has at least one up-down run, and every up-down run of a permutation ends with a different letter, so $1\leq\udr(\pi)\leq n$. The beginning of the $2i$th up-down run of $\pi$ is always a descent of $\pi$, so $\des(\pi)\ge \floor{\udr(\pi)/2}$. The beginning of the $(2i-1)$th up-down run of $\pi$ is an ascent of $\pi$ for $i\ge2$, so the number of ascents of $\pi$ is at least $\floor{(\udr(\pi) -1)/2} = \ceil{\udr(\pi)/2}-1$. Thus \[\des(\pi) = n-1-\asc(\pi)\le n-1-\left(\ceil{\udr(\pi)/2}-1\right) = n-\ceil{\udr(\pi)/2},\] completing the proof of (a). Now, fix $n\geq2$. (The case $n=1$ is obvious.) A permutation with descent composition $(n)$ has only one up-down run and no descents. Suppose that $2\leq j\leq n$ and $\floor{j/2}\leq k\leq n-\ceil{j/2}$.
To complete the proof of (b), we show that there exists a composition $L$ of $n$ with exactly $k+1$ parts such that $\udr(L)=\lpk(L)+\val(L)+1=2\sir(L)+2\lr(L)-\lfr(L)=j$. For this, we consider three cases: \begin{itemize} \item If $j=2$, then we can take $(n-k,1^k)$. \item If $j>2$ and $j$ is even, then we can take $(1,n-j/2-k+2, 2^{j/2-2}, 1^{k-j/2+1})$. \item If $j$ is odd, then we can take $(1,1^{k-(j-1)/2},2^{(j-3)/2},n-(j+1)/2-k+2)$. \end{itemize} This completes the proof of (b). \end{proof} \subsection{A bijective proof of the shuffle-compatibility of the descent set} \label{s-bijproof} Here we give a simple proof that the descent set is a shuffle-compatible permutation statistic. The idea of the proof is inspired by the theory of $P$-partitions \cite{Stanley1972}. Recall that in Section 2.1, we defined the inverse bijections $\Comp$ and $\Des$ between compositions of $n$ and subsets of $[n-1]$ in the following way: for a set $A=\{a_{1},a_{2},\dots,a_{j}\}\subseteq[n-1]$ with $a_{1}<a_{2}<\cdots<a_{j}$, we let $\Comp(A)\coloneqq(a_{1},a_{2}-a_{1},\dots,a_{j}-a_{j-1},n-a_{j})$, and for a composition $L=(L_{1},L_{2},\dots,L_{k})$, we let $\Des(L)\coloneqq\{L_{1},L_{1}+L_{2},\dots,L_{1}+\cdots+L_{k-1}\}$. Observe that these maps extend to inverse bijections between weak compositions of $n$ and multisubsets of $\{0\}\cup[n]$. (A weak composition allows 0 as a part.) For example, if $n=7$ and $A=\{0,2,2,5\}$, then $\Comp(A)=(0,2,0,3,2)$. For two weak compositions $J=(J_{1},J_{2},\dots,J_{k})$ and $K=(K_{1},K_{2},\dots, K_{k})$ with the same number of parts, let $J+K$ denote the weak composition $(J_{1}+K_{1},J_{2}+K_{2},\dots,J_{k}+K_{k})$ obtained by summing the entries of $J$ and $K$ componentwise. Also, we define the \textit{refinement order} on weak compositions of $n$ analogously to the refinement order on compositions of $n$; that is, $M$ covers $L$ if and only if $M$ can be obtained from $L$ by replacing two consecutive parts $L_{i}$ and $L_{i+1}$ with $L_{i}+L_{i+1}$. We say that $L$ is a \textit{refinement} of $M$ if $L\leq M$ in the refinement order. \begin{lem} \label{t-des-sc} Let $\pi\in\mathfrak{P}_{m}$ and $\sigma\in\mathfrak{P}_{n}$ be disjoint permutations, and let $A\subseteq[m+n-1]$ and $L=\Comp(A)$. Then the number of shuffles of $\pi$ and $\sigma$ with descent set contained in $A$ is equal to the number of weak compositions $J$ of $m$ and $K$ of $n$ such that $J$ is a refinement of $\Comp(\pi)$, $K$ is a refinement of $\Comp(\sigma)$, $J$ and $K$ have the same number of parts as $L$, and $J+K=L$.\end{lem} \begin{proof} Suppose that $L$ has $k$ parts, and let $J$ and $K$ satisfy the above conditions. For every $i\in\Des(J)$, insert a bar immediately before the $(i+1)$th letter of $\pi$.% \footnote{Since $\Des(J)$ is a multiset, multiple bars may be inserted in any given position.% } Similarly, for every $i\in\Des(K)$, insert a bar immediately before the $(i+1)$th letter of $\sigma$. This creates $k$ blocks of letters in each of the permutations $\pi$ and $\sigma$ such that the letters in each block are increasing. For example, take $\pi=12879$, $\sigma=4635$, $A=\{1,5,6\}$, $L=(1,4,1,3)$, $J=(1,2,0,2)$, and $K=(0,2,1,1)$. Then this yields the ``barred permutations'' $1|28||79$ and $|46|3|5$. For each $1\leq i\leq k$, let $\tau^{(i)}$ denote the permutation obtained by merging the letters in the $i$th block of $\pi$ and the $i$th block of $\sigma$ in increasing order. 
Then let $\tau\in S(\pi,\sigma)$ be the concatenation $\tau^{(1)}\tau^{(2)}\cdots\tau^{(k)}$, which has descent set contained in $A$. For example, using the $\pi$ and $\sigma$ specified above, we have $\tau^{(1)}=1$, $\tau^{(2)}=2468$, $\tau^{(3)}=3$, and $\tau^{(4)}=579$, so $\tau=124683579$. Since $J+K=L$, the descent set of $\tau$ is contained in $A$. To show that this procedure gives a bijection between shuffles of $\pi$ and $\sigma$ with descent set contained in $A$ and pairs of weak compositions $(J,K)$ satisfying the stated conditions, we give an inverse procedure. Let $\tau\in S(\pi,\sigma)$ with $\Des(\tau)\subseteq A$, and let $k=\left|A\right|+1$. For every $i\in A$, insert a bar after the $i$th letter of $\tau$. Delete every letter in $\sigma$ from $\tau$ to obtain the permutation $\pi$ decorated with bars, which creates $k$ blocks of letters in $\pi$ such that the letters in each block are increasing. Similarly, by deleting every letter in $\pi$ from $\tau$, we obtain $k$ blocks of letters in $\sigma$ such that the letters in each block are increasing. Using the same example as above, we begin with $A=\{1,5,6\}$ and $\tau=124683579$. Inserting bars, we have $1|2468|3|579$, from which we obtain $1|28||79$ and $|46|3|5$. Now, for each $1\leq i\leq k$, let $J_{i}$ denote the size of the $i$th block in $\pi$ and let $K_{i}$ denote the size of the $i$th block in $\sigma$. Then define the weak compositions $J$ and $K$ by $J=(J_{1},J_{2},\dots,J_{k})$ and $K=(K_{1},K_{2},\dots,K_{k})$. Continuing the example, we have $J=(1,2,0,2)$ and $K=(0,2,1,1)$. Since the letters in every block are weakly increasing, $J$ is a refinement of $\Comp(\pi)$ and $K$ is a refinement of $\Comp(\sigma)$. Moreover, it is clear that $J$ and $K$ have the same number of parts as $L=\Comp(A)$ and that $J+K=L$. \end{proof} Lemma \ref{t-des-sc} shows that the number of shuffles of $\pi$ and $\sigma$ with descent set contained in a specified set $A$ depends only on $\Des(\pi)$, $\Des(\sigma)$, $\left|\pi\right|$, and $\left|\sigma\right|$. By inclusion-exclusion, it follows that the number of shuffles of $\pi$ and $\sigma$ with descent set equal to $A$ depends only on $\Des(\pi)$, $\Des(\sigma)$, $\left|\pi\right|$, and $\left|\sigma\right|$. In other words, the descent set is shuffle-compatible. We can use the shuffle-compatibility of the descent set to prove the shuffle-compatibility of a family of related statistics that we call ``partial descent sets''. For non-negative integers $i$ and $j$, define the \textit{partial descent set} $\Des_{i,j}$ by \[ \Des_{i,j}(\pi) \coloneqq \Des(\pi) \cap (\{1,2,\dots, i\}\cup\{n-1,\dots, n-j\}), \] where $n=\left|\pi\right|$. In other words, $\Des_{i,j}(\pi)$ is the set of descents of $\pi$ that occur in the first $i$ or last $j$ positions. For example, if $i+j\ge \left|\pi\right| -1$ then $\Des_{i,j}(\pi)=\Des(\pi)$, and for $\left|\pi\right|\ge2$, $\left|\Des_{1,0}(\pi)\right| = \sir(\pi)$ and $\left|\Des_{0,1}(\pi)\right| = \sfr(\pi)$. \begin{thm} \label{t-truncDessc} The partial descent sets $\Des_{i,j}$ for all $i,j\ge0$ are shuffle-compatible. \end{thm} \begin{proof} We write $\Des_{i,j}(S(\pi,\sigma))$ for the multiset $\{\, \Des_{i,j}(\tau) \mid \tau\in S(\pi,\sigma)\,\}$. We define the equivalence relation $\equiv_{i,j}$ on permutations of the same length by $\pi \equiv_{i,j} \pi'$ if and only if $\Des_{i,j}(S(\pi,\sigma))=\Des_{i,j}(S(\pi',\sigma))$ for all $\sigma$ disjoint from both $\pi$ and $\pi'$. 
(It is immediate from the above definition that $\equiv_{i,j}$ is reflexive and symmetric, and it is not hard to show that $\equiv_{i,j}$ is also transitive.) For $\pi$ and $\pi'$ in $\mathfrak{P}_m$, the following are sufficient conditions for $\pi \equiv_{i,j} \pi'$: \begin{enumerate} \item[(i)] If $\pi$ and $\pi'$ have the same descent set, then $\pi \equiv_{i,j} \pi'$. \item[(ii)] If $\pi_k=\pi_k'$ for all $k$ with $1 \leq k \leq i+1$ or $m-j \leq k \leq m$, then $\pi \equiv_{i,j} \pi'$. \end{enumerate} Condition (i) is a consequence of the shuffle-compatibility of the descent set. Condition (ii) follows from the fact that $\Des_{i,j}(\tau)$ for $\tau\in S(\pi,\sigma)$ does not depend on the values of $\pi_k$ with $i+1<k<m-j$.% \footnote{This is because if $i+1<k<m-j$, then upon shuffling $\pi$ with any permutation $\sigma$ disjoint from $\pi$, the letter $\pi_k$ cannot end up in the first $i+1$ or last $j+1$ positions of any element of $S(\pi,\sigma)$.} We claim that to prove the theorem it is sufficient to show that $\Des_{i,j}(\pi) = \Des_{i,j}(\pi')$ implies $\pi \equiv_{i,j} \pi'$. Indeed, let $\pi$ and $\pi'$ be two permutations of the same length with $\Des_{i,j}(\pi)=\Des_{i,j}(\pi')$ and similarly with $\sigma$ and $\sigma'$, where $\pi$ is disjoint from $\sigma$ and $\pi'$ is disjoint from $\sigma'$. By (i), we can assume without loss of generality that $\sigma$ is disjoint from $\pi'$ as well, and thus if we have $\pi \equiv_{i,j} \pi'$ and $\sigma \equiv_{i,j} \sigma'$, then $\Des_{i,j}(S(\pi,\sigma))=\Des_{i,j}(S(\pi',\sigma))=\Des_{i,j}(S(\pi',\sigma'))$. Now suppose that $\pi$ and $\pi'$ are in $\mathfrak{P}_m$ with $\Des_{i,j}(\pi) =\Des_{i,j}(\pi')$. We shall show that $\pi \equiv_{i,j} \pi'$, considering three cases separately: \begin{enumerate} \item First, suppose that $i+j\ge m-1$. Then $\Des(\pi) = \Des(\pi')$, so $\pi \equiv_{i,j} \pi'$ by (i). \item Next, suppose that $i+j\le m-3$. It is enough to find permutations $\bar\pi$ and $\bar\pi'$ such that $\bar\pi \equiv_{i,j} \pi$, $\bar\pi' \equiv_{i,j} \pi'$, and $\Des(\bar\pi)=\Des(\bar\pi')$. To do this, we may choose some $a\in\mathbb{P}$ greater than all the letters of $\pi$ and $\pi'$ and construct $\bar\pi$ and $\bar\pi'$ by replacing the letters in positions $i+2, i+3,\dots, m-j-1$ of both $\pi$ and $\pi'$ with the sequence $a \mskip 6mu a+1\,\cdots\, a+(m-i-j-3)$. \item Finally, suppose that $i+j=m-2$. In this case, $\Des_{i,j}(\pi)$ comprises all descents of $\pi $ except in position $i+1$, so that $\Des(\pi)$ and $\Des(\pi')$ are the same except that $i+1$ may be in one but not the other. If $\Des(\pi)=\Des(\pi')$ then $\pi \equiv_{i,j} \pi'$, so let us suppose that $i+1$ is a descent of exactly one of $\pi$ and $\pi'$. Let $\sigma\in\mathfrak{P}_n$ be disjoint from $\pi$. By (i), we may assume without loss of generality that no letter of $\pi$ or $\sigma$ has value strictly between $\pi_{i+1}$ and $\pi_{i+2}$. Let $\pi^*$ be the result of switching $\pi_{i+1}$ and $\pi_{i+2}$ in $\pi$. It is easy to see that switching $\pi_{i+1}$ and $\pi_{i+2}$ in an element $\tau$ of $S(\pi,\sigma)$ can change $\Des(\tau)$ only by adding or removing a single descent which is at least $i+1$ and at most $n+i+1=m+n-1-j$ and thus does not change $\Des_{i,j}(\tau)$. Thus, $\pi^* \equiv_{i,j} \pi$. 
Since $\Des(\pi^*) = \Des(\pi')$, we also have $\pi^* \equiv_{i,j} \pi'$, so $\pi \equiv_{i,j} \pi'$ as desired.\qedhere \end{enumerate} \end{proof} \section{Shuffle algebras} \label{s-section3} \subsection{Definition and basic results} Every permutation statistic $\st$ induces an equivalence relation on permutations; we say that permutations $\pi$ and $\sigma$ are $\st$-\textit{equivalent} if $\st(\pi)=\st(\sigma)$ and $\left|\pi\right|=\left|\sigma\right|$.\footnote{The notion of $\st$-equivalence should not be confused with that of ``$\st$-Wilf equivalence'' \cite{Dokos2012}. } We write the $\st$-equivalence class of $\pi$ as $[\pi]_{\st}$. For a shuffle-compatible statistic $\st$, we can then associate to $\st$ a $\mathbb{Q}$-algebra in the following way. First, associate to $\st$ a $\mathbb{Q}$-vector space by taking as a basis the $\st$-equivalence classes of permutations. We give this vector space a multiplication by taking \[ [\pi]_{\st}[\sigma]_{\st}=\sum_{\tau\in S(\pi,\sigma)}[\tau]_{\st}, \] which is well-defined (i.e., the choice of $\pi$ and $\sigma$ in an equivalence class does not matter) because $\st$ is shuffle-compatible. Conversely, if such a multiplication is well-defined, then $\st$ is shuffle-compatible. We denote the resulting algebra by ${\cal A}_{\st}$ and call it the \textit{shuffle algebra} of $\st$. Observe that ${\cal A}_{\st}$ is graded, and $[\pi]_{\st}$ belongs to the $n$th homogeneous component of ${\cal A}_{\st}$ if $\pi$ has length $n$. As an example, we describe the shuffle algebra of the major index $\maj$. \begin{thm}[Shuffle-compatibility of the major index] \label{t-majsc} \leavevmode \begin{itemize} \item [\normalfont{(a)}] The major index $\maj$ is shuffle-compatible. \item [\normalfont{(b)}] The linear map on ${\cal A}_{\maj}$ defined by \[ [\pi]_{\maj}\mapsto\frac{q^{\maj(\pi)}}{[\left|\pi\right|]_{q}!}x^{\left|\pi\right|} \] is a $\mathbb{Q}$-algebra isomorphism from ${\cal A}_{\maj}$ to the span of \[ \left\{ \frac{q^{j}}{[n]_{q}!}x^{n}\right\} _{n\geq0,\:0\leq j\leq{n \choose 2}}, \] a subalgebra of $\mathbb{Q}[[q]][x]$. \item [\normalfont{(c)}] The $n$th homogeneous component of ${\cal A}_{\maj}$ has dimension ${n \choose 2}+1$.% \end{itemize} \end{thm} \begin{proof} We know from (\ref{e-maj}) that $\maj$ is shuffle-compatible, so there is no need to prove (a). Let $\phi \colon {\cal A}_{\maj}\rightarrow\mathbb{Q}[[q]][x]$ denote the map given in the statement of (b). Then by (\ref{e-maj}), for $\pi\in\mathfrak{P}_{m}$ and $\sigma\in\mathfrak{P}_{n}$, we have \begin{align*} \phi([\pi]_{\maj})\phi([\sigma]_{\maj}) & =\frac{q^{\maj(\pi)}}{[m]_{q}!}x^{m}\frac{q^{\maj(\sigma)}}{[n]_{q}!}x^{n}\\ & =\frac{q^{\maj(\pi)+\maj(\sigma)}}{[m]_{q}![n]_{q}!}x^{m+n}\\ & =\frac{q^{\maj(\pi)+\maj(\sigma)}}{[m+n]_{q}!} \qbinom{m+n}{m}x^{m+n}\\ & =\sum_{\tau\in S(\pi,\sigma)}\frac{q^{\maj(\tau)}}{[m+n]_{q}!}x^{m+n}\\ & =\phi([\pi]_{\maj}[\sigma]_{\maj}), \end{align*} so $\phi$ is an algebra homomorphism. The possible values for $\maj(\pi)$ for an $n$-permutation $\pi$ range from 0 to ${n \choose 2}$, and since the elements $q^{j}x^{n}/[n]_{q}!$ are linearly independent, $\phi$ gives an isomorphism from ${\cal A}_{\maj}$ to the stated subalgebra, thus proving (b) and (c). \end{proof} We say that two permutation statistics $\st_{1}$ and $\st_{2}$ are \textit{equivalent} if $[\pi]_{\st_{1}}=[\pi]_{\st_{2}}$ for every permutation $\pi$. In other words, $\st_{2}(\pi)$ depends only on $\st_{1}(\pi)$ and $\left|\pi\right|$ for every permutation $\pi$, and vice versa. 
As shown in Lemma \ref{l-udr}, $\udr$ and $(\lpk,\val)$ are equivalent statistics. It also follows from the formula $\comaj(\pi)=n\des(\pi)-\maj(\pi)$ that $(\des,\maj)$ and $(\des,\comaj)$ are equivalent statistics. \begin{thm} \label{t-esc} Suppose that $\st_{1}$ and $\st_{2}$ are equivalent statistics. If $\st_{1}$ is shuffle-compatible with shuffle algebra ${\cal A}_{\st_{1}}$, then $\st_{2}$ is also shuffle-compatible with shuffle algebra ${\cal A}_{\st_{2}}$ isomorphic to ${\cal A}_{\st_{1}}$.\end{thm} \begin{proof} Equivalent statistics have the same equivalence classes on permutations, so ${\cal A}_{\st_{1}}$ and ${\cal A}_{\st_{2}}$ (as vector spaces) have the same basis elements. If $\st_{1}$ and $\st_{2}$ are equivalent, then \[ [\pi]_{\st_{2}}[\sigma]_{\st_{2}}=[\pi]_{\st_{1}}[\sigma]_{\st_{1}}=\sum_{\tau\in S(\pi,\sigma)}[\tau]_{\st_{1}}=\sum_{\tau\in S(\pi,\sigma)}[\tau]_{\st_{2}}, \] which proves the result. \end{proof} For example, it is easy to see that $\Des_{1,0}$ is equivalent to $\sir$ and that $\Des_{0,1}$ is equivalent to $\sfr$. Thus, Theorem \ref{t-truncDessc} implies that $\sir$ and $\sfr$ are shuffle-compatible as well. We say that $\st_1$ is a \textit{refinement} of $\st_2$ if for all permutations $\pi$ and $\sigma$ of the same length, $\st_1(\pi) = \st_1(\sigma)$ implies $\st_2(\pi) = \st_2(\sigma)$. For example, the statistics of which the descent set is a refinement are exactly what we call descent statistics. \begin{thm} \label{t-quots} Suppose that $\st_1$ is shuffle-compatible and is a refinement of $\st_2$. Let $A$ be a $\mathbb{Q}$-algebra with basis $\{u_{\alpha}\}$ indexed by $\st_2$-equivalence classes $\alpha$, and suppose that there exists a $\mathbb{Q}$-algebra homomorphism $\phi\colon \mathcal{A}_{\st_1}\to A$ such that for every $\st_1$-equivalence class $\beta$, we have $\phi(\beta) = u_\alpha$ where $\alpha$ is the $\st_2$-equivalence class containing $\beta$. Then $\st_2$ is shuffle-compatible and the map $u_\alpha\mapsto\alpha$ extends by linearity to an isomorphism from $A$ to ${\cal A}_{\st_2}$. \end{thm} \begin{proof} It is sufficient to show that for any two disjoint permutations $\pi$ and $\sigma$, we have \begin{equation*} u_{[\pi]_{\st_2}}u_{[\sigma]_{\st_2}} =\sum_{\tau\in S(\pi, \sigma)} u_{[\tau]_{\st_2}}. \end{equation*} To see this, we have \begin{align*} u_{[\pi]_{\st_2}}u_{[\sigma]_{\st_2}} &= \phi([\pi]_{\st_1})\phi([\sigma]_{\st_1})\\ &= \phi([\pi]_{\st_1}[\sigma]_{\st_1})\\ &=\phi\Big(\sum_{\tau\in S(\pi, \sigma)} [\tau]_{\st_1}\Big)\\ &=\sum_{\tau\in S(\pi, \sigma)} u_{[\tau]_{\st_2}}.\qedhere \end{align*} \end{proof} \subsection{Basic symmetries yield isomorphic shuffle algebras} Here we consider three involutions on permutations given by symmetries\textemdash reversion, complementation, and reverse-complementation\textemdash and their implications for the shuffle-compatibility of permutation statistics. Given $\pi=\pi_{1}\pi_{2}\cdots\pi_{n}\in\mathfrak{P}_{n}$, we define the \textit{reversal} $\pi^{r}$ of $\pi$ to be $\pi^{r}\coloneqq\pi_{n}\pi_{n-1}\cdots\pi_{1}$, the \textit{complement} $\pi^{c}$ of $\pi$ to be the permutation obtained by (simultaneously) replacing the $i$th smallest letter in $\pi$ with the $i$th largest letter in $\pi$ for all $1\leq i\leq n$, and the \textit{reverse-complement} $\pi^{rc}$ of $\pi$ to be $\pi^{rc}\coloneqq(\pi^{r})^{c}=(\pi^{c})^{r}$. For example, given $\pi=139264$, we have $\pi^{r}=462931$, $\pi^{c}=941623$, and $\pi^{rc}=326149$. 
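These three symmetries are easily implemented; the following Python sketch (ours, purely illustrative, with all function names our own) computes them and reproduces the example above.
\begin{verbatim}
def reversal(pi):
    # pi is a tuple of distinct positive integers
    return tuple(reversed(pi))

def complement(pi):
    # replace the i-th smallest letter of pi with the i-th largest
    letters = sorted(pi)
    swap = dict(zip(letters, reversed(letters)))
    return tuple(swap[a] for a in pi)

def reverse_complement(pi):
    return complement(reversal(pi))

pi = (1, 3, 9, 2, 6, 4)
assert reversal(pi) == (4, 6, 2, 9, 3, 1)            # 462931
assert complement(pi) == (9, 4, 1, 6, 2, 3)          # 941623
assert reverse_complement(pi) == (3, 2, 6, 1, 4, 9)  # 326149
\end{verbatim}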
More generally, let $f$ be an involution on the set of permutations which preserves the length of a permutation. Then let $\pi^{f}$ denote $f(\pi)$. Given a set $X$ of permutations, let \[ X^{f}\coloneqq\{\,\pi^{f}\mid\pi\in X\,\}, \] so $f$ naturally induces an involution on sets of permutations as well. We say that two permutation statistics $\st_{1}$ and $\st_{2}$ are \textit{$f$-equivalent} if $\st_{1}\circ f$ is equivalent to $\st_{2}$. Equivalently, $\st_{1}$ and $\st_{2}$ are $f$-equivalent if $([\pi^{f}]_{\st_{1}})^{f}=[\pi]_{\st_{2}}$ for all $\pi$. It is easy to verify that if $\st_{1}(\pi^{f})=\st_{2}(\pi)$ for all $\pi$, then $\st_{1}$ and $\st_{2}$ are $f$-equivalent (although this is not a necessary condition). For example, $\Lpk$ and $\Rpk$ are $r$-equivalent, $\pk$ and $\val$ are $c$-equivalent, $\Pk$ and $\Val$ are $c$-equivalent, $(\pk,\des)$ and $(\val,\des)$ are $rc$-equivalent, and $\maj$ and $\comaj$ are $rc$-equivalent. It is less obvious that $(\lpk,\val)$ and $(\lpk,\pk)$ are $rc$-equivalent, so we provide a proof below. \begin{prop} \label{p-lpkpk} $(\lpk,\val)$ and $(\lpk,\pk)$ are $rc$-equivalent statistics. \end{prop} \begin{proof} Fix a permutation $\pi$. We divide into four cases: (a) $\pi$ has a short initial run and a long final run, (b) $\pi$ has a short initial run and a short final run, (c) $\pi$ has a long initial run and a long final run, and (d) $\pi$ has a long initial run and a short final run. In case (a), we know from Lemma \ref{l-udr} that $\lpk(\pi)=\val(\pi)$. Then $\pk(\pi^{rc})=\val(\pi)$, and $\pi^{rc}$ has a long initial run, so \[ \lpk(\pi^{rc})=\pk(\pi^{rc})=\val(\pi)=\lpk(\pi). \] Thus, $(\lpk,\val)(\pi)=(\lpk,\pk)(\pi^{rc})$. The other three cases can be verified in the same way. \end{proof} Let us say that $f$ is \textit{shuffle-compatibility-preserving} if for every pair of disjoint permutations $\pi$ and $\sigma$, there exist disjoint permutations $\hat{\pi}$ and $\hat{\sigma}$ with the same relative order as $\pi$ and $\sigma$, respectively, such that $S(\hat{\pi}^{f},\hat{\sigma}^{f})=S(\pi,\sigma)^{f}$ and $S(\pi^{f},\sigma^{f})=S(\hat{\pi},\hat{\sigma})^{f}$. We note that $f$-equivalences are not actually equivalence relations on statistics (although they are symmetric), but we shall show that if the statistics are shuffle-compatible and $f$ is shuffle-compatibility-preserving, then $f$-equivalences induce isomorphisms on the corresponding shuffle algebras. \begin{thm} \label{t-fsc} Let $f$ be shuffle-compatibility-preserving, and suppose that $\st_{1}$ and $\st_{2}$ are $f$-equivalent statistics. If $\st_{1}$ is shuffle-compatible with shuffle algebra ${\cal A}_{\st_{1}}$, then $\st_{2}$ is also shuffle-compatible with shuffle algebra ${\cal A}_{\st_{2}}$ isomorphic to ${\cal A}_{\st_{1}}$. \end{thm} \begin{proof} Let $\pi$ and $\bar{\pi}$ be permutations in the same $\st_{2}$-equivalence class and similarly with $\sigma$ and $\bar{\sigma}$, such that $\pi$ and $\sigma$ are disjoint and $\bar{\pi}$ and $\bar{\sigma}$ are disjoint. Since $\st_{1}$ and $\st_{2}$ are $f$-equivalent, it follows that \[ ([\pi^{f}]_{\st_{1}})^{f}=[\pi]_{\st_{2}}=[\bar{\pi}]_{\st_{2}}=([\bar{\pi}^{f}]_{\st_{1}})^{f}. \] Hence $[\pi^{f}]_{\st_{1}}=[\bar{\pi}^{f}]_{\st_{1}}$ and similarly $[\sigma^{f}]_{\st_{1}}=[\bar{\sigma}^{f}]_{\st_{1}}$.
Since $f$ is shuffle-compatibility-preserving, there exist permutations $\hat{\pi},$ $\hat{\sigma}$, $\hat{\bar{\pi}}$, and $\hat{\bar{\sigma}}$\textemdash having the same relative order as $\pi$, $\sigma$, $\bar{\pi}$, and $\bar{\sigma}$, respectively\textemdash satisfying $S(\hat{\pi}^{f},\hat{\sigma}^{f})=S(\pi,\sigma)^{f}$, $S(\pi^{f},\sigma^{f})=S(\hat{\pi},\hat{\sigma})^{f}$, $S(\hat{\bar{\pi}}^{f},\hat{\bar{\sigma}}^{f})=S(\bar{\pi},\bar{\sigma})^{f}$, and $S(\bar{\pi}^{f},\bar{\sigma}^{f})=S(\hat{\bar{\pi}},\hat{\bar{\sigma}})^{f}$. By the ``same relative order'' property, we have \[ [\hat{\pi}^{f}]_{\st_{1}}=[\pi^{f}]_{\st_{1}}=[\bar{\pi}^{f}]_{\st_{1}}=[\hat{\bar{\pi}}^{f}]_{\st_{1}} \] and \[ [\hat{\sigma}^{f}]_{\st_{1}}=[\sigma^{f}]_{\st_{1}}=[\bar{\sigma}^{f}]_{\st_{1}}=[\hat{\bar{\sigma}}^{f}]_{\st_{1}}. \] Now, by shuffle-compatibility of $\st_{1}$, we have the equality of multisets \[ \{\,\st_{1}(\tau)\mid\tau\in S(\hat{\pi}^{f},\hat{\sigma}^{f})\,\}=\{\,\st_{1}(\tau)\mid\tau\in S(\hat{\bar{\pi}}^{f},\hat{\bar{\sigma}}^{f})\,\}, \] which is equivalent to \[ \{\,\st_{2}(\tau)\mid\tau^{f}\in S(\hat{\pi}^{f},\hat{\sigma}^{f})\,\}=\{\,\st_{2}(\tau)\mid\tau^{f}\in S(\hat{\bar{\pi}}^{f},\hat{\bar{\sigma}}^{f})\,\} \] by $f$-equivalence of $\st_{1}$ and $\st_{2}$, and from $S(\hat{\pi}^{f},\hat{\sigma}^{f})=S(\pi,\sigma)^{f}$ and $S(\hat{\bar{\pi}}^{f},\hat{\bar{\sigma}}^{f})=S(\bar{\pi},\bar{\sigma})^{f}$, we have \[ \{\,\st_{2}(\tau)\mid\tau\in S(\pi,\sigma)\,\}=\{\,\st_{2}(\tau)\mid\tau\in S(\bar{\pi},\bar{\sigma})\,\}. \] Therefore, $\st_{2}$ is shuffle-compatible. It remains to prove that ${\cal A}_{\st_{2}}$ is isomorphic to ${\cal A}_{\st_{1}}$. Observe that \[ \sum_{\tau\in S(\hat{\pi},\hat{\sigma})}[\tau]_{\st_{2}}=\sum_{\tau\in S(\pi,\sigma)}[\tau]_{\st_{2}}, \] since $\st_{2}$ is shuffle-compatible. Define the linear map $\varphi_{f} \colon {\cal A}_{\st_{2}}\rightarrow{\cal A}_{\st_{1}}$ by $[\pi]_{\st_{2}}\mapsto[\pi^{f}]_{\st_{1}}$. Then \begin{align*} \varphi_{f}([\pi]_{\st_{2}}[\sigma]_{\st_{2}}) & =\varphi_{f}\Big(\sum_{\tau\in S(\pi,\sigma)}[\tau]_{\st_{2}}\Big)\\ & =\sum_{\tau\in S(\pi,\sigma)}\varphi_{f}([\tau]_{\st_{2}})\\ & =\sum_{\tau\in S(\pi,\sigma)}[\tau^{f}]_{\st_{1}}\\ & =\sum_{\tau\in S(\hat{\pi},\hat{\sigma})}[\tau^{f}]_{\st_{1}}\\ & =\sum_{\tau\in S(\hat{\pi},\hat{\sigma})^{f}}[\tau]_{\st_{1}}\\ & =\sum_{\tau\in S(\pi^{f},\sigma^{f})}[\tau]_{\st_{1}}\\ & =[\pi^{f}]_{\st_{1}}[\sigma^{f}]_{\st_{1}}\\ & =\varphi_{f}([\pi]_{\st_{2}})\varphi_{f}([\sigma]_{\st_{2}}), \end{align*} so $\varphi_{f}$ is an isomorphism from ${\cal A}_{\st_{2}}$ to ${\cal A}_{\st_{1}}$. \end{proof} \begin{lem} Reversion, complementation, and reverse-complementation are shuffle-com\-pat\-i\-bil\-ity-preserving. \end{lem} \begin{proof} It is clear that $S(\pi^{r},\sigma^{r})=S(\pi,\sigma)^{r}$, so by taking $\hat{\pi}=\pi$ and $\hat{\sigma}=\sigma$, the equalities $S(\hat{\pi}^{r},\hat{\sigma}^{r})=S(\pi,\sigma)^{r}$ and $S(\pi^{r},\sigma^{r})=S(\hat{\pi},\hat{\sigma})^{r}$ come for free. Thus reversion is shuffle-compatibility-preserving. Unlike with reversion, it is not true in general that $S(\pi^{c},\sigma^{c})=S(\pi,\sigma)^{c}$. 
For disjoint permutations $\pi=\pi_{1}\pi_{2}\cdots\pi_{m}$ and $\sigma=\sigma_{1}\sigma_{2}\cdots\sigma_{n}$, let $P=\{\pi_{1},\dots,\pi_{m},\sigma_{1},\dots,\sigma_{n}\}$ be the set of letters appearing in $\pi$ and $\sigma$, and let $\rho:P\rightarrow P$ be the map sending the $i$th smallest letter of $P$ to the $i$th largest letter of $P$ for every $i$. By an abuse of notation, let $\rho(\pi)$ denote the permutation $\rho(\pi_{1})\rho(\pi_{2})\cdots\rho(\pi_{m})$ obtained by applying $\rho$ to each letter in $\pi$. Then, let $\hat{\pi}=\rho(\pi^{c})$ and $\hat{\sigma}=\rho(\sigma^{c})$. For example, let $\pi=413$ and $\sigma=25$. Then $P=[5]$, $\pi^{c}=143$, and $\sigma^{c}=52$, and so $\hat{\pi}=523$ and $\hat{\sigma}=14$. Clearly, $\pi$ has the same relative order as $\hat{\pi}$, and similarly with $\sigma$ and $\hat{\sigma}$. It is also easy to see that $\rho(\pi)=\widehat{\pi^{c}}=\hat{\pi}^{c}$ and $\rho(\sigma)=\widehat{\sigma^{c}}=\hat{\sigma}^{c}$. To see that $S(\hat{\pi}^{c},\hat{\sigma}^{c})=S(\pi,\sigma)^{c}$, first let $\tau\in S(\pi,\sigma)$. Then $\tau$ contains both $\pi$ and $\sigma$ as subsequences, and to show that $\tau^{c}\in S(\hat{\pi}^{c},\hat{\sigma}^{c})$, it suffices to show that $\tau^{c}$ contains both $\hat{\pi}^{c}=\rho(\pi)$ and $\hat{\sigma}^{c}=\rho(\sigma)$ as subsequences. However, this follows from the fact that, when taking the complement of $\tau$, the subsequence $\pi$ appearing in $\tau$ is transformed into $\rho(\pi)$, and similarly $\sigma$ turns into $\rho(\sigma)$. The other inclusion follows by the same reasoning, and the equality $S(\pi^{c},\sigma^{c})=S(\hat{\pi},\hat{\sigma})^{c}$ follows directly from $S(\hat{\pi}^{c},\hat{\sigma}^{c})=S(\pi,\sigma)^{c}$ and replacing $\pi$ and $\sigma$ with $\pi^{c}$ and $\sigma^{c}$, respectively. Hence complementation is shuffle-compatibility-preserving. Finally, the equalities $S(\pi^{r},\sigma^{r})=S(\pi,\sigma)^{r}$, $S(\hat{\pi}^{c},\hat{\sigma}^{c})=S(\pi,\sigma)^{c}$, and $S(\pi^{c},\sigma^{c})=S(\hat{\pi},\hat{\sigma})^{c}$ imply $S(\hat{\pi}^{rc},\hat{\sigma}^{rc})=S(\pi,\sigma)^{rc}$ and $S(\pi^{rc},\sigma^{rc})=S(\hat{\pi},\hat{\sigma})^{rc}$. Thus reverse-complementation is shuffle-compatibility-preserving. \end{proof} \begin{cor} \label{c-rcsc} Suppose that $\st_{1}$ and $\st_{2}$ are $r$-equivalent, $c$-equivalent, or $rc$-equivalent statistics. If $\st_{1}$ is shuffle-compatible with shuffle algebra ${\cal A}_{\st_{1}}$, then $\st_{2}$ is also shuffle-compatible with shuffle algebra ${\cal A}_{\st_{2}}$ isomorphic to ${\cal A}_{\st_{1}}$. \end{cor} For example, since $\maj$ and $\comaj$ are $rc$-equivalent, it follows from Theorem \ref{t-majsc} and Corollary \ref{c-rcsc} that $\comaj$ is shuffle-compatible and that its shuffle algebra ${\cal A}_{\comaj}$ is isomorphic to ${\cal A}_{\maj}$. \subsection{A note on Hadamard products} The operation of \textit{Hadamard product} $*$ on formal power series in $t$ is given by \[ \biggl(\sum_{n=0}^{\infty}a_{n}t^{n}\biggr)*\biggl(\sum_{n=0}^{\infty}b_{n}t^{n}\biggr)\coloneqq\sum_{n=0}^{\infty}a_{n}b_{n}t^{n}. \] Many shuffle algebras that we study in this paper can be characterized as subalgebras of various algebras in which the multiplication is the Hadamard product in a variable $t$. In the notation for these algebras, we write $t*$ to indicate that multiplication is the Hadamard product in $t$. 
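To make the definition concrete, here is a minimal Python sketch (ours, with series truncated to finitely many coefficients) illustrating that the Hadamard product acts coefficientwise and that $(1-t)^{-1}=1+t+t^{2}+\cdots$ serves as its identity element.
\begin{verbatim}
def hadamard(a, b):
    # coefficientwise product of truncated series [a0, a1, a2, ...]
    return [x * y for x, y in zip(a, b)]

one = [1, 1, 1, 1, 1]        # 1/(1-t), the Hadamard identity
f   = [1, 3, 9, 27, 81]      # 1/(1-3t)
g   = [1, 2, 4, 8, 16]       # 1/(1-2t)
assert hadamard(one, f) == f
assert hadamard(g, f) == [1, 6, 36, 216, 1296]   # 1/(1-6t)
\end{verbatim}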
For example, $\mathbb{Q}[[t*,q]][x]$ is the algebra of polynomials in $x$ whose coefficients are formal power series in $t$ and $q$, where multiplication is ordinary multiplication in the variables $x$ and $q$ but is the Hadamard product in $t$. We note that the Hadamard product is only used in descriptions of shuffle algebras and in the proof of Lemma \ref{l-monoidlike2}, where $t^m * t^n$ denotes the Hadamard product of $t^m$ and $t^n$. (Here, $t^m$ is the ordinary product of $m$ copies of $t$ and similarly with $t^n$.) All other expressions should be interpreted as using ordinary multiplication. For instance, any expression with an exponent such as $t^k$ or $(1+yt)^k$ is ordinary multiplication, and $(1-tf)^{-1}$ (as in Corollary \ref{c-monoidlike2}) denotes $\sum_{k=0}^\infty t^k f^k$. \section{Quasisymmetric functions and shuffle-compatibility} \label{s-section4} \subsection{The descent set shuffle algebra \texorpdfstring{$\QSym$}{QSym}} A formal power series $f\in\mathbb{Q}[[x_{1},x_{2},\dots]]$ of bounded degree in countably many commuting variables $x_{1},x_{2},\dots$ is called a \textit{quasisymmetric function} if for any positive integers $a_{1},a_{2},\dots,a_{k}$, if $i_{1}<i_{2}<\cdots<i_{k}$ and $j_{1}<j_{2}<\cdots<j_{k}$, then \[ [x_{i_{1}}^{a_{1}}x_{i_{2}}^{a_{2}}\cdots x_{i_{k}}^{a_{k}}]\,f=[x_{j_{1}}^{a_{1}}x_{j_{2}}^{a_{2}}\cdots x_{j_{k}}^{a_{k}}]\,f. \] It is clear that every symmetric function is quasisymmetric, but not every quasisymmetric function is symmetric. For example, $\sum_{i<j<k}x_{i}^{2}x_{j}x_{k}$ is quasisymmetric, but it is not symmetric because $x_{1}^{2}x_{2}x_{3}$ appears as a term yet $x_{1}x_{2}^{2}x_{3}$ does not. Let $L \vDash n$ indicate that $L$ is a composition of $n$, and let $\QSym_{n}$ be the set of quasisymmetric functions homogeneous of degree $n$, which is clearly a vector space. For a composition $L=(L_1,L_2,\dots, L_k)$, the \emph{monomial quasisymmetric function} $M_L$ is defined by \begin{equation*} M_L \coloneqq \sum_{i_1<i_2<\cdots<i_k}x_{i_1}^{L_1}x_{i_2}^{L_2}\cdots x_{i_k}^{L_k}. \end{equation*} It is clear that $\{M_L\}_{L\vDash n}$ is a basis for $\QSym_n$, so for $n\ge1$, $\QSym_n$ has dimension $2^{n-1}$, the number of compositions of $n$. Another important basis for $\QSym_{n}$ (and the most important basis for our purposes) is the basis of \textit{fundamental quasisymmetric functions} $\{F_{L}\}_{L\vDash n}$ given by \[ F_{L}\coloneqq\sum_{\substack{i_{1}\leq i_{2}\leq\cdots\leq i_{n}\\ i_{j}<i_{j+1}\,\mathrm{if}\, j\in\Des(L) } }x_{i_{1}}x_{i_{2}}\cdots x_{i_{n}}. \] It is easy to see that \begin{equation} \label{e-FtoM} F_L = \sum_{\substack{\Des(K)\supseteq \Des(L)\\|K|=|L|}} M_K, \end{equation} so by inclusion-exclusion, $M_K$ can be expressed as a linear combination of the $F_L$. It follows that $\{F_{L}\}_{L\vDash n}$ spans $\QSym_n$, so this set must be a basis for $\QSym_n$ since it has the correct number of elements. The product of two quasisymmetric functions is quasisymmetric, with the product formula for the fundamental basis given by the following theorem, which may be proved using $P$-partitions; see \cite[Exercise 7.93]{Stanley2001}. This theorem may also be derived from Lemma \ref{t-des-sc}. \begin{thm} \label{t-fqsym} Let $c_{J,K}^{L}$ be the number of permutations with descent composition $L$ among the shuffles of a permutation $\pi$ with descent composition $J$ and a permutation $\sigma$ \emph{(}disjoint from $\pi$\emph{)} with descent composition $K$.
Then \begin{equation} F_{J}F_{K}=\sum_{L}c_{J,K}^{L}F_{L}.\label{e-fundshuffle} \end{equation} \end{thm} If $f\in\QSym_{m}$ and $g\in\QSym_{n}$, then $fg\in\QSym_{m+n}$. Thus $\QSym\coloneqq\bigoplus_{n=0}^{\infty}\QSym_{n}$ is a graded $\mathbb{Q}$-algebra called the \textit{algebra of quasisymmetric functions} with coefficients in $\mathbb{Q}$, a subalgebra of $\mathbb{Q}[[x_{1},x_{2},\dots]]$. Motivated by Richard Stanley's theory of $P$-partitions, the first author introduced quasisymmetric functions in \cite{Gessel1984} and developed the basic algebraic properties of $\QSym$. Further properties of $\QSym$ and connections with many topics of study in combinatorics and algebra were developed in the subsequent decades. Basic references include \cite[Section 7.19]{Stanley2001}, \cite[Section 5]{Grinberg2014}, and \cite{Luoto2013}. Observe that Theorem \ref{t-fqsym} implies that $\QSym$ is isomorphic to the shuffle algebra for the descent set with the fundamental basis corresponding to the basis of $\Des$-equivalence classes. \goodbreak \begin{cor}[Shuffle-compatibility of the descent set] \leavevmode \begin{itemize} \item [\normalfont{(a)}] The descent set $\Des$ is shuffle-compatible. \item [\normalfont{(b)}] The linear map on ${\cal A}_{\Des}$ defined by \[ [\pi]_{\Des}\mapsto F_{\Comp(\pi)} \] is a $\mathbb{Q}$-algebra isomorphism from ${\cal A}_{\Des}$ to $\QSym$. \end{itemize} \end{cor} Now, let $\st$ be a descent statistic. Then not only does $\st$ induce an equivalence relation on permutations, but it also induces an equivalence relation on compositions because permutations with the same descent composition are necessarily $\st$-equivalent. We establish a necessary and sufficient condition for the shuffle-compatibility of a descent statistic, which will also imply that the shuffle algebra of any shuffle-compatible descent statistic is isomorphic to a quotient of $\QSym$. \begin{thm} \label{t-scqsym} A descent statistic $\st$ is shuffle-compatible if and only if there exists a $\mathbb{Q}$-algebra homomorphism $\phi_{\st}\colon\QSym\rightarrow A$, where $A$ is a $\mathbb{Q}$-algebra with basis $\{u_{\alpha}\}$ indexed by $\st$-equivalence classes $\alpha$ of compositions, such that $\phi_{\st}(F_{L})=u_{\alpha}$ whenever $L\in\alpha$. In this case, the linear map on ${\cal A}_{\st}$ defined by \[ [\pi]_{\st}\mapsto u_{\alpha}, \] where $\Comp(\pi)\in\alpha$, is a $\mathbb{Q}$-algebra isomorphism from ${\cal A}_{\st}$ to $A$. \end{thm} \begin{proof} Suppose that $\st$ is a shuffle-compatible descent statistic. Let $A=\mathcal{A}_{\st}$ be the shuffle algebra of $\st$, and let $u_{\alpha} = [\pi]_{\st}$ for any $\pi$ satisfying $\Comp(\pi)\in\alpha$, so that \[ u_{\beta}u_{\gamma}=\sum_{\alpha}c_{\beta,\gamma}^{\alpha}u_{\alpha} \] where $c_{\beta,\gamma}^{\alpha}$ is the number of permutations with descent composition in $\alpha$ that are obtained as a shuffle of a permutation $\pi$ with descent composition in $\beta$ and a permutation $\sigma$ (disjoint from $\pi$) with descent composition in $\gamma$. Observe that $c_{\beta,\gamma}^{\alpha}=\sum_{L\in\alpha}c_{J,K}^{L}$ for any choice of $J\in\beta$ and $K\in\gamma$, where as before $c_{J,K}^{L}$ is the number of permutations with descent composition $L$ that are obtained as a shuffle of a permutation $\pi$ with descent composition $J$ and a permutation $\sigma$ (disjoint from $\pi$) with descent composition $K$. Define the linear map $\phi_{\st}\colon\QSym\rightarrow A$ by $\phi_{\st}(F_{L})=u_{\alpha}$ for $L\in\alpha$.
Then any $J\in\beta$ and $K\in\gamma$ satisfy \begin{align*} \phi_{\st}(F_{J}F_{K}) & =\phi_{\st}\Big(\sum_{L}c_{J,K}^{L}F_{L}\Big)\\ & =\sum_{L}c_{J,K}^{L}\phi_{\st}(F_{L})\\ & =\sum_{\alpha}\sum_{L\in\alpha}c_{J,K}^{L}u_{\alpha}\\ & =\sum_{\alpha}c_{\beta,\gamma}^{\alpha}u_{\alpha}\\ & =u_{\beta}u_{\gamma}\\ & =\phi_{\st}(F_{J})\phi_{\st}(F_{K}), \end{align*} so $\phi_{\st}$ is a $\mathbb{Q}$-algebra homomorphism, thus completing one direction of the proof. The converse follows directly from Theorem \ref{t-quots}.\end{proof} It is immediate from Theorem \ref{t-scqsym} that when $\st$ is shuffle-compatible, its shuffle algebra is isomorphic to $\QSym/\ker(\phi_{\st})$. \begin{cor} The shuffle algebra of every shuffle-compatible descent statistic is isomorphic to a quotient algebra of $\QSym$. \end{cor} \subsection{Shuffle-compatibility of \texorpdfstring{$\des$}{des} and \texorpdfstring{$(\des,\maj)$}{(des, maj)}} \label{s-scdesmaj} We now use Theorem \ref{t-scqsym} to characterize the shuffle algebras of the two other shuffle-compatible statistics mentioned in the introduction: the descent number $\des$ and the pair $(\des,\maj)$. For the latter, we will actually characterize the shuffle algebra of $(\des,\comaj)$, but this is sufficient by Theorem \ref{t-esc} since $(\des,\maj)$ and $(\des,\comaj)$ are equivalent statistics. We note that these characterizations can be derived from Propositions 8.3 and 12.6 of Stanley \cite{Stanley1972} in a related way, though we emphasize the connection with quasisymmetric functions. We will first prove the result for $(\des,\comaj)$ and then derive from it the result for $\des$ using Theorem \ref{t-quots}. We denote the set of non-negative integers by $\mathbb{N}$. \begin{thm}[Shuffle-compatibility of $(\des,\comaj)$] \label{t-descomajsc} \leavevmode \begin{itemize} \item [\normalfont{(a)}] The ordered pair $(\des,\comaj)$ is shuffle-compatible. \item [\normalfont{(b)}] The linear map on ${\cal A}_{(\des,\comaj)}$ defined by \[ [\pi]_{(\des,\comaj)} \mapsto q^{\comaj(\pi)}\qbinom{p-\des(\pi)+\left|\pi\right|-1}{\left|\pi\right|}x^{\left|\pi\right|} \] is a $\mathbb{Q}$-algebra isomorphism from ${\cal A}_{(\des,\comaj)}$ to the span of \[ \{1\}\bigcup\left\{ q^{k}\qbinom{p-j+n-1}{n}x^{n}\right\} _{n\geq1,\:0\leq j\leq n-1,\:{j+1 \choose 2}\leq k\leq nj-{j+1 \choose 2}}, \] a subalgebra of $\mathbb{Q}[q,x]^{\mathbb{N}}$, the algebra of functions $\mathbb{N}\rightarrow\mathbb{Q}[q,x]$ in the non-negative integer variable $p$. \item [\normalfont{(c)}] The linear map on ${\cal A}_{(\des,\comaj)}$ defined by \[ [\pi]_{(\des,\comaj)}\mapsto\begin{cases} \displaystyle{\frac{q^{\comaj(\pi)}t^{\des(\pi)+1}}{(1-t)(1-qt)\cdots(1-q^{\left|\pi\right|}t)}x^{\left|\pi\right|}}, & \text{if }\left|\pi\right|\geq1,\\ 1/(1-t), & \text{if }\left|\pi\right|=0, \end{cases} \] is a $\mathbb{Q}$-algebra isomorphism from ${\cal A}_{(\des,\comaj)}$ to the span of \[ \left\{ \frac{1}{1-t}\right\} \bigcup\left\{ \frac{q^{k}t^{j+1}}{(1-t)(1-qt)\cdots(1-q^{n}t)}x^{n}\right\} _{n\geq1,\:0\leq j\leq n-1\:,{j+1 \choose 2}\leq k\leq nj-{j+1 \choose 2}}, \] a subalgebra of $\mathbb{Q}[[t*,q]][x]$. \item [\normalfont{(d)}] For $n\geq1$, the $n$th homogeneous component of ${\cal A}_{(\des,\comaj)}$ has dimension ${n \choose 3}+n$. \end{itemize} \end{thm} \begin{proof} We first prove parts (a) and (b). 
For $p$ a positive integer and $f$ a quasisymmetric function, let \begin{equation*} \phi^{(p)}_{(\comaj,\des)}(f) = f(x,qx,\dots, q^{p-1}x) \end{equation*} and let $\phi^{(0)}_{(\comaj,\des)}(f)$ be the constant term in $f$. It is clear that $\phi^{(p)}_{(\comaj,\des)}$ is a homomorphism from $\QSym$ to $\mathbb{Q}[q,x]$, so the map that takes $f$ to the function $p\mapsto f(x,qx,\dots, q^{p-1}x)$ is a homomorphism from $\QSym$ to $\mathbb{Q}[q,x]^\mathbb{N}$. If $L$ is a composition of $n\ge1$, then \begin{align*} F_{L}(x,qx,\dots,q^{p-1}x) & =\sum_{\substack{0\leq i_{1}\leq\cdots\leq i_{n}\leq p-1\\ i_{j}<i_{j+1}\,\mathrm{if}\,j\in\Des(L) } }q^{i_{1}+\cdots+i_{n}}x^n\\ & =q^{e(L)}\sum_{0\leq r_{1}\leq\cdots\leq r_{n}\leq p-1-\des(L)}q^{r_{1}+\cdots+r_{n}}x^n, \end{align*} where \[ r_{j}=i_{j}-\left|\left\{\, k : k\in \Des(L)\text{ and } k<j\,\right\} \right| \] and \[ e(L)=\sum_{j=1}^{n}\left|\left\{\, k: k\in\Des(L)\text{ and } k<j\,\right\} \right|=\comaj(L). \] Since \[ \sum_{0\leq r_{1}\leq\cdots\leq r_{n}\leq p-1-\des(L)}q^{r_{1}+\cdots+r_{n}}=\qbinom{p-\des(L)+n-1}{n} \] \cite[Proposition 1.7.3]{Stanley2011}, it follows that \[ \phi^{(p)}_{(\comaj,\des)}(F_L) =q^{\comaj(L)}\qbinom{p-\des(L)+n-1}{n}x^n, \] and for $n=0$ we have $\phi^{(p)}_{(\comaj,\des)}(F_\varnothing)=1$. Furthermore, it follows from the formula between equations (1.86) and (1.87) in \cite{Stanley2011} (a form of the $q$-binomial theorem) that \begin{align} \sum_{p=0}^{\infty}\qbinom{p-\des(L)+n-1}{n}t^{p} & =\sum_{p=0}^{\infty}\qbinom{p+n}{n}t^{p+\des(L)+1}\nonumber \\ & =\frac{t^{\des(L)+1}}{(1-t)(1-qt)\cdots(1-q^{n}t)}.\label{e-qbinom} \end{align} Equation \eqref{e-qbinom} implies that the functions $q^{k}\qbinom{p-j+n-1}{n}x^{n}$ are linearly independent as their generating functions are clearly linearly independent. Then parts (a) and (b) follow from Theorem \ref{t-scqsym} and Proposition \ref{p-comajvalues}. To prove (c), we define the map $\psi\colon\mathbb{Q}[q,x]^{\mathbb{N}}\rightarrow\mathbb{Q}[q,x][[t*]]$ by the formula \[ \psi(f)=\sum_{p=0}^{\infty}f(p)t^{p}. \] Then $\psi$ is clearly an isomorphism and by (\ref{e-qbinom}), the images of the basis elements in (b) are those given in (c), which are in $\mathbb{Q}[[t*,q]][x]$. For $n\geq1$, the number of $(\des,\comaj)$-equivalence classes for $n$-permutations is \[ \sum_{j=0}^{n-1}\left(\left(nj-{j+1 \choose 2}\right)-{j+1 \choose 2}+1\right)=\sum_{j=0}^{n-1}\left(nj-2{j+1 \choose 2}+1\right), \] which can be shown to be equal to ${n \choose 3}+n$ by a routine argument. This proves (d). \end{proof} \begin{thm}[Shuffle-compatibility of the descent number] \label{t-dessc} \leavevmode \begin{itemize} \item [\normalfont{(a)}] The descent number $\des$ is shuffle-compatible. \item [\normalfont{(b)}] The linear map on ${\cal A}_{\des}$ defined by \[ [\pi]_{\des} \mapsto {p-\des(\pi)+\left|\pi\right|-1 \choose \left|\pi\right|}x^{\left|\pi\right|} \] is a $\mathbb{Q}$-algebra isomorphism from ${\cal A}_{\des}$ to the span of \[ \{1\}\bigcup\left\{ {p-j+n-1 \choose n}x^{n}\right\} _{n\geq1,\:0\leq j\leq n-1}, \] a subalgebra of $\mathbb{Q}[p,x]$. \item [\normalfont{(c)}] ${\cal A}_{\des}$ is isomorphic to the span of \[ \{1\}\cup\{p^{j}x^{n}\}_{n\geq1,\:1\leq j\leq n}, \] a subalgebra of $\mathbb{Q}[p,x]$.
\item [\normalfont{(d)}] The linear map on ${\cal A}_{\des}$ defined by \[ [\pi]_{\des}\mapsto\begin{cases} \displaystyle{\frac{t^{\des(\pi)+1}}{(1-t)^{\left|\pi\right|+1}}x^{\left|\pi\right|}}, & \text{if }\left|\pi\right|\geq1,\\ 1/(1-t), & \text{if }\left|\pi\right|=0, \end{cases} \] is a $\mathbb{Q}$-algebra isomorphism from ${\cal A}_{\des}$ to the span of \[ \left\{ \frac{1}{1-t}\right\} \bigcup\left\{ \frac{t^{j+1}}{(1-t)^{n+1}}x^{n}\right\} _{n\geq1,\:0\leq j\leq n-1}, \] a subalgebra of $\mathbb{Q}[[t*]][x]$. \item [\normalfont{(e)}] For $n\geq1$, the $n$th homogeneous component of ${\cal A}_{\des}$ has dimension $n$. \end{itemize} \end{thm} \begin{proof} Applying Theorem \ref{t-quots} to Theorem \ref{t-descomajsc} with the homomorphism that takes $q$ to~1, together with the observation that polynomial functions in characteristic zero may be identified with polynomials, yields (a), (b), and (d). Parts (c) and (e) follow easily from (b). \end{proof} \subsection{Shuffle-compatibility of the peak set and peak number} In \cite{Stembridge1997}, Stembridge defined a subalgebra $\Pi$ of $\QSym$ called the ``algebra of peaks'' using enriched $P$-partitions, a variant of Stanley's $P$-partitions. Here, we observe that Stembridge's algebra $\Pi$ is isomorphic to the shuffle algebra ${\cal A}_{\Pk}$ of the peak set $\Pk$, thus showing that $\Pk$ is shuffle-compatible, and we use further results of Stembridge on enriched $P$-partitions to show that the peak number $\pk$ is shuffle-compatible and to characterize its shuffle algebra. An enriched $P$-partition is a map defined for a poset $P$, but for our purposes, we only need to consider the case where $P$ is a chain. Then the notion of an enriched $P$-partition is equivalent to that of an ``enriched $\pi$-partition'' for a permutation $\pi$, which we define below.\footnote{We note that, in the notation of \cite{Stembridge1997}, we are setting $A=\mathbb{P}$, $\gamma = \pi$, and $P=([n],<)$.} Let $\mathbb{P}^{\prime}$ denote the set of nonzero integers with the following total ordering: \[ -1\prec+1\prec-2\prec+2\prec-3\prec+3\prec\cdots. \] For $\pi\in\mathfrak{P}_{n},$ an \textit{enriched $\pi$-partition }is a map $f\colon[n]\rightarrow\mathbb{P}^{\prime}$ such that for all $i<j$ in $[n]$, the following hold: \begin{enumerate} \item $f(i)\preceq f(j)$; \item $f(i)=f(j)>0$ implies $\pi(i)<\pi(j)$; \item $f(i)=f(j)<0$ implies $\pi(i)>\pi(j)$. \end{enumerate} Let ${\cal E}(\pi)$ denote the set of enriched $\pi$-partitions, and let \[ \Gamma(\pi)\coloneqq\sum_{f\in{\cal E}(\pi)}x_{\left|f(1)\right|}x_{\left|f(2)\right|}\cdots x_{\left|f(n)\right|} \] be the generating function for enriched $\pi$-partitions in which both $k$ and $-k$ receive the same weight $x_{k}$. For example, let $\pi=3125674$. Then the map $f$ given by $f(1)=-1$, $f(2)=-1$, $f(3)=-3$, $f(4)=3$, $f(5)=3$, $f(6)=-7$, $f(7)=9$ is an enriched $\pi$-partition, which contributes $x_{1}^{2}x_{3}^{3}x_{7}x_{9}$ to $\Gamma(\pi)$. It is clear that $\Gamma(\pi)$ is a quasisymmetric function homogeneous of degree $n$ which depends only on the descent set of $\pi$, but a stronger statement is true: $\Gamma(\pi)$ depends only on the peak set of $\pi$ \cite[Proposition 2.2]{Stembridge1997}. Hence, it makes sense to define the quasisymmetric function \[ K_{n,\Lambda}\coloneqq\Gamma(\pi) \] where $\pi$ is any $n$-permutation with $\Pk(\pi)=\Lambda$. These peak quasisymmetric functions $K_{n,\Lambda}$ are linearly independent over $\mathbb{Q}$ \cite[Theorem 3.1(a)]{Stembridge1997}. 
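This fact is easy to check by brute force in small cases. The following Python sketch (ours, purely a sanity check, with all names our own) enumerates enriched $\pi$-partitions with values in $\{-k,\dots,-1,+1,\dots,+k\}$ and confirms that $2143$ and $1243$---which have different descent sets but the same peak set $\{3\}$---yield the same multiset of monomials.
\begin{verbatim}
from itertools import product
from collections import Counter

def order_key(v):
    # the total order -1 < +1 < -2 < +2 < ... on nonzero integers
    return (abs(v), 0 if v < 0 else 1)

def gamma_terms(pi, k):
    # multiset of monomials x_{|f(1)|}...x_{|f(n)|}, each recorded as a
    # sorted tuple of subscripts, over enriched pi-partitions f with
    # values among -k,...,-1,+1,...,+k
    n = len(pi)
    values = [v for v in range(-k, k + 1) if v != 0]
    terms = Counter()
    for f in product(values, repeat=n):
        if all(order_key(f[i]) <= order_key(f[j])
               and (f[i] != f[j] or (f[i] > 0) == (pi[i] < pi[j]))
               for i in range(n) for j in range(i + 1, n)):
            terms[tuple(sorted(abs(v) for v in f))] += 1
    return terms

assert gamma_terms((2, 1, 4, 3), 3) == gamma_terms((1, 2, 4, 3), 3)
\end{verbatim}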
Let $F_{n}$ be the $n$th Fibonacci number defined by $F_{1}=F_{2}=1$ and $F_{n}=F_{n-1}+F_{n-2}$ for $n\geq3$. It is easy to see that, for $n\geq1$, there are exactly $F_{n}$ peak sets among all $n$-permutations, so the $\mathbb{Q}$-vector space $\Pi_{n}$ spanned by the $K_{n,\Lambda}$ has dimension $F_{n}$ with basis elements corresponding to peak sets of $n$-permutations. The peak quasisymmetric functions $K_{n,\Lambda}$ multiply by the rule \begin{equation} K_{m,\Pk(\pi)}K_{n,\Pk(\sigma)}=\sum_{\tau\in S(\pi,\sigma)}K_{m+n,\Pk(\tau)}\label{e-pkmult} \end{equation} \cite[Equation (3.1)]{Stembridge1997}, so $\Pi\coloneqq\bigoplus_{n=0}^{\infty}\Pi_{n}$ is a $\mathbb{Q}$-algebra, the \textit{algebra of peaks}. Then the shuffle-compatibility of $\Pk$ and our characterization of the shuffle algebra ${\cal A}_{\Pk}$ is immediate from (\ref{e-pkmult}). \begin{thm}[Shuffle-compatibility of the peak set] \leavevmode \begin{itemize} \item [\normalfont{(a)}] The peak set $\Pk$ is shuffle-compatible. \item [\normalfont{(b)}] The linear map on ${\cal A}_{\Pk}$ defined by \[ [\pi]_{\Pk}\mapsto K_{\left|\pi\right|,\Pk(\pi)} \] is a $\mathbb{Q}$-algebra isomorphism from ${\cal A}_{\Pk}$ to $\Pi$. \end{itemize} \end{thm} By Corollary \ref{c-rcsc}, the valley set $\Val$ is also shuffle-compatible and ${\cal A}_{\Val}$ is isomorphic to $\Pi$. Note that (\ref{e-pkmult}) implies that the map $F_{L}\mapsto K_{n,\Pk(L)}$ is a $\mathbb{Q}$-algebra homomorphism from $\QSym$ to itself, a fact that we shall use in the proof of the next theorem, which is the analogous result for the peak number (and by Lemma \ref{l-pkvalruns}, Corollary \ref{c-rcsc}, and Theorem \ref{t-esc}, the valley number and exterior peak number as well). \begin{thm}[Shuffle-compatibility of the peak number] \label{t-pksc} \leavevmode \begin{itemize} \item [\normalfont{(a)}] The peak number $\pk$ is shuffle-compatible. \item [\normalfont{(b)}] The linear map on ${\cal A}_{\pk}$ defined by \[ [\pi]_{\pk}\mapsto\begin{cases} \displaystyle{\frac{2^{2\pk(\pi)+1}t^{\pk(\pi)+1}(1+t)^{\left|\pi\right|-2\pk(\pi)-1}}{(1-t)^{\left|\pi\right|+1}}x^{\left|\pi\right|}}, & \text{if }\left|\pi\right|\geq1,\\ 1/(1-t), & \text{if }\left|\pi\right|=0, \end{cases} \] is a $\mathbb{Q}$-algebra isomorphism from ${\cal A}_{\pk}$ to the span of \[ \left\{ \frac{1}{1-t}\right\} \bigcup\left\{ \frac{2^{2j+1}t^{j+1}(1+t)^{n-2j-1}}{(1-t)^{n+1}}x^{n}\right\} _{n\geq1,\:0\leq j\leq\left\lfloor \frac{n-1}{2}\right\rfloor }, \] a subalgebra of $\mathbb{Q}[[t*]][x]$. \item [\normalfont{(c)}] The $\pk$ shuffle algebra ${\cal A}_{\pk}$ is isomorphic to the span of \[ \{1\}\cup\{p^{j}x^{n}\}_{n\geq1,\:1\leq j\leq n,\: j\equiv n\,(\mathrm{mod}\,2)}, \] a subalgebra of $\mathbb{Q}[p,x]$. \item [\normalfont{(d)}] For $n\geq1$, the $n$th homogeneous component of ${\cal A}_{\pk}$ has dimension $\left\lfloor (n+1)/2\right\rfloor $. \end{itemize} \end{thm} The proof below implies parts (a), (b), and (d). We postpone the proof of part (c) until Section \ref{s-altdes}. \begin{proof} For a quasisymmetric function $f$, let $f(1^{k})$ denote $f$ evaluated at $x_i=1$ for $1\leq i\leq k$ and $x_i=0$ for $i>k$. Define $\phi_{\pk}\colon\QSym\rightarrow\mathbb{Q}[[t*]][x]$ by the formula \[ \phi_{\pk}(F_{L})=\sum_{k=0}^{\infty}K_{n,\Pk(L)}(1^{k})t^{k}x^{n} \] for $L\vDash n>0$ and $\phi_{\pk}(F_\varnothing)=1/(1-t)$.
Then $\phi_{\pk}$ is the composition of the map $F_{L}\mapsto K_{n,\Pk(L)}$ with the map $f\mapsto\sum_{k=0}^{\infty}f(1^{k})t^{k}x^{n}$ (where $f$ is homogeneous of degree $n$); since both of these maps are $\mathbb{Q}$-algebra homomorphisms, it follows that $\phi_{\pk}$ is a $\mathbb{Q}$-algebra homomorphism as well. Stembridge \cite[Theorem 4.1]{Stembridge1997} showed that \[ \sum_{k=0}^{\infty}K_{n,\Pk(L)}(1^{k})t^{k}=\frac{2^{2\pk(L)+1}t^{\pk(L)+1}(1+t)^{n-2\pk(L)-1}}{(1-t)^{n+1}}, \] so in fact \[ \phi_{\pk}(F_{L})=\frac{2^{2\pk(L)+1}t^{\pk(L)+1}(1+t)^{n-2\pk(L)-1}}{(1-t)^{n+1}}x^{n}. \] We know from Proposition \ref{p-pkdesvalues} that for an $n$-permutation $\pi$, the possible values of $\pk(\pi)$ range from 0 to $\left\lfloor (n-1)/2\right\rfloor $. Since the elements $2^{2j+1}t^{j+1}(1+t)^{n-2j-1}x^{n}/(1-t)^{n+1}$ are linearly independent, the result follows from Theorem \ref{t-scqsym}. \end{proof} An alternative proof of Theorem \ref{t-pksc} can be given using Theorems \ref{t-quots} and \ref{t-pkdessc}. \subsection{Shuffle-compatibility of the left peak set and left peak number} Motivated by Stembridge's theory of enriched $P$-partitions and the study of peak algebras \cite{Nyman2003}, Petersen \cite{Petersen2006,Petersen2007} defined another variant of $P$-partitions called ``left enriched $P$-partitions'' that tells a parallel story for left peaks. As before, we restrict our attention to when $P$ is a chain. Let $\mathbb{P}^{(\ell)}$ denote the set of integers with the following total ordering: \[ 0\prec-1\prec+1\prec-2\prec+2\prec-3\prec+3\prec\cdots. \] Then for $\pi\in\mathfrak{P}_{n},$ a \textit{left enriched $\pi$-partition }is a map $f\colon[n]\rightarrow\mathbb{P}^{(\ell)}$ such that for all $i<j$ in $[n]$, the following hold: \begin{enumerate} \item $f(i)\preceq f(j)$; \item $f(i)=f(j)\geq0$ implies $\pi(i)<\pi(j)$; \item $f(i)=f(j)<0$ implies $\pi(i)>\pi(j)$. \end{enumerate} Let ${\cal E}^{(\ell)}(\pi)$ denote the set of left enriched $\pi$-partitions, and let \[ \Gamma^{(\ell)}(\pi)\coloneqq\sum_{f\in{\cal E}^{(\ell)}(\pi)}x_{\left|f(1)\right|}x_{\left|f(2)\right|}\cdots x_{\left|f(n)\right|}. \] Just as the generating function $\Gamma(\pi)$ for enriched $\pi$-partitions depends only on the peak set of $\pi$, Petersen proved that $\Gamma^{(\ell)}(\pi)$ depends only on the left peak set \cite[Corollary 6.5]{Petersen2007}, so we can define \[ K_{n,\Lambda}^{(\ell)}\coloneqq\Gamma^{(\ell)}(\pi) \] for any $\pi\in\mathfrak{P}_{n}$ with $\Lpk(\pi)=\Lambda$. Unlike the peak functions $K_{n,\Lambda}$, the $K_{n,\Lambda}^{(\ell)}$ are not quasisymmetric functions but rather type B quasisymmetric functions.\footnote{We omit the definition of a type B quasisymmetric function, as they play no further role in this paper, but we refer the reader to \cite{Chow2001}.} Petersen briefly mentions that the span of the left peak functions $K_{n,\Lambda}^{(\ell)}$ forms a graded subalgebra $\Pi^{(\ell)}$ of the algebra of type B quasisymmetric functions, called the \textit{algebra of left peaks} \cite[p. 604]{Petersen2007}.\footnote{Petersen actually calls this algebra the ``left algebra of peaks'', but the ``algebra of left peaks'' seems to us a more natural name.} The $n$th homogeneous component of $\Pi^{(\ell)}$ has dimension $F_{n+1}$, which is easily seen to be the number of left peak sets among $n$-permutations. 
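As a sanity check on these Fibonacci counts, one can enumerate peak sets and left peak sets by brute force; in the following Python sketch (ours, not part of Petersen's development), a left peak of $\pi$ is taken to be a position $i\in[n-1]$ with $\pi_{i-1}<\pi_{i}>\pi_{i+1}$, under the convention $\pi_{0}=0$.
\begin{verbatim}
from itertools import permutations

def peak_set(p):
    n = len(p)
    return tuple(i for i in range(2, n)          # positions 2..n-1
                 if p[i-2] < p[i-1] > p[i])

def left_peak_set(p):
    n = len(p)
    return tuple(i for i in range(1, n)          # positions 1..n-1
                 if (i == 1 or p[i-2] < p[i-1]) and p[i-1] > p[i])

fib = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]          # F_0, F_1, F_2, ...
for n in range(1, 8):
    perms = list(permutations(range(1, n + 1)))
    assert len({peak_set(p) for p in perms}) == fib[n]           # F_n
    assert len({left_peak_set(p) for p in perms}) == fib[n + 1]  # F_{n+1}
\end{verbatim}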
Petersen does not explicitly state a multiplication rule for the $K_{n,\Lambda}^{(\ell)}$, but it follows from the fundamental lemma of left enriched $P$-partitions \cite[Lemma 4.2]{Petersen2007} that the multiplication is given by \[ K_{m,\Lpk(\pi)}^{(\ell)}K_{n,\Lpk(\sigma)}^{(\ell)}=\sum_{\tau\in S(\pi,\sigma)}K_{m+n,\Lpk(\tau)}^{(\ell)}, \] which implies the shuffle-compatibility of the left peak set (and by Corollary \ref{c-rcsc}, the right peak set as well). \begin{thm}[Shuffle-compatibility of the left peak set] \label{t-Lpksc} \leavevmode \begin{itemize} \item [\normalfont{(a)}] The left peak set $\Lpk$ is shuffle-compatible. \item [\normalfont{(b)}] The linear map on ${\cal A}_{\Lpk}$ defined by \[ [\pi]_{\Lpk}\mapsto K_{\left|\pi\right|,\Lpk(\pi)}^{(\ell)} \] is a $\mathbb{Q}$-algebra isomorphism from ${\cal A}_{\Lpk}$ to $\Pi^{(\ell)}$. \end{itemize} \end{thm} Although Petersen was the first to explicitly construct the algebra of left peaks, Theorem \ref{t-Lpksc} also follows from the work of Aguiar, Bergeron, and Nyman, who constructed the coalgebra dual to the algebra of left peaks \cite[Proposition 8.3 and Remark 8.7.3]{Aguiar2004}. We will extensively study coalgebras dual to shuffle algebras in Section \ref{s-section5}. Petersen's work can also be used (in conjunction with Proposition \ref{p-lpkdesvalues} and Theorem \ref{t-scqsym}) to prove the shuffle-compatibility of the left peak number. The proof is similar to the proof of Theorem \ref{t-pksc}, but we use the identity \[ \sum_{p=0}^{\infty}K_{n,\Lpk(L)}^{(\ell)}(1^{p})t^{p}=\frac{2^{2\lpk(L)}t^{\lpk(L)}(1+t)^{n-2\lpk(L)}}{(1-t)^{n+1}} \] \cite[Theorem 4.6]{Petersen2007}. Alternatively, Theorems \ref{t-quots} and \ref{t-lpkdessc} can be used to produce a different proof. \begin{thm}[Shuffle-compatibility of the left peak number] \label{t-lpksc} \leavevmode \begin{itemize} \item [\normalfont{(a)}] The left peak number $\lpk$ is shuffle-compatible. \item [\normalfont{(b)}] The linear map on ${\cal A}_{\lpk}$ defined by \[ [\pi]_{\lpk}\mapsto\begin{cases} {\displaystyle \frac{2^{2\lpk(\pi)}t^{\lpk(\pi)}(1+t)^{\left|\pi\right|-2\lpk(\pi)}}{(1-t)^{\left|\pi\right|+1}}x^{\left|\pi\right|}}, & \text{if }\left|\pi\right|\geq1,\\ 1/(1-t), & \text{if }\left|\pi\right|=0, \end{cases} \] is a $\mathbb{Q}$-algebra isomorphism from ${\cal A}_{\lpk}$ to the span of \[ \left\{ \frac{1}{1-t}\right\} \bigcup\left\{ \frac{2^{2j}t^{j}(1+t)^{n-2j}}{(1-t)^{n+1}}x^{n}\right\} _{n\geq1,\:0\leq j\leq\left\lfloor n/2\right\rfloor }, \] a subalgebra of $\mathbb{Q}[[t*]][x]$. \item [\normalfont{(c)}] The $n$th homogeneous component of ${\cal A}_{\lpk}$ has dimension $\left\lfloor n/2\right\rfloor +1$. \end{itemize} \end{thm} By Corollary \ref{c-rcsc}, the right peak number (or the number of long runs; see Lemma \ref{l-pkvalruns}(d)) is also shuffle-compatible. \section{Noncommutative symmetric functions and shuffle-compatibility} \label{s-section5} \subsection{Algebras, coalgebras, and graded duals} In this section, we introduce another criterion for shuffle-compatibility that will be in a sense ``dual'' to the criterion in Theorem \ref{t-scqsym}. For this, we shall need the notion dual to an algebra, which requires the following equivalent definition of an algebra. Let $R$ be a commutative ring.
An $R$-\textit{algebra} $A$ is an $R$-module with an $R$-linear map $\mu\colon A\otimes A\rightarrow A$ such that the following diagram commutes: \noindent \begin{center} $\begin{CD} A \otimes A \otimes A @>\id \otimes \mu >> A \otimes A\\ @V \mu \otimes \id VV @VV \mu V\\ A \otimes A @>> \phantom{bb} \mu \phantom{bb} > A \end{CD}$ \par\end{center} \noindent The map $\mu$ is called a \textit{multiplication}.% \footnote{The multiplication map $\mu$ satisfies $\mu(a\otimes b)=ab$ under the original definition of an algebra; from this, it is clear why $\mu$ is called ``multiplication''.% } The notion dual to an algebra is a coalgebra, defined as follows. An $R$-\textit{coalgebra} $C$ is an $R$-module with an $R$-linear map $\Delta\colon C\rightarrow C\otimes C$ such that the following diagram commutes: \begin{center} $\begin{CD} C \otimes C \otimes C @<\id \otimes \Delta << C \otimes C\\ @A \Delta \otimes \id AA @AA \Delta A\\ C \otimes C @<< \phantom{bb} \Delta \phantom{bb} < C \end{CD}$ \par\end{center} \noindent Observe that this diagram is essentially the diagram in the definition of an algebra, but with arrows reversed. The map $\Delta$ is called a \textit{comultiplication}.% \footnote{\noindent Typically, the definition of an algebra requires an additional linear map called a ``unit'' which satisfies a certain commutative diagram, and the definition of a coalgebra requires the dual concept of a ``counit'', but these will not be necessary for our work.} If an $R$-module $A$ is simultaneously an $R$-algebra and an $R$-coalgebra such that its comultiplication map is an $R$-algebra homomorphism, then we call $A$ an $R$-\textit{bialgebra}. Suppose now that $R$ is a field and that $V=\bigoplus_{n\geq0}V_{n}$ is a graded $R$-vector space of finite type, that is, each component $V_{n}$ is finite-dimensional. Let $V^{o}$ denote the \textit{graded dual} $V^{o}\coloneqq\bigoplus_{n\geq0}V_{n}^{*}$, which is contained inside the dual space $V^{*}$ of $V$. We say that a linear map $\phi\colon V\rightarrow W$ is \textit{graded} if, for every $n\geq0$, $\phi(V_{n})$ is contained inside the $n$th homogeneous component of $W$. Every graded linear map $\phi\colon V\rightarrow W$ induces a graded linear map $\phi^{o}\colon W^{o}\rightarrow V^{o}$ given by \[ \phi^{o}(f)(v)=f(\phi(v)) \] for $f\in W^{o}$ and $v\in V$. In particular, if $A$ is a graded $R$-algebra---meaning that its vector space and multiplication are graded---and is of finite type, then by reversing the arrows in the commutative diagram, we see that $A^{o}$ has the structure of a graded $R$-coalgebra. In fact, if $A$ has basis $\{a_{i}\}$ with structure constants $\{c_{j,k}^{i}\}$, i.e., \[ a_{j}a_{k}=\sum_{i}c_{j,k}^{i}a_{i}, \] then the $\{c_{j,k}^{i}\}$ are also the structure constants for the comultiplication of the dual basis $\{f_{i}\}$ in $A^{o}$: \[ \Delta(f_{i})=\sum_{j,k}c_{j,k}^{i}f_{j}\otimes f_{k}. \] Similarly, the graded dual of a graded $R$-coalgebra is a graded $R$-algebra, with the same correspondence of structure constants. If $\phi$ is an $R$-algebra homomorphism, then $\phi^{o}$ is an $R$-coalgebra homomorphism, and vice versa. \subsection{Noncommutative symmetric functions} The graded dual of $\QSym$ is the coalgebra of noncommutative symmetric functions, which also has an algebra structure. We begin by defining the algebra of noncommutative symmetric functions before introducing the comultiplication. 
Let $\mathbb{Q}\langle\langle X_{1},X_{2},\dots\rangle\rangle$ be the $\mathbb{Q}$-algebra of formal power series in countably many noncommuting variables $X_{1},X_{2},\dots$. Consider the elements \[ \mathbf{h}_{n}\coloneqq\sum_{i_{1}\leq\cdots\leq i_{n}}X_{i_{1}}X_{i_{2}}\cdots X_{i_{n}} \] of $\mathbb{Q}\langle\langle X_{1},X_{2},\dots\rangle\rangle$, with $\mathbf{h}_{0}=1$, which are noncommutative versions of the complete symmetric functions $h_{n}$. Note that $\mathbf{h}_{n}$ is the noncommutative generating function for weakly increasing words of length $n$ on the alphabet $\mathbb{P}$ of positive integers. For example, the weakly increasing word $13449$ is encoded by $X_{1}X_{3}X_{4}^{2}X_{9}$, which appears as a term in $\mathbf{h}_{5}$. Given a composition $L=(L_{1},\dots,L_{k})$, we let \begin{equation} \label{e-hL-def} \mathbf{h}_{L}\coloneqq\mathbf{h}_{L_{1}}\cdots\mathbf{h}_{L_{k}}. \end{equation} Equivalently, \[ \mathbf{h}_{L}=\sum_{i_{1},\dots,i_{n}}X_{i_{1}}X_{i_{2}}\cdots X_{i_{n}} \] where the sum is over all $i_{1},\dots,i_{n}$ satisfying \[ \underset{L_{1}}{\underbrace{i_{1}\leq\cdots\leq i_{L_{1}}}},\underset{L_{2}}{\underbrace{i_{L_{1}+1}\leq\cdots\leq i_{L_{1}+L_{2}}}},\dots,\underset{L_{k}}{\underbrace{i_{L_{1}+\cdots+L_{k-1}+1}\leq\cdots\leq i_{n}}}, \] so $\mathbf{h}_{L}$ is the noncommutative generating function for words in $\mathbb{P}$ whose descent set is contained in $\Des(L)$. Let $\mathbf{Sym}_{n}$ denote the vector space spanned by $\{\mathbf{h}_{L}\}_{L\vDash n}$, and let $\mathbf{Sym}\coloneqq\bigoplus_{n=0}^{\infty}\mathbf{Sym}_{n}$. Then $\mathbf{Sym}$ is a graded $\mathbb{Q}$-algebra called the \textit{algebra of noncommutative symmetric functions} with coefficients in $\mathbb{Q}$, a subalgebra of $\mathbb{Q}\langle\langle X_{1},X_{2},\dots\rangle\rangle$. The study of $\mathbf{Sym}$ was initiated in \cite{ncsf1}, although noncommutative symmetric functions have appeared implicitly in earlier work, including the first author's Ph.D. thesis \cite{gessel-thesis}. Also see \cite{Gessel2014,Zhuang2016,Zhuang2016a} for a series of recent papers by the present authors on the subject of permutation enumeration in which $\mathbf{Sym}$ plays a role. In the following sections, we will work with noncommutative symmetric functions with coefficients in either the ring $\mathbb{Q}[x,y]$ of polynomials in $x$ and $y$ with rational coefficients or the ring $\Q[[t*]][x,y]$ of polynomials in $x$ and $y$ with coefficients in the ring of formal power series in $t$ in which multiplication is the Hadamard product in $t$ but ordinary multiplication in $x$ and~$y$. We will also need to use formal sums of noncommutative symmetric functions of unbounded degree with these coefficient rings, for example, $\sum_{n=0}^\infty \mathbf{h}_n x^n$. We will use the notation $\Sym_{xy}$ for the algebra of \ncsf s\ of unbounded degree with coefficients in $\mathbb{Q}[x,y]$ and $\Sym_{txy}$ for \ncsf s\ with coefficients in $\Q[[t*]][x,y]$. For a composition $L=(L_{1},\dots,L_{k})$, we define \[ \mathbf{r}_{L}\coloneqq\sum_{i_{1},\dots,i_{n}}X_{i_{1}}X_{i_{2}}\cdots X_{i_{n}} \] where the sum is over all $i_{1},\dots,i_{n}$ satisfying \[ \underset{L_{1}}{\underbrace{i_{1}\leq\cdots\leq i_{L_{1}}}}>\underset{L_{2}}{\underbrace{i_{L_{1}+1}\leq\cdots\leq i_{L_{1}+L_{2}}}}>\cdots>\underset{L_{k}}{\underbrace{i_{L_{1}+\cdots+L_{k-1}+1}\leq\cdots\leq i_{n}}}. 
\] Then $\mathbf{r}_{L}$ is the noncommutative generating function for words on the alphabet $\mathbb{P}$ with descent composition $L$. Note that \begin{equation} \mathbf{h}_{L}=\sum_{\substack{\Des(K)\subseteq\Des(L)\\ \left|K\right|=\left|L\right| } }\mathbf{r}_{K},\label{e-hitor} \end{equation} so by inclusion-exclusion, \begin{equation} \mathbf{r}_{L}=\sum_{\substack{\Des(K)\subseteq\Des(L)\\ \left|K\right|=\left|L\right| } }(-1)^{l(L)-l(K)}\mathbf{h}_{K}\label{e-ritoh} \end{equation} where $l(L)$ denotes the number of parts of the composition $L$. Hence the $\mathbf{r}_{L}$ are noncommutative symmetric functions, and are in fact noncommutative versions of the ribbon skew Schur functions $r_{L}$. Since $\mathbf{r}_{L}$ and $\mathbf{r}_{M}$ have no terms in common for $L\neq M$, it is clear that $\{\mathbf{r}_{L}\}_{L\vDash n}$ is linearly independent. From (\ref{e-hitor}), we see that $\{\mathbf{r}_{L}\}_{L\vDash n}$ spans $\mathbf{Sym}_{n}$, so $\{\mathbf{r}_{L}\}_{L\vDash n}$ is a basis for $\mathbf{Sym}_{n}$. Because $\{\mathbf{h}_{L}\}_{L\vDash n}$ spans $\mathbf{Sym}_{n}$ and has the same cardinality as $\{\mathbf{r}_{L}\}_{L\vDash n}$, we conclude that $\{\mathbf{h}_{L}\}_{L\vDash n}$ is also a basis for $\mathbf{Sym}_{n}$. Let us also consider the noncommutative generating function \[ \mathbf{e}_{n}\coloneqq\sum_{i_{1}>\cdots>i_{n}}X_{i_{1}}X_{i_{2}}\cdots X_{i_{n}} \] for decreasing words of length $n$ on the alphabet $\mathbb{P}$. Then $\mathbf{e}_n$ is a noncommutative version of the elementary symmetric function $e_n$, and $\mathbf{e}_n\in \mathbf{Sym}_n$ since $\mathbf{e}_n=\r_{(1^n)}$. Let \[ \mathbf{h}(x)\coloneqq\sum_{n=0}^{\infty}\mathbf{h}_{n}x^{n} \] be the generating function for the noncommutative complete symmetric functions $\mathbf{h}_{n}$, where $x$ commutes with all the variables $X_i$, and let \[ \mathbf{e}(x)\coloneqq\sum_{n=0}^{\infty}\mathbf{e}_{n}x^{n} \] be the generating function for the $\mathbf{e}_{n}$. Then \begin{equation} \mathbf{e}(x)=\mathbf{h}(-x)^{-1},\label{e-ehr} \end{equation} which is a consequence of the infinite product formulas \begin{equation*} \mathbf{h}(x) =(1-X_1x)^{-1}(1-X_2x)^{-1}\cdots \text{\quad and \quad} \mathbf{e}(x) = \cdots (1+X_2x)(1+X_1x) \end{equation*} (cf. \cite[p.~38]{gessel-thesis} or \cite[Section 7.3]{ncsf1}). The algebra $\mathbf{Sym}$ can be given a coalgebra structure by defining the comultiplication $\Delta:\mathbf{Sym}\rightarrow\mathbf{Sym}\otimes\mathbf{Sym}$ by \begin{equation} \label{e-hcomult} \Delta\mathbf{h}_{n}=\sum_{i=0}^{n}\mathbf{h}_{i}\otimes\mathbf{h}_{n-i} \end{equation} and extending by the rule \begin{equation} \Delta(fg)=(\Delta f)(\Delta g).\label{e-deltahomo} \end{equation} Since $\Delta$ is an algebra homomorphism, $\mathbf{Sym}$ is a bialgebra.% \footnote{In fact, both $\mathbf{Sym}$ and $\QSym$ are Hopf algebras (see \cite{Grinberg2014} for a definition) and the duality between $\mathbf{Sym}$ and $\QSym$ given in the next theorem is in fact a Hopf algebra duality. However, we will not need the antipode in this paper, nor will we be concerned with the coalgebra structure of $\QSym$.} The comultiplication $\Delta$ extends naturally to $\Sym_{xy}$ and $\Sym_{txy}$ (but note that now tensor products are over the coefficient ring). Next, we show that the graded dual of the algebra $\QSym$ is the coalgebra $\mathbf{Sym}$; cf.~\cite[Theorem 6.1]{ncsf1} or \cite[Section 5.3]{Grinberg2014}. 
We may extend the definition of $\mathbf{h}_L$ to weak compositions $L$ by \eqref{e-hL-def}, so that if $L$ is a weak composition then $\mathbf{h}_L = \mathbf{h}_{L'}$ where $L'$ is the composition obtained from $L$ by removing all zero parts. Recall that, as defined in Section \ref{s-bijproof}, weak compositions are added componentwise. \begin{lem} \label{l-hcoprod} Let $L$ be a composition. Then $\Delta \mathbf{h}_L = \sum_{J,K} \mathbf{h}_J \otimes\mathbf{h}_K$, where the sum is over all pairs of weak compositions $J$ and $K$ with the same number of parts such that $J+K = L$. \end{lem} \begin{proof} This follows easily from the fact that $\Delta \mathbf{h}_{(L_1,\dots, L_m)} = \Delta \mathbf{h}_{L_1}\cdots \Delta \mathbf{h}_{L_m}$ together with \eqref{e-hcomult}. \end{proof} \begin{thm} \label{t-duality} The graded dual of the algebra $\QSym$ of quasisymmetric functions is isomorphic to the coalgebra $\mathbf{Sym}$ of noncommutative symmetric functions. In particular, the monomial basis $\{M_{L}\}$ of\/ $\QSym$ is dual to the complete basis $\{\mathbf{h}_{L}\}$ of\/ $\mathbf{Sym}$ and the fundamental basis $\{F_{L}\}$ of\/ $\QSym$ is dual to the ribbon basis $\{\mathbf{r}_{L}\}$ of\/ $\mathbf{Sym}$. \end{thm} \begin{proof} We first consider the product of two monomial quasisymmetric functions. Define coefficients $b^{L}_{J,K}$ by \begin{equation} \label{e-Mproduct} M_J M_K = \sum_L b^{L}_{J,K}M_L. \end{equation} It is easy to see that $b^L_{J,K}$ is the number of pairs of weak compositions $(J',K')$ with the same number of parts such that $J'$ is obtained from $J$ by inserting zeros, $K'$ is obtained from $K$ by inserting zeros, and $J'+K' = L$. Lemma \ref{l-hcoprod} implies that \begin{equation*} \Delta \mathbf{h}_L = \sum_{J,K} b_{J,K}^L \mathbf{h}_J\otimes \mathbf{h}_K, \end{equation*} where the coefficients $b_{J,K}^L$ are the same as those in equation \eqref{e-Mproduct}. Thus $\{M_L\}_{L\vDash n}$ and $\{\mathbf{h}_L\}_{L\vDash n}$ are dual bases for $\QSym_n$ and $\mathbf{Sym}_n$. We may define a pairing between $\QSym$ and $\mathbf{Sym}$ by \begin{equation*} \pair{M_{K}}{\mathbf{h}_{L}}=\delta_{K,L}= \begin{cases} 1, & \text{if }K=L,\\ 0, & \text{otherwise.} \end{cases} \end{equation*} Then \begin{align*} \pair{F_K}{\r_L} &= \biggl\langle\,\sum_{\Des(I)\supseteq\Des(K)}M_I, \sum_{\Des(J)\subseteq \Des(L)}(-1)^{l(L) - l(J)}\mathbf{h}_J\biggr\rangle\\ &=\sum_{\substack{\Des(J)\supseteq\Des(K)\\ \Des(J)\subseteq\Des(L)}}(-1)^{l(L) - l(J)}=\delta_{K,L}, \end{align*} and this implies that $\{F_L\}$ and $\{\r_L\}$ are dual bases. \end{proof} \subsection{Monoidlike elements} We call an element $f$ of a bialgebra \emph{monoidlike} if $\Delta f = f\otimes f$. It is straightforward to show that the product of two monoidlike elements is monoidlike and that the inverse of a monoidlike element, if it exists, is monoidlike.% \footnote{% A monoidlike element $f$ of a bialgebra is called \emph{grouplike} if $\varepsilon(f)$ is the identity element of the coefficient ring, where $\varepsilon$ is the counit. In our bialgebras, the counit is the coefficient of $\mathbf{h}_0$, the identity element of $\mathbb{Q}$ or $\mathbb{Q}[x,y]$ is 1, and the identity element of $\Q[[t*]][x,y]$ is $(1-t)^{-1}=\sum_{k=0}^\infty t^k$. Nearly all of our monoidlike elements are actually grouplike, but exceptions occur in Corollary \ref{c-monoidlike2}.} \begin{lem} \label{l-hemonoidlike} $\mathbf{h}(x)$, $\mathbf{e}(x)$, and $\mathbf{e}(xy)$ are monoidlike in $\Sym_{xy}$. 
\end{lem} \begin{proof} We have \begin{align*} \Delta\mathbf{h}(x) & =\sum_{n=0}^{\infty}\Delta\mathbf{h}_{n}x^{n}\\ & =\sum_{n=0}^{\infty}\sum_{i+j=n}(\mathbf{h}_{i}\otimes\mathbf{h}_{j})x^{n}\\ & =\sum_{n=0}^{\infty}\sum_{i+j=n}\mathbf{h}_{i}x^{i}\otimes\mathbf{h}_{j}x^{j}\\ & =\sum_{i,j=0}^{\infty}\mathbf{h}_{i}x^{i}\otimes\mathbf{h}_{j}x^{j}\\ & =\Big(\sum_{i=0}^{\infty}\mathbf{h}_{i}x^{i}\Big)\otimes\Big(\sum_{j=0}^{\infty}\mathbf{h}_{j}x^{j}\Big), \end{align*} so $\mathbf{h}(x)$ is monoidlike. Since $\mathbf{e}(x)=\mathbf{h}(-x)^{-1}$, this implies that $\mathbf{e}(x)$ and $\mathbf{e}(xy)$ are monoidlike. \end{proof} \begin{lem} \label{l-monoidlike2} Let $f=\sum_{n=0}^\infty a_n t^n$ be an element of $\Sym_{txy}$ where each $a_n$ is an element of $\Sym_{xy}$. Then $f$ is monoidlike in $\Sym_{txy}$ if and only if each $a_n$ is monoidlike in $\Sym_{xy}$. \end{lem} \begin{proof} We have \begin{align*} f\otimes f&= \sum_{m,n=0}^\infty a_mt^m \otimes a_n t^n\\ &= \sum_{m,n=0}^\infty (a_m \otimes a_n) (t^m * t^n)\\ &=\sum_{n=0}^\infty (a_n \otimes a_n) t^n \shortintertext{and} \Delta f&=\sum_{n=0}^\infty \Delta a_n t^n. \end{align*} Thus $\Delta f = f\otimes f$ if and only if $\Delta a_n = a_n \otimes a_n$ for each $n$. \end{proof} The next result follows immediately from Lemma \ref{l-monoidlike2}. \begin{cor} \label{c-monoidlike2} Suppose that $f$ is monoidlike in $\Sym_{xy}$. Then $(1-tf)^{-1}$, $(1-t^2f)^{-1}$, and $1+tf$ are monoidlike in $\Sym_{txy}$. \end{cor} \subsection{Implications of duality to shuffle-compatibility} Let $\st$ be a descent statistic. For each $\st$-equivalence class $\alpha$ of compositions, let \[ \mathbf{r}_{\alpha}^{\st} \coloneqq \sum_{L\in\alpha}\mathbf{r}_{L}. \] We call the noncommutative symmetric functions $\mathbf{r}_{\alpha}^{\st}$ $\st$-\textit{ribbons}. The following is the dual version of Theorem \ref{t-scqsym}. \begin{thm} \label{t-rib} A descent statistic $\st$ is shuffle-compatible if and only if for every $\st$-equivalence class $\alpha$ of compositions, there exist constants $c_{\beta,\gamma}^{\alpha}$ for which \[ \Delta\mathbf{r}_{\alpha}^{\st}=\sum_{\beta,\gamma}c_{\beta,\gamma}^{\alpha}\mathbf{r}_{\beta}^{\st}\otimes\mathbf{r}_{\gamma}^{\st}; \] that is, the $\st$-ribbons $\mathbf{r}_{\alpha}^{\st}$ span a subcoalgebra of $\mathbf{Sym}$. In this case, the $c_{\beta,\gamma}^{\alpha}$ are the structure constants for ${\cal A}_{\st}$. \end{thm} \begin{proof} By Theorem \ref{t-duality}, we have a pairing between quasisymmetric functions and noncommutative symmetric functions for which \[ \pair{F_{L}}{\mathbf{r}_{J}}=\begin{cases} 1, & \mbox{if }L=J,\\ 0, & \mbox{otherwise.} \end{cases} \] Suppose that the $\st$-ribbons $\mathbf{r}_{\alpha}^{\st}$ span a subcoalgebra of $\mathbf{Sym}$ with structure constants $c_{\beta,\gamma}^{\alpha}$. Let $D$ be the subcoalgebra spanned by the $\mathbf{r}_{\alpha}^{\st}$ and let $i\colon D\rightarrow\mathbf{Sym}$ be the canonical inclusion map, a $\mathbb{Q}$-coalgebra homomorphism. Then $i$ induces a $\mathbb{Q}$-algebra homomorphism $i^{o}\colon\QSym\rightarrow D^{o}$ given by \begin{align*} i^{o}(F_{L})(\mathbf{r}_{\alpha}^{\st}) & =\pair{F_{L}}{i(\mathbf{r}_{\alpha}^{\st})}\\ & =\pair{F_{L}}{\mathbf{r}_{\alpha}^{\st}}\\ & =\begin{cases} 1, & \mbox{if }L\in\alpha,\\ 0, & \mbox{otherwise.} \end{cases} \end{align*} Observe that $i^{o}(F_{L})=i^{o}(F_{J})$ whenever $L$ and $J$ belong to the same $\st$-equivalence class. Hence, we can define $f_{\alpha} \coloneqq i^{o}(F_{L})$ for $L\in\alpha$. 
Then $\{f_{\alpha}\}$ is the basis of $D^{o}$ dual to $\{\mathbf{r}_{\alpha}^{\st}\}$, so \[ f_{\beta}f_{\gamma}=\sum_{\alpha}c_{\beta,\gamma}^{\alpha}f_{\alpha}. \] By Theorem \ref{t-scqsym}, $\st$ is shuffle-compatible with shuffle algebra isomorphic to $D^{o}$. We omit the proof of the reverse implication, as it is similar; we begin with a quotient algebra of $\QSym$ and then show that its basis elements are dual to the $\st$-ribbons $\mathbf{r}_{\alpha}^{\st}$. \end{proof} While Theorem \ref{t-scqsym} tells us that we can prove the shuffle-compatibility of a descent statistic by constructing suitable quotients of $\QSym$, Theorem \ref{t-rib} tells us that we could, alternatively, construct suitable subcoalgebras of $\mathbf{Sym}$, and this is what we will do in Sections \ref{s-pkdes} to \ref{s-udrdes}. Moreover, because it is straightforward to compute coproducts of noncommutative symmetric functions, Theorem \ref{t-rib} is useful for showing that a descent statistic is not shuffle-compatible and for conjecturing that a statistic is shuffle-compatible, which is not the case for Theorem~\ref{t-scqsym}. Although Theorem \ref{t-rib} does not give us a way to describe the dual algebra ${\cal A}_{\st}$, we can describe ${\cal A}_{\st}$ explicitly using the following theorem. For an $\st$-equivalence class $\alpha$ of compositions, we let $|\alpha|$ be the sum of the parts of any composition $L\in \alpha$. \begin{thm} \label{l-monoidlikesc} Let $\st$ be a descent statistic and let $u_{\alpha}\in \Q[[t*]][x,y]$ be linearly independent elements \textup{(}\kern -1pt over $\mathbb{Q}$\textup{)} indexed by $\st$-equivalence classes $\alpha$ of compositions. Suppose that $f=\sum_{\alpha}u_{\alpha}\mathbf{r}_{\alpha}^{\st}$ is monoidlike in $\Sym_{txy}$ and that there exist constants $c_{\beta,\gamma}^{\alpha}$ such that $u_{\beta}u_{\gamma}=\sum_{\alpha}c_{\beta,\gamma}^{\alpha}u_{\alpha}$ for all $\st$-equivalence classes $\beta$ and~$\gamma$, where $c_{\beta,\gamma}^{\alpha}=0$ unless $|\alpha| = |\beta|+|\gamma|$. Then $\st$ is shuffle-compatible and the linear map defined by \[ [\pi]_{\st}\mapsto u_{\alpha}, \] where $\Comp(\pi)\in\alpha$, is a $\mathbb{Q}$-algebra isomorphism from ${\cal A}_{\st}$ to the subalgebra of $\Q[[t*]][x,y]$ spanned by the $u_{\alpha}$. \end{thm} \begin{proof} Since $f$ is monoidlike, we have \begin{align*} \sum_{\alpha}u_{\alpha}\Delta\mathbf{r}_{\alpha}^{\st} & =\Delta f =\Big(\sum_{\beta}u_{\beta}\mathbf{r}_{\beta}^{\st}\Big)\otimes\Big(\sum_{\gamma}u_{\gamma}\mathbf{r}_{\gamma}^{\st}\Big)\\ & =\sum_{\beta,\gamma}u_{\beta}u_{\gamma}\mathbf{r}_{\beta}^{\st}\otimes\mathbf{r}_{\gamma}^{\st}\\ & =\sum_{\alpha}u_{\alpha}\sum_{\beta,\gamma}c_{\beta,\gamma}^{\alpha}\mathbf{r}_{\beta}^{\st}\otimes\mathbf{r}_{\gamma}^{\st}. \end{align*} Extracting the linear combinations of elements of $\mathbf{Sym}_i\otimes \mathbf{Sym}_j$, where $i+j=n$, we obtain \[ \sum_{|\alpha| = n}u_{\alpha}\Delta\mathbf{r}_{\alpha}^{\st}=\sum_{|\alpha| = n}u_{\alpha}\sum_{\beta,\gamma}c_{\beta,\gamma}^{\alpha}\mathbf{r}_{\beta}^{\st}\otimes\mathbf{r}_{\gamma}^{\st}. \] Since these are finite sums, linear independence of the $u_{\alpha}$ implies \[ \Delta\mathbf{r}_{\alpha}^{\st}=\sum_{\beta,\gamma}c_{\beta,\gamma}^{\alpha}\mathbf{r}_{\beta}^{\st}\otimes\mathbf{r}_{\gamma}^{\st} \] and it follows from Theorem \ref{t-rib} that $\st$ is shuffle-compatible and that the $c_{\beta,\gamma}^{\alpha}$ are the structure constants for ${\cal A}_{\st}$.
Since \[ u_{\beta}u_{\gamma}=\sum_{\alpha}c_{\beta,\gamma}^{\alpha}u_{\alpha} \] for all $\st$-equivalence classes $\beta$ and $\gamma$, the map $[\pi]_{\st}\mapsto u_{\alpha}$ is an algebra homomorphism from ${\cal A}_{\st}$ to the subalgebra of $\Q[[t*]][x,y]$ spanned by the $u_{\alpha}$, and since the $u_{\alpha}$ are linearly independent, this map is an isomorphism. \end{proof} We note that Theorem \ref{l-monoidlikesc} can be generalized to a statement about monoidlike elements of more general graded bialgebras; we stated it only in the special case that we will use. Unfortunately, in our applications, it is difficult to show directly that the desired $u_{\alpha}$ are closed under multiplication. The following variant of Theorem \ref{l-monoidlikesc} uses a change of basis argument to deal with this problem. \begin{thm} \label{l-monoidlikesc1} Let $\st$ be a descent statistic and let $u_{\alpha}\in\Q[[t*]][x,y]$ be linearly independent elements \textup{(}\kern -1pt over $\mathbb{Q}$\textup{)} indexed by $\st$-equivalence classes $\alpha$ of compositions. Suppose that $f=\sum_{\alpha}u_{\alpha}\mathbf{r}_{\alpha}^{\st}$ is monoidlike in $\Sym_{txy}$, where $u_\alpha$ is $x^{|\alpha|}$ times an element of $\mathbb{Q}[[t*]][y]$. Let $\mathbf{s}_{n,p,q}$ be the coefficient of $x^n y^p t^q $ in $\sum_{\alpha}u_{\alpha}\mathbf{r}_{\alpha}^{\st}$ and suppose that $\mathbf{r}_{\alpha}^{\st}\in \Span_{\mathbb{Q}}\{\mathbf{s}_{n,p,q}\}$ for each $\alpha$. Then $\st$ is shuffle-compatible and the linear map defined by \[ [\pi]_{\st}\mapsto u_{\alpha}, \] where $\Comp(\pi)\in\alpha$, is a $\mathbb{Q}$-algebra isomorphism from ${\cal A}_{\st}$ to the subalgebra of $\Q[[t*]][x,y]$ spanned by the $u_{\alpha}$. \end{thm} \begin{proof} Equating coefficients of $x^n$ in \begin{equation*} f = \sum_{\alpha}u_{\alpha}\mathbf{r}_{\alpha}^{\st}=\sum_{n,p,q} x^n y^p t^q \mathbf{s}_{n,p,q} \end{equation*} gives \begin{equation*} \sum_{|\alpha|=n}u_{\alpha}\mathbf{r}_{\alpha}^{\st}=x^n\sum_{p,q}y^p t^q \mathbf{s}_{n,p,q}. \end{equation*} Since the sum on the left is finite, this shows that $\mathbf{s}_{n,p,q}\in\Span_{\mathbb{Q}}\{\mathbf{r}_{\alpha}^{\st}\}$, so $\Span_{\mathbb{Q}}\{\mathbf{r}_{\alpha}^{\st}\}=\Span_{\mathbb{Q}}\{\mathbf{s}_{n,p,q}\}$. Let $f_q$ be the coefficient of $t^q$ in $f$. Then since $f$ is monoidlike, $f_{q}$ is monoidlike by Lemma \ref{l-monoidlike2}, so \begin{align*} \sum_{n,p} x^n y^p \Delta\mathbf{s}_{n,p,q} &= \Delta f_{q} =f_{q}\otimes f_{q}\\ &=\Big(\sum_{n_1,p_1} x^{n_1} y^{p_1} \mathbf{s}_{n_1,p_1,q}\Big) \otimes \Big(\sum_{n_2,p_2} x^{n_2} y^{p_2} \mathbf{s}_{n_2,p_2,q}\Big)\\ &=\sum_{n_1,p_1,n_2,p_2}x^{n_1+n_2}y^{p_1+p_2} \mathbf{s}_{n_1,p_1,q}\otimes\mathbf{s}_{n_2,p_2,q} \end{align*} Equating coefficients of $x^ny^p$ shows that $\Span_{\mathbb{Q}}\{\mathbf{s}_{n,p,q}\}$ is a subcoalgebra of $\mathbf{Sym}$ and thus so is $\Span_{\mathbb{Q}}\{\mathbf{r}_{\alpha}^{\st}\}$. As a result, there exist constants $c_{\beta,\gamma}^{\alpha}$ such that \[ \Delta\mathbf{r}_{\alpha}^{\st}=\sum_{\beta,\gamma}c_{\beta,\gamma}^{\alpha}\mathbf{r}_{\beta}^{\st}\otimes\mathbf{r}_{\gamma}^{\st}, \] so it follows from Theorem \ref{t-rib} that $\st$ is shuffle-compatible and that the $c_{\beta,\gamma}^{\alpha}$ are the structure constants for ${\cal A}_{\st}$. 
Moreover, since $\sum_{\alpha}u_{\alpha}\mathbf{r}_{\alpha}^{\st}$ is monoidlike, we have \begin{align*} \sum_{\beta,\gamma}\sum_{\alpha}u_{\alpha}c_{\beta,\gamma}^{\alpha}\mathbf{r}_{\beta}^{\st}\otimes\mathbf{r}_{\gamma}^{\st} & =\sum_{\alpha}u_{\alpha}\sum_{\beta,\gamma}c_{\beta,\gamma}^{\alpha}\mathbf{r}_{\beta}^{\st}\otimes\mathbf{r}_{\gamma}^{\st}\\ & =\sum_{\alpha}u_{\alpha}\Delta\mathbf{r}_{\alpha}^{\st}\\ & =\Delta\Big(\sum_{\alpha}u_{\alpha}\mathbf{r}_{\alpha}^{\st}\Big)\\ & =\Big(\sum_{\beta}u_{\beta}\mathbf{r}_{\beta}^{\st}\Big)\otimes\Big(\sum_{\gamma}u_{\gamma}\mathbf{r}_{\gamma}^{\st}\Big)\\ & =\sum_{\beta,\gamma}u_{\beta}u_{\gamma}\mathbf{r}_{\beta}^{\st}\otimes\mathbf{r}_{\gamma}^{\st}. \end{align*} Using the linear independence of the $\mathbf{r}_{\beta}^{\st}\otimes\mathbf{r}_{\gamma}^{\st}$ and the fact that for each $i$ and $j$, $\mathbf{r}_{\beta}^{\st}\otimes\mathbf{r}_{\gamma}^{\st}\in \mathbf{Sym}_i \otimes \mathbf{Sym}_j$ for only finitely many $\beta$ and $\gamma$, we may equate coefficients of $\mathbf{r}_{\beta}^{\st}\otimes\mathbf{r}_{\gamma}^{\st}$ to obtain $u_{\beta}u_{\gamma}=\sum_{\alpha}c_{\beta,\gamma}^{\alpha}u_{\alpha}$. Thus the map $[\pi]_{\st}\mapsto u_{\alpha}$ is an algebra homomorphism from ${\cal A}_{\st}$ to the subalgebra of $\mathbb{Q}[[t*]][x,y]$ spanned by the $u_{\alpha}$, and since the $u_{\alpha}$ are linearly independent, this map is an isomorphism. \end{proof} Before applying Theorem \ref{l-monoidlikesc1} to prove new results, let us see how it works in a simpler case, the shuffle-compatibility of the descent number (Theorem \ref{t-dessc}). We start with the formula \begin{equation} \label{e-desh} (1-t\mathbf{h}(x))^{-1}=\frac{1}{1-t} +\sum_{n=1}^{\infty}\sum_{L\vDash n} \frac{t^{\des(L)+1}}{(1-t)^{n+1}}x^{n}\mathbf{r}_{L}, \end{equation} which is the case $y=0$ of Equation (\ref{e-pkdes}) below, but is easily proved directly \cite[p.~83, Equation (3)]{gessel-thesis}. Let $\mathbf{r}_{n,j}^{\des}$, for $n\ge1$, denote the noncommutative symmetric function $\mathbf{r}_{\alpha}^{\des}$ where $\alpha$ is the $\des$-equivalence class of compositions corresponding to $n$-permutations with $j-1$ descents, and let $\r_{0,j}^{\des}=\delta_{0,j}$. Let \begin{equation} \label{e-udes} u_{n,j}=u_{\alpha}=\frac{t^{j}}{(1-t)^{n+1}}x^{n} \end{equation} for $n\ge0$. Then $\sum_\alpha u_\alpha \r_{\alpha}^{\des}$ is equal to \eqref{e-desh}, which is monoidlike in $\Sym_{txy}$ by Lemma \ref{l-hemonoidlike} and Corollary \ref{c-monoidlike2}. With the notation of Theorem \ref{l-monoidlikesc1}, for fixed $n\ge1$ we have \[ \sum_{q=0}^{\infty}t^{q}\mathbf{s}_{n,0,q} =\sum_{j=1}^{n}\frac{t^{j}}{(1-t)^{n+1}}\mathbf{r}_{n,j}^{\des}. \] Multiplying both sides by $(1-t)^{n+1}$ and equating coefficients of powers of $t$ shows that $\r_{n,j}^{\des}\in \Span_{\mathbb{Q}}\{\mathbf{s}_{n,0,q}\}$. So by Theorem \ref{l-monoidlikesc1}, we obtain part (d) of Theorem \ref{t-dessc}. \subsection{Shuffle-compatibility of \texorpdfstring{$(\pk,\des)$}{(pk, des)}} \label{s-pkdes} In the remainder of Section \ref{s-section5}, we use Theorem \ref{l-monoidlikesc1} to establish the shuffle-compatibility and describe the shuffle algebras of the descent statistics $(\pk,\des)$, $(\lpk,\des)$, $(\udr,\des)$, and $\udr$. All computations are done in the algebra $\Sym_{txy}$ of noncommutative symmetric functions with coefficients in $\Q[[t*]][x,y]$. We start with the shuffle-compatibility of $(\pk,\des)$.
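As a small example to fix ideas, consider $n=3$: the possible values of $(\pk(\pi),\des(\pi))$ for a $3$-permutation $\pi$ are $(0,0)$, $(0,1)$, $(1,1)$, and $(0,2)$, attained for instance by $123$, $213$, $132$, and $321$, respectively. Thus there are four $(\pk,\des)$-equivalence classes of compositions of $3$, in agreement with the dimension $\left\lfloor (3+1)^{2}/4\right\rfloor =4$ given in part (d) of the theorem below.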
\begin{thm}[Shuffle-compatibility of $(\pk,\des)$] \label{t-pkdessc} \leavevmode \begin{itemize} \item [\normalfont{(a)}] The pair $(\pk,\des)$ is shuffle-compatible. \item [\normalfont{(b)}] The linear map on ${\cal A}_{(\pk,\des)}$ defined by \begin{multline*} [\pi]_{(\pk,\des)}\mapsto\\ \begin{cases} {\displaystyle \frac{t^{\pk(\pi)+1}(y+t)^{\des(\pi)-\pk(\pi)}(1+yt)^{\left|\pi\right|-\pk(\pi)-\des(\pi)-1}(1+y)^{2\pk(\pi)+1}}{(1-t)^{\left|\pi\right|+1}}x^{\left|\pi\right|}}, & \text{if }\left|\pi\right|\geq1,\\ 1/(1-t), & \text{if }\left|\pi\right|=0, \end{cases} \end{multline*} is a $\mathbb{Q}$-algebra isomorphism from ${\cal A}_{(\pk,\des)}$ to the span of \[ \left\{ \frac{1}{1-t}\right\} \bigcup\left\{ \frac{t^{j+1}(y+t)^{k-j}(1+yt)^{n-j-k-1}(1+y)^{2j+1}}{(1-t)^{n+1}}x^{n}\right\} _{\substack{n\geq1,\hphantom{...................}\\ 0\leq j\leq\left\lfloor (n-1)/2\right\rfloor ,\\ j\leq k\leq n-j-1\hphantom{......} } }, \] a subalgebra of $\Q[[t*]][x,y]$. \item [\normalfont{(c)}] The $(\pk,\des)$ shuffle algebra ${\cal A}_{(\pk,\des)}$ is isomorphic to the span of \[ \{1\}\cup\{p^{n-j}(1+y)^{n-j+2k}(1-y)^{j-2k}x^{n}\}_{n\geq1,\:0\leq j\leq n-1,\:0\leq k\leq\left\lfloor j/2\right\rfloor }, \] a subalgebra of $\mathbb{Q}[p,x,y]$. \item [\normalfont{(d)}] For $n\geq1$, the $n$th homogeneous component of ${\cal A}_{(\pk,\des)}$ has dimension $\left\lfloor (n+1)^{2}/4\right\rfloor $. \end{itemize} \end{thm} We prove here parts (a), (b), and (d). We postpone the proof of part (c) until Section \ref{s-altdes}. \begin{proof} By Lemma 4.1 of \cite{Zhuang2016a}, we have the formula \begin{multline} \label{e-pkdes} (1-t\mathbf{e}(xy)\mathbf{h}(x))^{-1}=\frac{1}{1-t}+\\ \sum_{n=1}^{\infty}\sum_{L\vDash n}\frac{t^{\pk(L)+1}(y+t)^{\des(L)-\pk(L)}(1+yt)^{n-\pk(L)-\des(L)-1}(1+y)^{2\pk(L)+1}}{(1-t)^{n+1}}x^{n}\mathbf{r}_{L}. \end{multline} Let $\mathbf{r}_{n,j,k}^{(\pk,\des)}$ denote the noncommutative symmetric function $\mathbf{r}_{\alpha}^{(\pk,\des)}$ where $\alpha$ is the $(\pk,\des)$-equivalence class of compositions corresponding to $n$-permutations with $j-1$ peaks and $k-1$ descents. By (\ref{e-pkdes}) and Proposition \ref{p-pkdesvalues}, we have \begin{align*} & (1-t\mathbf{e}(xy)\mathbf{h}(x))^{-1}\\ & \qquad\qquad=\frac{1}{1-t}+\sum_{n=1}^{\infty}\sum_{j=0}^{\left\lfloor (n-1)/2\right\rfloor }\sum_{k=j}^{n-j-1}\frac{t^{j+1}(y+t)^{k-j}(1+yt)^{n-j-k-1}(1+y)^{2j+1}}{(1-t)^{n+1}}x^{n}\mathbf{r}_{n,j+1,k+1}^{(\pk,\des)}\\ & \qquad\qquad=\frac{1}{1-t}+\sum_{n=1}^{\infty}\sum_{j=1}^{\left\lfloor (n+1)/2\right\rfloor }\sum_{k=j}^{n-j+1}\frac{t^{j}(y+t)^{k-j}(1+yt)^{n-j-k+1}(1+y)^{2j-1}}{(1-t)^{n+1}}x^{n}\mathbf{r}_{n,j,k}^{(\pk,\des)}, \end{align*} and this is monoidlike by Lemma \ref{l-hemonoidlike} and Corollary \ref{c-monoidlike2}. Now define $\mathbf{s}_{n,p,q}$ by \begin{align*} \sum_{n,p,q=0}^{\infty}x^{n}y^{p}t^{q}\mathbf{s}_{n,p,q} & =(1-t\mathbf{e}(xy)\mathbf{h}(x))^{-1}. \end{align*} For fixed $n\geq1$, we have \[ \sum_{p,q=0}^{\infty}y^{p}t^{q}\mathbf{s}_{n,p,q}=\sum_{j=1}^{\left\lfloor (n+1)/2\right\rfloor }\sum_{k=j}^{n-j+1}\frac{t^{j}(y+t)^{k-j}(1+yt)^{n-j-k+1}(1+y)^{2j-1}}{(1-t)^{n+1}}\mathbf{r}_{n,j,k}^{(\pk,\des)}.
\] This identity can be inverted to obtain \[ \sum_{j=1}^{\left\lfloor (n+1)/2\right\rfloor }\sum_{k=j}^{n-j+1}y^{j}t^{k}\mathbf{r}_{n,j,k}^{(\pk,\des)}=(1+u)\left(\frac{1-v}{1+uv}\right)^{n+1}\sum_{p,q=0}^{\infty}u^{p}v^{q}\mathbf{s}_{n,p,q}, \] where \[ u=\frac{1+t^{2}-2yt-(1-t)\sqrt{(1+t)^{2}-4yt}}{2(1-y)t} \] and \[ v=\frac{(1+t)^{2}-2yt-(1+t)\sqrt{(1+t)^{2}-4yt}}{2yt}, \] in the formal power series ring $\mathbb{Q}[[t,y]]$. It is easily checked that $u$ and $v$ are both formal power series divisible by $t$, so $(1-v)/(1+uv)$ is a well-defined formal power series in $t$ and $y$. Equating coefficients of $y^p t^q $ shows that each $\mathbf{r}_{n,j,k}^{(\pk,\des)}$ is a linear combination of the $\mathbf{s}_{n,p,q}$. (Since $u$ and $v$ are divisible by $t$, only finitely many terms on the right will contribute a term in $t^q $.) Parts (a) and (b) then follow from Theorem \ref{l-monoidlikesc1}. By Proposition \ref{p-pkdesvalues}, we know that for $n\geq1$, the number of $(\pk,\des)$-equivalence classes for $n$-permutations is \[ \sum_{j=0}^{\left\lfloor (n-1)/2\right\rfloor }((n-j-1)-j+1) =\sum_{j=0}^{\left\lfloor (n-1)/2\right\rfloor }(n-2j), \] which is easily shown to be equal to $\left\lfloor (n+1)^{2}/4\right\rfloor$. This proves (d). \end{proof} Note that $(\pk,\des)$ and $(\val,\des)$ are $rc$-equivalent statistics, and that $(\val,\des)$ and $(\epk,\des)$ are equivalent statistics. Thus, by Corollary \ref{c-rcsc} and Theorem \ref{t-esc}, we know that $(\val,\des)$ and $(\epk,\des)$ are also shuffle-compatible and have shuffle algebras isomorphic to $\cal{A}_{(\pk,\des)}$. \subsection{Shuffle-compatibility of \texorpdfstring{$(\lpk,\des)$}{(lpk, des)}} We now prove the shuffle-compatibility of $(\lpk,\des)$ and characterize its shuffle algebra. \begin{thm}[Shuffle-compatibility of $(\lpk,\des)$] \label{t-lpkdessc} \leavevmode \begin{itemize} \item [\normalfont{(a)}] The pair $(\lpk,\des)$ is shuffle-compatible. \item [\normalfont{(b)}] The linear map on ${\cal A}_{(\lpk,\des)}$ defined by \begin{multline*} [\pi]_{(\lpk,\des)}\mapsto\\ \begin{cases} \displaystyle{\frac{t^{\lpk(\pi)}(y+t)^{\des(\pi)-\lpk(\pi)}(1+yt)^{\left|\pi\right|-\lpk(\pi)-\des(\pi)}(1+y)^{2\lpk(\pi)}}{(1-t)^{\left|\pi\right|+1}}x^{\left|\pi\right|}}, & \text{if }\left|\pi\right|\geq1,\\ 1/(1-t), & \text{if }\left|\pi\right|=0, \end{cases} \end{multline*} is a $\mathbb{Q}$-algebra isomorphism from ${\cal A}_{(\lpk,\des)}$ to the span of \[ \left\{ \frac{1}{1-t}\right\} \bigcup\left\{ \frac{(1+yt)^{n}}{(1-t)^{n+1}}x^{n}\right\} _{n\geq1}\bigcup\left\{ \frac{t^{j}(y+t)^{k-j}(1+yt)^{n-j-k}(1+y)^{2j}}{(1-t)^{n+1}}x^{n}\right\} _{\substack{n\geq2,\hphantom{............}\\ 1\leq j\leq\left\lfloor n/2\right\rfloor ,\\ j\leq k\leq n-j\phantom{...} } }, \] a subalgebra of $\Q[[t*]][x,y]$. \item [\normalfont{(c)}] The $n$th homogeneous component of ${\cal A}_{(\lpk,\des)}$ has dimension $\left\lfloor n^{2}/4\right\rfloor +1$. \end{itemize} \end{thm} \begin{proof} By Lemma 4.6 of \cite{Zhuang2016a}, we have the formula \begin{multline*} \mathbf{h}(x)(1-t\mathbf{e}(xy)\mathbf{h}(x))^{-1}=\frac{1}{1-t}+\\ \sum_{n=1}^{\infty}\sum_{L\vDash n}\frac{t^{\lpk(L)}(y+t)^{\des(L)-\lpk(L)}(1+yt)^{n-\lpk(L)-\des(L)}(1+y)^{2\lpk(L)}}{(1-t)^{n+1}}x^{n}\mathbf{r}_{L}. \end{multline*} Let $\mathbf{r}_{n,j,k}^{(\lpk,\des)}$ denote $\mathbf{r}_{\alpha}^{(\lpk,\des)}$ where $\alpha$ is the $(\lpk,\des)$-equivalence class of compositions corresponding to $n$-permutations with $j$ left peaks and $k$ descents. 
Define $\mathbf{s}_{n,p,q}$ by \begin{align*} \sum_{n,p,q=0}^{\infty}x^{n}y^{p}t^{q}\mathbf{s}_{n,p,q} & =\mathbf{h}(x)(1-t\mathbf{e}(xy)\mathbf{h}(x))^{-1}. \end{align*} Then the proofs for parts (a) and (b) follow in the same manner as for Theorem \ref{t-pkdessc}, using Proposition \ref{p-lpkdesvalues} and Corollary \ref{c-monoidlike2} along the way. By Proposition \ref{p-lpkdesvalues}, the number of $(\lpk,\des)$-equivalence classes for $n$-permutations is \[ 1+\sum_{j=1}^{\left\lfloor n/2\right\rfloor }((n-j)-j+1)=1+\sum_{j=1}^{\left\lfloor n/2\right\rfloor }(n-2j+1), \] which is easily shown to be equal to $\left\lfloor n^{2}/4\right \rfloor +1$. This proves (c). \end{proof} Although $(\lpk, \des)$ and $(\rpk, \des)$ are not equivalent, $r$-equivalent, $c$-equivalent, or $rc$-equivalent, this argument does show that $(\rpk, \des)$ is shuffle-compatible and has shuffle algebra isomorphic to that of $(\lpk, \des)$ because $(\lpk, \des)$ is $r$-equivalent to $(\rpk, \asc)$---where $\asc$ is the number of ascents% ---and $(\rpk, \asc)$ is equivalent to $(\rpk, \des)$. \subsection{Shuffle-compatibility of \texorpdfstring{$\udr$}{udr} and \texorpdfstring{$(\udr,\des)$}{(udr,des)}} \label{s-udrdes} Finally, we prove our result for the pair $(\udr,\des)$ and derive from it the analogous result for $\udr$, the number of up-down runs. \begin{thm}[Shuffle-compatibility of $(\udr,\des)$] \label{t-udsc} \leavevmode \begin{itemize} \item [\normalfont{(a)}] The pair $(\udr,\des)$ is shuffle-compatible. \item [\normalfont{(b)}] The linear map on ${\cal A}_{(\udr,\des)}$ defined by \begin{equation*} [\pi]_{(\udr,\des)}\mapsto \begin{cases} \displaystyle{\frac{N_\pi}{(1-t)(1-t^2)^{\left|\pi\right|}}x^{\left|\pi\right|}}, & \text{if }\left|\pi\right|\geq1,\\ 1/(1-t), & \text{if }\left|\pi\right|=0, \end{cases} \end{equation*} where \begin{multline*} N_\pi = t^{\udr(\pi)}(1+y)^{\udr(\pi)-1}(1+yt^{2})^{|\pi|-\des(\pi)-\ceil{\udr(\pi)/2}} (y+t^{2})^{\des(\pi)-\floor{\udr(\pi)/2}}\\ \times(1+yt)^{\ceil{\udr(\pi)/2}-\floor{\udr(\pi)/2}}(y+t)^{1-\ceil{\udr(\pi)/2}+\floor{\udr(\pi)/2}}, \end{multline*} is a $\mathbb{Q}$-algebra isomorphism from ${\cal A}_{(\udr,\des)}$ to the span of \begin{multline*} \left\{ \frac{1}{1-t}\right\} \bigcup\, \left\{ \frac{t(1+yt)(1+yt^2)^{n-1}}{(1-t)(1-t^2)^{n}}x^{n}\right\} _{\!n\geq1} \\ \bigcup\, \left\{ \frac{t^{j}(1+y)^{j-1}(1+yt^2)^{n-k-\ceil{j/2}}(y+t^2)^{k-\floor{j/2}}S_j} {(1-t)(1-t^2)^n}x^n \right\} _{\substack{n\geq1,\hfill\\ 2\le j\le n ,\hfill\\ \floor{j/2}\leq k\leq n-\ceil{j/2} }}, \end{multline*} where $S_j$ is $1+yt$ if $j$ is odd and is $y+t$ if $j$ is even, a subalgebra of $\Q[[t*]][x,y]$. \item [\normalfont{(c)}] The $n$th homogeneous component of ${\cal A}_{(\udr,\des)}$ has dimension $\binom{n}{2}+1$.
\end{itemize} \end{thm} \begin{proof} By Lemma 4.11 of \cite{Zhuang2016a}, together with Lemma \ref{l-udr} (b) and (c), we have \begin{equation} \label{e-ud1} (1-t^{2}\mathbf{h}(x)\mathbf{e}(xy))^{-1}(1+t\mathbf{h}(x)) =\frac{1}{1-t}+\sum_{n=1}^{\infty} \sum_{L\vDash n}\frac{N_L}{(1-t)(1-t^{2})^{n}}x^{n}\mathbf{r}_{L} \end{equation} where \begin{multline*} N_L = t^{\udr(L)}(1+y)^{\udr(L)-1}(1+yt^{2})^{n-\des(L)-\ceil{\udr(L)/2}} (y+t^{2})^{\des(L)-\floor{\udr(L)/2}}\\ \times(1+yt)^{\ceil{\udr(L)/2} -\floor{\udr(L)/2}}(y+t)^{1-\ceil{\udr(L)/2}+\floor{\udr(L)/2}}.\quad \end{multline*} Note that $\ceil{\udr(L)/2} -\floor{\udr(L)/2}$ is 1 if $\udr(L)$ is odd and is 0 if $\udr(L)$ is even. The left-hand side of \eqref{e-ud1} is monoidlike by Lemma \ref{l-hemonoidlike} and Corollary \ref{c-monoidlike2}. Let $\mathbf{r}_{n,j,k}^{(\udr,\des)}$ denote $\mathbf{r}_{\alpha}^{(\udr,\des)}$ where $\alpha$ is the $(\udr,\des)$-equivalence class of compositions corresponding to $n$-permutations with $j$ up-down runs and $k$ descents. Then by \eqref{e-ud1} and Proposition \ref{p-udrdesvalues}, we have \begin{multline} \label{e-ud2} (1-t^{2}\mathbf{h}(x)\mathbf{e}(xy))^{-1}(1+t\mathbf{h}(x)) =\frac{1}{1-t}+\sum_{n=1}^{\infty} \biggl( \frac{t(1+yt)(1+yt^2)^{n-1}}{(1-t)(1-t^2)^{n}}x^{n}\mathbf{r}_{n,1,0}^{(\udr,\des)}\\ +\sum_{\substack{2\le j\le n\\ \floor{j/2}\le k\le n-\ceil{j/2}}} \frac{t^{j}(1+y)^{j-1}(1+yt^2)^{n-k-\ceil{j/2}}(y+t^2)^{k-\floor{j/2}}S_j} {(1-t)(1-t^2)^n} x^n \mathbf{r}_{n,j,k}^{(\udr,\des)}\biggr) \end{multline} with $S_j$ as in the statement of the theorem. Define $\mathbf{s}_{n,p,q}$ by \begin{equation} \label{e-s-ud} \sum_{n,p,q=0}^{\infty}x^{n}y^{p}t^{q}\mathbf{s}_{n,p,q} =(1-t^{2}\mathbf{h}(x)\mathbf{e}(xy))^{-1}(1+t\mathbf{h}(x)). \end{equation} To prove (a) and (b), as in Theorems \ref{t-pkdessc} and \ref{t-lpkdessc}, it is sufficient to show that each $\mathbf{r}_{n,j,k}^{(\udr,\des)}$ is in the span of the $\mathbf{s}_{n,p,q}$. Because of the floor and ceiling functions in \eqref{e-ud2}, we are not able to use the generating function inversion method that we used in the proofs of Theorems \ref{t-pkdessc} and \ref{t-lpkdessc}, so we take a different approach. Expanding the right side of \eqref{e-ud2} and comparing with \eqref{e-s-ud} shows that, for fixed $n$, each $\mathbf{s}_{n,p,q}$ is a linear combination (with integer coefficients) of the $\mathbf{r}_{n,j,k}^{(\udr,\des)}$. We will show that these relations can be inverted to express each $\mathbf{r}_{n,j,k}^{(\udr,\des)}$ as a linear combination of the $\mathbf{s}_{n,p,q}$. We totally order $\mathbb{N}\times \mathbb{N}$ colexicographically, so $(p_1,q_1)\le (p_2,q_2)$ if and only if $q_1<q_2$ or $q_1=q_2$ and $p_1\le p_2$. We shall show that for each $j$ and $k$, there exist $p$ and $q$ such that $\mathbf{r}_{n,j,k}^{(\udr,\des)}$ appears with coefficient 1 in $\mathbf{s}_{n,p,q}$ and if $\mathbf{r}_{n,j',k'}^{(\udr,\des)}$ appears in $\mathbf{s}_{n,p,q}$ then $(k', j')\le (k,j)$. This will imply, by induction, that $\mathbf{r}_{n,j,k}^{(\udr,\des)}$ is in $\Span_{\mathbb{Q}}\{\mathbf{s}_{n,p,q}\}$. With this total order, the monomial $y^{p}t^{q}$ with minimal $(p,q)$ that appears in the coefficient of $x^n \mathbf{r}_{n,j,k}^{(\udr,\des)}$ on the right side of \eqref{e-ud2} is easily seen to be $y^{k_j}t^j$ (with coefficient 1), where $k_j$ is $k-\floor{j/2}+1$ if $j$ is even and is $k-\floor{j/2}$ if $j$ is odd. 
In other words, $\mathbf{s}_{n,p,q}$ does not contain any $\mathbf{r}_{n,j,k}^{(\udr,\des)}$ for which $(p,q)<(k_j,j)$. Replacing $p$ and $q$ with $k_j$ and $j$, and replacing $k$ and $j$ with $k'$ and $j'$, we have that \begin{equation*} \mathbf{s}_{n,k_j,j}=\mathbf{r}_{n,j,k}^{(\udr,\des)}+\sum_{j'\!,\, k'}c_{j'\!,\, k'} \mathbf{r}_{n,j',k'}^{(\udr,\des)} \end{equation*} where $c_{j'\!,\, k'}=0$ unless $(k'_{j'}, j') < (k_j,j)$. It is easy to see that $(k'_{j'}, j') < (k_j,j)$ implies $(k', j') < (k,j)$, so we have \begin{equation*} \mathbf{s}_{n,k_j,j}=\mathbf{r}_{n,j,k}^{(\udr,\des)}+\sum_{(k'\!,\, j')<(k,j)}c_{j'\!,\, k'} \mathbf{r}_{n,j',k'}^{(\udr,\des)} \end{equation*} and this completes the proof of (b). By Proposition \ref{p-udrdesvalues}, the number of $(\udr,\des)$-equivalence classes for $n$-permutations is \[ 1+\sum_{j=2}^{n}(n-\floor{j/2} - \ceil{j/2}+1) = 1+\sum_{j=2}^n (n-j+1) = 1+\binom{n}{2}. \] This proves part (c). \end{proof} We know from Lemma \ref{l-udr} that $\udr$ and $(\lpk,\val)$ are equivalent statistics, from Lemma \ref{l-pkvalruns} (d) that $\val$ is equivalent to $\epk$, and from Proposition \ref{p-lpkpk} that $(\lpk,\val)$ is $rc$-equivalent to $(\lpk,\pk)$. It follows that $(\udr,\des)$ is equivalent to $(\lpk,\val,\des)$ and $(\lpk,\epk,\des)$, and is $rc$-equivalent to $(\lpk,\pk,\des)$. Thus, by Theorem \ref{t-esc} and Corollary \ref{c-rcsc}, the statistics $(\lpk,\val,\des)$, $(\lpk,\epk,\des)$, and $(\lpk,\pk,\des)$ are all shuffle-compatible and have shuffle algebras isomorphic to ${\cal A}_{(\udr,\des)}$. \begin{thm}[Shuffle-compatibility of the number of up-down runs] \label{t-udrsc} \leavevmode \begin{itemize} \item [\normalfont{(a)}] The number of up-down runs $\udr$ is shuffle-compatible. \item [\normalfont{(b)}] The linear map on ${\cal A}_{\udr}$ defined by \begin{align*} [\pi]_{\udr}\mapsto & \begin{cases} {\displaystyle \frac{2^{\udr(\pi)-1}t^{\udr(\pi)}(1+t^{2})^{\left|\pi\right|-\udr(\pi)}}{(1-t)^{2}(1-t^{2})^{\left|\pi\right|-1}}x^{\left|\pi\right|}}, & \text{if }\left|\pi\right|\geq1,\\ 1/(1-t), & \text{if }\left|\pi\right|=0, \end{cases} \end{align*} is a $\mathbb{Q}$-algebra isomorphism from ${\cal A}_{\udr}$ to the span of \[ \left\{ \frac{1}{1-t}\right\} \bigcup\left\{ \frac{2^{j-1}t^{j}(1+t^{2})^{n-j}}{(1-t)^{2}(1-t^{2})^{n-1}}x^{n}\right\} _{n\geq1,\:1\leq j\leq n}, \] a subalgebra of $\mathbb{Q}[[t*]][x]$. \item [\normalfont{(c)}] For $n\geq1$, the $n$th homogeneous component of ${\cal A}_{\udr}$ has dimension $n$. \end{itemize} \end{thm} \begin{proof} Let $\phi$ be the homomorphism from $\Q[[t*]][x,y]$ to $\Q[[t*]][x]$ obtained by setting $y$ to 1. It is easy to check that $\phi$ takes the image of $[\pi]_{(\udr,\des)}$ as described in Theorem \ref{t-udsc} (b) to the image of $[\pi]_{\udr}$ as given in (b). Then (a) and (b) follow from Theorem \ref{t-quots}. Part (c) follows from Proposition \ref{p-udrdesvalues}. \end{proof} Since $\udr$ and $(\lpk,\val)$ are equivalent statistics, $(\lpk,\val)$ is shuffle-compatible and ${\cal A}_{(\lpk,\val)}$ is isomorphic to ${\cal A}_{\udr}$. Furthermore, since $(\lpk,\val)$ is $rc$-equivalent to $(\lpk,\pk)$, we have also proven the shuffle-compatibility of $(\lpk,\pk)$ and characterized the shuffle algebra ${\cal A}_{(\lpk,\pk)}$. Similar reasoning implies that $(\lpk,\epk)$, $(\rpk,\val)$, $(\rpk,\pk)$, $(\rpk,\epk)$, $(\lr,\val)$, $(\lr,\pk)$, and $(\lr,\epk)$ are shuffle-compatible and that their shuffle algebras are all isomorphic to ${\cal A}_{\udr}$. 
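We also remark that the verification required in the proof of Theorem \ref{t-udrsc} is a short computation, which we sketch for the reader's convenience. Writing $u=\udr(\pi)$ and using the fact that $\ceil{u/2}+\floor{u/2}=u$, setting $y=1$ in $N_{\pi}$ gives \[ N_{\pi}|_{y=1}=2^{u-1}t^{u}(1+t^{2})^{\left|\pi\right|-u}(1+t), \] and hence \[ \frac{N_{\pi}|_{y=1}}{(1-t)(1-t^{2})^{\left|\pi\right|}}=\frac{2^{u-1}t^{u}(1+t^{2})^{\left|\pi\right|-u}}{(1-t)^{2}(1-t^{2})^{\left|\pi\right|-1}}, \] which is precisely the image of $[\pi]_{\udr}$ given in part (b) of Theorem \ref{t-udrsc}.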
\section{Miscellany} \label{s-section6} \subsection{An alternate description of the \texorpdfstring{$\pk$}{pk} and \texorpdfstring{$(\pk,\des)$}{(pk, des)} shuffle algebras} \label{s-altdes} In Section \ref{s-pkdes}, we showed that the $(\pk,\des)$ shuffle algebra ${\cal A}_{(\pk,\des)}$ is isomorphic to the span of \[ \left\{ \frac{1}{1-t}\right\} \bigcup\left\{ \frac{t^{j+1}(y+t)^{k-j}(1+yt)^{n-j-k-1}(1+y)^{2j+1}}{(1-t)^{n+1}}x^{n}\right\} _{\substack{n\geq1,\hphantom{...................}\\ 0\leq j\leq\left\lfloor (n-1)/2\right\rfloor ,\\ j\leq k\leq n-j-1\hphantom{......} } } \] where the multiplication is the Hadamard product in $t$. Let \[ P_{n,j,k}(y,t)\coloneqq t^{j+1}(y+t)^{k-j}(1+yt)^{n-j-k-1}(1+y)^{2j+1} \] for $n\geq1$, $0\leq j\leq\left\lfloor (n-1)/2\right\rfloor $, and $j\leq k\leq n-j-1$. Then by \cite[Corollary 4.3.1]{Stanley2011}, we can write \[ \frac{P_{n,j,k}(y,t)}{(1-t)^{n+1}}=\sum_{p=1}^{\infty}R_{n,j,k}(p,y)t^{p} \] where $R_{n,j,k}(p,y)$ is a polynomial in $p$ of degree at most $n$, with coefficients that are polynomials in $y$. In this section, we give a simple description of the span of the polynomials $R_{n,j,k}(p,y)$, which yields an alternate characterization of the $(\pk,\des)$ shuffle algebra that was stated in part (c) of Theorem \ref{t-pkdessc}. Similarly, a simple description of the span of the polynomials $R_{n,j,k}(p,1)$ yields an alternate characterization of the $\pk$ shuffle algebra, which is part (c) of Theorem \ref{t-pksc}. It is simpler to work with the following transformations of the polynomials $R_{n,j,k}(p,y)$ and $P_{n,j,k}(y,t)$; let \[ Q_{n,j,k}(p,z)\coloneqq(1-z)^{n}R_{n,j,k}\left(p,\frac{1+z}{1-z}\right) \] and let \begin{align*} A_{n,j,k}(t,z) & \coloneqq(1-z)^{n}P_{n,j,k}\left(\frac{1+z}{1-z},t\right)\\ & =(1-z)^{n}t^{j+1}\left(\frac{1+z}{1-z}+t\right)^{k-j}\left(1+\frac{1+z}{1-z}t\right)^{n-j-k-1}\left(1+\frac{1+z}{1-z}\right)^{2j+1}\\ & =2^{2j+1}t^{j+1}(1+t+z(1-t))^{k-j}(1+t-z(1-t))^{n-j-k-1}, \end{align*} so that \begin{equation} \label{e-A(t,z)} \frac{A_{n,j,k}(t,z)}{(1-t)^{n+1}}=\sum_{p=1}^{\infty}Q_{n,j,k}(p,z)t^{p}. \end{equation} Also, define $\bar{A}_{n,j,k}(t,z)$ by \begin{equation} \label{e-barA(t,z)} \frac{\bar{A}_{n,j,k}(t,z)}{(1-t)^{n+1}}=\sum_{p=0}^{\infty}Q_{n,j,k}(-p,z)t^{p}. \end{equation} \begin{lem} \label{l-Qnct} Each $Q_{n,j,k}(p,z)$, as a polynomial in $p$, has no constant term.\end{lem} \begin{proof} By \cite[Proposition 4.2.3]{Stanley2011}, it follows from \eqref{e-A(t,z)} and \eqref{e-barA(t,z)} that we have the equality of rational functions \[ \frac{\bar{A}_{n,j,k}(t,z)}{(1-t)^{n+1}}=-\frac{A_{n,j,k}(1/t,z)}{(1-(1/t))^{n+1}}, \] which implies \begin{align*} \bar{A}_{n,j,k}(t,z) & =(-1)^{n}t^{n+1}A_{n,j,k}(1/t,z)\\ & =(-1)^{n}2^{2j+1}t^{n+1}\left(\frac{1}{t}\right)^{j+1}\left(1+\frac{1}{t}+z\left(1-\frac{1}{t}\right)\right)^{k-j}\left(1+\frac{1}{t}-z\left(1-\frac{1}{t}\right)\right)^{n-j-k-1}\\ & =(-1)^{n}2^{2j+1}t^{j+1}(1+t-z(1-t))^{k-j}(1+t+z(1-t))^{n-j-k-1}. \end{align*} Evaluating at $t=0$ yields $\bar{A}_{n,j,k}(0,z)=0$, so by \eqref{e-barA(t,z)}, $Q_{n,j,k}(0,z)=0$. \end{proof} \begin{lem} Let $n\geq1$. Then the polynomials $Q_{n,j,k}(p,z)$ for $0\leq j\leq\left\lfloor (n-1)/2\right\rfloor $ and $j\leq k\leq n-j-1$ are linearly independent.\end{lem} \begin{proof} It is easy to see that the polynomials $P_{n,j,k}(y,t)$ are linearly independent, and that a linear dependence relation for the polynomials $Q_{n,j,k}(p,z)$ would imply a linear dependence relation for the polynomials $P_{n,j,k}(y,t)$.
\end{proof} Essentially the same argument can be used to show that the polynomials $R_{n,j,k}(p,y)$ are also linearly independent. \begin{thm} Let $n\geq1$. Then \[ \Span_{\mathbb{Q}}\{Q_{n,j,k}(p,z)\}_{\substack{0\leq j\leq\left\lfloor (n-1)/2\right\rfloor ,\\ j\leq k\leq n-j-1\hphantom{......} } }=\Span_{\mathbb{Q}}\{p^{n-a}z^{a-2b}\}_{\substack{0\leq a\leq n-1,\\ 0\leq b\leq\left\lfloor a/2\right\rfloor } }. \] \end{thm} \begin{proof} First, we show that each $Q_{n,j,k}(p,z)$ can be written as a linear combination of the polynomials $p^{n-a}z^{a-2b}$. Note that \begin{align*} \sum_{p=1}^{\infty}Q_{n,j,k}(p,z)t^{p} & =\frac{2^{2j+1}t^{j+1}(1+t+z(1-t))^{k-j}(1+t-z(1-t))^{n-j-k-1}}{(1-t)^{n+1}} \end{align*} is a linear combination of terms of the form \[ \frac{z^{l}t^{q}(1-t)^{l}}{(1-t)^{n+1}} = \frac{t^{q}z^{l}}{(1-t)^{n-l+1}} = \sum_{p=0}^{\infty}z^{l}{n-l+p-q \choose n-l}t^{p} \] where $0\leq l\leq n-2j-1$ and $j+1\leq q\leq n-j-l$. Moreover, ${n-l+p-q \choose n-l}$ is a polynomial in $p$ of degree $n-l$, so it is a linear combination of $1,p,p^{2},\dots,p^{n-l}$. This shows that each $Q_{n,j,k}(p,z)$ is a linear combination of terms of the form $p^{n-a}z^{l}$ with $n-a\leq n-l$, or equivalently, $l\leq a$, and $a\leq n-1$ by Lemma \ref{l-Qnct}. We set $c=a-l$, so that $p^{n-a}z^{l}=p^{n-a}z^{a-c}$. It remains to show that $c$ must be even. Observe that $(-p)^{n-a}(-z)^{a-c}=(-1)^{n}(-1)^{c}p^{n-a}z^{a-c}$. Thus, it suffices to show that $Q_{n,j,k}(p,z)=(-1)^{n}Q_{n,j,k}(-p,-z)$. Recall that \[ \bar{A}_{n,j,k}(t,z)=(-1)^{n}t^{n+1}A_{n,j,k}(1/t,z), \] so that \[ \bar{A}_{n,j,k}(t,-z)=(-1)^{n}t^{n+1}A_{n,j,k}(1/t,-z). \] Since \begin{align*} A_{n,j,k}(t,z) & =2^{2j+1}t^{j+1}(t+1-z(t-1))^{k-j}(t+1+z(t-1))^{n-j-k-1}\\ & =2^{2j+1}t^{n+1}\left(\frac{1}{t}\right)^{j+1}\left(1+\frac{1}{t}-z\left(1-\frac{1}{t}\right)\right)^{k-j}\left(1+\frac{1}{t}+z\left(1-\frac{1}{t}\right)\right)^{n-j-k-1}\\ & =t^{n+1}A_{n,j,k}(1/t,-z), \end{align*} we have \begin{align*} \sum_{p=1}^{\infty}(-1)^{n}Q_{n,j,k}(p,z)t^{p} & =\frac{(-1)^{n}A_{n,j,k}(t,z)}{(1-t)^{n+1}}\\ & =\frac{(-1)^{n}t^{n+1}A_{n,j,k}(1/t,-z)}{(1-t)^{n+1}}\\ & =\frac{\bar{A}_{n,j,k}(t,-z)}{(1-t)^{n+1}}\\ & =\sum_{p=1}^{\infty}Q_{n,j,k}(-p,-z)t^{p}. \end{align*} Therefore, $Q_{n,j,k}(p,z)=(-1)^{n}Q_{n,j,k}(-p,-z)$, so each $Q_{n,j,k}(p,z)$ is a linear combination of the polynomials $p^{n-a}z^{a-2b}$. Since we know that the polynomials $Q_{n,j,k}(p,z)$ are linearly independent, it suffices to show that the two sets of polynomials have the same cardinality. The restrictions $0\leq a\leq n-1$ and $0\leq b\leq\left\lfloor a/2\right\rfloor $ can be reformulated as $0\leq b\leq\left\lfloor (n-1)/2\right\rfloor $ and $2b\leq a\leq n-1$; the restriction on $b$ matches the condition on $j$, and the number of possible values of $a$ for a fixed $b$ is equal to the number of possible values of $k$ for a fixed $j$. Hence, the two sets are equinumerous and thus their spans are equal. \end{proof} We are now ready to prove our alternate characterization of ${\cal A}_{(\pk,\des)}$ and of ${\cal A}_{\pk}$. \begin{proof}[Proof of Theorem \ref{t-pkdessc} $\mathrm{(}c\mathrm{)}$] In this proof, we identify ${\cal A}_{(\pk,\des)}$ with its characterization given in part (b) of Theorem \ref{t-pkdessc}. Let $\psi\colon{\cal A}_{(\pk,\des)}\rightarrow\mathbb{Q}[p,x,y]$ be the linear map defined by \[ \psi\Big(\sum_{p=1}^{\infty}R_{n,j,k}(p,y)t^{p}x^{n}\Big)=R_{n,j,k}(p,y)x^{n} \] and $\psi(1/(1-t))=1$.
With the usual multiplication of $\mathbb{Q}[p,x,y]$, it is easy to see that $\psi$ is an algebra homomorphism, and is in fact an algebra isomorphism from ${\cal A}_{(\pk,\des)}$ onto the subalgebra of $\mathbb{Q}[p,x,y]$ spanned by the $R_{n,j,k}(p,y)x^{n}$. Observe that \[ \Span_{\mathbb{Q}}\{R_{n,j,k}(p,y)\}_{\substack{0\leq j\leq\left\lfloor (n-1)/2\right\rfloor ,\\ j\leq k\leq n-j-1\hphantom{......} } }=\Span_{\mathbb{Q}}\{p^{n-a}(1+y)^{n-a+2b}(1-y)^{a-2b}\}_{\substack{0\leq a\leq n-1,\\ 0\leq b\leq\left\lfloor a/2\right\rfloor } }; \] this is immediate from the previous theorem upon applying the inverse transformation, namely dividing by $(1-z)^{n}=(2/(1+y))^{n}$ and setting $z=(y-1)/(1+y)$. Then the result follows. \end{proof} \begin{proof}[Proof of Theorem \ref{t-pksc} $\mathrm{(}c\mathrm{)}$] First note that setting $y=1$ in the basis for ${\cal A}_{(\pk,\des)}$ given by part (b) of Theorem \ref{t-pkdessc} gives the basis for ${\cal A}_{\pk}$ described in part (b) of Theorem \ref{t-pksc}. Thus setting $y=1$ in the basis for ${\cal A}_{(\pk,\des)}$ given by part (c) of Theorem \ref{t-pkdessc} will give a spanning set for ${\cal A}_{\pk}$. The only polynomials $p^{n-a}(1+y)^{n-a+2b}(1-y)^{a-2b}x^{n}$ that are nonzero after setting $y=1$ are those for which $a=2b$, yielding the polynomials $2^{n}p^{n-2b}x^{n}$ for $0\leq2b\leq n-1$. The span of these polynomials is equal to the span of $p^{j}x^{n}$ for $1\leq j\leq n$ with $j$ having the same parity as $n$. \end{proof} We note that part (c) of Theorem \ref{t-pksc} can also be proven using Stembridge's self-reciprocity property for enriched order polynomials \cite[Proposition 4.2]{Stembridge1997}. Unfortunately, we were unable to use the approach in this section to give an alternate characterization of any of the shuffle algebras ${\cal A}_{\lpk}$, ${\cal A}_{(\lpk,\des)}$, ${\cal A}_{\udr}$, or ${\cal A}_{(\udr,\des)}$. \subsection{Non-shuffle-compatible permutation statistics} Although many well-known descent statistics have been shown to be shuffle-compatible, there are many descent statistics that are not shuffle-compatible. Here we list some of them. \begin{thm} \label{t-pairfalse} The set $\Pk\cup\Val$ and the tuples $(\pk,\val)$, $(\pk,\val,\des)$, $(\Pk,\des)$, $(\Pk,\val)$, $(\Pk,\val,\des)$, $(\Pk,\Val)$, $(\Lpk,\des)$, $(\Lpk,\val,\des)$, and $(\Epk,\des)$ are not shuffle-compatible. \end{thm} Recall that a birun of a permutation is a maximal monotone consecutive subsequence, and that $\br(\pi)$ is the number of biruns of $\pi$. The number of biruns is not shuffle-compatible, and the only joint statistics involving $\br$ that we have found that seem to be shuffle-compatible are $(\Lpk, \br)$ and $(\Epk, \br)$; however, these are easily shown to be equivalent to $\Epk$, which is shuffle-compatible (see the discussion following Conjecture \ref{cj-sc}). \begin{thm} The number of biruns $\br$ and the tuples $(\br,\des)$, $(\br,\maj)$, $(\br,\des,\maj)$, $(\br,\pk)$, $(\br,\pk,\des)$, $(\br,\lpk)$, $(\br,\lpk,\des)$, and $(\Pk,\br)$ are not shuffle-compatible. \end{thm} Although $(\des,\maj)$ is shuffle-compatible, we have not found any other shuffle-compatible joint statistics involving the major index. \begin{thm} The tuples $(\pk,\maj)$, $(\pk,\des,\maj)$, $(\lpk,\maj)$, $(\lpk,\des,\maj)$, $(\Pk, \maj)$, $(\Lpk,\maj)$, $(\udr,\maj)$, $(\udr,\des,\maj)$, and $(\lir,\maj)$ are not shuffle-compatible.
\end{thm} In addition to the descent statistics examined in this paper, we mention that there are two additional families of descent statistics, one based on the classical notion of double descents and one based on the more recent notion of alternating descents. We say that $i$ (where $2\leq i\leq n-1$) is a \textit{double descent} of $\pi\in\mathfrak{P}_{n}$ if $\pi_{i-1}>\pi_{i}>\pi_{i+1}$; then we can define the double descent set and double descent number---as well as variations of these such as the left double descent set and left double descent number---in the obvious way. We say that $i\in[n-1]$ is an \textit{alternating descent} if $i$ is an even ascent or an odd descent; then we can define the alternating descent set, alternating descent number, and alternating major index in the obvious way. Alternating descents were introduced by Chebikin \cite{chebikin} and have been more recently studied by Remmel \cite{remmel} and by the present authors \cite{Gessel2014}. Aside from the alternating descent set\textemdash which is equivalent to the descent set\textemdash none of the statistics mentioned above is shuffle-compatible. Among joint statistics that involve one or more of these statistics, we have not found any that seem to be shuffle-compatible (other than a few that are equivalent to statistics that we know to be shuffle-compatible). Lastly, among permutation statistics that are not descent statistics, we have not found any that seem to be shuffle-compatible. \subsection{Open problems and conjectures} To conclude this paper, we state a couple of permutation statistics that we conjecture, based on empirical evidence, to be shuffle-compatible, along with a few more general open problems and conjectures on the topic of shuffle-compatibility. \begin{conjecture} \label{cj-sc} The tuples $(\udr,\pk)$ and $(\udr,\pk,\des)$ are shuffle-compatible. \end{conjecture} In a preliminary version of this paper, we included as part of Conjecture \ref{cj-sc} the conjectured shuffle-compatibility of the exterior peak set $\Epk$ and the tuples $(\Pk,\val,\des)$, $(\Pk,\udr)$, $(\Lpk,\val)$, and $(\Lpk,\val,\des)$. All of these have been addressed by Darij Grinberg. Specifically, Grinberg proved that $\Epk$ is shuffle-compatible using a $P$-partition argument \cite{Grinberg}, noted that $(\Pk,\udr)$ and $(\Lpk,\val)$ are both equivalent to $\Epk$, and found counterexamples showing that $(\Pk,\val,\des)$ and $(\Lpk,\val,\des)$ are not shuffle-compatible \cite{Grinberg2017}. Prior to this, Grinberg had shown that $\QSym$ is a ``dendriform algebra'' \cite{Grinberg2017a}, an algebra whose multiplication can be split into a ``left multiplication'' and a ``right multiplication'' satisfying certain nice axioms. Combining this with the shuffle-compatibility of $\Epk$, Grinberg proved that ${\cal A}_{\Epk}$ is a dendriform quotient of $\QSym$. More generally, he proved that the shuffle algebra of a descent statistic is a dendriform quotient of $\QSym$ if and only if the statistic is both ``left-shuffle-compatible'' and ``right-shuffle-compatible'', two combinatorial conditions that, together, refine the notion of shuffle-compatibility. Other descent statistics that Grinberg has shown to be both left- and right-shuffle-compatible include the descent number $\des$, the pair $(\des,\maj)$, and the left peak set $\Lpk$. On the other hand, the major index $\maj$, the peak set $\Pk$, and the right peak set $\Rpk$ are neither left- nor right-shuffle-compatible.
From Theorem \ref{t-pairfalse}, we know that a pair of two shuffle-compatible statistics need not be shuffle-compatible. Hence, we pose the following question. \begin{question} Suppose that $\st_{1}$ and $\st_{2}$ are shuffle-compatible statistics. Are there simple conditions that imply that the pair $(\st_{1},\st_{2})$ is shuffle-compatible? \end{question} Similarly, if a pair is shuffle-compatible, then that does not imply that the individual statistics in the pair are both shuffle-compatible. \begin{question} Suppose that the pair $(\st_{1},\st_{2})$ is shuffle-compatible. Are there simple conditions that imply that $\st_{1}$ and $\st_{2}$ are both shuffle-compatible? \end{question} Recall that Goulden \cite{Goulden1985} and Stadler \cite{Stadler1999} gave combinatorial proofs for the shuffle-compatibility of $(\des,\maj)$, and in Section \ref{s-bijproof} we provided combinatorial proofs for the shuffle-compatibility of the descent set $\Des$ and partial descent sets $\Des_{i,j}$. \begin{question} Can we find combinatorial proofs for the shuffle-compatibility of other statistics? \end{question} Finally, we present the following conjecture. \begin{conjecture} Every shuffle-compatible permutation statistic is a descent statistic. \end{conjecture} \vspace{10bp} \noindent \textbf{Acknowledgements.} We thank Bruce Sagan and an anonymous referee for providing extensive feedback on a preliminary version of this paper, as well as Marcelo Aguiar, Sami Assaf, Darij Grinberg, and Kyle Petersen for helpful discussions on this project.